categories (string) | doi (string) | id (string) | year (float64) | venue (string) | link (string) | updated (string) | published (string) | title (string) | abstract (string) | authors (sequence) |
---|---|---|---|---|---|---|---|---|---|---|
null | null | 2405.17494 | null | null | http://arxiv.org/pdf/2405.17494v2 | 2024-06-01T18:55:33Z | 2024-05-25T13:03:38Z | Transitional Uncertainty with Layered Intermediate Predictions | In this paper, we discuss feature engineering for single-pass uncertainty estimation. For accurate uncertainty estimates, neural networks must extract differences in the feature space that quantify uncertainty. This could be achieved by current single-pass approaches that maintain feature distances between data points as they traverse the network. While initial results are promising, maintaining feature distances within the network representations frequently inhibits information compression and opposes the learning objective. We study this effect theoretically and empirically to arrive at a simple conclusion: preserving feature distances in the output is beneficial when the preserved features contribute to learning the label distribution and act in opposition otherwise. We then propose Transitional Uncertainty with Layered Intermediate Predictions (TULIP) as a simple approach to address the shortcomings of current single-pass estimators. Specifically, we implement feature preservation by extracting features from intermediate representations before information is collapsed by subsequent layers. We refer to the underlying preservation mechanism as transitional feature preservation. We show that TULIP matches or outperforms current single-pass methods on standard benchmarks and in practical settings where these methods are less reliable (imbalances, complex architectures, medical modalities). | [
"['Ryan Benkert' 'Mohit Prabhushankar' 'Ghassan AlRegib']"
] |
null | null | 2405.17495 | null | null | http://arxiv.org/pdf/2405.17495v2 | 2024-06-04T13:04:53Z | 2024-05-25T16:05:06Z | Vertical Federated Learning for Effectiveness, Security, Applicability:
A Survey | Vertical Federated Learning (VFL) is a privacy-preserving distributed learning paradigm where different parties collaboratively learn models using partitioned features of shared samples, without leaking private data. Recent research has shown promising results addressing various challenges in VFL, highlighting its potential for practical applications in cross-domain collaboration. However, the corresponding research is scattered and lacks organization. To advance VFL research, this survey offers a systematic overview of recent developments. First, we provide a history and background introduction, along with a summary of the general training protocol of VFL. We then revisit the taxonomy in recent reviews and analyze limitations in-depth. For a comprehensive and structured discussion, we synthesize recent research from three fundamental perspectives: effectiveness, security, and applicability. Finally, we discuss several critical future research directions in VFL, which will facilitate the developments in this field. We provide a collection of research lists and periodically update them at https://github.com/shentt67/VFL_Survey. | [
"['Mang Ye' 'Wei Shen' 'Bo Du' 'Eduard Snezhko' 'Vassili Kovalev'\n 'Pong C. Yuen']"
] |
null | null | 2405.17497 | null | null | http://arxiv.org/pdf/2405.17497v1 | 2024-05-25T18:31:20Z | 2024-05-25T18:31:20Z | Secure Hierarchical Federated Learning in Vehicular Networks Using
Dynamic Client Selection and Anomaly Detection | Hierarchical Federated Learning (HFL) faces the significant challenge of adversarial or unreliable vehicles in vehicular networks, which can compromise the model's integrity through misleading updates. Addressing this, our study introduces a novel framework that integrates dynamic vehicle selection and robust anomaly detection mechanisms, aiming to optimize participant selection and mitigate risks associated with malicious contributions. Our approach involves a comprehensive vehicle reliability assessment, considering historical accuracy, contribution frequency, and anomaly records. An anomaly detection algorithm is utilized to identify anomalous behavior by analyzing the cosine similarity of local or model parameters during the federated learning (FL) process. These anomaly records are then registered and combined with past performance for accuracy and contribution frequency to identify the most suitable vehicles for each learning round. Dynamic client selection and anomaly detection algorithms are deployed at different levels, including cluster heads (CHs), cluster members (CMs), and the Evolving Packet Core (EPC), to detect and filter out spurious updates. Through simulation-based performance evaluation, our proposed algorithm demonstrates remarkable resilience even under intense attack conditions. Even in the worst-case scenarios, it achieves convergence times at $63$% as effective as those in scenarios without any attacks. Conversely, in scenarios without utilizing our proposed algorithm, there is a high likelihood of non-convergence in the FL process. | [
"['M. Saeid HaghighiFard' 'Sinem Coleri']"
] |
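The cosine-similarity screening described above (2405.17497) is easy to prototype. A minimal sketch, assuming flattened parameter updates per vehicle and a hypothetical fixed threshold; the paper's full reliability score also folds in historical accuracy and contribution frequency:

```python
import numpy as np

def flag_anomalous_updates(client_updates, reference, threshold=0.2):
    """Flag client updates whose cosine similarity to a reference update
    (e.g., the previous aggregated model delta) falls below a threshold."""
    ref = reference / (np.linalg.norm(reference) + 1e-12)
    flags = {}
    for cid, update in client_updates.items():
        u = update / (np.linalg.norm(update) + 1e-12)
        flags[cid] = float(u @ ref) < threshold  # True marks a suspicious update
    return flags

# Toy usage: one benign vehicle, one sign-flipped (adversarial-looking) vehicle.
rng = np.random.default_rng(0)
reference = rng.normal(size=1000)
updates = {"veh_1": reference + 0.1 * rng.normal(size=1000), "veh_2": -reference}
print(flag_anomalous_updates(updates, reference))
```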
null | null | 2405.17501 | null | null | http://arxiv.org/pdf/2405.17501v1 | 2024-05-26T02:32:28Z | 2024-05-26T02:32:28Z | Geometry of Critical Sets and Existence of Saddle Branches for Two-layer
Neural Networks | This paper presents a comprehensive analysis of critical point sets in two-layer neural networks. To study such complex entities, we introduce the critical embedding operator and critical reduction operator as our tools. Given a critical point, we use these operators to uncover the whole underlying critical set representing the same output function, which exhibits a hierarchical structure. Furthermore, we prove existence of saddle branches for any critical set whose output function can be represented by a narrower network. Our results provide a solid foundation to the further study of optimization and training behavior of neural networks. | [
"['Leyang Zhang' 'Yaoyu Zhang' 'Tao Luo']"
] |
null | null | 2405.17502 | null | null | http://arxiv.org/pdf/2405.17502v1 | 2024-05-26T03:18:47Z | 2024-05-26T03:18:47Z | Exploring Nutritional Impact on Alzheimer's Mortality: An Explainable AI
Approach | This article uses machine learning (ML) and explainable artificial intelligence (XAI) techniques to investigate the relationship between nutritional status and mortality rates associated with Alzheimer's disease (AD). The Third National Health and Nutrition Examination Survey (NHANES III) database is employed for analysis. The random forest model is selected as the base model for XAI analysis, and the Shapley Additive Explanations (SHAP) method is used to assess feature importance. The results highlight significant nutritional factors such as serum vitamin B12 and glycated hemoglobin. The study demonstrates the effectiveness of random forests in predicting AD mortality compared to other diseases. This research provides insights into the impact of nutrition on AD and contributes to a deeper understanding of disease progression. | [
"['Ziming Liu' 'Longjian Liu' 'Robert E. Heidel' 'Xiaopeng Zhao']"
] |
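A minimal sketch of the modelling pipeline in the entry above (2405.17502): a random forest plus SHAP feature importance. The synthetic features stand in for nutritional variables; the NHANES III preprocessing is not reproduced here.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))   # stand-ins for serum vitamin B12, glycated hemoglobin, ...
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("test accuracy:", rf.score(X_te, y_te))

# SHAP values attribute each prediction to individual features; averaging their
# absolute values over samples gives a global importance ranking.
explainer = shap.TreeExplainer(rf)
shap_values = explainer.shap_values(X_te)
```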
null | null | 2405.17505 | null | null | http://arxiv.org/pdf/2405.17505v1 | 2024-05-26T07:01:33Z | 2024-05-26T07:01:33Z | Predicting Rental Price of Lane Houses in Shanghai with Machine Learning
Methods and Large Language Models | Housing has emerged as a crucial concern among young individuals residing in major cities, including Shanghai. Given the unprecedented surge in property prices in this metropolis, young people have increasingly resorted to the rental market to address their housing needs. This study utilizes five traditional machine learning methods: multiple linear regression (MLR), ridge regression (RR), lasso regression (LR), decision tree (DT), and random forest (RF), along with a Large Language Model (LLM) approach using ChatGPT, for predicting the rental prices of lane houses in Shanghai. It applies these methods to examine a public data sample of about 2,609 lane house rental transactions in 2021 in Shanghai, and then compares the results of these methods. In terms of predictive power, RF has achieved the best performance among the traditional methods. However, the LLM approach, particularly in the 10-shot scenario, shows promising results that surpass traditional methods in terms of R-Squared value. The three performance metrics: mean squared error (MSE), mean absolute error (MAE), and R-Squared, are used to evaluate the models. Our conclusion is that while traditional machine learning models offer robust techniques for rental price prediction, the integration of LLM such as ChatGPT holds significant potential for enhancing predictive accuracy. | [
"['Tingting Chen' 'Shijing Si']"
] |
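For the traditional-ML side of the comparison above (2405.17505), a sketch with scikit-learn's random forest and the three reported metrics; the lane-house data and the ChatGPT few-shot prompting are not reproduced:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2609, 8))   # stand-in for listing features (area, rooms, location, ...)
y = 3000 + 800 * X[:, 0] + 200 * X[:, 1] + rng.normal(scale=300, size=2609)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)

print("MSE:", mean_squared_error(y_te, pred))
print("MAE:", mean_absolute_error(y_te, pred))
print("R2: ", r2_score(y_te, pred))
```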
null | null | 2405.17506 | null | null | http://arxiv.org/pdf/2405.17506v1 | 2024-05-26T14:27:26Z | 2024-05-26T14:27:26Z | Subspace Node Pruning | A significant increase in the commercial use of deep neural network models increases the need for efficient AI. Node pruning is the art of removing computational units such as neurons, filters, attention heads, or even entire layers while keeping network performance at a maximum. This can significantly reduce the inference time of a deep network and thus enhance its efficiency. Few of the previous works have exploited the ability to recover performance by reorganizing network parameters while pruning. In this work, we propose to create a subspace from unit activations which enables node pruning while recovering maximum accuracy. We identify that for effective node pruning, a subspace can be created using a triangular transformation matrix, which we show to be equivalent to Gram-Schmidt orthogonalization, which automates this procedure. We further improve this method by reorganizing the network prior to subspace formation. Finally, we leverage the orthogonal subspaces to identify layer-wise pruning ratios appropriate to retain a significant amount of the layer-wise information. We show that this measure outperforms existing pruning methods on VGG networks. We further show that our method can be extended to other network architectures such as residual networks. | [
"['Joshua Offergeld' 'Marcel van Gerven' 'Nasir Ahmad']"
] |
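One plausible reading of the orthogonalization step above (2405.17506), sketched on raw activations: QR factorization (equivalent to Gram-Schmidt up to signs) of a layer's unit activations, with the per-direction energy used to pick a layer-wise pruning ratio. The function name and the energy criterion are illustrative, not the authors' exact procedure.

```python
import numpy as np

def units_to_keep(activations, keep_fraction=0.9):
    """Orthogonalize centered unit activations via QR and count how many
    orthogonal directions are needed to retain `keep_fraction` of the
    total activation energy -- a rough proxy for a pruning ratio."""
    A = activations - activations.mean(axis=0, keepdims=True)  # (samples, units)
    Q, R = np.linalg.qr(A)
    energy = np.sum(R**2, axis=1)          # energy carried by each orthogonal direction
    order = np.argsort(energy)[::-1]
    cumulative = np.cumsum(energy[order]) / energy.sum()
    return int(np.searchsorted(cumulative, keep_fraction) + 1)

rng = np.random.default_rng(0)
acts = rng.normal(size=(1024, 64)) @ rng.normal(size=(64, 64))  # correlated toy activations
print(units_to_keep(acts), "of 64 directions retain 90% of the energy")
```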
null | null | 2405.17507 | null | null | http://arxiv.org/pdf/2405.17507v1 | 2024-05-26T17:14:50Z | 2024-05-26T17:14:50Z | Enhancing Sustainable Urban Mobility Prediction with Telecom Data: A
Spatio-Temporal Framework Approach | Traditional traffic prediction, limited by the scope of sensor data, falls short in comprehensive traffic management. Mobile networks offer a promising alternative using network activity counts, but these lack crucial directionality. Thus, we present the TeltoMob dataset, featuring undirected telecom counts and corresponding directional flows, to predict directional mobility flows on roadways. To address this, we propose a two-stage spatio-temporal graph neural network (STGNN) framework. The first stage uses a pre-trained STGNN to process telecom data, while the second stage integrates directional and geographic insights for accurate prediction. Our experiments demonstrate the framework's compatibility with various STGNN models and confirm its effectiveness. We also show how to incorporate the framework into real-world transportation systems, enhancing sustainable urban mobility. | [
"['ChungYi Lin' 'Shen-Lung Tung' 'Hung-Ting Su' 'Winston H. Hsu']"
] |
null | null | 2405.17508 | null | null | http://arxiv.org/pdf/2405.17508v1 | 2024-05-26T18:05:12Z | 2024-05-26T18:05:12Z | Unveiling the Secrets: How Masking Strategies Shape Time Series
Imputation | In this study, we explore the impact of different masking strategies on time series imputation models. We evaluate the effects of pre-masking versus in-mini-batch masking, normalization timing, and the choice between augmenting and overlaying artificial missingness. Using three diverse datasets, we benchmark eleven imputation models with different missing rates. Our results demonstrate that masking strategies significantly influence imputation accuracy, revealing that more sophisticated and data-driven masking designs are essential for robust model evaluation. We advocate for refined experimental designs and comprehensive disclosure to better simulate real-world patterns, enhancing the practical applicability of imputation models. | [
"['Linglong Qian' 'Zina Ibrahim' 'Wenjie Du' 'Yiyuan Yang'\n 'Richard JB Dobson']"
] |
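A small sketch of the two masking regimes contrasted above (2405.17508): pre-masking fixes the artificial missingness once before training, while in-mini-batch masking redraws it per batch. Array shapes and the helper name are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 48, 4))   # (samples, time steps, variables), fully observed here

def overlay_missingness(batch, rate=0.2, rng=rng):
    """Hide a fraction of the observed entries and return them as imputation targets."""
    mask = rng.random(batch.shape) < rate
    corrupted = batch.copy()
    corrupted[mask] = np.nan
    return corrupted, mask

# Pre-masking: corrupt once, so every epoch sees the same artificial holes.
X_pre, pre_mask = overlay_missingness(X)

# In-mini-batch masking: corrupt freshly for each batch, so patterns vary across epochs.
for start in range(0, len(X), 32):
    batch = X[start:start + 32]
    batch_corrupted, batch_mask = overlay_missingness(batch)
    # ... feed (batch_corrupted, ground-truth batch, batch_mask) to the imputation model ...
```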
null | null | 2405.17509 | null | null | http://arxiv.org/pdf/2405.17509v1 | 2024-05-27T06:50:17Z | 2024-05-27T06:50:17Z | Reference Neural Operators: Learning the Smooth Dependence of Solutions
of PDEs on Geometric Deformations | For partial differential equations on domains of arbitrary shapes, existing works of neural operators attempt to learn a mapping from geometries to solutions. It often requires a large dataset of geometry-solution pairs in order to obtain a sufficiently accurate neural operator. However, for many industrial applications, e.g., engineering design optimization, it can be prohibitive to satisfy the requirement since even a single simulation may take hours or days of computation. To address this issue, we propose reference neural operators (RNO), a novel way of implementing neural operators, i.e., to learn the smooth dependence of solutions on geometric deformations. Specifically, given a reference solution, RNO can predict solutions corresponding to arbitrary deformations of the referred geometry. This approach turns out to be much more data efficient. Through extensive experiments, we show that RNO can learn the dependence across various types and different numbers of geometry objects with relatively small datasets. RNO outperforms baseline models in accuracy by a large lead and achieves up to 80% error reduction. | [
"['Ze Cheng' 'Zhongkai Hao' 'Xiaoqiang Wang' 'Jianing Huang' 'Youjia Wu'\n 'Xudan Liu' 'Yiru Zhao' 'Songming Liu' 'Hang Su']"
] |
null | null | 2405.17512 | null | null | http://arxiv.org/pdf/2405.17512v1 | 2024-05-27T07:37:43Z | 2024-05-27T07:37:43Z | On Fairness of Low-Rank Adaptation of Large Models | Low-rank adaptation of large models, particularly LoRA, has gained traction due to its computational efficiency. This efficiency, contrasted with the prohibitive costs of full-model fine-tuning, means that practitioners often turn to LoRA and sometimes without a complete understanding of its ramifications. In this study, we focus on fairness and ask whether LoRA has an unexamined impact on utility, calibration, and resistance to membership inference across different subgroups (e.g., genders, races, religions) compared to a full-model fine-tuning baseline. We present extensive experiments across vision and language domains and across classification and generation tasks using ViT-Base, Swin-v2-Large, Llama-2 7B, and Mistral 7B. Intriguingly, experiments suggest that while one can isolate cases where LoRA exacerbates model bias across subgroups, the pattern is inconsistent -- in many cases, LoRA has equivalent or even improved fairness compared to the base model or its full fine-tuning baseline. We also examine the complications of evaluating fine-tuning fairness relating to task design and model token bias, calling for more careful fairness evaluations in future work. | [
"['Zhoujie Ding' 'Ken Ziyu Liu' 'Pura Peetathawatchai' 'Berivan Isik'\n 'Sanmi Koyejo']"
] |
null | null | 2405.17516 | null | null | http://arxiv.org/pdf/2405.17516v2 | 2024-06-13T07:34:10Z | 2024-05-27T09:01:30Z | Time Elastic Neural Networks | We introduce and detail an atypical neural network architecture, called time elastic neural network (teNN), for multivariate time series classification. The novelty compared to classical neural network architecture is that it explicitly incorporates time warping ability, as well as a new way of considering attention. In addition, this architecture is capable of learning a dropout strategy, thus optimizing its own architecture. Behind the design of this architecture, our overall objective is threefold: firstly, we are aiming at improving the accuracy of instance based classification approaches that show quite good performances as far as enough training data is available. Secondly, we seek to reduce the computational complexity inherent to these methods to improve their scalability. Ideally, we seek to find an acceptable balance between these first two criteria. And finally, we seek to enhance the explainability of the decision provided by this kind of neural architecture. The experiment demonstrates that the stochastic gradient descent implemented to train a teNN is quite effective. To the extent that the selection of some critical meta-parameters is correct, convergence is generally smooth and fast. While maintaining good accuracy, we get a drastic gain in scalability by first reducing the required number of reference time series, i.e. the number of teNN cells required. Secondly, we demonstrate that, during the training process, the teNN succeeds in reducing the number of neurons required within each cell. Finally, we show that the analysis of the activation and attention matrices as well as the reference time series after training provides relevant information to interpret and explain the classification results. The comparative study that we have carried out and which concerns around thirty diverse and multivariate datasets shows that the teNN obtains results comparable to those of the state of the art, in particular similar to those of a network mixing LSTM and CNN architectures for example. | [
"['Pierre-François Marteau']"
] |
null | null | 2405.17517 | null | null | http://arxiv.org/pdf/2405.17517v1 | 2024-05-27T09:02:57Z | 2024-05-27T09:02:57Z | WASH: Train your Ensemble with Communication-Efficient Weight Shuffling,
then Average | The performance of deep neural networks is enhanced by ensemble methods, which average the output of several models. However, this comes at an increased cost at inference. Weight averaging methods aim at balancing the generalization of ensembling and the inference speed of a single model by averaging the parameters of an ensemble of models. Yet, naive averaging results in poor performance as models converge to different loss basins, and aligning the models to improve the performance of the average is challenging. Alternatively, inspired by distributed training, methods like DART and PAPA have been proposed to train several models in parallel such that they will end up in the same basin, resulting in good averaging accuracy. However, these methods either compromise ensembling accuracy or demand significant communication between models during training. In this paper, we introduce WASH, a novel distributed method for training model ensembles for weight averaging that achieves state-of-the-art image classification accuracy. WASH maintains models within the same basin by randomly shuffling a small percentage of weights during training, resulting in diverse models and lower communication costs compared to standard parameter averaging methods. | [
"['Louis Fournier' 'Adel Nabli' 'Masih Aminbeidokhti' 'Marco Pedersoli'\n 'Eugene Belilovsky' 'Edouard Oyallon']"
] |
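The core shuffling step of WASH (2405.17517) can be sketched locally as below; the real method runs across distributed workers and interleaves shuffling with training, and the helper and its signature are my own.

```python
import torch
import torch.nn as nn

def wash_shuffle(models, p=0.01, generator=None):
    """For a small fraction p of scalar weight positions, permute the values held
    by the different ensemble members (in place)."""
    for tensors in zip(*(m.parameters() for m in models)):   # same parameter across models
        flat = torch.stack([t.detach().flatten() for t in tensors])   # (n_models, n_weights)
        selected = (torch.rand(flat.shape[1], generator=generator) < p).nonzero(as_tuple=True)[0]
        for j in selected:
            perm = torch.randperm(flat.shape[0], generator=generator)
            flat[:, j] = flat[perm, j]
        with torch.no_grad():
            for t, row in zip(tensors, flat):
                t.copy_(row.view_as(t))

models = [nn.Linear(8, 4) for _ in range(3)]
wash_shuffle(models, p=0.05)
```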
null | null | 2405.17522 | null | null | http://arxiv.org/pdf/2405.17522v1 | 2024-05-27T12:17:47Z | 2024-05-27T12:17:47Z | Efficient Model Compression for Hierarchical Federated Learning | Federated learning (FL), as an emerging collaborative learning paradigm, has garnered significant attention due to its capacity to preserve privacy within distributed learning systems. In these systems, clients collaboratively train a unified neural network model using their local datasets and share model parameters rather than raw data, enhancing privacy. Predominantly, FL systems are designed for mobile and edge computing environments where training typically occurs over wireless networks. Consequently, as model sizes increase, the conventional FL frameworks increasingly consume substantial communication resources. To address this challenge and improve communication efficiency, this paper introduces a novel hierarchical FL framework that integrates the benefits of clustered FL and model compression. We present an adaptive clustering algorithm that identifies a core client and dynamically organizes clients into clusters. Furthermore, to enhance transmission efficiency, each core client implements a local aggregation with compression (LC aggregation) algorithm after collecting compressed models from other clients within the same cluster. Simulation results affirm that our proposed algorithms not only maintain comparable predictive accuracy but also significantly reduce energy consumption relative to existing FL mechanisms. | [
"['Xi Zhu' 'Songcan Yu' 'Junbo Wang' 'Qinglin Yang']"
] |
null | null | 2405.17523 | null | null | http://arxiv.org/pdf/2405.17523v2 | 2024-05-29T07:40:40Z | 2024-05-27T12:52:45Z | Locally Testing Model Detections for Semantic Global Concepts | Ensuring the quality of black-box Deep Neural Networks (DNNs) has become ever more significant, especially in safety-critical domains such as automated driving. While global concept encodings generally enable a user to test a model for a specific concept, linking global concept encodings to the local processing of single network inputs reveals their strengths and limitations. Our proposed framework global-to-local Concept Attribution (glCA) uses approaches from local (why a specific prediction originates) and global (how a model works generally) eXplainable Artificial Intelligence (xAI) to test DNNs for a predefined semantical concept locally. The approach allows for conditioning local, post-hoc explanations on predefined semantic concepts encoded as linear directions in the model's latent space. Pixel-exact scoring concerning the global concept usage assists the tester in further understanding the model processing of single data points for the selected concept. Our approach has the advantage of fully covering the model-internal encoding of the semantic concept and allowing the localization of relevant concept-related information. The results show major differences in the local perception and usage of individual global concept encodings and demand for further investigations regarding obtaining thorough semantic concept encodings. | [
"['Franz Motzkus' 'Georgii Mikriukov' 'Christian Hellert' 'Ute Schmid']"
] |
null | null | 2405.17525 | null | null | http://arxiv.org/pdf/2405.17525v1 | 2024-05-27T14:23:30Z | 2024-05-27T14:23:30Z | SmoothGNN: Smoothing-based GNN for Unsupervised Node Anomaly Detection | The smoothing issue leads to indistinguishable node representations, which poses a significant challenge in the field of graph learning. However, this issue also presents an opportunity to reveal underlying properties behind different types of nodes, which have been overlooked in previous studies. Through empirical and theoretical analysis of real-world node anomaly detection (NAD) datasets, we observe that anomalous and normal nodes show different patterns in the smoothing process, which can be leveraged to enhance NAD tasks. Motivated by these findings, in this paper, we propose a novel unsupervised NAD framework. Specifically, according to our theoretical analysis, we design a Smoothing Learning Component. Subsequently, we introduce a Smoothing-aware Spectral Graph Neural Network, which establishes the connection between the spectral space of graphs and the smoothing process. Additionally, we demonstrate that the Dirichlet Energy, which reflects the smoothness of a graph, can serve as coefficients for node representations across different dimensions of the spectral space. Building upon these observations and analyses, we devise a novel anomaly measure for the NAD task. Extensive experiments on 9 real-world datasets show that SmoothGNN outperforms the best rival by an average of 14.66% in AUC and 7.28% in Precision, with 75x running time speed-up, which validates the effectiveness and efficiency of our framework. | [
"['Xiangyu Dong' 'Xingyi Zhang' 'Yanni Sun' 'Lei Chen' 'Mingxuan Yuan'\n 'Sibo Wang']"
] |
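The Dirichlet energy mentioned above (2405.17525) is a standard smoothness measure; a minimal sketch of the quantity itself (the paper's use of it as spectral-dimension coefficients is not reproduced):

```python
import numpy as np

def dirichlet_energy(adjacency, features):
    """0.5 * sum_{i,j} A_ij * ||x_i - x_j||^2  ==  trace(X^T L X), with L = D - A.
    Smoother node representations give lower energy."""
    laplacian = np.diag(adjacency.sum(axis=1)) - adjacency
    return float(np.trace(features.T @ laplacian @ features))

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X_smooth = np.ones((4, 2))                                    # identical features -> energy 0
X_rough = np.array([[1, 0], [-1, 0], [1, 0], [-1, 0]], dtype=float)
print(dirichlet_energy(A, X_smooth), dirichlet_energy(A, X_rough))
```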
null | null | 2405.17527 | null | null | http://arxiv.org/pdf/2405.17527v2 | 2024-06-01T13:28:27Z | 2024-05-27T15:34:35Z | Unisolver: PDE-Conditional Transformers Are Universal PDE Solvers | Deep models have recently emerged as a promising tool to solve partial differential equations (PDEs), known as neural PDE solvers. While neural solvers trained from either simulation data or physics-informed loss can solve the PDEs reasonably well, they are mainly restricted to a specific set of PDEs, e.g. a certain equation or a finite set of coefficients. This bottleneck limits the generalizability of neural solvers, which is widely recognized as its major advantage over numerical solvers. In this paper, we present the Universal PDE solver (Unisolver) capable of solving a wide scope of PDEs by leveraging a Transformer pre-trained on diverse data and conditioned on diverse PDEs. Instead of simply scaling up data and parameters, Unisolver stems from the theoretical analysis of the PDE-solving process. Our key finding is that a PDE solution is fundamentally under the control of a series of PDE components, e.g. equation symbols, coefficients, and initial and boundary conditions. Inspired by the mathematical structure of PDEs, we define a complete set of PDE components and correspondingly embed them as domain-wise (e.g. equation symbols) and point-wise (e.g. boundaries) conditions for Transformer PDE solvers. Integrating physical insights with recent Transformer advances, Unisolver achieves consistent state-of-the-art results on three challenging large-scale benchmarks, showing impressive gains and endowing favorable generalizability and scalability. | [
"['Hang Zhou' 'Yuezhou Ma' 'Haixu Wu' 'Haowen Wang' 'Mingsheng Long']"
] |
null | null | 2405.17529 | null | null | http://arxiv.org/pdf/2405.17529v1 | 2024-05-27T16:30:11Z | 2024-05-27T16:30:11Z | Clip Body and Tail Separately: High Probability Guarantees for DPSGD
with Heavy Tails | Differentially Private Stochastic Gradient Descent (DPSGD) is widely utilized to preserve training data privacy in deep learning, which first clips the gradients to a predefined norm and then injects calibrated noise into the training procedure. Existing DPSGD works typically assume the gradients follow sub-Gaussian distributions and design various clipping mechanisms to optimize training performance. However, recent studies have shown that the gradients in deep learning exhibit a heavy-tail phenomenon, that is, the tails of the gradient have infinite variance, which may lead to excessive clipping loss to the gradients with existing DPSGD mechanisms. To address this problem, we propose a novel approach, Discriminative Clipping (DC)-DPSGD, with two key designs. First, we introduce a subspace identification technique to distinguish between body and tail gradients. Second, we present a discriminative clipping mechanism that applies different clipping thresholds for body and tail gradients to reduce the clipping loss. Under the non-convex condition, our technique reduces the empirical gradient norm from $\mathbb{O}\left(\log^{\max(0,\theta-1)}(T/\delta)\log^{2\theta}(\sqrt{T})\right)$ to $\mathbb{O}\left(\log(\sqrt{T})\right)$ with heavy-tailed index $\theta\geq 1/2$, iterations $T$, and arbitrary probability $\delta$. Extensive experiments on four real-world datasets demonstrate that our approach outperforms three baselines by up to 9.72% in terms of accuracy. | [
"['Haichao Sha' 'Yang Cao' 'Yong Liu' 'Yuncheng Wu' 'Ruixuan Liu'\n 'Hong Chen']"
] |
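A rough sketch of the discriminative-clipping idea above (2405.17529), with two thresholds and Gaussian noise. The crude norm-based body/tail split and the noise calibration are placeholders: the paper identifies tail gradients via subspace identification, and a real DP implementation needs proper privacy accounting.

```python
import numpy as np

def dc_clip_and_noise(per_sample_grads, is_tail, c_body, c_tail, noise_multiplier, rng):
    """Clip each per-sample gradient to its group's threshold, average, add noise."""
    clipped = []
    for g, tail in zip(per_sample_grads, is_tail):
        c = c_tail if tail else c_body
        clipped.append(g * min(1.0, c / (np.linalg.norm(g) + 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    sigma = noise_multiplier * max(c_body, c_tail) / len(per_sample_grads)
    return mean_grad + rng.normal(scale=sigma, size=mean_grad.shape)

rng = np.random.default_rng(0)
grads = [rng.standard_t(df=2, size=10) for _ in range(32)]   # heavy-tailed toy gradients
tails = [np.linalg.norm(g) > 5.0 for g in grads]             # placeholder body/tail split
update = dc_clip_and_noise(grads, tails, c_body=1.0, c_tail=5.0, noise_multiplier=1.0, rng=rng)
```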
null | null | 2405.17533 | null | null | http://arxiv.org/pdf/2405.17533v1 | 2024-05-27T17:50:25Z | 2024-05-27T17:50:25Z | PAE: LLM-based Product Attribute Extraction for E-Commerce Fashion
Trends | Product attribute extraction is a growing field in e-commerce business, with several applications including product ranking, product recommendation, future assortment planning and improving online shopping customer experiences. Understanding customer needs is a critical part of online business, specifically for fashion products. Retailers use assortment planning to determine the mix of products to offer in each store and channel, to stay responsive to market dynamics, and to manage inventory and catalogs. The goal is to offer the right styles, in the right sizes and colors, through the right channels. When shoppers find products that meet their needs and desires, they are more likely to return for future purchases, fostering customer loyalty. Product attributes are a key factor in assortment planning. In this paper we present PAE, a product attribute extraction algorithm for future trend reports consisting of text and images in PDF format. Most existing methods focus on attribute extraction from titles or product descriptions or utilize visual information from existing product images. Compared to the prior works, our work focuses on attribute extraction from PDF files where upcoming fashion trends are explained. This work proposes a more comprehensive framework that fully utilizes the different modalities for attribute extraction and helps retailers plan the assortment in advance. Our contributions are three-fold: (a) We develop PAE, an efficient framework to extract attributes from unstructured data (text and images); (b) We provide a catalog matching methodology based on BERT representations to discover the existing attributes using upcoming attribute values; (c) We conduct extensive experiments with several baselines and show that PAE is an effective and flexible framework that is on par with or superior to (avg 92.5% F1-Score) the existing state-of-the-art for the attribute value extraction task. | [
"['Apurva Sinha' 'Ekta Gujral']"
] |
null | null | 2405.17534 | null | null | http://arxiv.org/pdf/2405.17534v2 | 2024-06-08T16:49:00Z | 2024-05-27T17:53:32Z | SMR: State Memory Replay for Long Sequence Modeling | Despite the promising performance of state space models (SSMs) in long sequence modeling, limitations still exist. While advanced SSMs like S5 and S6 (Mamba) address non-uniform sampling, their recursive structures impede efficient SSM computation via convolution. To overcome compatibility limitations in parallel convolutional computation, this paper proposes a novel non-recursive non-uniform sample processing strategy. Theoretical analysis of SSMs through the lens of Event-Triggered Control (ETC) theory reveals the Non-Stable State (NSS) problem, where deviations from sampling point requirements lead to error transmission and accumulation, causing the divergence of the SSM's hidden state. Our analysis further reveals that adjustments of input sequences with early memories can mitigate the NSS problem, achieving Sampling Step Adaptation (SSA). Building on this insight, we introduce a simple yet effective plug-and-play mechanism, State Memory Replay (SMR), which utilizes learnable memories to adjust the current state with multi-step information for generalization at sampling points different from those in the training data. This enables SSMs to stably model varying sampling points. Experiments on long-range modeling tasks in autoregressive language modeling and Long Range Arena demonstrate the general effectiveness of the SMR mechanism for a series of SSM models. | [
"['Biqing Qi' 'Junqi Gao' 'Kaiyan Zhang' 'Dong Li' 'Jianxing Liu'\n 'Ligang Wu' 'Bowen Zhou']"
] |
null | null | 2405.17535 | null | null | http://arxiv.org/pdf/2405.17535v1 | 2024-05-27T17:55:01Z | 2024-05-27T17:55:01Z | Calibrated Dataset Condensation for Faster Hyperparameter Search | Dataset condensation can be used to reduce the computational cost of training multiple models on a large dataset by condensing the training dataset into a small synthetic set. State-of-the-art approaches rely on matching the model gradients between the real and synthetic data. However, there is no theoretical guarantee of the generalizability of the condensed data: data condensation often generalizes poorly across hyperparameters/architectures in practice. This paper considers a different condensation objective specifically geared toward hyperparameter search. We aim to generate a synthetic validation dataset so that the validation-performance rankings of the models, with different hyperparameters, on the condensed and original datasets are comparable. We propose a novel hyperparameter-calibrated dataset condensation (HCDC) algorithm, which obtains the synthetic validation dataset by matching the hyperparameter gradients computed via implicit differentiation and efficient inverse Hessian approximation. Experiments demonstrate that the proposed framework effectively maintains the validation-performance rankings of models and speeds up hyperparameter/architecture search for tasks on both images and graphs. | [
"['Mucong Ding' 'Yuancheng Xu' 'Tahseen Rabbani' 'Xiaoyu Liu'\n 'Brian Gravelle' 'Teresa Ranadive' 'Tai-Ching Tuan' 'Furong Huang']"
] |
null | null | 2405.17538 | null | null | http://arxiv.org/pdf/2405.17538v1 | 2024-05-27T18:00:00Z | 2024-05-27T18:00:00Z | Bayesian RG Flow in Neural Network Field Theories | The Neural Network Field Theory correspondence (NNFT) is a mapping from neural network (NN) architectures into the space of statistical field theories (SFTs). The Bayesian renormalization group (BRG) is an information-theoretic coarse graining scheme that generalizes the principles of the Exact Renormalization Group (ERG) to arbitrarily parameterized probability distributions, including those of NNs. In BRG, coarse graining is performed in parameter space with respect to an information-theoretic distinguishability scale set by the Fisher information metric. In this paper, we unify NNFT and BRG to form a powerful new framework for exploring the space of NNs and SFTs, which we coin BRG-NNFT. With BRG-NNFT, NN training dynamics can be interpreted as inducing a flow in the space of SFTs from the information-theoretic `IR' $\rightarrow$ `UV'. Conversely, applying an information-shell coarse graining to the trained network's parameters induces a flow in the space of SFTs from the information-theoretic `UV' $\rightarrow$ `IR'. When the information-theoretic cutoff scale coincides with a standard momentum scale, BRG is equivalent to ERG. We demonstrate the BRG-NNFT correspondence on two analytically tractable examples. First, we construct BRG flows for trained, infinite-width NNs, of arbitrary depth, with generic activation functions. As a special case, we then restrict to architectures with a single infinitely-wide layer, scalar outputs, and generalized cos-net activations. In this case, we show that BRG coarse-graining corresponds exactly to the momentum-shell ERG flow of a free scalar SFT. Our analytic results are corroborated by a numerical experiment in which an ensemble of asymptotically wide NNs are trained and subsequently renormalized using an information-shell BRG scheme. | [
"['Jessica N. Howard' 'Marc S. Klinger' 'Anindita Maiti'\n 'Alexander G. Stapleton']"
] |
null | null | 2405.17541 | null | null | http://arxiv.org/pdf/2405.17541v1 | 2024-05-27T18:00:00Z | 2024-05-27T18:00:00Z | Approximately-symmetric neural networks for quantum spin liquids | We propose and analyze a family of approximately-symmetric neural networks for quantum spin liquid problems. These tailored architectures are parameter-efficient, scalable, and significantly out-perform existing symmetry-unaware neural network architectures. Utilizing the mixed-field toric code model, we demonstrate that our approach is competitive with the state-of-the-art tensor network and quantum Monte Carlo methods. Moreover, at the largest system sizes (N=480), our method allows us to explore Hamiltonians with sign problems beyond the reach of both quantum Monte Carlo and finite-size matrix-product states. The network comprises an exactly symmetric block following a non-symmetric block, which we argue learns a transformation of the ground state analogous to quasiadiabatic continuation. Our work paves the way toward investigating quantum spin liquid problems within interpretable neural network architectures | [
"['Dominik S. Kufel' 'Jack Kemp' 'Simon M. Linsel' 'Chris R. Laumann'\n 'Norman Y. Yao']"
] |
null | null | 2405.17544 | null | null | http://arxiv.org/pdf/2405.17544v1 | 2024-05-27T18:00:00Z | 2024-05-27T18:00:00Z | Towards Human-AI Complementarity with Predictions Sets | Decision support systems based on prediction sets have proven to be effective at helping human experts solve classification tasks. Rather than providing single-label predictions, these systems provide sets of label predictions constructed using conformal prediction, namely prediction sets, and ask human experts to predict label values from these sets. In this paper, we first show that the prediction sets constructed using conformal prediction are, in general, suboptimal in terms of average accuracy. Then, we show that the problem of finding the optimal prediction sets under which the human experts achieve the highest average accuracy is NP-hard. More strongly, unless P = NP, we show that the problem is hard to approximate to any factor less than the size of the label set. However, we introduce a simple and efficient greedy algorithm that, for a large class of expert models and non-conformity scores, is guaranteed to find prediction sets that provably offer equal or greater performance than those constructed using conformal prediction. Further, using a simulation study with both synthetic and real expert predictions, we demonstrate that, in practice, our greedy algorithm finds near-optimal prediction sets offering greater performance than conformal prediction. | [
"['Giovanni De Toni' 'Nastaran Okati' 'Suhas Thejaswi' 'Eleni Straitouri'\n 'Manuel Gomez-Rodriguez']"
] |
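For context on the entry above (2405.17544), a sketch of the standard split-conformal prediction sets that the paper takes as its baseline (its own greedy set construction is not reproduced here):

```python
import numpy as np

def conformal_prediction_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split conformal prediction with score 1 - p(true label): each test point's
    set contains every label whose score is below the calibrated quantile."""
    n = len(cal_labels)
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    qhat = np.quantile(scores, q_level, method="higher")
    return [np.where(1.0 - p <= qhat)[0] for p in test_probs]

rng = np.random.default_rng(0)
cal_probs = rng.dirichlet(np.ones(5), size=200)      # calibration-set class probabilities
cal_labels = rng.integers(0, 5, size=200)
test_probs = rng.dirichlet(np.ones(5), size=3)
print(conformal_prediction_sets(cal_probs, cal_labels, test_probs))
```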
null | null | 2405.17556 | null | null | http://arxiv.org/pdf/2405.17556v1 | 2024-05-27T18:00:03Z | 2024-05-27T18:00:03Z | Probabilistic Verification of Neural Networks using Branch and Bound | Probabilistic verification of neural networks is concerned with formally analysing the output distribution of a neural network under a probability distribution of the inputs. Examples of probabilistic verification include verifying the demographic parity fairness notion or quantifying the safety of a neural network. We present a new algorithm for the probabilistic verification of neural networks based on an algorithm for computing and iteratively refining lower and upper bounds on probabilities over the outputs of a neural network. By applying state-of-the-art bound propagation and branch and bound techniques from non-probabilistic neural network verification, our algorithm significantly outpaces existing probabilistic verification algorithms, reducing solving times for various benchmarks from the literature from tens of minutes to tens of seconds. Furthermore, our algorithm compares favourably even to dedicated algorithms for restricted subsets of probabilistic verification. We complement our empirical evaluation with a theoretical analysis, proving that our algorithm is sound and, under mildly restrictive conditions, also complete when using a suitable set of heuristics. | [
"['David Boetius' 'Stefan Leue' 'Tobias Sutter']"
] |
null | null | 2405.17566 | null | null | http://arxiv.org/pdf/2405.17566v1 | 2024-05-27T18:00:49Z | 2024-05-27T18:00:49Z | A deep-learning algorithm to disentangle self-interacting dark matter
and AGN feedback models | Different models of dark matter can alter the distribution of mass in galaxy clusters in a variety of ways. However, so can uncertain astrophysical feedback mechanisms. Here we present a Machine Learning method that "learns" how the impact of dark matter self-interactions differs from that of astrophysical feedback in order to break this degeneracy and make inferences on dark matter. We train a Convolutional Neural Network on images of galaxy clusters from hydro-dynamic simulations. In the idealised case our algorithm is 80% accurate at identifying if a galaxy cluster harbours collisionless dark matter, dark matter with $\sigma_{\rm DM}/m = 0.1$ cm$^2$/g or with $\sigma_{\rm DM}/m = 1$ cm$^2$/g. Whilst we find adding X-ray emissivity maps does not improve the performance in differentiating collisional dark matter, it does improve the ability to disentangle different models of astrophysical feedback. We include noise to resemble data expected from Euclid and Chandra and find our model has a statistical error of < 0.01 cm$^2$/g and that our algorithm is insensitive to shape measurement bias and photometric redshift errors. This method represents a new way to analyse data from upcoming telescopes that is an order of magnitude more precise and many orders faster, enabling us to explore the dark matter parameter space like never before. | [
"['David Harvey']"
] |
null | null | 2405.17569 | null | null | http://arxiv.org/abs/2405.17569v1 | 2024-05-27T18:04:49Z | 2024-05-27T18:04:49Z | Discriminant audio properties in deep learning based respiratory
insufficiency detection in Brazilian Portuguese | This work investigates Artificial Intelligence (AI) systems that detect respiratory insufficiency (RI) by analyzing speech audio, thus treating speech as an RI biomarker. Previous works collected RI data (P1) from COVID-19 patients during the first phase of the pandemic and trained modern AI models, such as CNNs and Transformers, which achieved $96.5\%$ accuracy, showing the feasibility of RI detection via AI. Here, we collect RI patient data (P2) with several causes besides COVID-19, aiming at extending AI-based RI detection. We also collected control data from hospital patients without RI. We show that the considered models, when trained on P1, do not generalize to P2, indicating that COVID-19 RI has features that may not be found in all RI types. | [
"['Marcelo Matheus Gauy' 'Larissa Cristina Berti' 'Arnaldo Cândido Jr'\n 'Augusto Camargo Neto' 'Alfredo Goldman' 'Anna Sara Shafferman Levin'\n 'Marcus Martins' 'Beatriz Raposo de Medeiros' 'Marcelo Queiroz'\n 'Ester Cerdeira Sabino' 'Flaviane Romani Fernandes Svartman'\n 'Marcelo Finger']"
] |
null | null | 2405.17573 | null | null | http://arxiv.org/pdf/2405.17573v1 | 2024-05-27T18:15:05Z | 2024-05-27T18:15:05Z | Hamiltonian Mechanics of Feature Learning: Bottleneck Structure in Leaky
ResNets | We study Leaky ResNets, which interpolate between ResNets ($\tilde{L}=0$) and Fully-Connected nets ($\tilde{L}\to\infty$) depending on an 'effective depth' hyper-parameter $\tilde{L}$. In the infinite depth limit, we study 'representation geodesics' $A_{p}$: continuous paths in representation space (similar to NeuralODEs) from input $p=0$ to output $p=1$ that minimize the parameter norm of the network. We give a Lagrangian and Hamiltonian reformulation, which highlights the importance of two terms: a kinetic energy which favors small layer derivatives $\partial_{p}A_{p}$ and a potential energy that favors low-dimensional representations, as measured by the 'Cost of Identity'. The balance between these two forces offers an intuitive understanding of feature learning in ResNets. We leverage this intuition to explain the emergence of a bottleneck structure, as observed in previous work: for large $\tilde{L}$, the potential energy dominates and leads to a separation of timescales, where the representation jumps rapidly from the high dimensional inputs to a low-dimensional representation, moves slowly inside the space of low-dimensional representations, before jumping back to the potentially high-dimensional outputs. Inspired by this phenomenon, we train with an adaptive layer step-size to adapt to the separation of timescales. | [
"['Arthur Jacot' 'Alexandre Kaiser']"
] |
null | null | 2405.17575 | null | null | http://arxiv.org/pdf/2405.17575v1 | 2024-05-27T18:15:40Z | 2024-05-27T18:15:40Z | Interpretable Prognostics with Concept Bottleneck Models | Deep learning approaches have recently been extensively explored for the prognostics of industrial assets. However, they still suffer from a lack of interpretability, which hinders their adoption in safety-critical applications. To improve their trustworthiness, explainable AI (XAI) techniques have been applied in prognostics, primarily to quantify the importance of input variables for predicting the remaining useful life (RUL) using post-hoc attribution methods. In this work, we propose the application of Concept Bottleneck Models (CBMs), a family of inherently interpretable neural network architectures based on concept explanations, to the task of RUL prediction. Unlike attribution methods, which explain decisions in terms of low-level input features, concepts represent high-level information that is easily understandable by users. Moreover, once verified in actual applications, CBMs enable domain experts to intervene on the concept activations at test-time. We propose using the different degradation modes of an asset as intermediate concepts. Our case studies on the New Commercial Modular AeroPropulsion System Simulation (N-CMAPSS) aircraft engine dataset for RUL prediction demonstrate that the performance of CBMs can be on par or superior to black-box models, while being more interpretable, even when the available labeled concepts are limited. Code available at href{https://github.com/EPFL-IMOS/concept-prognostics/}{url{github.com/EPFL-IMOS/concept-prognostics/}}. | [
"['Florent Forest' 'Katharina Rombach' 'Olga Fink']"
] |
null | null | 2405.17580 | null | null | http://arxiv.org/pdf/2405.17580v1 | 2024-05-27T18:29:23Z | 2024-05-27T18:29:23Z | Mixed Dynamics In Linear Networks: Unifying the Lazy and Active Regimes | The training dynamics of linear networks are well studied in two distinct setups: the lazy regime and balanced/active regime, depending on the initialization and width of the network. We provide a surprisingly simple unifying formula for the evolution of the learned matrix that contains as special cases both lazy and balanced regimes but also a mixed regime in between the two. In the mixed regime, a part of the network is lazy while the other is balanced. More precisely, the network is lazy along singular values that are below a certain threshold and balanced along those that are above the same threshold. At initialization, all singular values are lazy, allowing for the network to align itself with the task, so that later in time, when some of the singular values cross the threshold and become active, they will converge rapidly (convergence in the balanced regime is notoriously difficult in the absence of alignment). The mixed regime is the `best of both worlds': it converges from any random initialization (in contrast to balanced dynamics which require special initialization), and has a low rank bias (absent in the lazy dynamics). This allows us to prove an almost complete phase diagram of training behavior as a function of the variance at initialization and the width, for an MSE training task. | [
"['Zhenfeng Tu' 'Santiago Aranguri' 'Arthur Jacot']"
] |
null | null | 2405.17582 | null | null | http://arxiv.org/abs/2405.17582v1 | 2024-05-27T18:32:36Z | 2024-05-27T18:32:36Z | Building a temperature forecasting model for the city with the
regression neural network (RNN) | In recent years, studies by environmental organizations around the world and in Vietnam show that weather change is quite complex. Global warming has become a serious problem in the modern world, which is a concern for scientists. Last century, it was difficult to forecast the weather due to missing weather monitoring stations and technological limitations. This made it hard to collect data for building predictive models to make accurate simulations. In Vietnam, research on weather forecast models is a recent development, having only begun around 2000. Along with advancements in computer science, mathematical models are being built and applied with machine learning techniques to create more accurate and reliable predictive models. This article will summarize the research and solutions for applying recurrent neural networks to forecast urban temperatures. | [
"['Nguyen Phuc Tran' 'Duy Thanh Tran' 'Thi Thuy Nga Duong']"
] |
null | null | 2405.17583 | null | null | http://arxiv.org/pdf/2405.17583v1 | 2024-05-27T18:33:37Z | 2024-05-27T18:33:37Z | Understanding Forgetting in Continual Learning with Linear Regression | Continual learning, focused on sequentially learning multiple tasks, has gained significant attention recently. Despite the tremendous progress made in the past, the theoretical understanding, especially factors contributing to catastrophic forgetting, remains relatively unexplored. In this paper, we provide a general theoretical analysis of forgetting in the linear regression model via Stochastic Gradient Descent (SGD) applicable to both underparameterized and overparameterized regimes. Our theoretical framework reveals some interesting insights into the intricate relationship between task sequence and algorithmic parameters, an aspect not fully captured in previous studies due to their restrictive assumptions. Specifically, we demonstrate that, given a sufficiently large data size, the arrangement of tasks in a sequence, where tasks with larger eigenvalues in their population data covariance matrices are trained later, tends to result in increased forgetting. Additionally, our findings highlight that an appropriate choice of step size will help mitigate forgetting in both underparameterized and overparameterized settings. To validate our theoretical analysis, we conducted simulation experiments on both linear regression models and Deep Neural Networks (DNNs). Results from these simulations substantiate our theoretical findings. | [
"['Meng Ding' 'Kaiyi Ji' 'Di Wang' 'Jinhui Xu']"
] |
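A toy simulation in the spirit of the setting above (2405.17583), not the paper's exact construction: two linear-regression tasks trained sequentially with SGD, with forgetting measured as the rise in task-1 loss after training on task 2.

```python
import numpy as np

def sgd_linear(w, X, y, lr=0.005, epochs=5, rng=None):
    """Plain per-sample SGD on squared error for y ≈ X w."""
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            w = w - lr * (X[i] @ w - y[i]) * X[i]
    return w

rng = np.random.default_rng(0)
d, n = 20, 500
w1, w2 = rng.normal(size=d), rng.normal(size=d)      # different task optima
X1 = rng.normal(size=(n, d))
X2 = 2.0 * rng.normal(size=(n, d))                   # task 2 has larger covariance eigenvalues
y1, y2 = X1 @ w1, X2 @ w2

w = sgd_linear(np.zeros(d), X1, y1, rng=rng)
loss1_before = np.mean((X1 @ w - y1) ** 2)
w = sgd_linear(w, X2, y2, rng=rng)
print("forgetting on task 1:", np.mean((X1 @ w - y1) ** 2) - loss1_before)
```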
null | null | 2405.17587 | null | null | http://arxiv.org/pdf/2405.17587v1 | 2024-05-27T18:40:49Z | 2024-05-27T18:40:49Z | RAGSys: Item-Cold-Start Recommender as RAG System | Large Language Models (LLM) hold immense promise for real-world applications, but their generic knowledge often falls short of domain-specific needs. Fine-tuning, a common approach, can suffer from catastrophic forgetting and hinder generalizability. In-Context Learning (ICL) offers an alternative, which can leverage Retrieval-Augmented Generation (RAG) to provide LLMs with relevant demonstrations for few-shot learning tasks. This paper explores the desired qualities of a demonstration retrieval system for ICL. We argue that ICL retrieval in this context resembles item-cold-start recommender systems, prioritizing discovery and maximizing information gain over strict relevance. We propose a novel evaluation method that measures the LLM's subsequent performance on NLP tasks, eliminating the need for subjective diversity scores. Our findings demonstrate the critical role of diversity and quality bias in retrieved demonstrations for effective ICL, and highlight the potential of recommender system techniques in this domain. | [
"['Emile Contal' 'Garrin McGoldrick']"
] |
null | null | 2405.17604 | null | null | http://arxiv.org/pdf/2405.17604v1 | 2024-05-27T19:07:13Z | 2024-05-27T19:07:13Z | LoRA-XS: Low-Rank Adaptation with Extremely Small Number of Parameters | The recent trend in scaling language models has led to a growing demand for parameter-efficient tuning (PEFT) methods such as LoRA (Low-Rank Adaptation). LoRA consistently matches or surpasses the full fine-tuning baseline with fewer parameters. However, handling numerous task-specific or user-specific LoRA modules on top of a base model still presents significant storage challenges. To address this, we introduce LoRA-XS (Low-Rank Adaptation with eXtremely Small number of parameters), a novel approach leveraging Singular Value Decomposition (SVD) for parameter-efficient fine-tuning. LoRA-XS introduces a small r x r weight matrix between frozen LoRA matrices, which are constructed by SVD of the original weight matrix. Training only r x r weight matrices ensures independence from model dimensions, enabling more parameter-efficient fine-tuning, especially for larger models. LoRA-XS achieves a remarkable reduction of trainable parameters by over 100x in 7B models compared to LoRA. Our benchmarking across various scales, including GLUE, GSM8k, and MATH benchmarks, shows that our approach outperforms LoRA and recent state-of-the-art approaches like VeRA in terms of parameter efficiency while maintaining competitive performance. | [
"['Klaudia Bałazy' 'Mohammadreza Banaei' 'Karl Aberer' 'Jacek Tabor']"
] |
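A sketch of the LoRA-XS parameterization described above (2405.17604): the outer factors come from a truncated SVD of the frozen weight and stay frozen, and only the small r x r matrix is trained. The exact placement of the singular values and the initialization follow the paper only approximately.

```python
import torch
import torch.nn as nn

class LoRAXSLinear(nn.Module):
    def __init__(self, linear: nn.Linear, r: int = 8):
        super().__init__()
        self.linear = linear
        for p in self.linear.parameters():
            p.requires_grad = False                                   # freeze base weights
        U, S, Vh = torch.linalg.svd(linear.weight.detach(), full_matrices=False)
        self.A = nn.Parameter(U[:, :r] * S[:r], requires_grad=False)  # (out, r), frozen
        self.B = nn.Parameter(Vh[:r, :], requires_grad=False)         # (r, in), frozen
        self.R = nn.Parameter(torch.zeros(r, r))                      # the only trainable part

    def forward(self, x):
        return self.linear(x) + x @ (self.A @ self.R @ self.B).T

layer = LoRAXSLinear(nn.Linear(64, 64), r=4)
print(sum(p.numel() for p in layer.parameters() if p.requires_grad))  # 16 = r * r
```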
null | null | 2405.17607 | null | null | http://arxiv.org/pdf/2405.17607v1 | 2024-05-27T19:12:53Z | 2024-05-27T19:12:53Z | Advancing Cultural Inclusivity: Optimizing Embedding Spaces for Balanced
Music Recommendations | Popularity bias in music recommendation systems -- where artists and tracks with the highest listen counts are recommended more often -- can also propagate biases along demographic and cultural axes. In this work, we identify these biases in recommendations for artists from underrepresented cultural groups in prototype-based matrix factorization methods. Unlike traditional matrix factorization methods, prototype-based approaches are interpretable. This allows us to directly link the observed bias in recommendations for minority artists (the effect) to specific properties of the embedding space (the cause). We mitigate popularity bias in music recommendation through capturing both users' and songs' cultural nuances in the embedding space. To address these challenges while maintaining recommendation quality, we propose two novel enhancements to the embedding space: i) we propose an approach to filter-out the irrelevant prototypes used to represent each user and item to improve generalizability, and ii) we introduce regularization techniques to reinforce a more uniform distribution of prototypes within the embedding space. Our results demonstrate significant improvements in reducing popularity bias and enhancing demographic and cultural fairness in music recommendations while achieving competitive -- if not better -- overall performance. | [
"['Armin Moradi' 'Nicola Neophytou' 'Golnoosh Farnadi']"
] |
null | null | 2405.17610 | null | null | http://arxiv.org/abs/2405.17610v1 | 2024-05-27T19:16:42Z | 2024-05-27T19:16:42Z | Explainable machine learning multi-label classification of Spanish legal
judgements | Artificial Intelligence techniques such as Machine Learning (ML) have not been exploited to their maximum potential in the legal domain. This has been partially due to the insufficient explanations they provide about their decisions. Automatic expert systems with explanatory capabilities can be especially useful when legal practitioners search jurisprudence to gather contextual knowledge for their cases. Therefore, we propose a hybrid system that applies ML for multi-label classification of judgements (sentences) and visual and natural language descriptions for explanation purposes, boosted by Natural Language Processing techniques and deep legal reasoning to identify the entities, such as the parties, involved. We are not aware of any prior work on automatic multi-label classification of legal judgements also providing natural language explanations to the end-users with comparable overall quality. Our solution achieves over 85% micro precision on a labelled data set annotated by legal experts. This supports its potential to relieve human experts from monotonous, labour-intensive legal classification tasks. | [
"['Francisco de Arriba-Pérez' 'Silvia García-Méndez'\n 'Francisco J. González-Castaño' 'Jaime González-González']"
] |
null | null | 2405.17612 | null | null | http://arxiv.org/pdf/2405.17612v2 | 2024-05-29T19:39:12Z | 2024-05-27T19:20:22Z | A note on the error analysis of data-driven closure models for large
eddy simulations of turbulence | In this work, we provide a mathematical formulation for error propagation in flow trajectory prediction using data-driven turbulence closure modeling. Under the assumption that the predicted state of a large eddy simulation prediction must be close to that of a subsampled direct numerical simulation, we retrieve an upper bound for the prediction error when utilizing a data-driven closure model. We also demonstrate that this error is significantly affected by the time step size and the Jacobian which play a role in amplifying the initial one-step error made by using the closure. Our analysis also shows that the error propagates exponentially with rollout time and the upper bound of the system Jacobian which is itself influenced by the Jacobian of the closure formulation. These findings could enable the development of new regularization techniques for ML models based on the identified error-bound terms, improving their robustness and reducing error propagation. | [
"['Dibyajyoti Chakraborty' 'Shivam Barwey' 'Hong Zhang' 'Romit Maulik']"
] |
null | null | 2405.17613 | null | null | http://arxiv.org/pdf/2405.17613v1 | 2024-05-27T19:22:41Z | 2024-05-27T19:22:41Z | A Framework for Multi-modal Learning: Jointly Modeling Inter- &
Intra-Modality Dependencies | Supervised multi-modal learning involves mapping multiple modalities to a target label. Previous studies in this field have concentrated on capturing in isolation either the inter-modality dependencies (the relationships between different modalities and the label) or the intra-modality dependencies (the relationships within a single modality and the label). We argue that these conventional approaches that rely solely on either inter- or intra-modality dependencies may not be optimal in general. We view the multi-modal learning problem from the lens of generative models where we consider the target as a source of multiple modalities and the interaction between them. Towards that end, we propose inter- & intra-modality modeling (I2M2) framework, which captures and integrates both the inter- and intra-modality dependencies, leading to more accurate predictions. We evaluate our approach using real-world healthcare and vision-and-language datasets with state-of-the-art models, demonstrating superior performance over traditional methods focusing only on one type of modality dependency. | [
"['Divyam Madaan' 'Taro Makino' 'Sumit Chopra' 'Kyunghyun Cho']"
] |
null | null | 2405.17615 | null | null | http://arxiv.org/pdf/2405.17615v1 | 2024-05-27T19:25:42Z | 2024-05-27T19:25:42Z | Listenable Maps for Zero-Shot Audio Classifiers | Interpreting the decisions of deep learning models, including audio classifiers, is crucial for ensuring the transparency and trustworthiness of this technology. In this paper, we introduce LMAC-ZS (Listenable Maps for Audio Classifiers in the Zero-Shot context), which, to the best of our knowledge, is the first decoder-based post-hoc interpretation method for explaining the decisions of zero-shot audio classifiers. The proposed method utilizes a novel loss function that maximizes the faithfulness to the original similarity between a given text-and-audio pair. We provide an extensive evaluation using the Contrastive Language-Audio Pretraining (CLAP) model to showcase that our interpreter remains faithful to the decisions in a zero-shot classification context. Moreover, we qualitatively show that our method produces meaningful explanations that correlate well with different text prompts. | [
"['Francesco Paissan' 'Luca Della Libera' 'Mirco Ravanelli' 'Cem Subakan']"
] |
null | null | 2405.17618 | null | null | http://arxiv.org/pdf/2405.17618v2 | 2024-05-29T04:19:00Z | 2024-05-27T19:28:33Z | Symmetric Reinforcement Learning Loss for Robust Learning on Diverse
Tasks and Model Scales | Reinforcement learning (RL) training is inherently unstable due to factors such as moving targets and high gradient variance. Reinforcement Learning from Human Feedback (RLHF) and Reinforcement Learning from AI Feedback (RLAIF) can introduce additional difficulty. Differing preferences can complicate the alignment process, and prediction errors in a trained reward model can become more severe as the LLM generates unseen outputs. To enhance training robustness, RL has adopted techniques from supervised learning, such as ensembles and layer normalization. In this work, we improve the stability of RL training by adapting the reverse cross entropy (RCE) from supervised learning for noisy data to define a symmetric RL loss. We demonstrate performance improvements across various tasks and scales. We conduct experiments in discrete action tasks (Atari games) and continuous action space tasks (MuJoCo benchmark and Box2D) using Symmetric A2C (SA2C) and Symmetric PPO (SPPO), with and without added noise, observing especially notable performance for SPPO across different hyperparameters. Furthermore, we validate the benefits of the symmetric RL loss when using SPPO for large language models through improved performance in RLHF tasks, such as the IMDB positive sentiment and TL;DR summarization tasks. | [
"['Ju-Seung Byun' 'Andrew Perrault']"
] |
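The symmetric loss described in the abstract above pairs the usual cross entropy with a reverse cross entropy (RCE) term borrowed from learning with noisy labels. The sketch below shows only that loss combination; the weights, the clipping constant for log 0, and the toy action distributions are illustrative assumptions rather than values from the paper.

```python
# Sketch of a symmetric cross-entropy loss in the spirit described above: the usual
# cross entropy plus a reverse cross entropy term in which the roles of prediction
# and target are swapped. Weights, clipping value, and the example inputs are assumptions.
import numpy as np

def symmetric_ce(p_pred, p_target, alpha=1.0, beta=0.1, log_clip=-4.0):
    """alpha * CE(target, pred) + beta * RCE(pred, target), with log 0 clipped."""
    p_pred = np.clip(p_pred, 1e-12, 1.0)
    ce = -np.sum(p_target * np.log(p_pred))
    rce = -np.sum(p_pred * np.clip(np.log(np.clip(p_target, 1e-12, 1.0)), log_clip, 0.0))
    return alpha * ce + beta * rce

# Example: a one-hot target distribution versus the current action distribution.
p_target = np.array([0.0, 1.0, 0.0, 0.0])
p_pred = np.array([0.2, 0.6, 0.1, 0.1])
print("symmetric loss:", round(symmetric_ce(p_pred, p_target), 4))
```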
null | null | 2405.17625 | null | null | http://arxiv.org/pdf/2405.17625v1 | 2024-05-27T19:46:31Z | 2024-05-27T19:46:31Z | Matrix Low-Rank Trust Region Policy Optimization | Most methods in reinforcement learning use a Policy Gradient (PG) approach to learn a parametric stochastic policy that maps states to actions. The standard approach is to implement such a mapping via a neural network (NN) whose parameters are optimized using stochastic gradient descent. However, PG methods are prone to large policy updates that can render learning inefficient. Trust region algorithms, like Trust Region Policy Optimization (TRPO), constrain the policy update step, ensuring monotonic improvements. This paper introduces low-rank matrix-based models as an efficient alternative for estimating the parameters of TRPO algorithms. By gathering the stochastic policy's parameters into a matrix and applying matrix-completion techniques, we promote and enforce low rank. Our numerical studies demonstrate that low-rank matrix-based policy models effectively reduce both computational and sample complexities compared to NN models, while maintaining comparable aggregated rewards. | [
"['Sergio Rozada' 'Antonio G. Marques']"
] |
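As a rough illustration of the parameterization described in the TRPO abstract above, the sketch below stores a tabular policy's logits as a rank-r factorization and applies a plain REINFORCE-style update to the factors; the environment sizes, rank, learning rate, and the absence of an actual trust-region step are all assumptions made for brevity.

```python
# Illustrative sketch (not the authors' code): a tabular stochastic policy whose
# |S| x |A| logit matrix is constrained to rank r via the factorization L = U @ V.T.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, rank = 20, 4, 2
U = 0.1 * rng.standard_normal((n_states, rank))   # left factor
V = 0.1 * rng.standard_normal((n_actions, rank))  # right factor

def policy(state):
    """Softmax policy over actions from the low-rank logit matrix."""
    logits = U[state] @ V.T                        # one row of L = U V^T
    p = np.exp(logits - logits.max())
    return p / p.sum()

def grad_log_pi(state, action):
    """Gradients of log pi(a|s) w.r.t. U and V (only row `state` of U is touched)."""
    p = policy(state)
    dlogits = -p
    dlogits[action] += 1.0                         # gradient of log-softmax w.r.t. logits
    gU = np.zeros_like(U); gU[state] = dlogits @ V
    gV = np.outer(dlogits, U[state])
    return gU, gV

# One toy REINFORCE step on a single (state, action, return) sample.
s, a, G, lr = 3, 1, 1.0, 0.05
gU, gV = grad_log_pi(s, a)
U += lr * G * gU
V += lr * G * gV
print("pi(.|s=3) after update:", policy(3))
```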
null | null | 2405.17626 | null | null | http://arxiv.org/pdf/2405.17626v1 | 2024-05-27T19:49:08Z | 2024-05-27T19:49:08Z | Matrix Low-Rank Approximation For Policy Gradient Methods | Estimating a policy that maps states to actions is a central problem in reinforcement learning. Traditionally, policies are inferred from the so called value functions (VFs), but exact VF computation suffers from the curse of dimensionality. Policy gradient (PG) methods bypass this by learning directly a parametric stochastic policy. Typically, the parameters of the policy are estimated using neural networks (NNs) tuned via stochastic gradient descent. However, finding adequate NN architectures can be challenging, and convergence issues are common as well. In this paper, we put forth low-rank matrix-based models to estimate efficiently the parameters of PG algorithms. We collect the parameters of the stochastic policy into a matrix, and then, we leverage matrix-completion techniques to promote (enforce) low rank. We demonstrate via numerical studies how low-rank matrix-based policy models reduce the computational and sample complexities relative to NN models, while achieving a similar aggregated reward. | [
"['Sergio Rozada' 'Antonio G. Marques']"
] |
null | null | 2405.17627 | null | null | http://arxiv.org/pdf/2405.17627v1 | 2024-05-27T19:49:18Z | 2024-05-27T19:49:18Z | Salutary Labeling with Zero Human Annotation | Active learning strategically selects informative unlabeled data points and queries their ground truth labels for model training. The prevailing assumption underlying this machine learning paradigm is that acquiring these ground truth labels will optimally enhance model performance. However, this assumption may not always hold true or maximize learning capacity, particularly considering the costly labor annotations required for ground truth labels. In contrast to traditional ground truth labeling, this paper proposes salutary labeling, which automatically assigns the most beneficial labels to the most informative samples without human annotation. Specifically, we utilize the influence function, a tool for estimating sample influence, to select newly added samples and assign their salutary labels by choosing the category that maximizes their positive influence. This process eliminates the need for human annotation. Extensive experiments conducted on nine benchmark datasets demonstrate the superior performance of our salutary labeling approach over traditional active learning strategies. Additionally, we provide several in-depth explorations and practical applications of large language model (LLM) fine-tuning. | [
"['Wenxiao Xiao' 'Hongfu Liu']"
] |
null | null | 2405.17628 | null | null | http://arxiv.org/pdf/2405.17628v1 | 2024-05-27T19:52:00Z | 2024-05-27T19:52:00Z | Tensor Low-rank Approximation of Finite-horizon Value Functions | The goal of reinforcement learning is estimating a policy that maps states to actions and maximizes the cumulative reward of a Markov Decision Process (MDP). This is oftentimes achieved by estimating first the optimal (reward) value function (VF) associated with each state-action pair. When the MDP has an infinite horizon, the optimal VFs and policies are stationary under mild conditions. However, in finite-horizon MDPs, the VFs (hence, the policies) vary with time. This poses a challenge since the number of VFs to estimate grows not only with the size of the state-action space but also with the time horizon. This paper proposes a non-parametric low-rank stochastic algorithm to approximate the VFs of finite-horizon MDPs. First, we represent the (unknown) VFs as a multi-dimensional array, or tensor, where time is one of the dimensions. Then, we use rewards sampled from the MDP to estimate the optimal VFs. More precisely, we use the (truncated) PARAFAC decomposition to design an online low-rank algorithm that recovers the entries of the tensor of VFs. The size of the low-rank PARAFAC model grows additively with respect to each of its dimensions, rendering our approach efficient, as demonstrated via numerical experiments. | [
"['Sergio Rozada' 'Antonio G. Marques']"
] |
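In the spirit of the finite-horizon value-tensor abstract above, the following sketch represents Q[t, s, a] with a rank-R PARAFAC/CP model and updates the three factor rows from a single sampled backup; the MDP sizes, rank, step size, and the simplistic TD target are assumptions, not the paper's algorithm.

```python
# Minimal sketch of a finite-horizon value tensor Q[t, s, a] stored as a rank-R
# PARAFAC (CP) model, with a stochastic update of the factor rows from one transition.
import numpy as np

rng = np.random.default_rng(1)
H, n_states, n_actions, R = 10, 15, 4, 3
A = 0.1 * rng.standard_normal((H, R))          # time factor
B = 0.1 * rng.standard_normal((n_states, R))   # state factor
C = 0.1 * rng.standard_normal((n_actions, R))  # action factor

def q_value(t, s, a):
    return float(np.sum(A[t] * B[s] * C[a]))   # CP/PARAFAC entry: sum_r A[t,r] B[s,r] C[a,r]

def sgd_step(t, s, a, target, lr=0.1):
    """Move Q[t,s,a] toward a sampled TD target by updating the three factor rows."""
    err = q_value(t, s, a) - target
    gA, gB, gC = err * B[s] * C[a], err * A[t] * C[a], err * A[t] * B[s]
    A[t] -= lr * gA; B[s] -= lr * gB; C[a] -= lr * gC

# Example: one backup with reward 1.0 and a greedy next-step value.
t, s, a, s_next, reward = 2, 5, 1, 7, 1.0
target = reward + max(q_value(t + 1, s_next, b) for b in range(n_actions))
sgd_step(t, s, a, target)
print("Q[2,5,1] after one update:", q_value(2, 5, 1))
```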
null | null | 2405.17638 | null | null | http://arxiv.org/pdf/2405.17638v1 | 2024-05-27T20:18:20Z | 2024-05-27T20:18:20Z | The surprising efficiency of temporal difference learning for rare event
prediction | We quantify the efficiency of temporal difference (TD) learning over the direct, or Monte Carlo (MC), estimator for policy evaluation in reinforcement learning, with an emphasis on estimation of quantities related to rare events. Policy evaluation is complicated in the rare event setting by the long timescale of the event and by the need for \emph{relative accuracy} in estimates of very small values. Specifically, we focus on least-squares TD (LSTD) prediction for finite state Markov chains, and show that LSTD can achieve relative accuracy far more efficiently than MC. We prove a central limit theorem for the LSTD estimator and upper bound the \emph{relative asymptotic variance} by simple quantities characterizing the connectivity of states relative to the transition probabilities between them. Using this bound, we show that, even when both the timescale of the rare event and the relative accuracy of the MC estimator are exponentially large in the number of states, LSTD maintains a fixed level of relative accuracy with a total number of observed transitions of the Markov chain that is only \emph{polynomially} large in the number of states. | [
"['Xiaoou Cheng' 'Jonathan Weare']"
] |
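A minimal tabular LSTD sketch, to make the estimator discussed in the abstract above concrete: it accumulates the usual A and b statistics along one trajectory of a small Markov chain and solves for the value estimate. The chain, the sparse reward standing in for a "rare" event, and the discount factor are illustrative assumptions.

```python
# Hedged sketch of tabular least-squares TD (LSTD) for policy evaluation on a small
# Markov chain; this is not the rare-event example analyzed in the paper.
import numpy as np

rng = np.random.default_rng(2)
n = 5
P = rng.dirichlet(np.ones(n), size=n)        # transition matrix of the evaluated policy
r = np.zeros(n); r[-1] = 1.0                 # sparse reward on the last state
gamma = 0.95
phi = np.eye(n)                              # tabular one-hot features

A_hat = np.zeros((n, n)); b_hat = np.zeros(n)
s = 0
for _ in range(20000):                       # one long trajectory
    s_next = rng.choice(n, p=P[s])
    A_hat += np.outer(phi[s], phi[s] - gamma * phi[s_next])
    b_hat += phi[s] * r[s]
    s = s_next

theta_lstd = np.linalg.solve(A_hat, b_hat)   # LSTD value estimate per state
v_true = np.linalg.solve(np.eye(n) - gamma * P, r)
print("LSTD estimate:", np.round(theta_lstd, 3))
print("exact values :", np.round(v_true, 3))
```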
null | null | 2405.17640 | null | null | http://arxiv.org/pdf/2405.17640v1 | 2024-05-27T20:24:03Z | 2024-05-27T20:24:03Z | Probabilistically Plausible Counterfactual Explanations with Normalizing
Flows | We present PPCEF, a novel method for generating probabilistically plausible counterfactual explanations (CFs). PPCEF advances beyond existing methods by combining a probabilistic formulation that leverages the data distribution with the optimization of plausibility within a unified framework. Compared to reference approaches, our method enforces plausibility by directly optimizing the explicit density function without assuming a particular family of parametrized distributions. This ensures CFs are not only valid (i.e., achieve class change) but also align with the underlying data's probability density. For that purpose, our approach leverages normalizing flows as powerful density estimators to capture the complex high-dimensional data distribution. Furthermore, we introduce a novel loss that balances the trade-off between achieving class change and maintaining closeness to the original instance while also incorporating a probabilistic plausibility term. PPCEF's unconstrained formulation allows for efficient gradient-based optimization with batch processing, leading to orders of magnitude faster computation compared to prior methods. Moreover, the unconstrained formulation of PPCEF allows for the seamless integration of future constraints tailored to specific counterfactual properties. Finally, extensive evaluations demonstrate PPCEF's superiority in generating high-quality, probabilistically plausible counterfactual explanations in high-dimensional tabular settings. This makes PPCEF a powerful tool for not only interpreting complex machine learning models but also for improving fairness, accountability, and trust in AI systems. | [
"['Patryk Wielopolski' 'Oleksii Furman' 'Jerzy Stefanowski' 'Maciej Zięba']"
] |
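A loose sketch of the kind of counterfactual objective the abstract above describes: a validity term toward the target class, a distance term to the original instance, and a log-density plausibility term. A toy Gaussian log-density stands in for the normalizing flow, the classifier is a hand-set linear model, and a random local search replaces the paper's gradient-based optimization.

```python
# Hedged sketch of a plausibility-aware counterfactual objective; all components
# (classifier, density stand-in, weights, search procedure) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(5)

w, b = np.array([1.5, -2.0]), 0.3                       # toy linear classifier
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
log_density = lambda x: -0.5 * np.sum((x - 1.0) ** 2)   # stand-in for a flow's log p(x)

def cf_loss(x_cf, x_orig, target=1, lam_dist=1.0, lam_plaus=0.1):
    p = sigmoid(w @ x_cf + b)
    validity = -np.log(p if target == 1 else 1 - p)     # cross entropy toward target class
    distance = np.sum((x_cf - x_orig) ** 2)
    return validity + lam_dist * distance - lam_plaus * log_density(x_cf)

# Simple random local search, just to show the objective being minimized.
x0 = np.array([-1.0, 1.0])
x = x0.copy()
for _ in range(500):
    cand = x + 0.05 * rng.standard_normal(2)
    if cf_loss(cand, x0) < cf_loss(x, x0):
        x = cand
print("counterfactual:", np.round(x, 3), " p(class 1) =", round(float(sigmoid(w @ x + b)), 3))
```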
null | null | 2405.17642 | null | null | http://arxiv.org/pdf/2405.17642v1 | 2024-05-27T20:32:09Z | 2024-05-27T20:32:09Z | Unifying Perspectives: Plausible Counterfactual Explanations on Global,
Group-wise, and Local Levels | Growing regulatory and societal pressures demand increased transparency in AI, particularly in understanding the decisions made by complex machine learning models. Counterfactual Explanations (CFs) have emerged as a promising technique within Explainable AI (xAI), offering insights into individual model predictions. However, to understand the systemic biases and disparate impacts of AI models, it is crucial to move beyond local CFs and embrace global explanations, which offer a holistic view across diverse scenarios and populations. Unfortunately, generating Global Counterfactual Explanations (GCEs) faces challenges in computational complexity, defining the scope of "global," and ensuring the explanations are both globally representative and locally plausible. We introduce a novel unified approach for generating Local, Group-wise, and Global Counterfactual Explanations for differentiable classification models via gradient-based optimization to address these challenges. This framework aims to bridge the gap between individual and systemic insights, enabling a deeper understanding of model decisions and their potential impact on diverse populations. Our approach further innovates by incorporating a probabilistic plausibility criterion, enhancing actionability and trustworthiness. By offering a cohesive solution to the optimization and plausibility challenges in GCEs, our work significantly advances the interpretability and accountability of AI models, marking a step forward in the pursuit of transparent AI. | [
"['Patryk Wielopolski' 'Oleksii Furman' 'Jerzy Stefanowski' 'Maciej Zięba']"
] |
null | null | 2405.17653 | null | null | http://arxiv.org/pdf/2405.17653v3 | 2024-07-15T13:30:52Z | 2024-05-27T20:53:22Z | InversionView: A General-Purpose Method for Reading Information from
Neural Activations | The inner workings of neural networks can be better understood if we can fully decipher the information encoded in neural activations. In this paper, we argue that this information is embodied by the subset of inputs that give rise to similar activations. Computing such subsets is nontrivial as the input space is exponentially large. We propose InversionView, which allows us to practically inspect this subset by sampling from a trained decoder model conditioned on activations. This helps uncover the information content of activation vectors, and facilitates understanding of the algorithms implemented by transformer models. We present four case studies where we investigate models ranging from small transformers to GPT-2. In these studies, we demonstrate the characteristics of our method, show the distinctive advantages it offers, and provide causally verified circuits. | [
"['Xinting Huang' 'Madhur Panwar' 'Navin Goyal' 'Michael Hahn']"
] |
null | null | 2405.17656 | null | null | http://arxiv.org/pdf/2405.17656v1 | 2024-05-27T20:57:19Z | 2024-05-27T20:57:19Z | Alignment is Key for Applying Diffusion Models to Retrosynthesis | Retrosynthesis, the task of identifying precursors for a given molecule, can be naturally framed as a conditional graph generation task. Diffusion models are a particularly promising modelling approach, enabling post-hoc conditioning and trading off quality for speed during generation. We show mathematically that permutation equivariant denoisers severely limit the expressiveness of graph diffusion models and thus their adaptation to retrosynthesis. To address this limitation, we relax the equivariance requirement such that it only applies to aligned permutations of the conditioning and the generated graphs obtained through atom mapping. Our new denoiser achieves the highest top-$1$ accuracy ($54.7$%) across template-free and template-based methods on USPTO-50k. We also demonstrate the ability for flexible post-training conditioning and good sample quality with small diffusion step counts, highlighting the potential for interactive applications and additional controls for multi-step planning. | [
"['Najwa Laabid' 'Severi Rissanen' 'Markus Heinonen' 'Arno Solin'\n 'Vikas Garg']"
] |
null | null | 2405.17663 | null | null | http://arxiv.org/pdf/2405.17663v1 | 2024-05-27T21:28:26Z | 2024-05-27T21:28:26Z | What's the Opposite of a Face? Finding Shared Decodable Concepts and
their Negations in the Brain | Prior work has offered evidence for functional localization in the brain; different anatomical regions preferentially activate for certain types of visual input. For example, the fusiform face area preferentially activates for visual stimuli that include a face. However, the spectrum of visual semantics is extensive, and only a few semantically-tuned patches of cortex have so far been identified in the human brain. Using a multimodal (natural language and image) neural network architecture (CLIP) we train a highly accurate contrastive model that maps brain responses during naturalistic image viewing to CLIP embeddings. We then use a novel adaptation of the DBSCAN clustering algorithm to cluster the parameters of these participant-specific contrastive models. This reveals what we call Shared Decodable Concepts (SDCs): clusters in CLIP space that are decodable from common sets of voxels across multiple participants. Examining the images most and least associated with each SDC cluster gives us additional insight into the semantic properties of each SDC. We note SDCs for previously reported visual features (e.g. orientation tuning in early visual cortex) as well as visual semantic concepts such as faces, places and bodies. In cases where our method finds multiple clusters for a visuo-semantic concept, the least associated images allow us to dissociate between confounding factors. For example, we discovered two clusters of food images, one driven by color, the other by shape. We also uncover previously unreported areas such as regions of extrastriate body area (EBA) tuned for legs/hands and sensitivity to numerosity in right intraparietal sulcus, and more. Thus, our contrastive-learning methodology better characterizes new and existing visuo-semantic representations in the brain by leveraging multimodal neural network representations and a novel adaptation of clustering algorithms. | [
"['Cory Efird' 'Alex Murphy' 'Joel Zylberberg' 'Alona Fyshe']"
] |
null | null | 2405.17666 | null | null | http://arxiv.org/pdf/2405.17666v2 | 2024-07-02T12:55:33Z | 2024-05-27T21:40:31Z | Structured Partial Stochasticity in Bayesian Neural Networks | Bayesian neural network posterior distributions have a great number of modes that correspond to the same network function. The abundance of such modes can make it difficult for approximate inference methods to do their job. Recent work has demonstrated the benefits of partial stochasticity for approximate inference in Bayesian neural networks; inference can be less costly and performance can sometimes be improved. I propose a structured way to select the deterministic subset of weights that removes neuron permutation symmetries, and therefore the corresponding redundant posterior modes. With a drastically simplified posterior distribution, the performance of existing approximate inference schemes is found to be greatly improved. | [
"['Tommy Rochussen']"
] |
null | null | 2405.17667 | null | null | http://arxiv.org/pdf/2405.17667v2 | 2024-06-24T19:11:57Z | 2024-05-27T21:44:14Z | Hunting for Polluted White Dwarfs and Other Treasures with Gaia XP
Spectra and Unsupervised Machine Learning | White dwarfs (WDs) polluted by exoplanetary material provide the unprecedented opportunity to directly observe the interiors of exoplanets. However, spectroscopic surveys are often limited by brightness constraints, and WDs tend to be very faint, making detections of large populations of polluted WDs difficult. In this paper, we aim to increase considerably the number of WDs with multiple metals in their atmospheres. Using 96,134 WDs with Gaia DR3 BP/RP (XP) spectra, we constructed a 2D map using an unsupervised machine learning technique called Uniform Manifold Approximation and Projection (UMAP) to organize the WDs into identifiable spectral regions. The polluted WDs are among the distinct spectral groups identified in our map. We have shown that this selection method could potentially increase the number of known WDs with 5 or more metal species in their atmospheres by an order of magnitude. Such systems are essential for characterizing exoplanet diversity and geology. | [
"['Malia L. Kao' 'Keith Hawkins' 'Laura K. Rogers' 'Amy Bonsor'\n 'Bart H. Dunlap' 'Jason L. Sanders' 'M. H. Montgomery' 'D. E. Winget']"
] |
null | null | 2405.17672 | null | null | http://arxiv.org/pdf/2405.17672v1 | 2024-05-27T21:49:57Z | 2024-05-27T21:49:57Z | Exploring Loss Design Techniques For Decision Tree Robustness To Label
Noise | In the real world, data is often noisy, affecting not only the quality of features but also the accuracy of labels. Current research on mitigating label errors stems primarily from advances in deep learning, and a gap exists in exploring interpretable models, particularly those rooted in decision trees. In this study, we investigate whether ideas from deep learning loss design can be applied to improve the robustness of decision trees. In particular, we show that loss correction and symmetric losses, both standard approaches, are not effective. We argue that other directions need to be explored to improve the robustness of decision trees to label noise. | [
"['Lukasz Sztukiewicz' 'Jack Henry Good' 'Artur Dubrawski']"
] |
null | null | 2405.17673 | null | null | http://arxiv.org/pdf/2405.17673v1 | 2024-05-27T21:50:16Z | 2024-05-27T21:50:16Z | Fast Samplers for Inverse Problems in Iterative Refinement Models | Constructing fast samplers for unconditional diffusion and flow-matching models has received much attention recently; however, existing methods for solving inverse problems, such as super-resolution, inpainting, or deblurring, still require hundreds to thousands of iterative steps to obtain high-quality results. We propose a plug-and-play framework for constructing efficient samplers for inverse problems, requiring only pre-trained diffusion or flow-matching models. We present Conditional Conjugate Integrators, which leverage the specific form of the inverse problem to project the respective conditional diffusion/flow dynamics into a more amenable space for sampling. Our method complements popular posterior approximation methods for solving inverse problems using diffusion/flow models. We evaluate the proposed method's performance on various linear image restoration tasks across multiple datasets, employing diffusion and flow-matching models. Notably, on challenging inverse problems like 4$\times$ super-resolution on the ImageNet dataset, our method can generate high-quality samples in as few as 5 conditional sampling steps and outperforms competing baselines requiring 20-1000 steps. Our code and models will be publicly available at https://github.com/mandt-lab/CI2RM. | [
"['Kushagra Pandey' 'Ruihan Yang' 'Stephan Mandt']"
] |
null | null | 2405.17691 | null | null | http://arxiv.org/pdf/2405.17691v1 | 2024-05-27T22:52:23Z | 2024-05-27T22:52:23Z | Ontology-Enhanced Decision-Making for Autonomous Agents in Dynamic and
Partially Observable Environments | Agents, whether software or hardware, perceive their environment through sensors and act using actuators, often operating in dynamic, partially observable settings. They face challenges like incomplete and noisy data, unforeseen situations, and the need to adapt goals in real-time. Traditional reasoning and ML methods, including Reinforcement Learning (RL), help but are limited by data needs, predefined goals, and extensive exploration periods. Ontologies offer a solution by integrating diverse information sources, enhancing decision-making in complex environments. This thesis introduces an ontology-enhanced decision-making model (OntoDeM) for autonomous agents. OntoDeM enriches agents' domain knowledge, allowing them to interpret unforeseen events, generate or adapt goals, and make better decisions. Key contributions include: 1. An ontology-based method to improve agents' real-time observations using prior knowledge. 2. The OntoDeM model for handling dynamic, unforeseen situations by evolving or generating new goals. 3. Implementation and evaluation in four real-world applications, demonstrating its effectiveness. Compared to traditional and advanced learning algorithms, OntoDeM shows superior performance in improving agents' observations and decision-making in dynamic, partially observable environments. | [
"['Saeedeh Ghanadbashi' 'Fatemeh Golpayegani']"
] |
null | null | 2405.17693 | null | null | http://arxiv.org/pdf/2405.17693v1 | 2024-05-27T23:00:40Z | 2024-05-27T23:00:40Z | Tamed Langevin sampling under weaker conditions | Motivated by applications to deep learning which often fail standard Lipschitz smoothness requirements, we examine the problem of sampling from distributions that are not log-concave and are only weakly dissipative, with log-gradients allowed to grow superlinearly at infinity. In terms of structure, we only assume that the target distribution satisfies either a log-Sobolev or a Poincaré inequality and a local Lipschitz smoothness assumption with modulus growing possibly polynomially at infinity. This set of assumptions greatly exceeds the operational limits of the "vanilla" unadjusted Langevin algorithm (ULA), making sampling from such distributions a highly involved affair. To account for this, we introduce a taming scheme which is tailored to the growth and decay properties of the target distribution, and we provide explicit non-asymptotic guarantees for the proposed sampler in terms of the Kullback-Leibler (KL) divergence, total variation, and Wasserstein distance to the target distribution. | [
"['Iosif Lytras' 'Panayotis Mertikopoulos']"
] |
null | null | 2405.17696 | null | null | http://arxiv.org/pdf/2405.17696v1 | 2024-05-27T23:03:21Z | 2024-05-27T23:03:21Z | Physics-guided Full Waveform Inversion using Encoder-Solver
Convolutional Neural Networks | Full Waveform Inversion (FWI) is an inverse problem for estimating the wave velocity distribution in a given domain, based on observed data on the boundaries. The inversion is computationally demanding because we are required to solve multiple forward problems, either in time or frequency domains, to simulate data that are then iteratively fitted to the observed data. We consider FWI in the frequency domain, where the Helmholtz equation is used as a forward model, and its repeated solution is the main computational bottleneck of the inversion process. To ease this cost, we integrate a learning process of an encoder-solver preconditioner that is based on convolutional neural networks (CNNs). The encoder-solver is trained to effectively precondition the discretized Helmholtz operator given velocity medium parameters. Then, by re-training the CNN between the iterations of the optimization process, the encoder-solver is adapted to the iteratively evolving velocity medium as part of the inversion. Without retraining, the performance of the solver deteriorates as the medium changes. Using our light retraining procedures, we obtain the forward simulations effectively throughout the process. We demonstrate our approach to solving FWI problems using 2D geophysical models with high-frequency data. | [
"['Matan Goren' 'Eran Treister']"
] |
null | null | 2405.17697 | null | null | http://arxiv.org/pdf/2405.17697v2 | 2024-05-31T17:47:52Z | 2024-05-27T23:04:37Z | P4: Towards private, personalized, and Peer-to-Peer learning | Personalized learning is a proposed approach to address the problem of data heterogeneity in collaborative machine learning. In a decentralized setting, the two main challenges of personalization are client clustering and data privacy. In this paper, we address these challenges by developing P4 (Personalized Private Peer-to-Peer), a method that ensures that each client receives a personalized model while maintaining a differential privacy guarantee for each client's local dataset during and after training. Our approach includes the design of a lightweight algorithm to identify similar clients and group them in a private, peer-to-peer (P2P) manner. Once grouped, we develop differentially-private knowledge distillation for clients to co-train with minimal impact on accuracy. We evaluate our proposed method on three benchmark datasets (FEMNIST or Federated EMNIST, CIFAR-10 and CIFAR-100) and two different neural network architectures (Linear and CNN-based networks) across a range of privacy parameters. The results demonstrate the potential of P4, as it outperforms the state-of-the-art differentially private P2P approach by up to 40 percent in terms of accuracy. We also show the practicality of P4 by implementing it on resource-constrained devices, and validating that it has minimal overhead, e.g., about 7 seconds to run collaborative training between two clients. | [
"['Mohammad Mahdi Maheri' 'Sandra Siby' 'Sina Abdollahi'\n 'Anastasia Borovykh' 'Hamed Haddadi']"
] |
null | null | 2405.17700 | null | null | http://arxiv.org/pdf/2405.17700v1 | 2024-05-27T23:16:52Z | 2024-05-27T23:16:52Z | Learning Social Welfare Functions | Is it possible to understand or imitate a policy maker's rationale by looking at past decisions they made? We formalize this question as the problem of learning social welfare functions belonging to the well-studied family of power mean functions. We focus on two learning tasks; in the first, the input is vectors of utilities of an action (decision or policy) for individuals in a group and their associated social welfare as judged by a policy maker, whereas in the second, the input is pairwise comparisons between the welfares associated with a given pair of utility vectors. We show that power mean functions are learnable with polynomial sample complexity in both cases, even if the comparisons or social welfare information are noisy. Finally, we design practical algorithms for these tasks and evaluate their performance. | [
"['Kanad Shrikar Pardeshi' 'Itai Shapira' 'Ariel D. Procaccia'\n 'Aarti Singh']"
] |
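For concreteness, the sketch below writes out the power-mean welfare family referenced in the abstract above and recovers the exponent of a hidden policy maker from noiseless pairwise comparisons by grid search; the data-generating process and the grid-search estimator are assumptions, not the paper's algorithms or sample-complexity analysis.

```python
# Illustrative sketch: the power-mean social welfare family W_p(u) = (mean(u_i^p))^(1/p),
# and recovering the exponent p from synthetic pairwise comparisons by grid search.
import numpy as np

rng = np.random.default_rng(3)

def power_mean(u, p):
    u = np.asarray(u, dtype=float)
    if abs(p) < 1e-9:                           # limit case p -> 0: geometric mean
        return float(np.exp(np.mean(np.log(u))))
    return float(np.mean(u ** p) ** (1.0 / p))

# Hidden preference: p* = -1 (harmonic mean, inequality-averse).
p_star = -1.0
pairs = [(rng.uniform(0.1, 1.0, 5), rng.uniform(0.1, 1.0, 5)) for _ in range(200)]
labels = [power_mean(u, p_star) > power_mean(v, p_star) for u, v in pairs]

grid = np.linspace(-3, 3, 61)
accuracy = [np.mean([(power_mean(u, p) > power_mean(v, p)) == y
                     for (u, v), y in zip(pairs, labels)]) for p in grid]
print("recovered p ~", grid[int(np.argmax(accuracy))])
```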
null | null | 2405.17703 | null | null | http://arxiv.org/pdf/2405.17703v1 | 2024-05-27T23:22:23Z | 2024-05-27T23:22:23Z | Mechanistic Interpretability of Binary and Ternary Transformers | Recent research (arXiv:2310.11453, arXiv:2402.17764) has proposed binary and ternary transformer networks as a way to significantly reduce memory and improve inference speed in Large Language Models (LLMs) while maintaining accuracy. In this work, we apply techniques from mechanistic interpretability to investigate whether such networks learn distinctly different or similar algorithms when compared to full-precision transformer networks. In particular, we reverse engineer the algorithms learned for the toy problem of modular addition where we find that binary and ternary networks learn similar algorithms as full precision networks. This provides evidence against the possibility of using binary and ternary networks as a more interpretable alternative in the LLM setting. | [
"['Jason Li']"
] |
null | null | 2405.17708 | null | null | http://arxiv.org/pdf/2405.17708v1 | 2024-05-27T23:51:20Z | 2024-05-27T23:51:20Z | OPERA: Automatic Offline Policy Evaluation with Re-weighted Aggregates
of Multiple Estimators | Offline policy evaluation (OPE) allows us to evaluate and estimate a new sequential decision-making policy's performance by leveraging historical interaction data collected from other policies. Evaluating a new policy online without a confident estimate of its performance can lead to costly, unsafe, or hazardous outcomes, especially in education and healthcare. Several OPE estimators have been proposed in the last decade, many of which have hyperparameters and require training. Unfortunately, choosing the best OPE algorithm for each task and domain is still unclear. In this paper, we propose a new algorithm that adaptively blends a set of OPE estimators given a dataset without relying on an explicit selection using a statistical procedure. We prove that our estimator is consistent and satisfies several desirable properties for policy evaluation. Additionally, we demonstrate that when compared to alternative approaches, our estimator can be used to select higher-performing policies in healthcare and robotics. Our work contributes to improving ease of use for a general-purpose, estimator-agnostic, off-policy evaluation framework for offline RL. | [
"['Allen Nie' 'Yash Chandak' 'Christina J. Yuan' 'Anirudhan Badrinath'\n 'Yannis Flet-Berliac' 'Emma Brunskil']"
] |
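A rough illustration of the idea of blending several off-policy value estimates into one re-weighted aggregate, as in the abstract above; the three synthetic estimators, their bootstrap draws, and the inverse-MSE weighting rule are all assumptions standing in for the paper's statistical procedure.

```python
# Hedged sketch of aggregating multiple off-policy evaluation (OPE) estimates with
# data-driven weights; the estimators and the crude MSE proxy below are made up.
import numpy as np

rng = np.random.default_rng(6)
true_value = 1.0

# Three hypothetical OPE estimators with different bias/variance profiles,
# each represented by a set of bootstrap estimates of the policy's value.
boots = {
    "IS": true_value + 0.0 + 1.00 * rng.standard_normal(200),   # unbiased, high variance
    "DM": true_value + 0.3 + 0.05 * rng.standard_normal(200),   # biased, low variance
    "DR": true_value + 0.1 + 0.30 * rng.standard_normal(200),   # in between
}

means = np.array([b.mean() for b in boots.values()])
mse = np.array([np.mean((b - b.mean()) ** 2) + (b.mean() - means.mean()) ** 2
                for b in boots.values()])                        # crude variance + bias proxy
weights = (1.0 / mse) / np.sum(1.0 / mse)                        # inverse-MSE weights

print("per-estimator means:", np.round(means, 3))
print("weights:", np.round(weights, 3), " blended estimate:", round(float(weights @ means), 3))
```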
null | null | 2405.17712 | null | null | http://arxiv.org/pdf/2405.17712v1 | 2024-05-28T00:08:29Z | 2024-05-28T00:08:29Z | CLAIM Your Data: Enhancing Imputation Accuracy with Contextual Large
Language Models | This paper introduces the Contextual Language model for Accurate Imputation Method (CLAIM), a novel strategy that capitalizes on the expansive knowledge and reasoning capabilities of pre-trained large language models (LLMs) to address missing data challenges in tabular datasets. Unlike traditional imputation methods, which predominantly rely on numerical estimations, CLAIM utilizes contextually relevant natural language descriptors to fill missing values. This approach transforms datasets into natural language contextualized formats that are inherently more aligned with LLMs' capabilities, thereby facilitating the dual use of LLMs: first, to generate missing value descriptors, and then, to fine-tune the LLM on the enriched dataset for improved performance in downstream tasks. Our evaluations across diverse datasets and missingness patterns reveal CLAIM's superior performance over existing imputation techniques. Furthermore, our investigation into the effectiveness of context-specific versus generic descriptors for missing data highlights the importance of contextual accuracy in enhancing LLM performance for data imputation. The results underscore CLAIM's potential to markedly improve the reliability and quality of data analysis and machine learning models, offering a more nuanced and effective solution for handling missing data. | [
"['Ahatsham Hayat' 'Mohammad Rashedul Hasan']"
] |
null | null | 2405.17713 | null | null | http://arxiv.org/pdf/2405.17713v1 | 2024-05-28T00:08:46Z | 2024-05-28T00:08:46Z | AI Alignment with Changing and Influenceable Reward Functions | Existing AI alignment approaches assume that preferences are static, which is unrealistic: our preferences change, and may even be influenced by our interactions with AI systems themselves. To clarify the consequences of incorrectly assuming static preferences, we introduce Dynamic Reward Markov Decision Processes (DR-MDPs), which explicitly model preference changes and the AI's influence on them. We show that despite its convenience, the static-preference assumption may undermine the soundness of existing alignment techniques, leading them to implicitly reward AI systems for influencing user preferences in ways users may not truly want. We then explore potential solutions. First, we offer a unifying perspective on how an agent's optimization horizon may partially help reduce undesirable AI influence. Then, we formalize different notions of AI alignment that account for preference change from the outset. Comparing the strengths and limitations of 8 such notions of alignment, we find that they all either err towards causing undesirable AI influence, or are overly risk-averse, suggesting that a straightforward solution to the problems of changing preferences may not exist. As there is no avoiding grappling with changing preferences in real-world settings, this makes it all the more important to handle these issues with care, balancing risks and capabilities. We hope our work can provide conceptual clarity and constitute a first step towards AI alignment practices which explicitly account for (and contend with) the changing and influenceable nature of human preferences. | [
"['Micah Carroll' 'Davis Foote' 'Anand Siththaranjan' 'Stuart Russell'\n 'Anca Dragan']"
] |
null | null | 2405.17718 | null | null | http://arxiv.org/pdf/2405.17718v1 | 2024-05-28T00:25:41Z | 2024-05-28T00:25:41Z | AdapNet: Adaptive Noise-Based Network for Low-Quality Image Retrieval | Image retrieval aims to identify visually similar images within a database using a given query image. Traditional methods typically employ both global and local features extracted from images for matching, and may also apply re-ranking techniques to enhance accuracy. However, these methods often fail to account for the noise present in query images, which can stem from natural or human-induced factors, thereby negatively impacting retrieval performance. To mitigate this issue, we introduce a novel setting for low-quality image retrieval, and propose an Adaptive Noise-Based Network (AdapNet) to learn robust abstract representations. Specifically, we devise a quality compensation block trained to compensate for various low-quality factors in input images. Besides, we introduce an innovative adaptive noise-based loss function, which dynamically adjusts its focus on the gradient in accordance with image quality, thereby augmenting the learning of unknown noisy samples during training and enhancing intra-class compactness. To assess the performance, we construct two datasets with low-quality queries, which is built by applying various types of noise on clean query images on the standard Revisited Oxford and Revisited Paris datasets. Comprehensive experimental results illustrate that AdapNet surpasses state-of-the-art methods on the Noise Revisited Oxford and Noise Revisited Paris benchmarks, while maintaining competitive performance on high-quality datasets. The code and constructed datasets will be made available. | [
"['Sihe Zhang' 'Qingdong He' 'Jinlong Peng' 'Yuxi Li' 'Zhengkai Jiang'\n 'Jiafu Wu' 'Mingmin Chi' 'Yabiao Wang' 'Chengjie Wang']"
] |
null | null | 2405.17720 | null | null | http://arxiv.org/pdf/2405.17720v1 | 2024-05-28T00:36:25Z | 2024-05-28T00:36:25Z | MindFormer: A Transformer Architecture for Multi-Subject Brain Decoding
via fMRI | Research efforts to understand neural signals have been ongoing for many years, with visual decoding from fMRI signals attracting considerable attention. Particularly, the advent of image diffusion models has advanced the reconstruction of images from fMRI data significantly. However, existing approaches often introduce inter- and intra-subject variations in the reconstructed images, which can compromise accuracy. To address current limitations in multi-subject brain decoding, we introduce a new Transformer architecture called MindFormer. This model is specifically designed to generate fMRI-conditioned feature vectors that can be used for conditioning a Stable Diffusion model. More specifically, MindFormer incorporates two key innovations: 1) a novel training strategy based on the IP-Adapter to extract semantically meaningful features from fMRI signals, and 2) a subject-specific token and linear layer that effectively capture individual differences in fMRI signals while synergistically combining multi-subject fMRI data for training. Our experimental results demonstrate that Stable Diffusion, when integrated with MindFormer, produces semantically consistent images across different subjects. This capability significantly surpasses existing models in multi-subject brain decoding. Such advancements not only improve the accuracy of our reconstructions but also deepen our understanding of neural processing variations among individuals. | [
"['Inhwa Han' 'Jaayeon Lee' 'Jong Chul Ye']"
] |
null | null | 2405.17730 | null | null | http://arxiv.org/pdf/2405.17730v1 | 2024-05-28T01:19:13Z | 2024-05-28T01:19:13Z | MMPareto: Boosting Multimodal Learning with Innocent Unimodal Assistance | Multimodal learning methods with targeted unimodal learning objectives have exhibited their superior efficacy in alleviating the imbalanced multimodal learning problem. However, in this paper, we identify the previously ignored gradient conflict between multimodal and unimodal learning objectives, potentially misleading the unimodal encoder optimization. To well diminish these conflicts, we observe the discrepancy between multimodal loss and unimodal loss, where both gradient magnitude and covariance of the easier-to-learn multimodal loss are smaller than the unimodal one. With this property, we analyze Pareto integration under our multimodal scenario and propose MMPareto algorithm, which could ensure a final gradient with direction that is common to all learning objectives and enhanced magnitude to improve generalization, providing innocent unimodal assistance. Finally, experiments across multiple types of modalities and frameworks with dense cross-modal interaction indicate our superior and extendable method performance. Our method is also expected to facilitate multi-task cases with a clear discrepancy in task difficulty, demonstrating its ideal scalability. The source code and dataset are available at https://github.com/GeWu-Lab/MMPareto_ICML2024. | [
"['Yake Wei' 'Di Hu']"
] |
null | null | 2405.17734 | null | null | http://arxiv.org/pdf/2405.17734v1 | 2024-05-28T01:34:35Z | 2024-05-28T01:34:35Z | Towards Efficient Disaster Response via Cost-effective Unbiased Class
Rate Estimation through Neyman Allocation Stratified Sampling Active Learning | With the rapid development of earth observation technology, we have entered an era of massively available satellite remote-sensing data. However, a large amount of satellite remote sensing data lacks labels, or the labeling cost is too high, which hinders the potential of AI technology for mining satellite data. This is especially true in emergency response scenarios that use satellite data to evaluate the degree of disaster damage. Disaster damage assessment has encountered bottlenecks due to an excessive focus on the damage to a certain building in a specific geographical space or to a certain area at a larger scale. In fact, in the early days of disaster emergency response, government departments were more concerned about the overall damage rate of the disaster area instead of single-building damage, because this helps the government decide the level of emergency response. We present an innovative algorithm that constructs Neyman stratified random sampling trees for binary classification and extends this approach to multiclass problems. Through extensive experimentation on various datasets and model structures, our findings demonstrate that our method surpasses both passive and conventional active learning techniques in terms of class rate estimation and model enhancement with only 30%-60% of the annotation cost of simple sampling. It effectively addresses the 'sampling bias' challenge in traditional active learning strategies and mitigates the 'cold start' dilemma. The efficacy of our approach is further substantiated through application to disaster evaluation tasks using Xview2 Satellite imagery, showcasing its practical utility in real-world contexts. | [
"['Yanbing Bai' 'Xinyi Wu' 'Lai Xu' 'Jihan Pei' 'Erick Mas'\n 'Shunichi Koshimura']"
] |
null | null | 2405.17743 | null | null | http://arxiv.org/pdf/2405.17743v2 | 2024-05-30T02:12:05Z | 2024-05-28T01:55:35Z | ORLM: Training Large Language Models for Optimization Modeling | Large Language Models (LLMs) have emerged as powerful tools for tackling complex Operations Research (OR) problems by providing the capacity to automate optimization modeling. However, current methodologies heavily rely on prompt engineering (e.g., multi-agent cooperation) with proprietary LLMs, raising data privacy concerns that could be prohibitive in industry applications. To tackle this issue, we propose training open-source LLMs for optimization modeling. We identify four critical requirements for the training dataset of OR LLMs, and design and implement OR-Instruct, a semi-automated process for creating synthetic data tailored to specific requirements. We also introduce the IndustryOR benchmark, the first industrial benchmark for testing LLMs on solving real-world OR problems. We apply the data from OR-Instruct to various open-source LLMs of 7b size (termed as ORLMs), resulting in a significantly improved capability for optimization modeling. Our best-performing ORLM achieves state-of-the-art performance on the NL4OPT, MAMO, and IndustryOR benchmarks. Our code and data are available at \url{https://github.com/Cardinal-Operations/ORLM}. | [
"['Zhengyang Tang' 'Chenyu Huang' 'Xin Zheng' 'Shixi Hu' 'Zizhuo Wang'\n 'Dongdong Ge' 'Benyou Wang']"
] |
null | null | 2405.17746 | null | null | http://arxiv.org/pdf/2405.17746v1 | 2024-05-28T01:59:06Z | 2024-05-28T01:59:06Z | Rethinking Pruning for Backdoor Mitigation: An Optimization Perspective | Deep Neural Networks (DNNs) are known to be vulnerable to backdoor attacks, posing concerning threats to their reliable deployment. Recent research reveals that backdoors can be erased from infected DNNs by pruning a specific group of neurons, while how to effectively identify and remove these backdoor-associated neurons remains an open challenge. Most of the existing defense methods rely on defined rules and focus on neuron's local properties, ignoring the exploration and optimization of pruning policies. To address this gap, we propose an Optimized Neuron Pruning (ONP) method combined with Graph Neural Network (GNN) and Reinforcement Learning (RL) to repair backdoor models. Specifically, ONP first models the target DNN as graphs based on neuron connectivity, and then uses GNN-based RL agents to learn graph embeddings and find a suitable pruning policy. To the best of our knowledge, this is the first attempt to employ GNN and RL for optimizing pruning policies in the field of backdoor defense. Experiments show, with a small amount of clean data, ONP can effectively prune the backdoor neurons implanted by a set of backdoor attacks at the cost of negligible performance degradation, achieving a new state-of-the-art performance for backdoor mitigation. | [
"['Nan Li' 'Haiyang Yu' 'Ping Yi']"
] |
null | null | 2405.17750 | null | null | http://arxiv.org/pdf/2405.17750v1 | 2024-05-28T02:05:39Z | 2024-05-28T02:05:39Z | Magnitude-based Neuron Pruning for Backdoor Defens | Deep Neural Networks (DNNs) are known to be vulnerable to backdoor attacks, posing concerning threats to their reliable deployment. Recent research reveals that backdoors can be erased from infected DNNs by pruning a specific group of neurons, while how to effectively identify and remove these backdoor-associated neurons remains an open challenge. In this paper, we investigate the correlation between backdoor behavior and neuron magnitude, and find that backdoor neurons deviate from the magnitude-saliency correlation of the model. The deviation inspires us to propose a Magnitude-based Neuron Pruning (MNP) method to detect and prune backdoor neurons. Specifically, MNP uses three magnitude-guided objective functions to manipulate the magnitude-saliency correlation of backdoor neurons, thus achieving the purpose of exposing backdoor behavior, eliminating backdoor neurons and preserving clean neurons, respectively. Experiments show our pruning strategy achieves state-of-the-art backdoor defense performance against a variety of backdoor attacks with a limited amount of clean data, demonstrating the crucial role of magnitude for guiding backdoor defenses. | [
"['Nan Li' 'Haoyu Jiang' 'Ping Yi']"
] |
null | null | 2405.17756 | null | null | http://arxiv.org/pdf/2405.17756v1 | 2024-05-28T02:16:35Z | 2024-05-28T02:16:35Z | Motion-Informed Deep Learning for Brain MR Image Reconstruction
Framework | Motion artifacts in Magnetic Resonance Imaging (MRI) are one of the frequently occurring artifacts due to patient movements during scanning. Motion is estimated to be present in approximately 30% of clinical MRI scans; however, motion has not been explicitly modeled within deep learning image reconstruction models. Deep learning (DL) algorithms have been demonstrated to be effective for both the image reconstruction task and the motion correction task, but the two tasks are considered separately. The image reconstruction task involves removing undersampling artifacts such as noise and aliasing artifacts, whereas motion correction involves removing artifacts including blurring, ghosting, and ringing. In this work, we propose a novel method to simultaneously accelerate imaging and correct motion. This is achieved by integrating a motion module into the deep learning-based MRI reconstruction process, enabling real-time detection and correction of motion. We model motion as a tightly integrated auxiliary layer in the deep learning model during training, making the deep learning model 'motion-informed'. During inference, image reconstruction is performed from undersampled raw k-space data using a trained motion-informed DL model. Experimental results demonstrate that the proposed motion-informed deep learning image reconstruction network outperformed the conventional image reconstruction network for motion-degraded MRI datasets. | [
"['Zhifeng Chen' 'Kamlesh Pawar' 'Kh Tohidul Islam' 'Himashi Peiris'\n 'Gary Egan' 'Zhaolin Chen']"
] |
null | null | 2405.17761 | null | null | http://arxiv.org/pdf/2405.17761v1 | 2024-05-28T02:27:53Z | 2024-05-28T02:27:53Z | Double Variance Reduction: A Smoothing Trick for Composite Optimization
Problems without First-Order Gradient | Variance reduction techniques are designed to decrease the sampling variance, thereby accelerating convergence rates of first-order (FO) and zeroth-order (ZO) optimization methods. However, in composite optimization problems, ZO methods encounter an additional variance called the coordinate-wise variance, which stems from the random gradient estimation. To reduce this variance, prior works require estimating all partial derivatives, essentially approximating FO information. This approach demands $\mathcal{O}(d)$ function evaluations (d is the dimension size), which incurs substantial computational costs and is prohibitive in high-dimensional scenarios. This paper proposes the Zeroth-order Proximal Double Variance Reduction (ZPDVR) method, which utilizes the averaging trick to reduce both sampling and coordinate-wise variances. Compared to prior methods, ZPDVR relies solely on random gradient estimates, calls the stochastic zeroth-order oracle (SZO) in expectation $\mathcal{O}(1)$ times per iteration, and achieves the optimal $\mathcal{O}(d(n + \kappa)\log(\frac{1}{\epsilon}))$ SZO query complexity in the strongly convex and smooth setting, where $\kappa$ represents the condition number and $\epsilon$ is the desired accuracy. Empirical results validate ZPDVR's linear convergence and demonstrate its superior performance over other related methods. | [
"['Hao Di' 'Haishan Ye' 'Yueling Zhang' 'Xiangyu Chang' 'Guang Dai'\n 'Ivor W. Tsang']"
] |
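To ground the terms used in the abstract above, the sketch below combines a two-point random zeroth-order gradient estimate with a proximal step for an l1 composite term; the least-squares problem, smoothing radius, and step size are illustrative assumptions, and no variance reduction is implemented.

```python
# Minimal sketch of two ingredients named above: a random (two-point) zeroth-order
# gradient estimate and a proximal step for an l1 composite term.
import numpy as np

rng = np.random.default_rng(4)
d = 10
A = rng.standard_normal((30, d)); y = rng.standard_normal(30)
f = lambda x: 0.5 * np.sum((A @ x - y) ** 2) / len(y)   # smooth part, queried as a black box

def zo_gradient(x, mu=1e-4):
    """Two-point zeroth-order estimate: d * (f(x+mu*u) - f(x-mu*u)) / (2*mu) * u."""
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)
    return d * (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u

def prox_l1(x, t):
    """Soft-thresholding: proximal operator of t * ||x||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

x, lr, lam = np.zeros(d), 0.05, 0.01
for _ in range(2000):
    x = prox_l1(x - lr * zo_gradient(x), lr * lam)
print("composite objective:", round(f(x) + lam * np.abs(x).sum(), 4))
```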
null | null | 2405.17766 | null | null | http://arxiv.org/pdf/2405.17766v1 | 2024-05-28T02:43:53Z | 2024-05-28T02:43:53Z | SleepFM: Multi-modal Representation Learning for Sleep Across Brain
Activity, ECG and Respiratory Signals | Sleep is a complex physiological process evaluated through various modalities recording electrical brain, cardiac, and respiratory activities. We curate a large polysomnography dataset from over 14,000 participants comprising over 100,000 hours of multi-modal sleep recordings. Leveraging this extensive dataset, we developed SleepFM, the first multi-modal foundation model for sleep analysis. We show that a novel leave-one-out approach for contrastive learning significantly improves downstream task performance compared to representations from standard pairwise contrastive learning. A logistic regression model trained on SleepFM's learned embeddings outperforms an end-to-end trained convolutional neural network (CNN) on sleep stage classification (macro AUROC 0.88 vs 0.72 and macro AUPRC 0.72 vs 0.48) and sleep disordered breathing detection (AUROC 0.85 vs 0.69 and AUPRC 0.77 vs 0.61). Notably, the learned embeddings achieve 48% top-1 average accuracy in retrieving the corresponding recording clips of other modalities from 90,000 candidates. This work demonstrates the value of holistic multi-modal sleep modeling to fully capture the richness of sleep recordings. SleepFM is open source and available at https://github.com/rthapa84/sleepfm-codebase. | [
"['Rahul Thapa' 'Bryan He' 'Magnus Ruud Kjaer' 'Hyatt Moore' 'Gauri Ganjoo'\n 'Emmanuel Mignot' 'James Zou']"
] |
null | null | 2405.17767 | null | null | http://arxiv.org/pdf/2405.17767v1 | 2024-05-28T02:46:11Z | 2024-05-28T02:46:11Z | Linguistic Collapse: Neural Collapse in (Large) Language Models | Neural collapse ($\mathcal{NC}$) is a phenomenon observed in classification tasks where top-layer representations collapse into their class means, which become equinorm, equiangular and aligned with the classifiers. These behaviors -- associated with generalization and robustness -- would manifest under specific conditions: models are trained towards zero loss, with noise-free labels belonging to balanced classes, which do not outnumber the model's hidden dimension. Recent studies have explored $\mathcal{NC}$ in the absence of one or more of these conditions to extend and capitalize on the associated benefits of ideal geometries. Language modeling presents a curious frontier, as \textit{training by token prediction} constitutes a classification task where none of the conditions exist: the vocabulary is imbalanced and exceeds the embedding dimension; different tokens might correspond to similar contextual embeddings; and large language models (LLMs) in particular are typically only trained for a few epochs. This paper empirically investigates the impact of scaling the architectures and training of causal language models (CLMs) on their progression towards $\mathcal{NC}$. We find that $\mathcal{NC}$ properties that develop with scaling are linked to generalization. Moreover, there is evidence of some relationship between $\mathcal{NC}$ and generalization independent of scale. Our work therefore underscores the generality of $\mathcal{NC}$ as it extends to the novel and more challenging setting of language modeling. Downstream, we seek to inspire further research on the phenomenon to deepen our understanding of LLMs -- and neural networks at large -- and improve existing architectures based on $\mathcal{NC}$-related properties. | [
"['Robert Wu' 'Vardan Papyan']"
] |
null | null | 2405.17768 | null | null | http://arxiv.org/pdf/2405.17768v1 | 2024-05-28T02:47:53Z | 2024-05-28T02:47:53Z | Revisiting the Message Passing in Heterophilous Graph Neural Networks | Graph Neural Networks (GNNs) have demonstrated strong performance in graph mining tasks due to their message-passing mechanism, which is aligned with the homophily assumption that adjacent nodes exhibit similar behaviors. However, in many real-world graphs, connected nodes may display contrasting behaviors, termed as heterophilous patterns, which has attracted increased interest in heterophilous GNNs (HTGNNs). Although the message-passing mechanism seems unsuitable for heterophilous graphs due to the propagation of class-irrelevant information, it is still widely used in many existing HTGNNs and consistently achieves notable success. This raises the question: why does message passing remain effective on heterophilous graphs? To answer this question, in this paper, we revisit the message-passing mechanisms in heterophilous graph neural networks and reformulate them into a unified heterophilious message-passing (HTMP) mechanism. Based on HTMP and empirical analysis, we reveal that the success of message passing in existing HTGNNs is attributed to implicitly enhancing the compatibility matrix among classes. Moreover, we argue that the full potential of the compatibility matrix is not completely achieved due to the existence of incomplete and noisy semantic neighborhoods in real-world heterophilous graphs. To bridge this gap, we introduce a new approach named CMGNN, which operates within the HTMP mechanism to explicitly leverage and improve the compatibility matrix. A thorough evaluation involving 10 benchmark datasets and comparative analysis against 13 well-established baselines highlights the superior performance of the HTMP mechanism and CMGNN method. | [
"['Zhuonan Zheng' 'Yuanchen Bei' 'Sheng Zhou' 'Yao Ma' 'Ming Gu'\n 'HongJia XU' 'Chengyu Lai' 'Jiawei Chen' 'Jiajun Bu']"
] |
null | null | 2405.17776 | null | null | http://arxiv.org/pdf/2405.17776v1 | 2024-05-28T03:12:33Z | 2024-05-28T03:12:33Z | The Binary Quantized Neural Network for Dense Prediction via Specially
Designed Upsampling and Attention | Deep learning-based information processing consumes a long time and requires huge computing resources, especially for dense prediction tasks which require an output for each pixel, like semantic segmentation and salient object detection. There are mainly two challenges for quantization of dense prediction tasks. Firstly, directly applying the upsampling operation that dense prediction tasks require is extremely crude and causes unacceptable accuracy reduction. Secondly, the complex structure of dense prediction networks means it is difficult to maintain a fast speed as well as a high accuracy when performing quantization. In this paper, we propose an effective upsampling method and an efficient attention computation strategy to transfer the success of the binary neural networks (BNN) from single prediction tasks to dense prediction tasks. Firstly, we design a simple and robust multi-branch parallel upsampling structure to achieve high accuracy. Then we further optimize the attention method which plays an important role in segmentation but has huge computation complexity. Our attention method can reduce the computational complexity by a factor of one hundred while retaining the original effect. Experiments on Cityscapes, KITTI road, and ECSSD fully show the effectiveness of our work. | [
"['Xingyu Ding' 'Lianlei Shan' 'Guiqin Zhao' 'Meiqi Wu' 'Wenzhang Zhou'\n 'Wei Li']"
] |
null | null | 2405.17779 | null | null | http://arxiv.org/pdf/2405.17779v1 | 2024-05-28T03:19:15Z | 2024-05-28T03:19:15Z | Online Analytic Exemplar-Free Continual Learning with Large Models for
Imbalanced Autonomous Driving Task | In the field of autonomous driving, even a meticulously trained model can encounter failures when faced with unfamiliar scenarios. One of these scenarios can be formulated as an online continual learning (OCL) problem. That is, data come in an online fashion, and models are updated according to these streaming data. Two major OCL challenges are catastrophic forgetting and data imbalance. To address these challenges, in this paper, we propose an Analytic Exemplar-Free Online Continual Learning (AEF-OCL). The AEF-OCL leverages analytic continual learning principles and employs ridge regression as a classifier for features extracted by a large backbone network. It solves the OCL problem by recursively calculating the analytical solution, ensuring an equalization between the continual learning and its joint-learning counterpart, and works without the need to save any used samples (i.e., exemplar-free). Additionally, we introduce a Pseudo-Features Generator (PFG) module that recursively estimates the deviation of real features. The PFG generates offset pseudo-features following a normal distribution, thereby addressing the data imbalance issue. Experimental results demonstrate that despite being an exemplar-free strategy, our method outperforms various methods on the autonomous driving SODA10M dataset. Source code is available at https://github.com/ZHUANGHP/Analytic-continual-learning. | [
"['Huiping Zhuang' 'Di Fang' 'Kai Tong' 'Yuchen Liu' 'Ziqian Zeng'\n 'Xu Zhou' 'Cen Chen']"
] |
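The AEF-OCL record above describes a ridge-regression classifier on frozen backbone features whose analytical solution is updated recursively as data stream in. The sketch below shows only that generic recursive-ridge mechanism (accumulating sufficient statistics so the closed-form solution matches joint training); it omits the paper's pseudo-feature generator, and all names are illustrative assumptions rather than the released implementation.

```python
import numpy as np

class RecursiveRidgeClassifier:
    """Exemplar-free ridge classifier updated in closed form per batch.

    Maintains G = X^T X + gamma * I and C = X^T Y, so the ridge solution
    W = G^{-1} C can be recomputed after every streaming batch without
    storing any past samples.
    """
    def __init__(self, feat_dim, n_classes, gamma=1e-3):
        self.G = gamma * np.eye(feat_dim)
        self.C = np.zeros((feat_dim, n_classes))

    def update(self, X, y):
        # X: (B, feat_dim) backbone features; y: (B,) integer labels.
        Y = np.eye(self.C.shape[1])[y]        # one-hot targets
        self.G += X.T @ X
        self.C += X.T @ Y

    def predict(self, X):
        W = np.linalg.solve(self.G, self.C)   # analytic ridge solution
        return (X @ W).argmax(axis=1)

# Toy streaming usage (real use: features from a frozen large backbone).
rng = np.random.default_rng(0)
clf = RecursiveRidgeClassifier(feat_dim=32, n_classes=5)
for _ in range(10):                            # ten online batches
    X = rng.normal(size=(64, 32))
    y = rng.integers(0, 5, size=64)
    clf.update(X, y)
print(clf.predict(rng.normal(size=(4, 32))))
```

Because the sufficient statistics are simply summed over batches, the resulting classifier is identical to the one obtained by solving ridge regression on all data jointly, which is the sense in which such analytic updates equalize continual and joint learning.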
null | null | 2405.17782 | null | null | http://arxiv.org/pdf/2405.17782v1 | 2024-05-28T03:26:00Z | 2024-05-28T03:26:00Z | Post-Fair Federated Learning: Achieving Group and Community Fairness in
Federated Learning via Post-processing | Federated Learning (FL) is a distributed machine learning framework in which a set of local communities collaboratively learn a shared global model while retaining all training data locally within each community. Two notions of fairness have recently emerged as important issues for federated learning: group fairness and community fairness. Group fairness requires that a model's decisions do not favor any particular group based on a set of legally protected attributes such as race or gender. Community fairness requires that global models exhibit similar levels of performance (accuracy) across all collaborating communities. Both fairness concepts can coexist within an FL framework, but the existing literature has focused on either one concept or the other. This paper proposes and analyzes a post-processing fair federated learning (FFL) framework called post-FFL. Post-FFL uses a linear program to simultaneously enforce group and community fairness while maximizing the utility of the global model. Because Post-FFL is a post-processing approach, it can be used with existing FL training pipelines whose convergence properties are well understood. This paper uses post-FFL on real-world datasets to mimic how hospital networks, for example, use federated learning to deliver community health care. Theoretical results bound the accuracy lost when post-FFL enforces both notions of fairness. Experimental results illustrate that post-FFL simultaneously improves both group and community fairness in FL. Moreover, post-FFL outperforms the existing in-processing fair federated learning in terms of improving both notions of fairness, communication efficiency and computation cost. | [
"['Yuying Duan' 'Yijun Tian' 'Nitesh Chawla' 'Michael Lemmon']"
] |
null | null | 2405.17784 | null | null | http://arxiv.org/pdf/2405.17784v2 | 2024-06-03T20:23:49Z | 2024-05-28T03:28:00Z | Adaptive Horizon Actor-Critic for Policy Learning in Contact-Rich
Differentiable Simulation | Model-Free Reinforcement Learning (MFRL), leveraging the policy gradient theorem, has demonstrated considerable success in continuous control tasks. However, these approaches are plagued by high gradient variance due to zeroth-order gradient estimation, resulting in suboptimal policies. Conversely, First-Order Model-Based Reinforcement Learning (FO-MBRL) methods employing differentiable simulation provide gradients with reduced variance but are susceptible to sampling error in scenarios involving stiff dynamics, such as physical contact. This paper investigates the source of this error and introduces Adaptive Horizon Actor-Critic (AHAC), an FO-MBRL algorithm that reduces gradient error by adapting the model-based horizon to avoid stiff dynamics. Empirical findings reveal that AHAC outperforms MFRL baselines, attaining 40% more reward across a set of locomotion tasks and efficiently scaling to high-dimensional control environments with improved wall-clock-time efficiency. | [
"['Ignat Georgiev' 'Krishnan Srinivasan' 'Jie Xu' 'Eric Heiden'\n 'Animesh Garg']"
] |
null | null | 2405.17796 | null | null | http://arxiv.org/pdf/2405.17796v1 | 2024-05-28T03:47:41Z | 2024-05-28T03:47:41Z | Offline Oracle-Efficient Learning for Contextual MDPs via Layerwise
Exploration-Exploitation Tradeoff | Motivated by the recent discovery of a statistical and computational reduction from contextual bandits to offline regression (Simchi-Levi and Xu, 2021), we address the general (stochastic) Contextual Markov Decision Process (CMDP) problem with horizon $H$ (also known as CMDP with $H$ layers). In this paper, we introduce a reduction from CMDPs to offline density estimation under the realizability assumption, i.e., a model class M containing the true underlying CMDP is provided in advance. We develop an efficient, statistically near-optimal algorithm requiring only $O(H \log T)$ calls to an offline density estimation algorithm (or oracle) across all $T$ rounds of interaction. This number can be further reduced to $O(H \log\log T)$ if $T$ is known in advance. Our results mark the first efficient and near-optimal reduction from CMDPs to offline density estimation without imposing any structural assumptions on the model class. A notable feature of our algorithm is the design of a layerwise exploration-exploitation tradeoff tailored to address the layerwise structure of CMDPs. Additionally, our algorithm is versatile and applicable to pure exploration tasks in reward-free reinforcement learning. | [
"['Jian Qian' 'Haichen Hu' 'David Simchi-Levi']"
] |
null | null | 2405.17799 | null | null | http://arxiv.org/pdf/2405.17799v1 | 2024-05-28T03:49:54Z | 2024-05-28T03:49:54Z | Exploring Activation Patterns of Parameters in Language Models | Most work treats large language models as black boxes without in-depth understanding of their internal working mechanism. In order to explain the internal representations of LLMs, we propose a gradient-based metric to assess the activation level of model parameters. Based on this metric, we obtain three preliminary findings. (1) When the inputs are in the same domain, parameters in the shallow layers will be activated densely, which means a larger portion of parameters will have great impacts on the outputs. In contrast, parameters in the deep layers are activated sparsely. (2) When the inputs are across different domains, parameters in shallow layers exhibit higher similarity in the activation behavior than deep layers. (3) In deep layers, the similarity of the distributions of activated parameters is positively correlated to the empirical data relevance. Further, we develop three validation experiments to solidify these findings. (1) Firstly, starting from the first finding, we attempt to configure different prune ratios for different layers, and find this method can benefit model pruning. (2) Secondly, we find that a pruned model based on one calibration set can better handle tasks related to the calibration task than those not related, which validates the second finding. (3) Thirdly, based on the STS-B and SICK benchmarks, we find that two sentences with consistent semantics tend to share similar parameter activation patterns in deep layers, which aligns with our third finding. Our work sheds light on the behavior of parameter activation in LLMs, and we hope these findings will have the potential to inspire more practical applications. | [
"['Yudong Wang' 'Damai Dai' 'Zhifang Sui']"
] |
null | null | 2405.17802 | null | null | http://arxiv.org/pdf/2405.17802v1 | 2024-05-28T03:53:26Z | 2024-05-28T03:53:26Z | Multi-level Interaction Modeling for Protein Mutational Effect
Prediction | Protein-protein interactions are central mediators in many biological processes. Accurately predicting the effects of mutations on interactions is crucial for guiding the modulation of these interactions, thereby playing a significant role in therapeutic development and drug discovery. Mutations generally affect interactions hierarchically across three levels: mutated residues exhibit different sidechain conformations, which lead to changes in the backbone conformation, eventually affecting the binding affinity between proteins. However, existing methods typically focus only on sidechain-level interaction modeling, resulting in suboptimal predictions. In this work, we propose a self-supervised multi-level pre-training framework, ProMIM, to fully capture all three levels of interactions with well-designed pretraining objectives. Experiments show ProMIM outperforms all the baselines on the standard benchmark, especially on mutations where significant changes in backbone conformations may occur. In addition, leading results from zero-shot evaluations for SARS-CoV-2 mutational effect prediction and antibody optimization underscore the potential of ProMIM as a powerful next-generation tool for developing novel therapeutic approaches and new drugs. | [
"['Yuanle Mo' 'Xin Hong' 'Bowen Gao' 'Yinjun Jia' 'Yanyan Lan']"
] |
null | null | 2405.17816 | null | null | http://arxiv.org/pdf/2405.17816v1 | 2024-05-28T04:24:38Z | 2024-05-28T04:24:38Z | Pursuing Feature Separation based on Neural Collapse for
Out-of-Distribution Detection | In the open world, detecting out-of-distribution (OOD) data, whose labels are disjoint with those of in-distribution (ID) samples, is important for reliable deep neural networks (DNNs). To achieve better detection performance, one type of approach proposes to fine-tune the model with auxiliary OOD datasets to amplify the difference between ID and OOD data through a separation loss defined on model outputs. However, none of these studies consider enlarging the feature disparity, which should be more effective compared to outputs. The main difficulty lies in the diversity of OOD samples, which makes it hard to describe their feature distribution, let alone design losses to separate them from ID features. In this paper, we neatly fence off the problem based on an aggregation property of ID features named Neural Collapse (NC). NC means that the penultimate features of ID samples within a class are nearly identical to the last layer weight of the corresponding class. Based on this property, we propose a simple but effective loss called OrthLoss, which binds the features of OOD data in a subspace orthogonal to the principal subspace of ID features formed by NC. In this way, the features of ID and OOD samples are separated by different dimensions. By optimizing the feature separation loss rather than purely enlarging output differences, our detection achieves SOTA performance on CIFAR benchmarks without any additional data augmentation or sampling, demonstrating the importance of feature separation in OOD detection. The code will be published. | [
"['Yingwen Wu' 'Ruiji Yu' 'Xinwen Cheng' 'Zhengbao He' 'Xiaolin Huang']"
] |
null | null | 2405.17823 | null | null | http://arxiv.org/pdf/2405.17823v2 | 2024-05-30T11:12:25Z | 2024-05-28T04:47:12Z | Spectral Truncation Kernels: Noncommutativity in $C^*$-algebraic Kernel
Machines | In this paper, we propose a new class of positive definite kernels based on the spectral truncation, which has been discussed in the fields of noncommutative geometry and $C^*$-algebra. We focus on kernels whose inputs and outputs are functions and generalize existing kernels, such as polynomial, product, and separable kernels, by introducing a truncation parameter $n$ that describes the noncommutativity of the products appearing in the kernels. When $n$ goes to infinity, the proposed kernels tend to the existing commutative kernels. If $n$ is finite, they exhibit different behavior, and the noncommutativity induces interactions along the data function domain. We show that the truncation parameter $n$ is a governing factor leading to performance enhancement: by setting an appropriate $n$, we can balance the representation power and the complexity of the representation space. The flexibility of the proposed class of kernels allows us to go beyond previous commutative kernels. | [
"['Yuka Hashimoto' 'Ayoub Hafid' 'Masahiro Ikeda' 'Hachem Kadri']"
] |
null | null | 2405.17829 | null | null | http://arxiv.org/pdf/2405.17829v1 | 2024-05-28T04:59:13Z | 2024-05-28T04:59:13Z | LDMol: Text-Conditioned Molecule Diffusion Model Leveraging Chemically
Informative Latent Space | With the emergence of diffusion models as the frontline of generative models, many researchers have proposed molecule generation techniques using conditional diffusion models. However, due to the fundamental nature of a molecule, which carries highly entangled correlations within a small number of atoms and bonds, it becomes difficult for a model to connect raw data with the conditions when the conditions become more complex as natural language. To address this, here we present a novel latent diffusion model dubbed LDMol, which enables a natural text-conditioned molecule generation. Specifically, LDMol is composed of three building blocks: a molecule encoder that produces a chemically informative feature space, a natural language-conditioned latent diffusion model using a Diffusion Transformer (DiT), and an autoregressive decoder for molecule re. In particular, recognizing that multiple SMILES notations can represent the same molecule, we employ a contrastive learning strategy to extract the chemical informative feature space. LDMol not only beats the existing baselines on the text-to-molecule generation benchmark but is also capable of zero-shot inference with unseen scenarios. Furthermore, we show that LDMol can be applied to downstream tasks such as molecule-to-text retrieval and text-driven molecule editing, demonstrating its versatility as a diffusion model. | [
"['Jinho Chang' 'Jong Chul Ye']"
] |
null | null | 2405.17832 | null | null | http://arxiv.org/pdf/2405.17832v1 | 2024-05-28T05:05:33Z | 2024-05-28T05:05:33Z | Mollification Effects of Policy Gradient Methods | Policy gradient methods have enabled deep reinforcement learning (RL) to approach challenging continuous control problems, even when the underlying systems involve highly nonlinear dynamics that generate complex non-smooth optimization landscapes. We develop a rigorous framework for understanding how policy gradient methods mollify non-smooth optimization landscapes to enable effective policy search, as well as the downside of it: while making the objective function smoother and easier to optimize, the stochastic objective deviates further from the original problem. We demonstrate the equivalence between policy gradient methods and solving backward heat equations. Following the ill-posedness of backward heat equations from PDE theory, we present a fundamental challenge to the use of policy gradient under stochasticity. Moreover, we make the connection between this limitation and the uncertainty principle in harmonic analysis to understand the effects of exploration with stochastic policies in RL. We also provide experimental results to illustrate both the positive and negative aspects of mollification effects in practice. | [
"['Tao Wang' 'Sylvia Herbert' 'Sicun Gao']"
] |
null | null | 2405.17836 | null | null | http://arxiv.org/pdf/2405.17836v1 | 2024-05-28T05:20:01Z | 2024-05-28T05:20:01Z | An Innovative Networks in Federated Learning | This paper presents the development and application of Wavelet Kolmogorov-Arnold Networks (Wav-KAN) in federated learning. We implemented Wav-KAN \cite{wav-kan} in the clients. Indeed, we have considered both the continuous wavelet transform (CWT) and the discrete wavelet transform (DWT) to enable multiresolution capability, which helps with heterogeneous data distribution across clients. Extensive experiments were conducted on different datasets, demonstrating Wav-KAN's superior performance in terms of interpretability, computational speed, training and test accuracy. Our federated learning algorithm integrates wavelet-based activation functions, parameterized by weight, scale, and translation, to enhance local and global model performance. Results show significant improvements in computational efficiency, robustness, and accuracy, highlighting the effectiveness of wavelet selection in scalable neural network design. | [
"['Zavareh Bozorgasl' 'Hao Chen']"
] |
null | null | 2405.17838 | null | null | http://arxiv.org/pdf/2405.17838v1 | 2024-05-28T05:28:49Z | 2024-05-28T05:28:49Z | Trust and Terror: Hazards in Text Reveal Negatively Biased Credulity and
Partisan Negativity Bias | Socio-linguistic indicators of text, such as emotion or sentiment, are often extracted using neural networks in order to better understand features of social media. One indicator that is often overlooked, however, is the presence of hazards within text. Recent psychological research suggests that statements about hazards are more believable than statements about benefits (a property known as negatively biased credulity), and that political liberals and conservatives differ in how often they share hazards. Here, we develop a new model to detect information concerning hazards, trained on a new collection of annotated X posts, as well as urban legends annotated in previous work. We show that not only does this model perform well (outperforming, e.g., zero-shot human annotator proxies, such as GPT-4) but that the hazard information it extracts is not strongly correlated with other indicators, namely moral outrage, sentiment, emotions, and threat words. (That said, consonant with expectations, hazard information does correlate positively with such emotions as fear, and negatively with emotions like joy.) We then apply this model to three datasets: X posts about COVID-19, X posts about the 2023 Hamas-Israel war, and a new expanded collection of urban legends. From these data, we uncover words associated with hazards unique to each dataset as well as differences in this language between groups of users, such as conservatives and liberals, which informs what these groups perceive as hazards. We further show that information about hazards peaks in frequency after major hazard events, and therefore acts as an automated indicator of such events. Finally, we find that information about hazards is especially prevalent in urban legends, which is consistent with previous work that finds that reports of hazards are more likely to be both believed and transmitted. | [
"['Keith Burghardt' 'Daniel M. T. Fessler' 'Chyna Tang' 'Anne Pisor'\n 'Kristina Lerman']"
] |
null | null | 2405.17842 | null | null | http://arxiv.org/pdf/2405.17842v1 | 2024-05-28T05:43:03Z | 2024-05-28T05:43:03Z | Discriminator-Guided Cooperative Diffusion for Joint Audio and Video
Generation | In this study, we aim to construct an audio-video generative model with minimal computational cost by leveraging pre-trained single-modal generative models for audio and video. To achieve this, we propose a novel method that guides each single-modal model to cooperatively generate well-aligned samples across modalities. Specifically, given two pre-trained base diffusion models, we train a lightweight joint guidance module to adjust scores separately estimated by the base models to match the score of joint distribution over audio and video. We theoretically show that this guidance can be computed through the gradient of the optimal discriminator distinguishing real audio-video pairs from fake ones independently generated by the base models. On the basis of this analysis, we construct the joint guidance module by training this discriminator. Additionally, we adopt a loss function to make the gradient of the discriminator work as a noise estimator, as in standard diffusion models, stabilizing the gradient of the discriminator. Empirical evaluations on several benchmark datasets demonstrate that our method improves both single-modal fidelity and multi-modal alignment with a relatively small number of parameters. | [
"['Akio Hayakawa' 'Masato Ishii' 'Takashi Shibuya' 'Yuki Mitsufuji']"
] |
null | null | 2405.17849 | null | null | http://arxiv.org/pdf/2405.17849v2 | 2024-06-05T15:26:58Z | 2024-05-28T05:56:11Z | I-LLM: Efficient Integer-Only Inference for Fully-Quantized Low-Bit
Large Language Models | Post-training quantization (PTQ) serves as a potent technique to accelerate the inference of large language models (LLMs). Nonetheless, existing works still necessitate a considerable number of floating-point (FP) operations during inference, including additional quantization and de-quantization, as well as non-linear operators such as RMSNorm and Softmax. This limitation hinders the deployment of LLMs on the edge and cloud devices. In this paper, we identify the primary obstacle to integer-only quantization for LLMs lies in the large fluctuation of activations across channels and tokens in both linear and non-linear operations. To address this issue, we propose I-LLM, a novel integer-only fully-quantized PTQ framework tailored for LLMs. Specifically, (1) we develop Fully-Smooth Block-Reconstruction (FSBR) to aggressively smooth inter-channel variations of all activations and weights. (2) to alleviate degradation caused by inter-token variations, we introduce a novel approach called Dynamic Integer-only MatMul (DI-MatMul). This method enables dynamic quantization in full-integer matrix multiplication by dynamically quantizing the input and outputs with integer-only operations. (3) we design DI-ClippedSoftmax, DI-Exp, and DI-Normalization, which utilize bit shift to execute non-linear operators efficiently while maintaining accuracy. The experiment shows that our I-LLM achieves comparable accuracy to the FP baseline and outperforms non-integer quantization methods. For example, I-LLM can operate at W4A4 with negligible loss of accuracy. To our knowledge, we are the first to bridge the gap between integer-only quantization and LLMs. We've published our code on anonymous.4open.science, aiming to contribute to the advancement of this field. | [
"['Xing Hu' 'Yuan Cheng' 'Dawei Yang' 'Zhihang Yuan' 'Jiangyong Yu'\n 'Chen Xu' 'Sifan Zhou']"
] |
null | null | 2405.17862 | null | null | http://arxiv.org/pdf/2405.17862v1 | 2024-05-28T06:20:14Z | 2024-05-28T06:20:14Z | Towards robust prediction of material properties for nuclear reactor
design under scarce data -- a study in creep rupture property | Advances in Deep Learning bring further investigation into credibility and robustness, especially for safety-critical engineering applications such as the nuclear industry. The key challenges include the availability of data sets (often scarce and sparse) and insufficient consideration of the uncertainty in the data, model, and prediction. This paper therefore presents a meta-learning based approach that is both uncertainty- and prior knowledge-informed, aiming at trustful predictions of material properties for the nuclear reactor design. It is suited for robust learning under limited data. Uncertainty has been accounted for where a distribution of predictor functions is produced for extrapolation. Results suggest it achieves superior performance compared with existing empirical methods in rupture life prediction, a case which is typically under a small data regime. While demonstrated herein with rupture properties, this learning approach is transferable to solve similar problems of data scarcity across the nuclear industry. It is of great importance for boosting AI analytics in the nuclear industry by proving applicability and robustness while providing tools that can be trusted. | [
"['Yu Chen' 'Edoardo Patelli' 'Zhen Yang' 'Adolphus Lye']"
] |
null | null | 2405.17874 | null | null | http://arxiv.org/abs/2405.17874v1 | 2024-05-28T06:51:42Z | 2024-05-28T06:51:42Z | NUTS, NARS, and Speech | To investigate whether "Intelligence is the capacity of an information-processing system to adapt to its environment while operating with insufficient knowledge and resources", we look at utilising the non axiomatic reasoning system (NARS) for speech recognition. This article presents NUTS: raNdom dimensionality redUction non axiomaTic reasoning few Shot learner for perception. NUTS consists of naive dimensionality reduction, some pre-processing, and then non axiomatic reasoning (NARS). With only 2 training examples NUTS performs similarly to the Whisper Tiny model for discrete word identification. | [
"['D. van der Sluis']"
] |
null | null | 2405.17875 | null | null | http://arxiv.org/pdf/2405.17875v1 | 2024-05-28T06:52:17Z | 2024-05-28T06:52:17Z | BO4IO: A Bayesian optimization approach to inverse optimization with
uncertainty quantification | This work addresses data-driven inverse optimization (IO), where the goal is to estimate unknown parameters in an optimization model from observed decisions that can be assumed to be optimal or near-optimal solutions to the optimization problem. The IO problem is commonly formulated as a large-scale bilevel program that is notoriously difficult to solve. Deviating from traditional exact solution methods, we propose a derivative-free optimization approach based on Bayesian optimization, which we call BO4IO, to solve general IO problems. We treat the IO loss function as a black box and approximate it with a Gaussian process model. Using the predicted posterior function, an acquisition function is minimized at each iteration to query new candidate solutions and sequentially converge to the optimal parameter estimates. The main advantages of using Bayesian optimization for IO are two-fold: (i) it circumvents the need of complex reformulations of the bilevel program or specialized algorithms and can hence enable computational tractability even when the underlying optimization problem is nonconvex or involves discrete variables, and (ii) it allows approximations of the profile likelihood, which provide uncertainty quantification on the IO parameter estimates. We apply the proposed method to three computational case studies, covering different classes of forward optimization problems ranging from convex nonlinear to nonconvex mixed-integer nonlinear programs. Our extensive computational results demonstrate the efficacy and robustness of BO4IO to accurately estimate unknown model parameters from small and noisy datasets. In addition, the proposed profile likelihood analysis has proven to be effective in providing good approximations of the confidence intervals on the parameter estimates and assessing the identifiability of the unknown parameters. | [
"['Yen-An Lu' 'Wei-Shou Hu' 'Joel A. Paulson' 'Qi Zhang']"
] |
null | null | 2405.17876 | null | null | http://arxiv.org/pdf/2405.17876v1 | 2024-05-28T06:52:19Z | 2024-05-28T06:52:19Z | Decentralized Directed Collaboration for Personalized Federated Learning | Personalized Federated Learning (PFL) is proposed to find the greatest personalized models for each client. To avoid the central failure and communication bottleneck in the server-based FL, we concentrate on the Decentralized Personalized Federated Learning (DPFL) that performs distributed model training in a Peer-to-Peer (P2P) manner. Most personalized works in DPFL are based on undirected and symmetric topologies, however, the data, computation and communication resources heterogeneity result in large variances in the personalized models, which lead the undirected aggregation to suboptimal personalized performance and unguaranteed convergence. To address these issues, we propose a directed collaboration DPFL framework by incorporating stochastic gradient push and partial model personalized, called \textbf{D}ecentralized \textbf{Fed}erated \textbf{P}artial \textbf{G}radient \textbf{P}ush (\textbf{DFedPGP}). It personalizes the linear classifier in the modern deep model to customize the local solution and learns a consensus representation in a fully decentralized manner. Clients only share gradients with a subset of neighbors based on the directed and asymmetric topologies, which guarantees flexible choices for resource efficiency and better convergence. Theoretically, we show that the proposed DFedPGP achieves a superior convergence rate of $\mathcal{O}(\frac{1}{\sqrt{T}})$ in the general non-convex setting, and prove the tighter connectivity among clients will speed up the convergence. The proposed method achieves state-of-the-art (SOTA) accuracy in both data and computation heterogeneity scenarios, demonstrating the efficiency of the directed collaboration and partial gradient push. | [
"['Yingqi Liu' 'Yifan Shi' 'Qinglun Li' 'Baoyuan Wu' 'Xueqian Wang'\n 'Li Shen']"
] |
null | null | 2405.17878 | null | null | http://arxiv.org/pdf/2405.17878v1 | 2024-05-28T06:57:01Z | 2024-05-28T06:57:01Z | An Information Theoretic Metric for Evaluating Unlearning Models | Machine unlearning (MU) addresses privacy concerns by removing information of `forgetting data' samples from trained models. Typically, evaluating MU methods involves comparing unlearned models to those retrained from scratch without forgetting data, using metrics such as membership inference attacks (MIA) and accuracy measurements. These evaluations implicitly assume that if the output logits of the unlearned and retrained models are similar, the unlearned model has successfully forgotten the data. Here, we challenge if this assumption is valid. In particular, we conduct a simple experiment of training only the last layer of a given original model using a novel masked-distillation technique while keeping the rest fixed. Surprisingly, simply altering the last layer yields favorable outcomes in the existing evaluation metrics, while the model does not successfully unlearn the samples or classes. For better evaluating the MU methods, we propose a metric that quantifies the residual information about forgetting data samples in intermediate features using mutual information, called information difference index or IDI for short. The IDI provides a comprehensive evaluation of MU methods by efficiently analyzing the internal structure of DNNs. Our metric is scalable to large datasets and adaptable to various model architectures. Additionally, we present COLapse-and-Align (COLA), a simple contrastive-based method that effectively unlearns intermediate features. | [
"['Dongjae Jeon' 'Wonje Jeung' 'Taeheon Kim' 'Albert No' 'Jonghyun Choi']"
] |
null | null | 2405.17879 | null | null | http://arxiv.org/pdf/2405.17879v2 | 2024-06-07T12:27:03Z | 2024-05-28T06:57:22Z | Resisting Stochastic Risks in Diffusion Planners with the Trajectory
Aggregation Tree | Diffusion planners have shown promise in handling long-horizon and sparse-reward tasks due to the non-autoregressive plan generation. However, their inherent stochastic risk of generating infeasible trajectories presents significant challenges to their reliability and stability. We introduce a novel approach, the Trajectory Aggregation Tree (TAT), to address this issue in diffusion planners. Compared to prior methods that rely solely on raw trajectory predictions, TAT aggregates information from both historical and current trajectories, forming a dynamic tree-like structure. Each trajectory is conceptualized as a branch and individual states as nodes. As the structure evolves with the integration of new trajectories, unreliable states are marginalized, and the most impactful nodes are prioritized for decision-making. TAT can be deployed without modifying the original training and sampling pipelines of diffusion planners, making it a training-free, ready-to-deploy solution. We provide both theoretical analysis and empirical evidence to support TAT's effectiveness. Our results highlight its remarkable ability to resist the risk from unreliable trajectories, guarantee the performance boosting of diffusion planners in $100\%$ of tasks, and exhibit an appreciable tolerance margin for sample quality, thereby enabling planning with a more than $3\times$ acceleration. | [
"['Lang Feng' 'Pengjie Gu' 'Bo An' 'Gang Pan']"
] |
null | null | 2405.17880 | null | null | http://arxiv.org/pdf/2405.17880v1 | 2024-05-28T07:00:28Z | 2024-05-28T07:00:28Z | Diffusion Rejection Sampling | Recent advances in powerful pre-trained diffusion models encourage the development of methods to improve the sampling performance under well-trained diffusion models. This paper introduces Diffusion Rejection Sampling (DiffRS), which uses a rejection sampling scheme that aligns the sampling transition kernels with the true ones at each timestep. The proposed method can be viewed as a mechanism that evaluates the quality of samples at each intermediate timestep and refines them with varying effort depending on the sample. Theoretical analysis shows that DiffRS can achieve a tighter bound on sampling error compared to pre-trained models. Empirical results demonstrate the state-of-the-art performance of DiffRS on the benchmark datasets and the effectiveness of DiffRS for fast diffusion samplers and large-scale text-to-image diffusion models. Our code is available at https://github.com/aailabkaist/DiffRS. | [
"['Byeonghu Na' 'Yeongmin Kim' 'Minsang Park' 'Donghyeok Shin' 'Wanmo Kang'\n 'Il-Chul Moon']"
] |
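The DiffRS record above builds on the standard rejection-sampling mechanism, applied at each denoising timestep. The sketch below illustrates only that generic accept/resample step, with a placeholder acceptance function standing in for the paper's learned, transition-kernel-aligned criterion; all names are illustrative assumptions and not the paper's released code.

```python
import numpy as np

def rejection_step(propose, accept_prob, rng, max_tries=50):
    """Generic per-timestep rejection sampling.

    propose():      draws one candidate next state from the pre-trained sampler.
    accept_prob(x): returns a probability in [0, 1]; in DiffRS this role is
                    played by a learned criterion, here it is a placeholder.
    Falls back to the last candidate if nothing is accepted within max_tries.
    """
    for _ in range(max_tries):
        x = propose()
        if rng.uniform() < accept_prob(x):
            return x
    return x  # fallback: keep the final candidate

# Toy usage: prefer candidates close to the origin.
rng = np.random.default_rng(0)
propose = lambda: rng.normal(size=2)
accept_prob = lambda x: float(np.exp(-0.5 * np.sum(x ** 2)))  # placeholder criterion
print(rejection_step(propose, accept_prob, rng))
```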
null | null | 2405.17881 | null | null | http://arxiv.org/pdf/2405.17881v1 | 2024-05-28T07:03:49Z | 2024-05-28T07:03:49Z | Crystal-LSBO: Automated Design of De Novo Crystals with Latent Space
Bayesian Optimization | Generative modeling of crystal structures is significantly challenged by the complexity of input data, which constrains the ability of these models to explore and discover novel crystals. This complexity often confines de novo design methodologies to merely small perturbations of known crystals and hampers the effective application of advanced optimization techniques. One such optimization technique, Latent Space Bayesian Optimization (LSBO) has demonstrated promising results in uncovering novel objects across various domains, especially when combined with Variational Autoencoders (VAEs). Recognizing LSBO's potential and the critical need for innovative crystal discovery, we introduce Crystal-LSBO, a de novo design framework for crystals specifically tailored to enhance explorability within LSBO frameworks. Crystal-LSBO employs multiple VAEs, each dedicated to a distinct aspect of crystal structure: lattice, coordinates, and chemical elements, orchestrated by an integrative model that synthesizes these components into a cohesive output. This setup not only streamlines the learning process but also produces explorable latent spaces thanks to the decreased complexity of the learning task for each model, enabling LSBO approaches to operate. Our study pioneers the use of LSBO for de novo crystal design, demonstrating its efficacy through optimization tasks focused mainly on formation energy values. Our results highlight the effectiveness of our methodology, offering a new perspective for de novo crystal discovery. | [
"['Onur Boyar' 'Yanheng Gu' 'Yuji Tanaka' 'Shunsuke Tonogai'\n 'Tomoya Itakura' 'Ichiro Takeuchi']"
] |
null | null | 2405.17882 | null | null | http://arxiv.org/pdf/2405.17882v1 | 2024-05-28T07:08:29Z | 2024-05-28T07:08:29Z | When is exponential asymptotic optimality achievable in average-reward
restless bandits? | We consider the discrete-time infinite-horizon average-reward restless bandit problem. We propose a novel policy that maintains two dynamic subsets of arms: one subset of arms has a nearly optimal state distribution and takes actions according to an Optimal Local Control routine; the other subset of arms is driven towards the optimal state distribution and gradually merged into the first subset. We show that our policy is asymptotically optimal with an $O(\exp(-C N))$ optimality gap for an $N$-armed problem, under the mild assumptions of aperiodic-unichain, non-degeneracy, and local stability. Our policy is the first to achieve exponential asymptotic optimality under the above set of easy-to-verify assumptions, whereas prior work either requires a strong Global Attractor assumption or only achieves an $O(1/\sqrt{N})$ optimality gap. We further discuss the fundamental obstacles in significantly weakening our assumptions. In particular, we prove a lower bound showing that local stability is fundamental for exponential asymptotic optimality. | [
"['Yige Hong' 'Qiaomin Xie' 'Yudong Chen' 'Weina Wang']"
] |
null | null | 2405.17889 | null | null | http://arxiv.org/pdf/2405.17889v1 | 2024-05-28T07:11:30Z | 2024-05-28T07:11:30Z | Improving Discrete Diffusion Models via Structured Preferential
Generation | In the domains of image and audio, diffusion models have shown impressive performance. However, their application to discrete data types, such as language, has often been suboptimal compared to autoregressive generative models. This paper tackles the challenge of improving discrete diffusion models by introducing a structured forward process that leverages the inherent information hierarchy in discrete categories, such as words in text. Our approach biases the generative process to produce certain categories before others, resulting in a notable improvement in log-likelihood scores on the text8 dataset. This work paves the way for more advances in discrete diffusion models with potentially significant enhancements in performance. | [
"['Severi Rissanen' 'Markus Heinonen' 'Arno Solin']"
] |