categories (string) | doi (string) | id (string) | year (float64) | venue (string) | link (string) | updated (string) | published (string) | title (string) | abstract (string) | authors (list)
---|---|---|---|---|---|---|---|---|---|---
null | null | 2402.11515 | null | null | http://arxiv.org/pdf/2402.11515v4 | 2024-04-29T13:09:57Z | 2024-02-18T09:07:30Z | Optimal Parallelization Strategies for Active Flow Control in Deep Reinforcement Learning-Based Computational Fluid Dynamics | Deep Reinforcement Learning (DRL) has emerged as a promising approach for handling highly dynamic and nonlinear Active Flow Control (AFC) problems. However, the computational cost associated with training DRL models presents a significant performance bottleneck. To address this challenge and enable efficient scaling on high-performance computing architectures, this study focuses on optimizing DRL-based algorithms in parallel settings. We validate an existing state-of-the-art DRL framework used for AFC problems and discuss its efficiency bottlenecks. Subsequently, by deconstructing the overall framework and conducting extensive scalability benchmarks for individual components, we investigate various hybrid parallelization configurations and propose efficient parallelization strategies. Moreover, we refine input/output (I/O) operations in multi-environment DRL training to tackle critical overhead associated with data movement. Finally, we demonstrate the optimized framework for a typical AFC problem where near-linear scaling can be obtained for the overall framework. We achieve a significant boost in parallel efficiency from around 49% to approximately 78%, and the training process is accelerated by approximately 47 times using 60 CPU cores. These findings are expected to provide valuable insights for further advancements in DRL-based AFC studies. | Wang Jia, Hang Xu
null | null | 2402.11518 | null | null | http://arxiv.org/abs/2402.11518v2 | 2024-06-22T04:03:30Z | 2024-02-18T09:21:12Z | Large Language Model-driven Meta-structure Discovery in Heterogeneous Information Network | Heterogeneous information networks (HIN) have gained increasing popularity in recent years for capturing complex relations between diverse types of nodes. Meta-structures are proposed as a useful tool to identify the important patterns in HINs, but hand-crafted meta-structures pose significant challenges for scaling up, drawing wide research attention towards developing automatic search algorithms. Previous efforts primarily focused on searching for meta-structures with good empirical performance, overlooking the importance of human comprehensibility and generalizability. To address this challenge, we draw inspiration from the emergent reasoning abilities of large language models (LLMs). We propose ReStruct, a meta-structure search framework that integrates LLM reasoning into the evolutionary procedure. ReStruct uses a grammar translator to encode the meta-structures into natural language sentences, and leverages the reasoning power of LLMs to evaluate their semantic feasibility. Besides, ReStruct also employs performance-oriented evolutionary operations. These two competing forces allow ReStruct to jointly optimize the semantic explainability and empirical performance of meta-structures. Furthermore, ReStruct contains a differential LLM explainer to generate and refine natural language explanations for the discovered meta-structures by reasoning through the search history. Experiments on eight representative HIN datasets demonstrate that ReStruct achieves state-of-the-art performance in both recommendation and node classification tasks. Moreover, a survey study involving 73 graduate students shows that the discovered meta-structures and generated explanations by ReStruct are substantially more comprehensible. Our code and questionnaire are available at https://github.com/LinChen-65/ReStruct. | Lin Chen, Fengli Xu, Nian Li, Zhenyu Han, Meng Wang, Yong Li, Pan Hui
null | null | 2402.11525 | null | null | http://arxiv.org/pdf/2402.11525v3 | 2024-02-27T17:12:38Z | 2024-02-18T09:51:49Z | Advancing Translation Preference Modeling with RLHF: A Step Towards Cost-Effective Solution | Faithfulness, expressiveness, and elegance are the constant pursuits in machine translation. However, traditional metrics like *BLEU* do not strictly align with human preference of translation quality. In this paper, we explore leveraging reinforcement learning with human feedback (*RLHF*) to improve translation quality. It is non-trivial to collect a large high-quality dataset of human comparisons between translations, especially for low-resource languages. To address this issue, we propose a cost-effective preference learning strategy, optimizing reward models by distinguishing between human and machine translations. In this manner, the reward model learns the deficiencies of machine translation compared to human translation and guides subsequent improvements in machine translation. Experimental results demonstrate that *RLHF* can effectively enhance translation quality, and this improvement benefits other translation directions not trained with *RLHF*. Further analysis indicates that the model's language capabilities play a crucial role in preference learning. A reward model with strong language capabilities can more sensitively learn the subtle differences in translation quality and align better with real human translation preferences. | Nuo Xu, Jun Zhao, Can Zu, Sixian Li, Lu Chen, Zhihao Zhang, Rui Zheng, Shihan Dou, Wenjuan Qin, Tao Gui, Qi Zhang, Xuanjing Huang
null | null | 2402.11538 | null | null | http://arxiv.org/pdf/2402.11538v1 | 2024-02-18T10:38:34Z | 2024-02-18T10:38:34Z | PASCL: Supervised Contrastive Learning with Perturbative Augmentation for Particle Decay Reconstruction | In high-energy physics, particles produced in collision events decay following a hierarchical tree structure, where only the final decay products can be observed using detectors. However, the large combinatorial space of possible tree structures makes it challenging to recover the actual decay process given a set of final particles. To better analyse the hierarchical tree structure, we propose a graph-based deep learning model to infer the tree structure to reconstruct collision events. In particular, we use a compact matrix representation termed the lowest common ancestor generations (LCAG) matrix to encode the particle decay tree structure. Then, we introduce a perturbative augmentation technique applied to node features, aiming to mimic experimental uncertainties and increase data diversity. We further propose a supervised graph contrastive learning algorithm to utilize the information of inter-particle relations from multiple decay processes. Extensive experiments show that our proposed supervised graph contrastive learning with perturbative augmentation (PASCL) method outperforms state-of-the-art baseline models on an existing physics-based dataset, significantly improving the reconstruction accuracy. This method provides a more effective training strategy for models with the same parameters and makes way for more accurate and efficient high-energy particle physics data analysis. | Junjian Lu, Siwei Liu, Dmitrii Kobylianski, Etienne Dreyer, Eilam Gross, Shangsong Liang
null | null | 2402.11552 | null | null | http://arxiv.org/pdf/2402.11552v1 | 2024-02-18T11:49:38Z | 2024-02-18T11:49:38Z | Empirical Density Estimation based on Spline Quasi-Interpolation with applications to Copulas clustering modeling | Density estimation is a fundamental technique employed in various fields to model and understand the underlying distribution of data. The primary objective of density estimation is to estimate the probability density function of a random variable. This process is particularly valuable when dealing with univariate or multivariate data and is essential for tasks such as clustering, anomaly detection, and generative modeling. In this paper we propose a mono-variate approximation of the density using spline quasi-interpolation and apply it in the context of clustering modeling. The clustering technique used is based on the construction of suitable multivariate distributions which rely on the estimation of the monovariate empirical densities (marginals). Such an approximation is achieved by using the proposed spline quasi-interpolation, while the joint distributions used to model the sought clustering partition are constructed with copula functions. In particular, since copulas can capture the dependence between the features of the data independently from the marginal distributions, a finite mixture copula model is proposed. The presented algorithm is validated on artificial and real datasets. | Cristiano Tamborrino, Antonella Falini, Francesca Mazzia
null | null | 2402.11558 | null | null | http://arxiv.org/pdf/2402.11558v2 | 2024-03-22T08:41:08Z | 2024-02-18T11:59:04Z | A Temporally Disentangled Contrastive Diffusion Model for Spatiotemporal Imputation | Spatiotemporal data analysis is pivotal across various domains, such as transportation, meteorology, and healthcare. The data collected in real-world scenarios are often incomplete due to device malfunctions and network errors. Spatiotemporal imputation aims to predict missing values by exploiting the spatial and temporal dependencies in the observed data. Traditional imputation approaches based on statistical and machine learning techniques require the data to conform to their distributional assumptions, while graph and recurrent neural networks are prone to error accumulation problems due to their recurrent structures. Generative models, especially diffusion models, can potentially circumvent the reliance on inaccurate, previously imputed values for future predictions. However, diffusion models still face challenges in generating stable results. We propose to address these challenges by designing conditional information to guide the generative process and expedite the training process. We introduce a conditional diffusion framework called C$^2$TSD, which incorporates disentangled temporal (trend and seasonality) representations as conditional information and employs contrastive learning to improve generalizability. Our extensive experiments on three real-world datasets demonstrate the superior performance of our approach compared to a number of state-of-the-art baselines. | Yakun Chen, Kaize Shi, Zhangkai Wu, Juan Chen, Xianzhi Wang, Julian McAuley, Guandong Xu, Shui Yu
null | null | 2402.11565 | null | null | http://arxiv.org/pdf/2402.11565v1 | 2024-02-18T12:24:45Z | 2024-02-18T12:24:45Z | Continual Learning on Graphs: Challenges, Solutions, and Opportunities | Continual learning on graph data has recently attracted paramount attention for its aim to resolve the catastrophic forgetting problem on existing tasks while adapting the sequentially updated model to newly emerged graph tasks. While there have been efforts to summarize progress on continual learning research over Euclidean data, e.g., images and texts, a systematic review of progress in continual learning on graphs, a.k.a. continual graph learning (CGL) or lifelong graph learning, is still lacking. Graph data are far more complex in terms of data structures and application scenarios, making CGL task settings, model designs, and applications extremely challenging. To bridge the gap, we provide a comprehensive review of existing continual graph learning (CGL) algorithms by elucidating the different task settings and categorizing the existing methods based on their characteristics. We compare the CGL methods with traditional continual learning techniques and analyze the applicability of the traditional continual learning techniques to CGL tasks. Additionally, we review the benchmark works that are crucial to CGL research. Finally, we discuss the remaining challenges and propose several future directions. We will maintain an up-to-date GitHub repository featuring a comprehensive list of CGL algorithms, accessible at https://github.com/UConn-DSIS/Survey-of-Continual-Learning-on-Graphs. | Xikun Zhang, Dongjin Song, Dacheng Tao
null | null | 2402.11585 | null | null | http://arxiv.org/pdf/2402.11585v3 | 2024-02-28T08:52:36Z | 2024-02-18T13:24:48Z | PolypNextLSTM: A lightweight and fast polyp video segmentation network using ConvNext and ConvLSTM | Commonly employed in polyp segmentation, single image UNet architectures lack the temporal insight clinicians gain from video data in diagnosing polyps. To mirror clinical practices more faithfully, our proposed solution, PolypNextLSTM, leverages video-based deep learning, harnessing temporal information for superior segmentation performance with the least parameter overhead, making it possibly suitable for edge devices. PolypNextLSTM employs a UNet-like structure with ConvNext-Tiny as its backbone, strategically omitting the last two layers to reduce parameter overhead. Our temporal fusion module, a Convolutional Long Short Term Memory (ConvLSTM), effectively exploits temporal features. Our primary novelty lies in PolypNextLSTM, which stands out as the leanest in parameters and the fastest model, surpassing the performance of five state-of-the-art image and video-based deep learning models. The evaluation on the SUN-SEG dataset spans easy-to-detect and hard-to-detect polyp scenarios, along with videos containing challenging artefacts like fast motion and occlusion. Comparison against 5 image-based and 5 video-based models demonstrates PolypNextLSTM's superiority, achieving a Dice score of 0.7898 on the hard-to-detect polyp test set, surpassing image-based PraNet (0.7519) and video-based PNSPlusNet (0.7486). Notably, our model excels in videos featuring complex artefacts such as ghosting and occlusion. PolypNextLSTM, integrating pruned ConvNext-Tiny with ConvLSTM for temporal fusion, not only exhibits superior segmentation performance but also maintains the highest frames per second among evaluated models. The code is available at https://github.com/mtec-tuhh/PolypNextLSTM. | Debayan Bhattacharya, Konrad Reuter, Finn Behrendt, Lennart Maack, Sarah Grube, Alexander Schlaefer
null | null | 2402.11592 | null | null | http://arxiv.org/pdf/2402.11592v3 | 2024-05-28T03:27:06Z | 2024-02-18T14:08:48Z | Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark | In the evolving landscape of natural language processing (NLP), fine-tuning pre-trained Large Language Models (LLMs) with first-order (FO) optimizers like SGD and Adam has become standard. Yet, as LLMs grow in size, the substantial memory overhead from back-propagation (BP) for FO gradient computation presents a significant challenge. Addressing this issue is crucial, especially for applications like on-device training where memory efficiency is paramount. This paper proposes a shift towards BP-free, zeroth-order (ZO) optimization as a solution for reducing memory costs during LLM fine-tuning, building on the initial concept introduced by MeZO. Unlike traditional ZO-SGD methods, our work expands the exploration to a wider array of ZO optimization techniques, through a comprehensive, first-of-its-kind benchmarking study across five LLM families (Roberta, OPT, LLaMA, Vicuna, Mistral), three task complexities, and five fine-tuning schemes. Our study unveils previously overlooked optimization principles, highlighting the importance of task alignment, the role of the forward gradient method, and the balance between algorithm complexity and fine-tuning performance. We further introduce novel enhancements to ZO optimization, including block-wise descent, hybrid training, and gradient sparsity. Our study offers a promising direction for achieving further memory-efficient LLM fine-tuning. Code to reproduce all our experiments is available at https://github.com/ZO-Bench/ZO-LLM. | Yihua Zhang, Pingzhi Li, Junyuan Hong, Jiaxiang Li, Yimeng Zhang, Wenqing Zheng, Pin-Yu Chen, Jason D. Lee, Wotao Yin, Mingyi Hong, Zhangyang Wang, Sijia Liu, Tianlong Chen
null | null | 2402.11594 | null | null | http://arxiv.org/pdf/2402.11594v1 | 2024-02-18T14:12:15Z | 2024-02-18T14:12:15Z | Simplifying Hyperparameter Tuning in Online Machine Learning -- The spotRiverGUI | Batch Machine Learning (BML) reaches its limits when dealing with very large amounts of streaming data. This is especially true for available memory, handling drift in data streams, and processing new, unknown data. Online Machine Learning (OML) is an alternative to BML that overcomes the limitations of BML. OML is able to process data in a sequential manner, which is especially useful for data streams. The `river` package is a Python OML-library, which provides a variety of online learning algorithms for classification, regression, clustering, anomaly detection, and more. The `spotRiver` package provides a framework for hyperparameter tuning of OML models. The `spotRiverGUI` is a graphical user interface for the `spotRiver` package. The `spotRiverGUI` releases the user from the burden of manually searching for the optimal hyperparameter setting. After the data is provided, users can compare different OML algorithms from the powerful `river` package in a convenient way and tune the selected algorithms very efficiently. | Thomas Bartz-Beielstein
null | null | 2402.11604 | null | null | http://arxiv.org/pdf/2402.11604v1 | 2024-02-18T14:42:47Z | 2024-02-18T14:42:47Z | Self-evolving Autoencoder Embedded Q-Network | In the realm of sequential decision-making tasks, the exploration capability of a reinforcement learning (RL) agent is paramount for achieving high rewards through interactions with the environment. To enhance this crucial ability, we propose SAQN, a novel approach wherein a self-evolving autoencoder (SA) is embedded with a Q-Network (QN). In SAQN, the self-evolving autoencoder architecture adapts and evolves as the agent explores the environment. This evolution enables the autoencoder to capture a diverse range of raw observations and represent them effectively in its latent space. By leveraging the disentangled states extracted from the encoder-generated latent space, the QN is trained to determine optimal actions that improve rewards. During the evolution of the autoencoder architecture, a bias-variance regulatory strategy is employed to elicit the optimal response from the RL agent. This strategy involves two key components: (i) fostering the growth of nodes to retain previously acquired knowledge, ensuring a rich representation of the environment, and (ii) pruning the least contributing nodes to maintain a more manageable and tractable latent space. Extensive experimental evaluations conducted on three distinct benchmark environments and a real-world molecular environment demonstrate that the proposed SAQN significantly outperforms state-of-the-art counterparts. The results highlight the effectiveness of the self-evolving autoencoder and its collaboration with the Q-Network in tackling sequential decision-making tasks. | J. Senthilnath, Bangjian Zhou, Zhen Wei Ng, Deeksha Aggarwal, Rajdeep Dutta, Ji Wei Yoon, Aye Phyu Phyu Aung, Keyu Wu, Min Wu, Xiaoli Li
null | null | 2402.11622 | null | null | http://arxiv.org/pdf/2402.11622v2 | 2024-06-28T07:20:22Z | 2024-02-18T15:28:39Z | Logical Closed Loop: Uncovering Object Hallucinations in Large Vision-Language Models | Object hallucination has been an Achilles' heel which hinders the broader applications of large vision-language models (LVLMs). Object hallucination refers to the phenomenon that the LVLMs claim non-existent objects in the image. To mitigate object hallucinations, instruction tuning and external model-based detection methods have been proposed, which either require large-scale computational resources or depend on the detection results of external models. However, utilizing the LVLM itself to alleviate object hallucinations remains an under-explored field. In this work, we adopt the intuition that the LVLM tends to respond logically consistently for existent objects but inconsistently for hallucinated objects. Therefore, we propose a Logical Closed Loop-based framework for Object Hallucination Detection and Mitigation, namely LogicCheckGPT. Specifically, we devise logical consistency probing to raise questions with logical correlations, inquiring about attributes of objects and vice versa. Whether their responses can form a logical closed loop serves as an indicator of object hallucination. As a plug-and-play method, it can be seamlessly applied to all existing LVLMs. Comprehensive experiments conducted on three benchmarks across four LVLMs have demonstrated significant improvements brought by our method, indicating its effectiveness and generality. | Junfei Wu, Qiang Liu, Ding Wang, Jinghao Zhang, Shu Wu, Liang Wang, Tieniu Tan
null | null | 2402.11628 | null | null | http://arxiv.org/pdf/2402.11628v1 | 2024-02-18T16:03:04Z | 2024-02-18T16:03:04Z | Discrete Neural Algorithmic Reasoning | Neural algorithmic reasoning aims to capture computations with neural networks by training models to imitate the execution of classical algorithms. While common architectures are expressive enough to contain the correct model in the weight space, current neural reasoners struggle to generalize well on out-of-distribution data. On the other hand, classical computations are not affected by distribution shifts as they can be described as transitions between discrete computational states. In this work, we propose to force neural reasoners to maintain the execution trajectory as a combination of finite predefined states. Trained with supervision on the algorithm's state transitions, such models are able to perfectly align with the original algorithm. To show this, we evaluate our approach on the SALSA-CLRS benchmark, where we get perfect test scores for all tasks. Moreover, the proposed architectural choice allows us to prove the correctness of the learned algorithms for any test data. | Gleb Rodionov, Liudmila Prokhorenkova
null | null | 2402.11637 | null | null | http://arxiv.org/pdf/2402.11637v1 | 2024-02-18T16:34:12Z | 2024-02-18T16:34:12Z | Poisoning Federated Recommender Systems with Fake Users | Federated recommendation is a prominent use case within federated learning, yet it remains susceptible to various attacks, from user- to server-side vulnerabilities. Poisoning attacks are particularly notable among user-side attacks, as participants upload malicious model updates to deceive the global model, often intending to promote or demote specific targeted items. This study investigates strategies for executing promotion attacks in federated recommender systems. Current poisoning attacks on federated recommender systems often rely on additional information, such as the local training data of genuine users or item popularity. However, such information is challenging for the potential attacker to obtain. Thus, there is a need to develop an attack that requires no extra information apart from item embeddings obtained from the server. In this paper, we introduce a novel fake-user-based poisoning attack named PoisonFRS to promote the attacker-chosen targeted item in federated recommender systems without requiring knowledge about user-item rating data, user attributes, or the aggregation rule used by the server. Extensive experiments on multiple real-world datasets demonstrate that PoisonFRS can effectively promote the attacker-chosen targeted item to a large portion of genuine users and outperform current benchmarks that rely on additional information about the system. We further observe that the model updates from both genuine and fake users are indistinguishable within the latent space. | Ming Yin, Yichang Xu, Minghong Fang, Neil Zhenqiang Gong
null | null | 2402.11639 | null | null | http://arxiv.org/pdf/2402.11639v2 | 2024-05-28T05:15:53Z | 2024-02-18T16:37:32Z | In-Context Learning with Transformers: Softmax Attention Adapts to Function Lipschitzness | A striking property of transformers is their ability to perform in-context learning (ICL), a machine learning framework in which the learner is presented with a novel context during inference implicitly through some data, and tasked with making a prediction in that context. As such, that learner must adapt to the context without additional training. We explore the role of softmax attention in an ICL setting where each context encodes a regression task. We show that an attention unit learns a window that it uses to implement a nearest-neighbors predictor adapted to the landscape of the pretraining tasks. Specifically, we show that this window widens with decreasing Lipschitzness and increasing label noise in the pretraining tasks. We also show that on low-rank, linear problems, the attention unit learns to project onto the appropriate subspace before inference. Further, we show that this adaptivity relies crucially on the softmax activation and thus cannot be replicated by the linear activation often studied in prior theoretical analyses. | Liam Collins, Advait Parulekar, Aryan Mokhtari, Sujay Sanghavi, Sanjay Shakkottai
null | null | 2402.11641 | null | null | http://arxiv.org/pdf/2402.11641v2 | 2024-02-23T09:18:30Z | 2024-02-18T16:43:21Z | Towards Versatile Graph Learning Approach: from the Perspective of Large Language Models | Graph-structured data are commonly used and have wide application scenarios in the real world. For these diverse applications, the vast variety of learning tasks, graph domains, and complex graph learning procedures present challenges for human experts when designing versatile graph learning approaches. Facing these challenges, large language models (LLMs) offer a potential solution due to their extensive knowledge and human-like intelligence. This paper proposes a novel conceptual prototype for designing versatile graph learning methods with LLMs, with a particular focus on the "where" and "how" perspectives. From the "where" perspective, we summarize four key graph learning procedures: task definition, graph data feature engineering, model selection and optimization, and deployment and serving. We then explore the application scenarios of LLMs in these procedures across a wider spectrum. From the "how" perspective, we align the abilities of LLMs with the requirements of each procedure. Finally, we point out the promising directions that could better leverage the strength of LLMs towards versatile graph learning methods. | Lanning Wei, Jun Gao, Huan Zhao, Quanming Yao
null | null | 2402.11650 | null | null | http://arxiv.org/pdf/2402.11650v1 | 2024-02-18T17:02:39Z | 2024-02-18T17:02:39Z | Theoretical foundations for programmatic reinforcement learning | The field of Reinforcement Learning (RL) is concerned with algorithms for learning optimal policies in unknown stochastic environments. Programmatic RL studies representations of policies as programs, i.e., representations involving higher-order constructs such as control loops. Despite attracting a lot of attention at the intersection of the machine learning and formal methods communities, very little is known on the theoretical front about programmatic RL: what are good classes of programmatic policies? How large are optimal programmatic policies? How can we learn them? The goal of this paper is to give first answers to these questions, initiating a theoretical study of programmatic RL. | Guruprerana Shabadi, Nathanaël Fijalkow, Théo Matricon
null | null | 2402.11652 | null | null | http://arxiv.org/pdf/2402.11652v2 | 2024-04-15T16:39:15Z | 2024-02-18T17:13:46Z | Doubly Robust Inference in Causal Latent Factor Models | This article introduces a new estimator of average treatment effects under unobserved confounding in modern data-rich environments featuring large numbers of units and outcomes. The proposed estimator is doubly robust, combining outcome imputation, inverse probability weighting, and a novel cross-fitting procedure for matrix completion. We derive finite-sample and asymptotic guarantees, and show that the error of the new estimator converges to a mean-zero Gaussian distribution at a parametric rate. Simulation results demonstrate the practical relevance of the formal properties of the estimators analyzed in this article. | Alberto Abadie, Anish Agarwal, Raaz Dwivedi, Abhin Shah
null | null | 2402.11654 | null | null | http://arxiv.org/pdf/2402.11654v1 | 2024-02-18T17:17:17Z | 2024-02-18T17:17:17Z | Model-Free $\mu$-Synthesis: A Nonsmooth Optimization Perspective | In this paper, we revisit model-free policy search on an important robust control benchmark, namely $\mu$-synthesis. In the general output-feedback setting, there do not exist convex formulations for this problem, and hence global optimality guarantees are not expected. Apkarian (2011) presented a nonconvex nonsmooth policy optimization approach for this problem, and achieved state-of-the-art design results by using subgradient-based policy search algorithms which generate update directions in a model-based manner. Despite the lack of convexity and global optimality guarantees, these subgradient-based policy search methods have led to impressive numerical results in practice. Building on this policy optimization perspective, our paper extends these subgradient-based search methods to a model-free setting. Specifically, we examine the effectiveness of two model-free policy optimization strategies: the model-free non-derivative sampling method and the zeroth-order policy search with uniform smoothing. We performed an extensive numerical study to demonstrate that both methods consistently replicate the design outcomes achieved by their model-based counterparts. Additionally, we provide some theoretical justifications showing that convergence guarantees to stationary points can be established for our model-free $\mu$-synthesis under some assumptions related to the coerciveness of the cost function. Overall, our results demonstrate that derivative-free policy optimization offers a competitive and viable approach for solving general output-feedback $\mu$-synthesis problems in the model-free setting. | Darioush Keivan, Xingang Guo, Peter Seiler, Geir Dullerud, Bin Hu
null | null | 2402.11656 | null | null | http://arxiv.org/pdf/2402.11656v2 | 2024-06-28T23:00:45Z | 2024-02-18T17:27:51Z | Integrating Pre-Trained Language Model with Physical Layer Communications | The burgeoning field of on-device AI communication, where devices exchange information directly through embedded foundation models, such as language models (LMs), requires robust, efficient, and generalizable communication frameworks. However, integrating these frameworks with existing wireless systems and effectively managing noise and bit errors pose significant challenges. In this work, we introduce a practical on-device AI communication framework, integrated with physical layer (PHY) communication functions, demonstrated through its performance on a link-level simulator. Our framework incorporates end-to-end training with channel noise to enhance resilience, employs vector quantized variational autoencoders (VQ-VAE) for efficient and robust communication, and utilizes pre-trained encoder-decoder transformers for improved generalization capabilities. Simulations, across various communication scenarios, reveal that our framework achieves a 50% reduction in transmission size while demonstrating substantial generalization ability and noise robustness under standardized 3GPP channel models. | Ju-Hyung Lee, Dong-Ho Lee, Joohan Lee, Jay Pujara
null | null |
2402.11658
| null | null |
http://arxiv.org/pdf/2402.11658v2
|
2024-06-28T15:16:53Z
|
2024-02-18T17:32:53Z
|
Dynamic planning in hierarchical active inference
|
By dynamic planning, we refer to the ability of the human brain to infer and impose motor trajectories related to cognitive decisions. A recent paradigm, active inference, brings fundamental insights into the adaptation of biological organisms, constantly striving to minimize prediction errors to restrict themselves to life-compatible states. Over the past years, many studies have shown how human and animal behavior could be explained in terms of an active inferential process - either as discrete decision-making or continuous motor control - inspiring innovative solutions in robotics and artificial intelligence. Still, the literature lacks a comprehensive outlook on how to effectively plan actions in changing environments. Setting ourselves the goal of modeling tool use, we delve into the topic of dynamic planning in active inference, keeping in mind two crucial aspects of biological goal-directed behavior: the capacity to understand and exploit affordances for object manipulation, and to learn the hierarchical interactions between the self and the environment, including other agents. We start from a simple unit and gradually describe more advanced structures, comparing recently proposed design choices and providing basic examples for each section. This study distances itself from traditional views centered on neural networks and reinforcement learning, and points toward a yet unexplored direction in active inference: hybrid representations in hierarchical models.
|
[
"['Matteo Priorelli' 'Ivilin Peev Stoianov']"
] |
null | null |
2402.11664
| null | null |
http://arxiv.org/pdf/2402.11664v1
|
2024-02-18T17:55:59Z
|
2024-02-18T17:55:59Z
|
Interpretable Short-Term Load Forecasting via Multi-Scale Temporal
Decomposition
|
Rapid progress in machine learning and deep learning has enabled a wide range of applications in the electricity load forecasting of power systems, for instance, univariate and multivariate short-term load forecasting. Though strong capabilities in learning the non-linearity of load patterns and high prediction accuracy have been achieved, the interpretability of typical deep learning models for electricity load forecasting is less studied. This paper proposes an interpretable deep learning method, which learns a linear combination of neural networks that each attends to an input time feature. We also propose a multi-scale time series decomposition method to deal with complex time patterns. Case studies have been carried out on the Belgium central grid load dataset, and the proposed model demonstrates better accuracy compared to frequently applied baseline models. Specifically, the proposed multi-scale temporal decomposition achieves the best MSE, MAE and RMSE of 0.52, 0.57 and 0.72 respectively. As for interpretability, the proposed method not only displays generalization capability but also provides both feature and temporal interpretability, in contrast to other baseline methods. In addition, global time-feature interpretability is obtained, which allows us to capture the overall patterns, trends, and cyclicality in load data while also revealing the significance of various time-related features in forming the final outputs.
|
[
"['Yuqi Jiang' 'Yan Li' 'Yize Chen']"
] |
null | null |
2402.11670
| null | null |
http://arxiv.org/pdf/2402.11670v1
|
2024-02-18T18:16:43Z
|
2024-02-18T18:16:43Z
|
Challenging the Black Box: A Comprehensive Evaluation of Attribution
Maps of CNN Applications in Agriculture and Forestry
|
In this study, we explore the explainability of neural networks in agriculture and forestry, specifically in fertilizer treatment classification and wood identification. The opaque nature of these models, often considered 'black boxes', is addressed through an extensive evaluation of state-of-the-art Attribution Maps (AMs), also known as class activation maps (CAMs) or saliency maps. Our comprehensive qualitative and quantitative analysis of these AMs uncovers critical practical limitations. Findings reveal that AMs frequently fail to consistently highlight crucial features and often misalign with the features considered important by domain experts. These discrepancies raise substantial questions about the utility of AMs in understanding the decision-making process of neural networks. Our study provides critical insights into the trustworthiness and practicality of AMs within the agriculture and forestry sectors, thus facilitating a better understanding of neural networks in these application areas.
|
[
"['Lars Nieradzik' 'Henrike Stephani' 'Jördis Sieburg-Rockel'\n 'Stephanie Helmling' 'Andrea Olbrich' 'Janis Keuper']"
] |
null | null |
2402.11674
| null | null |
http://arxiv.org/pdf/2402.11674v2
|
2024-06-06T09:13:14Z
|
2024-02-18T18:33:48Z
|
A Fast Algorithm to Simulate Nonlinear Resistive Networks
|
Analog electrical networks have long been investigated as energy-efficient computing platforms for machine learning, leveraging analog physics during inference. More recently, resistor networks have sparked particular interest due to their ability to learn using local rules (such as equilibrium propagation), enabling potentially important energy efficiency gains for training as well. Despite their potential advantage, the simulation of these resistor networks has been a significant bottleneck in assessing their scalability, with current methods either being limited to linear networks or relying on realistic, yet slow circuit simulators like SPICE. Assuming ideal circuit elements, we introduce a novel approach for the simulation of nonlinear resistive networks, which we frame as a quadratic programming problem with linear inequality constraints, and which we solve using a fast, exact coordinate descent algorithm. Our simulation methodology significantly outperforms existing SPICE-based simulations, enabling the training of networks up to 327 times larger at speeds 160 times faster, resulting in a 50,000-fold improvement in the ratio of network size to epoch duration. Our approach can foster more rapid progress in the simulations of nonlinear analog electrical networks.
|
[
"['Benjamin Scellier']"
] |
null | null |
2402.11682
| null | null |
http://arxiv.org/pdf/2402.11682v1
|
2024-02-18T19:12:18Z
|
2024-02-18T19:12:18Z
|
Learning Conditional Invariances through Non-Commutativity
|
Invariance learning algorithms that conditionally filter out domain-specific random variables as distractors do so based only on the data semantics, and not the target domain under evaluation. We show that a provably optimal and sample-efficient way of learning conditional invariances is by relaxing the invariance criterion to be non-commutatively directed towards the target domain. Under domain asymmetry, i.e., when the target domain contains semantically relevant information absent in the source, the risk of the encoder $\varphi^*$ that is optimal on average across domains is strictly lower-bounded by the risk of the target-specific optimal encoder $\Phi^*_\tau$. We prove that non-commutativity steers the optimization towards $\Phi^*_\tau$ instead of $\varphi^*$, bringing the $\mathcal{H}$-divergence between domains down to zero, leading to a stricter bound on the target risk. Both our theory and experiments demonstrate that non-commutative invariance (NCI) can leverage source domain samples to meet the sample complexity needs of learning $\Phi^*_\tau$, surpassing SOTA invariance learning algorithms for domain adaptation, at times by over $2\%$, approaching the performance of an oracle. Implementation is available at https://github.com/abhrac/nci.
|
[
"['Abhra Chaudhuri' 'Serban Georgescu' 'Anjan Dutta']"
] |
null | null |
2402.11686
| null | null |
http://arxiv.org/pdf/2402.11686v2
|
2024-03-29T19:16:16Z
|
2024-02-18T19:31:26Z
|
Learning the Topology and Behavior of Discrete Dynamical Systems
|
Discrete dynamical systems are commonly used to model the spread of contagions on real-world networks. Under the PAC framework, existing research has studied the problem of learning the behavior of a system, assuming that the underlying network is known. In this work, we focus on a more challenging setting: to learn both the behavior and the underlying topology of a black-box system. We show that, in general, this learning problem is computationally intractable. On the positive side, we present efficient learning methods under the PAC model when the underlying graph of the dynamical system belongs to some classes. Further, we examine a relaxed setting where the topology of an unknown system is partially observed. For this case, we develop an efficient PAC learner to infer the system and establish the sample complexity. Lastly, we present a formal analysis of the expressive power of the hypothesis class of dynamical systems where both the topology and behavior are unknown, using the well-known formalism of the Natarajan dimension. Our results provide a theoretical foundation for learning both the behavior and topology of discrete dynamical systems.
|
[
"['Zirou Qiu' 'Abhijin Adiga' 'Madhav V. Marathe' 'S. S. Ravi'\n 'Daniel J. Rosenkrantz' 'Richard E. Stearns' 'Anil Vullikanti']"
] |
null | null |
2402.11687
| null | null |
http://arxiv.org/pdf/2402.11687v1
|
2024-02-18T19:35:30Z
|
2024-02-18T19:35:30Z
|
Evaluating Efficacy of Model Stealing Attacks and Defenses on Quantum
Neural Networks
|
Cloud hosting of quantum machine learning (QML) models exposes them to a range of vulnerabilities, the most significant of which is the model stealing attack. In this study, we assess the efficacy of such attacks in the realm of quantum computing. We conducted comprehensive experiments on various datasets with multiple QML model architectures. Our findings revealed that model stealing attacks can produce clone models achieving up to $0.9\times$ and $0.99\times$ clone test accuracy when trained using Top-$1$ and Top-$k$ labels, respectively ($k$: num_classes). To defend against these attacks, we leverage the unique properties of current noisy hardware and perturb the victim model outputs to hinder the attacker's training process. In particular, we propose: 1) hardware variation-induced perturbation (HVIP) and 2) hardware and architecture variation-induced perturbation (HAVIP). Although noise and architectural variability can provide up to $\sim16\%$ output obfuscation, our comprehensive analysis revealed that models cloned under noisy conditions tend to be resilient, suffering little to no performance degradation due to such obfuscations. Despite limited success with our defense techniques, this outcome has led to an important discovery: QML models trained on noisy hardware are naturally resistant to perturbation or obfuscation-based defenses or attacks.
|
[
"['Satwik Kundu' 'Debarshi Kundu' 'Swaroop Ghosh']"
] |
null | null |
2402.11701
| null | null |
http://arxiv.org/pdf/2402.11701v2
|
2024-04-12T10:36:28Z
|
2024-02-18T20:47:33Z
|
Explaining the Machine Learning Solution of the Ising Model
|
As powerful as machine learning (ML) techniques are in solving problems involving data with large dimensionality, explaining the results from the fitted parameters remains a challenging task of utmost importance, especially in physics applications. This work shows how this can be accomplished for the ferromagnetic Ising model, the main target of several ML studies in statistical physics. Here it is demonstrated that the successful unsupervised identification of the phases and order parameter by principal component analysis, a common method in those studies, detects that the magnetization per spin has its greatest variation with the temperature, the actual control parameter of the phase transition. Then, by using a neural network (NN) without hidden layers (the simplest possible) and informed by the symmetry of the Hamiltonian, an explanation is provided for the strategy used in finding the supervised learning solution for the critical temperature of the model's continuous phase transition. This allows the prediction of the minimal extension of the NN to solve the problem when the symmetry is not known, which becomes also explainable. These results pave the way to a physics-informed explainable generalized framework, enabling the extraction of physical laws and principles from the parameters of the models.
|
[
"['Roberto C. Alamino']"
] |
null | null |
2402.11702
| null | null |
http://arxiv.org/abs/2402.11702v2
|
2024-03-16T22:16:40Z
|
2024-02-18T20:48:09Z
|
Can ChatGPT Support Developers? An Empirical Evaluation of Large
Language Models for Code Generation
|
Large language models (LLMs) have demonstrated notable proficiency in code generation, with numerous prior studies showing their promising capabilities in various development scenarios. However, these studies mainly provide evaluations in research settings, which leaves a significant gap in understanding how effectively LLMs can support developers in real-world settings. To address this, we conducted an empirical analysis of conversations in DevGPT, a dataset collected from developers' conversations with ChatGPT (captured with the Share Link feature on platforms such as GitHub). Our empirical findings indicate that the current practice of using LLM-generated code is typically limited to either demonstrating high-level concepts or providing examples in documentation, rather than for use as production-ready code. These findings indicate that much future work is needed to improve LLMs in code generation before they can become integral parts of modern software development.
|
[
"['Kailun Jin' 'Chung-Yu Wang' 'Hung Viet Pham' 'Hadi Hemmati']"
] |
null | null |
2402.11705
| null | null |
http://arxiv.org/pdf/2402.11705v2
|
2024-04-02T03:04:09Z
|
2024-02-18T21:01:49Z
|
Learning Memory Kernels in Generalized Langevin Equations
|
We introduce a novel approach for learning memory kernels in Generalized Langevin Equations. This approach initially utilizes a regularized Prony method to estimate correlation functions from trajectory data, followed by regression over a Sobolev norm-based loss function with RKHS regularization. Our method guarantees improved performance within an exponentially weighted $L^2$ space, with the kernel estimation error controlled by the error in estimated correlation functions. We demonstrate the superiority of our estimator compared to other regression estimators that rely on $L^2$ loss functions and also an estimator derived from the inverse Laplace transform, using numerical examples that highlight its consistent advantage across various weight parameter selections. Additionally, we provide examples that include the application of force and drift terms in the equation.
|
[
"['Quanjun Lang' 'Jianfeng Lu']"
] |
null | null |
2402.11722
| null | null |
http://arxiv.org/pdf/2402.11722v1
|
2024-02-18T22:16:43Z
|
2024-02-18T22:16:43Z
|
Invertible Fourier Neural Operators for Tackling Both Forward and
Inverse Problems
|
Fourier Neural Operator (FNO) is a popular operator learning method, which has demonstrated state-of-the-art performance across many tasks. However, FNO is mainly used in forward prediction, yet a large family of applications rely on solving inverse problems. In this paper, we propose an invertible Fourier Neural Operator (iFNO) that tackles both the forward and inverse problems. We designed a series of invertible Fourier blocks in the latent channel space to share the model parameters, efficiently exchange the information, and mutually regularize the learning for the bi-directional tasks. We integrated a variational auto-encoder to capture the intrinsic structures within the input space and to enable posterior inference so as to overcome challenges of ill-posedness, data shortage, noise, etc. We developed a three-step pre-training and fine-tuning process for efficient training. The evaluations on five benchmark problems have demonstrated the effectiveness of our approach.
|
[
"['Da Long' 'Shandian Zhe']"
] |
null | null |
2402.11728
| null | null |
http://arxiv.org/pdf/2402.11728v1
|
2024-02-18T22:55:26Z
|
2024-02-18T22:55:26Z
|
Numerical Claim Detection in Finance: A New Financial Dataset,
Weak-Supervision Model, and Market Analysis
|
In this paper, we investigate the influence of claims in analyst reports and earnings calls on financial market returns, considering them as significant quarterly events for publicly traded companies. To facilitate a comprehensive analysis, we construct a new financial dataset for the claim detection task in the financial domain. We benchmark various language models on this dataset and propose a novel weak-supervision model that incorporates the knowledge of subject matter experts (SMEs) in the aggregation function, outperforming existing approaches. Furthermore, we demonstrate the practical utility of our proposed model by constructing a novel measure, ``optimism''. Moreover, we observe the dependence of earnings surprise and return on our optimism measure. Our dataset, models, and code will be made publicly (under CC BY 4.0 license) available on GitHub and Hugging Face.
|
[
"['Agam Shah' 'Arnav Hiray' 'Pratvi Shah' 'Arkaprabha Banerjee'\n 'Anushka Singh' 'Dheeraj Eidnani' 'Bhaskar Chaudhury' 'Sudheer Chava']"
] |
null | null |
2402.11729
| null | null |
http://arxiv.org/pdf/2402.11729v2
|
2024-06-20T00:29:16Z
|
2024-02-18T23:01:28Z
|
Prospector Heads: Generalized Feature Attribution for Large Models &
Data
|
Feature attribution, the ability to localize regions of the input data that are relevant for classification, is an important capability for ML models in scientific and biomedical domains. Current methods for feature attribution, which rely on "explaining" the predictions of end-to-end classifiers, suffer from imprecise feature localization and are inadequate for use with small sample sizes and high-dimensional datasets due to computational challenges. We introduce prospector heads, an efficient and interpretable alternative to explanation-based attribution methods that can be applied to any encoder and any data modality. Prospector heads generalize across modalities through experiments on sequences (text), images (pathology), and graphs (protein structures), outperforming baseline attribution methods by up to 26.3 points in mean localization AUPRC. We also demonstrate how prospector heads enable improved interpretation and discovery of class-specific patterns in input data. Through their high performance, flexibility, and generalizability, prospectors provide a framework for improving trust and transparency for ML models in complex domains.
|
[
"['Gautam Machiraju' 'Alexander Derry' 'Arjun Desai' 'Neel Guha'\n 'Amir-Hossein Karimi' 'James Zou' 'Russ Altman' 'Christopher Ré'\n 'Parag Mallick']"
] |
null | null |
2402.11733
| null | null |
http://arxiv.org/pdf/2402.11733v1
|
2024-02-18T23:14:40Z
|
2024-02-18T23:14:40Z
|
The Effectiveness of Random Forgetting for Robust Generalization
|
Deep neural networks are susceptible to adversarial attacks, which can compromise their performance and accuracy. Adversarial Training (AT) has emerged as a popular approach for protecting neural networks against such attacks. However, a key challenge of AT is robust overfitting, where the network's robust performance on test data deteriorates with further training, thus hindering generalization. Motivated by the concept of active forgetting in the brain, we introduce a novel learning paradigm called "Forget to Mitigate Overfitting (FOMO)". FOMO alternates between the forgetting phase, which randomly forgets a subset of weights and regulates the model's information through weight reinitialization, and the relearning phase, which emphasizes learning generalizable features. Our experiments on benchmark datasets and adversarial attacks show that FOMO alleviates robust overfitting by significantly reducing the gap between the best and last robust test accuracy while improving the state-of-the-art robustness. Furthermore, FOMO provides a better trade-off between standard and robust accuracy, outperforming baseline adversarial methods. Finally, our framework is robust to AutoAttacks and increases generalization in many real-world scenarios.
|
[
"['Vijaya Raghavan T Ramkumar' 'Bahram Zonooz' 'Elahe Arani']"
] |
null | null |
2402.11736
| null | null |
http://arxiv.org/pdf/2402.11736v1
|
2024-02-18T23:39:00Z
|
2024-02-18T23:39:00Z
|
Monte Carlo with kernel-based Gibbs measures: Guarantees for
probabilistic herding
|
Kernel herding belongs to a family of deterministic quadratures that seek to minimize the worst-case integration error over a reproducing kernel Hilbert space (RKHS). In spite of strong experimental support, it has remained difficult to prove that this worst-case error decreases at a faster rate than the standard square root of the number of quadrature nodes, at least in the usual case where the RKHS is infinite-dimensional. In this theoretical paper, we study a joint probability distribution over quadrature nodes, whose support tends to minimize the same worst-case error as kernel herding. We prove that it does outperform i.i.d. Monte Carlo, in the sense of coming with a tighter concentration inequality on the worst-case integration error. While not yet improving the rate, this demonstrates that the mathematical tools of the study of Gibbs measures can help understand to what extent kernel herding and its variants improve on computationally cheaper methods. Moreover, we provide early experimental evidence that a faster rate of convergence, though not worst-case, is likely.
|
[
"['Martin Rouault' 'Rémi Bardenet' 'Mylène Maïda']"
] |
null | null |
2402.11737
| null | null |
http://arxiv.org/pdf/2402.11737v1
|
2024-02-18T23:41:38Z
|
2024-02-18T23:41:38Z
|
Compression Repair for Feedforward Neural Networks Based on Model
Equivalence Evaluation
|
In this paper, we propose a method of repairing compressed Feedforward Neural Networks (FNNs) based on equivalence evaluation of two neural networks. In the repairing framework, a novel neural network equivalence evaluation method is developed to compute the output discrepancy between two neural networks. The output discrepancy can quantitatively characterize the output difference produced by compression procedures. Based on the computed output discrepancy, the repairing method first initializes a new training set for the compressed networks to narrow down the discrepancy between the two neural networks and improve the performance of the compressed network. Then, we repair the compressed FNN by re-training based on the training set. We apply our developed method to the MNIST dataset to demonstrate the effectiveness and advantages of our proposed repair method.
|
[
"['Zihao Mo' 'Yejiang Yang' 'Shuaizheng Lu' 'Weiming Xiang']"
] |
null | null |
2402.11739
| null | null |
http://arxiv.org/pdf/2402.11739v1
|
2024-02-18T23:49:18Z
|
2024-02-18T23:49:18Z
|
A Transition System Abstraction Framework for Neural Network Dynamical
System Models
|
This paper proposes a transition system abstraction framework for neural network dynamical system models to enhance the model interpretability, with applications to complex dynamical systems such as human behavior learning and verification. To begin with, the localized working zone is segmented into multiple localized partitions under the data-driven Maximum Entropy (ME) partitioning method. Then, the transition matrix is obtained based on the set-valued reachability analysis of neural networks. Finally, applications to human handwriting dynamics learning and verification are given to validate our proposed abstraction framework, which demonstrates the advantages of enhancing the interpretability of the black-box model, i.e., our proposed framework is able to abstract a data-driven neural network model into a transition system, making the neural network model interpretable through verifying specifications described in Computational Tree Logic (CTL) languages.
|
[
"['Yejiang Yang' 'Zihao Mo' 'Hoang-Dung Tran' 'Weiming Xiang']"
] |
null | null |
2402.11740
| null | null |
http://arxiv.org/abs/2402.11740v3
|
2024-06-27T01:49:19Z
|
2024-02-18T23:54:35Z
|
Extraction of nonlinearity in neural networks with Koopman operator
|
Nonlinearity plays a crucial role in deep neural networks. In this paper, we investigate the degree to which the nonlinearity of the neural network is essential. For this purpose, we employ the Koopman operator, extended dynamic mode decomposition, and the tensor-train format. The Koopman operator approach has been recently developed in physics and nonlinear sciences; the Koopman operator deals with the time evolution in the observable space instead of the state space. Since we can replace the nonlinearity in the state space with linearity in the observable space, it is a promising candidate for understanding complex behavior in nonlinear systems. Here, we analyze learned neural networks for classification problems. As a result, replacing the nonlinear middle layers with the Koopman matrix yields sufficient accuracy in numerical experiments. In addition, we confirm that pruning the Koopman matrix gives sufficient accuracy even at high compression ratios. These results indicate the possibility of extracting some features of neural networks with the Koopman operator approach.
|
[
"['Naoki Sugishita' 'Kayo Kinjo' 'Jun Ohkubo']"
] |
null | null |
2402.11742
| null | null |
http://arxiv.org/pdf/2402.11742v2
|
2024-06-03T14:09:10Z
|
2024-02-18T23:59:54Z
|
Balanced Data, Imbalanced Spectra: Unveiling Class Disparities with
Spectral Imbalance
|
Classification models are expected to perform equally well for different classes, yet in practice, there are often large gaps in their performance. This issue of class bias is widely studied in cases of datasets with sample imbalance, but is relatively overlooked in balanced datasets. In this work, we introduce the concept of spectral imbalance in features as a potential source for class disparities and study the connections between spectral imbalance and class bias in both theory and practice. To build the connection between spectral imbalance and class gap, we develop a theoretical framework for studying class disparities and derive exact expressions for the per-class error in a high-dimensional mixture model setting. We then study this phenomenon in 11 different state-of-the-art pretrained encoders and show how our proposed framework can be used to compare the quality of encoders, as well as evaluate and combine data augmentation strategies to mitigate the issue. Our work sheds light on the class-dependent effects of learning, and provides new insights into how state-of-the-art pretrained features may have unknown biases that can be diagnosed through their spectra.
|
[
"['Chiraag Kaushik' 'Ran Liu' 'Chi-Heng Lin' 'Amrit Khera' 'Matthew Y Jin'\n 'Wenrui Ma' 'Vidya Muthukumar' 'Eva L Dyer']"
] |
null | null |
2402.11747
| null | null |
http://arxiv.org/abs/2402.11747v1
|
2024-02-19T00:21:07Z
|
2024-02-19T00:21:07Z
|
Parameter Efficient Finetuning for Speech Emotion Recognition and Domain
Adaptation
|
Foundation models have shown superior performance for speech emotion recognition (SER). However, given the limited data in emotion corpora, finetuning all parameters of large pre-trained models for SER can be both resource-intensive and susceptible to overfitting. This paper investigates parameter-efficient finetuning (PEFT) for SER. Various PEFT adaptors are systematically studied for both classification of discrete emotion categories and prediction of dimensional emotional attributes. The results demonstrate that the combination of PEFT methods surpasses full finetuning with a significant reduction in the number of trainable parameters. Furthermore, a two-stage adaptation strategy is proposed to adapt models trained on acted emotion data, which is more readily available, to make the model more adept at capturing natural emotional expressions. Both intra- and cross-corpus experiments validate the efficacy of the proposed approach in enhancing the performance on both the source and target domains.
|
[
"['Nineli Lashkarashvili' 'Wen Wu' 'Guangzhi Sun' 'Philip C. Woodland']"
] |
null | null |
2402.11752
| null | null |
http://arxiv.org/pdf/2402.11752v2
|
2024-02-20T02:58:38Z
|
2024-02-19T00:43:22Z
|
Diagonalisation SGD: Fast & Convergent SGD for Non-Differentiable Models
via Reparameterisation and Smoothing
|
It is well-known that the reparameterisation gradient estimator, which exhibits low variance in practice, is biased for non-differentiable models. This may compromise correctness of gradient-based optimisation methods such as stochastic gradient descent (SGD). We introduce a simple syntactic framework to define non-differentiable functions piecewise and present a systematic approach to obtain smoothings for which the reparameterisation gradient estimator is unbiased. Our main contribution is a novel variant of SGD, Diagonalisation Stochastic Gradient Descent, which progressively enhances the accuracy of the smoothed approximation during optimisation, and we prove convergence to stationary points of the unsmoothed (original) objective. Our empirical evaluation reveals benefits over the state of the art: our approach is simple, fast, stable and attains orders of magnitude reduction in work-normalised variance.
|
[
"['Dominik Wagner' 'Basim Khajwal' 'C. -H. Luke Ong']"
] |
null | null |
2402.11755
| null | null |
http://arxiv.org/pdf/2402.11755v1
|
2024-02-19T00:53:48Z
|
2024-02-19T00:53:48Z
|
SPML: A DSL for Defending Language Models Against Prompt Attacks
|
Large language models (LLMs) have profoundly transformed natural language applications, with a growing reliance on instruction-based definitions for designing chatbots. However, post-deployment, chatbot definitions are fixed and are vulnerable to attacks by malicious users, emphasizing the need to prevent unethical applications and financial losses. Existing studies explore user prompts' impact on LLM-based chatbots, yet practical methods to contain attacks on application-specific chatbots remain unexplored. This paper presents System Prompt Meta Language (SPML), a domain-specific language for refining prompts and monitoring the inputs to LLM-based chatbots. SPML actively checks attack prompts, ensuring user inputs align with chatbot definitions to prevent malicious execution on the LLM backbone, optimizing costs. It also streamlines chatbot definition crafting with programming language capabilities, overcoming natural language design challenges. Additionally, we introduce a groundbreaking benchmark with 1.8k system prompts and 20k user inputs, offering the inaugural language and benchmark for chatbot definition evaluation. Experiments across datasets demonstrate SPML's proficiency in understanding attacker prompts, surpassing models like GPT-4, GPT-3.5, and LLAMA. Our data and codes are publicly available at: https://prompt-compiler.github.io/SPML/.
|
[
"['Reshabh K Sharma' 'Vinayak Gupta' 'Dan Grossman']"
] |
null | null |
2402.11756
| null | null |
http://arxiv.org/pdf/2402.11756v3
|
2024-06-08T20:40:55Z
|
2024-02-19T01:04:22Z
|
MARS: Meaning-Aware Response Scoring for Uncertainty Estimation in
Generative LLMs
|
Generative Large Language Models (LLMs) are widely utilized for their excellence in various tasks. However, their tendency to produce inaccurate or misleading outputs poses a potential risk, particularly in high-stakes environments. Therefore, estimating the correctness of generative LLM outputs is an important task for enhanced reliability. Uncertainty Estimation (UE) in generative LLMs is an evolving domain, where SOTA probability-based methods commonly employ length-normalized scoring. In this work, we propose Meaning-Aware Response Scoring (MARS) as an alternative to length-normalized scoring for UE methods. MARS is a novel scoring function that considers the semantic contribution of each token in the generated sequence in the context of the question. We demonstrate that integrating MARS into UE methods results in a universal and significant improvement in UE performance. We conduct experiments using three distinct closed-book question-answering datasets across five popular pre-trained LLMs. Lastly, we validate the efficacy of MARS on a Medical QA dataset. Code can be found at https://github.com/Ybakman/LLM_Uncertainity.
|
[
"['Yavuz Faruk Bakman' 'Duygu Nur Yaldiz' 'Baturalp Buyukates'\n 'Chenyang Tao' 'Dimitrios Dimitriadis' 'Salman Avestimehr']"
] |
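The length-normalized scoring that MARS replaces, and the token-weighted alternative it motivates, can be sketched in a few lines. This is a toy illustration: the importance weights below are hypothetical stand-ins for MARS's semantic contribution scores, not the paper's actual weighting function.

```python
import math

def length_normalized_score(token_logprobs):
    # Standard UE baseline: geometric-mean likelihood of the sequence,
    # i.e. exp of the average token log-probability.
    return math.exp(sum(token_logprobs) / len(token_logprobs))

def weighted_score(token_logprobs, weights):
    # MARS-style idea (sketch): weight each token's log-probability by an
    # importance weight summing to 1, instead of the uniform 1/L used by
    # length normalization.
    return math.exp(sum(w * lp for w, lp in zip(weights, token_logprobs)))

lps = [-0.1, -2.0, -0.1]      # toy token log-probabilities
uniform = [1/3, 1/3, 1/3]     # reduces to length normalization
focused = [0.1, 0.8, 0.1]     # emphasizes the uncertain middle token
```

With uniform weights the two scores coincide; shifting weight onto the low-confidence token lowers the response score, which is the behavior a meaning-aware scheme exploits.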
null | null |
2402.11760
| null | null |
http://arxiv.org/pdf/2402.11760v1
|
2024-02-19T01:17:52Z
|
2024-02-19T01:17:52Z
|
Reinforcement Learning as a Parsimonious Alternative to Prediction
Cascades: A Case Study on Image Segmentation
|
Deep learning architectures have achieved state-of-the-art (SOTA) performance on computer vision tasks such as object detection and image segmentation. This may be attributed to the use of over-parameterized, monolithic deep learning architectures executed on large datasets. Although such architectures lead to increased accuracy, this is usually accompanied by a large increase in computation and memory requirements during inference. While this is a non-issue in traditional machine learning pipelines, the recent confluence of machine learning and fields like the Internet of Things has rendered such large architectures infeasible for execution in low-resource settings. In such settings, previous efforts have proposed decision cascades where inputs are passed through models of increasing complexity until desired performance is achieved. However, we argue that cascaded prediction leads to increased computational cost due to wasteful intermediate computations. To address this, we propose PaSeR (Parsimonious Segmentation with Reinforcement Learning), a non-cascading, cost-aware learning pipeline as an alternative to cascaded architectures. Through experimental evaluation on real-world and standard datasets, we demonstrate that PaSeR achieves better accuracy while minimizing computational cost relative to cascaded models. Further, we introduce a new metric, IoU/GigaFlop, to evaluate the balance between cost and performance. On the real-world task of battery material phase segmentation, PaSeR yields a minimum performance improvement of 174% on the IoU/GigaFlop metric with respect to baselines. We also demonstrate PaSeR's adaptability to complementary models trained on a noisy MNIST dataset, where it achieved a minimum performance improvement on IoU/GigaFlop of 13.4% over SOTA models. Code and data are available at https://github.com/scailab/paser .
|
[
"['Bharat Srikishan' 'Anika Tabassum' 'Srikanth Allu' 'Ramakrishnan Kannan'\n 'Nikhil Muralidhar']"
] |
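The IoU/GigaFlop metric is simply segmentation quality per unit of compute. A minimal sketch (the set-based IoU and the FLOP count here are illustrative, not PaSeR's implementation):

```python
def iou(pred, target):
    # Intersection-over-union for binary masks given as sets of pixels.
    inter = len(pred & target)
    union = len(pred | target)
    return inter / union if union else 1.0

def iou_per_gigaflop(pred, target, flops):
    # The cost-aware metric: segmentation quality divided by the
    # compute (in GigaFLOPs) spent to produce the prediction.
    return iou(pred, target) / (flops / 1e9)

pred = {(0, 0), (0, 1), (1, 0)}   # predicted foreground pixels
gt   = {(0, 0), (0, 1), (1, 1)}   # ground-truth foreground pixels
```

Here IoU is 2/4 = 0.5, so a model spending 2 GigaFLOPs scores 0.25 IoU/GigaFlop; a cheaper model with the same mask would score higher.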
null | null |
2402.11771
| null | null |
http://arxiv.org/pdf/2402.11771v1
|
2024-02-19T01:55:55Z
|
2024-02-19T01:55:55Z
|
Evaluating the Effectiveness of Index-Based Treatment Allocation
|
When resources are scarce, an allocation policy is needed to decide who receives a resource. This problem occurs, for instance, when allocating scarce medical resources and is often solved using modern ML methods. This paper introduces methods to evaluate index-based allocation policies -- that allocate a fixed number of resources to those who need them the most -- by using data from a randomized control trial. Such policies create dependencies between agents, which render the assumptions behind standard statistical tests invalid and limit the effectiveness of estimators. Addressing these challenges, we translate and extend recent ideas from the statistics literature to present an efficient estimator and methods for computing asymptotically correct confidence intervals. This enables us to effectively draw valid statistical conclusions, a critical gap in previous work. Our extensive experiments validate our methodology in practical settings, while also showcasing its statistical power. We conclude by proposing and empirically verifying extensions of our methodology that enable us to reevaluate a past randomized control trial to evaluate different ML allocation policies in the context of a mHealth program, drawing previously invisible conclusions.
|
[
"['Niclas Boehmer' 'Yash Nair' 'Sanket Shah' 'Lucas Janson' 'Aparna Taneja'\n 'Milind Tambe']"
] |
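An index-based allocation policy of the kind being evaluated can be stated concretely. This sketch shows only the allocation rule itself (top-k by need index), not the paper's estimators or confidence-interval machinery:

```python
import numpy as np

def index_allocate(need_scores, k):
    # Index-based policy (sketch): give the k available resources to the
    # k agents with the highest need index; ties broken by agent id.
    order = np.argsort(-np.asarray(need_scores, dtype=float), kind="stable")
    chosen = np.zeros(len(need_scores), dtype=bool)
    chosen[order[:k]] = True
    return chosen

scores = [0.2, 0.9, 0.5, 0.9, 0.1]   # hypothetical need indices
mask = index_allocate(scores, 2)      # agents 1 and 3 receive the resource
```

Note the dependence the paper highlights: whether agent 2 is treated depends on every other agent's score, which is what invalidates standard i.i.d. test assumptions.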
null | null |
2402.11773
| null | null |
http://arxiv.org/pdf/2402.11773v2
|
2024-02-22T01:17:29Z
|
2024-02-19T02:06:04Z
|
Dynamic Multi-Network Mining of Tensor Time Series
|
Subsequence clustering of time series is an essential task in data mining, and interpreting the resulting clusters is also crucial since we generally do not have prior knowledge of the data. Thus, given a large collection of tensor time series consisting of multiple modes, including timestamps, how can we achieve subsequence clustering for tensor time series and provide interpretable insights? In this paper, we propose a new method, Dynamic Multi-network Mining (DMM), that converts a tensor time series into a set of segment groups of various lengths (i.e., clusters) characterized by a dependency network constrained with l1-norm. Our method has the following properties. (a) Interpretable: it characterizes the cluster with multiple networks, each of which is a sparse dependency network of a corresponding non-temporal mode, and thus provides visible and interpretable insights into the key relationships. (b) Accurate: it discovers the clusters with distinct networks from tensor time series according to the minimum description length (MDL). (c) Scalable: it scales linearly in terms of the input data size when solving a non-convex problem to optimize the number of segments and clusters, and thus it is applicable to long-range and high-dimensional tensors. Extensive experiments with synthetic datasets confirm that our method outperforms the state-of-the-art methods in terms of clustering accuracy. We then use real datasets to demonstrate that DMM is useful for providing interpretable insights from tensor time series.
|
[
"['Kohei Obata' 'Koki Kawabata' 'Yasuko Matsubara' 'Yasushi Sakurai']"
] |
null | null |
2402.11775
| null | null |
http://arxiv.org/pdf/2402.11775v1
|
2024-02-19T02:07:15Z
|
2024-02-19T02:07:15Z
|
FOD-Swin-Net: angular super resolution of fiber orientation distribution
using a transformer-based deep model
|
Identifying and characterizing brain fiber bundles can help to understand many diseases and conditions. An important step in this process is the estimation of fiber orientations using Diffusion-Weighted Magnetic Resonance Imaging (DW-MRI). However, obtaining robust orientation estimates demands high-resolution data, leading to lengthy acquisitions that are not always clinically available. In this work, we explore the use of automated angular super resolution from faster acquisitions to overcome this challenge. Using the publicly available Human Connectome Project (HCP) DW-MRI data, we trained a transformer-based deep learning architecture to achieve angular super resolution in fiber orientation distribution (FOD). Our patch-based methodology, FOD-Swin-Net, is able to bring a single-shell reconstruction driven from 32 directions to be comparable to a multi-shell 288 direction FOD reconstruction, greatly reducing the number of required directions on initial acquisition. Evaluations of the reconstructed FOD with Angular Correlation Coefficient and qualitative visualizations reveal superior performance than the state-of-the-art in HCP testing data. Open source code for reproducibility is available at https://github.com/MICLab-Unicamp/FOD-Swin-Net.
|
[
"['Mateus Oliveira da Silva' 'Caio Pinheiro Santana'\n 'Diedre Santos do Carmo' 'Letícia Rittner']"
] |
null | null |
2402.11777
| null | null |
http://arxiv.org/pdf/2402.11777v1
|
2024-02-19T02:08:03Z
|
2024-02-19T02:08:03Z
|
Uncovering Latent Human Wellbeing in Language Model Embeddings
|
Do language models implicitly learn a concept of human wellbeing? We explore this through the ETHICS Utilitarianism task, assessing if scaling enhances pretrained models' representations. Our initial finding reveals that, without any prompt engineering or finetuning, the leading principal component from OpenAI's text-embedding-ada-002 achieves 73.9% accuracy. This closely matches the 74.6% of BERT-large finetuned on the entire ETHICS dataset, suggesting pretraining conveys some understanding about human wellbeing. Next, we consider four language model families, observing how Utilitarianism accuracy varies with increased parameters. We find performance is nondecreasing with increased model size when using sufficient numbers of principal components.
|
[
"['Pedro Freire' 'ChengCheng Tan' 'Adam Gleave' 'Dan Hendrycks'\n 'Scott Emmons']"
] |
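The probing recipe described above, projecting embeddings onto their leading principal component and thresholding, is easy to sketch. The embeddings below are hypothetical synthetic stand-ins, not the ETHICS data or text-embedding-ada-002 vectors:

```python
import numpy as np

def leading_pc_scores(embeddings):
    # Project mean-centered embeddings onto their leading principal component.
    X = embeddings - embeddings.mean(axis=0)
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return X @ vt[0]

# Toy stand-in for sentence embeddings: two classes separated along one
# latent direction plus noise.
rng = np.random.default_rng(1)
direction = rng.standard_normal(16)
direction /= np.linalg.norm(direction)
labels = np.repeat([0, 1], 50)
emb = np.outer(2.0 * labels - 1.0, direction) + 0.1 * rng.standard_normal((100, 16))

scores = leading_pc_scores(emb)
pred = (scores > 0).astype(int)
# PC sign is arbitrary, so check both orientations of the threshold rule.
acc = max((pred == labels).mean(), ((1 - pred) == labels).mean())
```

When a single direction in embedding space encodes the label, as the paper suggests for wellbeing, this unsupervised projection alone classifies well.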
null | null |
2402.11778
| null | null |
http://arxiv.org/pdf/2402.11778v2
|
2024-06-24T14:23:30Z
|
2024-02-19T02:08:09Z
|
Towards Theoretical Understandings of Self-Consuming Generative Models
|
This paper tackles the emerging challenge of training generative models within a self-consuming loop, wherein successive generations of models are recursively trained on mixtures of real and synthetic data from previous generations. We construct a theoretical framework to rigorously evaluate how this training procedure impacts the data distributions learned by future models, including parametric and non-parametric models. Specifically, we derive bounds on the total variation (TV) distance between the synthetic data distributions produced by future models and the original real data distribution under various mixed training scenarios for diffusion models with a one-hidden-layer neural network score function. Our analysis demonstrates that this distance can be effectively controlled under the condition that mixed training dataset sizes or proportions of real data are large enough. Interestingly, we further unveil a phase transition induced by expanding synthetic data amounts, proving theoretically that while the TV distance exhibits an initial ascent, it declines beyond a threshold point. Finally, we present results for kernel density estimation, delivering nuanced insights such as the impact of mixed data training on error propagation.
|
[
"['Shi Fu' 'Sen Zhang' 'Yingjie Wang' 'Xinmei Tian' 'Dacheng Tao']"
] |
null | null |
2402.11782
| null | null |
http://arxiv.org/pdf/2402.11782v1
|
2024-02-19T02:15:34Z
|
2024-02-19T02:15:34Z
|
What Evidence Do Language Models Find Convincing?
|
Retrieval-augmented language models are being increasingly tasked with subjective, contentious, and conflicting queries such as "is aspartame linked to cancer". To resolve these ambiguous queries, one must search through a large range of websites and consider "which, if any, of this evidence do I find convincing?". In this work, we study how LLMs answer this question. In particular, we construct ConflictingQA, a dataset that pairs controversial queries with a series of real-world evidence documents that contain different facts (e.g., quantitative results), argument styles (e.g., appeals to authority), and answers (Yes or No). We use this dataset to perform sensitivity and counterfactual analyses to explore which text features most affect LLM predictions. Overall, we find that current models rely heavily on the relevance of a website to the query, while largely ignoring stylistic features that humans find important such as whether a text contains scientific references or is written with a neutral tone. Taken together, these results highlight the importance of RAG corpus quality (e.g., the need to filter misinformation), and possibly even a shift in how LLMs are trained to better align with human judgements.
|
[
"['Alexander Wan' 'Eric Wallace' 'Dan Klein']"
] |
null | null |
2402.11789
| null | null |
http://arxiv.org/pdf/2402.11789v1
|
2024-02-19T02:32:45Z
|
2024-02-19T02:32:45Z
|
Statistical Test for Generated Hypotheses by Diffusion Models
|
The enhanced performance of AI has accelerated its integration into scientific research. In particular, the use of generative AI to create scientific hypotheses is promising and is increasingly being applied across various fields. However, when employing AI-generated hypotheses for critical decisions, such as medical diagnoses, verifying their reliability is crucial. In this study, we consider a medical diagnostic task using generated images by diffusion models, and propose a statistical test to quantify its reliability. The basic idea behind the proposed statistical test is to employ a selective inference framework, where we consider a statistical test conditional on the fact that the generated images are produced by a trained diffusion model. Using the proposed method, the statistical reliability of medical image diagnostic results can be quantified in the form of a p-value, allowing for decision-making with a controlled error rate. We show the theoretical validity of the proposed statistical test and its effectiveness through numerical experiments on synthetic and brain image datasets.
|
[
"['Teruyuki Katsuoka' 'Tomohiro Shiraishi' 'Daiki Miwa' 'Vo Nguyen Le Duy'\n 'Ichiro Takeuchi']"
] |
null | null |
2402.11793
| null | null |
http://arxiv.org/pdf/2402.11793v3
|
2024-02-27T03:18:55Z
|
2024-02-19T02:48:40Z
|
Generative Kaleidoscopic Networks
|
We discovered that neural networks, especially deep ReLU networks, demonstrate an `over-generalization' phenomenon. That is, the output values for inputs that were not seen during training are mapped close to the output range that was observed during the learning process. In other words, the neural networks learn a many-to-one mapping, and this effect is more prominent as we increase the number of layers or the depth of the neural network. We utilize this property of neural networks to design a dataset kaleidoscope, termed `Generative Kaleidoscopic Networks'. Briefly, if we learn a model to map an input $x\in\mathbb{R}^D$ to itself, $f_{\mathcal{N}}(x)\rightarrow x$, the proposed `Kaleidoscopic sampling' procedure starts with random input noise $z\in\mathbb{R}^D$ and recursively applies $f_{\mathcal{N}}(\cdots f_{\mathcal{N}}(z)\cdots)$. After a burn-in period, we start observing samples from the input distribution, and the quality of the recovered samples improves as we increase the depth of the model. Scope: We observed this phenomenon to various degrees for other deep learning architectures such as CNNs, Transformers, and U-Nets, and we are currently investigating them further.
|
[
"['Harsh Shrivastava']"
] |
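The Kaleidoscopic sampling loop itself is a one-liner given a trained self-map. In this sketch the trained network is replaced by a hypothetical stand-in: a contraction toward the nearest training point, mimicking the over-generalization behavior the abstract describes (outputs pulled into the observed range):

```python
import numpy as np

def kaleidoscopic_sample(f, z, burn_in=50):
    # Kaleidoscopic sampling (sketch): start from random noise z and
    # recursively apply the learned self-map f(x) ~ x; after a burn-in
    # period the iterates settle near the training distribution.
    x = z
    for _ in range(burn_in):
        x = f(x)
    return x

# Stand-in for a trained network (assumption, not the paper's model):
# pull the input halfway toward the nearest of two 1-D training points.
train_pts = np.array([-1.0, 1.0])
def f(x):
    nearest = train_pts[np.argmin(np.abs(train_pts - x))]
    return x + 0.5 * (nearest - x)

sample = kaleidoscopic_sample(f, z=3.7)   # settles near the training point 1.0
```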
null | null |
2402.11800
| null | null |
http://arxiv.org/pdf/2402.11800v3
|
2024-03-27T15:48:29Z
|
2024-02-19T03:08:02Z
|
Stochastic Approximation with Delayed Updates: Finite-Time Rates under
Markovian Sampling
|
Motivated by applications in large-scale and multi-agent reinforcement learning, we study the non-asymptotic performance of stochastic approximation (SA) schemes with delayed updates under Markovian sampling. While the effect of delays has been extensively studied for optimization, the manner in which they interact with the underlying Markov process to shape the finite-time performance of SA remains poorly understood. In this context, our first main contribution is to show that under time-varying bounded delays, the delayed SA update rule guarantees exponentially fast convergence of the \emph{last iterate} to a ball around the SA operator's fixed point. Notably, our bound is \emph{tight} in its dependence on both the maximum delay $\tau_{max}$, and the mixing time $\tau_{mix}$. To achieve this tight bound, we develop a novel inductive proof technique that, unlike various existing delayed-optimization analyses, relies on establishing uniform boundedness of the iterates. As such, our proof may be of independent interest. Next, to mitigate the impact of the maximum delay on the convergence rate, we provide the first finite-time analysis of a delay-adaptive SA scheme under Markovian sampling. In particular, we show that the exponent of convergence of this scheme gets scaled down by $\tau_{avg}$, as opposed to $\tau_{max}$ for the vanilla delayed SA rule; here, $\tau_{avg}$ denotes the average delay across all iterations. Moreover, the adaptive scheme requires no prior knowledge of the delay sequence for step-size tuning. Our theoretical findings shed light on the finite-time effects of delays for a broad class of algorithms, including TD learning, Q-learning, and stochastic gradient descent under Markovian sampling.
|
[
"['Arman Adibi' 'Nicolo Dal Fabbro' 'Luca Schenato' 'Sanjeev Kulkarni'\n 'H. Vincent Poor' 'George J. Pappas' 'Hamed Hassani' 'Aritra Mitra']"
] |
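The delayed SA update rule under study is simple to state: at iteration $t$ the update uses information evaluated at the iterate from $\tau$ steps earlier. A minimal sketch on a deterministic quadratic (an assumption for illustration; the paper's setting is stochastic with Markovian sampling):

```python
import numpy as np

def delayed_sgd(grad, x0, steps, delay, lr=0.1):
    # Delayed stochastic-approximation update (sketch): the step at
    # iteration t uses the gradient at the iterate from t - delay,
    # modelling stale information in distributed settings.
    xs = [np.asarray(x0, dtype=float)]
    for t in range(steps):
        x_stale = xs[max(0, t - delay)]
        xs.append(xs[-1] - lr * grad(x_stale))
    return xs[-1]

# Quadratic objective 0.5*(x-3)^2 with gradient x - 3; fixed point at 3.
g = lambda x: x - 3.0
x_final = delayed_sgd(g, x0=0.0, steps=200, delay=2)
```

For small enough step size relative to the delay the iterates still converge to the fixed point, which is the regime the paper's finite-time bounds quantify.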
null | null |
2402.11809
| null | null |
http://arxiv.org/pdf/2402.11809v3
|
2024-05-20T01:48:18Z
|
2024-02-19T03:39:10Z
|
Generation Meets Verification: Accelerating Large Language Model
Inference with Smart Parallel Auto-Correct Decoding
|
This research aims to accelerate the inference speed of large language models (LLMs) with billions of parameters. We propose \textbf{S}mart \textbf{P}arallel \textbf{A}uto-\textbf{C}orrect d\textbf{E}coding (SPACE), an innovative approach designed for achieving lossless acceleration of LLMs. By integrating semi-autoregressive inference and speculative decoding capabilities, SPACE uniquely enables autoregressive LLMs to parallelize token generation and verification. This is realized through a specialized semi-autoregressive supervised fine-tuning process that equips existing LLMs with the ability to simultaneously predict multiple tokens. Additionally, an auto-correct decoding algorithm facilitates the simultaneous generation and verification of token sequences within a single model invocation. Through extensive experiments on a range of LLMs, SPACE has demonstrated inference speedup ranging from 2.7x-4.0x on HumanEval-X while maintaining output quality.
|
[
"['Hanling Yi' 'Feng Lin' 'Hongbin Li' 'Peiyang Ning' 'Xiaotian Yu'\n 'Rong Xiao']"
] |
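The core draft-then-verify idea behind such speculative schemes can be sketched as a prefix-acceptance check. This toy uses exact token matching as the acceptance rule, an assumption for illustration; SPACE's actual algorithm verifies within a single model invocation:

```python
def parallel_verify(draft_tokens, target_next):
    # Accept the longest prefix of the drafted tokens that agrees with
    # what the target decoder would emit; decoding resumes after the
    # first mismatch, so several tokens can be committed per step.
    accepted = []
    for d, t in zip(draft_tokens, target_next):
        if d != t:
            break
        accepted.append(d)
    return accepted

draft  = ["the", "cat", "sat", "on"]   # tokens proposed in parallel
target = ["the", "cat", "ran", "on"]   # tokens the verifier would emit
```

Here two tokens are accepted in one round instead of one, which is where the speedup comes from when draft accuracy is high.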
null | null |
2402.11815
| null | null |
http://arxiv.org/pdf/2402.11815v2
|
2024-03-27T20:30:08Z
|
2024-02-19T04:11:34Z
|
HU at SemEval-2024 Task 8A: Can Contrastive Learning Learn Embeddings to
Detect Machine-Generated Text?
|
This paper describes our system developed for SemEval-2024 Task 8, ``Multigenerator, Multidomain, and Multilingual Black-Box Machine-Generated Text Detection''. Machine-generated texts have been one of the main concerns due to the use of large language models (LLM) in fake text generation, phishing, cheating in exams, or even plagiarizing copyright materials. A lot of systems have been developed to detect machine-generated text. Nonetheless, the majority of these systems rely on the text-generating model. This limitation is impractical in real-world scenarios, as it's often impossible to know which specific model the user has used for text generation. In this work, we propose a \textbf{single} model based on contrastive learning, which uses \textbf{$\approx$40\% of the baseline's parameters} (149M vs. 355M) but shows a comparable performance on the test dataset (\textbf{21st out of 137 participants}). Our key finding is that even without an ensemble of multiple models, a single base model can have comparable performance with the help of data augmentation and contrastive learning. Our code is publicly available at https://github.com/dipta007/SemEval24-Task8.
|
[
"['Shubhashis Roy Dipta' 'Sadat Shahriar']"
] |
null | null |
2402.11816
| null | null |
http://arxiv.org/pdf/2402.11816v3
|
2024-07-15T14:28:46Z
|
2024-02-19T04:13:33Z
|
Learning the Unlearned: Mitigating Feature Suppression in Contrastive
Learning
|
Self-Supervised Contrastive Learning has proven effective in deriving high-quality representations from unlabeled data. However, a major challenge that hinders both unimodal and multimodal contrastive learning is feature suppression, a phenomenon where the trained model captures only a limited portion of the information from the input data while overlooking other potentially valuable content. This issue often leads to indistinguishable representations for visually similar but semantically different inputs, adversely affecting downstream task performance, particularly those requiring rigorous semantic comprehension. To address this challenge, we propose a novel model-agnostic Multistage Contrastive Learning (MCL) framework. Unlike standard contrastive learning which inherently captures one single biased feature distribution, MCL progressively learns previously unlearned features through feature-aware negative sampling at each stage, where the negative samples of an anchor are exclusively selected from the cluster it was assigned to in preceding stages. Meanwhile, MCL preserves the previously well-learned features by cross-stage representation integration, integrating features across all stages to form final representations. Our comprehensive evaluation demonstrates MCL's effectiveness and superiority across both unimodal and multimodal contrastive learning, spanning a range of model architectures from ResNet to Vision Transformers (ViT). Remarkably, in tasks where the original CLIP model has shown limitations, MCL dramatically enhances performance, with improvements up to threefold on specific attributes in the recently proposed MMVP benchmark.
|
[
"['Jihai Zhang' 'Xiang Lan' 'Xiaoye Qu' 'Yu Cheng' 'Mengling Feng'\n 'Bryan Hooi']"
] |
null | null |
2402.11821
| null | null |
http://arxiv.org/pdf/2402.11821v2
|
2024-02-20T03:23:49Z
|
2024-02-19T04:29:45Z
|
Microstructures and Accuracy of Graph Recall by Large Language Models
|
Graph data is crucial for many applications, and much of it exists in relations described in textual format. As a result, being able to accurately recall and encode a graph described in earlier text is a basic yet pivotal ability that LLMs need to demonstrate if they are to perform reasoning tasks that involve graph-structured information. Human performance at graph recall has been studied by cognitive scientists for decades, and has been found to often exhibit certain structural patterns of bias that align with human handling of social relationships. To date, however, we know little about how LLMs behave in analogous graph recall tasks: do their recalled graphs also exhibit certain biased patterns, and if so, how do they compare with humans and affect other graph reasoning tasks? In this work, we perform the first systematic study of graph recall by LLMs, investigating the accuracy and biased microstructures (local structural patterns) in their recall. We find that LLMs not only often underperform in graph recall, but also tend to favor more triangles and alternating 2-paths. Moreover, we find that more advanced LLMs have a striking dependence on the domain that a real-world graph comes from -- by yielding the best recall accuracy when the graph is narrated in a language style consistent with its original domain.
|
[
"['Yanbang Wang' 'Hejie Cui' 'Jon Kleinberg']"
] |
null | null |
2402.11835
| null | null |
http://arxiv.org/pdf/2402.11835v1
|
2024-02-19T04:58:39Z
|
2024-02-19T04:58:39Z
|
Easy as ABCs: Unifying Boltzmann Q-Learning and Counterfactual Regret
Minimization
|
We propose ABCs (Adaptive Branching through Child stationarity), a best-of-both-worlds algorithm combining Boltzmann Q-learning (BQL), a classic reinforcement learning algorithm for single-agent domains, and counterfactual regret minimization (CFR), a central algorithm for learning in multi-agent domains. ABCs adaptively chooses what fraction of the environment to explore each iteration by measuring the stationarity of the environment's reward and transition dynamics. In Markov decision processes, ABCs converges to the optimal policy with at most an O(A) factor slowdown compared to BQL, where A is the number of actions in the environment. In two-player zero-sum games, ABCs is guaranteed to converge to a Nash equilibrium (assuming access to a perfect oracle for detecting stationarity), while BQL has no such guarantees. Empirically, ABCs demonstrates strong performance when benchmarked across environments drawn from the OpenSpiel game library and OpenAI Gym and exceeds all prior methods in environments which are neither fully stationary nor fully nonstationary.
|
[
"[\"Luca D'Amico-Wong\" 'Hugh Zhang' 'Marc Lanctot' 'David C. Parkes']"
] |
null | null |
2402.11837
| null | null |
http://arxiv.org/pdf/2402.11837v2
|
2024-03-02T05:43:05Z
|
2024-02-19T05:00:07Z
|
Self-Guided Robust Graph Structure Refinement
|
Recent studies have revealed that GNNs are vulnerable to adversarial attacks. To defend against such attacks, robust graph structure refinement (GSR) methods aim at minimizing the effect of adversarial edges based on node features, graph structure, or external information. However, we have discovered that existing GSR methods are limited by narrow assumptions, such as assuming clean node features, moderate structural attacks, and the availability of external clean graphs, resulting in restricted applicability in real-world scenarios. In this paper, we propose a self-guided GSR framework (SG-GSR), which utilizes a clean sub-graph found within the given attacked graph itself. Furthermore, we propose a novel graph augmentation and a group-training strategy to handle the two technical challenges in the clean sub-graph extraction: 1) loss of structural information, and 2) imbalanced node degree distribution. Extensive experiments demonstrate the effectiveness of SG-GSR under various scenarios including non-targeted attacks, targeted attacks, feature attacks, e-commerce fraud, and noisy node labels. Our code is available at https://github.com/yeonjun-in/torch-SG-GSR.
|
[
"['Yeonjun In' 'Kanghoon Yoon' 'Kibum Kim' 'Kijung Shin' 'Chanyoung Park']"
] |
null | null |
2402.11838
| null | null |
http://arxiv.org/abs/2402.11838v5
|
2024-07-01T02:51:58Z
|
2024-02-19T05:04:11Z
|
UniST: A Prompt-Empowered Universal Model for Urban Spatio-Temporal
Prediction
|
Urban spatio-temporal prediction is crucial for informed decision-making, such as traffic management, resource optimization, and emergency response. Despite remarkable breakthroughs in pretrained natural language models that enable one model to handle diverse tasks, a universal solution for spatio-temporal prediction remains challenging. Existing prediction approaches are typically tailored for specific spatio-temporal scenarios, requiring task-specific model designs and extensive domain-specific training data. In this study, we introduce UniST, a universal model designed for general urban spatio-temporal prediction across a wide range of scenarios. Inspired by large language models, UniST achieves success through: (i) utilizing diverse spatio-temporal data from different scenarios, (ii) effective pre-training to capture complex spatio-temporal dynamics, (iii) knowledge-guided prompts to enhance generalization capabilities. These designs together unlock the potential of building a universal model for various scenarios. Extensive experiments on more than 20 spatio-temporal scenarios demonstrate UniST's efficacy in advancing state-of-the-art performance, especially in few-shot and zero-shot prediction. The datasets and code implementation are released on https://github.com/tsinghua-fib-lab/UniST.
|
[
"['Yuan Yuan' 'Jingtao Ding' 'Jie Feng' 'Depeng Jin' 'Yong Li']"
] |
null | null |
2402.11839
| null | null |
http://arxiv.org/pdf/2402.11839v1
|
2024-02-19T05:06:10Z
|
2024-02-19T05:06:10Z
|
An enhanced Teaching-Learning-Based Optimization (TLBO) with Grey Wolf
Optimizer (GWO) for text feature selection and clustering
|
Text document clustering can play a vital role in organizing and handling the ever-increasing number of text documents. Uninformative and redundant features included in large text documents reduce the effectiveness of the clustering algorithm. Feature selection (FS) is a well-known technique for removing these features. Since FS can be formulated as an optimization problem, various meta-heuristic algorithms have been employed to solve it. Teaching-Learning-Based Optimization (TLBO) is a novel meta-heuristic algorithm that benefits from a low number of parameters and fast convergence. A hybrid method can simultaneously benefit from the advantages of TLBO and tackle the possible entrapment in the local optimum. By proposing a hybrid of TLBO, Grey Wolf Optimizer (GWO), and Genetic Algorithm (GA) operators, this paper suggests a filter-based FS algorithm (TLBO-GWO). Six benchmark datasets are selected, and TLBO-GWO is compared with three recently proposed FS algorithms with similar approaches, the main TLBO and GWO. The comparison is conducted based on clustering evaluation measures, convergence behavior, and dimension reduction, and is validated using statistical tests. The results reveal that TLBO-GWO can significantly enhance the effectiveness of the text clustering technique (K-means).
|
[
"['Mahsa Azarshab' 'Mohammad Fathian' 'Babak Amiri']"
] |
null | null |
2402.11857
| null | null |
http://arxiv.org/pdf/2402.11857v1
|
2024-02-19T05:59:09Z
|
2024-02-19T05:59:09Z
|
Communication-Efficient Distributed Learning with Local Immediate Error
Compensation
|
Gradient compression with error compensation has attracted significant attention with the target of reducing the heavy communication overhead in distributed learning. However, existing compression methods either perform only unidirectional compression in one iteration with higher communication cost, or bidirectional compression with slower convergence rate. In this work, we propose the Local Immediate Error Compensated SGD (LIEC-SGD) optimization algorithm to break the above bottlenecks based on bidirectional compression and carefully designed compensation approaches. Specifically, the bidirectional compression technique is to reduce the communication cost, and the compensation technique compensates the local compression error to the model update immediately while only maintaining the global error variable on the server throughout the iterations to boost its efficacy. Theoretically, we prove that LIEC-SGD is superior to previous works in either the convergence rate or the communication cost, which indicates that LIEC-SGD could inherit the dual advantages from unidirectional compression and bidirectional compression. Finally, experiments of training deep neural networks validate the effectiveness of the proposed LIEC-SGD algorithm.
|
[
"['Yifei Cheng' 'Li Shen' 'Linli Xu' 'Xun Qian' 'Shiwei Wu' 'Yiming Zhou'\n 'Tie Zhang' 'Dacheng Tao' 'Enhong Chen']"
] |
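The error-compensation idea underlying this line of work is easy to sketch: whatever the compressor drops from a gradient is stored locally and added back before the next compression. This is a generic top-k error-feedback sketch, not the exact LIEC-SGD rule with its server-side error variable:

```python
import numpy as np

def topk_compress(v, k):
    # Keep the k largest-magnitude coordinates; zero the rest.
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def ec_sgd(grad, x0, steps, k, lr=0.1):
    # Error-compensated compressed SGD (sketch): the part of the gradient
    # dropped by compression is compensated immediately at the next step.
    x = np.asarray(x0, dtype=float)
    err = np.zeros_like(x)
    for _ in range(steps):
        g = grad(x) + err      # add back what was previously dropped
        c = topk_compress(g, k)
        err = g - c            # local, immediate error bookkeeping
        x = x - lr * c
    return x

# Quadratic with minimum at (1, -2): gradient is x - target.
target = np.array([1.0, -2.0])
x = ec_sgd(lambda x: x - target, x0=np.zeros(2), steps=300, k=1)
```

Despite transmitting only one coordinate per step, the compensated iterates still reach the minimizer; without the `err` term, top-k compression can stall on the dropped coordinates.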
null | null |
2402.11858
| null | null |
http://arxiv.org/pdf/2402.11858v3
|
2024-04-15T02:53:41Z
|
2024-02-19T06:00:35Z
|
Stochastic Hessian Fittings with Lie Groups
|
This paper studies the fitting of the Hessian or its inverse for stochastic optimizations using a Hessian fitting criterion from the preconditioned stochastic gradient descent (PSGD) method, which is intimately related to many commonly used second-order and adaptive gradient optimizers, e.g., BFGS, Gauss-Newton and natural gradient descent, AdaGrad, etc. Our analyses reveal the efficiency and reliability differences among a wide range of preconditioner fitting methods, from closed-form to iterative solutions, using Hessian-vector products or stochastic gradients only, with Hessian fittings in the Euclidean space, the manifold of symmetric positive definite (SPD) matrices, and a variety of Lie groups. The most intriguing discovery is that the Hessian fitting itself as an optimization problem is strongly convex under mild conditions with a specific yet general enough Lie group. This discovery turns Hessian fitting into a well-behaved optimization problem, and facilitates the designs of highly efficient and elegant Lie group sparse preconditioner fitting methods for large-scale stochastic optimizations.
|
[
"['Xi-Lin Li']"
] |
null | null |
2402.11867
| null | null |
http://arxiv.org/pdf/2402.11867v3
|
2024-05-28T07:05:05Z
|
2024-02-19T06:22:09Z
|
LoRA Training in the NTK Regime has No Spurious Local Minima
|
Low-rank adaptation (LoRA) has become the standard approach for parameter-efficient fine-tuning of large language models (LLM), but our theoretical understanding of LoRA has been limited. In this work, we theoretically analyze LoRA fine-tuning in the neural tangent kernel (NTK) regime with $N$ data points, showing: (i) full fine-tuning (without LoRA) admits a low-rank solution of rank $r \lesssim \sqrt{N}$; (ii) using LoRA with rank $r \gtrsim \sqrt{N}$ eliminates spurious local minima, allowing gradient descent to find the low-rank solutions; (iii) the low-rank solution found using LoRA generalizes well.
|
[
"['Uijeong Jang' 'Jason D. Lee' 'Ernest K. Ryu']"
] |
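As background for the rank conditions in the abstract, the standard LoRA parameterization (from the LoRA literature, not this paper's own notation) freezes the pretrained weight $W_0$ and learns a rank-$r$ factored update:

```latex
\[
  W \;=\; W_0 + \Delta W, \qquad \Delta W = B A,
  \qquad B \in \mathbb{R}^{d \times r},\; A \in \mathbb{R}^{r \times k},\; r \ll \min(d, k).
\]
% Result (i) says full fine-tuning already admits a solution with
% rank(\Delta W) \lesssim \sqrt{N}; results (ii)-(iii) concern LoRA with
% rank r \gtrsim \sqrt{N}, where the spurious local minima vanish.
```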
null | null |
2402.11877
| null | null |
http://arxiv.org/pdf/2402.11877v1
|
2024-02-19T06:33:51Z
|
2024-02-19T06:33:51Z
|
Finite-Time Error Analysis of Online Model-Based Q-Learning with a
Relaxed Sampling Model
|
Reinforcement learning has witnessed significant advancements, particularly with the emergence of model-based approaches. Meanwhile, $Q$-learning has proven to be a powerful algorithm in model-free settings. However, the extension of $Q$-learning to a model-based framework remains relatively unexplored. In this paper, we delve into the sample complexity of $Q$-learning when integrated with a model-based approach. Through theoretical analyses and empirical evaluations, we seek to elucidate the conditions under which model-based $Q$-learning excels in terms of sample efficiency compared to its model-free counterpart.
|
[
"['Han-Dong Lim' 'HyeAnn Lee' 'Donghwan Lee']"
] |
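For reference, the standard model-free tabular $Q$-learning update that the model-based variant builds on (the paper's exact sampling model is not specified in the abstract):

```latex
\[
  Q_{t+1}(s, a) \;=\; Q_t(s, a) + \alpha_t \Big[\, r + \gamma \max_{a'} Q_t(s', a') - Q_t(s, a) \,\Big].
\]
% A model-based variant draws the reward r and next state s' from a
% learned model (\hat{P}, \hat{r}) rather than directly from environment samples.
```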
null | null |
2402.11887
| null | null |
http://arxiv.org/pdf/2402.11887v4
|
2024-05-28T08:31:28Z
|
2024-02-19T06:55:50Z
|
Generative Semi-supervised Graph Anomaly Detection
|
This work considers a practical semi-supervised graph anomaly detection (GAD) scenario, where part of the nodes in a graph are known to be normal, contrasting to the extensively explored unsupervised setting with a fully unlabeled graph. We reveal that having access to the normal nodes, even just a small percentage of normal nodes, helps enhance the detection performance of existing unsupervised GAD methods when they are adapted to the semi-supervised setting. However, their utilization of these normal nodes is limited. In this paper, we propose a novel Generative GAD approach (namely GGAD) for the semi-supervised scenario to better exploit the normal nodes. The key idea is to generate pseudo anomaly nodes, referred to as 'outlier nodes', for providing effective negative node samples in training a discriminative one-class classifier. The main challenge here lies in the lack of ground truth information about real anomaly nodes. To address this challenge, GGAD is designed to leverage two important priors about the anomaly nodes -- asymmetric local affinity and egocentric closeness -- to generate reliable outlier nodes that assimilate anomaly nodes in both graph structure and feature representations. Comprehensive experiments on six real-world GAD datasets are performed to establish a benchmark for semi-supervised GAD and show that GGAD substantially outperforms state-of-the-art unsupervised and semi-supervised GAD methods with varying numbers of training normal nodes. Code will be made available at https://github.com/mala-lab/GGAD.
|
[
"['Hezhe Qiao' 'Qingsong Wen' 'Xiaoli Li' 'Ee-Peng Lim' 'Guansong Pang']"
] |
null | null |
2402.11904
| null | null |
http://arxiv.org/pdf/2402.11904v1
|
2024-02-19T07:45:04Z
|
2024-02-19T07:45:04Z
|
Scalable Virtual Valuations Combinatorial Auction Design by Combining
Zeroth-Order and First-Order Optimization Method
|
Automated auction design seeks to discover empirically high-revenue and incentive-compatible mechanisms using machine learning. Ensuring dominant strategy incentive compatibility (DSIC) is crucial, and the most effective approach is to confine the mechanism to Affine Maximizer Auctions (AMAs). Nevertheless, existing AMA-based approaches encounter challenges such as scalability issues (arising from combinatorial candidate allocations) and the non-differentiability of revenue. In this paper, to achieve a scalable AMA-based method, we further restrict the auction mechanism to Virtual Valuations Combinatorial Auctions (VVCAs), a subset of AMAs with significantly fewer parameters. Initially, we employ a parallelizable dynamic programming algorithm to compute the winning allocation of a VVCA. Subsequently, we propose a novel optimization method that combines both zeroth-order and first-order techniques to optimize the VVCA parameters. Extensive experiments demonstrate the efficacy and scalability of our proposed approach, termed Zeroth-order and First-order Optimization of VVCAs (ZFO-VVCA), particularly when applied to large-scale auctions.
|
[
"['Zhijian Duan' 'Haoran Sun' 'Yichong Xia' 'Siqiang Wang' 'Zhilin Zhang'\n 'Chuan Yu' 'Jian Xu' 'Bo Zheng' 'Xiaotie Deng']"
] |
null | null |
2402.11917
| null | null |
http://arxiv.org/pdf/2402.11917v3
|
2024-06-30T00:52:49Z
|
2024-02-19T08:04:25Z
|
A Mechanistic Analysis of a Transformer Trained on a Symbolic Multi-Step
Reasoning Task
|
Transformers demonstrate impressive performance on a range of reasoning benchmarks. To evaluate the degree to which these abilities are a result of actual reasoning, existing work has focused on developing sophisticated benchmarks for behavioral studies. However, these studies do not provide insights into the internal mechanisms driving the observed capabilities. To improve our understanding of the internal mechanisms of transformers, we present a comprehensive mechanistic analysis of a transformer trained on a synthetic reasoning task. We identify a set of interpretable mechanisms the model uses to solve the task, and validate our findings using correlational and causal evidence. Our results suggest that it implements a depth-bounded recurrent mechanism that operates in parallel and stores intermediate results in selected token positions. We anticipate that the motifs we identified in our synthetic setting can provide valuable insights into the broader operating principles of transformers and thus provide a basis for understanding more complex models.
|
[
"['Jannik Brinkmann' 'Abhay Sheshadri' 'Victor Levoso' 'Paul Swoboda'\n 'Christian Bartelt']"
] |
null | null |
2402.11922
| null | null |
http://arxiv.org/pdf/2402.11922v3
|
2024-03-25T11:39:57Z
|
2024-02-19T08:11:26Z
|
Spatio-Temporal Few-Shot Learning via Diffusive Neural Network
Generation
|
Spatio-temporal modeling is foundational for smart city applications, yet it is often hindered by data scarcity in many cities and regions. To bridge this gap, we propose a novel generative pre-training framework, GPD, for spatio-temporal few-shot learning with urban knowledge transfer. Unlike conventional approaches that heavily rely on common feature extraction or intricate few-shot learning designs, our solution takes a novel approach by performing generative pre-training on a collection of neural network parameters optimized with data from source cities. We recast spatio-temporal few-shot learning as pre-training a generative diffusion model, which generates tailored neural networks guided by prompts, allowing for adaptability to diverse data distributions and city-specific characteristics. GPD employs a Transformer-based denoising diffusion model, which is model-agnostic to integrate with powerful spatio-temporal neural networks. By addressing challenges arising from data gaps and the complexity of generalizing knowledge across cities, our framework consistently outperforms state-of-the-art baselines on multiple real-world datasets for tasks such as traffic speed prediction and crowd flow prediction. The implementation of our approach is available: https://github.com/tsinghua-fib-lab/GPD.
|
[
"['Yuan Yuan' 'Chenyang Shao' 'Jingtao Ding' 'Depeng Jin' 'Yong Li']"
] |
null | null |
2402.11925
| null | null |
http://arxiv.org/pdf/2402.11925v1
|
2024-02-19T08:12:47Z
|
2024-02-19T08:12:47Z
|
Energy-Efficient Edge Learning via Joint Data Deepening-and-Prefetching
|
The vision of pervasive artificial intelligence (AI) services can be realized by training an AI model on time using real-time data collected by internet of things (IoT) devices. To this end, IoT devices require offloading their data to an edge server in proximity. However, transmitting high-dimensional and voluminous data from energy-constrained IoT devices poses a significant challenge. To address this limitation, we propose a novel offloading architecture, called joint data deepening-and-prefetching (JD2P), which is feature-by-feature offloading comprising two key techniques. The first one is data deepening, where each data sample's features are sequentially offloaded in the order of importance determined by a data embedding technique such as principal component analysis (PCA). Offloading is terminated once the already transmitted features are sufficient for accurate data classification, resulting in a reduction in the amount of transmitted data. The criteria to offload data are derived for binary and multi-class classifiers, which are designed based on a support vector machine (SVM) and a deep neural network (DNN), respectively. The second one is data prefetching, where some features potentially required in the future are offloaded in advance, thus achieving high efficiency via precise prediction and parameter optimization. We evaluate the effectiveness of JD2P through experiments using the MNIST dataset, and the results demonstrate its significant reduction in expected energy consumption compared to several benchmarks without degrading learning accuracy.
|
[
"['Sujin Kook' 'Won-Yong Shin' 'Seong-Lyun Kim' 'Seung-Woo Ko']"
] |
null | null |
2402.11933
| null | null |
http://arxiv.org/pdf/2402.11933v1
|
2024-02-19T08:19:26Z
|
2024-02-19T08:19:26Z
|
SLADE: Detecting Dynamic Anomalies in Edge Streams without Labels via
Self-Supervised Learning
|
To detect anomalies in real-world graphs, such as social, email, and financial networks, various approaches have been developed. While they typically assume static input graphs, most real-world graphs grow over time, naturally represented as edge streams. In this context, we aim to achieve three goals: (a) instantly detecting anomalies as they occur, (b) adapting to dynamically changing states, and (c) handling the scarcity of dynamic anomaly labels. In this paper, we propose SLADE (Self-supervised Learning for Anomaly Detection in Edge Streams) for rapid detection of dynamic anomalies in edge streams, without relying on labels. SLADE detects the shifts of nodes into abnormal states by observing deviations in their interaction patterns over time. To this end, it trains a deep neural network to perform two self-supervised tasks: (a) minimizing drift in node representations and (b) generating long-term interaction patterns from short-term ones. Failure in these tasks for a node signals its deviation from the norm. Notably, the neural network and tasks are carefully designed so that all required operations can be performed in constant time (w.r.t. the graph size) in response to each new edge in the input stream. In dynamic anomaly detection across four real-world datasets, SLADE outperforms nine competing methods, even those leveraging label supervision.
|
[
"['Jongha Lee' 'Sunwoo Kim' 'Kijung Shin']"
] |
null | null |
2402.11940
| null | null |
http://arxiv.org/pdf/2402.11940v2
|
2024-02-20T12:13:05Z
|
2024-02-19T08:27:23Z
|
AICAttack: Adversarial Image Captioning Attack with Attention-Based
Optimization
|
Recent advances in deep learning research have shown remarkable achievements across many tasks in computer vision (CV) and natural language processing (NLP). At the intersection of CV and NLP is the problem of image captioning, where the related models' robustness against adversarial attacks has not been well studied. In this paper, we present a novel adversarial attack strategy, which we call AICAttack (Attention-based Image Captioning Attack), designed to attack image captioning models through subtle perturbations on images. Operating within a black-box attack scenario, our algorithm requires no access to the target model's architecture, parameters, or gradient information. We introduce an attention-based candidate selection mechanism that identifies the optimal pixels to attack, followed by Differential Evolution (DE) for perturbing pixels' RGB values. We demonstrate AICAttack's effectiveness through extensive experiments on benchmark datasets with multiple victim models. The experimental results demonstrate that our method surpasses current leading-edge techniques by effectively disrupting the alignment and semantics of words in the output.
|
[
"['Jiyao Li' 'Mingze Ni' 'Yifei Dong' 'Tianqing Zhu' 'Wei Liu']"
] |
null | null |
2402.11942
| null | null |
http://arxiv.org/pdf/2402.11942v3
|
2024-02-25T14:46:07Z
|
2024-02-19T08:30:06Z
|
The effect of Leaky ReLUs on the training and generalization of
overparameterized networks
|
We investigate the training and generalization errors of overparameterized neural networks (NNs) with a wide class of leaky rectified linear unit (ReLU) functions. More specifically, we carefully upper bound both the convergence rate of the training error and the generalization error of such NNs and investigate the dependence of these bounds on the Leaky ReLU parameter, $\alpha$. We show that $\alpha = -1$, which corresponds to the absolute value activation function, is optimal for the training error bound. Furthermore, in special settings, it is also optimal for the generalization error bound. Numerical experiments empirically support the practical choices guided by the theory.
|
[
"['Yinglong Guo' 'Shaohan Li' 'Gilad Lerman']"
] |
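The leaky ReLU family discussed above, parameterized by the negative-side slope $\alpha$; the choice $\alpha = -1$ that the paper identifies as optimal for the training error bound recovers the absolute value activation:

```latex
\[
  \sigma_{\alpha}(x) \;=\;
  \begin{cases}
    x, & x \ge 0, \\
    \alpha x, & x < 0,
  \end{cases}
  \qquad\text{so that}\qquad
  \sigma_{-1}(x) = |x|.
\]
```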
null | null |
2402.11948
| null | null |
http://arxiv.org/pdf/2402.11948v1
|
2024-02-19T08:43:00Z
|
2024-02-19T08:43:00Z
|
Mini-Hes: A Parallelizable Second-order Latent Factor Analysis Model
|
Interactions among a large number of entities are naturally high-dimensional and incomplete (HDI) in many big-data-related tasks. Behavioral characteristics of users are hidden in these interactions; hence, effective representation of the HDI data is a fundamental task for understanding user behaviors. The latent factor analysis (LFA) model has proven to be effective in representing HDI data. The performance of an LFA model relies heavily on its training process, which is a non-convex optimization. It has been proven that incorporating local curvature and preprocessing gradients during its training process can lead to superior performance compared to LFA models built with first-order family methods. However, with the escalation of data volume, the feasibility of second-order algorithms encounters challenges. To address this pivotal issue, this paper proposes a mini-block diagonal Hessian-free (Mini-Hes) optimization for building an LFA model. It leverages the dominant diagonal blocks in the generalized Gauss-Newton matrix based on the analysis of the Hessian matrix of the LFA model and serves as an intermediary strategy bridging the gap between first-order and second-order optimization methods. Experiment results indicate that, with Mini-Hes, the LFA model outperforms several state-of-the-art models in addressing the missing data estimation task on multiple real HDI datasets from recommender systems. (The source code of Mini-Hes is available at https://github.com/Goallow/Mini-Hes)
|
[
"['Jialiang Wang' 'Weiling Li' 'Yurong Zhong' 'Xin Luo']"
] |
null | null |
2402.11950
| null | null |
http://arxiv.org/pdf/2402.11950v2
|
2024-04-05T08:51:55Z
|
2024-02-19T08:46:04Z
|
A novel molecule generative model of VAE combined with Transformer for
unseen structure generation
|
Recently, molecule generation using deep learning has been actively investigated in drug discovery. In this field, the Transformer and the VAE are widely used as powerful models, but they are rarely used in combination due to structural and performance mismatches between them. This study proposes a model that combines these two models through structural and parameter optimization in handling diverse molecules. The proposed model shows performance comparable to existing models in generating molecules, and far superior performance in generating molecules with unseen structures. Another advantage of this VAE model is that it generates molecules from a latent representation, and therefore properties of molecules can be easily predicted or conditioned with it; indeed, we show that the latent representation of the model successfully predicts molecular properties. An ablation study suggested the advantage of the VAE over other generative models, such as language models, in generating novel molecules. It also indicated that the latent representation can be shortened to about 32 dimensions without loss of reconstruction, suggesting the possibility of a much smaller molecular descriptor or model than existing ones. This study is expected to provide a virtual chemical library containing a wide variety of compounds for virtual screening and to enable efficient screening.
|
[
"['Yasuhiro Yoshikai' 'Tadahaya Mizuno' 'Shumpei Nemoto'\n 'Hiroyuki Kusuhara']"
] |
null | null |
2402.11953
| null | null |
http://arxiv.org/pdf/2402.11953v1
|
2024-02-19T08:47:20Z
|
2024-02-19T08:47:20Z
|
Stealing the Invisible: Unveiling Pre-Trained CNN Models through
Adversarial Examples and Timing Side-Channels
|
Machine learning, with its myriad applications, has become an integral component of numerous technological systems. A common practice in this domain is the use of transfer learning, where a pre-trained model's architecture, readily available to the public, is fine-tuned to suit specific tasks. As Machine Learning as a Service (MLaaS) platforms increasingly use pre-trained models in their backends, it is crucial to safeguard these architectures and understand their vulnerabilities. In this work, we present an approach based on the observation that the classification patterns of adversarial images can be used as a means to steal the models. Furthermore, the adversarial image classifications in conjunction with timing side-channels can lead to a model stealing method. Our approach, designed for typical user-level access in remote MLaaS environments, exploits varying misclassifications of adversarial images across different models to fingerprint several renowned Convolutional Neural Network (CNN) and Vision Transformer (ViT) architectures. We utilize the profiling of remote model inference times to reduce the number of necessary adversarial images, subsequently decreasing the number of queries required. We present our results over 27 pre-trained models of different CNN and ViT architectures using the CIFAR-10 dataset and demonstrate a high accuracy of 88.8% while keeping the query budget under 20.
|
[
"['Shubhi Shukla' 'Manaar Alam' 'Pabitra Mitra' 'Debdeep Mukhopadhyay']"
] |
null | null |
2402.11960
| null | null |
http://arxiv.org/pdf/2402.11960v1
|
2024-02-19T09:04:30Z
|
2024-02-19T09:04:30Z
|
DB-LLM: Accurate Dual-Binarization for Efficient LLMs
|
Large language models (LLMs) have significantly advanced the field of natural language processing, while the expensive memory and computation consumption impede their practical deployment. Quantization emerges as one of the most effective methods for improving the computational efficiency of LLMs. However, existing ultra-low-bit quantization always causes severe accuracy drops. In this paper, we empirically reveal the micro and macro characteristics of ultra-low-bit quantization and present a novel Dual-Binarization method for LLMs, namely DB-LLM. At the micro level, we take both the accuracy advantage of 2-bit width and the efficiency advantage of binarization into account, introducing Flexible Dual Binarization (FDB). By splitting 2-bit quantized weights into two independent sets of binaries, FDB ensures the accuracy of representations and introduces flexibility, utilizing the efficient bitwise operations of binarization while retaining the inherent high sparsity of ultra-low-bit quantization. At the macro level, we find distortion exists in the predictions of LLMs after quantization, which is specified as the deviations related to the ambiguity of samples. We propose the Deviation-Aware Distillation (DAD) method, enabling the model to focus differently on various samples. Comprehensive experiments show that our DB-LLM not only significantly surpasses the current state of the art (SoTA) in ultra-low-bit quantization (e.g., perplexity decreased from 9.64 to 7.23), but also achieves an additional 20% reduction in computational consumption compared to the SoTA method under the same bit-width. Our code will be released soon.
|
[
"['Hong Chen' 'Chengtao Lv' 'Liang Ding' 'Haotong Qin' 'Xiabin Zhou'\n 'Yifu Ding' 'Xuebo Liu' 'Min Zhang' 'Jinyang Guo' 'Xianglong Liu'\n 'Dacheng Tao']"
] |
null | null |
2402.11963
| null | null |
http://arxiv.org/pdf/2402.11963v1
|
2024-02-19T09:06:26Z
|
2024-02-19T09:06:26Z
|
Imbalance in Regression Datasets
|
For classification, the problem of class imbalance is well known and has been extensively studied. In this paper, we argue that imbalance in regression is an equally important problem which has so far been overlooked: Due to under- and over-representations in a data set's target distribution, regressors are prone to degenerate to naive models, systematically neglecting uncommon training data and over-representing targets seen often during training. We analyse this problem theoretically and use resulting insights to develop a first definition of imbalance in regression, which we show to be a generalisation of the commonly employed imbalance measure in classification. With this, we hope to turn the spotlight on the overlooked problem of imbalance in regression and to provide common ground for future research.
|
[
"['Daniel Kowatsch' 'Nicolas M. Müller' 'Kilian Tscharke' 'Philip Sperl'\n 'Konstantin Bötinger']"
] |
null | null |
2402.11973
| null | null |
http://arxiv.org/pdf/2402.11973v1
|
2024-02-19T09:19:01Z
|
2024-02-19T09:19:01Z
|
Bayesian Active Learning for Censored Regression
|
Bayesian active learning is based on information theoretical approaches that focus on maximising the information that new observations provide to the model parameters. This is commonly done by maximising the Bayesian Active Learning by Disagreement (BALD) acquisition function. However, we highlight that it is challenging to estimate BALD when the new data points are subject to censorship, where only clipped values of the targets are observed. To address this, we derive the entropy and the mutual information for censored distributions and derive the BALD objective for active learning in censored regression ($\mathcal{C}$-BALD). We propose a novel modelling approach to estimate the $\mathcal{C}$-BALD objective and use it for active learning in the censored setting. Across a wide range of datasets and models, we demonstrate that $\mathcal{C}$-BALD outperforms other Bayesian active learning methods in censored regression.
|
[
"['Frederik Boe Hüttel' 'Christoffer Riis' 'Filipe Rodrigues'\n 'Francisco Câmara Pereira']"
] |
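For context, the standard (uncensored) BALD acquisition is the mutual information between the label and the model parameters (background from the BALD literature, not this paper's derivation):

```latex
\[
  \mathrm{BALD}(x) \;=\; \mathcal{I}\big[y; \theta \mid x, \mathcal{D}\big]
  \;=\; \mathcal{H}\big[y \mid x, \mathcal{D}\big]
  \;-\; \mathbb{E}_{\theta \sim p(\theta \mid \mathcal{D})}\Big[ \mathcal{H}\big[y \mid x, \theta\big] \Big].
\]
% Under censoring only a clipped target, e.g. min(y, c) for a censoring
% threshold c, is observed; C-BALD derives the analogous entropy and
% mutual information terms for such censored distributions.
```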
null | null |
2402.11984
| null | null |
http://arxiv.org/pdf/2402.11984v1
|
2024-02-19T09:29:37Z
|
2024-02-19T09:29:37Z
|
Hebbian Learning based Orthogonal Projection for Continual Learning of
Spiking Neural Networks
|
Neuromorphic computing with spiking neural networks is promising for energy-efficient artificial intelligence (AI) applications. However, unlike humans, who continually learn different tasks throughout their lives, neural network models suffer from catastrophic forgetting. How neuronal operations could solve this problem is an important question for AI and neuroscience. Many previous studies draw inspiration from observed neuroscience phenomena and propose episodic replay or synaptic metaplasticity, but they are not guaranteed to explicitly preserve knowledge for neuron populations. Other works focus on machine learning methods with more mathematical grounding, e.g., orthogonal projection on high-dimensional spaces, but there is no neural correspondence for neuromorphic computing. In this work, we develop a new method with neuronal operations based on lateral connections and Hebbian learning, which can protect knowledge by projecting activity traces of neurons into an orthogonal subspace so that synaptic weight updates will not interfere with old tasks. We show that Hebbian and anti-Hebbian learning on recurrent lateral connections can effectively extract the principal subspace of neural activities and enable orthogonal projection. This provides new insights into how neural circuits and Hebbian learning can help continual learning, and also how the concept of orthogonal projection can be realized in neuronal systems. Our method is also flexible enough to utilize arbitrary training methods based on presynaptic activities/traces. Experiments show that our method achieves nearly zero forgetting for spiking neural networks under various supervised training methods with different error propagation approaches, and outperforms previous approaches under various settings. Our method can pave a solid path for building continual neuromorphic computing systems.
|
[
"['Mingqing Xiao' 'Qingyan Meng' 'Zongpeng Zhang' 'Di He' 'Zhouchen Lin']"
] |
null | null |
2402.11985
| null | null |
http://arxiv.org/pdf/2402.11985v1
|
2024-02-19T09:30:05Z
|
2024-02-19T09:30:05Z
|
Weakly Supervised Object Detection in Chest X-Rays with Differentiable
ROI Proposal Networks and Soft ROI Pooling
|
Weakly supervised object detection (WSup-OD) increases the usefulness and interpretability of image classification algorithms without requiring additional supervision. The successes of multiple instance learning in this task for natural images, however, do not translate well to medical images due to the very different characteristics of their objects (i.e. pathologies). In this work, we propose Weakly Supervised ROI Proposal Networks (WSRPN), a new method for generating bounding box proposals on the fly using a specialized region of interest-attention (ROI-attention) module. WSRPN integrates well with classic backbone-head classification algorithms and is end-to-end trainable with only image-label supervision. We experimentally demonstrate that our new method outperforms existing methods in the challenging task of disease localization in chest X-ray images. Code: https://github.com/philip-mueller/wsrpn
|
[
"['Philip Müller' 'Felix Meissen' 'Georgios Kaissis' 'Daniel Rueckert']"
] |
null | null |
2402.11989
| null | null |
http://arxiv.org/pdf/2402.11989v2
|
2024-06-08T23:46:34Z
|
2024-02-19T09:32:48Z
|
Privacy-Preserving Low-Rank Adaptation for Latent Diffusion Models
|
Low-rank adaptation (LoRA) is an efficient strategy for adapting latent diffusion models (LDMs) on a private dataset to generate specific images by minimizing the adaptation loss. However, the LoRA-adapted LDMs are vulnerable to membership inference (MI) attacks, which can judge whether a particular data point belongs to the private dataset, thus leading to privacy leakage. To defend against MI attacks, we first propose a straightforward solution: Membership-Privacy-preserving LoRA (MP-LoRA). MP-LoRA is formulated as a min-max optimization problem where a proxy attack model is trained by maximizing its MI gain while the LDM is adapted by minimizing the sum of the adaptation loss and the MI gain of the proxy attack model. However, we empirically find that MP-LoRA suffers from unstable optimization, and theoretically analyze that the potential reason is the unconstrained local smoothness, which impedes the privacy-preserving adaptation. To mitigate this issue, we further propose a Stable Membership-Privacy-preserving LoRA (SMP-LoRA) that adapts the LDM by minimizing the ratio of the adaptation loss to the MI gain. In addition, we theoretically prove that the local smoothness of SMP-LoRA can be constrained by the gradient norm, leading to improved convergence. Our experimental results corroborate that SMP-LoRA can indeed defend against MI attacks and generate high-quality images. Our code is available at https://github.com/WilliamLUO0/StablePrivateLoRA.
|
[
"['Zihao Luo' 'Xilie Xu' 'Feng Liu' 'Yun Sing Koh' 'Di Wang'\n 'Jingfeng Zhang']"
] |
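The two objectives can be transcribed from the abstract as follows (the notation is ours, not the paper's: $L_{\mathrm{adapt}}$ denotes the adaptation loss and $G_{\mathrm{MI}}$ the MI gain of the proxy attack model with parameters $\phi$):

```latex
% MP-LoRA: min-max over LDM parameters \theta and the proxy attacker \phi.
\[
  \min_{\theta} \; \Big( L_{\mathrm{adapt}}(\theta) + \max_{\phi} \, G_{\mathrm{MI}}(\theta, \phi) \Big).
\]
% SMP-LoRA: minimize the ratio of adaptation loss to MI gain instead of
% the sum; per the abstract, the local smoothness of this objective can
% be constrained by the gradient norm.
\[
  \min_{\theta} \; \frac{L_{\mathrm{adapt}}(\theta)}{G_{\mathrm{MI}}(\theta, \phi)}.
\]
```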
null | null |
2402.11995
| null | null |
http://arxiv.org/pdf/2402.11995v1
|
2024-02-19T09:39:54Z
|
2024-02-19T09:39:54Z
|
Network Inversion of Binarised Neural Nets
|
While the deployment of neural networks, yielding impressive results, becomes more prevalent in various applications, their interpretability and understanding remain a critical challenge. Network inversion, a technique that aims to reconstruct the input space from the model's learned internal representations, plays a pivotal role in unraveling the black-box nature of input to output mappings in neural networks. In safety-critical scenarios, where model outputs may influence pivotal decisions, the integrity of the corresponding input space is paramount, necessitating the elimination of any extraneous "garbage" to ensure the trustworthiness of the network. Binarised Neural Networks (BNNs), characterized by binary weights and activations, offer computational efficiency and reduced memory requirements, making them suitable for resource-constrained environments. This paper introduces a novel approach to invert a trained BNN by encoding it into a CNF formula that captures the network's structure, allowing for both inference and inversion.
|
[
"['Pirzada Suhail' 'Supratik Chakraborty' 'Amit Sethi']"
] |
null | null |
2402.11996
| null | null |
http://arxiv.org/pdf/2402.11996v2
|
2024-02-27T14:53:53Z
|
2024-02-19T09:41:57Z
|
ISCUTE: Instance Segmentation of Cables Using Text Embedding
|
In the field of robotics and automation, conventional object recognition and instance segmentation methods face a formidable challenge when it comes to perceiving Deformable Linear Objects (DLOs) like wires, cables, and flexible tubes. This challenge arises primarily from the lack of distinct attributes such as shape, color, and texture, which calls for tailored solutions to achieve precise identification. In this work, we propose a foundation model-based DLO instance segmentation technique that is text-promptable and user-friendly. Specifically, our approach combines the text-conditioned semantic segmentation capabilities of the CLIPSeg model with the zero-shot generalization capabilities of the Segment Anything Model (SAM). We show that our method exceeds SOTA performance on DLO instance segmentation, achieving a mIoU of $91.21\%$. We also introduce a rich and diverse DLO-specific dataset for instance segmentation.
|
[
"['Shir Kozlovsky' 'Omkar Joglekar' 'Dotan Di Castro']"
] |
null | null |
2402.11997
| null | null |
http://arxiv.org/pdf/2402.11997v2
|
2024-07-05T11:26:51Z
|
2024-02-19T09:43:03Z
|
Remember This Event That Year? Assessing Temporal Information and
Reasoning in Large Language Models
|
Large Language Models (LLMs) are increasingly ubiquitous, yet their ability to retain and reason about temporal information remains limited, hindering their application in real-world scenarios where understanding the sequential nature of events is crucial. Our study experiments with 12 state-of-the-art models (ranging from 2B to 70B+ parameters) on a novel numerical-temporal dataset, \textbf{TempUN}, spanning from 10,000 BCE to 2100 CE, to uncover significant temporal retention and comprehension limitations. We propose six metrics to assess three learning paradigms to enhance temporal knowledge acquisition. Our findings reveal that open-source models exhibit knowledge gaps more frequently, suggesting a trade-off between limited knowledge and incorrect responses. Additionally, various fine-tuning approaches significantly improved performance, reducing incorrect outputs and impacting the identification of 'information not available' in the generations. The associated dataset and code are available at (https://github.com/lingoiitgn/TempUN).
|
[
"['Himanshu Beniwal' 'Dishant Patel' 'Kowsik Nandagopan D' 'Hritik Ladia'\n 'Ankit Yadav' 'Mayank Singh']"
] |
null | null |
2402.12008
| null | null |
http://arxiv.org/pdf/2402.12008v1
|
2024-02-19T10:02:00Z
|
2024-02-19T10:02:00Z
|
Cluster Metric Sensitivity to Irrelevant Features
|
Clustering algorithms are used extensively in data analysis for data exploration and discovery. Technological advancements lead to continual growth of data in terms of volume, dimensionality and complexity. This provides great opportunities in data analytics, as the data can be interrogated for many different purposes. However, it also leads to challenges, such as the identification of relevant features for a given task. In supervised tasks, one can utilise a number of methods to optimise the input features for the task objective (e.g. classification accuracy). In unsupervised problems, such tools are not readily available, in part due to an inability to quantify feature relevance in unlabeled tasks. In this paper, we investigate the sensitivity of clustering performance to noisy uncorrelated variables iteratively added to baseline datasets with well-defined clusters. We show how different types of irrelevant variables can impact the outcome of a clustering result from $k$-means in different ways. We observe a resilience to very high proportions of irrelevant features for the adjusted Rand index (ARI) and normalised mutual information (NMI) when the irrelevant features are Gaussian distributed. For uniformly distributed irrelevant features, we notice that the resilience of ARI and NMI depends on the dimensionality of the data and exhibits tipping points between high scores and near zero. Our results show that the Silhouette Coefficient and the Davies-Bouldin score are the most sensitive to added irrelevant features, exhibiting large changes in score for comparably low proportions of irrelevant features regardless of the underlying distribution or data scaling. As such, the Silhouette Coefficient and the Davies-Bouldin score are good candidates for optimising feature selection in unsupervised clustering tasks.
|
[
"['Miles McCrory' 'Spencer A. Thomas']"
] |
null | null |
2402.12010
| null | null |
http://arxiv.org/pdf/2402.12010v1
|
2024-02-19T10:03:46Z
|
2024-02-19T10:03:46Z
|
Training Green AI Models Using Elite Samples
|
The substantial increase in AI model training has considerable environmental implications, mandating more energy-efficient and sustainable AI practices. On the one hand, data-centric approaches show great potential towards training energy-efficient AI models. On the other hand, instance selection methods demonstrate the capability of training AI models with minimised training sets and negligible performance degradation. Despite the growing interest in both topics, the impact of data-centric training set selection on energy efficiency remains to date unexplored. This paper presents an evolutionary-based sampling framework aimed at (i) identifying elite training samples tailored for datasets and model pairs, (ii) comparing model performance and energy efficiency gains against typical model training practice, and (iii) investigating the feasibility of this framework for fostering sustainable model training practices. To evaluate the proposed framework, we conducted an empirical experiment including 8 commonly used AI classification models and 25 publicly available datasets. The results showcase that by considering 10% elite training samples, the models' performance can show a 50% improvement and remarkable energy savings of 98% compared to the common training practice.
|
[
"['Mohammed Alswaitti' 'Roberto Verdecchia' 'Grégoire Danoy'\n 'Pascal Bouvry' 'Johnatan Pecero']"
] |
null | null |
2402.12015
| null | null |
http://arxiv.org/pdf/2402.12015v1
|
2024-02-19T10:13:25Z
|
2024-02-19T10:13:25Z
|
An Index Policy Based on Sarsa and Q-learning for Heterogeneous Smart
Target Tracking
|
In solving the non-myopic radar scheduling for multiple smart target tracking within an active and passive radar network, we need to consider both short-term enhanced tracking performance and a higher probability of target maneuvering in the future with active tracking. Acquiring the long-term tracking performance while scheduling the beam resources of active and passive radars poses a challenge. To address this challenge, we model this problem as a Markov decision process consisting of parallel restless bandit processes. Each bandit process is associated with a smart target, whose estimation state evolves according to different discrete dynamic models for different actions - whether or not the target is being tracked. The discrete state is defined by the dynamic mode. The problem exhibits the curse of dimensionality, where optimal solutions are in general intractable. We resort to heuristics through the famous restless multi-armed bandit techniques. This yields efficient scheduling policies based on indices, which are real numbers representing the marginal rewards of taking different actions. For the inevitable practical case with unknown transition matrices, we propose a new method that utilizes the forward Sarsa and backward Q-learning to approximate the indices through adapting the state-action value functions, or equivalently the Q-functions, and propose a new policy, namely ISQ, aiming to maximize the long-term tracking rewards. Numerical results demonstrate that the proposed ISQ policy outperforms conventional Q-learning-based methods and rapidly converges to the well-known Whittle index policy with revealed state transition models, which is considered the benchmark.
|
[
"['Yuhang Hao' 'Zengfu Wang' 'Jing Fu' 'Quan Pan']"
] |
null | null |
2402.12022
| null | null |
http://arxiv.org/pdf/2402.12022v1
|
2024-02-19T10:31:53Z
|
2024-02-19T10:31:53Z
|
Distilling Large Language Models for Text-Attributed Graph Learning
|
Text-Attributed Graphs (TAGs) are graphs of connected textual documents. Graph models can efficiently learn TAGs, but their training heavily relies on human-annotated labels, which are scarce or even unavailable in many applications. Large language models (LLMs) have recently demonstrated remarkable capabilities in few-shot and zero-shot TAG learning, but they suffer from scalability, cost, and privacy issues. Therefore, in this work, we focus on synergizing LLMs and graph models with their complementary strengths by distilling the power of LLMs to a local graph model on TAG learning. To address the inherent gaps between LLMs (generative models for texts) and graph models (discriminative models for graphs), we propose first to let LLMs teach an interpreter with rich textual rationale and then let a student model mimic the interpreter's reasoning without LLMs' textual rationale. Extensive experiments validate the efficacy of our proposed framework.
|
[
"['Bo Pan' 'Zheng Zhang' 'Yifei Zhang' 'Yuntong Hu' 'Liang Zhao']"
] |
null | null |
2402.12034
| null | null |
http://arxiv.org/pdf/2402.12034v1
|
2024-02-19T10:42:34Z
|
2024-02-19T10:42:34Z
|
When Do Off-Policy and On-Policy Policy Gradient Methods Align?
|
Policy gradient methods are widely adopted reinforcement learning algorithms for tasks with continuous action spaces. These methods have succeeded in many application domains; however, because of their notorious sample inefficiency, their use remains limited to problems where fast and accurate simulations are available. A common way to improve sample efficiency is to modify their objective function to be computable from off-policy samples without importance sampling. A well-established off-policy objective is the excursion objective. This work studies the difference between the excursion objective and the traditional on-policy objective, which we refer to as the on-off gap. We provide the first theoretical analysis showing conditions to reduce the on-off gap while establishing empirical evidence of shortfalls arising when these conditions are not met.
|
[
"['Davide Mambelli' 'Stephan Bongers' 'Onno Zoeter' 'Matthijs T. J. Spaan'\n 'Frans A. Oliehoek']"
] |
null | null |
2402.12035
| null | null |
http://arxiv.org/pdf/2402.12035v1
|
2024-02-19T10:43:13Z
|
2024-02-19T10:43:13Z
|
Class-incremental Learning for Time Series: Benchmark and Evaluation
|
Real-world environments are inherently non-stationary, frequently introducing new classes over time. This is especially common in time series classification, such as the emergence of new disease classification in healthcare or the addition of new activities in human activity recognition. In such cases, a learning system is required to assimilate novel classes effectively while avoiding catastrophic forgetting of the old ones, which gives rise to the Class-incremental Learning (CIL) problem. However, despite the encouraging progress in the image and language domains, CIL for time series data remains relatively understudied. Existing studies suffer from inconsistent experimental designs, necessitating a comprehensive evaluation and benchmarking of methods across a wide range of datasets. To this end, we first present an overview of the Time Series Class-incremental Learning (TSCIL) problem, highlight its unique challenges, and cover the advanced methodologies. Further, based on standardized settings, we develop a unified experimental framework that supports the rapid development of new algorithms, easy integration of new datasets, and standardization of the evaluation process. Using this framework, we conduct a comprehensive evaluation of various generic and time-series-specific CIL methods in both standard and privacy-sensitive scenarios. Our extensive experiments not only provide a standard baseline to support future research but also shed light on the impact of various design factors such as normalization layers or memory budget thresholds. Codes are available at https://github.com/zqiao11/TSCIL.
|
[
"['Zhongzheng Qiao' 'Quang Pham' 'Zhen Cao' 'Hoang H Le' 'P. N. Suganthan'\n 'Xudong Jiang' 'Ramasamy Savitha']"
] |
null | null |
2402.12038
| null | null |
http://arxiv.org/pdf/2402.12038v3
|
2024-06-17T08:52:29Z
|
2024-02-19T10:47:09Z
|
Self-AMPLIFY: Improving Small Language Models with Self Post Hoc
Explanations
|
Incorporating natural language rationales in the prompt and In-Context Learning (ICL) have led to a significant improvement of Large Language Models (LLMs) performance. However, generating high-quality rationales requires human annotation or the use of auxiliary proxy models. In this work, we propose Self-AMPLIFY to automatically generate rationales from post hoc explanation methods applied to Small Language Models (SLMs) to improve their own performance. Self-AMPLIFY is a 3-step method that targets samples, generates rationales and builds a final prompt to leverage ICL. Self-AMPLIFY performance is evaluated on four SLMs and five datasets requiring strong reasoning abilities. Self-AMPLIFY achieves good results against competitors, leading to strong accuracy improvement. Self-AMPLIFY is the first method to apply post hoc explanation methods to autoregressive language models to generate rationales to improve their own performance in a fully automated manner.
|
[
"['Milan Bhan' 'Jean-Noel Vittaut' 'Nicolas Chesneau' 'Marie-Jeanne Lesot']"
] |
null | null |
2402.12042
| null | null |
http://arxiv.org/pdf/2402.12042v2
|
2024-05-29T10:58:25Z
|
2024-02-19T10:56:47Z
|
Linear bandits with polylogarithmic minimax regret
|
We study a noise model for linear stochastic bandits for which the subgaussian noise parameter vanishes linearly as we select actions on the unit sphere closer and closer to the unknown vector. We introduce an algorithm for this problem that exhibits a minimax regret scaling as $\log^3(T)$ in the time horizon $T$, in stark contrast to the square-root scaling of this regret for typical bandit algorithms. Our strategy, based on weighted least-squares estimation, achieves the eigenvalue relation $\lambda_{\min}(V_t) = \Omega(\sqrt{\lambda_{\max}(V_t)})$ for the design matrix $V_t$ at each time step $t$ through geometrical arguments that are independent of the noise model and might be of independent interest. This allows us to tightly control the expected regret in each time step to be of the order $O(\frac{1}{t})$, leading to the logarithmic scaling of the cumulative regret.
|
[
"['Josep Lumbreras' 'Marco Tomamichel']"
] |
null | null |
2402.12061
| null | null |
http://arxiv.org/pdf/2402.12061v2
|
2024-06-05T15:08:28Z
|
2024-02-19T11:28:20Z
|
All Language Models Large and Small
|
Many leading language models (LMs) use high-intensity computational resources both during training and execution. This poses the challenge of lowering resource costs for deployment and faster execution of decision-making tasks among others. We introduce a novel plug-and-play LM framework named the Language Optimising Network Distribution (LONDI) framework. LONDI learns to selectively employ large LMs only where complex decision-making and reasoning are required while using low-resource LMs (i.e. LMs that require less GPU usage, but may not be able to solve the problem alone) everywhere else. LONDI consists of a system of two (off-)policy networks, an LM, a large LM (LLM), and a reinforcement learning module that uses switching controls to quickly learn which system states to call the LLM. We then introduce a variant of LONDI that maintains budget constraints on LLM calls and hence its resource usage. Theoretically, we prove LONDI learns the subset of system states to activate the LLM required to solve the task. We then prove that LONDI converges to optimal solutions while also preserving budgetary constraints on LLM calls almost surely, enabling it to solve various tasks while significantly lowering computational costs. We test LONDI's performance in a range of tasks in ScienceWorld and BabyAI-Text and demonstrate that LONDI can solve tasks only solvable by resource-intensive LLMs while reducing GPU usage by up to 30%.
|
[
"['Zhixun Chen' 'Yali Du' 'David Mguni']"
] |
null | null |
2402.12062
| null | null |
http://arxiv.org/pdf/2402.12062v2
|
2024-02-20T22:53:23Z
|
2024-02-19T11:30:00Z
|
Causal Equal Protection as Algorithmic Fairness
|
Over the last ten years the literature in computer science and philosophy has formulated different criteria of algorithmic fairness. One of the most discussed, classification parity, requires that the erroneous classifications of a predictive algorithm occur with equal frequency for groups picked out by protected characteristics. Despite its intuitive appeal, classification parity has come under attack. Multiple scenarios can be imagined in which - intuitively - a predictive algorithm does not treat any individual unfairly, and yet classification parity is violated. To make progress, we turn to a related principle, equal protection, originally developed in the context of criminal justice. Key to equal protection is equalizing the risks of erroneous classifications (in a sense to be specified) as opposed to equalizing the rates of erroneous classifications. We show that equal protection avoids many of the counterexamples to classification parity, but also fails to model our moral intuitions in a number of common scenarios, for example, when the predictor is causally downstream relative to the protected characteristic. To address these difficulties, we defend a novel principle, causal equal protection, that models the fair allocation of the risks of erroneous classification through the lens of causality.
|
[
"['Marcello Di Bello' 'Nicolò Cangiotti' 'Michele Loi']"
] |
null | null |
2402.12065
| null | null |
http://arxiv.org/pdf/2402.12065v2
|
2024-02-20T08:48:24Z
|
2024-02-19T11:33:21Z
|
WKVQuant: Quantizing Weight and Key/Value Cache for Large Language
Models Gains More
|
Large Language Models (LLMs) face significant deployment challenges due to their substantial memory requirements and the computational demands of the auto-regressive text generation process. This paper addresses these challenges by focusing on the quantization of LLMs, a technique that reduces memory consumption by converting model parameters and activations into low-bit integers. We critically analyze the existing quantization approaches, identifying their limitations in balancing the accuracy and efficiency of the quantized LLMs. To advance beyond these limitations, we propose WKVQuant, a PTQ framework especially designed for quantizing weights and the key/value (KV) cache of LLMs. Specifically, we incorporate past-only quantization to improve the computation of attention. Additionally, we introduce a two-dimensional quantization strategy to handle the distribution of the KV cache, along with a cross-block reconstruction regularization for parameter optimization. Experiments show that WKVQuant achieves almost comparable memory savings to weight-activation quantization, while also approaching the performance of weight-only quantization.
|
[
"['Yuxuan Yue' 'Zhihang Yuan' 'Haojie Duanmu' 'Sifan Zhou' 'Jianlong Wu'\n 'Liqiang Nie']"
] |
null | null |
2402.12067
| null | null |
http://arxiv.org/pdf/2402.12067v1
|
2024-02-19T11:35:01Z
|
2024-02-19T11:35:01Z
|
Interpretable Brain-Inspired Representations Improve RL Performance on
Visual Navigation Tasks
|
Visual navigation requires a whole range of capabilities. A crucial one of these is the ability of an agent to determine its own location and heading in an environment. Prior works commonly assume this information as given, or use methods which lack a suitable inductive bias and accumulate error over time. In this work, we show how the method of slow feature analysis (SFA), inspired by neuroscience research, overcomes both limitations by generating interpretable representations of visual data that encode location and heading of an agent. We employ SFA in a modern reinforcement learning context, analyse and compare representations and illustrate where hierarchical SFA can outperform other feature extractors on navigation tasks.
|
[
"['Moritz Lange' 'Raphael C. Engelhardt' 'Wolfgang Konen' 'Laurenz Wiskott']"
] |
null | null |
2402.12072
| null | null |
http://arxiv.org/pdf/2402.12072v2
|
2024-07-09T07:13:56Z
|
2024-02-19T11:48:11Z
|
Robustness and Exploration of Variational and Machine Learning
Approaches to Inverse Problems: An Overview
|
This paper provides an overview of current approaches for solving inverse problems in imaging using variational methods and machine learning. A special focus lies on point estimators and their robustness against adversarial perturbations. In this context results of numerical experiments for a one-dimensional toy problem are provided, showing the robustness of different approaches and empirically verifying theoretical guarantees. Another focus of this review is the exploration of the subspace of data-consistent solutions through explicit guidance to satisfy specific semantic or textural properties.
|
[
"['Alexander Auras' 'Kanchana Vaishnavi Gandikota' 'Hannah Droege'\n 'Michael Moeller']"
] |
null | null |
2402.12118
| null | null |
http://arxiv.org/pdf/2402.12118v1
|
2024-02-19T13:13:16Z
|
2024-02-19T13:13:16Z
|
DualView: Data Attribution from the Dual Perspective
|
Local data attribution (or influence estimation) techniques aim at estimating the impact that individual data points seen during training have on particular predictions of an already trained Machine Learning model during test time. Previous methods either do not perform well consistently across different evaluation criteria from literature, are characterized by a high computational demand, or suffer from both. In this work we present DualView, a novel method for post-hoc data attribution based on surrogate modelling, demonstrating both high computational efficiency, as well as good evaluation results. With a focus on neural networks, we evaluate our proposed technique using suitable quantitative evaluation strategies from the literature against related principal local data attribution methods. We find that DualView requires considerably lower computational resources than other methods, while demonstrating comparable performance to competing approaches across evaluation metrics. Furthermore, our proposed method produces sparse explanations, where sparseness can be tuned via a hyperparameter. Finally, we showcase that with DualView, we can now render explanations from local data attributions compatible with established local feature attribution methods: for each prediction on (test) data points explained in terms of impactful samples from the training set, we are able to compute and visualize how the prediction on the (test) sample relates to each influential training sample in terms of features recognized by the model. We provide an Open Source implementation of DualView online, together with implementations for all other local data attribution methods we compare against, as well as the metrics reported here, for full reproducibility.
|
[
"['Galip Ümit Yolcu' 'Thomas Wiegand' 'Wojciech Samek'\n 'Sebastian Lapuschkin']"
] |
null | null |
2402.12134
| null | null |
http://arxiv.org/pdf/2402.12134v1
|
2024-02-19T13:32:30Z
|
2024-02-19T13:32:30Z
|
Molecule Generation and Optimization for Efficient Fragrance Creation
|
This research introduces a Machine Learning-centric approach to replicate olfactory experiences, validated through experimental quantification of perfume perception. Key contributions encompass a hybrid model connecting perfume molecular structure to human olfactory perception. This model includes an AI-driven molecule generator (utilizing Graph and Generative Neural Networks), quantification and prediction of odor intensity, and refinement of optimal solvent and molecule combinations for desired fragrances. Additionally, a thermodynamic-based model establishes a link between olfactory perception and liquid-phase concentrations. The methodology employs Transfer Learning and selects the most suitable molecules based on vapor pressure and fragrance notes. Ultimately, a mathematical optimization problem is formulated to minimize discrepancies between new and target olfactory experiences. The methodology is validated by reproducing two distinct olfactory experiences using available experimental data.
|
[
"['Bruno C. L. Rodrigues' 'Vinicius V. Santana' 'Sandris Murins'\n 'Idelfonso B. R. Nogueira']"
] |