categories: string
doi: string
id: string
year: float64
venue: string
link: string
updated: string
published: string
title: string
abstract: string
authors: list
null
null
2403.01112
null
null
http://arxiv.org/pdf/2403.01112v2
2024-03-07T13:40:04Z
2024-03-02T07:37:05Z
Efficient Episodic Memory Utilization of Cooperative Multi-Agent Reinforcement Learning
In cooperative multi-agent reinforcement learning (MARL), agents aim to achieve a common goal, such as defeating enemies or scoring a goal. Existing MARL algorithms are effective but still require significant learning time and often get trapped in local optima by complex tasks, subsequently failing to discover a goal-reaching policy. To address this, we introduce Efficient episodic Memory Utilization (EMU) for MARL, with two primary objectives: (a) accelerating reinforcement learning by leveraging semantically coherent memory from an episodic buffer and (b) selectively promoting desirable transitions to prevent local convergence. To achieve (a), EMU incorporates a trainable encoder/decoder structure alongside MARL, creating coherent memory embeddings that facilitate exploratory memory recall. To achieve (b), EMU introduces a novel reward structure called episodic incentive based on the desirability of states. This reward improves the TD target in Q-learning and acts as an additional incentive for desirable transitions. We provide theoretical support for the proposed incentive and demonstrate the effectiveness of EMU compared to conventional episodic control. The proposed method is evaluated in StarCraft II and Google Research Football, and empirical results indicate further performance improvement over state-of-the-art methods.
[ "['Hyungho Na' 'Yunkyeong Seo' 'Il-chul Moon']" ]
null
null
2403.01121
null
null
http://arxiv.org/pdf/2403.01121v2
2024-03-28T09:11:27Z
2024-03-02T08:05:03Z
OpenGraph: Towards Open Graph Foundation Models
Graph learning has become indispensable for interpreting and harnessing relational data in diverse fields, ranging from recommendation systems to social network analysis. In this context, a variety of GNNs have emerged as promising methodologies for encoding the structural information of graphs. By effectively capturing the graph's underlying structure, these GNNs have shown great potential in enhancing performance in graph learning tasks, such as link prediction and node classification. However, despite their successes, a significant challenge persists: these advanced methods often face difficulties in generalizing to unseen graph data that significantly differs from the training instances. In this work, our aim is to advance the graph learning paradigm by developing a general graph foundation model. This model is designed to understand the complex topological patterns present in diverse graph data, enabling it to excel in zero-shot graph learning tasks across different downstream datasets. To achieve this goal, we address several key technical challenges in our OpenGraph model. Firstly, we propose a unified graph tokenizer to adapt our graph model to generalize well on unseen graph data, even when the underlying graph properties differ significantly from those encountered during training. Secondly, we develop a scalable graph transformer as the foundational encoder, which effectively captures node-wise dependencies within the global topological context. Thirdly, we introduce a data augmentation mechanism enhanced by an LLM to alleviate the limitations of data scarcity in real-world scenarios. Extensive experiments validate the effectiveness of our framework. By adapting our OpenGraph to new graph characteristics and comprehending the nuances of diverse graphs, our approach achieves remarkable zero-shot graph learning performance across various settings and domains.
[ "['Lianghao Xia' 'Ben Kao' 'Chao Huang']" ]
null
null
2403.01128
null
null
http://arxiv.org/pdf/2403.01128v2
2024-03-05T14:37:36Z
2024-03-02T08:18:32Z
Sensitivity Analysis On Loss Landscape
Gradients can be employed for sensitivity analysis. Here, we leverage the advantages of the loss landscape to understand which independent variables impact the dependent variable. We seek to grasp the loss landscape by utilizing first, second, and third derivatives through automatic differentiation. Spearman's rank correlation coefficient can detect the monotonic relationship between two variables; however, we have found that second-order gradients, with certain configurations and parameters, provide information that can be visualized similarly to Spearman results. In this approach, we incorporate a loss function with an activation function, resulting in a non-linear pattern. Each exploration of the loss landscape through retraining yields new valuable information. Furthermore, the first and third derivatives are also beneficial, as they indicate the extent to which independent variables influence the dependent variable.
[ "['Salman Faroz']" ]
null
null
2403.01131
null
null
http://arxiv.org/pdf/2403.01131v2
2024-03-05T11:11:41Z
2024-03-02T08:21:59Z
LLaMoCo: Instruction Tuning of Large Language Models for Optimization Code Generation
Recent research explores optimization using large language models (LLMs) by either iteratively seeking next-step solutions from LLMs or directly prompting LLMs for an optimizer. However, these approaches exhibit inherent limitations, including low operational efficiency, high sensitivity to prompt design, and a lack of domain-specific knowledge. We introduce LLaMoCo, the first instruction-tuning framework designed to adapt LLMs for solving optimization problems in a code-to-code manner. Specifically, we establish a comprehensive instruction set containing well-described problem prompts and effective optimization codes. We then develop a novel two-phase learning strategy that incorporates a contrastive learning-based warm-up procedure before the instruction-tuning phase to enhance the convergence behavior during model fine-tuning. The experiment results demonstrate that a CodeGen (350M) model fine-tuned by our LLaMoCo achieves superior optimization performance compared to GPT-4 Turbo and the other competitors across both synthetic and realistic problem sets. The fine-tuned model and the usage instructions are available at https://anonymous.4open.science/r/LLaMoCo-722A.
[ "['Zeyuan Ma' 'Hongshu Guo' 'Jiacheng Chen' 'Guojun Peng' 'Zhiguang Cao'\n 'Yining Ma' 'Yue-Jiao Gong']" ]
null
null
2403.01132
null
null
http://arxiv.org/pdf/2403.01132v1
2024-03-02T08:27:05Z
2024-03-02T08:27:05Z
MPIPN: A Multi Physics-Informed PointNet for solving parametric acoustic-structure systems
Machine learning is employed for solving physical systems governed by general nonlinear partial differential equations (PDEs). However, complex multi-physics systems such as acoustic-structure coupling are often described by a series of PDEs that incorporate variable physical quantities, which are referred to as parametric systems. There is a lack of strategies for solving parametric systems governed by PDEs that involve explicit and implicit quantities. In this paper, a deep learning-based Multi Physics-Informed PointNet (MPIPN) is proposed for solving parametric acoustic-structure systems. First, the MPIPN introduces an enhanced point-cloud architecture that encompasses explicit physical quantities and geometric features of computational domains. Then, the MPIPN extracts local and global features of the reconstructed point-cloud as parts of solving criteria of parametric systems, respectively. In addition, implicit physical quantities are embedded by encoding techniques as another part of solving criteria. Finally, all solving criteria that characterize parametric systems are amalgamated to form distinctive sequences as the input of the MPIPN, whose outputs are solutions of systems. The proposed framework is trained by adaptive physics-informed loss functions for corresponding computational domains. The framework is generalized to deal with new parametric conditions of systems. The effectiveness of the MPIPN is validated by applying it to solve steady parametric acoustic-structure coupling systems governed by the Helmholtz equations. An ablation experiment has been implemented to demonstrate the efficacy of physics-informed impact with a minority of supervised data. The proposed method yields reasonable precision across all computational domains under constant parametric conditions and changeable combinations of parametric conditions for acoustic-structure systems.
[ "['Chu Wang' 'Jinhong Wu' 'Yanzhi Wang' 'Zhijian Zha' 'Qi Zhou']" ]
null
null
2403.01133
null
null
http://arxiv.org/pdf/2403.01133v2
2024-04-14T11:24:26Z
2024-03-02T08:29:08Z
Evaluating Large Language Models as Virtual Annotators for Time-series Physical Sensing Data
Traditional human-in-the-loop-based annotation for time-series data like inertial data often requires access to alternate modalities like video or audio from the environment. These alternate sources provide the necessary information to the human annotator, as the raw numeric data is often too obfuscated even for an expert. However, this traditional approach has many concerns surrounding overall cost, efficiency, storage of additional modalities, time, scalability, and privacy. Interestingly, recent large language models (LLMs) are also trained with vast amounts of publicly available alphanumeric data, which allows them to comprehend and perform well on tasks beyond natural language processing. Naturally, this opens up a potential avenue to explore LLMs as virtual annotators where the LLMs will be directly provided the raw sensor data for annotation instead of relying on any alternate modality. Naturally, this could mitigate the problems of the traditional human-in-the-loop approach. Motivated by this observation, we perform a detailed study in this paper to assess whether the state-of-the-art (SOTA) LLMs can be used as virtual annotators for labeling time-series physical sensing data. To perform this in a principled manner, we segregate the study into two major phases. In the first phase, we investigate the challenges an LLM like GPT-4 faces in comprehending raw sensor data. Considering the observations from phase 1, in the next phase, we investigate the possibility of encoding the raw sensor data using SOTA SSL approaches and utilizing the projected time-series data to get annotations from the LLM. Detailed evaluation with four benchmark HAR datasets shows that SSL-based encoding and metric-based guidance allow the LLM to make more reasonable decisions and provide accurate annotations without requiring computationally expensive fine-tuning or sophisticated prompt engineering.
[ "['Aritra Hota' 'Soumyajit Chatterjee' 'Sandip Chakraborty']" ]
null
null
2403.01136
null
null
http://arxiv.org/pdf/2403.01136v1
2024-03-02T08:40:07Z
2024-03-02T08:40:07Z
LLM-PQ: Serving LLM on Heterogeneous Clusters with Phase-Aware Partition and Adaptive Quantization
Recent breakthroughs in Large-scale language models (LLMs) have demonstrated impressive performance on various tasks. The immense sizes of LLMs have led to very high resource demand and cost for running the models. Though the models are largely served using uniform high-caliber GPUs nowadays, utilizing a heterogeneous cluster with a mix of available high- and low-capacity GPUs can potentially substantially reduce the serving cost. There is a lack of designs to support efficient LLM serving using a heterogeneous cluster, while the current solutions focus on model partition and uniform compression among homogeneous devices. This paper proposes LLM-PQ, a system that advocates adaptive model quantization and phase-aware partition to improve LLM serving efficiency on heterogeneous GPU clusters. We carefully decide on mixed-precision model quantization together with phase-aware model partition and micro-batch sizing in distributed LLM serving with an efficient algorithm, to greatly enhance inference throughput while fulfilling user-specified model quality targets. Extensive experiments on production inference workloads in 11 different clusters demonstrate that LLM-PQ achieves up to 2.88x (2.26x on average) throughput improvement in inference, showing great advantages over state-of-the-art works.
[ "['Juntao Zhao' 'Borui Wan' 'Yanghua Peng' 'Haibin Lin' 'Chuan Wu']" ]
null
null
2403.01147
null
null
http://arxiv.org/pdf/2403.01147v1
2024-03-02T09:28:04Z
2024-03-02T09:28:04Z
A Hybrid Model for Traffic Incident Detection based on Generative Adversarial Networks and Transformer Model
In addition to enhancing traffic safety and facilitating prompt emergency response, traffic incident detection plays an indispensable role in intelligent transportation systems by providing real-time traffic status information. This enables the realization of intelligent traffic control and management. Previous research has identified that apart from employing advanced algorithmic models, the effectiveness of detection is also significantly influenced by challenges related to acquiring large datasets and addressing dataset imbalances. A hybrid model combining transformer and generative adversarial networks (GANs) is proposed to address these challenges. Experiments are conducted on four real datasets to validate the superiority of the transformer in traffic incident detection. Additionally, GANs are utilized to expand the dataset and achieve a balanced ratio of 1:4, 2:3, and 1:1. The proposed model is evaluated against the baseline model. The results demonstrate that the proposed model enhances the dataset size, balances the dataset, and improves the performance of traffic incident detection in various aspects.
[ "['Xinying Lu' 'Doudou Zhang' 'Jianli Xiao']" ]
null
null
2403.01158
null
null
http://arxiv.org/pdf/2403.01158v1
2024-03-02T10:18:37Z
2024-03-02T10:18:37Z
A Bayesian Committee Machine Potential for Oxygen-containing Organic Compounds
Understanding the pivotal role of oxygen-containing organic compounds in serving as an energy source for living organisms and contributing to protein formation is crucial in the field of biochemistry. This study addresses the challenge of comprehending protein-protein interactions (PPI) and developing predictive models for proteins and organic compounds, with a specific focus on quantifying their binding affinity. Here, we introduce the active Bayesian Committee Machine (BCM) potential, specifically designed to predict oxygen-containing organic compounds within eight groups of CHO. The BCM potential adopts a committee-based approach to tackle scalability issues associated with kernel regressors, particularly when dealing with large datasets. Its adaptable structure allows for efficient and cost-effective expansion, maintaining both transferability and scalability. Through systematic benchmarking, we position the sparse BCM potential as a promising contender in the pursuit of a universal machine learning potential.
[ "['Seungwon Kim' 'D. ChangMo Yang' 'Soohaeng Yoo Willow' 'Chang Woo Myung']" ]
null
null
2403.01183
null
null
http://arxiv.org/pdf/2403.01183v1
2024-03-02T11:44:14Z
2024-03-02T11:44:14Z
Leveraging Self-Supervised Learning for Scene Recognition in Child Sexual Abuse Imagery
Crime in the 21st century is split into a virtual and real world. However, the former has become a global menace to people's well-being and security in the latter. The challenges it presents must be faced with unified global cooperation, and we must rely more than ever on automated yet trustworthy tools to combat the ever-growing nature of online offenses. Over 10 million child sexual abuse reports are submitted to the US National Center for Missing & Exploited Children every year, and over 80% originated from online sources. Therefore, investigation centers and clearinghouses cannot manually process and correctly investigate all imagery. In light of that, reliable automated tools that can securely and efficiently deal with this data are paramount. In this sense, the scene recognition task looks for contextual cues in the environment, being able to group and classify child sexual abuse data without requiring to be trained on sensitive material. The scarcity and limitations of working with child sexual abuse images lead to self-supervised learning, a machine-learning methodology that leverages unlabeled data to produce powerful representations that can be more easily transferred to target tasks. This work shows that self-supervised deep learning models pre-trained on scene-centric data can reach 71.6% balanced accuracy on our indoor scene classification task and, on average, 2.2 percentage points better performance than a fully supervised version. We cooperate with Brazilian Federal Police experts to evaluate our indoor classification model on actual child abuse material. The results demonstrate a notable discrepancy between the features observed in widely used scene datasets and those depicted on sensitive materials.
[ "['Pedro H. V. Valois' 'João Macedo' 'Leo S. F. Ribeiro'\n 'Jefersson A. dos Santos' 'Sandra Avila']" ]
null
null
2403.01189
null
null
http://arxiv.org/pdf/2403.01189v1
2024-03-02T12:06:42Z
2024-03-02T12:06:42Z
Training Unbiased Diffusion Models From Biased Dataset
With significant advancements in diffusion models, addressing the potential risks of dataset bias becomes increasingly important. Since generated outputs directly suffer from dataset bias, mitigating latent bias becomes a key factor in improving sample quality and proportion. This paper proposes time-dependent importance reweighting to mitigate the bias for the diffusion models. We demonstrate that the time-dependent density ratio becomes more precise than previous approaches, thereby minimizing error propagation in generative learning. While directly applying it to score-matching is intractable, we discover that using the time-dependent density ratio both for reweighting and score correction can lead to a tractable form of the objective function to regenerate the unbiased data density. Furthermore, we theoretically establish a connection with traditional score-matching, and we demonstrate its convergence to an unbiased distribution. The experimental evidence supports the usefulness of the proposed method, which outperforms baselines including time-independent importance reweighting on CIFAR-10, CIFAR-100, FFHQ, and CelebA with various bias settings. Our code is available at https://github.com/alsdudrla10/TIW-DSM.
[ "['Yeongmin Kim' 'Byeonghu Na' 'Minsang Park' 'JoonHo Jang' 'Dongjun Kim'\n 'Wanmo Kang' 'Il-Chul Moon']" ]
null
null
2403.01192
null
null
http://arxiv.org/pdf/2403.01192v2
2024-03-08T15:18:19Z
2024-03-02T12:12:04Z
A Composite Decomposition Method for Large-Scale Global Optimization
Cooperative co-evolution (CC) algorithms, based on the divide-and-conquer strategy, have emerged as the predominant approach to solving large-scale global optimization (LSGO) problems. The efficiency and accuracy of the grouping stage significantly impact the performance of the optimization process. While the general separability grouping (GSG) method has overcome the limitation of previous differential grouping (DG) methods by enabling the decomposition of non-additively separable functions, it suffers from high computational complexity. To address this challenge, this article proposes a composite separability grouping (CSG) method, seamlessly integrating DG and GSG into a problem decomposition framework to utilize the strengths of both approaches. CSG introduces a step-by-step decomposition framework that accurately decomposes various problem types using fewer computational resources. By sequentially identifying additively, multiplicatively and generally separable variables, CSG progressively groups non-separable variables by recursively considering the interactions between each non-separable variable and the formed non-separable groups. Furthermore, to enhance the efficiency and accuracy of CSG, we introduce two innovative methods: a multiplicatively separable variable detection method and a non-separable variable grouping method. These two methods are designed to effectively detect multiplicatively separable variables and efficiently group non-separable variables, respectively. Extensive experimental results demonstrate that CSG achieves more accurate variable grouping with lower computational complexity compared to GSG and state-of-the-art DG series designs.
[ "['Maojiang Tian' 'Minyang Chen' 'Wei Du' 'Yang Tang' 'Yaochu Jin'\n 'Gary G. Yen']" ]
null
null
2403.01203
null
null
http://arxiv.org/pdf/2403.01203v1
2024-03-02T12:44:59Z
2024-03-02T12:44:59Z
Pseudo-Label Calibration Semi-supervised Multi-Modal Entity Alignment
Multi-modal entity alignment (MMEA) aims to identify equivalent entities between two multi-modal knowledge graphs for integration. Unfortunately, prior works have attempted to improve the interaction and fusion of multi-modal information but have overlooked the influence of modal-specific noise and the usage of labeled and unlabeled data in semi-supervised settings. In this work, we introduce a Pseudo-label Calibration Multi-modal Entity Alignment (PCMEA) in a semi-supervised way. Specifically, in order to generate holistic entity representations, we first devise various embedding modules and attention mechanisms to extract visual, structural, relational, and attribute features. Different from the prior direct fusion methods, we next propose to exploit mutual information maximization to filter the modal-specific noise and to augment modal-invariant commonality. Then, we combine pseudo-label calibration with momentum-based contrastive learning to make full use of the labeled and unlabeled data, which improves the quality of pseudo-labels and pulls aligned entities closer. Finally, extensive experiments on two MMEA datasets demonstrate the effectiveness of our PCMEA, which yields state-of-the-art performance.
[ "['Luyao Wang' 'Pengnian Qi' 'Xigang Bao' 'Chunlai Zhou' 'Biao Qin']" ]
null
null
2403.01204
null
null
http://arxiv.org/pdf/2403.01204v1
2024-03-02T12:45:01Z
2024-03-02T12:45:01Z
Stochastic gradient descent for streaming linear and rectified linear systems with Massart noise
We propose SGD-exp, a stochastic gradient descent approach for linear and ReLU regressions under Massart noise (adversarial semi-random corruption model) for the fully streaming setting. We show novel nearly linear convergence guarantees of SGD-exp to the true parameter with up to $50\%$ Massart corruption rate, and with any corruption rate in the case of symmetric oblivious corruptions. This is the first convergence guarantee result for robust ReLU regression in the streaming setting, and it shows the improved convergence rate over previous robust methods for $L_1$ linear regression due to a choice of an exponentially decaying step size, known for its efficiency in practice. Our analysis is based on the drift analysis of a discrete stochastic process, which could also be interesting on its own.
[ "['Halyun Jeong' 'Deanna Needell' 'Elizaveta Rebrova']" ]
null
null
2403.01216
null
null
http://arxiv.org/pdf/2403.01216v2
2024-04-04T02:15:39Z
2024-03-02T14:14:45Z
API Is Enough: Conformal Prediction for Large Language Models Without Logit-Access
This study aims to address the pervasive challenge of quantifying uncertainty in large language models (LLMs) without logit-access. Conformal Prediction (CP), known for its model-agnostic and distribution-free features, is a desired approach for various LLMs and data distributions. However, existing CP methods for LLMs typically assume access to the logits, which are unavailable for some API-only LLMs. In addition, logits are known to be miscalibrated, potentially leading to degraded CP performance. To tackle these challenges, we introduce a novel CP method that (1) is tailored for API-only LLMs without logit-access; (2) minimizes the size of prediction sets; and (3) ensures a statistical guarantee of the user-defined coverage. The core idea of this approach is to formulate nonconformity measures using both coarse-grained (i.e., sample frequency) and fine-grained uncertainty notions (e.g., semantic similarity). Experimental results on both close-ended and open-ended Question Answering tasks show our approach can mostly outperform the logit-based CP baselines.
[ "['Jiayuan Su' 'Jing Luo' 'Hongwei Wang' 'Lu Cheng']" ]
null
null
2403.01218
null
null
http://arxiv.org/pdf/2403.01218v3
2024-05-21T17:08:00Z
2024-03-02T14:22:40Z
Inexact Unlearning Needs More Careful Evaluations to Avoid a False Sense of Privacy
The high cost of model training makes it increasingly desirable to develop techniques for unlearning. These techniques seek to remove the influence of a training example without having to retrain the model from scratch. Intuitively, once a model has unlearned, an adversary that interacts with the model should no longer be able to tell whether the unlearned example was included in the model's training set or not. In the privacy literature, this is known as membership inference. In this work, we discuss adaptations of Membership Inference Attacks (MIAs) to the setting of unlearning (leading to their "U-MIA" counterparts). We propose a categorization of existing U-MIAs into "population U-MIAs", where the same attacker is instantiated for all examples, and "per-example U-MIAs", where a dedicated attacker is instantiated for each example. We show that the latter category, wherein the attacker tailors its membership prediction to each example under attack, is significantly stronger. Indeed, our results show that the commonly used U-MIAs in the unlearning literature overestimate the privacy protection afforded by existing unlearning techniques on both vision and language models. Our investigation reveals a large variance in the vulnerability of different examples to per-example U-MIAs. In fact, several unlearning algorithms lead to a reduced vulnerability for some, but not all, examples that we wish to unlearn, at the expense of increasing it for other examples. Notably, we find that the privacy protection for the remaining training examples may worsen as a consequence of unlearning. We also discuss the fundamental difficulty of equally protecting all examples using existing unlearning schemes, due to the different rates at which examples are unlearned. We demonstrate that naive attempts at tailoring unlearning stopping criteria to different examples fail to alleviate these issues.
[ "['Jamie Hayes' 'Ilia Shumailov' 'Eleni Triantafillou' 'Amr Khalifa'\n 'Nicolas Papernot']" ]
null
null
2403.01221
null
null
http://arxiv.org/pdf/2403.01221v2
2024-05-21T11:34:38Z
2024-03-02T14:30:57Z
A Two-Stage Algorithm for Cost-Efficient Multi-instance Counterfactual Explanations
Counterfactual explanations are among the most popular methods for analyzing black-box systems since they can recommend cost-efficient and actionable changes to the input of a system to obtain the desired system output. While most of the existing counterfactual methods explain a single instance, several real-world problems, such as customer satisfaction, require the identification of a single counterfactual that can satisfy multiple instances (e.g. customers) simultaneously. To address this limitation, in this work, we propose a flexible two-stage algorithm for finding groups of instances and computing cost-efficient multi-instance counterfactual explanations. The paper presents the algorithm and its performance against popular alternatives through a comparative evaluation.
[ "['André Artelt' 'Andreas Gregoriades']" ]
null
null
2403.01229
null
null
http://arxiv.org/pdf/2403.01229v1
2024-03-02T15:14:58Z
2024-03-02T15:14:58Z
REWIND Dataset: Privacy-preserving Speaking Status Segmentation from Multimodal Body Movement Signals in the Wild
Recognizing speaking in humans is a central task towards understanding social interactions. Ideally, speaking would be detected from individual voice recordings, as done previously for meeting scenarios. However, individual voice recordings are hard to obtain in the wild, especially in crowded mingling scenarios due to cost, logistics, and privacy concerns. As an alternative, machine learning models trained on video and wearable sensor data make it possible to recognize speech by detecting its related gestures in an unobtrusive, privacy-preserving way. These models themselves should ideally be trained using labels obtained from the speech signal. However, existing mingling datasets do not contain high quality audio recordings. Instead, speaking status annotations have often been inferred by human annotators from video, without validation of this approach against audio-based ground truth. In this paper we revisit no-audio speaking status estimation by presenting the first publicly available multimodal dataset with high-quality individual speech recordings of 33 subjects in a professional networking event. We present three baselines for no-audio speaking status segmentation: a) from video, b) from body acceleration (chest-worn accelerometer), c) from body pose tracks. In all cases we predict a 20Hz binary speaking status signal extracted from the audio, a time resolution not available in previous datasets. In addition to providing the signals and ground truth necessary to evaluate a wide range of speaking status detection methods, the availability of audio in REWIND makes it suitable for cross-modality studies not feasible with previous mingling datasets. Finally, our flexible data consent setup creates new challenges for multimodal systems under missing modalities.
[ "['Jose Vargas Quiros' 'Chirag Raman' 'Stephanie Tan' 'Ekin Gedik'\n 'Laura Cabrera-Quiros' 'Hayley Hung']" ]
null
null
2403.01232
null
null
http://arxiv.org/pdf/2403.01232v3
2024-04-06T23:26:26Z
2024-03-02T15:32:01Z
Polynormer: Polynomial-Expressive Graph Transformer in Linear Time
Graph transformers (GTs) have emerged as a promising architecture that is theoretically more expressive than message-passing graph neural networks (GNNs). However, typical GT models have at least quadratic complexity and thus cannot scale to large graphs. While there are several linear GTs recently proposed, they still lag behind GNN counterparts on several popular graph datasets, which poses a critical concern about their practical expressivity. To balance the trade-off between expressivity and scalability of GTs, we propose Polynormer, a polynomial-expressive GT model with linear complexity. Polynormer is built upon a novel base model that learns a high-degree polynomial on input features. To make the base model permutation equivariant, we integrate it with graph topology and node features separately, resulting in local and global equivariant attention models. Consequently, Polynormer adopts a linear local-to-global attention scheme to learn high-degree equivariant polynomials whose coefficients are controlled by attention scores. Polynormer has been evaluated on $13$ homophilic and heterophilic datasets, including large graphs with millions of nodes. Our extensive experiment results show that Polynormer outperforms state-of-the-art GNN and GT baselines on most datasets, even without the use of nonlinear activation functions.
[ "['Chenhui Deng' 'Zichao Yue' 'Zhiru Zhang']" ]
null
null
2403.01234
null
null
http://arxiv.org/pdf/2403.01234v1
2024-03-02T15:34:31Z
2024-03-02T15:34:31Z
Active Deep Kernel Learning of Molecular Functionalities: Realizing Dynamic Structural Embeddings
Exploring molecular spaces is crucial for advancing our understanding of chemical properties and reactions, leading to groundbreaking innovations in materials science, medicine, and energy. This paper explores an approach for active learning in molecular discovery using Deep Kernel Learning (DKL), a novel approach surpassing the limits of classical Variational Autoencoders (VAEs). Employing the QM9 dataset, we contrast DKL with traditional VAEs, which analyze molecular structures based on similarity, revealing limitations due to sparse regularities in latent spaces. DKL, however, offers a more holistic perspective by correlating structure with properties, creating latent spaces that prioritize molecular functionality. This is achieved by recalculating embedding vectors iteratively, aligning with the experimental availability of target properties. The resulting latent spaces are not only better organized but also exhibit unique characteristics such as concentrated maxima representing molecular functionalities and a correlation between predictive uncertainty and error. Additionally, the formation of exclusion regions around certain compounds indicates unexplored areas with potential for groundbreaking functionalities. This study underscores DKL's potential in molecular research, offering new avenues for understanding and discovering molecular functionalities beyond classical VAE limitations.
[ "['Ayana Ghosh' 'Maxim Ziatdinov and' 'Sergei V. Kalinin']" ]
null
null
2403.01242
null
null
http://arxiv.org/pdf/2403.01242v1
2024-03-02T16:06:03Z
2024-03-02T16:06:03Z
Augmenting Automation: Intent-Based User Instruction Classification with Machine Learning
Electric automation systems offer convenience and efficiency in controlling electrical circuits and devices. Traditionally, these systems rely on predefined commands for control, limiting flexibility and adaptability. In this paper, we propose a novel approach to augment automation by introducing intent-based user instruction classification using machine learning techniques. Our system represents user instructions as intents, allowing for dynamic control of electrical circuits without relying on predefined commands. Through a machine learning model trained on a labeled dataset of user instructions, our system classifies intents from user input, enabling a more intuitive and adaptable control scheme. We present the design and implementation of our intent-based electric automation system, detailing the development of the machine learning model for intent classification. Experimental results demonstrate the effectiveness of our approach in enhancing user experience and expanding the capabilities of electric automation systems. Our work contributes to the advancement of smart technologies by providing a more seamless interaction between users and their environments.
[ "['Lochan Basyal' 'Bijay Gaudel']" ]
null
null
2403.01245
null
null
http://arxiv.org/pdf/2403.01245v1
2024-03-02T16:11:58Z
2024-03-02T16:11:58Z
AcME-AD: Accelerated Model Explanations for Anomaly Detection
Pursuing fast and robust interpretability in Anomaly Detection is crucial, especially due to its significance in practical applications. Traditional Anomaly Detection methods excel in outlier identification but are often black-boxes, providing scant insights into their decision-making process. This lack of transparency compromises their reliability and hampers their adoption in scenarios where comprehending the reasons behind anomaly detection is vital. At the same time, getting explanations quickly is paramount in practical scenarios. To bridge this gap, we present AcME-AD, a novel approach rooted in Explainable Artificial Intelligence principles, designed to clarify Anomaly Detection models for tabular data. AcME-AD transcends the constraints of model-specific or resource-heavy explainability techniques by delivering a model-agnostic, efficient solution for interpretability. It offers local feature importance scores and a what-if analysis tool, shedding light on the factors contributing to each anomaly, thus aiding root cause analysis and decision-making. This paper elucidates AcME-AD's foundation, its benefits over existing methods, and validates its effectiveness with tests on both synthetic and real datasets. AcME-AD's implementation and experiment replication code is accessible in a public repository.
[ "['Valentina Zaccaria' 'David Dandolo' 'Chiara Masiero'\n 'Gian Antonio Susto']" ]
null
null
2403.01248
null
null
http://arxiv.org/pdf/2403.01248v1
2024-03-02T16:16:26Z
2024-03-02T16:16:26Z
SceneCraft: An LLM Agent for Synthesizing 3D Scene as Blender Code
This paper introduces SceneCraft, a Large Language Model (LLM) Agent converting text descriptions into Blender-executable Python scripts which render complex scenes with up to a hundred 3D assets. This process requires complex spatial planning and arrangement. We tackle these challenges through a combination of advanced abstraction, strategic planning, and library learning. SceneCraft first models a scene graph as a blueprint, detailing the spatial relationships among assets in the scene. SceneCraft then writes Python scripts based on this graph, translating relationships into numerical constraints for asset layout. Next, SceneCraft leverages the perceptual strengths of vision-language foundation models like GPT-V to analyze rendered images and iteratively refine the scene. On top of this process, SceneCraft features a library learning mechanism that compiles common script functions into a reusable library, facilitating continuous self-improvement without expensive LLM parameter tuning. Our evaluation demonstrates that SceneCraft surpasses existing LLM-based agents in rendering complex scenes, as shown by its adherence to constraints and favorable human assessments. We also showcase the broader application potential of SceneCraft by reconstructing detailed 3D scenes from the Sintel movie and guiding a video generative model with generated scenes as intermediary control signal.
[ "['Ziniu Hu' 'Ahmet Iscen' 'Aashi Jain' 'Thomas Kipf' 'Yisong Yue'\n 'David A. Ross' 'Cordelia Schmid' 'Alireza Fathi']" ]
null
null
2403.01267
null
null
http://arxiv.org/pdf/2403.01267v1
2024-03-02T17:10:44Z
2024-03-02T17:10:44Z
Dissecting Language Models: Machine Unlearning via Selective Pruning
Understanding and shaping the behaviour of Large Language Models (LLMs) is increasingly important as applications become more powerful and more frequently adopted. This paper introduces a machine unlearning method specifically designed for LLMs. We introduce a selective pruning method for LLMs that removes neurons based on their relative importance on a targeted capability compared to overall network performance. This approach is a compute- and data-efficient method for identifying and removing neurons that enable specific behaviours. Our findings reveal that both feed-forward and attention neurons in LLMs are specialized; that is, for specific tasks, certain neurons are more crucial than others.
[ "['Nicholas Pochinkov' 'Nandi Schoots']" ]
null
null
2403.01268
null
null
http://arxiv.org/pdf/2403.01268v1
2024-03-02T17:12:32Z
2024-03-02T17:12:32Z
Defending Against Data Reconstruction Attacks in Federated Learning: An Information Theory Approach
Federated Learning (FL) trains a black-box and high-dimensional model among different clients by exchanging parameters instead of direct data sharing, which mitigates the privacy leak incurred by machine learning. However, FL still suffers from membership inference attacks (MIA) or data reconstruction attacks (DRA). In particular, an attacker can extract the information from local datasets by constructing DRA, which cannot be effectively throttled by existing techniques, e.g., Differential Privacy (DP). In this paper, we aim to ensure a strong privacy guarantee for FL under DRA. We prove that reconstruction errors under DRA are constrained by the information acquired by an attacker, which means that constraining the transmitted information can effectively throttle DRA. To quantify the information leakage incurred by FL, we establish a channel model, which depends on the upper bound of joint mutual information between the local dataset and multiple transmitted parameters. Moreover, the channel model indicates that the transmitted information can be constrained through data space operation, which can improve training efficiency and the model accuracy under constrained information. According to the channel model, we propose algorithms to constrain the information transmitted in a single round of local training. With a limited number of training rounds, the algorithms ensure that the total amount of transmitted information is limited. Furthermore, our channel model can be applied to various privacy-enhancing techniques (such as DP) to enhance privacy guarantees against DRA. Extensive experiments with real-world datasets validate the effectiveness of our methods.
[ "['Qi Tan' 'Qi Li' 'Yi Zhao' 'Zhuotao Liu' 'Xiaobing Guo' 'Ke Xu']" ]
null
null
2403.01272
null
null
http://arxiv.org/pdf/2403.01272v1
2024-03-02T17:28:55Z
2024-03-02T17:28:55Z
Can a Confident Prior Replace a Cold Posterior?
Benchmark datasets used for image classification tend to have very low levels of label noise. When Bayesian neural networks are trained on these datasets, they often underfit, misrepresenting the aleatoric uncertainty of the data. A common solution is to cool the posterior, which improves fit to the training data but is challenging to interpret from a Bayesian perspective. We explore whether posterior tempering can be replaced by a confidence-inducing prior distribution. First, we introduce a "DirClip" prior that is practical to sample and nearly matches the performance of a cold posterior. Second, we introduce a "confidence prior" that directly approximates a cold likelihood in the limit of decreasing temperature but cannot be easily sampled. Lastly, we provide several general insights into confidence-inducing priors, such as when they might diverge and how fine-tuning can mitigate numerical instability.
[ "['Martin Marek' 'Brooks Paige' 'Pavel Izmailov']" ]
null
null
2403.01273
null
null
http://arxiv.org/pdf/2403.01273v1
2024-03-02T17:29:22Z
2024-03-02T17:29:22Z
NoMAD-Attention: Efficient LLM Inference on CPUs Through Multiply-add-free Attention
Large language model inference on Central Processing Units (CPU) is challenging due to the vast quantities of expensive Multiply-Add (MAD) matrix operations in the attention computations. In this paper, we argue that there is a rare gem in modern CPUs, Single-Instruction-Multiple-Data (SIMD) registers, which allow for ultra-low-latency lookups in batch. We leverage this unique capability of CPUs to propose NoMAD-Attention, an efficient attention algorithm that replaces MAD operations with in-register lookups. Through hardware-aware algorithmic designs, NoMAD-Attention achieves the computation of attention scores using repeated fast accesses to SIMD registers despite their highly limited sizes. Moreover, NoMAD-Attention works with pre-trained attention-based LLMs without model finetuning. Empirical evaluations demonstrate that NoMAD-Attention maintains the quality of the original LLMs well, and speeds up the 4-bit quantized LLaMA-7B-based model by up to 2$\times$ at 16k context length. Our results are reproducible at https://github.com/tonyzhang617/nomad-dist.
[ "['Tianyi Zhang' 'Jonah Wonkyu Yi' 'Bowen Yao' 'Zhaozhuo Xu'\n 'Anshumali Shrivastava']" ]
null
null
2403.01299
null
null
http://arxiv.org/pdf/2403.01299v1
2024-03-02T19:44:19Z
2024-03-02T19:44:19Z
A Photonic Physically Unclonable Function's Resilience to Multiple-Valued Machine Learning Attacks
Physically unclonable functions (PUFs) identify integrated circuits using nonlinearly-related challenge-response pairs (CRPs). Ideally, the relationship between challenges and corresponding responses is unpredictable, even if a subset of CRPs is known. Previous work developed a photonic PUF offering improved security compared to non-optical counterparts. Here, we investigate this PUF's susceptibility to Multiple-Valued-Logic-based machine learning attacks. We find that approximately 1,000 CRPs are necessary to train models that predict response bits better than random chance. Given the significant challenge of acquiring a vast number of CRPs from a photonic PUF, our results demonstrate photonic PUF resilience against such attacks.
[ "['Jessie M. Henderson' 'Elena R. Henderson' 'Clayton A. Harper'\n 'Hiva Shahoei' 'William V. Oxford' 'Eric C. Larson'\n 'Duncan L. MacFarlane' 'Mitchell A. Thornton']" ]
null
null
2403.01301
null
null
http://arxiv.org/pdf/2403.01301v1
2024-03-02T19:55:38Z
2024-03-02T19:55:38Z
Supplier Recommendation in Online Procurement
Supply chain optimization is key to a healthy and profitable business. Many companies use online procurement systems to agree contracts with suppliers. It is vital that the most competitive suppliers are invited to bid for such contracts. In this work, we propose a recommender system to assist with supplier discovery in road freight online procurement. Our system is able to provide personalized supplier recommendations, taking into account customer needs and preferences. This is a novel application of recommender systems, calling for design choices that fit the unique requirements of online procurement. Our preliminary results, using real-world data, are promising.
[ "['Victor Coscrato' 'Derek Bridge']" ]
null
null
2403.01306
null
null
http://arxiv.org/pdf/2403.01306v3
2024-06-11T07:18:44Z
2024-03-02T20:36:10Z
ICC: Quantifying Image Caption Concreteness for Multimodal Dataset Curation
Web-scale training on paired text-image data is becoming increasingly central to multimodal learning, but is challenged by the highly noisy nature of datasets in the wild. Standard data filtering approaches succeed in removing mismatched text-image pairs, but permit semantically related but highly abstract or subjective text. These approaches lack the fine-grained ability to isolate the most concrete samples that provide the strongest signal for learning in a noisy dataset. In this work, we propose a new metric, image caption concreteness, that evaluates caption text without an image reference to measure its concreteness and relevancy for use in multimodal learning. Our approach leverages strong foundation models for measuring visual-semantic information loss in multimodal representations. We demonstrate that this strongly correlates with human evaluation of concreteness in both single-word and sentence-level texts. Moreover, we show that curation using ICC complements existing approaches: It succeeds in selecting the highest quality samples from multimodal web-scale datasets to allow for efficient training in resource-constrained settings.
[ "['Moran Yanuka' 'Morris Alper' 'Hadar Averbuch-Elor' 'Raja Giryes']" ]
null
null
2403.01308
null
null
http://arxiv.org/pdf/2403.01308v2
2024-03-14T16:37:37Z
2024-03-02T20:40:11Z
VBART: The Turkish LLM
We present VBART, the first Turkish sequence-to-sequence Large Language Models (LLMs) pre-trained on a large corpus from scratch. VBART models are compact LLMs based on good ideas leveraged from BART and mBART and come in two sizes, Large and XLarge. Fine-tuned VBART models surpass the prior state-of-the-art results in abstractive text summarization, title generation, text paraphrasing, question answering and question generation tasks. They allow fine-tuning for future text generation tasks and datasets, carving a new path for Turkish Natural Language Processing (NLP) research. Our work shows that a pre-trained LLM for Turkish outperforms multilingual models by up to 3x, improving existing results and providing efficient models for training and inference. Moreover, we show that our monolingual tokenizer is up to 11x more efficient than multilingual tokenizers. Last but not least, we introduce a method to enlarge an existing pre-trained LLM and question the relevancy of the Chinchilla Scaling Law to sequence-to-sequence masked language models. Our fine-tuned models, tokenizer and cleaned vngrs-web-corpus of 135 GB are publicly available at huggingface.co/vngrs-ai.
[ "['Meliksah Turker' 'Mehmet Erdi Ari' 'Aydin Han']" ]
null
null
2403.01309
null
null
http://arxiv.org/pdf/2403.01309v1
2024-03-02T20:46:56Z
2024-03-02T20:46:56Z
VNLP: Turkish NLP Package
In this work, we present VNLP: the first dedicated, complete, open-source, well-documented, lightweight, production-ready, state-of-the-art Natural Language Processing (NLP) package for the Turkish language. It contains a wide variety of tools, ranging from the simplest tasks, such as sentence splitting and text normalization, to the more advanced ones, such as text and token classification models. Its token classification models are based on "Context Model", a novel architecture that is both an encoder and an auto-regressive model. NLP tasks solved by VNLP models include but are not limited to Sentiment Analysis, Named Entity Recognition, Morphological Analysis & Disambiguation and Part-of-Speech Tagging. Moreover, it comes with pre-trained word embeddings and corresponding SentencePiece Unigram tokenizers. VNLP has an open-source GitHub repository, ReadtheDocs documentation, PyPi package for convenient installation, Python and command-line API and a demo page to test all the functionality. Consequently, our main contribution is a complete, compact, easy-to-install and easy-to-use NLP package for Turkish.
[ "['Meliksah Turker' 'Mehmet Erdi Ari' 'Aydin Han']" ]
null
null
2403.01315
null
null
http://arxiv.org/pdf/2403.01315v2
2024-05-30T00:18:21Z
2024-03-02T21:22:46Z
Near-optimal Per-Action Regret Bounds for Sleeping Bandits
We derive near-optimal per-action regret bounds for sleeping bandits, in which both the sets of available arms and their losses in every round are chosen by an adversary. In a setting with $K$ total arms and at most $A$ available arms in each round over $T$ rounds, the best known upper bound is $O(K\sqrt{TA\ln{K}})$, obtained indirectly via minimizing internal sleeping regrets. Compared to the minimax $\Omega(\sqrt{TA})$ lower bound, this upper bound contains an extra multiplicative factor of $K\ln{K}$. We address this gap by directly minimizing the per-action regret using generalized versions of EXP3, EXP3-IX and FTRL with Tsallis entropy, thereby obtaining near-optimal bounds of order $O(\sqrt{TA\ln{K}})$ and $O(\sqrt{T\sqrt{AK}})$. We extend our results to the setting of bandits with advice from sleeping experts, generalizing EXP4 along the way. This leads to new proofs for a number of existing adaptive and tracking regret bounds for standard non-sleeping bandits. Extending our results to the bandit version of experts that report their confidences leads to new bounds for the confidence regret that depends primarily on the sum of experts' confidences. We prove a lower bound, showing that for any minimax optimal algorithms, there exists an action whose regret is sublinear in $T$ but linear in the number of its active rounds.
[ "['Quan Nguyen' 'Nishant A. Mehta']" ]
null
null
2403.01317
null
null
http://arxiv.org/pdf/2403.01317v4
2024-04-10T23:31:08Z
2024-03-02T21:33:23Z
Less is More: Hop-Wise Graph Attention for Scalable and Generalizable Learning on Circuits
While graph neural networks (GNNs) have gained popularity for learning circuit representations in various electronic design automation (EDA) tasks, they face challenges in scalability when applied to large graphs and exhibit limited generalizability to new designs. These limitations make them less practical for addressing large-scale, complex circuit problems. In this work we propose HOGA, a novel attention-based model for learning circuit representations in a scalable and generalizable manner. HOGA first computes hop-wise features per node prior to model training. Subsequently, the hop-wise features are solely used to produce node representations through a gated self-attention module, which adaptively learns important features among different hops without involving the graph topology. As a result, HOGA is adaptive to various structures across different circuits and can be efficiently trained in a distributed manner. To demonstrate the efficacy of HOGA, we consider two representative EDA tasks: quality of results (QoR) prediction and functional reasoning. Our experimental results indicate that (1) HOGA reduces estimation error over conventional GNNs by 46.76% for predicting QoR after logic synthesis; (2) HOGA improves 10.0% reasoning accuracy over GNNs for identifying functional blocks on unseen gate-level netlists after complex technology mapping; (3) The training time for HOGA almost linearly decreases with an increase in computing resources.
[ "['Chenhui Deng' 'Zichao Yue' 'Cunxi Yu' 'Gokce Sarar' 'Ryan Carey'\n 'Rajeev Jain' 'Zhiru Zhang']" ]
null
null
2403.01318
null
null
http://arxiv.org/pdf/2403.01318v1
2024-03-02T21:37:40Z
2024-03-02T21:37:40Z
High-Dimensional Tail Index Regression: with An Application to Text Analyses of Viral Posts in Social Media
Motivated by the empirical power law of the distributions of credits (e.g., the number of "likes") of viral posts in social media, we introduce the high-dimensional tail index regression and methods of estimation and inference for its parameters. We propose a regularized estimator, establish its consistency, and derive its convergence rate. To conduct inference, we propose to debias the regularized estimate, and establish the asymptotic normality of the debiased estimator. Simulation studies support our theory. These methods are applied to text analyses of viral posts in X (formerly Twitter) concerning LGBTQ+.
[ "['Yuya Sasaki' 'Jing Tao' 'Yulong Wang']" ]
null
null
2403.01329
null
null
http://arxiv.org/pdf/2403.01329v1
2024-03-02T22:27:44Z
2024-03-02T22:27:44Z
Bespoke Non-Stationary Solvers for Fast Sampling of Diffusion and Flow Models
This paper introduces Bespoke Non-Stationary (BNS) Solvers, a solver distillation approach to improve sample efficiency of Diffusion and Flow models. BNS solvers are based on a family of non-stationary solvers that provably subsumes existing numerical ODE solvers and consequently demonstrate considerable improvement in sample approximation (PSNR) over these baselines. Compared to model distillation, BNS solvers benefit from a tiny parameter space ($<$200 parameters), fast optimization (two orders of magnitude faster), maintain diversity of samples, and in contrast to previous solver distillation approaches nearly close the gap from standard distillation methods such as Progressive Distillation in the low-medium NFE regime. For example, BNS solver achieves 45 PSNR / 1.76 FID using 16 NFE in class-conditional ImageNet-64. We experimented with BNS solvers for conditional image generation, text-to-image generation, and text-2-audio generation showing significant improvement in sample approximation (PSNR) in all.
[ "['Neta Shaul' 'Uriel Singer' 'Ricky T. Q. Chen' 'Matthew Le' 'Ali Thabet'\n 'Albert Pumarola' 'Yaron Lipman']" ]
null
null
2403.01332
null
null
http://arxiv.org/pdf/2403.01332v1
2024-03-02T22:38:01Z
2024-03-02T22:38:01Z
Chaining thoughts and LLMs to learn DNA structural biophysics
The future development of an AI scientist, a tool that is capable of integrating a variety of experimental data and generating testable hypotheses, holds immense potential. So far, bespoke machine learning models have been created to specialize in singular scientific tasks, but otherwise lack the flexibility of a general purpose model. Here, we show that a general purpose large language model, chatGPT 3.5-turbo, can be fine-tuned to learn the structural biophysics of DNA. We find that both fine-tuning models to return chain-of-thought responses and chaining together models fine-tuned for subtasks have an enhanced ability to analyze and design DNA sequences and their structures.
[ "['Tyler D. Ross' 'Ashwin Gopinath']" ]
null
null
2403.01339
null
null
http://arxiv.org/pdf/2403.01339v1
2024-03-02T23:19:10Z
2024-03-02T23:19:10Z
Uniform $\mathcal{C}^k$ Approximation of $G$-Invariant and Antisymmetric Functions, Embedding Dimensions, and Polynomial Representations
For any subgroup $G$ of the symmetric group $\mathcal{S}_n$ on $n$ symbols, we present results for the uniform $\mathcal{C}^k$ approximation of $G$-invariant functions by $G$-invariant polynomials. For the case of totally symmetric functions ($G = \mathcal{S}_n$), we show that this gives rise to the sum-decomposition Deep Sets ansatz of Zaheer et al. (2018), where both the inner and outer functions can be chosen to be smooth, and moreover, the inner function can be chosen to be independent of the target function being approximated. In particular, we show that the embedding dimension required is independent of the regularity of the target function, the accuracy of the desired approximation, as well as $k$. Next, we show that a similar procedure allows us to obtain a uniform $\mathcal{C}^k$ approximation of antisymmetric functions as a sum of $K$ terms, where each term is a product of a smooth totally symmetric function and a smooth antisymmetric homogeneous polynomial of degree at most $\binom{n}{2}$. We also provide upper and lower bounds on $K$ and show that $K$ is independent of the regularity of the target function, the desired approximation accuracy, and $k$.
[ "['Soumya Ganguly' 'Khoa Tran' 'Rahul Sarkar']" ]
null
null
2403.01344
null
null
http://arxiv.org/pdf/2403.01344v1
2024-03-02T23:37:16Z
2024-03-02T23:37:16Z
Mitigating the Bias in the Model for Continual Test-Time Adaptation
Continual Test-Time Adaptation (CTA) is a challenging task that aims to adapt a source pre-trained model to continually changing target domains. In the CTA setting, a model does not know when the target domain changes, thus facing a drastic change in the distribution of streaming inputs during test-time. The key challenge is to keep adapting the model to the continually changing target domains in an online manner. We find that a model shows highly biased predictions as it constantly adapts to the changing distribution of the target data. It predicts certain classes more often than other classes, making inaccurate over-confident predictions. This paper mitigates this issue to improve performance in the CTA scenario. To alleviate the bias issue, we maintain class-wise exponential moving average target prototypes with reliable target samples and exploit them to cluster the target features class-wise. Moreover, we aim to align the target distributions to the source distribution by anchoring the target feature to its corresponding source prototype. With extensive experiments, our proposed method achieves noteworthy performance gain when applied on top of existing CTA methods without substantial adaptation time overhead.
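A minimal sketch, assuming a PyTorch setup, of the class-wise EMA prototypes and the prototype-anchoring idea described in the abstract above; the confidence threshold, momentum value, and loss form are illustrative assumptions rather than the paper's exact design:

```python
import torch
import torch.nn.functional as F

def update_prototypes(prototypes, feats, probs, conf_thresh=0.9, momentum=0.99):
    """Update class-wise EMA prototypes using only confident (reliable) target samples.

    prototypes: (C, D) tensor of running class prototypes
    feats:      (B, D) target features from the adapting model
    probs:      (B, C) softmax predictions for the same batch
    """
    conf, pseudo = probs.max(dim=1)                  # confidence and pseudo-label per sample
    for c in range(prototypes.size(0)):
        mask = (pseudo == c) & (conf > conf_thresh)  # keep only reliable samples of class c
        if mask.any():
            batch_proto = feats[mask].detach().mean(dim=0)
            prototypes[c] = momentum * prototypes[c] + (1 - momentum) * batch_proto
    return prototypes

def prototype_anchoring_loss(feats, probs, target_protos, source_protos):
    """Pull each target feature towards its (pseudo-labelled) source prototype and
    cluster target features around the class-wise EMA target prototypes."""
    pseudo = probs.argmax(dim=1)
    anchor = F.mse_loss(feats, source_protos[pseudo])            # align target to source prototypes
    cluster = F.mse_loss(feats, target_protos[pseudo].detach())  # class-wise clustering term
    return anchor + cluster
```

Filtering by confidence is what keeps the prototypes "reliable", while the EMA lets them track the drifting target distribution without being dominated by any single batch.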
[ "['Inseop Chung' 'Kyomin Hwang' 'Jayeon Yoo' 'Nojun Kwak']" ]
null
null
2403.01346
null
null
http://arxiv.org/pdf/2403.01346v1
2024-03-02T23:53:24Z
2024-03-02T23:53:24Z
Improve Cost Efficiency of Active Learning over Noisy Dataset
Active learning is a learning strategy whereby the machine learning algorithm actively identifies and labels data points to optimize its learning. This strategy is particularly effective in domains where an abundance of unlabeled data exists, but the cost of labeling these data points is prohibitively expensive. In this paper, we consider cases of binary classification, where acquiring a positive instance incurs a significantly higher cost compared to that of negative instances. For example, in the financial industry, such as in money-lending businesses, a defaulted loan constitutes a positive event leading to substantial financial loss. To address this issue, we propose a shifted normal distribution sampling function that samples from a wider range than typical uncertainty sampling. Our simulation underscores that our proposed sampling function limits both noisy and positive label selection, delivering between 20% and 32% improved cost efficiency over different test datasets.
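One way to realise the shifted-sampling idea described above is to weight the unlabelled pool with a normal density whose centre is moved away from the usual uncertainty point p = 0.5; the centre, width, and query batch size below are illustrative assumptions, not the paper's values:

```python
import numpy as np

def shifted_normal_weights(probs, mu=0.35, sigma=0.15):
    """Weight unlabelled instances with a normal density whose centre is shifted
    away from p = 0.5, widening the sampled range while steering selection away
    from costly positive labels.

    probs: predicted probability of the positive class for each unlabelled instance.
    """
    probs = np.asarray(probs, dtype=float)
    w = np.exp(-0.5 * ((probs - mu) / sigma) ** 2)
    return w / w.sum()

# Example: draw a query batch proportionally to the shifted-normal weights.
rng = np.random.default_rng(0)
pool_probs = rng.uniform(0, 1, size=1000)          # stand-in for classifier outputs on the pool
weights = shifted_normal_weights(pool_probs)
query_idx = rng.choice(len(pool_probs), size=10, replace=False, p=weights)
```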
[ "['Zan-Kai Chong' 'Hiroyuki Ohsaki' 'Bryan Ng']" ]
null
null
2403.01348
null
null
http://arxiv.org/abs/2403.01348v1
2024-03-03T00:01:29Z
2024-03-03T00:01:29Z
SANGRIA: Stacked Autoencoder Neural Networks with Gradient Boosting for Indoor Localization
Indoor localization is a critical task in many embedded applications, such as asset tracking, emergency response, and real-time navigation. In this article, we propose a novel fingerprinting-based framework for indoor localization called SANGRIA that uses stacked autoencoder neural networks with gradient boosted trees. Our approach is designed to overcome the device heterogeneity challenge that can create uncertainty in wireless signal measurements across embedded devices used for localization. We compare SANGRIA to several state-of-the-art frameworks and demonstrate 42.96% lower average localization error across diverse indoor locales and heterogeneous devices.
[ "['Danish Gufran' 'Saideep Tiku' 'Sudeep Pasricha']" ]
null
null
2403.01352
null
null
http://arxiv.org/abs/2403.01352v1
2024-03-03T00:14:12Z
2024-03-03T00:14:12Z
Improving Uncertainty Sampling with Bell Curve Weight Function
Typically, a supervised learning model is trained using passive learning by randomly selecting unlabelled instances to annotate. This approach is effective for learning a model, but can be costly in cases where acquiring labelled instances is expensive. For example, it can be time-consuming to manually identify spam mails (labelled instances) from thousands of emails (unlabelled instances) flooding an inbox during initial data collection. Generally, we answer the above scenario with uncertainty sampling, an active learning method that improves the efficiency of supervised learning by using fewer labelled instances than passive learning. Given an unlabelled data pool, uncertainty sampling queries the labels of instances where the predicted probabilities, $p$, fall into the uncertainty region, i.e., $p \approx 0.5$. The newly acquired labels are then added to the existing labelled data pool to learn a new model. Nonetheless, the performance of uncertainty sampling is susceptible to the area of unpredictable responses (AUR) and the nature of the dataset. It is difficult to determine whether to use passive learning or uncertainty sampling without prior knowledge of a new dataset. To address this issue, we propose bell curve sampling, which employs a bell curve weight function to acquire new labels. With the bell curve centred at $p = 0.5$, bell curve sampling selects instances whose predicted values are in the uncertainty area most of the time without neglecting the rest. Simulation results show that bell curve sampling outperforms uncertainty sampling and passive learning most of the time, across datasets of different natures and with AUR.
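A minimal sketch of bell curve sampling as described above, assuming a Gaussian weight function centred at $p = 0.5$; the width and query batch size are illustrative assumptions:

```python
import numpy as np

def bell_curve_weights(probs, centre=0.5, sigma=0.1):
    """Bell-curve weight function centred at p = 0.5: instances near the uncertainty
    region are queried most often, but the rest of the pool keeps a non-zero
    chance of being selected."""
    probs = np.asarray(probs, dtype=float)
    w = np.exp(-0.5 * ((probs - centre) / sigma) ** 2)
    return w / w.sum()

rng = np.random.default_rng(1)
pool_probs = rng.uniform(0, 1, size=5000)           # predicted probabilities of the unlabelled pool
batch = rng.choice(len(pool_probs), size=20, replace=False,
                   p=bell_curve_weights(pool_probs))
```

Unlike hard-threshold uncertainty sampling, every instance keeps a non-zero selection probability, which is what lets the method degrade gracefully when the uncertainty region is unreliable.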
[ "['Zan-Kai Chong' 'Hiroyuki Ohsaki' 'Bok-Min Goi']" ]
null
null
2403.01355
null
null
http://arxiv.org/pdf/2403.01355v1
2024-03-03T00:58:27Z
2024-03-03T00:58:27Z
a-DCF: an architecture agnostic metric with application to spoofing-robust speaker verification
Spoofing detection is today a mainstream research topic. Standard metrics can be applied to evaluate the performance of isolated spoofing detection solutions and others have been proposed to support their evaluation when they are combined with speaker detection. These either have well-known deficiencies or restrict the architectural approach to combine speaker and spoof detectors. In this paper, we propose an architecture-agnostic detection cost function (a-DCF). A generalisation of the original DCF used widely for the assessment of automatic speaker verification (ASV), the a-DCF is designed for the evaluation of spoofing-robust ASV. Like the DCF, the a-DCF reflects the cost of decisions in a Bayes risk sense, with explicitly defined class priors and detection cost model. We demonstrate the merit of the a-DCF through the benchmarking evaluation of architecturally-heterogeneous spoofing-robust ASV solutions.
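For orientation, the classical DCF scores a system by a prior-weighted Bayes risk over miss and false-alarm rates; an architecture-agnostic cost for spoofing-robust ASV can be pictured as adding a separately weighted spoof false-alarm term. The exact a-DCF definition is given in the paper; the sketch below only illustrates the structure of such a cost model, and all priors and costs are placeholder values:

```python
def dcf(p_miss, p_fa, p_target=0.05, c_miss=1.0, c_fa=1.0):
    """Classical ASV detection cost function: prior-weighted Bayes risk of
    missing a target trial vs. falsely accepting a non-target trial."""
    return c_miss * p_target * p_miss + c_fa * (1.0 - p_target) * p_fa

def a_dcf_sketch(p_miss, p_fa_nontarget, p_fa_spoof,
                 pi_target=0.9, pi_nontarget=0.05, pi_spoof=0.05,
                 c_miss=1.0, c_fa=1.0, c_fa_spoof=1.0):
    """Illustrative spoofing-aware extension: zero-effort non-targets and spoofed
    trials get their own priors and false-alarm costs (placeholder values only)."""
    return (c_miss * pi_target * p_miss
            + c_fa * pi_nontarget * p_fa_nontarget
            + c_fa_spoof * pi_spoof * p_fa_spoof)
```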
[ "['Hye-jin Shim' 'Jee-weon Jung' 'Tomi Kinnunen' 'Nicholas Evans'\n 'Jean-Francois Bonastre' 'Itshak Lapidot']" ]
null
null
2403.01361
null
null
http://arxiv.org/pdf/2403.01361v2
2024-07-06T00:44:23Z
2024-03-03T01:33:47Z
Bandit Profit-maximization for Targeted Marketing
We study a sequential profit-maximization problem, optimizing for both price and ancillary variables like marketing expenditures. Specifically, we aim to maximize profit over an arbitrary sequence of multiple demand curves, each dependent on a distinct ancillary variable, but sharing the same price. A prototypical example is targeted marketing, where a firm (seller) wishes to sell a product over multiple markets. The firm may invest different marketing expenditures for different markets to optimize customer acquisition, but must maintain the same price across all markets. Moreover, markets may have heterogeneous demand curves, each responding to prices and marketing expenditures differently. The firm's objective is to maximize its gross profit, the total revenue minus marketing costs. Our results are near-optimal algorithms for this class of problems in an adversarial bandit setting, where demand curves are arbitrary non-adaptive sequences, and the firm observes only noisy evaluations of chosen points on the demand curves. For $n$ demand curves (markets), we prove a regret upper bound of $\tilde{O}(nT^{3/4})$ and a lower bound of $\Omega((nT)^{3/4})$ for monotonic demand curves, and a regret bound of $\tilde{\Theta}(nT^{2/3})$ for demand curves that are monotonic in price and concave in the ancillary variables.
[ "['Joon Suk Huh' 'Ellen Vitercik' 'Kirthevasan Kandasamy']" ]
null
null
2403.01369
null
null
http://arxiv.org/pdf/2403.01369v1
2024-03-03T02:05:17Z
2024-03-03T02:05:17Z
A Closer Look at Wav2Vec2 Embeddings for On-Device Single-Channel Speech Enhancement
Self-supervised learned models have been found to be very effective for certain speech tasks such as automatic speech recognition, speaker identification, keyword spotting and others. While the features are undeniably useful in speech recognition and associated tasks, their utility in speech enhancement systems is yet to be firmly established, and perhaps not properly understood. In this paper, we investigate the use of SSL representations for single-channel speech enhancement in challenging conditions and find that they add very little value for the enhancement task. Our constraints are designed around on-device real-time speech enhancement -- the model is causal and the compute footprint is small. Additionally, we focus on low SNR conditions where such models struggle to provide good enhancement. In order to systematically examine how SSL representations impact performance of such enhancement models, we propose a variety of techniques to utilize these embeddings which include different forms of knowledge-distillation and pre-training.
[ "['Ravi Shankar' 'Ke Tan' 'Buye Xu' 'Anurag Kumar']" ]
null
null
2403.01371
null
null
http://arxiv.org/pdf/2403.01371v2
2024-05-31T16:05:37Z
2024-03-03T02:19:49Z
eXponential FAmily Dynamical Systems (XFADS): Large-scale nonlinear Gaussian state-space modeling
State-space graphical models and the variational autoencoder framework provide a principled apparatus for learning dynamical systems from data. State-of-the-art probabilistic approaches are often able to scale to large problems at the cost of flexibility of the variational posterior or expressivity of the dynamics model. However, those consolidations can be detrimental if the ultimate goal is to learn a generative model capable of explaining the spatiotemporal structure of the data and making accurate forecasts. We introduce a low-rank structured variational autoencoding framework for nonlinear Gaussian state-space graphical models capable of capturing dense covariance structures that are important for learning dynamical systems with predictive capabilities. Our inference algorithm exploits the covariance structures that arise naturally from sample based approximate Gaussian message passing and low-rank amortized posterior updates -- effectively performing approximate variational smoothing with time complexity scaling linearly in the state dimensionality. In comparisons with other deep state-space model architectures our approach consistently demonstrates the ability to learn a more predictive generative model. Furthermore, when applied to neural physiological recordings, our approach is able to learn a dynamical system capable of forecasting population spiking and behavioral correlates from a small portion of single trials.
[ "['Matthew Dowling' 'Yuan Zhao' 'Il Memming Park']" ]
null
null
2403.01384
null
null
http://arxiv.org/pdf/2403.01384v2
2024-05-06T02:29:14Z
2024-03-03T03:27:07Z
On the Compressibility of Quantized Large Language Models
Deploying Large Language Models (LLMs) on edge or mobile devices offers significant benefits, such as enhanced data privacy and real-time processing capabilities. However, it also faces critical challenges due to the substantial memory requirement of LLMs. Quantization is an effective way of reducing the model size while maintaining good performance. However, even after quantization, LLMs may still be too big to fit entirely into the limited memory of edge or mobile devices and have to be partially loaded from the storage to complete the inference. In this case, the I/O latency of model loading becomes the bottleneck of the LLM inference latency. In this work, we take a preliminary step in studying the application of data compression techniques to reduce data movement and thus speed up the inference of quantized LLMs on memory-constrained devices. In particular, we discuss the compressibility of quantized LLMs, the trade-off between the compressibility and performance of quantized LLMs, and opportunities to optimize both of them jointly.
[ "['Yu Mao' 'Weilan Wang' 'Hongchao Du' 'Nan Guan' 'Chun Jason Xue']" ]
null
null
2403.01387
null
null
http://arxiv.org/pdf/2403.01387v1
2024-03-03T03:52:27Z
2024-03-03T03:52:27Z
A Comprehensive Survey of Federated Transfer Learning: Challenges, Methods and Applications
Federated learning (FL) is a novel distributed machine learning paradigm that enables participants to collaboratively train a centralized model with privacy preservation by eliminating the requirement of data sharing. In practice, FL often involves multiple participants and requires a third party to aggregate global information to guide the update of the target participant. Therefore, many FL methods do not work well because the training and test data of each participant may not be sampled from the same feature space and the same underlying distribution. Meanwhile, the differences in their local devices (system heterogeneity), the continuous influx of online data (incremental data), and labeled data scarcity may further influence the performance of these methods. To solve this problem, federated transfer learning (FTL), which integrates transfer learning (TL) into FL, has attracted the attention of numerous researchers. However, since FL enables continuous sharing of knowledge among participants with each communication round while not allowing local data to be accessed by other participants, FTL faces many unique challenges that are not present in TL. In this survey, we focus on categorizing and reviewing the current progress on federated transfer learning, and outlining corresponding solutions and applications. Furthermore, the common setting of FTL scenarios, available datasets, and significant related research are summarized in this survey.
[ "['Wei Guo' 'Fuzhen Zhuang' 'Xiao Zhang' 'Yiqi Tong' 'Jin Dong']" ]
null
null
2403.01389
null
null
http://arxiv.org/abs/2403.01389v1
2024-03-03T04:21:21Z
2024-03-03T04:21:21Z
Fusion of Gaussian Processes Predictions with Monte Carlo Sampling
In science and engineering, we often work with models designed for accurate prediction of variables of interest. Recognizing that these models are approximations of reality, it becomes desirable to apply multiple models to the same data and integrate their outcomes. In this paper, we operate within the Bayesian paradigm, relying on Gaussian processes as our models. These models generate predictive probability density functions (pdfs), and the objective is to integrate them systematically, employing both linear and log-linear pooling. We introduce novel approaches for log-linear pooling, determining input-dependent weights for the predictive pdfs of the Gaussian processes. The aggregation of the pdfs is realized through Monte Carlo sampling, drawing samples of weights from their posterior. The performance of these methods, as well as those based on linear pooling, is demonstrated using a synthetic dataset.
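A small illustration, assuming Gaussian predictive pdfs, of linear and log-linear pooling with Monte Carlo averaging over sampled pooling weights; the Dirichlet stand-in for the weight posterior and the two-model setup are assumptions of the example, not the paper's input-dependent weighting scheme:

```python
import numpy as np
from scipy.stats import norm

def linear_pool(x, means, stds, weights):
    """Linear pooling: a weighted mixture of the Gaussian predictive pdfs."""
    comps = np.array([norm.pdf(x, m, s) for m, s in zip(means, stds)])
    return np.dot(weights, comps)

def log_linear_pool(x, means, stds, weights):
    """Log-linear pooling: a weighted geometric mean of the pdfs (unnormalised here)."""
    logs = np.array([norm.logpdf(x, m, s) for m, s in zip(means, stds)])
    return np.exp(np.dot(weights, logs))

# Monte Carlo fusion: average the pooled pdf over weight samples drawn from a
# posterior over the pooling weights (a Dirichlet is used here as a stand-in).
rng = np.random.default_rng(0)
means, stds = np.array([0.0, 0.4]), np.array([1.0, 0.7])   # two GP predictive pdfs at one input
x_grid = np.linspace(-4, 4, 200)
weight_samples = rng.dirichlet(alpha=[2.0, 2.0], size=500)
fused = np.mean([linear_pool(x_grid, means, stds, w) for w in weight_samples], axis=0)
```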
[ "['Marzieh Ajirak' 'Daniel Waxman' 'Fernando Llorente' 'Petar M. Djuric']" ]
null
null
2403.01400
null
null
http://arxiv.org/pdf/2403.01400v1
2024-03-03T05:29:49Z
2024-03-03T05:29:49Z
Decoupling Weighing and Selecting for Integrating Multiple Graph Pre-training Tasks
Recent years have witnessed the great success of graph pre-training for graph representation learning. With hundreds of graph pre-training tasks proposed, integrating knowledge acquired from multiple pre-training tasks has become a popular research topic. In this paper, we identify two important collaborative processes for this topic: (1) select: how to select an optimal task combination from a given task pool based on their compatibility, and (2) weigh: how to weigh the selected tasks based on their importance. While there has been a lot of work focused on weighing, comparatively little effort has been devoted to selecting. This paper proposes a novel instance-level framework for integrating multiple graph pre-training tasks, Weigh And Select (WAS), where the two collaborative processes, weighing and selecting, are combined by decoupled siamese networks. Specifically, it first adaptively learns an optimal combination of tasks for each instance from a given task pool, based on which a customized instance-level task weighing strategy is learned. Extensive experiments on 16 graph datasets across node-level and graph-level downstream tasks have demonstrated that by combining a few simple but classical tasks, WAS can achieve comparable performance to other leading counterparts. The code is available at https://github.com/TianyuFan0504/WAS.
[ "['Tianyu Fan' 'Lirong Wu' 'Yufei Huang' 'Haitao Lin' 'Cheng Tan'\n 'Zhangyang Gao' 'Stan Z. Li']" ]
null
null
2403.01417
null
null
http://arxiv.org/pdf/2403.01417v1
2024-03-03T07:12:37Z
2024-03-03T07:12:37Z
Asyn2F: An Asynchronous Federated Learning Framework with Bidirectional Model Aggregation
In federated learning, the models can be trained synchronously or asynchronously. Many research works have focused on developing an aggregation method for the server to aggregate multiple local models into the global model with improved performance. However, they ignore the heterogeneity of the training workers, which causes delays in training the local models and leads to the problem of obsolete information. In this paper, we design and develop Asyn2F, an Asynchronous Federated learning Framework with bidirectional model aggregation. By bidirectional model aggregation, Asyn2F, on one hand, allows the server to asynchronously aggregate multiple local models and results in a new global model. On the other hand, it allows the training workers to aggregate the new version of the global model into the local model, which is being trained even in the middle of a training epoch. We develop Asyn2F considering the practical implementation requirements such as using cloud services for model storage and message queuing protocols for communications. Extensive experiments with different datasets show that the models trained by Asyn2F achieve higher performance compared to the state-of-the-art techniques. The experiments also demonstrate the effectiveness, practicality, and scalability of Asyn2F, making it ready for deployment in real scenarios.
[ "['Tien-Dung Cao' 'Nguyen T. Vuong' 'Thai Q. Le' 'Hoang V. N. Dao'\n 'Tram Truong-Huu']" ]
null
null
2403.01420
null
null
http://arxiv.org/pdf/2403.01420v1
2024-03-03T07:38:24Z
2024-03-03T07:38:24Z
The Implicit Bias of Heterogeneity towards Invariance and Causality
It is observed empirically that large language models (LLMs), trained with a variant of regression loss using numerous corpora from the Internet, can unveil causal associations to some extent. This is contrary to the traditional wisdom that ``association is not causation'' and the paradigm of traditional causal inference in which prior causal knowledge should be carefully incorporated into the design of methods. It is a mystery why causality, in a higher layer of understanding, can emerge from the regression task that pursues associations. In this paper, we claim that the emergence of causality from association-oriented training can be attributed to the coupling effects from the heterogeneity of the source data, stochasticity of training algorithms, and over-parameterization of the learning models. We illustrate such an intuition using a simple but insightful model that learns invariance, a quasi-causality, using regression loss. To be specific, we consider multi-environment low-rank matrix sensing problems where the unknown rank-r ground-truth d*d matrices diverge across the environments but contain a lower-rank invariant, causal part. In this case, running pooled gradient descent will result in biased solutions that only learn associations in general. We show that running large-batch Stochastic Gradient Descent, where each batch consists of linear measurement samples randomly selected from a certain environment, can successfully drive the solution towards the invariant, causal solution under certain conditions. This step is related to the relatively strong heterogeneity of the environments, the large step size and noises in the optimization algorithm, and the over-parameterization of the model. In summary, we unveil another implicit bias that is a result of the symbiosis between the heterogeneity of data and modern algorithms, which is, to the best of our knowledge, the first in the literature.
[ "['Yang Xu' 'Yihong Gu' 'Cong Fang']" ]
null
null
2403.01423
null
null
http://arxiv.org/pdf/2403.01423v1
2024-03-03T07:45:35Z
2024-03-03T07:45:35Z
Collective Certified Robustness against Graph Injection Attacks
We investigate certified robustness for GNNs under graph injection attacks. Existing research only provides sample-wise certificates by verifying each node independently, leading to very limited certifying performance. In this paper, we present the first collective certificate, which certifies a set of target nodes simultaneously. To achieve it, we formulate the problem as a binary integer quadratic constrained linear programming (BQCLP) problem. We further develop a customized linearization technique that allows us to relax the BQCLP into a linear programming (LP) problem that can be efficiently solved. Through comprehensive experiments, we demonstrate that our collective certification scheme significantly improves certification performance with minimal computational overhead. For instance, by solving the LP within 1 minute on the Citeseer dataset, we achieve a significant increase in the certified ratio from 0.0% to 81.2% when the injected node number is 5% of the graph size. Our work marks a crucial step towards making provable defense more practical.
[ "['Yuni Lai' 'Bailin Pan' 'Kaihuang Chen' 'Yancheng Yuan' 'Kai Zhou']" ]
null
null
2403.01426
null
null
http://arxiv.org/pdf/2403.01426v1
2024-03-03T07:52:10Z
2024-03-03T07:52:10Z
Introduction to Algogens
This book introduces the concept of Algogens, a promising integration of generative AI with traditional algorithms aimed at improving problem-solving techniques across various fields. It provides an accessible overview of how Algogens combine AI's innovative potential with algorithms' reliability to tackle complex challenges more effectively than either could alone. The text explores the basics of Algogens, their development, applications, and advantages, such as better adaptability and efficiency. Through examples and case studies, readers will learn about Algogens' practical uses today and their potential for future cybersecurity, healthcare, and environmental science innovation. Acknowledging new technologies' challenges and ethical considerations, the book offers a balanced look at the prospects and obstacles facing Algogens. It invites a broad audience, including experts and newcomers, to engage with the topic and consider Algogens' role in advancing our problem-solving capabilities. This work is presented as a starting point for anyone interested in the intersection of AI and algorithms, encouraging further exploration and discussion on this emerging field. It aims to spark curiosity and contribute to the ongoing conversation about how technology can evolve to meet the complex demands of the AI era.
[ "['Amir Shachar']" ]
null
null
2403.01430
null
null
http://arxiv.org/pdf/2403.01430v1
2024-03-03T07:56:55Z
2024-03-03T07:56:55Z
On Diffusion Process in SE(3)-invariant Space
Sampling viable 3D structures (e.g., molecules and point clouds) with SE(3)-invariance using diffusion-based models proved promising in a variety of real-world applications, wherein SE(3)-invariant properties can be naturally characterized by the inter-point distance manifold. However, due to the non-trivial geometry, we still lack a comprehensive understanding of the diffusion mechanism within such SE(3)-invariant space. This study addresses this gap by mathematically delineating the diffusion mechanism under SE(3)-invariance, via zooming into the interaction behavior between coordinates and the inter-point distance manifold through the lens of differential geometry. Upon this analysis, we propose accurate and projection-free diffusion SDE and ODE accordingly. Such formulations enable enhancing the performance and the speed of generation pathways; meanwhile offering valuable insights into other systems incorporating SE(3)-invariance.
[ "['Zihan Zhou' 'Ruiying Liu' 'Jiachen Zheng' 'Xiaoxue Wang' 'Tianshu Yu']" ]
null
null
2403.01438
null
null
http://arxiv.org/pdf/2403.01438v2
2024-03-12T05:37:07Z
2024-03-03T08:24:39Z
Privacy-Preserving Collaborative Split Learning Framework for Smart Grid Load Forecasting
Accurate load forecasting is crucial for energy management, infrastructure planning, and demand-supply balancing. Smart meter data availability has led to the demand for sensor-based load forecasting. Conventional ML allows training a single global model using data from multiple smart meters, which requires data transfer to a central server and raises concerns about network requirements, privacy, and security. We propose a split learning-based framework for load forecasting to alleviate this issue. We split a deep neural network model into two parts, one for each Grid Station (GS) responsible for an entire neighbourhood's smart meters and the other for the Service Provider (SP). Instead of sharing their data, client smart meters use their respective GSs' model split for the forward pass and only share their activations with the GS. Under this framework, each GS is responsible for training a personalized model split for their respective neighbourhoods, whereas the SP can train a single global or personalized model for each GS. Experiments show that the proposed models match or exceed a centrally trained model's performance and generalize well. Privacy is analyzed by assessing information leakage between data and shared activations of the GS model split. Additionally, differential privacy is applied to enhance local data privacy, and its impact on performance is examined. A transformer model is used as our base learner.
[ "['Asif Iqbal' 'Prosanta Gope' 'Biplab Sikdar']" ]
null
null
2403.01451
null
null
http://arxiv.org/pdf/2403.01451v1
2024-03-03T09:08:41Z
2024-03-03T09:08:41Z
Enhancing Data Provenance and Model Transparency in Federated Learning Systems -- A Database Approach
Federated Learning (FL) presents a promising paradigm for training machine learning models across decentralized edge devices while preserving data privacy. Ensuring the integrity and traceability of data across these distributed environments, however, remains a critical challenge. The ability to create transparent artificial intelligence, such as detailing the training process of a machine learning model, has become an increasingly prominent concern due to the large number of sensitive (hyper)parameters it utilizes; thus, it is imperative to strike a reasonable balance between openness and the need to protect sensitive information. In this paper, we propose one of the first approaches to enhance data provenance and model transparency in federated learning systems. Our methodology leverages a combination of cryptographic techniques and efficient model management to track the transformation of data throughout the FL process, and seeks to increase the reproducibility and trustworthiness of a trained FL model. We demonstrate the effectiveness of our approach through experimental evaluations on diverse FL scenarios, showcasing its ability to tackle accountability and explainability across the board. Our findings show that our system can greatly enhance data transparency in various FL environments by storing chained cryptographic hashes and client model snapshots in our proposed design for data decoupled FL. This is made possible by also employing multiple optimization techniques which enables comprehensive data provenance without imposing substantial computational loads. Extensive experimental results suggest that integrating a database subsystem into federated learning systems can improve data provenance in an efficient manner, encouraging secure FL adoption in privacy-sensitive applications and paving the way for future advancements in FL transparency and security features.
[ "['Michael Gu' 'Ramasoumya Naraparaju' 'Dongfang Zhao']" ]
null
null
2403.01460
null
null
http://arxiv.org/pdf/2403.01460v1
2024-03-03T09:43:23Z
2024-03-03T09:43:23Z
One-Step Multi-View Clustering Based on Transition Probability
The large-scale multi-view clustering algorithms, based on the anchor graph, have shown promising performance and efficiency and have been extensively explored in recent years. Despite their successes, current methods lack interpretability in the clustering process and do not sufficiently consider the complementary information across different views. To address these shortcomings, we introduce the One-Step Multi-View Clustering Based on Transition Probability (OSMVC-TP). This method adopts a probabilistic approach, which leverages the anchor graph, representing the transition probabilities from samples to anchor points. Our method directly learns the transition probabilities from anchor points to categories, and calculates the transition probabilities from samples to categories, thus obtaining soft label matrices for samples and anchor points, enhancing the interpretability of clustering. Furthermore, to maintain consistency in labels across different views, we apply a Schatten p-norm constraint on the tensor composed of the soft labels. This approach effectively harnesses the complementary information among the views. Extensive experiments have confirmed the effectiveness and robustness of OSMVC-TP.
[ "['Wenhui Zhao' 'Quanxue Gao' 'Guangfei Li' 'Cheng Deng' 'Ming Yang']" ]
null
null
2403.01467
null
null
http://arxiv.org/abs/2403.01467v1
2024-03-03T10:23:08Z
2024-03-03T10:23:08Z
Collaborate to Adapt: Source-Free Graph Domain Adaptation via Bi-directional Adaptation
Unsupervised Graph Domain Adaptation (UGDA) has emerged as a practical solution to transfer knowledge from a label-rich source graph to a completely unlabelled target graph. However, most methods require a labelled source graph to provide supervision signals, which might not be accessible in real-world settings due to regulations and privacy concerns. In this paper, we explore the scenario of source-free unsupervised graph domain adaptation, which tries to address the domain adaptation problem without accessing the labelled source graph. Specifically, we present a novel paradigm called GraphCTA, which performs model adaptation and graph adaptation collaboratively through a series of procedures: (1) conduct model adaptation based on node's neighborhood predictions in the target graph considering both local and global information; (2) perform graph adaptation by updating graph structure and node attributes via neighborhood contrastive learning; and (3) the updated graph serves as an input to facilitate the subsequent iteration of model adaptation, thereby establishing a collaborative loop between model adaptation and graph adaptation. Comprehensive experiments are conducted on various public datasets. The experimental results demonstrate that our proposed model outperforms recent source-free baselines by large margins.
[ "['Zhen Zhang' 'Meihan Liu' 'Anhui Wang' 'Hongyang Chen' 'Zhao Li'\n 'Jiajun Bu' 'Bingsheng He']" ]
null
null
2403.01471
null
null
http://arxiv.org/pdf/2403.01471v1
2024-03-03T10:35:46Z
2024-03-03T10:35:46Z
Preserving correlations: A statistical method for generating synthetic data
We propose a method to generate statistically representative synthetic data. The main goal is to maintain in the synthetic dataset the correlations of the features present in the original one, while offering a comfortable privacy level that can be eventually tailored to specific customer demands. We describe in detail our algorithm used both for the analysis of the original dataset and for the generation of the synthetic data points. The approach is tested using a large energy-related dataset. We obtain good results both qualitatively (e.g. via visualizing correlation maps) and quantitatively (in terms of suitable $\ell^1$-type error norms used as evaluation metrics). The proposed methodology is general in the sense that it does not rely on the used test dataset. We expect it to be applicable in a much broader context than indicated here.
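The paper describes its own correlation-preserving algorithm in detail; purely as a point of reference, a common baseline for the same goal is a Gaussian-copula-style generator, sketched below. This is not the authors' method, and the rank-based transform is an assumption of the example:

```python
import numpy as np
from scipy.stats import norm

def gaussian_copula_synth(data, n_samples, rng=None):
    """Correlation-preserving baseline: map each column to normal scores via its
    empirical CDF, sample from a multivariate normal with the empirical
    correlation matrix, then map back through the columns' empirical quantiles."""
    if rng is None:
        rng = np.random.default_rng()
    n, d = data.shape
    ranks = np.argsort(np.argsort(data, axis=0), axis=0)
    u = (ranks + 0.5) / n                          # pseudo-observations in (0, 1)
    z = norm.ppf(u)                                # normal scores per column
    corr = np.corrcoef(z, rowvar=False)            # correlation structure to preserve
    z_new = rng.multivariate_normal(np.zeros(d), corr, size=n_samples)
    u_new = norm.cdf(z_new)
    synth = np.column_stack([np.quantile(data[:, j], u_new[:, j]) for j in range(d)])
    return synth
```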
[ "['Nicklas Jävergård' 'Rainey Lyons' 'Adrian Muntean' 'Jonas Forsman']" ]
null
null
2403.01472
null
null
http://arxiv.org/pdf/2403.01472v2
2024-06-09T04:34:55Z
2024-03-03T10:39:27Z
WARDEN: Multi-Directional Backdoor Watermarks for Embedding-as-a-Service Copyright Protection
Embedding as a Service (EaaS) has become a widely adopted solution, which offers feature extraction capabilities for addressing various downstream tasks in Natural Language Processing (NLP). Prior studies have shown that EaaS can be prone to model extraction attacks; nevertheless, this concern could be mitigated by adding backdoor watermarks to the text embeddings and subsequently verifying the attack models post-publication. Through the analysis of the recent watermarking strategy for EaaS, EmbMarker, we design a novel CSE (Clustering, Selection, Elimination) attack that removes the backdoor watermark while maintaining the high utility of embeddings, indicating that the previous watermarking approach can be breached. In response to this new threat, we propose a new protocol to make the removal of watermarks more challenging by incorporating multiple possible watermark directions. Our defense approach, WARDEN, notably increases the stealthiness of watermarks and has been empirically shown to be effective against CSE attack.
[ "['Anudeex Shetty' 'Yue Teng' 'Ke He' 'Qiongkai Xu']" ]
null
null
2403.01475
null
null
http://arxiv.org/pdf/2403.01475v1
2024-03-03T10:59:16Z
2024-03-03T10:59:16Z
Representation Learning on Heterophilic Graph with Directional Neighborhood Attention
Graph Attention Network (GAT) is one of the most popular Graph Neural Network (GNN) architectures, which employs the attention mechanism to learn edge weights and has demonstrated promising performance in various applications. However, since it only incorporates information from the immediate neighborhood, it lacks the ability to capture long-range and global graph information, leading to unsatisfactory performance on some datasets, particularly on heterophilic graphs. To address this limitation, we propose the Directional Graph Attention Network (DGAT) in this paper. DGAT is able to combine the feature-based attention with the global directional information extracted from the graph topology. To this end, a new class of Laplacian matrices is proposed which can provably reduce the diffusion distance between nodes. Based on the new Laplacian, topology-guided neighbour pruning and edge adding mechanisms are proposed to remove noisy information and capture helpful long-range neighborhood information. Besides, a global directional attention is designed to enable topology-aware information propagation. The superiority of the proposed DGAT over the baseline GAT has also been verified through experiments on real-world benchmarks and synthetic data sets. It also outperforms the state-of-the-art (SOTA) models on 6 out of 7 real-world benchmark datasets.
[ "['Qincheng Lu' 'Jiaqi Zhu' 'Sitao Luan' 'Xiao-Wen Chang']" ]
null
null
2403.01485
null
null
http://arxiv.org/pdf/2403.01485v2
2024-05-25T21:47:13Z
2024-03-03T11:36:35Z
Approximations to the Fisher Information Metric of Deep Generative Models for Out-Of-Distribution Detection
Likelihood-based deep generative models such as score-based diffusion models and variational autoencoders are state-of-the-art machine learning models approximating high-dimensional distributions of data such as images, text, or audio. One of many downstream tasks they can be naturally applied to is out-of-distribution (OOD) detection. However, seminal work by Nalisnick et al., which we reproduce, showed that deep generative models consistently infer higher log-likelihoods for OOD data than data they were trained on, marking an open problem. In this work, we analyse using the gradient of a data point with respect to the parameters of the deep generative model for OOD detection, based on the simple intuition that OOD data should have larger gradient norms than training data. We formalise measuring the size of the gradient as approximating the Fisher information metric. We show that the Fisher information matrix (FIM) has large absolute diagonal values, motivating the use of chi-square distributed, layer-wise gradient norms as features. We combine these features to make a simple, model-agnostic and hyperparameter-free method for OOD detection which estimates the joint density of the layer-wise gradient norms for a given data point. We find that these layer-wise gradient norms are weakly correlated, rendering their combined usage informative, and prove that the layer-wise gradient norms satisfy the principle of (data representation) invariance. Our empirical results indicate that this method outperforms the Typicality test for most deep generative models and image dataset pairings.
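A rough sketch, assuming a PyTorch model, of the feature construction described above: layer-wise gradient norms of a data point's negative log-likelihood, followed by a simple density fit on in-distribution features to score test points. The log-transform, the axis-aligned Gaussian, and the `nll_fn` interface are simplifying assumptions, not the paper's exact estimator:

```python
import torch

def layerwise_grad_norm_features(model, x, nll_fn):
    """Per-parameter-tensor gradient norms of the NLL at a single input,
    returned on a log scale and used as OOD features."""
    model.zero_grad()
    nll = nll_fn(model, x)            # scalar negative log-likelihood of x under the model
    nll.backward()
    feats = [p.grad.norm().log() for p in model.parameters() if p.grad is not None]
    return torch.stack(feats)         # (num_parameter_tensors,) log gradient-norm features

def fit_gaussian_scorer(train_feats):
    """Fit an axis-aligned Gaussian to in-distribution features; the negative
    log-density (up to a constant) of a test point's features is its OOD score."""
    mu, std = train_feats.mean(0), train_feats.std(0) + 1e-6
    def score(f):
        return (((f - mu) / std) ** 2 + (2 * torch.pi * std ** 2).log()).sum() / 2
    return score
```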
[ "['Sam Dauncey' 'Chris Holmes' 'Christopher Williams' 'Fabian Falck']" ]
null
null
2403.01493
null
null
http://arxiv.org/pdf/2403.01493v1
2024-03-03T12:05:49Z
2024-03-03T12:05:49Z
ConvTimeNet: A Deep Hierarchical Fully Convolutional Model for Multivariate Time Series Analysis
This paper introduces ConvTimeNet, a novel deep hierarchical fully convolutional network designed to serve as a general-purpose model for time series analysis. The key design of this network is twofold, aiming to overcome the limitations of traditional convolutional networks. Firstly, we propose an adaptive segmentation of time series into sub-series level patches, treating these as fundamental modeling units. This setting avoids the sparse semantics associated with raw point-level time steps. Secondly, we design a fully convolutional block by skillfully integrating depthwise and pointwise convolution operations, following the advanced building block style employed in Transformer encoders. This backbone network allows for the effective capture of both global sequence and cross-variable dependence, as it not only incorporates the advancements of Transformer architecture but also inherits the inherent properties of convolution. Furthermore, multi-scale representations of given time series instances can be learned by controlling the kernel size flexibly. Extensive experiments are conducted on both time series forecasting and classification tasks. The results show that ConvTimeNet consistently outperforms strong baselines in most situations in terms of effectiveness. The code is publicly available.
[ "['Mingyue Cheng' 'Jiqian Yang' 'Tingyue Pan' 'Qi Liu' 'Zhi Li']" ]
null
null
2403.01499
null
null
http://arxiv.org/pdf/2403.01499v1
2024-03-03T12:23:17Z
2024-03-03T12:23:17Z
Normalising Flow-based Differentiable Particle Filters
Recently, there has been a surge of interest in incorporating neural networks into particle filters, e.g. differentiable particle filters, to perform joint sequential state estimation and model learning for non-linear non-Gaussian state-space models in complex environments. Existing differentiable particle filters are mostly constructed with vanilla neural networks that do not allow density estimation. As a result, they are either restricted to a bootstrap particle filtering framework or employ predefined distribution families (e.g. Gaussian distributions), limiting their performance in more complex real-world scenarios. In this paper we present a differentiable particle filtering framework that uses (conditional) normalising flows to build its dynamic model, proposal distribution, and measurement model. This not only enables valid probability densities but also allows the proposed method to adaptively learn these modules in a flexible way, without being restricted to predefined distribution families. We derive the theoretical properties of the proposed filters and evaluate the proposed normalising flow-based differentiable particle filters' performance through a series of numerical experiments.
[ "['Xiongjie Chen' 'Yunpeng Li']" ]
null
null
2403.01501
null
null
http://arxiv.org/pdf/2403.01501v1
2024-03-03T12:34:13Z
2024-03-03T12:34:13Z
Applying Self-supervised Learning to Network Intrusion Detection for Network Flows with Graph Neural Network
Graph Neural Networks (GNNs) have garnered intensive attention for Network Intrusion Detection System (NIDS) due to their suitability for representing the network traffic flows. However, most present GNN-based methods for NIDS are supervised or semi-supervised. Network flows need to be manually annotated as supervisory labels, a process that is time-consuming or even impossible, making NIDS difficult to adapt to potentially complex attacks, especially in large-scale real-world scenarios. The existing GNN-based self-supervised methods focus on the binary classification of network flow as benign or not, and thus fail to reveal the types of attack in practice. This paper studies the application of GNNs to identify the specific types of network flows in an unsupervised manner. We first design an encoder to obtain graph embeddings, which introduces the graph attention mechanism and considers the edge information as the only essential factor. Then, a self-supervised method based on graph contrastive learning is proposed. The method samples center nodes and, for each center node, generates a subgraph from it and its direct neighbor nodes along with a corresponding contrastive subgraph from the interpolated graph, and finally constructs positive and negative samples from these subgraphs. Furthermore, a structured contrastive loss function based on edge features and graph local topology is introduced. To the best of our knowledge, it is the first GNN-based self-supervised method for the multiclass classification of network flows in NIDS. Detailed experiments conducted on four real-world databases (NF-Bot-IoT, NF-Bot-IoT-v2, NF-CSE-CIC-IDS2018, and NF-CSE-CIC-IDS2018-v2) systematically compare our model with the state-of-the-art supervised and self-supervised models, illustrating the considerable potential of our method. Our code is accessible through https://github.com/renj-xu/NEGSC.
[ "['Renjie Xu' 'Guangwei Wu' 'Weiping Wang' 'Xing Gao' 'An He'\n 'Zhengpeng Zhang']" ]
null
null
2403.01518
null
null
http://arxiv.org/pdf/2403.01518v1
2024-03-03T14:03:48Z
2024-03-03T14:03:48Z
Revisiting Dynamic Evaluation: Online Adaptation for Large Language Models
We consider the problem of online fine-tuning the parameters of a language model at test time, also known as dynamic evaluation. While it is generally known that this approach improves the overall predictive performance, especially when considering distributional shift between training and evaluation data, we here emphasize the perspective that online adaptation turns parameters into temporally changing states and provides a form of context-length extension with memory in weights, more in line with the concept of memory in neuroscience. We pay particular attention to the speed of adaptation (in terms of sample efficiency), sensitivity to the overall distributional drift, and the computational overhead for performing gradient computations and parameter updates. Our empirical study provides insights on when online adaptation is particularly interesting. We highlight that with online adaptation the conceptual distinction between in-context learning and fine-tuning blurs: both are methods to condition the model on previously observed tokens.
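A minimal sketch of dynamic evaluation as described above: stream the test tokens in chunks, score each chunk, then take a gradient step on it so the weights act as a slowly updating memory of the stream. The chunk length, optimizer, and the assumption that `model(inputs)` returns logits directly are illustrative; a Hugging Face model, for instance, would expose them as `.logits`:

```python
import torch

def dynamic_evaluation(model, token_stream, optimizer, chunk_len=512, device="cpu"):
    """Online (test-time) fine-tuning sketch for a causal LM.

    model: returns logits of shape (B, T, V) from input ids of shape (B, T).
    token_stream: 1-D LongTensor of token ids observed at evaluation time.
    Returns the average online next-token negative log-likelihood.
    """
    model.train()
    total_nll, total_tokens = 0.0, 0
    for start in range(0, token_stream.numel() - 1, chunk_len):
        chunk = token_stream[start:start + chunk_len + 1].to(device)
        inputs, targets = chunk[:-1].unsqueeze(0), chunk[1:].unsqueeze(0)
        logits = model(inputs)                                    # (1, T, V)
        loss = torch.nn.functional.cross_entropy(
            logits.reshape(-1, logits.size(-1)), targets.reshape(-1))
        total_nll += loss.item() * targets.numel()                # score before adapting
        total_tokens += targets.numel()
        optimizer.zero_grad()
        loss.backward()                                           # adapt on the observed chunk
        optimizer.step()
    return total_nll / total_tokens
```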
[ "['Amal Rannen-Triki' 'Jorg Bornschein' 'Razvan Pascanu' 'Marcus Hutter'\n 'Andras György' 'Alexandre Galashov' 'Yee Whye Teh' 'Michalis K. Titsias']" ]
null
null
2403.01523
null
null
http://arxiv.org/pdf/2403.01523v1
2024-03-03T14:50:15Z
2024-03-03T14:50:15Z
Data-driven local operator finding for reduced-order modelling of plasma systems: I. Concept and verifications
Reduced-order plasma models that can efficiently predict plasma behavior across various settings and configurations are highly sought after yet elusive. The demand for such models has surged in the past decade due to their potential to facilitate scientific research and expedite the development of plasma technologies. In line with the advancements in computational power and data-driven methods, we introduce the "Phi Method" in this two-part article. Part I presents this novel algorithm, which employs constrained regression on a candidate term library informed by numerical discretization schemes to discover discretized systems of differential equations. We demonstrate Phi Method's efficacy in deriving reliable and robust reduced-order models (ROMs) for three test cases: the Lorenz attractor, flow past a cylinder, and a 1D Hall-thruster-representative plasma. Part II will delve into the method's application for parametric dynamics discovery. Our results show that ROMs derived from the Phi Method provide remarkably accurate predictions of systems' behavior, whether derived from steady-state or transient-state data. This underscores the method's potential for transforming plasma system modeling.
[ "['Farbod Faraji' 'Maryam Reza' 'Aaron Knoll' 'J. Nathan Kutz']" ]
null
null
2403.01532
null
null
http://arxiv.org/pdf/2403.01532v1
2024-03-03T15:09:49Z
2024-03-03T15:09:49Z
Data-driven local operator finding for reduced-order modelling of plasma systems: II. Application to parametric dynamics
Real-world systems often exhibit dynamics influenced by various parameters, either inherent or externally controllable, necessitating models capable of reliably capturing these parametric behaviors. Plasma technologies exemplify such systems. For example, phenomena governing global dynamics in Hall thrusters (a spacecraft propulsion technology) vary with various parameters, such as the "self-sustained electric field". In this Part II, following on the introduction of our novel data-driven local operator finding algorithm, Phi Method, in Part I, we showcase the method's effectiveness in learning parametric dynamics to predict system behavior across unseen parameter spaces. We present two adaptations: the "parametric Phi Method" and the "ensemble Phi Method", which are demonstrated through 2D fluid-flow-past-a-cylinder and 1D Hall-thruster-plasma-discharge problems. Comparative evaluation against parametric OPT-DMD in the fluid case demonstrates superior predictive performance of the parametric Phi Method. Across both test cases, parametric and ensemble Phi Method reliably recover governing parametric PDEs and offer accurate predictions over test parameters. Ensemble ROM analysis underscores Phi Method's robust learning of dominant dynamic coefficients with high confidence.
[ "['Farbod Faraji' 'Maryam Reza' 'Aaron Knoll' 'J. Nathan Kutz']" ]
null
null
2403.01533
null
null
http://arxiv.org/pdf/2403.01533v1
2024-03-03T15:23:49Z
2024-03-03T15:23:49Z
Machine learning predicts long-term mortality after acute myocardial infarction using systolic time intervals and routinely collected clinical data
Precise estimation of cardiac patients' current and future comorbidities is an important factor in prioritizing continuous physiological monitoring and new therapies. ML models have shown satisfactory performance in short-term mortality prediction of patients with heart disease, while their utility in long-term predictions is limited. This study aims to investigate the performance of tree-based ML models on long-term mortality prediction and the effect of two recently introduced biomarkers on long-term mortality. This study utilized publicly available data from CCHIA at the Ministry of Health and Welfare, Taiwan, China. Medical records were used to gather demographic and clinical data, including age, gender, BMI, percutaneous coronary intervention (PCI) status, and comorbidities such as hypertension, dyslipidemia, ST-segment elevation myocardial infarction (STEMI), and non-STEMI. Using medical and demographic records as well as two recently introduced biomarkers, brachial pre-ejection period (bPEP) and brachial ejection time (bET), collected from 139 patients with acute myocardial infarction, we investigated the performance of advanced ensemble tree-based ML algorithms (random forest, AdaBoost, and XGBoost) to predict all-cause mortality within 14 years. The developed ML models achieved significantly better performance compared to the baseline LR (C-Statistic, 0.80 for random forest, 0.79 for AdaBoost, and 0.78 for XGBoost, vs 0.77 for LR) (P-RF < 0.001, P-AdaBoost < 0.001, P-XGBoost < 0.05). Adding bPEP and bET to our feature set significantly improved the algorithms' performance, leading to an absolute increase in C-Statistic of up to 0.03 (C-Statistic, 0.83 for random forest, 0.82 for AdaBoost, and 0.80 for XGBoost, vs 0.74 for LR) (P-RF < 0.001, P-AdaBoost < 0.001, P-XGBoost < 0.05). This advancement may enable better treatment prioritization for high-risk individuals.
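A small sketch of the modelling pipeline implied above: fit tree ensembles and a logistic-regression baseline, then compare C-statistics (equivalently, ROC AUC). The synthetic data is a stand-in for the clinical features and the bPEP/bET biomarkers; XGBoost would follow the same pattern through its scikit-learn wrapper:

```python
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from sklearn.datasets import make_classification

# Stand-in data; the study itself uses clinical features plus the bPEP/bET biomarkers.
X, y = make_classification(n_samples=500, n_features=12, weights=[0.7], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = {
    "LR": LogisticRegression(max_iter=1000),
    "RandomForest": RandomForestClassifier(n_estimators=300, random_state=0),
    "AdaBoost": AdaBoostClassifier(n_estimators=300, random_state=0),
}
for name, clf in models.items():
    clf.fit(X_tr, y_tr)
    c_stat = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])   # C-statistic == ROC AUC
    print(f"{name}: C-statistic = {c_stat:.3f}")
```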
[ "['Bijan Roudini' 'Boshra Khajehpiri' 'Hamid Abrishami Moghaddam'\n 'Mohamad Forouzanfar']" ]
null
null
2403.01535
null
null
http://arxiv.org/pdf/2403.01535v2
2024-04-23T13:46:10Z
2024-03-03T15:28:47Z
Neural Graph Generator: Feature-Conditioned Graph Generation using Latent Diffusion Models
Graph generation has emerged as a crucial task in machine learning, with significant challenges in generating graphs that accurately reflect specific properties. Existing methods often fall short in efficiently addressing this need as they struggle with the high-dimensional complexity and varied nature of graph properties. In this paper, we introduce the Neural Graph Generator (NGG), a novel approach which utilizes conditioned latent diffusion models for graph generation. NGG demonstrates a remarkable capacity to model complex graph patterns, offering control over the graph generation process. NGG employs a variational graph autoencoder for graph compression and a diffusion process in the latent vector space, guided by vectors summarizing graph statistics. We demonstrate NGG's versatility across various graph generation tasks, showing its capability to capture desired graph properties and generalize to unseen graphs. This work signifies a significant shift in graph generation methodologies, offering a more practical and efficient solution for generating diverse types of graphs with specific characteristics.
[ "['Iakovos Evdaimon' 'Giannis Nikolentzos' 'Michail Chatzianastasis'\n 'Hadi Abdine' 'Michalis Vazirgiannis']" ]
null
null
2403.01536
null
null
http://arxiv.org/pdf/2403.01536v1
2024-03-03T15:30:31Z
2024-03-03T15:30:31Z
Fast Ergodic Search with Kernel Functions
Ergodic search enables optimal exploration of an information distribution while guaranteeing the asymptotic coverage of the search space. However, current methods typically have exponential computation complexity in the search space dimension and are restricted to Euclidean space. We introduce a computationally efficient ergodic search method. Our contributions are two-fold. First, we develop a kernel-based ergodic metric and generalize it from Euclidean space to Lie groups. We formally prove the proposed metric is consistent with the standard ergodic metric while guaranteeing linear complexity in the search space dimension. Secondly, we derive the first-order optimality condition of the kernel ergodic metric for nonlinear systems, which enables efficient trajectory optimization. Comprehensive numerical benchmarks show that the proposed method is at least two orders of magnitude faster than the state-of-the-art algorithm. Finally, we demonstrate the proposed algorithm with a peg-in-hole insertion task. We formulate the problem as a coverage task in the space of SE(3) and use a 30-second-long human demonstration as the prior distribution for ergodic coverage. Ergodicity guarantees the asymptotic solution of the peg-in-hole problem so long as the solution resides within the prior information distribution, which is seen in the 100% success rate.
[ "['Muchen Sun' 'Ayush Gaggar' 'Peter Trautman' 'Todd Murphey']" ]
null
null
2403.01537
null
null
http://arxiv.org/pdf/2403.01537v4
2024-06-17T04:05:29Z
2024-03-03T15:30:59Z
Mixed Strategy Nash Equilibrium for Crowd Navigation
Robots navigating in crowded areas should negotiate free space with humans rather than fully controlling collision avoidance, as this can lead to freezing behavior. Game theory provides a framework for the robot to reason about potential cooperation from humans for collision avoidance during path planning. In particular, the mixed strategy Nash equilibrium captures the negotiation behavior under uncertainty, making it well suited for crowd navigation. However, computing the mixed strategy Nash equilibrium is often prohibitively expensive for real-time decision-making. In this paper, we propose an iterative Bayesian update scheme over probability distributions of trajectories. The algorithm simultaneously generates a stochastic plan for the robot and probabilistic predictions of other pedestrians' paths. We prove that the proposed algorithm is equivalent to solving a mixed strategy game for crowd navigation, and the algorithm guarantees the recovery of the global Nash equilibrium of the game. We name our algorithm Bayes' Rule Nash Equilibrium (BRNE) and develop a real-time model prediction crowd navigation framework. Since BRNE is not solving a general-purpose mixed strategy Nash equilibrium but a tailored formula specifically for crowd navigation, it can compute the solution in real-time on a low-power embedded computer. We evaluate BRNE in both simulated environments and real-world pedestrian datasets. BRNE consistently outperforms non-learning and learning-based methods regarding safety and navigation efficiency. It also reaches human-level crowd navigation performance in the pedestrian dataset benchmark. Lastly, we demonstrate the practicality of our algorithm with real humans on an untethered quadruped robot with fully onboard perception and computation.
[ "['Muchen Sun' 'Francesca Baldini' 'Katie Hughes' 'Peter Trautman'\n 'Todd Murphey']" ]
null
null
2403.01540
null
null
http://arxiv.org/pdf/2403.01540v1
2024-03-03T15:40:24Z
2024-03-03T15:40:24Z
Quantized Hierarchical Federated Learning: A Robust Approach to Statistical Heterogeneity
This paper presents a novel hierarchical federated learning algorithm within multiple sets that incorporates quantization for communication-efficiency and demonstrates resilience to statistical heterogeneity. Unlike conventional hierarchical federated learning algorithms, our approach combines gradient aggregation in intra-set iterations with model aggregation in inter-set iterations. We offer a comprehensive analytical framework to evaluate its optimality gap and convergence rate, comparing these aspects with those of conventional algorithms. Additionally, we develop a problem formulation to derive optimal system parameters in a closed-form solution. Our findings reveal that our algorithm consistently achieves high learning accuracy over a range of parameters and significantly outperforms other hierarchical algorithms, particularly in scenarios with heterogeneous data distributions.
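A toy sketch of the intra-set/inter-set split described above: clients in a set send quantized gradients that are aggregated for several intra-set iterations, then the sets' models are averaged in an inter-set round. The uniform quantizer, learning rate, and interfaces are illustrative assumptions, not the paper's algorithm or analysis:

```python
import numpy as np

def quantize(vec, levels=16):
    """Illustrative uniform quantizer: snap each entry to the nearest of
    `levels` points spanning the vector's range (communication compression)."""
    lo, hi = vec.min(), vec.max()
    if hi == lo:
        return vec.copy()
    step = (hi - lo) / (levels - 1)
    return lo + np.round((vec - lo) / step) * step

def hierarchical_round(set_models, client_grads, lr=0.1, intra_iters=5):
    """One inter-set round: each set runs `intra_iters` iterations of quantized
    gradient aggregation, then the sets' models are averaged (model aggregation).

    set_models:   list of parameter vectors, one per set
    client_grads: client_grads[s] maps params -> list of client gradient vectors
    """
    for s, params in enumerate(set_models):
        for _ in range(intra_iters):                     # intra-set: aggregate gradients
            grads = [quantize(g) for g in client_grads[s](params)]
            params = params - lr * np.mean(grads, axis=0)
        set_models[s] = params
    global_model = np.mean(set_models, axis=0)           # inter-set: aggregate models
    return [global_model.copy() for _ in set_models], global_model
```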
[ "['Seyed Mohammad Azimi-Abarghouyi' 'Viktoria Fodor']" ]
null
null
2403.01546
null
null
http://arxiv.org/pdf/2403.01546v1
2024-03-03T15:47:43Z
2024-03-03T15:47:43Z
Hyperspectral Image Analysis in Single-Modal and Multimodal setting using Deep Learning Techniques
Hyperspectral imaging provides precise classification for land use and cover due to its exceptional spectral resolution. However, the challenges of high dimensionality and limited spatial resolution hinder its effectiveness. This study addresses these challenges by employing deep learning techniques to efficiently process, extract features, and classify data in an integrated manner. To enhance spatial resolution, we integrate information from complementary modalities such as LiDAR and SAR data through multimodal learning. Moreover, adversarial learning and knowledge distillation are utilized to overcome issues stemming from domain disparities and missing modalities. We also tailor deep learning architectures to suit the unique characteristics of HSI data, utilizing 1D convolutional and recurrent neural networks to handle its continuous spectral dimension. Techniques like visual attention and feedback connections within the architecture bolster the robustness of feature extraction. Additionally, we tackle the issue of limited training samples through self-supervised learning methods, employing autoencoders for dimensionality reduction and exploring semi-supervised learning techniques that leverage unlabeled data. Our proposed approaches are evaluated across various HSI datasets, consistently outperforming existing state-of-the-art techniques.
[ "['Shivam Pande']" ]
null
null
2403.01548
null
null
http://arxiv.org/pdf/2403.01548v3
2024-03-12T09:49:28Z
2024-03-03T15:53:41Z
In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation
Large language models (LLMs) frequently hallucinate and produce factual errors, yet our understanding of why they make these errors remains limited. In this study, we delve into the underlying mechanisms of LLM hallucinations from the perspective of inner representations, and discover a salient pattern associated with hallucinations: correct generations tend to have sharper context activations in the hidden states of the in-context tokens, compared to the incorrect ones. Leveraging this insight, we propose an entropy-based metric to quantify the ``sharpness'' among the in-context hidden states and incorporate it into the decoding process to formulate a constrained decoding approach. Experiments on various knowledge-seeking and hallucination benchmarks demonstrate our approach's consistent effectiveness, for example, achieving up to an 8.6 point improvement on TruthfulQA. We believe this study can improve our understanding of hallucinations and serve as a practical solution for hallucination mitigation.
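A hedged sketch of the "sharpness as entropy" idea: given each candidate's activation mass over the in-context tokens, a low-entropy (sharp) distribution signals a more grounded continuation, and the score can be folded into decoding as a logit adjustment. The toy activations and the mixing weight `alpha` are illustrative; the paper computes this from an LLM's hidden states, which is not reproduced here.

```python
# Entropy-based "sharpness" score over in-context activations, used as a
# logit bonus during decoding. Toy numbers; the real method reads an LLM's
# hidden states, which this sketch does not do.
import numpy as np

def entropy(p, eps=1e-12):
    p = p / p.sum()
    return -np.sum(p * np.log(p + eps))

def sharpness_score(context_activation):
    """Higher when the activation over in-context tokens is concentrated."""
    return -entropy(context_activation)

cand_logits = np.array([2.0, 1.8])                  # raw LM logits for 2 candidates
activations = [
    np.array([0.70, 0.20, 0.05, 0.05]),             # sharp -> likely grounded
    np.array([0.26, 0.25, 0.25, 0.24]),             # flat  -> likely hallucinated
]
alpha = 1.0                                          # strength of the constraint (assumed)
adjusted = cand_logits + alpha * np.array([sharpness_score(a) for a in activations])
probs = np.exp(adjusted - adjusted.max())
print(probs / probs.sum())                           # the sharp candidate is favored
```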
[ "['Shiqi Chen' 'Miao Xiong' 'Junteng Liu' 'Zhengxuan Wu' 'Teng Xiao'\n 'Siyang Gao' 'Junxian He']" ]
null
null
2403.01554
null
null
http://arxiv.org/pdf/2403.01554v1
2024-03-03T16:12:20Z
2024-03-03T16:12:20Z
Transformers for Supervised Online Continual Learning
Transformers have become the dominant architecture for sequence modeling tasks such as natural language processing or audio processing, and they are now even considered for tasks that are not naturally sequential such as image classification. Their ability to attend to and to process a set of tokens as context enables them to develop in-context few-shot learning abilities. However, their potential for online continual learning remains relatively unexplored. In online continual learning, a model must adapt to a non-stationary stream of data, minimizing the cumulative next-step prediction loss. We focus on the supervised online continual learning setting, where we learn a predictor $x_t \rightarrow y_t$ for a sequence of examples $(x_t, y_t)$. Inspired by the in-context learning capabilities of transformers and their connection to meta-learning, we propose a method that leverages these strengths for online continual learning. Our approach explicitly conditions a transformer on recent observations, while simultaneously training it online with stochastic gradient descent, following the procedure introduced with Transformer-XL. We incorporate replay to maintain the benefits of multi-epoch training while adhering to the sequential protocol. We hypothesize that this combination enables fast adaptation through in-context learning and sustained long-term improvement via parametric learning. Our method demonstrates significant improvements over previous state-of-the-art results on CLOC, a challenging large-scale real-world benchmark for image geo-localization.
[ "['Jorg Bornschein' 'Yazhe Li' 'Amal Rannen-Triki']" ]
null
null
2403.01570
null
null
http://arxiv.org/pdf/2403.01570v2
2024-03-16T04:07:01Z
2024-03-03T17:35:52Z
SERVAL: Synergy Learning between Vertical Models and LLMs towards Oracle-Level Zero-shot Medical Prediction
Recent development of large language models (LLMs) has exhibited impressive zero-shot proficiency on generic and common sense questions. However, LLMs' application to domain-specific vertical questions still lags behind, primarily due to hallucination problems and deficiencies in vertical knowledge. Furthermore, the vertical data annotation process often requires labor-intensive expert involvement, thereby presenting an additional challenge in enhancing the model's vertical capabilities. In this paper, we propose SERVAL, a synergy learning pipeline designed for unsupervised development of vertical capabilities in both LLMs and small models by mutual enhancement. Specifically, SERVAL utilizes the LLM's zero-shot outputs as annotations, leveraging its confidence to teach a robust vertical model from scratch. Conversely, the trained vertical model guides the LLM fine-tuning to enhance its zero-shot capability, progressively improving both models through an iterative process. In the medical domain, known for complex vertical knowledge and costly annotations, comprehensive experiments show that, without access to any gold labels, SERVAL with the synergy learning of OpenAI GPT-3.5 and a simple model attains performance competitive with fully supervised training across ten widely used medical datasets. These datasets represent vertically specialized medical diagnostic scenarios (e.g., diabetes, heart diseases, COVID-19), highlighting the potential of SERVAL in refining the vertical capabilities of LLMs and training vertical models from scratch, all achieved without the need for annotations.
[ "['Jiahuan Yan' 'Jintai Chen' 'Chaowen Hu' 'Bo Zheng' 'Yaojun Hu'\n 'Jimeng Sun' 'Jian Wu']" ]
null
null
2403.01571
null
null
http://arxiv.org/pdf/2403.01571v1
2024-03-03T17:36:42Z
2024-03-03T17:36:42Z
Limits to classification performance by relating Kullback-Leibler divergence to Cohen's Kappa
The performance of machine learning classification algorithms is evaluated by estimating metrics, often from the confusion matrix, using training data and cross-validation. However, these do not prove that the best possible performance has been achieved. Fundamental limits to error rates can be estimated using information distance measures. To this end, the confusion matrix has been formulated to comply with the Chernoff-Stein Lemma. This links the error rates to the Kullback-Leibler divergences between the probability density functions describing the two classes. This leads to a key result that relates Cohen's Kappa to the Resistor Average Distance, which is the parallel resistor combination of the two Kullback-Leibler divergences. The Resistor Average Distance has units of bits and is estimated from the same training data used by the classification algorithm, using kNN estimates of the Kullback-Leibler divergences. The classification algorithm gives the confusion matrix and Kappa. Theory and methods are discussed in detail and then applied to Monte Carlo data and real datasets. Four very different real datasets - Breast Cancer, Coronary Heart Disease, Bankruptcy, and Particle Identification - are analysed, with both continuous and discrete values, and their classification performance compared to the expected theoretical limit. In all cases this analysis shows that the algorithms could not have performed any better due to the underlying probability density functions for the two classes. Important lessons are learnt on how to predict the performance of algorithms for imbalanced data using training datasets that are approximately balanced. Machine learning is very powerful, but classification performance ultimately depends on the quality of the data and the relevance of the variables to the problem.
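A sketch of the two ingredients named above: k-NN estimates of the directed Kullback-Leibler divergences between the two class-conditional samples, and their parallel-resistor ("Resistor Average") combination. The k-NN estimator below is a standard one; the paper's exact link from this quantity to Cohen's Kappa is not reproduced here, and the toy Gaussian classes are illustrative.

```python
# k-NN KL estimates between two class-conditional samples, combined into the
# Resistor Average Distance (parallel-resistor combination). Toy data.
import numpy as np
from scipy.spatial import cKDTree

def knn_kl(x, y, k=5):
    """k-NN estimate of D(P || Q) in nats from samples x ~ P, y ~ Q."""
    n, d = x.shape
    m = y.shape[0]
    rho = cKDTree(x).query(x, k=k + 1)[0][:, -1]   # k-th NN within x (skip self)
    nu = cKDTree(y).query(x, k=k)[0][:, -1]        # k-th NN of x_i among y
    return d * np.mean(np.log(nu / rho)) + np.log(m / (n - 1))

def resistor_average(d_pq, d_qp):
    """Parallel-resistor combination of the two directed divergences."""
    return d_pq * d_qp / (d_pq + d_qp)

rng = np.random.default_rng(0)
class0 = rng.normal(0.0, 1.0, size=(1000, 2))       # toy class-conditional data
class1 = rng.normal(1.5, 1.0, size=(1000, 2))
d01, d10 = knn_kl(class0, class1), knn_kl(class1, class0)
rad_bits = resistor_average(d01, d10) / np.log(2)    # convert nats -> bits
print(f"D01={d01:.3f}  D10={d10:.3f}  RAD={rad_bits:.3f} bits")
```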
[ "['L. Crow' 'S. J. Watts']" ]
null
null
2403.01582
null
null
http://arxiv.org/pdf/2403.01582v2
2024-05-23T13:46:05Z
2024-03-03T18:22:39Z
Selection, Ensemble, and Adaptation: Advancing Multi-Source-Free Domain Adaptation via Architecture Zoo
Conventional Multi-Source Free Domain Adaptation (MSFDA) assumes that each source domain provides a single source model and that all source models adopt a uniform architecture. This paper introduces Zoo-MSFDA, a more general setting that allows each source domain to offer a zoo of multiple source models with different architectures. While it enriches the source knowledge, Zoo-MSFDA risks being dominated by suboptimal or harmful models. To address this issue, we theoretically analyze the model selection problem in Zoo-MSFDA and introduce two principles: the transferability principle and the diversity principle. Recognizing the challenge of measuring transferability, we subsequently propose a novel Source-Free Unsupervised Transferability Estimation (SUTE). It enables assessing and comparing transferability across multiple source models with different architectures under domain shift, without requiring target labels or source data. Based on the above, we introduce a Selection, Ensemble, and Adaptation (SEA) framework to address Zoo-MSFDA, which consists of: 1) source model selection based on the proposed principles and SUTE; 2) ensemble construction based on SUTE-estimated transferability; and 3) target-domain adaptation of the ensemble model. Evaluations demonstrate that our SEA framework, with the introduced Zoo-MSFDA setting, significantly improves adaptation performance (e.g., 13.5% on DomainNet). Additionally, our SUTE achieves state-of-the-art performance in transferability estimation.
[ "['Jiangbo Pei' 'Ruizhe Li' 'Aidong Men' 'Yang Liu' 'Xiahai Zhuang'\n 'Qingchao Chen']" ]
null
null
2403.01590
null
null
http://arxiv.org/pdf/2403.01590v2
2024-03-31T14:31:14Z
2024-03-03T18:58:21Z
The Hidden Attention of Mamba Models
The Mamba layer offers an efficient selective state space model (SSM) that is highly effective in modeling multiple domains, including NLP, long-range sequence processing, and computer vision. Selective SSMs are viewed as dual models, in which one trains in parallel on the entire sequence via an IO-aware parallel scan, and deploys in an autoregressive manner. We add a third view and show that such models can be viewed as attention-driven models. This new perspective enables us to empirically and theoretically compare the underlying mechanisms to those of the self-attention layers in transformers and allows us to peer inside the inner workings of the Mamba model with explainability methods. Our code is publicly available.
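A minimal sketch of the "SSM as attention" view: unrolling a linear recurrence h_t = A_t h_{t-1} + B_t x_t, y_t = C_t h_t gives y_t as a causal weighted sum of past inputs, so the kernel alpha[t, s] = C_t (A_t ... A_{s+1}) B_s plays the role of an unnormalized attention matrix. The diagonal transition and random parameters below are simplifying assumptions; Mamba's actual selective parameterization is richer.

```python
# Hidden-attention view of a per-step (selective) linear SSM on one channel:
# the recurrent outputs equal alpha @ x for an explicit causal matrix alpha.
import numpy as np

rng = np.random.default_rng(0)
L, N = 8, 4                               # sequence length, state size
x = rng.standard_normal(L)                # one input channel
A = rng.uniform(0.8, 0.99, size=(L, N))   # per-step diagonal transition
B = rng.standard_normal((L, N))
C = rng.standard_normal((L, N))

# Recurrent computation of the outputs.
h = np.zeros(N)
y_rec = np.zeros(L)
for t in range(L):
    h = A[t] * h + B[t] * x[t]
    y_rec[t] = C[t] @ h

# Equivalent "hidden attention" matrix: alpha[t, s] = C_t (A_t...A_{s+1}) B_s.
alpha = np.zeros((L, L))
for t in range(L):
    for s in range(t + 1):
        decay = np.prod(A[s + 1:t + 1], axis=0)   # identity product when s == t
        alpha[t, s] = C[t] @ (decay * B[s])
y_attn = alpha @ x

print(np.allclose(y_rec, y_attn))         # True: same outputs, attention view
```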
[ "['Ameen Ali' 'Itamar Zimerman' 'Lior Wolf']" ]
null
null
2403.01599
null
null
http://arxiv.org/pdf/2403.01599v1
2024-03-03T19:53:06Z
2024-03-03T19:53:06Z
SCHEMA: State CHangEs MAtter for Procedure Planning in Instructional Videos
We study the problem of procedure planning in instructional videos, which aims to make a goal-oriented sequence of action steps given partial visual state observations. The motivation of this problem is to learn a structured and plannable state and action space. Recent works succeeded in sequence modeling of steps with only sequence-level annotations accessible during training, but overlooked the roles of states in the procedures. In this work, we point out that State CHangEs MAtter (SCHEMA) for procedure planning in instructional videos. We aim to establish a more structured state space by investigating the causal relations between steps and states in procedures. Specifically, we explicitly represent each step as state changes and track the state changes in procedures. For step representation, we leverage the commonsense knowledge in large language models (LLMs) to describe the state changes of steps via our designed chain-of-thought prompting. For state change tracking, we align visual state observations with language state descriptions via cross-modal contrastive learning, and explicitly model the intermediate states of the procedure using LLM-generated state descriptions. Experiments on CrossTask, COIN, and NIV benchmark datasets demonstrate that our proposed SCHEMA model achieves state-of-the-art performance and obtains explainable visualizations.
[ "['Yulei Niu' 'Wenliang Guo' 'Long Chen' 'Xudong Lin' 'Shih-Fu Chang']" ]
null
null
2403.01605
null
null
http://arxiv.org/pdf/2403.01605v1
2024-03-03T20:09:09Z
2024-03-03T20:09:09Z
Towards Provable Log Density Policy Gradient
Policy gradient methods are a vital ingredient behind the success of modern reinforcement learning. Modern policy gradient methods, although successful, introduce a residual error in gradient estimation. In this work, we argue that this residual term is significant and that correcting for it could potentially improve the sample complexity of reinforcement learning methods. To that end, we propose the log density gradient to estimate the policy gradient, which corrects for this residual error term. The log density gradient method computes the policy gradient by utilising the state-action discounted distributional formulation. We first present the equations needed to exactly find the log density gradient for tabular Markov Decision Processes (MDPs). For more complex environments, we propose a temporal difference (TD) method that approximates the log density gradient by utilizing backward on-policy samples. Since backward sampling from a Markov chain is highly restrictive, we also propose a min-max optimization that can approximate the log density gradient using just on-policy samples. We also prove uniqueness and convergence under linear function approximation for this min-max optimization. Finally, we show that the sample complexity of our min-max optimization is of the order of $m^{-1/2}$, where $m$ is the number of on-policy samples. We also demonstrate a proof of concept for our log density gradient method on a gridworld environment, and observe that our method improves upon the classical policy gradient method by a clear margin, indicating a promising direction for developing reinforcement learning algorithms that require fewer samples.
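A sketch of the tabular object the abstract builds on: the discounted state(-action) occupancy d^pi, computed exactly by solving the linear flow equation d = (1 - gamma) mu0 + gamma P_pi^T d. The 3-state MDP below is an illustrative toy, not from the paper; differentiating log d^pi(s, a) with respect to policy parameters (the log density gradient itself) is not shown.

```python
# Exact discounted state-action occupancy in a toy tabular MDP.
import numpy as np

S, A, gamma = 3, 2, 0.9
rng = np.random.default_rng(0)

P = rng.dirichlet(np.ones(S), size=(S, A))   # P[s, a, s'] transition kernel
pi = rng.dirichlet(np.ones(A), size=S)       # pi[s, a] stochastic policy
mu0 = np.array([1.0, 0.0, 0.0])              # initial state distribution

# State-to-state transition matrix under pi: P_pi[s, s'] = sum_a pi(a|s) P(s'|s,a)
P_pi = np.einsum('sa,sax->sx', pi, P)

# Discounted state occupancy: d = (1 - gamma) (I - gamma P_pi^T)^{-1} mu0
d_state = (1 - gamma) * np.linalg.solve(np.eye(S) - gamma * P_pi.T, mu0)
d_sa = d_state[:, None] * pi                 # state-action occupancy

print(d_sa)
print("total mass:", d_sa.sum())             # sums to 1
```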
[ "['Pulkit Katdare' 'Anant Joshi' 'Katherine Driggs-Campbell']" ]
null
null
2403.01607
null
null
http://arxiv.org/pdf/2403.01607v1
2024-03-03T20:16:16Z
2024-03-03T20:16:16Z
Respiratory motion forecasting with online learning of recurrent neural networks for safety enhancement in externally guided radiotherapy
In lung radiotherapy, infrared cameras can record the location of reflective objects on the chest to infer the position of the tumor moving due to breathing, but treatment system latencies hinder radiation beam precision. Real-time recurrent learning (RTRL) is a potential solution, as it can learn patterns within non-stationary respiratory data, but it has high complexity. This study assesses the capabilities of resource-efficient online RNN algorithms, namely unbiased online recurrent optimization (UORO), sparse 1-step approximation (SnAp-1), and decoupled neural interfaces (DNI), to accurately forecast respiratory motion during radiotherapy treatment. We use time series containing the 3D position of external markers on the chest of healthy subjects. We propose efficient implementations for SnAp-1 and DNI based on compression of the influence and immediate Jacobian matrices and an accurate update of the linear coefficients used in credit assignment estimation, respectively. The original sampling frequency was 10Hz; we performed resampling at 3.33Hz and 30Hz. We use UORO, SnAp-1, and DNI to forecast each marker's 3D position with horizons (the time interval in advance for which the prediction is made) h<=2.1s and compare them with RTRL, least mean squares, and linear regression. RNNs trained online achieved similar or better accuracy than most previous works using larger training databases and deep learning, even though we used only the first minute of each sequence to predict motion within that exact sequence. SnAp-1 had the lowest normalized root mean square errors (nRMSE) averaged over the horizon values considered, equal to 0.335 and 0.157, at 3.33Hz and 10.0Hz, respectively. Similarly, UORO had the highest accuracy at 30Hz, with an nRMSE of 0.0897. DNI's inference time, equal to 6.8ms per time step at 30Hz (Intel Core i7-13700 CPU), was the lowest among the RNN methods examined.
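A hedged sketch of online forecasting with a small RNN trained step by step on a streaming signal, standing in for the chest-marker trajectories above. It uses plain per-step gradient updates on a short truncated window rather than UORO, SnAp-1, or DNI, and the synthetic breathing-like signal and horizon are illustrative assumptions.

```python
# Online forecasting of a quasi-periodic signal with a small RNN and per-step
# SGD on a truncated window (not UORO / SnAp-1 / DNI). Illustrative only.
import numpy as np
import torch
import torch.nn as nn

torch.manual_seed(0)
horizon, window, hidden = 7, 20, 16          # predict `horizon` steps ahead

rnn = nn.RNN(input_size=1, hidden_size=hidden, batch_first=True)
head = nn.Linear(hidden, 1)
opt = torch.optim.SGD(list(rnn.parameters()) + list(head.parameters()), lr=0.01)

t = np.arange(2000) / 10.0
signal = np.sin(2 * np.pi * 0.3 * t) + 0.05 * np.random.randn(len(t))  # ~breathing

errors = []
for i in range(window, len(signal) - horizon):
    x = torch.tensor(signal[i - window:i], dtype=torch.float32).view(1, window, 1)
    y = torch.tensor([[signal[i + horizon]]], dtype=torch.float32)

    out, _ = rnn(x)
    pred = head(out[:, -1])
    loss = ((pred - y) ** 2).mean()

    opt.zero_grad()
    loss.backward()                           # truncated-BPTT-style online update
    opt.step()
    errors.append(loss.item())

print("mean squared error over last 200 steps:", np.mean(errors[-200:]))
```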
[ "['Michel Pohl' 'Mitsuru Uesaka' 'Hiroyuki Takahashi' 'Kazuyuki Demachi'\n 'Ritu Bhusal Chhatkuli']" ]
null
null
2403.01615
null
null
http://arxiv.org/pdf/2403.01615v1
2024-03-03T21:04:36Z
2024-03-03T21:04:36Z
Partial Federated Learning
Federated Learning (FL) is a popular algorithm to train machine learning models on user data constrained to edge devices (for example, mobile phones) due to privacy concerns. Typically, FL is trained with the assumption that no part of the user data can be egressed from the edge. However, in many production settings, specific data modalities or metadata must remain on the device while others may leave it. For example, in commercial SLU systems, it is typically desired to prevent transmission of biometric signals (such as audio recordings of the input prompt) to the cloud, but egress of locally (i.e., on the edge device) transcribed text to the cloud may be possible. In this work, we propose a new algorithm called Partial Federated Learning (PartialFL), where a machine learning model is trained using data where a subset of data modalities or their intermediate representations can be made available to the server. We further restrict our model training by preventing the egress of data labels to the cloud for better privacy, and instead use a contrastive learning based model objective. We evaluate our approach on two different multi-modal datasets and show promising results with our proposed approach.
[ "['Tiantian Feng' 'Anil Ramakrishna' 'Jimit Majmudar' 'Charith Peris'\n 'Jixuan Wang' 'Clement Chung' 'Richard Zemel' 'Morteza Ziyadi'\n 'Rahul Gupta']" ]
null
null
2403.01621
null
null
http://arxiv.org/pdf/2403.01621v1
2024-03-03T21:42:55Z
2024-03-03T21:42:55Z
Machine Learning vs Deep Learning: The Generalization Problem
The capacity to generalize beyond the range of training data is a pivotal challenge, often synonymous with a model's utility and robustness. This study investigates the comparative abilities of traditional machine learning (ML) models and deep learning (DL) algorithms in terms of extrapolation -- a more challenging aspect of generalization because it requires the model to make inferences about data points that lie outside the domain it has been trained on. We present an empirical analysis where both ML and DL models are trained on an exponentially growing function and then tested on values outside the training domain. The choice of this function allows us to distinctly showcase the divergence in performance when models are required to predict beyond the scope of their training data. Our findings suggest that deep learning models possess inherent capabilities to generalize beyond the training scope, an essential feature for real-world applications where data is often incomplete or extends beyond the observed range. This paper argues for a nuanced understanding of the structural differences between ML and DL models, with an emphasis on the implications for both theoretical research and practical deployment.
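A minimal sketch of the experiment described above: fit a tree-based ML model and a small neural network on y = exp(x) over a bounded range, then evaluate both on inputs beyond that range. The model choices and hyperparameters are illustrative; the paper's exact configurations may differ.

```python
# Extrapolation comparison on an exponential target: tree ensemble vs MLP.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor

x_train = np.linspace(0.0, 3.0, 300).reshape(-1, 1)     # training domain
x_test = np.linspace(3.0, 4.0, 50).reshape(-1, 1)       # extrapolation domain
y_train, y_test = np.exp(x_train).ravel(), np.exp(x_test).ravel()

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(x_train, y_train)
mlp = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000,
                   random_state=0).fit(x_train, y_train)

for name, model in [("random forest", rf), ("MLP", mlp)]:
    err = np.mean(np.abs(model.predict(x_test) - y_test))
    print(f"{name:>13}: mean abs error outside training range = {err:.2f}")
# Tree ensembles predict a constant beyond the training range; how the network
# extrapolates depends on its activations and fit (a ReLU net is roughly linear
# outside the training domain), which is the contrast the study examines.
```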
[ "['Yong Yi Bay' 'Kathleen A. Yearick']" ]
null
null
2403.01623
null
null
http://arxiv.org/pdf/2403.01623v1
2024-03-03T22:10:21Z
2024-03-03T22:10:21Z
ML4PhySim : Machine Learning for Physical Simulations Challenge (The airfoil design)
The use of machine learning (ML) techniques to solve complex physical problems has recently been considered a promising approach. However, the evaluation of such learned physical models remains an important issue for industrial use. The aim of this competition is to encourage the development of new ML techniques to solve physical problems using a recently proposed unified evaluation framework called Learning Industrial Physical Simulations (LIPS). We propose learning a task representing a well-known physical use case: the airfoil design simulation, using a dataset called AirfRANS. The global score calculated for each submitted solution is based on three main categories of criteria covering different aspects, namely: ML-related, Out-Of-Distribution, and physical compliance criteria. To the best of our knowledge, this is the first competition addressing the use of ML-based surrogate approaches to improve the computational cost/accuracy trade-off of physical simulation. The competition is hosted by the Codabench platform with online training and evaluation of all submitted solutions.
[ "['Mouadh Yagoubi' 'Milad Leyli-Abadi' 'David Danan' 'Jean-Patrick Brunet'\n 'Jocelyn Ahmed Mazari' 'Florent Bonnet' 'Asma Farjallah'\n 'Marc Schoenauer' 'Patrick Gallinari']" ]
null
null
2403.01628
null
null
http://arxiv.org/pdf/2403.01628v2
2024-04-05T14:15:13Z
2024-03-03T22:21:58Z
Recent Advances, Applications, and Open Challenges in Machine Learning for Health: Reflections from Research Roundtables at ML4H 2023 Symposium
The third ML4H symposium was held in person on December 10, 2023, in New Orleans, Louisiana, USA. The symposium included research roundtable sessions to foster discussions between participants and senior researchers on timely and relevant topics for the ML4H community. Encouraged by the successful virtual roundtables in the previous year, we organized eleven in-person roundtables and four virtual roundtables at ML4H 2023. The organization of the research roundtables at the conference involved 17 Senior Chairs and 19 Junior Chairs across 11 tables. Each roundtable session included invited senior chairs (with substantial experience in the field), junior chairs (responsible for facilitating the discussion), and attendees from diverse backgrounds with interest in the session's topic. Herein we detail the organization process and compile takeaways from these roundtable discussions, including recent advances, applications, and open challenges for each topic. We conclude with a summary and lessons learned across all roundtables. This document serves as a comprehensive review paper, summarizing the recent advancements in machine learning for healthcare as contributed by foremost researchers in the field.
[ "['Hyewon Jeong' 'Sarah Jabbour' 'Yuzhe Yang' 'Rahul Thapta'\n 'Hussein Mozannar' 'William Jongwon Han' 'Nikita Mehandru'\n 'Michael Wornow' 'Vladislav Lialin' 'Xin Liu' 'Alejandro Lozano'\n 'Jiacheng Zhu' 'Rafal Dariusz Kocielnik' 'Keith Harrigian' 'Haoran Zhang'\n 'Edward Lee' 'Milos Vukadinovic' 'Aparna Balagopalan' 'Vincent Jeanselme'\n 'Katherine Matton' 'Ilker Demirel' 'Jason Fries' 'Parisa Rashidi'\n 'Brett Beaulieu-Jones' 'Xuhai Orson Xu' 'Matthew McDermott'\n 'Tristan Naumann' 'Monica Agrawal' 'Marinka Zitnik' 'Berk Ustun'\n 'Edward Choi' 'Kristen Yeom' 'Gamze Gursoy' 'Marzyeh Ghassemi'\n 'Emma Pierson' 'George Chen' 'Sanjat Kanjilal' 'Michael Oberst'\n 'Linying Zhang' 'Harvineet Singh' 'Tom Hartvigsen' 'Helen Zhou'\n 'Chinasa T. Okolo']" ]
null
null
2403.01632
null
null
http://arxiv.org/pdf/2403.01632v3
2024-07-14T22:22:59Z
2024-03-03T22:38:35Z
SynCode: LLM Generation with Grammar Augmentation
LLMs are widely used in complex AI applications. These applications underscore the need for LLM outputs to adhere to a specific format for their integration with other components in the systems. Typically, the format rules, e.g., for data serialization formats such as JSON and YAML, or for code in a programming language, are expressed as a context-free grammar (CFG). Due to the hallucinations and unreliability of LLMs, instructing LLMs to adhere to specified syntax becomes an increasingly important challenge. We present SynCode, a novel framework for efficient and general syntactical decoding with LLMs, to address this challenge. SynCode ensures soundness and completeness with respect to the CFG of a formal language, effectively retaining valid tokens while filtering out invalid ones. SynCode uses an offline-constructed, efficient lookup table, the DFA mask store, derived from the DFA of the language's grammar for efficient generation. SynCode seamlessly integrates with any language defined by a CFG, as evidenced by experiments focusing on generating JSON, Python, and Go outputs. Our experiments evaluating the effectiveness of SynCode for JSON generation demonstrate that SynCode eliminates all syntax errors and significantly outperforms state-of-the-art baselines. Furthermore, our results underscore how SynCode removes 96.07% of syntax errors in generated Python and Go code, showcasing its substantial impact on enhancing syntactical precision in LLM generation. Our code is available at https://github.com/uiuc-focal-lab/syncode
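A toy sketch of the general principle behind grammar-constrained decoding: at each step, mask out every token whose addition cannot be extended to a valid string, then sample from what remains. The "grammar" here is balanced brackets and the "LM" is random logits; SynCode's DFA mask store and real CFG machinery are not reproduced.

```python
# Logit masking for syntactically constrained decoding on a toy
# balanced-bracket grammar with a stand-in random "language model".
import numpy as np

VOCAB = ["(", ")", "x", "<eos>"]

def can_still_be_valid(prefix, max_len):
    """Prefix is extendable to a balanced-bracket string within max_len."""
    depth = 0
    for ch in prefix:
        depth += {"(": 1, ")": -1}.get(ch, 0)
        if depth < 0:
            return False
    return depth <= max_len - len(prefix)    # enough room left to close brackets

def is_complete(prefix):
    depth = sum({"(": 1, ")": -1}.get(ch, 0) for ch in prefix)
    return depth == 0

rng = np.random.default_rng(0)
out, max_len = [], 10
for step in range(max_len):
    logits = rng.standard_normal(len(VOCAB))            # stand-in for LM logits
    mask = np.full(len(VOCAB), -np.inf)
    for i, tok in enumerate(VOCAB):
        ok = is_complete(out) if tok == "<eos>" else \
             can_still_be_valid(out + [tok], max_len)
        if ok:
            mask[i] = 0.0                                # keep only valid tokens
    probs = np.exp(logits + mask)
    probs /= probs.sum()
    tok = VOCAB[rng.choice(len(VOCAB), p=probs)]
    if tok == "<eos>":
        break
    out.append(tok)
print("".join(out))                                      # always balanced
```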
[ "['Shubham Ugare' 'Tarun Suresh' 'Hangoo Kang' 'Sasa Misailovic'\n 'Gagandeep Singh']" ]
null
null
2403.01633
null
null
http://arxiv.org/pdf/2403.01633v2
2024-05-24T20:35:38Z
2024-03-03T22:43:47Z
Critical windows: non-asymptotic theory for feature emergence in diffusion models
We develop theory to understand an intriguing property of diffusion models for image generation that we term critical windows. Empirically, it has been observed that there are narrow time intervals in sampling during which particular features of the final image emerge, e.g. the image class or background color (Ho et al., 2020b; Meng et al., 2022; Choi et al., 2022; Raya & Ambrogioni, 2023; Georgiev et al., 2023; Sclocchi et al., 2024; Biroli et al., 2024). While this is advantageous for interpretability as it implies one can localize properties of the generation to a small segment of the trajectory, it seems at odds with the continuous nature of the diffusion. We propose a formal framework for studying these windows and show that for data coming from a mixture of strongly log-concave densities, these windows can be provably bounded in terms of certain measures of inter- and intra-group separation. We also instantiate these bounds for concrete examples like well-conditioned Gaussian mixtures. Finally, we use our bounds to give a rigorous interpretation of diffusion models as hierarchical samplers that progressively "decide" output features over a discrete sequence of times. We validate our bounds with synthetic experiments. Additionally, preliminary experiments on Stable Diffusion suggest critical windows may serve as a useful tool for diagnosing fairness and privacy violations in real-world diffusion models.
[ "['Marvin Li' 'Sitan Chen']" ]
null
null
2403.01635
null
null
http://arxiv.org/pdf/2403.01635v1
2024-03-03T22:55:39Z
2024-03-03T22:55:39Z
Application of Neural Ordinary Differential Equations for Tokamak Plasma Dynamics Analysis
In the quest for controlled thermonuclear fusion, tokamaks present complex challenges in understanding burning plasma dynamics. This study introduces a multi-region multi-timescale transport model, employing Neural Ordinary Differential Equations (Neural ODEs) to simulate the intricate energy transfer processes within tokamaks. Our methodology leverages Neural ODEs for the numerical derivation of diffusivity parameters from DIII-D tokamak experimental data, enabling the precise modeling of energy interactions between electrons and ions across various regions, including the core, edge, and scrape-off layer. These regions are conceptualized as distinct nodes, capturing the critical timescales of radiation and transport processes essential for efficient tokamak operation. Validation against DIII-D plasmas under various auxiliary heating conditions demonstrates the model's effectiveness, ultimately shedding light on ways to enhance tokamak performance with deep learning.
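A hedged sketch of the modeling pattern described above: a small neural network parameterizes the right-hand side of a coupled two-node energy-balance ODE (e.g., electrons and ions) and is integrated with a differentiable ODE solver so its parameters can be fit to measured trajectories. The toy relaxation system, network size, and training setup are illustrative assumptions, and the sketch assumes the torchdiffeq package is available.

```python
# Neural ODE fit to a toy two-node energy-exchange trajectory (illustrative).
import torch
import torch.nn as nn
from torchdiffeq import odeint

class EnergyRHS(nn.Module):
    """Learned dW/dt for a two-node (electron/ion-like) energy system."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 2))

    def forward(self, t, w):
        return self.net(w)

torch.manual_seed(0)
rhs = EnergyRHS()
t = torch.linspace(0.0, 1.0, 50)
w0 = torch.tensor([1.0, 0.5])                      # initial stored energies

# Synthetic "measured" trajectories from a simple relaxation toward equilibrium.
with torch.no_grad():
    target = odeint(lambda t, w: torch.stack(
        [-0.8 * (w[0] - w[1]), 0.8 * (w[0] - w[1]) - 0.2 * w[1]]), w0, t)

opt = torch.optim.Adam(rhs.parameters(), lr=1e-2)
for epoch in range(100):
    pred = odeint(rhs, w0, t)                      # differentiable ODE solve
    loss = ((pred - target) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
print("final fit loss:", loss.item())
```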
[ "['Zefang Liu' 'Weston M. Stacey']" ]
null
null
2403.01636
null
null
http://arxiv.org/pdf/2403.01636v2
2024-03-06T04:34:01Z
2024-03-03T22:57:44Z
Sample Efficient Myopic Exploration Through Multitask Reinforcement Learning with Diverse Tasks
Multitask Reinforcement Learning (MTRL) approaches have gained increasing attention for their wide applications in many important Reinforcement Learning (RL) tasks. However, while recent advancements in MTRL theory have focused on the improved statistical efficiency by assuming a shared structure across tasks, exploration--a crucial aspect of RL--has been largely overlooked. This paper addresses this gap by showing that when an agent is trained on a sufficiently diverse set of tasks, a generic policy-sharing algorithm with a myopic exploration design like $\epsilon$-greedy, which is inefficient in general, can be sample-efficient for MTRL. To the best of our knowledge, this is the first theoretical demonstration of the "exploration benefits" of MTRL. It may also shed light on the enigmatic success of the wide applications of myopic exploration in practice. To validate the role of diversity, we conduct experiments on synthetic robotic control environments, where the diverse task set aligns with the task selection by automatic curriculum learning, which is empirically shown to improve sample-efficiency.
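A tabular sketch of the setting discussed above: a single epsilon-greedy Q-learner with a goal-conditioned value table is trained across a diverse set of goal-reaching tasks on a small chain, so exploration driven by the task mixture covers states any single task would rarely visit. The environment, task set, and hyperparameters are illustrative, not the paper's construction or analysis.

```python
# Goal-conditioned epsilon-greedy Q-learning across a diverse task set.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, eps, alpha, gamma = 10, 2, 0.1, 0.5, 0.95
tasks = [3, 6, 9]                                  # diverse goal states
Q = {g: np.zeros((n_states, n_actions)) for g in tasks}   # goal-conditioned table

def step(s, a, goal):
    s2 = min(max(s + (1 if a == 1 else -1), 0), n_states - 1)
    return s2, (1.0 if s2 == goal else 0.0), s2 == goal

for episode in range(2000):
    goal = tasks[episode % len(tasks)]             # cycle through the task set
    s = 0
    for _ in range(50):
        if rng.random() < eps:                     # myopic (epsilon-greedy) exploration
            a = int(rng.integers(n_actions))
        else:
            a = int(np.argmax(Q[goal][s]))
        s2, r, done = step(s, a, goal)
        target = r + (0.0 if done else gamma * Q[goal][s2].max())
        Q[goal][s, a] += alpha * (target - Q[goal][s, a])
        s = s2
        if done:
            break

print({g: int(np.argmax(Q[g][0])) for g in tasks})  # greedy first action per task
```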
[ "['Ziping Xu' 'Zifan Xu' 'Runxuan Jiang' 'Peter Stone' 'Ambuj Tewari']" ]
null
null
2403.01639
null
null
http://arxiv.org/pdf/2403.01639v1
2024-03-03T23:15:48Z
2024-03-03T23:15:48Z
Theoretical Insights for Diffusion Guidance: A Case Study for Gaussian Mixture Models
Diffusion models benefit from instillation of task-specific information into the score function to steer the sample generation towards desired properties. Such information is coined as guidance. For example, in text-to-image synthesis, text input is encoded as guidance to generate semantically aligned images. Proper guidance inputs are closely tied to the performance of diffusion models. A common observation is that strong guidance promotes a tight alignment to the task-specific information, while reducing the diversity of the generated samples. In this paper, we provide the first theoretical study towards understanding the influence of guidance on diffusion models in the context of Gaussian mixture models. Under mild conditions, we prove that incorporating diffusion guidance not only boosts classification confidence but also diminishes distribution diversity, leading to a reduction in the differential entropy of the output distribution. Our analysis covers the widely adopted sampling schemes including DDPM and DDIM, and leverages comparison inequalities for differential equations as well as the Fokker-Planck equation that characterizes the evolution of probability density function, which may be of independent theoretical interest.
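For reference, the standard guided score analyzed in this line of work, in both classifier-guidance and classifier-free form, with the guidance strength as the knob whose effect on confidence and entropy the abstract describes. The notation below is the common one and may differ from the paper's.

```latex
% Guided score with strength w >= 0 (assumed standard notation):
\[
  \nabla_x \log p_t^{\,w}(x \mid y)
  \;=\;
  \nabla_x \log p_t(x)
  \;+\;
  (1 + w)\,\nabla_x \log p_t(y \mid x),
\]
% equivalently, in classifier-free form with conditional score s(x, y) and
% unconditional score s(x):
\[
  s^{\,w}(x, y) \;=\; (1 + w)\, s(x, y) \;-\; w\, s(x).
\]
```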
[ "['Yuchen Wu' 'Minshuo Chen' 'Zihao Li' 'Mengdi Wang' 'Yuting Wei']" ]
null
null
2403.01642
null
null
http://arxiv.org/pdf/2403.01642v1
2024-03-03T23:38:37Z
2024-03-03T23:38:37Z
Blue and Green-Mode Energy-Efficient Chemiresistive Sensor Array Realized by Rapid Ensemble Learning
The rapid advancement of the Internet of Things (IoT) necessitates the development of optimized Chemiresistive Sensor (CRS) arrays that are both energy-efficient and capable. This study introduces a novel optimization strategy that employs a rapid ensemble learning-based model committee approach to achieve these goals. Utilizing machine learning models such as Elastic Net Regression, Random Forests, and XGBoost, among others, the strategy identifies the most impactful sensors in a CRS array for accurate classification: a weighted voting mechanism is introduced to aggregate the models' opinions in sensor selection, thereby setting up two distinct working modes, termed "Blue" and "Green". The Blue mode operates with all sensors for maximum detection capability, while the Green mode selectively activates only key sensors, significantly reducing energy consumption without compromising detection accuracy. The strategy is validated through theoretical calculations and Monte Carlo simulations, demonstrating its effectiveness and accuracy. The proposed optimization strategy not only elevates the detection capability of CRS arrays but also brings it closer to theoretical limits, promising significant implications for the development of low-cost, easily fabricable next-generation IoT sensor terminals.
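A sketch of the model-committee idea described above: several models are fit on the full sensor array, their per-sensor importances are combined by a weighted vote, and only the top-ranked sensors are kept for a low-power "Green" mode. The models (gradient boosting stands in for XGBoost), the vote weights, and the synthetic data are illustrative assumptions, not the paper's exact committee.

```python
# Weighted-vote sensor selection with an ensemble committee (illustrative).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=600, n_features=12, n_informative=4,
                           random_state=0)                 # 12 "sensors"

committee = {
    "random_forest": (RandomForestClassifier(random_state=0), 1.0),
    "grad_boost": (GradientBoostingClassifier(random_state=0), 1.0),
    "elastic_logreg": (LogisticRegression(penalty="elasticnet", solver="saga",
                                          l1_ratio=0.5, max_iter=5000), 0.5),
}

votes = np.zeros(X.shape[1])
for name, (model, weight) in committee.items():
    model.fit(X, y)
    importance = (model.feature_importances_
                  if hasattr(model, "feature_importances_")
                  else np.abs(model.coef_).ravel())
    votes += weight * importance / importance.sum()         # weighted voting

green_sensors = np.argsort(votes)[::-1][:4]                 # keep top-4 sensors
print("Green-mode sensors:", sorted(green_sensors.tolist()))

# Green mode: operate with only the selected sensors.
green_model = RandomForestClassifier(random_state=0).fit(X[:, green_sensors], y)
print("Green-mode training accuracy:", green_model.score(X[:, green_sensors], y))
```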
[ "['Zeheng Wang' 'James Cooper' 'Muhammad Usman' 'Timothy van der Laan']" ]
null
null
2403.01643
null
null
http://arxiv.org/pdf/2403.01643v2
2024-05-30T17:46:22Z
2024-03-03T23:40:35Z
You Need to Pay Better Attention: Rethinking the Mathematics of Attention Mechanism
Scaled Dot Product Attention (SDPA) is the backbone of many modern deep-learning models. It is so versatile that it has been used in natural language, vision, and multi-modal domains with very little change compared to its original formulation. This paper discusses why the current formulation is inefficient by delving into the mathematical details of the attention mechanism. We propose three improvements to mitigate these inefficiencies, thereby introducing three enhanced attention mechanisms: Optimised, Efficient, and Super Attention. Optimised and Efficient Attention have one and two matrix multiplications fewer per head, respectively, and 25% and 50% fewer parameters, respectively, than standard SDPA, but perform similarly to standard SDPA in both vision and natural language tasks. They can be used in all applications where SDPA is used while offering smaller model sizes and faster training and inference without noticeable loss in performance. Super Attention introduces a new linear transformation on the values, transforming them from the left. It outperforms standard SDPA on vision and natural language tasks by up to 17% while having one fewer matrix multiplication per head and 25% fewer parameters than standard SDPA. Consequently, it is also faster than standard SDPA. Super Attention is ideal in applications where the attention layer's context length is fixed, such as Vision Transformers. In addition to providing mathematical reasoning, we evaluate the presented attention mechanisms on several datasets including MNIST, CIFAR100, ImageNet, IMDB Movie Reviews, and Amazon Reviews, as well as combined Europarl and Anki English-Spanish datasets for neural machine translation.
[ "['Mehran Hosseini' 'Peyman Hosseini']" ]
null
null
2403.01653
null
null
http://arxiv.org/pdf/2403.01653v1
2024-03-04T00:09:07Z
2024-03-04T00:09:07Z
Day-ahead regional solar power forecasting with hierarchical temporal convolutional neural networks using historical power generation and weather data
Regional solar power forecasting, which involves predicting the total power generation from all rooftop photovoltaic systems in a region, holds significant importance for various stakeholders in the energy sector. However, the vast amount of solar power generation and weather time series from geographically dispersed locations that need to be considered in the forecasting process makes accurate regional forecasting challenging. Therefore, previous work has limited the focus to either forecasting a single time series (i.e., the aggregated time series), which is the sum of all solar generation time series in a region, disregarding the location-specific weather effects, or forecasting the solar generation time series of each PV site (i.e., individual time series) independently using location-specific weather data, resulting in a large number of forecasting models. In this work, we propose two deep-learning-based regional forecasting methods that can effectively leverage both types of time series (aggregated and individual) with weather data in a region. We propose two hierarchical temporal convolutional neural network architectures (HTCNN) and two strategies to adapt HTCNNs for regional solar power forecasting. At first, we explore generating a regional forecast using a single HTCNN. Next, we divide the region into multiple sub-regions based on weather information and train separate HTCNNs for each sub-region; the forecasts of each sub-region are then added to generate a regional forecast. The proposed work is evaluated using a large dataset collected over a year from 101 locations across Western Australia to provide a day-ahead forecast. We compare our approaches with well-known alternative methods and show that the sub-region HTCNN requires fewer individual networks and achieves a forecast skill score of 40.2%, a statistically significant error reduction of 6.5% compared to the best counterpart.
[ "['Maneesha Perera' 'Julian De Hoog' 'Kasun Bandara' 'Damith Senanayake'\n 'Saman Halgamuge']" ]
null
null
2403.01660
null
null
http://arxiv.org/pdf/2403.01660v1
2024-03-04T00:48:36Z
2024-03-04T00:48:36Z
Geometry and Stability of Supervised Learning Problems
We introduce a notion of distance between supervised learning problems, which we call the Risk distance. This optimal-transport-inspired distance facilitates stability results; one can quantify how seriously issues like sampling bias, noise, limited data, and approximations might change a given problem by bounding how much these modifications can move the problem under the Risk distance. With the distance established, we explore the geometry of the resulting space of supervised learning problems, providing explicit geodesics and proving that the set of classification problems is dense in a larger class of problems. We also provide two variants of the Risk distance: one that incorporates specified weights on a problem's predictors, and one that is more sensitive to the contours of a problem's risk landscape.
[ "['Facundo Mémoli' 'Brantley Vose' 'Robert C. Williamson']" ]
null
null
2403.01666
null
null
http://arxiv.org/pdf/2403.01666v2
2024-06-08T04:05:49Z
2024-03-04T01:33:53Z
Improving Adversarial Energy-Based Model via Diffusion Process
Generative models have shown strong generation ability while efficient likelihood estimation is less explored. Energy-based models (EBMs) define a flexible energy function to parameterize unnormalized densities efficiently but are notorious for being difficult to train. Adversarial EBMs introduce a generator to form a minimax training game that avoids the expensive MCMC sampling used in traditional EBMs, but a noticeable gap between adversarial EBMs and other strong generative models still exists. Inspired by diffusion-based models, we embed EBMs into each denoising step to split a long generation process into several smaller steps. Besides, we employ a symmetric Jeffrey divergence and introduce a variational posterior distribution for the generator's training to address the main challenges that exist in adversarial EBMs. Our experiments show significant improvement in generation compared to existing adversarial EBMs, while also providing a useful energy function for efficient density estimation.
[ "['Cong Geng' 'Tian Han' 'Peng-Tao Jiang' 'Hao Zhang' 'Jinwei Chen'\n 'Søren Hauberg' 'Bo Li']" ]
null
null
2403.01669
null
null
http://arxiv.org/pdf/2403.01669v1
2024-03-04T01:44:19Z
2024-03-04T01:44:19Z
Quantifying and Predicting Residential Building Flexibility Using Machine Learning Methods
Residential buildings account for a significant portion (35%) of the total electricity consumption in the U.S. as of 2022. As more distributed energy resources are installed in buildings, their potential to provide flexibility to the grid increases. To tap into that flexibility provided by buildings, aggregators or system operators need to quantify and forecast flexibility. Previous works in this area primarily focused on commercial buildings, with little work on residential buildings. To address the gap, this paper first proposes two complementary flexibility metrics (i.e., power and energy flexibility) and then investigates several mainstream machine learning-based models for predicting the time-variant and sporadic flexibility of residential buildings at four-hour and 24-hour forecast horizons. The long short-term memory (LSTM) model achieves the best performance and can predict power flexibility up to 24 hours ahead with an average error of around 0.7 kW. However, for energy flexibility, the LSTM model is only successful for loads with consistent operational patterns throughout the year and faces challenges when predicting energy flexibility associated with HVAC systems.
[ "['Patrick Salter' 'Qiuhua Huang' 'Paulo Cesar Tabares-Velasco']" ]
null
null
2403.01671
null
null
http://arxiv.org/pdf/2403.01671v3
2024-05-09T00:56:22Z
2024-03-04T01:49:23Z
Permutation invariant functions: statistical tests, density estimation, and computationally efficient embedding
Permutation invariance is among the most common symmetries that can be exploited to simplify complex problems in machine learning (ML). There has been a tremendous surge of research activity in building permutation invariant ML architectures. However, less attention is given to: (1) how to statistically test for permutation invariance of coordinates in a random vector where the dimension is allowed to grow with the sample size; (2) how to leverage permutation invariance in estimation problems and how it helps reduce dimensions. In this paper, we take a step back and examine these questions in several fundamental problems: (i) testing the assumption of permutation invariance of multivariate distributions; (ii) estimating permutation invariant densities; (iii) analyzing the metric entropy of permutation invariant function classes and comparing them with their counterparts without imposing permutation invariance; (iv) deriving an embedding of permutation invariant reproducing kernel Hilbert spaces for efficient computation. In particular, our methods for (i) and (iv) are based on a sorting trick and (ii) is based on an averaging trick. These tricks substantially simplify the exploitation of permutation invariance.
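A small illustration of the "sorting trick" mentioned above: sorting the coordinates of a vector picks a canonical representative of its permutation orbit, so any function or kernel applied to sorted inputs is automatically permutation invariant. The RBF kernel and the toy vectors are illustrative; the paper's statistical tests and RKHS embedding build on the same device but are not reproduced here.

```python
# Sorting trick: a permutation-invariant kernel via canonical (sorted) inputs.
import numpy as np

rng = np.random.default_rng(0)

def sort_embed(x):
    """Canonical representative of the permutation orbit of x."""
    return np.sort(x)

def rbf(a, b, gamma=0.5):
    return np.exp(-gamma * np.sum((a - b) ** 2))

def perm_invariant_rbf(a, b, gamma=0.5):
    """Permutation-invariant kernel obtained by sorting both arguments."""
    return rbf(sort_embed(a), sort_embed(b), gamma)

x = rng.standard_normal(6)
y = rng.standard_normal(6)
p, q = rng.permutation(6), rng.permutation(6)

print(perm_invariant_rbf(x, y))           # unchanged under any permutations:
print(perm_invariant_rbf(x[p], y[q]))     # same value as the line above
print(rbf(x, y), rbf(x[p], y[q]))         # the plain kernel is not invariant
```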
[ "['Wee Chaimanowong' 'Ying Zhu']" ]