categories: string
doi: string
id: string
year: float64
venue: string
link: string
updated: string
published: string
title: string
abstract: string
authors: list
null
null
2403.10946
null
null
http://arxiv.org/pdf/2403.10946v1
2024-03-16T15:29:22Z
2024-03-16T15:29:22Z
The Fallacy of Minimizing Local Regret in the Sequential Task Setting
In the realm of Reinforcement Learning (RL), online RL is often conceptualized as an optimization problem, where an algorithm interacts with an unknown environment to minimize cumulative regret. In a stationary setting, strong theoretical guarantees, like a sublinear ($\sqrt{T}$) regret bound, can be obtained, which typically implies convergence to an optimal policy and the cessation of exploration. However, these theoretical setups often oversimplify the complexities encountered in real-world RL implementations, where tasks arrive sequentially with substantial changes between tasks and the algorithm may not be allowed to adaptively learn within certain tasks. We study changes beyond the outcome distributions, encompassing changes in the reward designs (mappings from outcomes to rewards) and the permissible policy spaces. Our results reveal the fallacy of myopically minimizing regret within each task: obtaining optimal regret rates in the early tasks may lead to worse rates in the subsequent ones, even when the outcome distributions stay the same. To realize the optimal cumulative regret bound across all the tasks, the algorithm has to overly explore in the earlier tasks. This theoretical insight is practically significant, suggesting that due to unanticipated changes (e.g., rapid technological development or human-in-the-loop involvement) between tasks, the algorithm needs to explore more than it would in the usual stationary setting within each task. This implication resonates with the common practice of using clipped policies in mobile health clinical trials and maintaining a fixed rate of $\epsilon$-greedy exploration in robotic learning.
[ "['Ziping Xu' 'Kelly W. Zhang' 'Susan A. Murphy']" ]
null
null
2403.10949
null
null
http://arxiv.org/pdf/2403.10949v2
2024-03-26T01:15:09Z
2024-03-16T15:30:34Z
SelfIE: Self-Interpretation of Large Language Model Embeddings
How do large language models (LLMs) obtain their answers? The ability to explain and control an LLM's reasoning process is key for reliability, transparency, and future model developments. We propose SelfIE (Self-Interpretation of Embeddings), a framework that enables LLMs to interpret their own embeddings in natural language by leveraging their ability to respond to inquiries about a given passage. Capable of interpreting open-world concepts in the hidden embeddings, SelfIE reveals LLM internal reasoning in cases such as making ethical decisions, internalizing prompt injection, and recalling harmful knowledge. SelfIE's text descriptions of hidden embeddings also open up new avenues to control LLM reasoning. We propose Supervised Control, which allows editing open-ended concepts while only requiring gradient computation of an individual layer. We extend RLHF to hidden embeddings and propose Reinforcement Control, which erases harmful knowledge in LLMs without supervision targets.
[ "['Haozhe Chen' 'Carl Vondrick' 'Chengzhi Mao']" ]
null
null
2403.10961
null
null
http://arxiv.org/abs/2403.10961v1
2024-03-16T16:16:31Z
2024-03-16T16:16:31Z
Energy-Based Models with Applications to Speech and Language Processing
Energy-Based Models (EBMs) are an important class of probabilistic models, also known as random fields and undirected graphical models. EBMs are un-normalized and thus radically different from other popular self-normalized probabilistic models such as hidden Markov models (HMMs), autoregressive models, generative adversarial nets (GANs) and variational auto-encoders (VAEs). Over the past years, EBMs have attracted increasing interest not only from the core machine learning community, but also from application domains such as speech, vision, natural language processing (NLP) and so on, due to significant theoretical and algorithmic progress. The sequential nature of speech and language also presents special challenges and needs a different treatment from processing fixed-dimensional data (e.g., images). Therefore, the purpose of this monograph is to present a systematic introduction to energy-based models, including both algorithmic progress and applications in speech and language processing. First, the basics of EBMs are introduced, including classic models, recent models parameterized by neural networks, sampling methods, and various learning methods from the classic learning algorithms to the most advanced ones. Then, the application of EBMs in three different scenarios is presented, i.e., for modeling marginal, conditional and joint distributions, respectively. 1) EBMs for sequential data with applications in language modeling, where the main focus is on the marginal distribution of a sequence itself; 2) EBMs for modeling conditional distributions of target sequences given observation sequences, with applications in speech recognition, sequence labeling and text generation; 3) EBMs for modeling joint distributions of both sequences of observations and targets, and their applications in semi-supervised learning and calibrated natural language understanding.
[ "['Zhijian Ou']" ]
null
null
2403.10967
null
null
http://arxiv.org/pdf/2403.10967v1
2024-03-16T16:29:40Z
2024-03-16T16:29:40Z
Dreaming of Many Worlds: Learning Contextual World Models Aids Zero-Shot Generalization
Zero-shot generalization (ZSG) to unseen dynamics is a major challenge for creating generally capable embodied agents. To address the broader challenge, we start with the simpler setting of contextual reinforcement learning (cRL), assuming observability of the context values that parameterize the variation in the system's dynamics, such as the mass or dimensions of a robot, without making further simplifying assumptions about the observability of the Markovian state. Toward the goal of ZSG to unseen variation in context, we propose the contextual recurrent state-space model (cRSSM), which introduces changes to the world model of the Dreamer (v3) (Hafner et al., 2023). This allows the world model to incorporate context for inferring latent Markovian states from the observations and modeling the latent dynamics. Our experiments show that such systematic incorporation of the context improves the ZSG of the policies trained on the ``dreams'' of the world model. We further find qualitatively that our approach allows Dreamer to disentangle the latent state from context, allowing it to extrapolate its dreams to the many worlds of unseen contexts. The code for all our experiments is available at \url{https://github.com/sai-prasanna/dreaming_of_many_worlds}.
[ "['Sai Prasanna' 'Karim Farid' 'Raghu Rajan' 'André Biedenkapp']" ]
null
null
2403.10968
null
null
http://arxiv.org/pdf/2403.10968v1
2024-03-16T16:45:28Z
2024-03-16T16:45:28Z
Enhancing IoT Security Against DDoS Attacks through Federated Learning
The rapid proliferation of the Internet of Things (IoT) has ushered in transformative connectivity between physical devices and the digital realm. Nonetheless, the escalating threat of Distributed Denial of Service (DDoS) attacks jeopardizes the integrity and reliability of IoT networks. Conventional DDoS mitigation approaches are ill-equipped to handle the intricacies of IoT ecosystems, potentially compromising data privacy. This paper introduces an innovative strategy to bolster the security of IoT networks against DDoS attacks by harnessing the power of Federated Learning that allows multiple IoT devices or edge nodes to collaboratively build a global model while preserving data privacy and minimizing communication overhead. The research aims to investigate Federated Learning's effectiveness in detecting and mitigating DDoS attacks in IoT. Our proposed framework leverages IoT devices' collective intelligence for real-time attack detection without compromising sensitive data. This study proposes innovative deep autoencoder approaches for data dimensionality reduction, retraining, and partial selection to enhance the performance and stability of the proposed model. Additionally, two renowned aggregation algorithms, FedAvg and FedAvgM, are employed in this research. Various metrics, including true positive rate, false positive rate, and F1-score, are employed to evaluate the model. The dataset utilized in this research, N-BaIoT, exhibits non-IID data distribution, where data categories are distributed quite differently. The negative impact of these distribution disparities is managed by employing retraining and partial selection techniques, enhancing the final model's stability. Furthermore, evaluation results demonstrate that the FedAvgM aggregation algorithm outperforms FedAvg, indicating that in non-IID datasets, FedAvgM provides better stability and performance.
[ "['Ghazaleh Shirvani' 'Saeid Ghasemshirazi' 'Mohammad Ali Alipour']" ]
null
null
2403.10984
null
null
http://arxiv.org/pdf/2403.10984v1
2024-03-16T17:32:59Z
2024-03-16T17:32:59Z
IoTCO2: Assessing the End-To-End Carbon Footprint of Internet-of-Things-Enabled Deep Learning
To improve privacy and ensure quality-of-service (QoS), deep learning (DL) models are increasingly deployed on Internet of Things (IoT) devices for data processing, significantly increasing the carbon footprint associated with DL on IoT, covering both operational and embodied aspects. Existing operational energy predictors often overlook quantized DL models and emerging neural processing units (NPUs), while embodied carbon footprint modeling tools neglect non-computing hardware components common in IoT devices, creating a gap in accurate carbon footprint modeling tools for IoT-enabled DL. This paper introduces \textit{carb}, an end-to-end modeling tool for precise carbon footprint estimation in IoT-enabled DL, demonstrating a maximum $\pm 21\%$ deviation in carbon footprint values compared to actual measurements across various DL models. Additionally, practical applications of carb are showcased through multiple use case studies.
[ "['Ahmad Faiz' 'Shahzeen Attari' 'Gayle Buck' 'Fan Chen' 'Lei Jiang']" ]
null
null
2403.10995
null
null
http://arxiv.org/pdf/2403.10995v1
2024-03-16T18:44:56Z
2024-03-16T18:44:56Z
Edge Private Graph Neural Networks with Singular Value Perturbation
Graph neural networks (GNNs) play a key role in learning representations from graph-structured data and are demonstrated to be useful in many applications. However, the GNN training pipeline has been shown to be vulnerable to node feature leakage and edge extraction attacks. This paper investigates a scenario where an attacker aims to recover private edge information from a trained GNN model. Previous studies have employed differential privacy (DP) to add noise directly to the adjacency matrix or a compact graph representation. The added perturbations cause the graph structure to be substantially morphed, reducing the model utility. We propose a new privacy-preserving GNN training algorithm, Eclipse, that maintains good model utility while providing strong privacy protection on edges. Eclipse is based on two key observations. First, adjacency matrices in graph structures exhibit low-rank behavior. Thus, Eclipse trains GNNs with a low-rank format of the graph via singular value decomposition (SVD), rather than the original graph. Using the low-rank format, Eclipse preserves the primary graph topology and removes the remaining residual edges. Eclipse adds noise to the low-rank singular values instead of the entire graph, thereby preserving the graph privacy while still maintaining enough of the graph structure to maintain model utility. We theoretically show that Eclipse provides a formal DP guarantee on edges. Experiments on benchmark graph datasets show that Eclipse achieves a significantly better privacy-utility tradeoff compared to existing privacy-preserving GNN training methods. In particular, under strong privacy constraints ($\epsilon < 4$), Eclipse shows significant gains in model utility by up to 46%. We further demonstrate that Eclipse also has better resilience against common edge attacks (e.g., LPA), lowering the attack AUC by up to 5% compared to other state-of-the-art baselines.
[ "['Tingting Tang' 'Yue Niu' 'Salman Avestimehr' 'Murali Annavaram']" ]
null
null
2403.10996
null
null
http://arxiv.org/pdf/2403.10996v1
2024-03-16T18:47:04Z
2024-03-16T18:47:04Z
A Scalable and Parallelizable Digital Twin Framework for Sustainable Sim2Real Transition of Multi-Agent Reinforcement Learning Systems
This work presents a sustainable multi-agent deep reinforcement learning framework capable of selectively scaling parallelized training workloads on-demand, and transferring the trained policies from simulation to reality using minimal hardware resources. We introduce AutoDRIVE Ecosystem as an enabling digital twin framework to train, deploy, and transfer cooperative as well as competitive multi-agent reinforcement learning policies from simulation to reality. Particularly, we first investigate an intersection traversal problem of 4 cooperative vehicles (Nigel) that share limited state information in single as well as multi-agent learning settings using a common policy approach. We then investigate an adversarial autonomous racing problem of 2 vehicles (F1TENTH) using an individual policy approach. In either set of experiments, a decentralized learning architecture was adopted, which allowed robust training and testing of the policies in stochastic environments. The agents were provided with realistically sparse observation spaces, and were restricted to sample control actions that implicitly satisfied the imposed kinodynamic and safety constraints. The experimental results for both problem statements are reported in terms of quantitative metrics and qualitative remarks for training as well as deployment phases. We also discuss agent and environment parallelization techniques adopted to efficiently accelerate MARL training, while analyzing their computational performance. Finally, we demonstrate a resource-aware transition of the trained policies from simulation to reality using the proposed digital twin framework.
[ "['Chinmay Vilas Samak' 'Tanmay Vilas Samak' 'Venkat Krovi']" ]
null
null
2403.10997
null
null
http://arxiv.org/pdf/2403.10997v1
2024-03-16T18:50:44Z
2024-03-16T18:50:44Z
N2F2: Hierarchical Scene Understanding with Nested Neural Feature Fields
Understanding complex scenes at multiple levels of abstraction remains a formidable challenge in computer vision. To address this, we introduce Nested Neural Feature Fields (N2F2), a novel approach that employs hierarchical supervision to learn a single feature field, wherein different dimensions within the same high-dimensional feature encode scene properties at varying granularities. Our method allows for a flexible definition of hierarchies, tailored to either the physical dimensions or semantics or both, thereby enabling a comprehensive and nuanced understanding of scenes. We leverage a 2D class-agnostic segmentation model to provide semantically meaningful pixel groupings at arbitrary scales in the image space, and query the CLIP vision-encoder to obtain language-aligned embeddings for each of these segments. Our proposed hierarchical supervision method then assigns different nested dimensions of the feature field to distill the CLIP embeddings using deferred volumetric rendering at varying physical scales, creating a coarse-to-fine representation. Extensive experiments show that our approach outperforms the state-of-the-art feature field distillation methods on tasks such as open-vocabulary 3D segmentation and localization, demonstrating the effectiveness of the learned nested feature field.
[ "['Yash Bhalgat' 'Iro Laina' 'João F. Henriques' 'Andrew Zisserman'\n 'Andrea Vedaldi']" ]
null
null
2403.11001
null
null
http://arxiv.org/pdf/2403.11001v1
2024-03-16T19:11:57Z
2024-03-16T19:11:57Z
Topologically faithful multi-class segmentation in medical images
Topological accuracy in medical image segmentation is a highly important property for downstream applications such as network analysis and flow modeling in vessels or cell counting. Recently, significant methodological advancements have brought well-founded concepts from algebraic topology to binary segmentation. However, these approaches have been underexplored in multi-class segmentation scenarios, where topological errors are common. We propose a general loss function for topologically faithful multi-class segmentation extending the recent Betti matching concept, which is based on induced matchings of persistence barcodes. We project the N-class segmentation problem to N single-class segmentation tasks, which allows us to use 1-parameter persistent homology, making training of neural networks computationally feasible. We validate our method on a comprehensive set of four medical datasets with highly variant topological characteristics. Our loss formulation significantly enhances topological correctness in cardiac, cell, artery-vein, and Circle of Willis segmentation.
[ "['Alexander H. Berger' 'Nico Stucki' 'Laurin Lux' 'Vincent Buergin'\n 'Suprosanna Shit' 'Anna Banaszak' 'Daniel Rueckert' 'Ulrich Bauer'\n 'Johannes C. Paetzold']" ]
null
null
2403.11004
null
null
http://arxiv.org/pdf/2403.11004v2
2024-04-13T00:10:00Z
2024-03-16T19:40:35Z
Forward Learning of Graph Neural Networks
Graph neural networks (GNNs) have achieved remarkable success across a wide range of applications, such as recommendation, drug discovery, and question answering. Behind the success of GNNs lies the backpropagation (BP) algorithm, which is the de facto standard for training deep neural networks (NNs). However, despite its effectiveness, BP imposes several constraints, which are not only biologically implausible, but also limit the scalability, parallelism, and flexibility in learning NNs. Examples of such constraints include storage of neural activities computed in the forward pass for use in the subsequent backward pass, and the dependence of parameter updates on non-local signals. To address these limitations, the forward-forward algorithm (FF) was recently proposed as an alternative to BP in the image classification domain, which trains NNs by performing two forward passes over positive and negative data. Inspired by this advance, we propose ForwardGNN in this work, a new forward learning procedure for GNNs, which avoids the constraints imposed by BP via an effective layer-wise local forward training. ForwardGNN extends the original FF to deal with graph data and GNNs, and makes it possible to operate without generating negative inputs (hence no longer forward-forward). Further, ForwardGNN enables each layer to learn from both the bottom-up and top-down signals without relying on the backpropagation of errors. Extensive experiments on real-world datasets show the effectiveness and generality of the proposed forward graph learning framework. We release our code at https://github.com/facebookresearch/forwardgnn.
[ "['Namyong Park' 'Xing Wang' 'Antoine Simoulin' 'Shuai Yang' 'Grey Yang'\n 'Ryan Rossi' 'Puja Trivedi' 'Nesreen Ahmed']" ]
null
null
2403.11013
null
null
http://arxiv.org/pdf/2403.11013v1
2024-03-16T20:42:27Z
2024-03-16T20:42:27Z
Improved Algorithm and Bounds for Successive Projection
Given a $K$-vertex simplex in a $d$-dimensional space, suppose we measure $n$ points on the simplex with noise (hence, some of the observed points fall outside the simplex). Vertex hunting is the problem of estimating the $K$ vertices of the simplex. A popular vertex hunting algorithm is the successive projection algorithm (SPA). However, SPA is observed to perform unsatisfactorily under strong noise or outliers. We propose pseudo-point SPA (pp-SPA). It uses a projection step and a denoise step to generate pseudo-points and feed them into SPA for vertex hunting. We derive error bounds for pp-SPA, leveraging extreme value theory of (possibly) high-dimensional random vectors. The results suggest that pp-SPA has faster rates and better numerical performance than SPA. Our analysis includes an improved non-asymptotic bound for the original SPA, which is of independent interest.
[ "['Jiashun Jin' 'Zheng Tracy Ke' 'Gabriel Moryoussef' 'Jiajun Tang'\n 'Jingming Wang']" ]
null
null
2403.11020
null
null
http://arxiv.org/pdf/2403.11020v1
2024-03-16T21:34:24Z
2024-03-16T21:34:24Z
Accelerating prototype selection with spatial abstraction
The increasing digitalization in industry and society leads to a growing abundance of data available to be processed and exploited. However, the high volume of data requires considerable computational resources for applying machine learning approaches. Prototype selection techniques have been applied to reduce the computational resources that these techniques require. In this paper, we propose an approach for speeding up existing prototype selection techniques. First, it builds an abstract representation of the dataset, using the notion of spatial partition. The second step uses this abstract representation to prune the search space efficiently and select a set of candidate prototypes. Afterwards, conventional prototype selection algorithms can be applied to the candidates selected by our approach. Our approach was integrated with five conventional prototype selection algorithms and tested on 14 widely recognized datasets used in classification tasks. The performance of the modified algorithms was compared to that of their original versions in terms of accuracy and reduction rate. The experimental results demonstrate that, overall, our proposed approach maintains accuracy while enhancing the reduction rate of the original prototype selection algorithms and simultaneously reducing their execution times.
[ "['Joel Luís Carbonera']" ]
null
null
2403.11032
null
null
http://arxiv.org/pdf/2403.11032v1
2024-03-16T22:35:21Z
2024-03-16T22:35:21Z
FH-TabNet: Multi-Class Familial Hypercholesterolemia Detection via a Multi-Stage Tabular Deep Learning
Familial Hypercholesterolemia (FH) is a genetic disorder characterized by elevated levels of Low-Density Lipoprotein (LDL) cholesterol or its associated genes. Early-stage and accurate categorization of FH is significant, allowing for timely interventions to mitigate the risk of life-threatening conditions. The conventional diagnosis approach, however, is complex, costly, and a challenging interpretation task even for experienced clinicians, resulting in high underdiagnosis rates. Although there has been a recent surge of interest in using Machine Learning (ML) models for early FH detection, existing solutions only consider a binary classification task solely using classical ML models. Despite its significance, application of Deep Learning (DL) for FH detection is in its infancy, possibly due to the categorical nature of the underlying clinical data. This paper addresses this gap by introducing FH-TabNet, a multi-stage tabular DL network for multi-class (Definite, Probable, Possible, and Unlikely) FH detection. FH-TabNet initially applies a deep tabular data learning architecture (TabNet) for primary categorization into healthy (Possible/Unlikely) and patient (Probable/Definite) classes. Subsequently, independent TabNet classifiers are applied to each subgroup, enabling refined classification. The model's performance is evaluated through 5-fold cross-validation, illustrating superior performance in categorizing FH patients, particularly in the challenging low-prevalence subcategories.
[ "['Sadaf Khademi' 'Zohreh Hajiakhondi' 'Golnaz Vaseghi'\n 'Nizal Sarrafzadegan' 'Arash Mohammadi']" ]
null
null
2403.11041
null
null
http://arxiv.org/pdf/2403.11041v1
2024-03-16T23:24:03Z
2024-03-16T23:24:03Z
FAGH: Accelerating Federated Learning with Approximated Global Hessian
In federated learning (FL), the significant communication overhead due to the slow convergence speed of training the global model poses a great challenge. Specifically, a large number of communication rounds are required to achieve convergence in FL. One potential solution is to employ the Newton-based optimization method for training, known for its quadratic convergence rate. However, the existing Newton-based FL training methods suffer from either memory inefficiency or high computational costs for local clients or the server. To address this issue, we propose an FL with approximated global Hessian (FAGH) method to accelerate FL training. FAGH leverages the first moment of the approximated global Hessian and the first moment of the global gradient to train the global model. By harnessing the approximated global Hessian curvature, FAGH accelerates the convergence of global model training, leading to a reduced number of communication rounds and thus shortened training time. Experimental results verify FAGH's effectiveness in decreasing the number of communication rounds and the time required to achieve the pre-specified objectives of the global model performance in terms of training and test losses as well as test accuracy. Notably, FAGH outperforms several state-of-the-art FL training methods.
[ "['Mrinmay Sen' 'A. K. Qin' 'Krishna Mohan C']" ]
null
null
2403.11044
null
null
http://arxiv.org/pdf/2403.11044v1
2024-03-16T23:52:25Z
2024-03-16T23:52:25Z
Advancing multivariate time series similarity assessment: an integrated computational approach
Data mining, particularly the analysis of multivariate time series data, plays a crucial role in extracting insights from complex systems and supporting informed decision-making across diverse domains. However, assessing the similarity of multivariate time series data presents several challenges, including dealing with large datasets, addressing temporal misalignments, and the need for efficient and comprehensive analytical frameworks. To address all these challenges, we propose a novel integrated computational approach known as Multivariate Time series Alignment and Similarity Assessment (MTASA). MTASA is built upon a hybrid methodology designed to optimize time series alignment, complemented by a multiprocessing engine that enhances the utilization of computational resources. This integrated approach comprises four key components, each addressing essential aspects of time series similarity assessment, thereby offering a comprehensive framework for analysis. MTASA is implemented as an open-source Python library with a user-friendly interface, making it accessible to researchers and practitioners. To evaluate the effectiveness of MTASA, we conducted an empirical study focused on assessing agroecosystem similarity using real-world environmental data. The results from this study highlight MTASA's superiority, achieving approximately 1.5 times greater accuracy and twice the speed compared to existing state-of-the-art integrated frameworks for multivariate time series similarity assessment. It is hoped that MTASA will significantly enhance the efficiency and accessibility of multivariate time series analysis, benefitting researchers and practitioners across various domains. Its capabilities in handling large datasets, addressing temporal misalignments, and delivering accurate results make MTASA a valuable tool for deriving insights and aiding decision-making processes in complex systems.
[ "['Franck Tonle' 'Henri Tonnang' 'Milliam Ndadji' 'Maurice Tchendji'\n 'Armand Nzeukou' 'Kennedy Senagi' 'Saliou Niassy']" ]
null
null
2403.11046
null
null
http://arxiv.org/pdf/2403.11046v1
2024-03-17T00:11:15Z
2024-03-17T00:11:15Z
Regulating Chatbot Output via Inter-Informational Competition
The advent of ChatGPT has sparked over a year of regulatory frenzy. However, few existing studies have rigorously questioned the assumption that, if left unregulated, AI chatbot's output would inflict tangible, severe real harm on human affairs. Most researchers have overlooked the critical possibility that the information market itself can effectively mitigate these risks and, as a result, they tend to use regulatory tools to address the issue directly. This Article develops a yardstick for reevaluating both AI-related content risks and corresponding regulatory proposals by focusing on inter-informational competition among various outlets. The decades-long history of regulating information and communications technologies indicates that regulators tend to err too much on the side of caution and to put forward excessive regulatory measures when encountering the uncertainties brought about by new technologies. In fact, a trove of empirical evidence has demonstrated that market competition among information outlets can effectively mitigate most risks and that overreliance on regulation is not only unnecessary but detrimental, as well. This Article argues that sufficient competition among chatbots and other information outlets in the information marketplace can sufficiently mitigate and even resolve most content risks posed by generative AI technologies. This renders certain loudly advocated regulatory strategies, like mandatory prohibitions, licensure, curation of datasets, and notice-and-response regimes, truly unnecessary and even toxic to desirable competition and innovation throughout the AI industry. Ultimately, the ideas that I advance in this Article should pour some much-needed cold water on the regulatory frenzy over generative AI and steer the issue back to a rational track.
[ "['Jiawei Zhang']" ]
null
null
2403.11048
null
null
http://arxiv.org/pdf/2403.11048v1
2024-03-17T00:29:42Z
2024-03-17T00:29:42Z
JustQ: Automated Deployment of Fair and Accurate Quantum Neural Networks
Despite the success of Quantum Neural Networks (QNNs) in decision-making systems, their fairness remains unexplored, as the focus primarily lies on accuracy. This work conducts a design space exploration, unveiling QNN unfairness, and highlighting the significant influence of QNN deployment and quantum noise on accuracy and fairness. To effectively navigate the vast QNN deployment design space, we propose JustQ, a framework for deploying fair and accurate QNNs on NISQ computers. It includes a complete NISQ error model, reinforcement learning-based deployment, and a flexible optimization objective incorporating both fairness and accuracy. Experimental results show JustQ outperforms previous methods, achieving superior accuracy and fairness. This work pioneers fair QNN design on NISQ computers, paving the way for future investigations.
[ "['Ruhan Wang' 'Fahiz Baba-Yara' 'Fan Chen']" ]
null
null
2403.11062
null
null
http://arxiv.org/pdf/2403.11062v3
2024-06-28T16:31:06Z
2024-03-17T02:24:09Z
A Simple Mixture Policy Parameterization for Improving Sample Efficiency of CVaR Optimization
Reinforcement learning algorithms utilizing policy gradients (PG) to optimize Conditional Value at Risk (CVaR) face significant challenges with sample inefficiency, hindering their practical applications. This inefficiency stems from two main facts: a focus on tail-end performance that overlooks many sampled trajectories, and the potential of gradient vanishing when the lower tail of the return distribution is overly flat. To address these challenges, we propose a simple mixture policy parameterization. This method integrates a risk-neutral policy with an adjustable policy to form a risk-averse policy. By employing this strategy, all collected trajectories can be utilized for policy updating, and the issue of vanishing gradients is counteracted by stimulating higher returns through the risk-neutral component, thus lifting the tail and preventing flatness. Our empirical study reveals that this mixture parameterization is uniquely effective across a variety of benchmark domains. Specifically, it excels in identifying risk-averse CVaR policies in some Mujoco environments where the traditional CVaR-PG fails to learn a reasonable policy.
[ "['Yudong Luo' 'Yangchen Pan' 'Han Wang' 'Philip Torr' 'Pascal Poupart']" ]
null
null
2403.11079
null
null
http://arxiv.org/pdf/2403.11079v1
2024-03-17T04:20:38Z
2024-03-17T04:20:38Z
Bridging Expert Knowledge with Deep Learning Techniques for Just-In-Time Defect Prediction
Just-In-Time (JIT) defect prediction aims to automatically predict whether a commit is defective or not, and has been widely studied in recent years. In general, most studies can be classified into two categories: 1) simple models using traditional machine learning classifiers with hand-crafted features, and 2) complex models using deep learning techniques to automatically extract features from commit contents. Hand-crafted features used by simple models are based on expert knowledge but may not fully represent the semantic meaning of the commits. On the other hand, deep learning-based features used by complex models represent the semantic meaning of commits but may not reflect useful expert knowledge. Simple models and complex models seem complementary to each other to some extent. To utilize the advantages of both simple and complex models, we propose a model fusion framework that adopts both early fusion at the feature level and late fusion at the decision level. We propose SimCom++ by adopting the best early and late fusion strategies. The experimental results show that SimCom++ can significantly outperform the baselines by 5.7--26.9%. In addition, our experimental results confirm that the simple model and complex model are complementary to each other.
[ "['Xin Zhou' 'DongGyun Han' 'David Lo']" ]
null
null
2403.11082
null
null
http://arxiv.org/pdf/2403.11082v1
2024-03-17T04:29:45Z
2024-03-17T04:29:45Z
RobustSentEmbed: Robust Sentence Embeddings Using Adversarial Self-Supervised Contrastive Learning
Pre-trained language models (PLMs) have consistently demonstrated outstanding performance across a diverse spectrum of natural language processing tasks. Nevertheless, despite their success with unseen data, current PLM-based representations often exhibit poor robustness in adversarial settings. In this paper, we introduce RobustSentEmbed, a self-supervised sentence embedding framework designed to improve both generalization and robustness in diverse text representation tasks and against a diverse set of adversarial attacks. Through the generation of high-risk adversarial perturbations and their utilization in a novel objective function, RobustSentEmbed adeptly learns high-quality and robust sentence embeddings. Our experiments confirm the superiority of RobustSentEmbed over state-of-the-art representations. Specifically, our framework achieves a significant reduction in the success rate of various adversarial attacks, notably reducing the BERTAttack success rate by almost half (from 75.51% to 38.81%). The framework also yields improvements of 1.59% and 0.23% in semantic textual similarity tasks and various transfer tasks, respectively.
[ "['Javad Rafiei Asl' 'Prajwal Panzade' 'Eduardo Blanco' 'Daniel Takabi'\n 'Zhipeng Cai']" ]
null
null
2403.11087
null
null
http://arxiv.org/pdf/2403.11087v2
2024-03-19T13:37:36Z
2024-03-17T04:42:41Z
Incorporating Higher-order Structural Information for Graph Clustering
Clustering holds profound significance in data mining. In recent years, graph convolutional network (GCN) has emerged as a powerful tool for deep clustering, integrating both graph structural information and node attributes. However, most existing methods ignore the higher-order structural information of the graph. Evidently, nodes within the same cluster can establish distant connections. Besides, recent deep clustering methods usually apply a self-supervised module to monitor the training process of their model, focusing solely on node attributes without paying attention to graph structure. In this paper, we propose a novel graph clustering network to make full use of graph structural information. To capture the higher-order structural information, we design a graph mutual infomax module, effectively maximizing mutual information between graph-level and node-level representations, and employ a trinary self-supervised module that includes modularity as a structural constraint. Our proposed model outperforms many state-of-the-art methods on various datasets, demonstrating its superiority.
[ "['Qiankun Li' 'Haobing Liu' 'Ruobing Jiang' 'Tingting Wang']" ]
null
null
2403.11090
null
null
http://arxiv.org/pdf/2403.11090v1
2024-03-17T04:59:30Z
2024-03-17T04:59:30Z
Brain-on-Switch: Towards Advanced Intelligent Network Data Plane via NN-Driven Traffic Analysis at Line-Speed
The emerging programmable networks sparked significant research on Intelligent Network Data Plane (INDP), which achieves learning-based traffic analysis at line-speed. Prior art in INDP focuses on deploying tree/forest models on the data plane. We observe a fundamental limitation in tree-based INDP approaches: although it is possible to represent even larger tree/forest tables on the data plane, the flow features that are computable on the data plane are fundamentally limited by hardware constraints. In this paper, we present BoS to push the boundaries of INDP by enabling Neural Network (NN) driven traffic analysis at line-speed. Many types of NNs (such as Recurrent Neural Network (RNN), and transformers) that are designed to work with sequential data have advantages over tree-based models, because they can take raw network data as input without complex feature computations on the fly. However, the challenge is significant: the recurrent computation scheme used in RNN inference is fundamentally different from the match-action paradigm used on the network data plane. BoS addresses this challenge by (i) designing a novel data plane friendly RNN architecture that can execute unlimited RNN time steps with limited data plane stages, effectively achieving line-speed RNN inference; and (ii) complementing the on-switch RNN model with an off-switch transformer-based traffic analysis module to further boost the overall performance. We implement a prototype of BoS using a P4 programmable switch as our data plane, and extensively evaluate it over multiple traffic analysis tasks. The results show that BoS outperforms state-of-the-art in both analysis accuracy and scalability.
[ "['Jinzhu Yan' 'Haotian Xu' 'Zhuotao Liu' 'Qi Li' 'Ke Xu' 'Mingwei Xu'\n 'Jianping Wu']" ]
null
null
2403.11093
null
null
http://arxiv.org/pdf/2403.11093v1
2024-03-17T05:07:04Z
2024-03-17T05:07:04Z
Learning-Based Pricing and Matching for Two-Sided Queues
We consider a dynamic system with multiple types of customers and servers. Each type of waiting customer or server joins a separate queue, forming a bipartite graph with customer-side queues and server-side queues. The platform can match the servers and customers if their types are compatible. The matched pairs then leave the system. The platform will charge a customer a price according to their type when they arrive and will pay a server a price according to their type. The arrival rate of each queue is determined by the price according to some unknown demand or supply functions. Our goal is to design pricing and matching algorithms to maximize the profit of the platform with unknown demand and supply functions, while keeping queue lengths of both customers and servers below a predetermined threshold. This system can be used to model two-sided markets such as ride-sharing markets with passengers and drivers. The difficulties of the problem include simultaneous learning and decision making, and the tradeoff between maximizing profit and minimizing queue length. We use a longest-queue-first matching algorithm and propose a learning-based pricing algorithm, which combines gradient-free stochastic projected gradient ascent with bisection search. We prove that our proposed algorithm yields a sublinear regret $\tilde{O}(T^{5/6})$ and queue-length bound $\tilde{O}(T^{2/3})$, where $T$ is the time horizon. We further establish a tradeoff between the regret bound and the queue-length bound: $\tilde{O}(T^{1-\gamma/4})$ versus $\tilde{O}(T^{\gamma})$ for $\gamma \in (0, 2/3]$.
[ "['Zixian Yang' 'Lei Ying']" ]
null
null
2403.11100
null
null
http://arxiv.org/pdf/2403.11100v1
2024-03-17T06:08:08Z
2024-03-17T06:08:08Z
Graph Expansion in Pruned Recurrent Neural Network Layers Preserve Performance
The expansion property of a graph refers to its strong connectivity as well as sparseness. It has been reported that deep neural networks can be pruned to a high degree of sparsity while maintaining their performance. Such pruning is essential for performing real time sequence learning tasks using recurrent neural networks in resource constrained platforms. We prune recurrent networks such as RNNs and LSTMs, maintaining a large spectral gap of the underlying graphs and ensuring their layerwise expansion properties. We also study the time unfolded recurrent network graphs in terms of the properties of their bipartite layers. Experimental results for the benchmark sequence MNIST, CIFAR-10, and Google speech command data show that expander graph properties are key to preserving the classification accuracy of RNNs and LSTMs.
[ "['Suryam Arnav Kalra' 'Arindam Biswas' 'Pabitra Mitra' 'Biswajit Basu']" ]
null
null
2403.11103
null
null
http://arxiv.org/pdf/2403.11103v2
2024-06-09T04:48:35Z
2024-03-17T06:12:43Z
ProgGen: Generating Named Entity Recognition Datasets Step-by-step with Self-Reflexive Large Language Models
Although Large Language Models (LLMs) exhibit remarkable adaptability across domains, these models often fall short in structured knowledge extraction tasks such as named entity recognition (NER). This paper explores an innovative, cost-efficient strategy to harness LLMs with modest NER capabilities for producing superior NER datasets. Our approach diverges from the basic class-conditional prompts by instructing LLMs to self-reflect on the specific domain, thereby generating domain-relevant attributes (such as category and emotions for movie reviews), which are utilized for creating attribute-rich training data. Furthermore, we preemptively generate entity terms and then develop NER context data around these entities, effectively bypassing the LLMs' challenges with complex structures. Our experiments across both general and niche domains reveal significant performance enhancements over conventional data generation methods while being more cost-effective than existing alternatives.
[ "['Yuzhao Heng' 'Chunyuan Deng' 'Yitong Li' 'Yue Yu' 'Yinghao Li'\n 'Rongzhi Zhang' 'Chao Zhang']" ]
null
null
2403.11106
null
null
http://arxiv.org/pdf/2403.11106v1
2024-03-17T06:20:28Z
2024-03-17T06:20:28Z
Self-Supervised Quantization-Aware Knowledge Distillation
Quantization-aware training (QAT) and Knowledge Distillation (KD) are combined to achieve competitive performance in creating low-bit deep learning models. However, existing works applying KD to QAT require tedious hyper-parameter tuning to balance the weights of different loss terms, assume the availability of labeled training data, and require complex, computationally intensive training procedures for good performance. To address these limitations, this paper proposes a novel Self-Supervised Quantization-Aware Knowledge Distillation (SQAKD) framework. SQAKD first unifies the forward and backward dynamics of various quantization functions, making it flexible for incorporating various QAT works. Then it formulates QAT as a co-optimization problem that simultaneously minimizes the KL-Loss between the full-precision and low-bit models for KD and the discretization error for quantization, without supervision from labels. A comprehensive evaluation shows that SQAKD substantially outperforms the state-of-the-art QAT and KD works for a variety of model architectures. Our code is at: https://github.com/kaiqi123/SQAKD.git.
[ "['Kaiqi Zhao' 'Ming Zhao']" ]
null
null
2403.11114
null
null
http://arxiv.org/pdf/2403.11114v1
2024-03-17T06:41:09Z
2024-03-17T06:41:09Z
Phasic Diversity Optimization for Population-Based Reinforcement Learning
Reviewing previous work on diversity in Reinforcement Learning, diversity is often obtained via an augmented loss function, which requires a balance between reward and diversity. Generally, diversity optimization algorithms use Multi-Armed Bandit (MAB) algorithms to select the coefficient in a pre-defined space. However, the dynamic distribution of reward signals for MABs or the conflict between quality and diversity limits the performance of these methods. We introduce the Phasic Diversity Optimization (PDO) algorithm, a Population-Based Training framework that separates reward and diversity training into distinct phases instead of optimizing a multi-objective function. In the auxiliary phase, agents with poor performance diversified via determinants will not replace the better agents in the archive. The decoupling of reward and diversity allows us to use an aggressive diversity optimization in the auxiliary phase without performance degradation. Furthermore, we construct a dogfight scenario for aerial agents to demonstrate the practicality of the PDO algorithm. We introduce two implementations of the PDO archive and conduct tests in the newly proposed adversarial dogfight and MuJoCo simulations. The results show that our proposed algorithm achieves better performance than baselines.
[ "['Jingcheng Jiang' 'Haiyin Piao' 'Yu Fu' 'Yihang Hao' 'Chuanlu Jiang'\n 'Ziqi Wei' 'Xin Yang']" ]
null
null
2403.11125
null
null
http://arxiv.org/pdf/2403.11125v2
2024-04-20T15:07:41Z
2024-03-17T07:17:07Z
Machine learning-based system reliability analysis with Gaussian Process Regression
Machine learning-based reliability analysis methods have shown great advancements in their computational efficiency and accuracy. Recently, many efficient learning strategies have been proposed to enhance computational performance. However, few of them explore the theoretically optimal learning strategy. In this article, we propose several theorems that facilitate such exploration. Specifically, the cases that consider and neglect the correlations among the candidate design samples are well elaborated. Moreover, we prove that the well-known U learning function can be reformulated as the optimal learning function for the case neglecting the Kriging correlation. In addition, the theoretically optimal learning strategy for sequential enrichment of multiple training samples is also mathematically explored through the Bayesian estimate with the corresponding loss functions. Simulation results show that the optimal learning strategy considering the Kriging correlation works better than the one neglecting it and other state-of-the-art learning functions from the literature, in terms of reducing the number of evaluations of the performance function. However, the implementation requires very large computational resources.
[ "['Lisang Zhou' 'Ziqian Luo' 'Xueting Pan']" ]
null
null
2403.11144
null
null
http://arxiv.org/pdf/2403.11144v3
2024-04-27T12:39:09Z
2024-03-17T08:50:44Z
Is Mamba Effective for Time Series Forecasting?
In the realm of time series forecasting (TSF), it is imperative for models to adeptly discern and distill hidden patterns within historical time series data to forecast future states. Transformer-based models exhibit formidable efficacy in TSF, primarily attributed to their advantage in apprehending these patterns. However, the quadratic complexity of the Transformer leads to low computational efficiency and high costs, which somewhat hinders the deployment of the TSF model in real-world scenarios. Recently, Mamba, a selective state space model, has gained traction due to its ability to process dependencies in sequences while maintaining near-linear complexity. For TSF tasks, these characteristics enable Mamba to comprehend hidden patterns as the Transformer does while reducing computational overhead compared to the Transformer. Therefore, we propose a Mamba-based model named Simple-Mamba (S-Mamba) for TSF. Specifically, we tokenize the time points of each variate autonomously via a linear layer. A bidirectional Mamba layer is utilized to extract inter-variate correlations and a Feed-Forward Network is set to learn temporal dependencies. Finally, forecast outcomes are generated through a linear mapping layer. Experiments on thirteen public datasets prove that S-Mamba maintains low computational overhead and achieves leading performance. Furthermore, we conduct extensive experiments to explore Mamba's potential in TSF tasks. Our code is available at https://github.com/wzhwzhwzh0921/S-D-Mamba.
[ "['Zihan Wang' 'Fanheng Kong' 'Shi Feng' 'Ming Wang' 'Xiaocui Yang'\n 'Han Zhao' 'Daling Wang' 'Yifei Zhang']" ]
null
null
2403.11162
null
null
http://arxiv.org/pdf/2403.11162v1
2024-03-17T10:06:38Z
2024-03-17T10:06:38Z
CGI-DM: Digital Copyright Authentication for Diffusion Models via Contrasting Gradient Inversion
Diffusion Models (DMs) have evolved into advanced image generation tools, especially for few-shot generation where a pretrained model is fine-tuned on a small set of images to capture a specific style or object. Despite their success, concerns exist about potential copyright violations stemming from the use of unauthorized data in this process. In response, we present Contrasting Gradient Inversion for Diffusion Models (CGI-DM), a novel method featuring vivid visual representations for digital copyright authentication. Our approach involves removing partial information of an image and recovering missing details by exploiting conceptual differences between the pretrained and fine-tuned models. We formulate the differences as KL divergence between latent variables of the two models when given the same input image, which can be maximized through Monte Carlo sampling and Projected Gradient Descent (PGD). The similarity between original and recovered images serves as a strong indicator of potential infringements. Extensive experiments on the WikiArt and Dreambooth datasets demonstrate the high accuracy of CGI-DM in digital copyright authentication, surpassing alternative validation techniques. Code implementation is available at https://github.com/Nicholas0228/Revelio.
[ "['Xiaoyu Wu' 'Yang Hua' 'Chumeng Liang' 'Jiaru Zhang' 'Hao Wang'\n 'Tao Song' 'Haibing Guan']" ]
null
null
2403.11163
null
null
http://arxiv.org/abs/2403.11163v1
2024-03-17T10:09:54Z
2024-03-17T10:09:54Z
A Selective Review on Statistical Methods for Massive Data Computation: Distributed Computing, Subsampling, and Minibatch Techniques
This paper presents a selective review of statistical computation methods for massive data analysis. A huge number of statistical methods for massive data computation have been rapidly developed in the past decades. In this work, we focus on three categories of statistical computation methods: (1) distributed computing, (2) subsampling methods, and (3) minibatch gradient techniques. The first class of literature is about distributed computing and focuses on the situation where the dataset size is too huge to be comfortably handled by one single computer. In this case, a distributed computation system with multiple computers has to be utilized. The second class of literature is about subsampling methods and concerns the situation where the dataset is small enough to be placed on one single computer but too large to be easily processed by its memory as a whole. The last class of literature studies minibatch gradient related optimization techniques, which have been extensively used for optimizing various deep learning models.
[ "['Xuetong Li' 'Yuan Gao' 'Hong Chang' 'Danyang Huang' 'Yingying Ma'\n 'Rui Pan' 'Haobo Qi' 'Feifei Wang' 'Shuyuan Wu' 'Ke Xu' 'Jing Zhou'\n 'Xuening Zhu' 'Yingqiu Zhu' 'Hansheng Wang']" ]
null
null
2403.11166
null
null
http://arxiv.org/pdf/2403.11166v1
2024-03-17T10:26:41Z
2024-03-17T10:26:41Z
Pencil: Private and Extensible Collaborative Learning without the Non-Colluding Assumption
The escalating focus on data privacy poses significant challenges for collaborative neural network training, where data ownership and model training/deployment responsibilities reside with distinct entities. Our community has made substantial contributions to addressing this challenge, proposing various approaches such as federated learning (FL) and privacy-preserving machine learning based on cryptographic constructs like homomorphic encryption (HE) and secure multiparty computation (MPC). However, FL completely overlooks model privacy, and HE has limited extensibility (confined to only one data provider). While the state-of-the-art MPC frameworks provide reasonable throughput and simultaneously ensure model/data privacy, they rely on a critical non-colluding assumption on the computing servers, and relaxing this assumption is still an open problem. In this paper, we present Pencil, the first private training framework for collaborative learning that simultaneously offers data privacy, model privacy, and extensibility to multiple data providers, without relying on the non-colluding assumption. Our fundamental design principle is to construct the n-party collaborative training protocol based on an efficient two-party protocol, and meanwhile ensuring that switching to different data providers during model training introduces no extra cost. We introduce several novel cryptographic protocols to realize this design principle and conduct a rigorous security and privacy analysis. Our comprehensive evaluations of Pencil demonstrate that (i) models trained in plaintext and models trained privately using Pencil exhibit nearly identical test accuracies; (ii) The training overhead of Pencil is greatly reduced: Pencil achieves 10 ~ 260x higher throughput and 2 orders of magnitude less communication than prior art; (iii) Pencil is resilient against both existing and adaptive (white-box) attacks.
[ "['Xuanqi Liu' 'Zhuotao Liu' 'Qi Li' 'Ke Xu' 'Mingwei Xu']" ]
null
null
2403.11173
null
null
http://arxiv.org/pdf/2403.11173v1
2024-03-17T11:19:45Z
2024-03-17T11:19:45Z
Multi-Objective Evolutionary Neural Architecture Search for Recurrent Neural Networks
Artificial neural network (NN) architecture design is a nontrivial and time-consuming task that often requires a high level of human expertise. Neural architecture search (NAS) serves to automate the design of NN architectures and has proven to be successful in automatically finding NN architectures that outperform those manually designed by human experts. NN architecture performance can be quantified based on multiple objectives, which include model accuracy and some NN architecture complexity objectives, among others. The majority of modern NAS methods that consider multiple objectives for NN architecture performance evaluation are concerned with automated feed forward NN architecture design, which leaves multi-objective automated recurrent neural network (RNN) architecture design unexplored. RNNs are important for modeling sequential datasets, and prominent within the natural language processing domain. It is often the case in real world implementations of machine learning and NNs that a reasonable trade-off is accepted for marginally reduced model accuracy in favour of lower computational resources demanded by the model. This paper proposes a multi-objective evolutionary algorithm-based RNN architecture search method. The proposed method relies on approximate network morphisms for RNN architecture complexity optimisation during evolution. The results show that the proposed method is capable of finding novel RNN architectures with comparable performance to state-of-the-art manually designed RNN architectures, but with reduced computational demand.
[ "['Reinhard Booysen' 'Anna Sergeevna Bosman']" ]
null
null
2403.11175
null
null
http://arxiv.org/pdf/2403.11175v1
2024-03-17T11:23:51Z
2024-03-17T11:23:51Z
Prior-dependent analysis of posterior sampling reinforcement learning with function approximation
This work advances randomized exploration in reinforcement learning (RL) with function approximation modeled by linear mixture MDPs. We establish the first prior-dependent Bayesian regret bound for RL with function approximation; and refine the Bayesian regret analysis for posterior sampling reinforcement learning (PSRL), presenting an upper bound of $\mathcal{O}(d\sqrt{H^3 T \log T})$, where $d$ represents the dimensionality of the transition kernel, $H$ the planning horizon, and $T$ the total number of interactions. This signifies a methodological enhancement by optimizing the $\mathcal{O}(\sqrt{\log T})$ factor over the previous benchmark (Osband and Van Roy, 2014) specified to linear mixture MDPs. Our approach, leveraging a value-targeted model learning perspective, introduces a decoupling argument and a variance reduction technique, moving beyond traditional analyses reliant on confidence sets and concentration inequalities to formalize Bayesian regret bounds more effectively.
[ "['Yingru Li' 'Zhi-Quan Luo']" ]
null
null
2403.11180
null
null
http://arxiv.org/pdf/2403.11180v1
2024-03-17T11:49:57Z
2024-03-17T11:49:57Z
usfAD Based Effective Unknown Attack Detection Focused IDS Framework
The rapid expansion of varied network systems, including the Internet of Things (IoT) and Industrial Internet of Things (IIoT), has led to an increasing range of cyber threats. Ensuring robust protection against these threats necessitates the implementation of an effective Intrusion Detection System (IDS). For more than a decade, researchers have delved into supervised machine learning techniques to develop IDS to classify normal and attack traffic. However, building effective IDS models using supervised learning requires a substantial number of benign and attack samples. Collecting a sufficient number of attack samples from real-life scenarios is not feasible, since cyber attacks occur only sporadically. Further, an IDS trained and tested on known datasets fails to detect zero-day or unknown attacks due to the swift evolution of attack patterns. To address this challenge, we put forth two strategies for semi-supervised learning based IDS where training samples of attacks are not required: 1) training a supervised machine learning model using randomly and uniformly dispersed synthetic attack samples; 2) building a One Class Classification (OCC) model that is trained exclusively on benign network traffic. We have implemented both approaches and compared their performances using 10 recent benchmark IDS datasets. Our findings demonstrate that the OCC model based on the state-of-the-art anomaly detection technique called usfAD significantly outperforms conventional supervised classification and other OCC-based techniques when trained and tested considering real-life scenarios, particularly in detecting previously unseen attacks.
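A minimal sketch of the OCC strategy described above, with scikit-learn's IsolationForest as a stand-in for usfAD (which is not a standard library component); the flow features are synthetic.

```python
# A minimal sketch of the OCC idea: train an anomaly detector only on benign
# traffic and flag outliers as attacks. IsolationForest stands in for usfAD;
# the feature columns are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
benign = rng.normal(0.0, 1.0, size=(5000, 10))    # benign flow features
attacks = rng.normal(4.0, 1.0, size=(50, 10))     # unseen attack traffic

detector = IsolationForest(contamination=0.01, random_state=0).fit(benign)

# predict() returns +1 for inliers (benign-like) and -1 for outliers
preds = detector.predict(np.vstack([benign[:100], attacks]))
print("flagged as attack:", int((preds == -1).sum()))
```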
[ "['Md. Ashraf Uddin' 'Sunil Aryal' 'Mohamed Reda Bouadjenek'\n 'Muna Al-Hawawreh' 'Md. Alamin Talukder']" ]
null
null
2403.11199
null
null
http://arxiv.org/pdf/2403.11199v1
2024-03-17T12:55:23Z
2024-03-17T12:55:23Z
Graph Unitary Message Passing
The message-passing mechanism contributes to the success of GNNs in various applications, but it also brings the oversquashing problem. Recent works combat oversquashing by improving the graph spectrum with rewiring techniques, but these disrupt the structural bias in graphs and achieve only limited improvement as quantified by oversquashing measures. Motivated by unitary RNNs, we propose Graph Unitary Message Passing (GUMP) to alleviate oversquashing in GNNs by applying a unitary adjacency matrix for message passing. To design GUMP, a transformation is first proposed to make general graphs have a unitary adjacency matrix while keeping their structural bias. Then, the unitary adjacency matrix is obtained with a unitary projection algorithm, which is implemented by utilizing the intrinsic structure of the unitary adjacency matrix and allows GUMP to be permutation-equivariant. Experimental results show the effectiveness of GUMP in improving performance on various graph learning tasks.
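A minimal sketch of projecting a (real) adjacency matrix onto the nearest orthogonal/unitary matrix via the polar decomposition; this is a generic construction, not GUMP's structure-preserving algorithm.

```python
# A minimal sketch of unitary projection: the nearest unitary matrix to A in
# Frobenius norm is U @ Vh from the SVD A = U S Vh (polar decomposition).
# For real matrices, "unitary" means orthogonal.
import numpy as np

def nearest_unitary(A: np.ndarray) -> np.ndarray:
    U, _, Vh = np.linalg.svd(A)
    return U @ Vh

# adjacency matrix of a triangle graph (illustrative)
A = np.array([[0., 1., 1.],
              [1., 0., 1.],
              [1., 1., 0.]])
Q = nearest_unitary(A)
print(np.allclose(Q @ Q.conj().T, np.eye(3)))  # True: Q is unitary
```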
[ "['Haiquan Qiu' 'Yatao Bian' 'Quanming Yao']" ]
null
null
2403.11204
null
null
http://arxiv.org/pdf/2403.11204v1
2024-03-17T13:06:29Z
2024-03-17T13:06:29Z
Partitioned Neural Network Training via Synthetic Intermediate Labels
The proliferation of extensive neural network architectures, particularly deep learning models, presents a challenge in terms of resource-intensive training. GPU memory constraints have become a notable bottleneck in training such sizable models. Existing strategies, including data parallelism, model parallelism, pipeline parallelism, and fully sharded data parallelism, offer partial solutions. Model parallelism, in particular, enables the distribution of the entire model across multiple GPUs, yet the ensuing data communication between these partitions slows down training. Additionally, the substantial memory overhead required to store auxiliary parameters on each GPU compounds computational demands. Instead of using the entire model for training, this study advocates partitioning the model across GPUs and generating synthetic intermediate labels to train individual segments. These labels, produced through a random process, mitigate memory overhead and computational load. This approach results in a more efficient training process that minimizes data communication while maintaining model accuracy. To validate this method, a 6-layer fully connected neural network is partitioned into two parts and its performance is assessed on the extended MNIST dataset. Experimental results indicate that the proposed approach achieves similar testing accuracies to conventional training methods, while significantly reducing memory and computational requirements. This work contributes to mitigating the resource-intensive nature of training large neural networks, paving the way for more efficient deep learning model development.
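A minimal sketch of the core idea under stated assumptions: the network is split into two segments, the first regresses onto randomly generated intermediate labels, and the second learns to map those labels to class labels. Dimensions, optimizers, and the label-generation process are illustrative, not the paper's exact recipe.

```python
# A minimal sketch of partitioned training with synthetic intermediate labels:
# neither segment backpropagates through the other, so they can live on
# separate devices with no activation/gradient exchange.
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.randn(256, 784)               # input batch (MNIST-like)
y = torch.randint(0, 10, (256,))        # true class labels
z_synth = torch.randn(256, 64)          # randomly generated intermediate labels

seg1 = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 64))
seg2 = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 10))
opt1 = torch.optim.Adam(seg1.parameters(), lr=1e-3)
opt2 = torch.optim.Adam(seg2.parameters(), lr=1e-3)

# Segment 1 regresses onto the synthetic targets (no backprop through seg2).
loss1 = nn.functional.mse_loss(seg1(x), z_synth)
opt1.zero_grad()
loss1.backward()
opt1.step()

# Segment 2 trains from the synthetic targets to the true labels.
loss2 = nn.functional.cross_entropy(seg2(z_synth), y)
opt2.zero_grad()
loss2.backward()
opt2.step()
```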
[ "['Cevat Volkan Karadağ' 'Nezih Topaloğlu']" ]
null
null
2403.11206
null
null
http://arxiv.org/pdf/2403.11206v1
2024-03-17T13:14:09Z
2024-03-17T13:14:09Z
CBR -- Boosting Adaptive Classification By Retrieval of Encrypted Network Traffic with Out-of-distribution
Encrypted network traffic classification tackles the problem from different approaches and with different goals. A common approach is to use machine learning or deep learning-based solutions on a fixed number of classes, leading to misclassification when an unknown class is given as input. One solution for handling unknown classes is to retrain the model; however, retraining models every time they become obsolete is both resource- and time-consuming. Therefore, there is a growing need to allow classification models to detect and adapt to new classes dynamically, without retraining, instead detecting new classes using few-shot learning [1]. In this paper, we introduce Adaptive Classification By Retrieval (CBR), a novel approach for encrypted network traffic classification. Our approach builds on an ANN-based method, which allows us to effectively identify new and existing classes without retraining the model. The approach is simple yet effective, and achieved similar results to RF, with up to a 5% difference (usually less than that) in the classification tasks, while exhibiting only a slight decrease on new samples (from new classes) without retraining. To summarize, the new method performs real-time classification and can classify new classes without retraining. Furthermore, our solution can be used as a complementary solution alongside RF or any other machine/deep learning classification method, as an aggregated solution.
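A minimal sketch of classification-by-retrieval with a plain nearest-neighbour index standing in for the paper's method: new classes are supported by inserting a few labeled embeddings, with no retraining. The embeddings and class ids here are synthetic.

```python
# A minimal sketch of retrieval-based classification: labeled flow embeddings
# sit in an index; a query is classified by majority vote over its neighbours,
# and a new class is added by simply appending its few-shot examples.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
index_vecs = rng.normal(size=(300, 32))        # embedded known traffic
index_labels = rng.integers(0, 5, size=300)    # 5 known classes

def classify(q, vecs, labels, k=5):
    nn = NearestNeighbors(n_neighbors=k).fit(vecs)
    _, idx = nn.kneighbors(q.reshape(1, -1))
    return np.bincount(labels[idx[0]]).argmax()  # majority vote

# Adding a new class = appending a few labeled embeddings to the index.
new_class_vecs = rng.normal(loc=3.0, size=(5, 32))
index_vecs = np.vstack([index_vecs, new_class_vecs])
index_labels = np.concatenate([index_labels, np.full(5, 5)])  # new class id 5

print(classify(new_class_vecs[0] + 0.1, index_vecs, index_labels))  # -> 5
```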
[ "['Amir Lukach' 'Ran Dubin' 'Amit Dvir' 'Chen Hajaj']" ]
null
null
2403.11220
null
null
http://arxiv.org/pdf/2403.11220v3
2024-03-22T11:42:40Z
2024-03-17T13:43:10Z
CPA-Enhancer: Chain-of-Thought Prompted Adaptive Enhancer for Object Detection under Unknown Degradations
Object detection methods under known single degradations have been extensively investigated. However, existing approaches require prior knowledge of the degradation type and train a separate model for each, limiting their practical applications in unpredictable environments. To address this challenge, we propose a chain-of-thought (CoT) prompted adaptive enhancer, CPA-Enhancer, for object detection under unknown degradations. Specifically, CPA-Enhancer progressively adapts its enhancement strategy under the step-by-step guidance of CoT prompts, which encode degradation-related information. To the best of our knowledge, this is the first work that exploits CoT prompting for object detection tasks. Overall, CPA-Enhancer is a plug-and-play enhancement model that can be integrated into any generic detector to achieve substantial gains on degraded images, without prior knowledge of the degradation type. Experimental results demonstrate that CPA-Enhancer not only sets a new state of the art for object detection but also boosts the performance of other downstream vision tasks under unknown degradations.
[ "['Yuwei Zhang' 'Yan Wu' 'Yanming Liu' 'Xinyue Peng']" ]
null
null
2403.11227
null
null
http://arxiv.org/pdf/2403.11227v1
2024-03-17T14:21:42Z
2024-03-17T14:21:42Z
Cheap Ways of Extracting Clinical Markers from Texts
This paper describes the work of the UniBuc Archaeology team for CLPsych's 2024 Shared Task, which involved finding evidence within the text supporting the assigned suicide risk level. Two types of evidence were required: highlights (extracting relevant spans within the text) and summaries (aggregating evidence into a synthesis). Our work focuses on evaluating Large Language Models (LLMs) against an alternative method that is much more memory- and resource-efficient. The first approach employs a good old-fashioned machine learning (GOML) pipeline consisting of a tf-idf vectorizer with a logistic regression classifier, whose representative features are used to extract relevant highlights. The second, more resource-intensive approach uses an LLM for generating the summaries and is guided by chain-of-thought prompting to provide sequences of text indicating clinical markers.
[ "['Anastasia Sandu' 'Teodor Mihailescu' 'Sergiu Nisioi']" ]
null
null
2403.11230
null
null
http://arxiv.org/pdf/2403.11230v1
2024-03-17T14:34:51Z
2024-03-17T14:34:51Z
Simple 2D Convolutional Neural Network-based Approach for COVID-19 Detection
This study explores the use of deep learning techniques for analyzing lung Computed Tomography (CT) images. Classic deep learning approaches face challenges with varying slice counts and resolutions in CT images, a diversity arising from the utilization of assorted scanning equipment. Typically, predictions are made on single slices which are then combined for a comprehensive outcome. Yet, this method does not incorporate learning features specific to each slice, leading to a compromise in effectiveness. To address these challenges, we propose an advanced Spatial-Slice Feature Learning (SSFL++) framework specifically tailored for CT scans. It aims to filter out out-of-distribution (OOD) data within the entire CT scan, allowing us to select essential spatial-slice features for analysis by reducing data redundancy by 70%. Additionally, we introduce a Kernel-Density-based slice Sampling (KDS) method to enhance stability during training and inference phases, thereby accelerating convergence and enhancing overall performance. Remarkably, our experiments reveal that our model achieves promising results with a simple EfficientNet-2D (E2D) model. The effectiveness of our approach is confirmed on the COVID-19-CT-DB datasets provided by the DEF-AI-MIA workshop.
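One plausible reading of the kernel-density-based sampling step, sketched under the assumption that each slice carries a scalar relevance score; the scoring itself is a placeholder, not the paper's SSFL++ feature selection.

```python
# A minimal sketch of kernel-density-based slice sampling: fit a KDE over
# per-slice scores and sample slice indices in proportion to the estimated
# density, keeping a fixed-size subset of slices per CT scan.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
scores = rng.beta(2, 5, size=200)      # placeholder per-slice relevance scores

kde = gaussian_kde(scores)
weights = kde(scores)                  # density evaluated at each slice's score
weights = weights / weights.sum()

n_keep = 32
sampled = rng.choice(len(scores), size=n_keep, replace=False, p=weights)
print(sorted(sampled)[:10])            # indices of retained slices
```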
[ "['Chih-Chung Hsu' 'Chia-Ming Lee' 'Yang Fan Chiang' 'Yi-Shiuan Chou'\n 'Chih-Yu Jiang' 'Shen-Chieh Tai' 'Chi-Han Tsai']" ]
null
null
2403.11259
null
null
http://arxiv.org/pdf/2403.11259v2
2024-03-23T08:27:02Z
2024-03-17T16:23:00Z
A learning-based solution approach to the application placement problem in mobile edge computing under uncertainty
Placing applications in mobile edge computing servers presents a complex challenge involving many servers, users, and their requests. Existing algorithms take a long time to solve high-dimensional problems with significant uncertainty scenarios. Therefore, an efficient approach is required to maximize the quality of service while considering all technical constraints. One such approach is machine learning, which emulates optimal solutions for application placement in edge servers. Machine learning models are expected to learn how to allocate user requests to servers based on the spatial positions of users and servers. In this study, the problem is formulated as a two-stage stochastic program. A sufficient number of training records is generated by varying parameters such as user locations and their request rates and solving the optimization model. Then, based on the distance features of each user from the available servers and their request rates, machine learning models generate decision variables for the first stage of the stochastic optimization model, namely the user-to-server request allocation, and are employed as independent decision agents that reliably mimic the optimization model. Support Vector Machines (SVM) and Multi-layer Perceptron (MLP) models are used in this research to derive practical decisions from the stochastic optimization models. Each model has shown an execution effectiveness of over 80%. This research aims to provide a more efficient approach for tackling high-dimensional problems and scenarios with uncertainties in mobile edge computing by leveraging machine learning models for optimal decision-making in request allocation to edge servers. These results suggest that machine-learning models can significantly improve solution times compared to conventional approaches.
[ "['Taha-Hossein Hejazi' 'Zahra Ghadimkhani' 'Arezoo Borji']" ]
null
null
2403.11261
null
null
http://arxiv.org/pdf/2403.11261v1
2024-03-17T16:24:07Z
2024-03-17T16:24:07Z
A Lie Group Approach to Riemannian Batch Normalization
Manifold-valued measurements exist in numerous applications within computer vision and machine learning. Recent studies have extended Deep Neural Networks (DNNs) to manifolds, and concomitantly, normalization techniques have also been adapted to several manifolds, referred to as Riemannian normalization. Nonetheless, most of the existing Riemannian normalization methods have been derived in an ad hoc manner and only apply to specific manifolds. This paper establishes a unified framework for Riemannian Batch Normalization (RBN) techniques on Lie groups. Our framework offers the theoretical guarantee of controlling both the Riemannian mean and variance. Empirically, we focus on Symmetric Positive Definite (SPD) manifolds, which possess three distinct types of Lie group structures. Using the deformation concept, we generalize the existing Lie groups on SPD manifolds into three families of parameterized Lie groups. Specific normalization layers induced by these Lie groups are then proposed for SPD neural networks. We demonstrate the effectiveness of our approach through three sets of experiments: radar recognition, human action recognition, and electroencephalography (EEG) classification. The code is available at https://github.com/GitZH-Chen/LieBN.git.
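A minimal sketch of Riemannian batch centering for SPD matrices under the Log-Euclidean metric, one concrete choice rather than the paper's full family of parameterized Lie group structures.

```python
# A minimal sketch of SPD batch centering under the Log-Euclidean metric:
# map each SPD matrix to the tangent space with the matrix logarithm,
# subtract the batch mean there, and map back with the matrix exponential.
import numpy as np

def sym_logm(m):
    w, v = np.linalg.eigh(m)
    return (v * np.log(w)) @ v.T       # V diag(log w) V^T

def sym_expm(m):
    w, v = np.linalg.eigh(m)
    return (v * np.exp(w)) @ v.T       # V diag(exp w) V^T

def spd_batch_center(mats):
    logs = [sym_logm(m) for m in mats]             # tangent-space coordinates
    mean_log = sum(logs) / len(logs)
    return [sym_expm(l - mean_log) for l in logs]  # recentred SPD matrices

rng = np.random.default_rng(0)
batch = []
for _ in range(4):
    a = rng.normal(size=(3, 3))
    batch.append(a @ a.T + 3.0 * np.eye(3))        # random SPD matrices

centred = spd_batch_center(batch)
print(np.allclose(sum(sym_logm(m) for m in centred), 0, atol=1e-6))  # True
```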
[ "['Ziheng Chen' 'Yue Song' 'Yunmei Liu' 'Nicu Sebe']" ]
null
null
2403.11262
null
null
http://arxiv.org/pdf/2403.11262v1
2024-03-17T16:24:29Z
2024-03-17T16:24:29Z
Understanding Diffusion Models by Feynman's Path Integral
Score-based diffusion models have proven effective in image generation and have gained widespread usage; however, the underlying factors contributing to the performance disparity between stochastic and deterministic (i.e., probability flow ODE) sampling schemes remain unclear. We introduce a novel formulation of diffusion models using Feynman's path integral, a formulation originally developed for quantum physics. We find that this formulation provides a comprehensive description of score-based generative models, and we demonstrate the derivation of backward stochastic differential equations and loss functions. The formulation accommodates an interpolating parameter connecting stochastic and deterministic sampling schemes, and we identify this parameter as a counterpart of Planck's constant in quantum physics. This analogy enables us to apply the Wentzel-Kramers-Brillouin (WKB) expansion, a well-established technique in quantum physics, to evaluate the negative log-likelihood and assess the performance disparity between stochastic and deterministic sampling schemes.
[ "['Yuji Hirono' 'Akinori Tanaka' 'Kenji Fukushima']" ]
null
null
2403.11265
null
null
http://arxiv.org/pdf/2403.11265v1
2024-03-17T16:36:26Z
2024-03-17T16:36:26Z
Forging the Forger: An Attempt to Improve Authorship Verification via Data Augmentation
Authorship Verification (AV) is a text classification task concerned with inferring whether a candidate text has been written by one specific author or by someone else. It has been shown that many AV systems are vulnerable to adversarial attacks, where a malicious author actively tries to fool the classifier by either concealing their writing style or by imitating the style of another author. In this paper, we investigate the potential benefits of augmenting the classifier training set with (negative) synthetic examples. These synthetic examples are generated to imitate the style of the author of interest. We analyze the improvements in classifier prediction that this augmentation brings in the task of AV in an adversarial setting. In particular, we experiment with three different generator architectures (one based on Recurrent Neural Networks, another based on small-scale transformers, and another based on the popular GPT model) and with two training strategies (one inspired by standard Language Models, and another inspired by Wasserstein Generative Adversarial Networks). We evaluate our hypothesis on five datasets (three of which have been specifically collected to represent an adversarial setting) and using two learning algorithms for the AV classifier (Support Vector Machines and Convolutional Neural Networks). This experimentation has yielded negative results, revealing that, although our methodology proves effective in many adversarial settings, its benefits are too sporadic for practical application.
[ "['Silvia Corbara' 'Alejandro Moreo']" ]
null
null
2403.11291
null
null
http://arxiv.org/abs/2403.11291v1
2024-03-17T18:06:06Z
2024-03-17T18:06:06Z
Advanced Knowledge Extraction of Physical Design Drawings, Translation and conversion to CAD formats using Deep Learning
The maintenance, archiving, and usage of design drawings in physical form is cumbersome across industries over long periods. It is hard to extract information by simply scanning drawing sheets. Converting them to digital formats such as Computer-Aided Design (CAD), with the needed knowledge extraction, can solve this problem. The conversion of these machine drawings to their digital form is a crucial challenge that requires advanced techniques. This research proposes an innovative methodology utilizing deep learning methods. The approach employs object detection models, such as YOLOv7 and Faster R-CNN, to detect the physical drawing objects present in the images, followed by edge detection algorithms such as the Canny filter to extract and refine the identified lines from the drawing region, and curve detection techniques to detect circles. Ornaments (complex shapes) within the drawings are also extracted. To ensure comprehensive conversion, an Optical Character Recognition (OCR) tool is integrated to identify and extract the text elements from the drawings. The extracted data, which includes the lines, shapes, and text, is consolidated and stored in a structured comma-separated values (.csv) file format. The accuracy and efficiency of the conversion are evaluated. Through this, conversion can be automated to help organizations enhance their productivity, facilitate seamless collaboration, and preserve valuable design information in an easily accessible digital format. Overall, this study contributes to the advancement of CAD conversion, providing accurate results from the translation process. Future research can focus on handling diverse drawing types and on enhanced accuracy in shape and line detection and extraction.
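A minimal sketch of two stages of such a pipeline (Canny edge extraction and OCR), assuming OpenCV and pytesseract; the file path is a placeholder, and the object-detection and CAD-export stages are omitted.

```python
# A minimal sketch of the edge/circle/text extraction stages of a drawing
# digitization pipeline; "drawing_sheet.png" is a placeholder path.
import cv2
import pytesseract

img = cv2.imread("drawing_sheet.png")                 # placeholder input
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# line/edge map via the Canny filter
edges = cv2.Canny(gray, threshold1=50, threshold2=150)

# circle detection via the Hough transform (parameters are illustrative)
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2,
                           minDist=20, param1=150, param2=40)

# dimension labels and annotation text via Tesseract OCR
text = pytesseract.image_to_string(gray)
print(text)
```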
[ "['Jesher Joshua M' 'Ragav V' 'Syed Ibrahim S P']" ]
null
null
2403.11292
null
null
http://arxiv.org/pdf/2403.11292v1
2024-03-17T18:08:22Z
2024-03-17T18:08:22Z
Multi-Relational Graph Neural Network for Out-of-Domain Link Prediction
Dynamic multi-relational graphs are an expressive relational representation for data enclosing entities and relations of different types, where relationships are allowed to vary in time. Addressing predictive tasks over such data requires the ability to find structure embeddings that capture the diversity of the relationships involved, as well as their dynamic evolution. In this work, we establish a novel class of challenging tasks for dynamic multi-relational graphs involving out-of-domain link prediction, where the relationship being predicted is not available in the input graph. We then introduce a novel Graph Neural Network model, named GOOD, designed specifically to tackle this out-of-domain generalization problem. GOOD introduces a novel design concept for multi-relation embedding aggregation, based on the idea that a representation is good when it is possible to disentangle the mixing proportions of the different relational embeddings that produced it. We also propose five benchmarks based on two retail domains, where we show that GOOD can effectively generalize predictions to unseen relationship types and achieve state-of-the-art results. Most importantly, we provide insights into problems where out-of-domain prediction might be preferred to an in-domain formulation, that is, where the relationship to be predicted has very few positive examples.
[ "['Asma Sattar' 'Georgios Deligiorgis' 'Marco Trincavelli' 'Davide Bacciu']" ]
null
null
2403.11297
null
null
http://arxiv.org/pdf/2403.11297v1
2024-03-17T18:39:14Z
2024-03-17T18:39:14Z
A Modified Word Saliency-Based Adversarial Attack on Text Classification Models
This paper introduces a novel adversarial attack method targeting text classification models, termed the Modified Word Saliency-based Adversarial Attack (MWSAA). The technique builds upon the concept of word saliency to strategically perturb input texts, aiming to mislead classification models while preserving semantic coherence. By refining the traditional adversarial attack approach, MWSAA significantly enhances its efficacy in evading detection by classification systems. The methodology involves first identifying salient words in the input text through a saliency estimation process, which prioritizes words most influential to the model's decision-making process. Subsequently, these salient words are subjected to carefully crafted modifications, guided by semantic similarity metrics to ensure that the altered text remains coherent and retains its original meaning. Empirical evaluations conducted on diverse text classification datasets demonstrate the effectiveness of the proposed method in generating adversarial examples capable of successfully deceiving state-of-the-art classification models. Comparative analyses with existing adversarial attack techniques further indicate the superiority of the proposed approach in terms of both attack success rate and preservation of text coherence.
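A minimal sketch of the saliency-then-substitute loop, with a toy stand-in classifier and a hypothetical synonym table in place of a real model and semantic-similarity-constrained candidates.

```python
# A minimal sketch of saliency-guided word substitution: rank words by the
# drop in model confidence when each is masked, then replace the most salient
# ones until the label flips. `model_prob` and `synonyms` are placeholders.
def model_prob(text: str) -> float:
    # toy stand-in classifier: positive iff the word "excellent" appears
    return 0.9 if "excellent" in text else 0.4

def saliency(words, i):
    masked = " ".join(w for j, w in enumerate(words) if j != i)
    return model_prob(" ".join(words)) - model_prob(masked)

synonyms = {"excellent": "first-rate"}   # hypothetical semantic neighbours

def attack(text):
    words = text.split()
    order = sorted(range(len(words)), key=lambda i: saliency(words, i),
                   reverse=True)
    for i in order:                       # perturb the most salient words first
        if words[i] in synonyms:
            words[i] = synonyms[words[i]]
            if model_prob(" ".join(words)) < 0.5:   # prediction flipped
                break
    return " ".join(words)

print(attack("the film was excellent overall"))  # -> "... first-rate overall"
```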
[ "['Hetvi Waghela' 'Sneha Rakshit' 'Jaydip Sen']" ]
null
null
2403.11299
null
null
http://arxiv.org/pdf/2403.11299v2
2024-07-15T17:37:11Z
2024-03-17T18:42:38Z
SQ-LLaVA: Self-Questioning for Large Vision-Language Assistant
Recent advances in vision-language models have shown notable generalization in broad tasks through visual instruction tuning. However, bridging the gap between the pre-trained vision encoder and the large language models (LLMs) becomes the whole network's bottleneck. To improve cross-modality alignment, existing works usually consider more visual instruction data covering a broader range of vision tasks to fine-tune the model for question-answering, which, however, is costly to obtain and does not thoroughly explore the rich contextual information contained in images. This paper first attempts to harness the overlooked context within visual instruction data, training the model in a self-supervised fashion to learn how to ask high-quality questions. In this way, we introduce a novel framework named SQ-LLaVA: Self-Questioning for Large Vision-Language Assistant. SQ-LLaVA exhibits proficiency in generating flexible and meaningful image-related questions while analyzing the visual clues and prior language knowledge, signifying an advanced level of generalized visual understanding. Moreover, fine-tuning SQ-LLaVA on higher-quality instruction data shows a performance improvement compared with traditional visual-instruction tuning methods. This improvement highlights the efficacy of self-questioning techniques in achieving a deeper and more nuanced comprehension of visual content across various contexts.
[ "['Guohao Sun' 'Can Qin' 'Jiamian Wang' 'Zeyuan Chen' 'Ran Xu'\n 'Zhiqiang Tao']" ]
null
null
2403.11314
null
null
http://arxiv.org/pdf/2403.11314v1
2024-03-17T19:32:12Z
2024-03-17T19:32:12Z
Reasoning in Transformers -- Mitigating Spurious Correlations and Reasoning Shortcuts
Transformer language models are neural networks used for a wide variety of tasks concerning natural language, including some that also require logical reasoning. However, a transformer model may easily learn spurious patterns in the data, short-circuiting actual reasoning. In this paper we investigate to what extent transformers can be trained to a) approximate reasoning in propositional logic while b) avoiding known reasoning shortcuts via spurious correlations in the training data. To do so, we use a dataset with known spurious correlation between truth and e.g. the number of rules in the problem. We augment the data with proofs, and train two models: a generative transformer, WP-BART, trained on problems and their whole proofs, and a neuro-symbolic model, SIP-BART, trained on individual proof steps and combining the generative transformer model BART with a symbolic proof checker. We find that SIP-BART succeeds in avoiding reasoning shortcuts, while WP-BART does not. For SIP-BART, we then identify a few remaining reasoning errors, not previously described in the literature, arising from using a pre-trained language model. These are qualitatively analysed to create a taxonomy of four different types of additional pitfalls.
[ "['Daniel Enström' 'Viktor Kjellberg' 'Moa Johansson']" ]
null
null
2403.11330
null
null
http://arxiv.org/pdf/2403.11330v2
2024-04-23T03:17:07Z
2024-03-17T20:21:26Z
Improving Dialogue Agents by Decomposing One Global Explicit Annotation with Local Implicit Multimodal Feedback
We describe an approach for aligning an LLM-based dialogue agent based on global (i.e., dialogue-level) rewards, while also taking into account naturally occurring multimodal signals. At a high level, our approach (dubbed GELI) learns a local, turn-level reward model by decomposing the human-provided Global Explicit (GE) session-level reward, using Local Implicit (LI) multimodal reward signals to crossmodally shape the reward decomposition step. This decomposed reward model is then used as part of the standard RLHF pipeline to improve an LLM-based dialogue agent. We run quantitative and qualitative human studies to evaluate the performance of our GELI approach, and find that it shows consistent improvements across various conversational metrics compared to baseline methods.
[ "['Dong Won Lee' 'Hae Won Park' 'Yoon Kim' 'Cynthia Breazeal'\n 'Louis-Philippe Morency']" ]
null
null
2403.11331
null
null
http://arxiv.org/pdf/2403.11331v1
2024-03-17T20:23:06Z
2024-03-17T20:23:06Z
Potential of Domain Adaptation in Machine Learning in Ecology and Hydrology to Improve Model Extrapolability
Due to the heterogeneity of the global distribution of ecological and hydrological ground-truth observations, machine learning models can have limited adaptability when applied to unknown locations, which is referred to as weak extrapolability. Domain adaptation techniques have been widely used in machine learning domains such as image classification, which can improve the model generalization ability by adjusting the difference or inconsistency of the domain distribution between the training and test sets. However, this approach has rarely been used explicitly in machine learning models in ecology and hydrology at the global scale, although these models have often been questioned due to geographic extrapolability issues. This paper briefly describes the shortcomings of current machine learning models of ecology and hydrology in terms of the global representativeness of the distribution of observations and the resulting limitations of the lack of extrapolability and suggests that future related modelling efforts should consider the use of domain adaptation techniques to improve extrapolability.
[ "['Haiyang Shi']" ]
null
null
2403.11332
null
null
http://arxiv.org/pdf/2403.11332v2
2024-05-31T21:38:53Z
2024-03-17T20:23:42Z
Graph Machine Learning based Doubly Robust Estimator for Network Causal Effects
We address the challenge of inferring causal effects in social network data. This task is difficult due to interference -- where a unit's outcome is affected by neighbors' treatments -- and network-induced confounding factors. While there is extensive literature on estimating causal effects in social network setups, a majority of approaches make prior assumptions about the form of network-induced confounding mechanisms. Such strong assumptions rarely hold, especially in high-dimensional networks. We propose a novel methodology that combines graph machine learning approaches with the double machine learning framework to enable accurate and efficient estimation of direct and peer effects using a single observational social network. We demonstrate the semiparametric efficiency of our proposed estimator under mild regularity conditions, allowing for consistent uncertainty quantification. We demonstrate that our method is accurate, robust, and scalable via an extensive simulation study. We use our method to investigate the impact of Self-Help Group participation on financial risk tolerance.
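A minimal sketch of the doubly robust (AIPW) estimator that the double machine learning framework builds on, shown in the i.i.d. no-interference case with simple plug-in nuisance models; the paper's network/interference setting is considerably richer.

```python
# A minimal sketch of the augmented inverse-propensity-weighted (AIPW)
# estimator of the average treatment effect, with linear/logistic nuisance
# models on synthetic data where the true effect is 2.0.
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 3))
ps_true = 1 / (1 + np.exp(-X[:, 0]))        # true propensity score
T = rng.binomial(1, ps_true)                # treatment assignment
Y = 2.0 * T + X[:, 1] + rng.normal(size=n)  # outcome; true effect = 2.0

e = LogisticRegression().fit(X, T).predict_proba(X)[:, 1]      # propensity
mu1 = LinearRegression().fit(X[T == 1], Y[T == 1]).predict(X)  # E[Y|X, T=1]
mu0 = LinearRegression().fit(X[T == 0], Y[T == 0]).predict(X)  # E[Y|X, T=0]

aipw = (mu1 - mu0
        + T * (Y - mu1) / e
        - (1 - T) * (Y - mu0) / (1 - e))
print(f"AIPW ATE estimate: {aipw.mean():.3f}")   # close to 2.0
```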
[ "['Seyedeh Baharan Khatami' 'Harsh Parikh' 'Haowei Chen' 'Sudeepa Roy'\n 'Babak Salimi']" ]
null
null
2403.11338
null
null
http://arxiv.org/pdf/2403.11338v1
2024-03-17T20:44:38Z
2024-03-17T20:44:38Z
Ensembling and Test Augmentation for Covid-19 Detection and Covid-19 Domain Adaptation from 3D CT-Scans
Since the emergence of Covid-19 in late 2019, medical image analysis using artificial intelligence (AI) has emerged as a crucial research area, particularly given the utility of CT-scan imaging for disease diagnosis. This paper contributes to the 4th COV19D competition, focusing on the Covid-19 Detection and Covid-19 Domain Adaptation Challenges. Our approach centers on lung segmentation and Covid-19 infection segmentation employing the recent CNN-based segmentation architecture PDAtt-Unet, which simultaneously segments lung regions and infections. Departing from traditional methods, we concatenate the input slice (grayscale) with the segmented lung and infection, generating three input channels akin to color channels. Additionally, we employ three 3D CNN backbones (a customized Hybrid-DeCoVNet, along with pretrained 3D-ResNet-18 and 3D-ResNet-50 models) to train Covid-19 recognition for both challenges. Furthermore, we explore ensemble approaches and test augmentation to enhance performance. Comparison with baseline results underscores the substantial efficiency of our approach, with a significant margin in terms of F1-score (14%). This study advances the field by presenting a comprehensive methodology for accurate Covid-19 detection and adaptation, leveraging cutting-edge AI techniques in medical image analysis.
[ "['Fares Bougourzi' 'Feryal Windal Moula' 'Halim Benhabiles'\n 'Fadi Dornaika' 'Abdelmalik Taleb-Ahmed']" ]
null
null
2403.11343
null
null
http://arxiv.org/pdf/2403.11343v2
2024-04-09T10:34:40Z
2024-03-17T21:04:48Z
Federated Transfer Learning with Differential Privacy
Federated learning is gaining increasing popularity, with data heterogeneity and privacy being two prominent challenges. In this paper, we address both issues within a federated transfer learning framework, aiming to enhance learning on a target data set by leveraging information from multiple heterogeneous source data sets while adhering to privacy constraints. We rigorously formulate the notion of \textit{federated differential privacy}, which offers privacy guarantees for each data set without assuming a trusted central server. Under this privacy constraint, we study three classical statistical problems, namely univariate mean estimation, low-dimensional linear regression, and high-dimensional linear regression. By investigating the minimax rates and identifying the costs of privacy for these problems, we show that federated differential privacy is an intermediate privacy model between the well-established local and central models of differential privacy. Our analyses incorporate data heterogeneity and privacy, highlighting the fundamental costs of both in federated learning and underscoring the benefit of knowledge transfer across data sets.
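A minimal sketch of the flavor of the univariate mean estimation problem: each silo releases a Gaussian-mechanism-privatized local mean, so no trusted central server sees raw data. The noise calibration below is the textbook one, not the paper's minimax-optimal procedure.

```python
# A minimal sketch of federated private mean estimation: clip locally,
# add Gaussian-mechanism noise per silo, then average the noisy releases.
import numpy as np

rng = np.random.default_rng(0)

def private_local_mean(x, clip=1.0, eps=1.0, delta=1e-5):
    x = np.clip(x, -clip, clip)
    sensitivity = 2 * clip / len(x)     # L2 sensitivity of the clipped mean
    sigma = sensitivity * np.sqrt(2 * np.log(1.25 / delta)) / eps
    return x.mean() + rng.normal(0.0, sigma)

silos = [rng.normal(0.3, 1.0, size=n) for n in (500, 800, 1200)]
estimate = np.mean([private_local_mean(s) for s in silos])
print(f"federated private mean estimate: {estimate:.3f}")  # near 0.3
```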
[ "['Mengchu Li' 'Ye Tian' 'Yang Feng' 'Yi Yu']" ]
null
null
2403.11345
null
null
http://arxiv.org/pdf/2403.11345v1
2024-03-17T21:11:55Z
2024-03-17T21:11:55Z
Independent RL for Cooperative-Competitive Agents: A Mean-Field Perspective
In this paper, we address Reinforcement Learning (RL) among agents that are grouped into teams such that there is cooperation within each team but general-sum (non-zero-sum) competition across different teams. To develop an RL method that provably achieves a Nash equilibrium, we focus on a linear-quadratic structure. Moreover, to tackle the non-stationarity induced by multi-agent interactions in the finite-population setting, we consider the case where the number of agents within each team is infinite, i.e., the mean-field setting. This results in a General-Sum LQ Mean-Field Type Game (GS-MFTG). We characterize the Nash equilibrium (NE) of the GS-MFTG under a standard invertibility condition. This MFTG NE is then shown to be an $\mathcal{O}(1/M)$-NE for the finite-population game, where $M$ is a lower bound on the number of agents in each team. These structural results motivate an algorithm called Multi-player Receding-horizon Natural Policy Gradient (MRPG), where each team minimizes its cumulative cost independently in a receding-horizon manner. Despite the non-convexity of the problem, we establish that the resulting algorithm converges to a global NE through a novel problem decomposition into sub-problems using backward recursive discrete-time Hamilton-Jacobi-Isaacs (HJI) equations, in which independent natural policy gradient is shown to exhibit linear convergence under time-independent diagonal dominance. Experiments illuminate the merits of this approach in practice.
[ "['Muhammad Aneeq uz Zaman' 'Alec Koppel' 'Mathieu Laurière' 'Tamer Başar']" ]
null
null
2403.11348
null
null
http://arxiv.org/pdf/2403.11348v1
2024-03-17T21:23:45Z
2024-03-17T21:23:45Z
COLEP: Certifiably Robust Learning-Reasoning Conformal Prediction via Probabilistic Circuits
Conformal prediction has shown strong performance in constructing statistically rigorous prediction sets for arbitrary black-box machine learning models, assuming the data is exchangeable. However, even small adversarial perturbations during inference can violate the exchangeability assumption, challenge the coverage guarantees, and result in a subsequent decline in empirical coverage. In this work, we propose a certifiably robust learning-reasoning conformal prediction framework (COLEP) via probabilistic circuits, which comprises a data-driven learning component that trains statistical models to learn different semantic concepts, and a reasoning component that encodes knowledge and characterizes the relationships among the trained models for logical reasoning. To achieve exact and efficient reasoning, we employ probabilistic circuits (PCs) within the reasoning component. Theoretically, we provide end-to-end certification of prediction coverage for COLEP in the presence of bounded adversarial perturbations. We also provide certified coverage considering the finite size of the calibration set. Furthermore, we prove that COLEP achieves higher prediction coverage and accuracy over a single model as long as the utilities of the knowledge models are non-trivial. Empirically, we show the validity and tightness of our certified coverage, demonstrating the robust conformal prediction of COLEP on various datasets, including GTSRB, CIFAR-10, and AwA2. We show that COLEP achieves up to a 12% improvement in certified coverage on GTSRB, 9% on CIFAR-10, and 14% on AwA2.
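For contrast with COLEP's certified, reasoning-augmented construction, here is a minimal sketch of plain (non-robust) split conformal prediction for a generic classifier, using synthetic probabilities.

```python
# A minimal sketch of split conformal prediction: calibrate a nonconformity
# threshold on held-out data, then emit label sets with ~(1 - alpha) coverage.
import numpy as np

def conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    n = len(cal_labels)
    # nonconformity score: 1 minus the probability assigned to the true label
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n)
    return [np.where(1.0 - p <= q)[0] for p in test_probs]

rng = np.random.default_rng(0)
cal_probs = rng.dirichlet(np.ones(3), size=500)   # calibration softmax outputs
cal_labels = rng.integers(0, 3, size=500)
test_probs = rng.dirichlet(np.ones(3), size=5)
print(conformal_sets(cal_probs, cal_labels, test_probs))
```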
[ "['Mintong Kang' 'Nezihe Merve Gürel' 'Linyi Li' 'Bo Li']" ]
null
null
2403.11350
null
null
http://arxiv.org/pdf/2403.11350v1
2024-03-17T21:35:45Z
2024-03-17T21:35:45Z
Robustness of the data-driven approach in limited angle tomography
The limited-angle Radon transform is notoriously difficult to invert due to its ill-posedness. In this work, we give a mathematical explanation of how the data-driven approach based on deep neural networks can reconstruct more information in a stable way compared to traditional methods.
[ "['Yiran Wang' 'Yimin Zhong']" ]
null
null
2403.11351
null
null
http://arxiv.org/pdf/2403.11351v1
2024-03-17T21:43:19Z
2024-03-17T21:43:19Z
An SDP-based Branch-and-Cut Algorithm for Biclustering
Biclustering, also called co-clustering, block clustering, or two-way clustering, involves the simultaneous clustering of both the rows and columns of a data matrix into distinct groups, such that the rows and columns within a group display similar patterns. As a model problem for biclustering, we consider the $k$-densest-disjoint biclique problem, whose goal is to identify $k$ disjoint complete bipartite subgraphs (called bicliques) of a given weighted complete bipartite graph such that the sum of their densities is maximized. To address this problem, we present a tailored branch-and-cut algorithm. For the upper bound routine, we consider a semidefinite programming relaxation and propose valid inequalities to strengthen the bound. We solve this relaxation in a cutting-plane fashion using a first-order method. For the lower bound, we design a maximum weight matching rounding procedure that exploits the solution of the relaxation solved at each node. Computational results on both synthetic and real-world instances show that the proposed algorithm can solve instances approximately 20 times larger than those handled by general-purpose solvers.
[ "['Antonio M. Sudoso']" ]
null
null
2403.11353
null
null
http://arxiv.org/pdf/2403.11353v3
2024-05-30T23:18:46Z
2024-03-17T21:52:51Z
AI-enabled prediction of NMR spectroscopy: Deducing 2-D NMR of carbohydrate
In the dynamic field of nuclear magnetic resonance (NMR) spectroscopy, artificial intelligence (AI) has ushered in a transformative era for molecular studies. AI-driven NMR prediction, powered by advanced machine learning and predictive algorithms, has fundamentally reshaped the interpretation of NMR spectra. This innovation empowers us to forecast spectral patterns swiftly and accurately across a broad spectrum of molecular structures. Furthermore, the advent of generative modeling offers a groundbreaking approach, making it feasible to make informed predictions of 2D NMR from chemical language (such as SMILES or IUPAC names). Our method mirrors the multifaceted nature of NMR imaging experiments, producing 2D NMR spectra for the same molecule under different conditions, such as solvents and temperatures. Our methodology is versatile, catering to monosaccharide-derived small molecules, oligosaccharides, and large polysaccharides. A deeper exploration of the discrepancies in these predictions can provide insights into the influence of elements such as functional groups, repeating units, and the modification of the monomers on the outcomes. Given the complexity involved in the generation of 2D NMR spectra, our objective is to fully leverage the potential of AI to enhance the precision, efficiency, and comprehensibility of NMR spectral analysis, ultimately advancing both the field of NMR spectroscopy and the broader realm of molecular research.
[ "['Yunrui Li' 'Hao Xu' 'Pengyu Hong']" ]
null
null
2403.11363
null
null
http://arxiv.org/pdf/2403.11363v1
2024-03-17T22:44:36Z
2024-03-17T22:44:36Z
IGANN Sparse: Bridging Sparsity and Interpretability with Non-linear Insight
Feature selection is a critical component of predictive analytics that significantly affects the prediction accuracy and interpretability of models. Intrinsic methods for feature selection are built directly into model learning, providing a fast and attractive option for large amounts of data. Machine learning algorithms, such as penalized regression models (e.g., lasso), are the most common choice when it comes to in-built feature selection. However, they fail to capture non-linear relationships, which ultimately affects their ability to predict outcomes in intricate datasets. In this paper, we propose IGANN Sparse, a novel machine learning model from the family of generalized additive models, which promotes sparsity through a non-linear feature selection process during training. This ensures interpretability through improved model sparsity without sacrificing predictive performance. Moreover, IGANN Sparse serves as an exploratory tool for information systems researchers to unveil important non-linear relationships in domains that are characterized by complex patterns. Our ongoing research is directed at a thorough evaluation of the IGANN Sparse model, including user studies that allow us to assess how well users of the model can benefit from the reduced number of features. This will allow for a deeper understanding of the interactions between linear vs. non-linear modeling, the number of selected features, and predictive performance.
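A minimal sketch of the in-built (lasso) feature selection being contrasted here, illustrating how a purely linear penalty can miss a non-linear signal; IGANN Sparse itself is not shown.

```python
# A minimal sketch of intrinsic feature selection with the lasso: the L1
# penalty zeroes out coefficients, but a non-linear signal (the sin term)
# contributes only a weak linear component and is often dropped.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))
y = 2 * X[:, 0] + np.sin(3 * X[:, 1]) + 0.1 * rng.normal(size=1000)

lasso = Lasso(alpha=0.05).fit(X, y)
selected = np.flatnonzero(lasso.coef_)
print("lasso keeps features:", selected)  # typically keeps 0, often misses 1
```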
[ "['Theodor Stoecker' 'Nico Hambauer' 'Patrick Zschech' 'Mathias Kraus']" ]
null
null
2403.11366
null
null
http://arxiv.org/pdf/2403.11366v2
2024-03-19T16:19:49Z
2024-03-17T23:02:04Z
JORA: JAX Tensor-Parallel LoRA Library for Retrieval Augmented Fine-Tuning
The scaling of Large Language Models (LLMs) for retrieval-based tasks, particularly in Retrieval Augmented Generation (RAG), faces significant memory constraints, especially when fine-tuning extensive prompt sequences. Current open-source libraries support full-model inference and fine-tuning across multiple GPUs but fall short of accommodating the efficient parameter distribution required for retrieved context. Addressing this gap, we introduce a novel framework for PEFT-compatible fine-tuning of Llama-2 models, leveraging distributed training. Our framework uniquely utilizes JAX's just-in-time (JIT) compilation and tensor-sharding for efficient resource management, thereby enabling accelerated fine-tuning with reduced memory requirements. This advancement significantly improves the scalability and feasibility of fine-tuning LLMs for complex RAG applications, even on systems with limited GPU resources. Our experiments show more than 12x improvement in runtime compared to Hugging Face/DeepSpeed implementation with four GPUs while consuming less than half the VRAM per GPU.
[ "['Anique Tahir' 'Lu Cheng' 'Huan Liu']" ]
null
null
2403.11375
null
null
http://arxiv.org/pdf/2403.11375v1
2024-03-18T00:02:48Z
2024-03-18T00:02:48Z
Path-GPTOmic: A Balanced Multi-modal Learning Framework for Survival Outcome Prediction
For predicting cancer survival outcomes, standard approaches in clinical research are often based on two main modalities: pathology images for observing cell morphology features, and genomics (e.g., bulk RNA-seq) for quantifying gene expression. However, existing pathology-genomic multi-modal algorithms face significant challenges: (1) valuable biological insights regarding genes and gene-gene interactions are frequently overlooked; (2) one modality often dominates the optimization process, causing inadequate training for the other modality. In this paper, we introduce a new multi-modal "Path-GPTOmic" framework for cancer survival outcome prediction. First, to extract valuable biological insights, we regulate the embedding space of a foundation model, scGPT, initially trained on single-cell RNA-seq data, making it adaptable for bulk RNA-seq data. Second, to address the imbalance-between-modalities problem, we propose a gradient modulation mechanism tailored to the Cox partial likelihood loss for survival prediction. The contributions of the modalities are dynamically monitored and adjusted during the training process, ensuring that both modalities are sufficiently trained. Evaluated on two TCGA (The Cancer Genome Atlas) datasets, our model achieves substantially improved survival prediction accuracy.
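A minimal sketch of the negative Cox partial likelihood (Breslow form, no ties) that the proposed gradient modulation is tailored to; the modulation mechanism itself is omitted.

```python
# A minimal sketch of the negative Cox partial likelihood in PyTorch.
# risk: predicted log-risk per patient; event: 1 if death observed, 0 censored.
import torch

def cox_ph_loss(risk, time, event):
    order = torch.argsort(time, descending=True)   # later event times first
    risk, event = risk[order], event[order]
    # log of the sum of exp(risk) over the risk set {j : time_j >= time_i}
    log_risk_set = torch.logcumsumexp(risk, dim=0)
    return -((risk - log_risk_set) * event).sum() / event.sum()

risk = torch.randn(8, requires_grad=True)
time = torch.rand(8)
event = torch.tensor([1., 0., 1., 1., 0., 1., 0., 1.])
loss = cox_ph_loss(risk, time, event)
loss.backward()                                     # gradients flow to risk
print(loss.item())
```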
[ "['Hongxiao Wang' 'Yang Yang' 'Zhuo Zhao' 'Pengfei Gu' 'Nishchal Sapkota'\n 'Danny Z. Chen']" ]
null
null
2403.11385
null
null
http://arxiv.org/pdf/2403.11385v1
2024-03-18T00:22:33Z
2024-03-18T00:22:33Z
Stochastic approach for elliptic problems in perforated domains
A wide range of applications in science and engineering involve a PDE model in a domain with perforations, such as perforated metals or air filters. Solving such perforated domain problems suffers from computational challenges related to resolving the scale imposed by the geometries of perforations. We propose a neural network-based mesh-free approach for perforated domain problems. The method is robust and efficient in capturing various configuration scales, including the averaged macroscopic behavior of the solution that involves a multiscale nature induced by small perforations. The new approach incorporates the derivative-free loss method that uses a stochastic representation or the Feynman-Kac formulation. In particular, we implement the Neumann boundary condition for the derivative-free loss method to handle the interface between the domain and perforations. A suite of stringent numerical tests is provided to support the proposed method's efficacy in handling various perforation scales.
[ "['Jihun Han' 'Yoonsang Lee']" ]
null
null
2403.11391
null
null
http://arxiv.org/pdf/2403.11391v1
2024-03-18T00:48:58Z
2024-03-18T00:48:58Z
Investigating the Benefits of Projection Head for Representation Learning
An effective technique for obtaining high-quality representations is adding a projection head on top of the encoder during training, then discarding it and using the pre-projection representations. Despite its proven practical effectiveness, the reason behind the success of this technique is poorly understood. The pre-projection representations are not directly optimized by the loss function, raising the question: what makes them better? In this work, we provide a rigorous theoretical answer to this question. We start by examining linear models trained with self-supervised contrastive loss. We reveal that the implicit bias of training algorithms leads to layer-wise progressive feature weighting, where features become increasingly unequal as we go deeper into the layers. Consequently, lower layers tend to have more normalized and less specialized representations. We theoretically characterize scenarios where such representations are more beneficial, highlighting the intricate interplay between data augmentation and input features. Additionally, we demonstrate that introducing non-linearity into the network allows lower layers to learn features that are completely absent in higher layers. Finally, we show how this mechanism improves the robustness in supervised contrastive learning and supervised learning. We empirically validate our results through various experiments on CIFAR-10/100, UrbanCars and shifted versions of ImageNet. We also introduce a potential alternative to projection head, which offers a more interpretable and controllable design.
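A minimal sketch of the train-with-head, discard-at-inference pattern analyzed here: the contrastive-style loss sees z = g(f(x)) while downstream tasks use the pre-projection h = f(x). Dimensions and the toy objective are illustrative.

```python
# A minimal sketch of encoder + projection head: z feeds the training loss,
# h (pre-projection) is what is kept and evaluated downstream.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, 128))
proj_head = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 32))

x = torch.randn(16, 32)
h = encoder(x)          # pre-projection representation: kept for downstream
z = proj_head(h)        # post-projection: only used inside the training loss

# toy contrastive-style quantity computed on z; h never sees the loss directly
z = nn.functional.normalize(z, dim=1)
sim = z @ z.T           # cosine similarities that a contrastive loss would use
print(h.shape, sim.shape)
```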
[ "['Yihao Xue' 'Eric Gan' 'Jiayi Ni' 'Siddharth Joshi'\n 'Baharan Mirzasoleiman']" ]
null
null
2403.11395
null
null
http://arxiv.org/abs/2403.11395v2
2024-03-19T09:36:27Z
2024-03-18T01:07:48Z
Automated data processing and feature engineering for deep learning and big data applications: a survey
The modern approach to artificial intelligence (AI) aims to design algorithms that learn directly from data. This approach has achieved impressive results and has contributed significantly to the progress of AI, particularly in the sphere of supervised deep learning. It has also simplified the design of machine learning systems, as the learning process is highly automated. However, not all data processing tasks in conventional deep learning pipelines have been automated. In most cases, data has to be manually collected, preprocessed, and further extended through data augmentation before it can be effective for training. Recently, special techniques for automating these tasks have emerged. The automation of data processing tasks is driven by the need to utilize large volumes of complex, heterogeneous data for machine learning and big data applications. Today, end-to-end automated data processing systems based on automated machine learning (AutoML) techniques are capable of taking raw data and transforming it into useful features for big data tasks by automating all intermediate processing stages. In this work, we present a thorough review of approaches for automating data processing tasks in deep learning pipelines, including automated data preprocessing--e.g., data cleaning, labeling, missing data imputation, and categorical data encoding--as well as data augmentation (including synthetic data generation using generative AI methods) and feature engineering--specifically, automated feature extraction, feature construction, and feature selection. In addition to automating specific data processing tasks, we discuss the use of AutoML methods and tools to simultaneously optimize all stages of the machine learning pipeline.
[ "['Alhassan Mumuni' 'Fuseini Mumuni']" ]
null
null
2403.11407
null
null
http://arxiv.org/pdf/2403.11407v1
2024-03-18T01:47:24Z
2024-03-18T01:47:24Z
Divide-and-Conquer Posterior Sampling for Denoising Diffusion Priors
Interest in using Denoising Diffusion Models (DDMs) as priors for solving Bayesian inverse problems has recently increased significantly. However, sampling from the resulting posterior distribution poses a challenge. To solve this problem, previous works have proposed approximations to bias the drift term of the diffusion. In this work, we take a different approach and utilize the specific structure of the DDM prior to define a set of intermediate and simpler posterior sampling problems, resulting in a lower approximation error compared to previous methods. We empirically demonstrate the reconstruction capability of our method for general linear inverse problems using synthetic examples and various image restoration tasks.
[ "['Yazid Janati' 'Alain Durmus' 'Eric Moulines' 'Jimmy Olsson']" ]
null
null
2403.11408
null
null
http://arxiv.org/pdf/2403.11408v1
2024-03-18T01:48:50Z
2024-03-18T01:48:50Z
Layer-diverse Negative Sampling for Graph Neural Networks
Graph neural networks (GNNs) are a powerful solution for various structure learning applications due to their strong representation capabilities for graph data. However, traditional GNNs, relying on message-passing mechanisms that gather information exclusively from first-order neighbours (known as positive samples), can lead to issues such as over-smoothing and over-squashing. To mitigate these issues, we propose a layer-diverse negative sampling method for message-passing propagation. This method employs a sampling matrix within a determinantal point process, which transforms the candidate set into a space and selectively samples from this space to generate negative samples. To further enhance the diversity of the negative samples during each forward pass, we develop a space-squeezing method to achieve layer-wise diversity in multi-layer GNNs. Experiments on various real-world graph datasets demonstrate the effectiveness of our approach in improving the diversity of negative samples and overall learning performance. Moreover, adding negative samples dynamically changes the graph's topology, thus with the strong potential to improve the expressiveness of GNNs and reduce the risk of over-squashing.
[ "['Wei Duan' 'Jie Lu' 'Yu Guang Wang' 'Junyu Xuan']" ]
null
null
2403.11415
null
null
http://arxiv.org/pdf/2403.11415v1
2024-03-18T02:08:58Z
2024-03-18T02:08:58Z
DreamSampler: Unifying Diffusion Sampling and Score Distillation for Image Manipulation
Reverse sampling and score distillation have emerged as the main workhorses in recent years for image manipulation using latent diffusion models (LDMs). While reverse diffusion sampling often requires adjustments of the LDM architecture or feature engineering, score distillation offers a simple yet powerful model-agnostic approach, but it is often prone to mode collapse. To address these limitations and leverage the strengths of both approaches, here we introduce a novel framework called \emph{DreamSampler}, which seamlessly integrates these two distinct approaches through the lens of regularized latent optimization. Similar to score distillation, DreamSampler is a model-agnostic approach applicable to any LDM architecture, but it allows both distillation and reverse sampling with additional guidance for image editing and reconstruction. Through experiments involving image editing, SVG reconstruction, and more, we demonstrate the competitive performance of DreamSampler compared to existing approaches, while providing new applications.
[ "['Jeongsol Kim' 'Geon Yeong Park' 'Jong Chul Ye']" ]
null
null
2403.11418
null
null
http://arxiv.org/pdf/2403.11418v1
2024-03-18T02:12:12Z
2024-03-18T02:12:12Z
Variational Sampling of Temporal Trajectories
A deterministic temporal process can be determined by its trajectory, an element in the product space of (a) an initial condition $z_0 \in \mathcal{Z}$ and (b) a transition function $f: (\mathcal{Z}, \mathcal{T}) \to \mathcal{Z}$, often influenced by the control of the underlying dynamical system. Existing methods often model the transition function as a differential equation or as a recurrent neural network. Despite their effectiveness in predicting future measurements, few results have successfully established a method for sampling and statistical inference of trajectories using neural networks, partially due to constraints in the parameterization. In this work, we introduce a mechanism to learn the distribution of trajectories by parameterizing the transition function $f$ explicitly as an element in a function space. Our framework allows efficient synthesis of novel trajectories, while also directly providing a convenient tool for inference, i.e., uncertainty estimation, likelihood evaluation, and out-of-distribution detection for abnormal trajectories. These capabilities can have implications for various downstream tasks, e.g., simulation and evaluation for reinforcement learning.
[ "['Jurijs Nazarovs' 'Zhichun Huang' 'Xingjian Zhen' 'Sourav Pal'\n 'Rudrasis Chakraborty' 'Vikas Singh']" ]
null
null
2403.11420
null
null
http://arxiv.org/pdf/2403.11420v1
2024-03-18T02:20:22Z
2024-03-18T02:20:22Z
Neural network representation of quantum systems
It has been proposed that random wide neural networks near the Gaussian process limit are quantum field theories around Gaussian fixed points. In this paper, we provide a novel map with which a wide class of quantum mechanical systems can be cast into the form of a neural network with a statistical summation over network parameters. Our simple idea is to use the universal approximation theorem of neural networks to generate arbitrary paths in Feynman's path integral. The map can be applied to interacting quantum systems / field theories, even away from the Gaussian limit. Our findings bring machine learning closer to the quantum world.
[ "['Koji Hashimoto' 'Yuji Hirono' 'Jun Maeda' 'Jojiro Totsuka-Yoshinaka']" ]
null
null
2403.11425
null
null
http://arxiv.org/pdf/2403.11425v1
2024-03-18T02:42:01Z
2024-03-18T02:42:01Z
Narrative Feature or Structured Feature? A Study of Large Language Models to Identify Cancer Patients at Risk of Heart Failure
Cancer treatments are known to introduce cardiotoxicity, negatively impacting outcomes and survivorship. Identifying cancer patients at risk of heart failure (HF) is critical to improving cancer treatment outcomes and safety. This study examined machine learning (ML) models to identify cancer patients at risk of HF using electronic health records (EHRs), including traditional ML, Time-Aware long short-term memory (T-LSTM), and large language models (LLMs) using novel narrative features derived from the structured medical codes. We identified a cancer cohort of 12,806 patients from the University of Florida Health, diagnosed with lung, breast, and colorectal cancers, among which 1,602 individuals developed HF after cancer. The LLM, GatorTron-3.9B, achieved the best F1 scores, outperforming the traditional support vector machines by 39%, the T-LSTM deep learning model by 7%, and a widely used transformer model, BERT, by 5.6%. The analysis shows that the proposed narrative features remarkably increased feature density and improved performance.
[ "['Ziyi Chen' 'Mengyuan Zhang' 'Mustafa Mohammed Ahmed' 'Yi Guo'\n 'Thomas J. George' 'Jiang Bian' 'Yonghui Wu']" ]
null
null
2403.11432
null
null
http://arxiv.org/pdf/2403.11432v2
2024-06-13T15:03:40Z
2024-03-18T02:59:13Z
Demystifying the Physics of Deep Reinforcement Learning-Based Autonomous Vehicle Decision-Making
With the advent of universal function approximators in the domain of reinforcement learning, the number of practical applications leveraging deep reinforcement learning (DRL) has exploded. Decision-making in autonomous vehicles (AVs) has emerged as a chief application among them, taking the sensor data or the higher-order kinematic variables as the input and providing a discrete choice or continuous control output. There has been a continuous effort to understand the black-box nature of the DRL models, but so far, there hasn't been any discussion (to the best of the authors' knowledge) about how the models learn the physical process. This presents an overwhelming limitation that restricts the real-world deployment of DRL in AVs. Therefore, in this research work, we try to decode the knowledge learnt by the attention-based DRL framework about the physical process. We use a continuous proximal policy optimization-based DRL algorithm as the baseline model and add a multi-head attention framework in an open-source AV simulation environment. We provide some analytical techniques for discussing the interpretability of the trained models in terms of explainability and causality for spatial and temporal correlations. We show that the weights in the first head encode the positions of the neighboring vehicles while the second head focuses on the leader vehicle exclusively. Also, the ego vehicle's action is causally dependent on the vehicles in the target lane spatially and temporally. Through these findings, we reliably show that these techniques can help practitioners decipher the results of the DRL algorithms.
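To make the attention-inspection step concrete, here is a small sketch (assuming PyTorch and a toy vehicle-feature layout, neither taken from the paper) of how per-head attention weights over neighboring vehicles can be read out:

```python
# Illustrative sketch, not the paper's framework: query with the ego token and
# inspect each attention head's weights over the surrounding vehicles.
import torch
import torch.nn as nn

n_vehicles, feat_dim, embed_dim, n_heads = 5, 4, 32, 2

embed = nn.Linear(feat_dim, embed_dim)
attn = nn.MultiheadAttention(embed_dim, n_heads, batch_first=True)

# row 0 = ego vehicle, rows 1.. = neighbors; features e.g. (x, y, vx, vy)
obs = torch.randn(1, n_vehicles, feat_dim)
tokens = embed(obs)

# ask for per-head weights instead of the default head-averaged ones
out, weights = attn(tokens[:, :1], tokens, tokens, average_attn_weights=False)
print(weights.shape)  # (batch, n_heads, 1, n_vehicles)

# per-head attention over vehicles: check, e.g., whether one head
# concentrates on the leader vehicle
for h in range(n_heads):
    print(f"head {h}:", weights[0, h, 0].detach().numpy().round(3))
```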
[ "['Hanxi Wan' 'Pei Li' 'Arpan Kusari']" ]
null
null
2403.11446
null
null
http://arxiv.org/pdf/2403.11446v1
2024-03-18T03:44:55Z
2024-03-18T03:44:55Z
LLM Guided Evolution -- The Automation of Models Advancing Models
In the realm of machine learning, traditional model development and automated approaches like AutoML typically rely on layers of abstraction, such as tree-based or Cartesian genetic programming. Our study introduces "Guided Evolution" (GE), a novel framework that diverges from these methods by utilizing Large Language Models (LLMs) to directly modify code. GE leverages LLMs for a more intelligent, supervised evolutionary process, guiding mutations and crossovers. Our unique "Evolution of Thought" (EoT) technique further enhances GE by enabling LLMs to reflect on and learn from the outcomes of previous mutations. This results in a self-sustaining feedback loop that augments decision-making in model evolution. GE maintains genetic diversity, crucial for evolutionary algorithms, by leveraging LLMs' capability to generate diverse responses from expertly crafted prompts and modulate model temperature. This not only accelerates the evolution process but also injects expert-like creativity and insight into the process. Our application of GE in evolving the ExquisiteNetV2 model demonstrates its efficacy: the LLM-driven GE autonomously produced variants with improved accuracy, increasing from 92.52% to 93.34%, without compromising model compactness. This underscores the potential of LLMs to accelerate the traditional model design pipeline, enabling models to autonomously evolve and enhance their own designs.
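A hedged pseudocode-style sketch of the loop the abstract describes; `llm_mutate`, `llm_crossover`, and `evaluate` are hypothetical stubs standing in for LLM calls and for model training/validation:

```python
# Sketch of a Guided-Evolution-style loop with EoT feedback; all functions
# are placeholders, not the paper's implementation.
import random

def llm_mutate(code: str, feedback: str = "") -> str:
    # placeholder: would prompt an LLM to edit the model code, optionally
    # conditioning on "Evolution of Thought" feedback from prior generations
    return code + f"  # mutated (feedback: {feedback[:20]})"

def llm_crossover(a: str, b: str) -> str:
    return a if random.random() < 0.5 else b   # placeholder merge

def evaluate(code: str) -> float:
    return random.random()                     # placeholder validation accuracy

population = [f"model_variant_{i}" for i in range(8)]
feedback = ""
for gen in range(3):
    scored = sorted(population, key=evaluate, reverse=True)
    elite = scored[: len(scored) // 2]
    # EoT: summarize what worked and feed it back into the next mutations
    feedback = f"gen {gen}: best={elite[0]}"
    children = [llm_mutate(llm_crossover(*random.sample(elite, 2)), feedback)
                for _ in range(len(population) - len(elite))]
    population = elite + children
print(population[0])
```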
[ "['Clint Morris' 'Michael Jurado' 'Jason Zutty']" ]
null
null
2403.11449
null
null
http://arxiv.org/pdf/2403.11449v2
2024-05-22T00:16:18Z
2024-03-18T03:56:34Z
Graph Partial Label Learning with Potential Cause Discovering
Graph Neural Networks (GNNs) have garnered widespread attention for their potential to address the challenges posed by graph representation learning, which involves complex graph-structured data across various domains. However, due to the inherent complexity and interconnectedness of graphs, accurately annotating graph data for training GNNs is extremely challenging. To address this issue, we have introduced Partial Label Learning (PLL) into graph representation learning. PLL is a critical weakly supervised learning problem where each training instance is associated with a set of candidate labels, including the ground-truth label and additional interfering labels. PLL allows annotators to make errors, which reduces the difficulty of data labeling. Subsequently, we propose a novel graph representation learning method that enables GNN models to effectively learn discriminative information within the context of PLL. Our approach utilizes potential cause extraction to obtain graph data that holds causal relationships with the labels. By conducting auxiliary training based on the extracted graph data, our model can effectively eliminate the interfering information in the PLL scenario. We support the rationale behind our method with a series of theoretical analyses. Moreover, we conduct extensive evaluations and ablation studies on multiple datasets, demonstrating the superiority of our proposed method.
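For reference, a minimal sketch of the basic PLL objective such methods build on (candidate-set cross-entropy; the paper's potential-cause component is its own and is not shown):

```python
# Illustrative sketch: cross-entropy restricted to the candidate label set,
# so the model only has to place probability mass inside the set.
import torch
import torch.nn.functional as F

def pll_loss(logits: torch.Tensor, candidate_mask: torch.Tensor) -> torch.Tensor:
    """logits: (N, C); candidate_mask: (N, C) with 1 for candidate labels."""
    log_probs = F.log_softmax(logits, dim=-1)
    # log of the total probability assigned to the candidate set
    cand_logp = torch.logsumexp(
        log_probs.masked_fill(candidate_mask == 0, float("-inf")), dim=-1)
    return -cand_logp.mean()

logits = torch.randn(4, 5, requires_grad=True)
mask = torch.zeros(4, 5)
mask[torch.arange(4), torch.randint(0, 5, (4,))] = 1   # ground-truth labels
mask[:, 0] = 1                                          # an interfering label
print(pll_loss(logits, mask).item())
```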
[ "['Hang Gao' 'Jiaguo Yuan' 'Jiangmeng Li' 'Peng Qiao' 'Fengge Wu'\n 'Changwen Zheng' 'Huaping Liu']" ]
null
null
2403.11464
null
null
http://arxiv.org/pdf/2403.11464v1
2024-03-18T04:31:38Z
2024-03-18T04:31:38Z
FedSPU: Personalized Federated Learning for Resource-constrained Devices with Stochastic Parameter Update
Personalized Federated Learning (PFL) is widely employed in IoT applications to handle high-volume, non-iid client data while ensuring data privacy. However, heterogeneous edge devices owned by clients may impose varying degrees of resource constraints, causing computation and communication bottlenecks for PFL. Federated Dropout has emerged as a popular strategy to address this challenge, wherein only a subset of the global model, i.e., a sub-model, is trained on a client's device, thereby reducing computation and communication overheads. Nevertheless, the dropout-based model-pruning strategy may introduce bias, particularly towards non-iid local data. When biased sub-models absorb highly divergent parameters from other clients, performance degradation becomes inevitable. In response, we propose federated learning with stochastic parameter update (FedSPU). Unlike dropout that tailors the global model to small-size local sub-models, FedSPU maintains the full model architecture on each device but randomly freezes a certain percentage of neurons in the local model during training while updating the remaining neurons. This approach ensures that a portion of the local model remains personalized, thereby enhancing the model's robustness against biased parameters from other clients. Experimental results demonstrate that FedSPU outperforms federated dropout by 7.57% on average in terms of accuracy. Furthermore, an introduced early stopping scheme leads to a significant reduction of the training time by 24.8% to 70.4% while maintaining high accuracy.
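A minimal sketch of the stochastic-freezing idea, assuming PyTorch and a toy MLP (neither taken from the paper): a random fraction of neurons is frozen each local round by zeroing their gradient rows.

```python
# Sketch only: freeze a random subset of a layer's neurons for one round
# by zeroing their gradients via tensor hooks.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 2))
freeze_ratio = 0.4

def freeze_random_neurons(layer: nn.Linear, ratio: float):
    frozen = torch.rand(layer.out_features) < ratio      # per-neuron bool mask
    def row_hook(grad):
        grad = grad.clone()
        grad[frozen] = 0       # frozen neurons receive no update this round
        return grad
    return [layer.weight.register_hook(row_hook),
            layer.bias.register_hook(row_hook)]

handles = freeze_random_neurons(model[0], freeze_ratio)
x, y = torch.randn(8, 10), torch.randint(0, 2, (8,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
frac = model[0].weight.grad.abs().sum(dim=1).eq(0).float().mean()
print(f"frozen fraction: {frac:.2f}")                    # close to freeze_ratio
for h in handles:
    h.remove()                                           # refreshed next round
```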
[ "['Ziru Niu' 'Hai Dong' 'A. K. Qin']" ]
null
null
2403.11472
null
null
http://arxiv.org/pdf/2403.11472v1
2024-03-18T04:44:00Z
2024-03-18T04:44:00Z
Accelerating String-Key Learned Index Structures via Memoization-based Incremental Training
Learned indexes use machine learning models to learn the mappings between keys and their corresponding positions in key-value indexes. These indexes use the mapping information as training data. Learned indexes require frequent retrainings of their models to incorporate the changes introduced by update queries. To efficiently retrain the models, existing learned index systems often harness a linear algebraic QR factorization technique that performs matrix decomposition. This factorization approach processes all key-position pairs during each retraining, resulting in compute operations that grow linearly with the total number of keys and their lengths. Consequently, the retrainings create a severe performance bottleneck, especially for variable-length string keys, while the retrainings are crucial for maintaining high prediction accuracy and in turn, ensuring low query service latency. To address this performance problem, we develop an algorithm-hardware co-designed string-key learned index system, dubbed SIA. In designing SIA, we leverage a unique algorithmic property of the matrix decomposition-based training method. Exploiting the property, we develop a memoization-based incremental training scheme, which only requires computation over updated keys, while decomposition results of non-updated keys from previous computations can be reused. We further enhance SIA to offload a portion of this training process to an FPGA accelerator to not only relieve CPU resources for serving index queries (i.e., inference), but also accelerate the training itself. Our evaluation shows that compared to ALEX, LIPP, and SIndex, state-of-the-art learned index systems, SIA-accelerated learned indexes offer 2.6x and 3.4x higher throughput on the two real-world benchmark suites, YCSB and Twitter cache trace, respectively.
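The memoization idea can be illustrated with normal-equation sufficient statistics instead of the paper's QR factorization (a deliberate simplification); `encode` below is a hypothetical string-key featurizer:

```python
# Sketch: retraining a linear key->position model only folds in the
# contributions of updated keys; old keys are never reprocessed.
import numpy as np

d = 4                                  # feature dim of an encoded string key
G = np.zeros((d, d))                   # running  X^T X
c = np.zeros(d)                        # running  X^T y

def encode(key: str) -> np.ndarray:
    # hypothetical fixed-length encoding of a string key
    pad = (key.encode() + b"\x00" * d)[:d]
    return np.frombuffer(pad, dtype=np.uint8).astype(float) / 255.0

def add_keys(pairs):
    """Incremental training step: fold in only the new (key, position) pairs."""
    global G, c
    for key, pos in pairs:
        x = encode(key)
        G += np.outer(x, x)            # memoized statistic
        c += pos * x
    return np.linalg.solve(G + 1e-6 * np.eye(d), c)   # ridge-stabilized weights

w = add_keys([("apple", 0), ("banana", 1), ("cherry", 2)])
w = add_keys([("date", 3)])            # update touches one key only
print(w.round(3))
```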
[ "['Minsu Kim' 'Jinwoo Hwang' 'Guseul Heo' 'Seiyeon Cho' 'Divya Mahajan'\n 'Jongse Park']" ]
null
null
2403.11477
null
null
http://arxiv.org/pdf/2403.11477v2
2024-06-04T21:22:45Z
2024-03-18T04:52:11Z
Span-Based Optimal Sample Complexity for Weakly Communicating and General Average Reward MDPs
We study the sample complexity of learning an $\varepsilon$-optimal policy in an average-reward Markov decision process (MDP) under a generative model. For weakly communicating MDPs, we establish the complexity bound $\widetilde{O}(SA\frac{H}{\varepsilon^2})$, where $H$ is the span of the bias function of the optimal policy and $SA$ is the cardinality of the state-action space. Our result is the first that is minimax optimal (up to log factors) in all parameters $S$, $A$, $H$, and $\varepsilon$, improving on existing work that either assumes uniformly bounded mixing times for all policies or has suboptimal dependence on the parameters. We also initiate the study of sample complexity in general (multichain) average-reward MDPs. We argue a new transient time parameter $B$ is necessary, establish an $\widetilde{O}(SA\frac{B + H}{\varepsilon^2})$ complexity bound, and prove a matching (up to log factors) minimax lower bound. Both results are based on reducing the average-reward MDP to a discounted MDP, which requires new ideas in the general setting. To optimally analyze this reduction, we develop improved bounds for $\gamma$-discounted MDPs, showing that $\widetilde{O}(SA\frac{H}{(1-\gamma)^2\varepsilon^2})$ and $\widetilde{O}(SA\frac{B + H}{(1-\gamma)^2\varepsilon^2})$ samples suffice to learn $\varepsilon$-optimal policies in weakly communicating and in general MDPs, respectively. Both these results circumvent the well-known minimax lower bound of $\widetilde{\Omega}(SA\frac{1}{(1-\gamma)^3\varepsilon^2})$ for $\gamma$-discounted MDPs, and establish a quadratic rather than cubic horizon dependence for a fixed MDP instance.
[ "['Matthew Zurek' 'Yudong Chen']" ]
null
null
2403.11482
null
null
http://arxiv.org/pdf/2403.11482v1
2024-03-18T05:10:13Z
2024-03-18T05:10:13Z
SeisFusion: Constrained Diffusion Model with Input Guidance for 3D Seismic Data Interpolation and Reconstruction
Geographical, physical, or economic constraints often result in missing traces within seismic data, making the reconstruction of complete seismic data a crucial step in seismic data processing. Traditional methods for seismic data reconstruction require the selection of multiple empirical parameters and struggle to handle large-scale continuous missing data. With the development of deep learning, various neural networks have demonstrated powerful reconstruction capabilities. However, these convolutional neural networks represent a point-to-point reconstruction approach that may not cover the entire distribution of the dataset. Consequently, when dealing with seismic data featuring complex missing patterns, such networks may experience varying degrees of performance degradation. In response to this challenge, we propose a novel diffusion model reconstruction framework tailored for 3D seismic data. To constrain the results generated by the diffusion model, we introduce conditional supervision constraints into the diffusion model, constraining the generated data of the diffusion model based on the input data to be reconstructed. We introduce a 3D neural network architecture into the diffusion model, successfully extending the 2D diffusion model to 3D space. Additionally, we refine the model's generation process by incorporating the missing data into it, resulting in reconstructions with higher consistency. Through ablation studies determining optimal parameter values, our method exhibits superior reconstruction accuracy when applied to both field datasets and synthetic datasets, effectively addressing a wide range of complex missing patterns. Our implementation is available at https://github.com/WAL-l/SeisFusion.
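Schematically, an input-guided reverse step can look like the sketch below, which assumes a DDPM-style sampler and is not a faithful reproduction of SeisFusion's conditioning:

```python
# Schematic sketch: at each denoising step, observed traces are re-imposed
# so the generated volume stays consistent with the known data.
import torch

def guided_reverse_step(x_t, t, model, known, mask, alphas_cumprod):
    """mask == 1 where traces were observed; `model` predicts the noise."""
    a_t = alphas_cumprod[t]
    eps = model(x_t, t)
    # predict the clean volume from the noise estimate (simplified one-shot
    # form; a full DDPM step would use the posterior mean and variance)
    x_prev = (x_t - (1 - a_t).sqrt() * eps) / a_t.sqrt()
    # guidance: forward-diffuse the known data to the previous noise level
    # and paste it into the observed locations
    a_prev = alphas_cumprod[max(t - 1, 0)]
    known_noisy = a_prev.sqrt() * known + (1 - a_prev).sqrt() * torch.randn_like(known)
    return mask * known_noisy + (1 - mask) * x_prev

# toy usage on a 3D seismic volume (inline, crossline, time)
vol = torch.randn(1, 1, 8, 8, 16)
mask = (torch.rand_like(vol) > 0.3).float()            # ~30% missing traces
alphas_cumprod = torch.linspace(0.999, 0.01, 100)
dummy_model = lambda x, t: torch.zeros_like(x)         # stand-in denoiser
x = guided_reverse_step(vol, 50, dummy_model, vol * mask, mask, alphas_cumprod)
print(x.shape)
```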
[ "['Shuang Wang' 'Fei Deng' 'Peifan Jiang' 'Zishan Gong' 'Xiaolin Wei'\n 'Yuqing Wang']" ]
null
null
2403.11483
null
null
http://arxiv.org/pdf/2403.11483v1
2024-03-18T05:12:54Z
2024-03-18T05:12:54Z
Open-World Semi-Supervised Learning for Node Classification
Open-world semi-supervised learning (Open-world SSL) for node classification, which classifies unlabeled nodes into seen classes or multiple novel classes, is a practical but under-explored problem in the graph community. As only seen classes have human labels, they are usually better learned than novel classes, and thus exhibit smaller intra-class variances within the embedding space (termed the imbalance of intra-class variances between seen and novel classes). Based on empirical and theoretical analysis, we find the variance imbalance can negatively impact the model performance. Pre-trained feature encoders can alleviate this issue via producing compact representations for novel classes. However, creating general pre-trained encoders for various types of graph data has been proven to be challenging. As such, there is a demand for an effective method that does not rely on pre-trained graph encoders. In this paper, we propose an IMbalance-Aware method named OpenIMA for Open-world semi-supervised node classification, which trains the node classification model from scratch via contrastive learning with bias-reduced pseudo labels. Extensive experiments on seven popular graph benchmarks demonstrate the effectiveness of OpenIMA, and the source code is available on GitHub.
[ "['Yanling Wang' 'Jing Zhang' 'Lingxi Zhang' 'Lixin Liu' 'Yuxiao Dong'\n 'Cuiping Li' 'Hong Chen' 'Hongzhi Yin']" ]
null
null
2403.11491
null
null
http://arxiv.org/pdf/2403.11491v1
2024-03-18T05:49:45Z
2024-03-18T05:49:45Z
Uncertainty-Calibrated Test-Time Model Adaptation without Forgetting
Test-time adaptation (TTA) seeks to tackle potential distribution shifts between training and test data by adapting a given model w.r.t. any test sample. Although recent TTA has shown promising performance, we still face two key challenges: 1) prior methods perform backpropagation for each test sample, resulting in unbearable optimization costs for many applications; 2) while existing TTA can significantly improve the test performance on out-of-distribution data, they often suffer from severe performance degradation on in-distribution data after TTA (known as forgetting). To this end, we have proposed an Efficient Anti-Forgetting Test-Time Adaptation (EATA) method which develops an active sample selection criterion to identify reliable and non-redundant samples for test-time entropy minimization. To alleviate forgetting, EATA introduces a Fisher regularizer estimated from test samples to constrain important model parameters from drastic changes. However, in EATA, the adopted entropy loss consistently assigns higher confidence to predictions even for samples that are inherently uncertain, leading to overconfident predictions. To tackle this, we further propose EATA with Calibration (EATA-C) to separately exploit the reducible model uncertainty and the inherent data uncertainty for calibrated TTA. Specifically, we measure the model uncertainty by the divergence between predictions from the full network and its sub-networks, on which we propose a divergence loss to encourage consistent predictions instead of overconfident ones. To further recalibrate prediction confidence, we utilize the disagreement among predicted labels as an indicator of the data uncertainty, and then devise a min-max entropy regularizer to selectively increase and decrease prediction confidence for different samples. Experiments on image classification and semantic segmentation verify the effectiveness of our methods.
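A minimal sketch (not the official EATA code) of the entropy-minimization-with-selection step, assuming PyTorch, updating only normalization-layer affine parameters, and using the common $0.4\ln C$ entropy threshold:

```python
# Sketch: adapt only BatchNorm affine parameters at test time, and only on
# low-entropy (reliable) samples; the Fisher regularizer is omitted here.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 32), nn.BatchNorm1d(32), nn.ReLU(),
                      nn.Linear(32, 10))

params = [p for m in model.modules() if isinstance(m, nn.BatchNorm1d)
          for p in m.parameters()]
opt = torch.optim.SGD(params, lr=1e-3)

E0 = 0.4 * torch.log(torch.tensor(10.0))   # entropy threshold, C = 10 classes

def tta_step(x):
    logits = model(x)
    probs = logits.softmax(-1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(-1)
    keep = entropy < E0                    # select reliable samples only
    if keep.any():
        loss = entropy[keep].mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return keep.float().mean().item()

print("selected fraction:", tta_step(torch.randn(64, 16)))
```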
[ "['Mingkui Tan' 'Guohao Chen' 'Jiaxiang Wu' 'Yifan Zhang' 'Yaofo Chen'\n 'Peilin Zhao' 'Shuaicheng Niu']" ]
null
null
2403.11495
null
null
http://arxiv.org/pdf/2403.11495v1
2024-03-18T05:59:56Z
2024-03-18T05:59:56Z
Semantic-Enhanced Representation Learning for Road Networks with Temporal Dynamics
In this study, we introduce a novel framework called Toast for learning general-purpose representations of road networks, along with its advanced counterpart DyToast, designed to enhance the integration of temporal dynamics to boost the performance of various time-sensitive downstream tasks. Specifically, we propose to encode two pivotal semantic characteristics intrinsic to road networks: traffic patterns and traveling semantics. To achieve this, we refine the skip-gram module by incorporating auxiliary objectives aimed at predicting the traffic context associated with a target road segment. Moreover, we leverage trajectory data and design pre-training strategies based on Transformer to distill traveling semantics on road networks. DyToast further augments this framework by employing unified trigonometric functions characterized by their beneficial properties, enabling the capture of temporal evolution and dynamic nature of road networks more effectively. With these proposed techniques, we can obtain representations that encode multi-faceted aspects of knowledge within road networks, applicable across both road segment-based applications and trajectory-based applications. Extensive experiments on two real-world datasets across three tasks demonstrate that our proposed framework consistently outperforms the state-of-the-art baselines by a significant margin.
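One plausible reading of the trigonometric component, shown only as a sketch with hypothetical names: a learnable cosine time encoding that modulates static road-segment embeddings.

```python
# Sketch: learnable frequencies and phases give a periodic time encoding,
# which multiplicatively modulates per-segment embeddings.
import torch
import torch.nn as nn

class TrigTimeEncoder(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.freq = nn.Parameter(torch.randn(dim))   # learnable frequencies
        self.phase = nn.Parameter(torch.zeros(dim))  # learnable phases

    def forward(self, t: torch.Tensor) -> torch.Tensor:
        # t: (batch,) timestamps, e.g. minute-of-week; output: (batch, dim)
        return torch.cos(t.unsqueeze(-1) * self.freq + self.phase)

enc = TrigTimeEncoder(dim=8)
segment_emb = torch.randn(4, 8)                      # static road embeddings
t = torch.tensor([0.0, 60.0, 120.0, 180.0])
dynamic_emb = segment_emb * enc(t)                   # time-modulated embedding
print(dynamic_emb.shape)  # (4, 8)
```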
[ "['Yile Chen' 'Xiucheng Li' 'Gao Cong' 'Zhifeng Bao' 'Cheng Long']" ]
null
null
2403.11497
null
null
http://arxiv.org/pdf/2403.11497v1
2024-03-18T06:04:02Z
2024-03-18T06:04:02Z
Do CLIPs Always Generalize Better than ImageNet Models?
Large vision language models, such as CLIPs, have revolutionized modern machine learning. CLIPs have demonstrated great generalizability under distribution shifts, supported by an increasing body of literature. However, the evaluation datasets for CLIPs are variations primarily designed for ImageNet benchmarks, which may not fully reflect the extent to which CLIPs, e.g., pre-trained on LAION, are robust to spurious correlations. To bridge the gap, we collect a real-world dataset called CounterAnimal that contains realistic spurious features found in animal photos. CounterAnimal consists of a) the common group: comprising animals on common backgrounds, and b) the counter group: including animals on unusual backgrounds. The performance drops from the common to counter groups quantify the reliance of models on spurious features (i.e., backgrounds) to predict the animals. We find that CLIPs trained on either LAION or the OpenAI data exhibit notable performance drops on the counter group. Surprisingly, we observe that single-modal models trained on ImageNet are more robust than CLIPs. We provide both theoretical and empirical explanations for why CLIPs still learn spurious features. Our findings suggest that distribution shifts remain an open problem for CLIPs, and one needs to be cautious about test setups when evaluating foundation models pre-trained on a significantly different scale and distribution.
[ "['Qizhou Wang' 'Yong Lin' 'Yongqiang Chen' 'Ludwig Schmidt' 'Bo Han'\n 'Tong Zhang']" ]
null
null
2403.11504
null
null
http://arxiv.org/pdf/2403.11504v1
2024-03-18T06:19:37Z
2024-03-18T06:19:37Z
MLVICX: Multi-Level Variance-Covariance Exploration for Chest X-ray Self-Supervised Representation Learning
Self-supervised learning (SSL) is potentially useful in reducing the need for manual annotation and making deep learning models accessible for medical image analysis tasks. By leveraging the representations learned from unlabeled data, self-supervised models perform well on tasks that require little to no fine-tuning. However, for medical images, like chest X-rays, which are characterized by complex anatomical structures and diverse clinical conditions, there arises a need for representation learning techniques that can encode fine-grained details while preserving the broader contextual information. In this context, we introduce MLVICX (Multi-Level Variance-Covariance Exploration for Chest X-ray Self-Supervised Representation Learning), an approach to capture rich representations in the form of embeddings from chest X-ray images. Central to our approach is a novel multi-level variance and covariance exploration strategy that empowers the model to detect diagnostically meaningful patterns while reducing redundancy effectively. By enhancing the variance and covariance of the learned embeddings, MLVICX promotes the retention of critical medical insights by adapting both global and local contextual details. We demonstrate the performance of MLVICX in advancing self-supervised chest X-ray representation learning through comprehensive experiments. The performance enhancements we observe across various downstream tasks highlight the significance of the proposed approach in enhancing the utility of chest X-ray embeddings for precision medical diagnosis and comprehensive image analysis. For pretraining, we used the NIH-Chest X-ray dataset, while for downstream tasks, we utilized NIH-Chest X-ray, Vinbig-CXR, RSNA pneumonia, and SIIM-ACR Pneumothorax datasets. Overall, we observe more than 3% performance gains over SOTA SSL approaches in various downstream tasks.
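The variance and covariance terms can be sketched in a VICReg-like form (an analogy; the paper's multi-level formulation is its own):

```python
# Sketch: a variance hinge keeps each embedding dimension informative, and an
# off-diagonal covariance penalty decorrelates dimensions to reduce redundancy.
import torch

def var_cov_terms(z: torch.Tensor, gamma: float = 1.0):
    """z: (N, D) batch of embeddings."""
    z = z - z.mean(dim=0)
    std = z.var(dim=0).add(1e-4).sqrt()
    var_loss = torch.relu(gamma - std).mean()        # keep per-dim variance up
    cov = (z.T @ z) / (z.shape[0] - 1)
    off_diag = cov - torch.diag(torch.diag(cov))
    cov_loss = off_diag.pow(2).sum() / z.shape[1]    # decorrelate dimensions
    return var_loss, cov_loss

v, c = var_cov_terms(torch.randn(32, 64))
print(float(v), float(c))
```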
[ "['Azad Singh' 'Vandan Gorade' 'Deepak Mishra']" ]
null
null
2403.11505
null
null
http://arxiv.org/pdf/2403.11505v2
2024-03-27T20:10:05Z
2024-03-18T06:20:49Z
COVID-19 detection from pulmonary CT scans using a novel EfficientNet with attention mechanism
Manual analysis and diagnosis of COVID-19 through the examination of Computed Tomography (CT) images of the lungs can be time-consuming and result in errors, especially given the high volume of patients and numerous images per patient. So, we address the need for automation of this task by developing a new deep learning model-based pipeline. Our motivation was sparked by the CVPR Workshop on "Domain Adaptation, Explainability and Fairness in AI for Medical Image Analysis", more specifically, the "COVID-19 Diagnosis Competition (DEF-AI-MIA COV19D)" under the same Workshop. This challenge provides an opportunity to assess our proposed pipeline for COVID-19 detection from CT scan images. The same pipeline incorporates the original EfficientNet, but with an added Attention Mechanism: EfficientNet-AM. Also, unlike the traditional/past pipelines, which relied on a pre-processing step, our pipeline takes the raw selected input images without any such step, except for an image-selection step to simply reduce the number of CT images required for training and/or testing. Moreover, our pipeline is computationally efficient, as, for example, it does not incorporate a decoder for segmenting the lungs. It also does not combine different backbones nor combine RNN with a backbone, as other pipelines in the past did. Nevertheless, our pipeline still outperforms all approaches presented by other teams in last year's instance of the same challenge, at least based on the validation subset of the competition dataset.
[ "['Ramy Farag' 'Parth Upadhyay' 'Yixiang Gao' 'Jacket Demby'\n 'Katherin Garces Montoya' 'Seyed Mohamad Ali Tousi' 'Gbenga Omotara'\n 'Guilherme DeSouza']" ]
null
null
2403.11520
null
null
http://arxiv.org/pdf/2403.11520v1
2024-03-18T07:14:21Z
2024-03-18T07:14:21Z
State-Separated SARSA: A Practical Sequential Decision-Making Algorithm with Recovering Rewards
While many multi-armed bandit algorithms assume that rewards for all arms are constant across rounds, this assumption does not hold in many real-world scenarios. This paper considers the setting of recovering bandits (Pike-Burke & Grunewalder, 2019), where the reward depends on the number of rounds elapsed since the last time an arm was pulled. We propose a new reinforcement learning (RL) algorithm tailored to this setting, named the State-Separated SARSA (SS-SARSA) algorithm, which treats rounds as states. The SS-SARSA algorithm achieves efficient learning by reducing the number of state combinations required for Q-learning/SARSA, which often suffers from combinatorial issues for large-scale RL problems. Additionally, it makes minimal assumptions about the reward structure and offers lower computational complexity. Furthermore, we prove asymptotic convergence to an optimal policy under mild assumptions. Simulation studies demonstrate the superior performance of our algorithm across various settings.
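A hedged sketch of the state-separated idea: one small Q-table per arm, indexed by the (capped) number of rounds since that arm was last pulled, so the joint combinatorial state never materializes. The reward model, cap, and simplified update below are illustrative assumptions, not the paper's specification.

```python
# Sketch: per-arm Q-table over elapsed-round states, with a simplified
# SARSA-style bootstrap and epsilon-greedy exploration.
import numpy as np

rng = np.random.default_rng(0)
K, S_MAX, T = 3, 10, 5000
alpha, eps = 0.1, 0.1

Q = np.zeros((K, S_MAX + 1))                 # Q[arm, elapsed rounds]
elapsed = np.full(K, S_MAX)                  # rounds since each arm was pulled

def reward(arm, s):
    # recovering reward: grows with elapsed time, with an arm-specific scale
    return (arm + 1) * (1 - np.exp(-0.3 * s)) + 0.1 * rng.normal()

for t in range(T):
    if rng.random() < eps:
        a = rng.integers(K)
    else:
        a = int(np.argmax([Q[k, elapsed[k]] for k in range(K)]))
    s = elapsed[a]
    r = reward(a, s)
    # after the pull, arm a's clock resets while the others advance
    elapsed = np.minimum(elapsed + 1, S_MAX)
    elapsed[a] = 1
    # simplified SARSA-style update toward reward plus the arm's next value
    Q[a, s] += alpha * (r + Q[a, elapsed[a]] - Q[a, s])

print(Q.round(2))
```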
[ "['Yuto Tanimoto' 'Kenji Fukumizu']" ]
null
null
2403.11521
null
null
http://arxiv.org/pdf/2403.11521v1
2024-03-18T07:15:01Z
2024-03-18T07:15:01Z
A Data-driven Approach for Rapid Detection of Aeroelastic Modes from Flutter Flight Test Based on Limited Sensor Measurements
Flutter flight test involves the evaluation of the airframe's aeroelastic stability by applying artificial excitation on the aircraft lifting surfaces. The subsequent responses are captured and analyzed to extract the frequencies and damping characteristics of the system. However, noise contamination, turbulence, non-optimal excitation of modes, and sensor malfunction in one or more sensors make it time-consuming and corrupt the extraction process. In order to expedite the process of identifying and analyzing aeroelastic modes, this study implements a time-delay embedded Dynamic Mode Decomposition technique. This approach is complemented by Robust Principal Component Analysis methodology, and a sparsity-promoting criterion which enables the automatic and optimal selection of sparse modes. The anonymized flutter flight test data, provided by the fifth author of this research paper, is utilized in this implementation. The methodology assumes no knowledge of the input excitation, only deals with the responses captured by accelerometer channels, and rapidly identifies the aeroelastic modes. By incorporating a compressed sensing algorithm, the methodology gains the ability to identify aeroelastic modes, even when the number of available sensors is limited. This augmentation greatly enhances the methodology's robustness and effectiveness, making it an excellent choice for real-time implementation during flutter test campaigns.
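A compact sketch of the time-delay (Hankel) DMD core on synthetic sensor data; the RPCA denoising, sparsity-promoting mode selection, and compressed sensing pieces from the abstract are omitted:

```python
# Sketch: Hankel-stack the responses, fit a linear propagator via truncated
# SVD, and read modal frequencies and damping off its eigenvalues.
import numpy as np

def hankel(x, d):
    """Stack d time-shifted copies of signal matrix x (channels, samples)."""
    n = x.shape[1] - d + 1
    return np.vstack([x[:, i:i + n] for i in range(d)])

dt, t = 0.01, np.arange(0, 5, 0.01)
# synthetic 2-channel response: a lightly damped 3 Hz mode plus noise
sig = np.exp(-0.2 * t) * np.sin(2 * np.pi * 3 * t)
X = np.vstack([sig, 0.5 * sig]) \
    + 0.01 * np.random.default_rng(1).normal(size=(2, t.size))

H = hankel(X, d=40)
X1, X2 = H[:, :-1], H[:, 1:]
U, s, Vt = np.linalg.svd(X1, full_matrices=False)
r = 6                                            # truncation rank
A = U[:, :r].T @ X2 @ Vt[:r].T @ np.diag(1 / s[:r])
lam = np.linalg.eigvals(A).astype(complex)
omega = np.log(lam) / dt                         # continuous-time eigenvalues
freq_hz = np.abs(omega.imag) / (2 * np.pi)
damping = -omega.real / (np.abs(omega) + 1e-12)
print(np.round(sorted(set(np.round(freq_hz, 2))), 2))  # ~3 Hz should appear
```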
[ "['Arpan Das' 'Pier Marzocca' 'Giuliano Coppotelli' 'Oleg Levinski'\n 'Paul Taylor']" ]
null
null
2403.11522
null
null
http://arxiv.org/pdf/2403.11522v2
2024-03-22T10:28:05Z
2024-03-18T07:22:31Z
LOOPer: A Learned Automatic Code Optimizer For Polyhedral Compilers
While polyhedral compilers have shown success in implementing advanced code transformations, they still have challenges in selecting the most profitable transformations that lead to the best speedups. This has motivated the use of machine learning to build cost models to guide the search for polyhedral optimizations. State-of-the-art polyhedral compilers have demonstrated a viable proof-of-concept of this approach. While such a proof-of-concept has shown promise, it still has significant limitations. State-of-the-art polyhedral compilers that use a deep-learning cost model only support a small subset of affine transformations, limiting their ability to apply complex code transformations. They also only support simple programs that have a single loop nest and a rectangular iteration domain, limiting their applicability to many programs. These limitations significantly impact the generality of such compilers and autoschedulers and put into question the whole approach. In this paper, we introduce LOOPer, the first polyhedral autoscheduler that uses a deep-learning based cost model and covers a large set of affine transformations and programs. It supports the exploration of a large set of affine transformations, allowing the application of complex sequences of polyhedral transformations. It also supports the optimization of programs with multiple loop nests and with rectangular and non-rectangular iteration domains, allowing the optimization of an extensive set of programs. We implement and evaluate LOOPer and show that it achieves speedups over the state-of-the-art. On the Polybench benchmark, LOOPer achieves a geometric mean speedup of 1.59x over Tiramisu. LOOPer also achieves competitive speedups with a geometric mean speedup of 1.34x over Pluto, a state-of-the-art polyhedral compiler that does not use a machine-learning based cost model.
[ "['Massinissa Merouani' 'Khaled Afif Boudaoud' 'Iheb Nassim Aouadj'\n 'Nassim Tchoulak' 'Islem Kara Bernou' 'Hamza Benyamina'\n 'Fatima Benbouzid-Si Tayeb' 'Karima Benatchba' 'Hugh Leather'\n 'Riyadh Baghdadi']" ]
null
null
2403.11532
null
null
http://arxiv.org/pdf/2403.11532v1
2024-03-18T07:35:25Z
2024-03-18T07:35:25Z
Out-of-Distribution Detection Should Use Conformal Prediction (and Vice-versa?)
Research on Out-Of-Distribution (OOD) detection focuses mainly on building scores that efficiently distinguish OOD data from In Distribution (ID) data. On the other hand, Conformal Prediction (CP) uses non-conformity scores to construct prediction sets with probabilistic coverage guarantees. In this work, we propose to use CP to better assess the efficiency of OOD scores. Specifically, we emphasize that in standard OOD benchmark settings, evaluation metrics can be overly optimistic due to the finite sample size of the test dataset. Based on the work of (Bates et al., 2022), we define new conformal AUROC and conformal FPR@TPR95 metrics, which are corrections that provide probabilistic conservativeness guarantees on the variability of these metrics. We show the effect of these corrections on two reference OOD and anomaly detection benchmarks, OpenOOD (Yang et al., 2022) and ADBench (Han et al., 2022). We also show that the benefits of using OOD together with CP apply the other way around by using OOD scores as non-conformity scores, which results in improving upon current CP methods. One of the key messages of these contributions is that since OOD is concerned with designing scores and CP with interpreting these scores, the two fields may be inherently intertwined.
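The CP view of OOD scoring can be sketched in a few lines: calibration scores from held-out ID data turn a raw test score into a conformal p-value with finite-sample validity (illustrative only; the paper's conformal AUROC/FPR corrections are more involved):

```python
# Sketch: conformal p-values from OOD scores, using an ID calibration set.
import numpy as np

rng = np.random.default_rng(0)
cal_scores = rng.normal(0, 1, size=500)      # OOD scores on held-out ID data

def conformal_pvalue(score: float) -> float:
    # higher score = more OOD-looking; the p-value is valid for ID test points
    return (1 + np.sum(cal_scores >= score)) / (len(cal_scores) + 1)

id_test, ood_test = rng.normal(0, 1), rng.normal(3, 1)
print("ID p-value: ", round(conformal_pvalue(id_test), 3))
print("OOD p-value:", round(conformal_pvalue(ood_test), 3))
# flag OOD when the p-value falls below a chosen level, e.g. 0.05
```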
[ "['Paul Novello' 'Joseba Dalmau' 'Léo Andeol']" ]
null
null
2403.11536
null
null
http://arxiv.org/pdf/2403.11536v1
2024-03-18T07:41:39Z
2024-03-18T07:41:39Z
OCR is All you need: Importing Multi-Modality into Image-based Defect Detection System
Automatic optical inspection (AOI) plays a pivotal role in the manufacturing process, predominantly leveraging high-resolution imaging instruments for scanning purposes. It detects anomalies by analyzing image textures or patterns, making it an essential tool in industrial manufacturing and quality control. Despite its importance, the deployment of models for AOI often faces challenges. These include limited sample sizes, which hinder effective feature learning, variations among source domains, and sensitivities to changes in lighting and camera positions during imaging. These factors collectively compromise the accuracy of model predictions. Traditional AOI often fails to capitalize on the rich mechanism-parameter information from machines or inside images, including statistical parameters, which typically benefit AOI classification. To address this, we introduce an external modality-guided data mining framework, primarily rooted in optical character recognition (OCR), to extract statistical features from images as a second modality to enhance performance, termed OANet (Ocr-Aoi-Net). A key aspect of our approach is the alignment of external modality features, extracted using a single modality-aware model, with image features encoded by a convolutional neural network. This synergy enables a more refined fusion of semantic representations from different modalities. We further introduce feature refinement and a gating function in our OANet to optimize the combination of these features, enhancing inference and decision-making capabilities. Experimental outcomes show that our methodology considerably boosts the recall rate of the defect detection model and maintains high robustness even in challenging scenarios.
[ "['Chih-Chung Hsu' 'Chia-Ming Lee' 'Chun-Hung Sun' 'Kuang-Ming Wu']" ]
null
null
2403.11537
null
null
http://arxiv.org/pdf/2403.11537v1
2024-03-18T07:43:14Z
2024-03-18T07:43:14Z
Semantic Prompting with Image-Token for Continual Learning
Continual learning aims to refine model parameters for new tasks while retaining knowledge from previous tasks. Recently, prompt-based learning has emerged, leveraging pre-trained models that are prompted to learn subsequent tasks without relying on a rehearsal buffer. Although this approach has demonstrated outstanding results, existing methods depend on a preceding task-selection process to choose appropriate prompts. However, imperfect task selection may negatively impact performance, particularly in scenarios where the number of tasks is large or task distributions are imbalanced. To address this issue, we introduce I-Prompt, a task-agnostic approach that focuses on the visual semantic information of image tokens to eliminate task prediction. Our method consists of semantic prompt matching, which determines prompts based on similarities between tokens, and image token-level prompting, which applies prompts directly to image tokens in the intermediate layers. Consequently, our method achieves competitive performance on four benchmarks while significantly reducing training time compared to state-of-the-art methods. Moreover, we demonstrate the superiority of our method across various scenarios through extensive experiments.
[ "['Jisu Han' 'Jaemin Na' 'Wonjun Hwang']" ]
null
null
2403.11544
null
null
http://arxiv.org/pdf/2403.11544v2
2024-03-20T01:22:11Z
2024-03-18T07:54:11Z
RL in Markov Games with Independent Function Approximation: Improved Sample Complexity Bound under the Local Access Model
Efficiently learning equilibria with large state and action spaces in general-sum Markov games while overcoming the curse of multi-agency is a challenging problem. Recent works have attempted to solve this problem by employing independent linear function classes to approximate the marginal $Q$-value for each agent. However, existing sample complexity bounds under such a framework have a suboptimal dependency on the desired accuracy $\varepsilon$ or the action space. In this work, we introduce a new algorithm, Lin-Confident-FTRL, for learning coarse correlated equilibria (CCE) with local access to the simulator, i.e., one can interact with the underlying environment on the visited states. Up to a logarithmic dependence on the size of the state space, Lin-Confident-FTRL learns $\varepsilon$-CCE with a provable optimal accuracy bound $O(\varepsilon^{-2})$ and gets rid of the linear dependency on the action space, while scaling polynomially with relevant problem parameters (such as the number of agents and time horizon). Moreover, our analysis of Lin-Confident-FTRL generalizes the virtual policy iteration technique in the single-agent local planning literature, which yields a new computationally efficient algorithm with a tighter sample complexity bound when assuming random access to the simulator.
[ "['Junyi Fan' 'Yuxuan Han' 'Jialin Zeng' 'Jian-Feng Cai' 'Yang Wang'\n 'Yang Xiang' 'Jiheng Zhang']" ]
null
null
2403.11563
null
null
http://arxiv.org/pdf/2403.11563v1
2024-03-18T08:33:56Z
2024-03-18T08:33:56Z
Advancing Neuromorphic Computing: Mixed-Signal Design Techniques Leveraging Brain Code Units and Fundamental Code Units
This paper introduces a groundbreaking digital neuromorphic architecture that innovatively integrates Brain Code Unit (BCU) and Fundamental Code Unit (FCU) using mixed-signal design methodologies. Leveraging open-source datasets and the latest advances in materials science, our research focuses on enhancing the computational efficiency, accuracy, and adaptability of neuromorphic systems. The core of our approach lies in harmonizing the precision and scalability of digital systems with the robustness and energy efficiency of analog processing. Through experimentation, we demonstrate the effectiveness of our system across various metrics. The BCU achieved an accuracy of 88.0% and a power efficiency of 20.0 GOP/s/W, while the FCU recorded an accuracy of 86.5% and a power efficiency of 18.5 GOP/s/W. Our mixed-signal design approach significantly improved latency and throughput, achieving a latency as low as 0.75 ms and throughput up to 213 TOP/s. These results firmly establish the potential of our architecture in neuromorphic computing, providing a solid foundation for future developments in this domain. Our study underscores the feasibility of mixed-signal neuromorphic systems and their promise in advancing the field, particularly in applications requiring high efficiency and adaptability.
[ "['Murat Isik' 'Sols Miziev' 'Wiktoria Pawlak' 'Newton Howard']" ]
null
null
2403.11565
null
null
http://arxiv.org/pdf/2403.11565v2
2024-06-27T10:07:10Z
2024-03-18T08:35:17Z
Decentralized Stochastic Subgradient Methods for Nonsmooth Nonconvex Optimization
In this paper, we concentrate on decentralized optimization problems with nonconvex and nonsmooth objective functions, especially on the decentralized training of nonsmooth neural networks. We introduce a unified framework to analyze the global convergence of decentralized stochastic subgradient-based methods. We prove the global convergence of our proposed framework under mild conditions, by establishing that the generated sequence asymptotically approximates the trajectories of its associated differential inclusion. Furthermore, we establish that our proposed framework covers a wide range of existing efficient decentralized subgradient-based methods, including decentralized stochastic subgradient descent (DSGD), DSGD with gradient-tracking technique (DSGD-T), and DSGD with momentum (DSGD-M). In addition, we introduce the sign map to regularize the update directions in DSGD-M, and show it is enclosed in our proposed framework. Consequently, our convergence results establish, for the first time, global convergence of these methods when applied to nonsmooth nonconvex objectives. Preliminary numerical experiments demonstrate that our proposed framework yields highly efficient decentralized subgradient-based methods with convergence guarantees in the training of nonsmooth neural networks.
[ "['Siyuan Zhang' 'Nachuan Xiao' 'Xin Liu']" ]
null
null
2403.11574
null
null
http://arxiv.org/pdf/2403.11574v1
2024-03-18T08:50:30Z
2024-03-18T08:50:30Z
Offline Multitask Representation Learning for Reinforcement Learning
We study offline multitask representation learning in reinforcement learning (RL), where a learner is provided with an offline dataset from different tasks that share a common representation and is asked to learn the shared representation. We theoretically investigate offline multitask low-rank RL, and propose a new algorithm called MORL for offline multitask representation learning. Furthermore, we examine downstream RL in reward-free, offline and online scenarios, where a new task is introduced to the agent that shares the same representation as the upstream offline tasks. Our theoretical results demonstrate the benefits of using the learned representation from the upstream offline task instead of directly learning the representation of the low-rank model.
[ "['Haque Ishfaq' 'Thanh Nguyen-Tang' 'Songtao Feng' 'Raman Arora'\n 'Mengdi Wang' 'Ming Yin' 'Doina Precup']" ]
null
null
2403.11585
null
null
http://arxiv.org/pdf/2403.11585v2
2024-04-30T17:56:33Z
2024-03-18T08:58:47Z
Linguacodus: A Synergistic Framework for Transformative Code Generation in Machine Learning Pipelines
In the ever-evolving landscape of machine learning, seamless translation of natural language descriptions into executable code remains a formidable challenge. This paper introduces Linguacodus, an innovative framework designed to tackle this challenge by deploying a dynamic pipeline that iteratively transforms natural language task descriptions into code through high-level data-shaping instructions. The core of Linguacodus is a fine-tuned large language model (LLM), empowered to evaluate diverse solutions for various problems and select the most fitting one for a given task. This paper details the fine-tuning process, and sheds light on how natural language descriptions can be translated into functional code. Linguacodus represents a substantial leap towards automated code generation, effectively bridging the gap between task descriptions and executable code. It holds great promise for advancing machine learning applications across diverse domains. Additionally, we propose an algorithm capable of transforming a natural description of an ML task into code with minimal human interaction. In extensive experiments on a vast machine learning code dataset originating from Kaggle, we showcase the effectiveness of Linguacodus. The investigations highlight its potential applications across diverse domains, emphasizing its impact on applied machine learning in various scientific fields.
[ "['Ekaterina Trofimova' 'Emil Sataev' 'Andrey E. Ustyuzhanin']" ]
null
null
2403.11591
null
null
http://arxiv.org/pdf/2403.11591v1
2024-03-18T09:10:39Z
2024-03-18T09:10:39Z
A physics-informed neural network method for the approximation of slow invariant manifolds for the general class of stiff systems of ODEs
We present a physics-informed neural network (PINN) approach for the discovery of slow invariant manifolds (SIMs), for the most general class of fast/slow dynamical systems of ODEs. In contrast to other machine learning (ML) approaches that construct reduced order black box surrogate models using simple regression, and/or require a priori knowledge of the fast and slow variables, our approach, simultaneously decomposes the vector field into fast and slow components and provides a functional of the underlying SIM in a closed form. The decomposition is achieved by finding a transformation of the state variables to the fast and slow ones, which enables the derivation of an explicit, in terms of fast variables, SIM functional. The latter is obtained by solving a PDE corresponding to the invariance equation within the Geometric Singular Perturbation Theory (GSPT) using a single-layer feedforward neural network with symbolic differentiation. The performance of the proposed physics-informed ML framework is assessed via three benchmark problems: the Michaelis-Menten, the target-mediated drug disposition (TMDD) reaction model and a fully competitive substrate-inhibitor (fCSI) mechanism. We also provide a comparison with other GSPT methods, namely the quasi steady state approximation (QSSA), the partial equilibrium approximation (PEA) and CSP with one and two iterations. We show that the proposed PINN scheme provides SIM approximations, of equivalent or even higher accuracy, than those provided by QSSA, PEA and CSP, especially close to the boundaries of the underlying SIMs.
[ "['Dimitrios G. Patsatzis' 'Lucia Russo' 'Constantinos Siettos']" ]
null
null
2403.11603
null
null
http://arxiv.org/pdf/2403.11603v1
2024-03-18T09:25:59Z
2024-03-18T09:25:59Z
Fair Distributed Cooperative Bandit Learning on Networks for Intelligent Internet of Things Systems (Technical Report)
In intelligent Internet of Things (IoT) systems, edge servers within a network exchange information with their neighbors and collect data from sensors to complete delivered tasks. In this paper, we propose a multiplayer multi-armed bandit model for intelligent IoT systems to facilitate data collection and incorporate fairness considerations. In our model, we establish an effective communication protocol that helps servers cooperate with their neighbors. Then we design a distributed cooperative bandit algorithm, DC-ULCB, enabling servers to collaboratively select sensors to maximize data rates while maintaining fairness in their choices. We conduct an analysis of the reward regret and fairness regret of DC-ULCB, and prove that both regrets have logarithmic instance-dependent upper bounds. Additionally, through extensive simulations, we validate that DC-ULCB outperforms existing algorithms in maximizing reward and ensuring fairness.
[ "['Ziqun Chen' 'Kechao Cai' 'Jinbei Zhang' 'Zhigang Yu']" ]
null
null
2403.11624
null
null
http://arxiv.org/pdf/2403.11624v3
2024-03-29T14:20:17Z
2024-03-18T09:56:00Z
Dual-Channel Multiplex Graph Neural Networks for Recommendation
Efficient recommender systems play a crucial role in accurately capturing user and item attributes that mirror individual preferences. Some existing recommendation techniques have started to shift their focus towards modeling various types of interaction relations between users and items in real-world recommendation scenarios, such as clicks, marking favorites, and purchases on online shopping platforms. Nevertheless, these approaches still grapple with two significant shortcomings: (1) Insufficient modeling and exploitation of the impact of various behavior patterns formed by multiplex relations between users and items on representation learning, and (2) ignoring the effect of different relations in the behavior patterns on the target relation in recommender system scenarios. In this study, we introduce a novel recommendation framework, Dual-Channel Multiplex Graph Neural Network (DCMGNN), which addresses the aforementioned challenges. It incorporates an explicit behavior pattern representation learner to capture the behavior patterns composed of multiplex user-item interaction relations, and includes a relation chain representation learning and a relation chain-aware encoder to discover the impact of various auxiliary relations on the target relation, the dependencies between different relations, and mine the appropriate order of relations in a behavior pattern. Extensive experiments on three real-world datasets demonstrate that our model surpasses various state-of-the-art recommendation methods. It outperforms the best baselines by 10.06% and 12.15% on average across all datasets in terms of R@10 and N@10 respectively.
[ "['Xiang Li' 'Chaofan Fu' 'Zhongying Zhao' 'Guanjie Zheng' 'Chao Huang'\n 'Junyu Dong' 'Yanwei Yu']" ]