categories: string
doi: string
id: string
year: float64
venue: string
link: string
updated: string
published: string
title: string
abstract: string
authors: list
null
null
2405.10970
null
null
http://arxiv.org/pdf/2405.10970v1
2024-05-08T18:08:11Z
2024-05-08T18:08:11Z
Untargeted Adversarial Attack on Knowledge Graph Embeddings
Knowledge graph embedding (KGE) methods have achieved great success in handling various knowledge graph (KG) downstream tasks. However, KGE methods may learn biased representations on low-quality KGs that are prevalent in the real world. Some recent studies propose adversarial attacks to investigate the vulnerabilities of KGE methods, but their attackers are target-oriented: the KGE method and the target triples to predict are given in advance, which limits practicability. In this work, we explore untargeted attacks with the aim of reducing the global performance of KGE methods over a set of unknown test triples and conducting systematic analyses of KGE robustness. Since logic rules can effectively summarize the global structure of a KG, we develop rule-based attack strategies to improve attack efficiency. In particular, we consider adversarial deletion, which learns rules and applies them to score triple importance and delete important triples, and adversarial addition, which corrupts the learned rules and applies them to generate negative triples as perturbations. Extensive experiments on two datasets over three representative classes of KGE methods demonstrate the effectiveness of our proposed untargeted attacks in diminishing link prediction results. We also find that different KGE methods exhibit different robustness to untargeted attacks. For example, the robustness of methods built on graph neural networks and logic rules depends on the density of the graph, whereas rule-based methods like NCRL are easily misled by adversarial addition attacks into capturing negative rules.
[ "['Tianzhe Zhao' 'Jiaoyan Chen' 'Yanchi Ru' 'Qika Lin' 'Yuxia Geng'\n 'Jun Liu']" ]
null
null
2405.10973
null
null
http://arxiv.org/pdf/2405.10973v1
2024-05-12T09:00:56Z
2024-05-12T09:00:56Z
Adaptation of XAI to Auto-tuning for Numerical Libraries
Concerns have arisen regarding the unregulated utilization of artificial intelligence (AI) outputs, potentially leading to various societal issues. While humans routinely validate information, manually inspecting the vast volumes of AI-generated results is impractical. Therefore, automation and visualization are imperative. In this context, Explainable AI (XAI) technology is gaining prominence, aiming to streamline AI model development and alleviate the burden of explaining AI outputs to users. Simultaneously, software auto-tuning (AT) technology has emerged, aiming to reduce the man-hours required for performance tuning in numerical calculations. AT is a potent tool for cost reduction during parameter optimization and high-performance programming for numerical computing. The synergy between AT mechanisms and AI technology is noteworthy, with AI finding extensive applications in AT. However, applying AI to AT mechanisms introduces challenges in AI model explainability. This research focuses on XAI for AI models when integrated into two different processes for practical numerical computations: performance parameter tuning of accuracy-guaranteed numerical calculations and sparse iterative algorithms.
[ "['Shota Aoki' 'Takahiro Katagiri' 'Satoshi Ohshima' 'Masatoshi Kawai'\n 'Toru Nagai' 'Tetsuya Hoshino']" ]
null
null
2405.10974
null
null
http://arxiv.org/pdf/2405.10974v2
2024-05-21T01:29:13Z
2024-05-12T11:41:26Z
Bottleneck-Minimal Indexing for Generative Document Retrieval
We apply an information-theoretic perspective to reconsider generative document retrieval (GDR), in which a document $x \in X$ is indexed by $t \in T$, and a neural autoregressive model is trained to map queries $Q$ to $T$. GDR can be considered to involve information transmission from documents $X$ to queries $Q$, with the requirement to transmit more bits via the indexes $T$. By applying Shannon's rate-distortion theory, the optimality of indexing can be analyzed in terms of the mutual information, and the design of the indexes $T$ can then be regarded as a \emph{bottleneck} in GDR. After reformulating GDR from this perspective, we empirically quantify the bottleneck underlying GDR. Finally, using the NQ320K and MARCO datasets, we evaluate our proposed bottleneck-minimal indexing method in comparison with various previous indexing methods, and we show that it outperforms those methods.
[ "['Xin Du' 'Lixin Xiu' 'Kumiko Tanaka-Ishii']" ]
null
null
2405.10976
null
null
http://arxiv.org/abs/2405.10976v1
2024-05-13T03:31:13Z
2024-05-13T03:31:13Z
On Constructing Algorithm Portfolios in Algorithm Selection for Computationally Expensive Black-box Optimization in the Fixed-budget Setting
Feature-based offline algorithm selection has shown its effectiveness in a wide range of optimization problems, including the black-box optimization problem. An algorithm selection system selects the most promising optimizer from an algorithm portfolio, which is a set of pre-defined optimizers. Thus, algorithm selection requires a well-constructed algorithm portfolio consisting of efficient optimizers complementary to each other. Although construction methods for the fixed-target setting have been well studied, those for the fixed-budget setting have received less attention. Here, the fixed-budget setting is generally used for computationally expensive optimization, where the budget of function evaluations is small. In this context, this paper first points out some undesirable properties of experimental setups in previous studies. It then argues for the importance of considering the number of function evaluations used in the sampling phase when constructing algorithm portfolios, which previous studies have ignored. The results show that algorithm portfolios constructed by our approach perform significantly better than those constructed by the previous approach.
[ "['Takushi Yoshikawa' 'Ryoji Tanabe']" ]
null
null
2405.10986
null
null
http://arxiv.org/pdf/2405.10986v1
2024-05-15T20:28:15Z
2024-05-15T20:28:15Z
Benchmark Early and Red Team Often: A Framework for Assessing and Managing Dual-Use Hazards of AI Foundation Models
A concern about cutting-edge or "frontier" AI foundation models is that an adversary may use the models for preparing chemical, biological, radiological, nuclear (CBRN), cyber, or other attacks. At least two methods can identify foundation models with potential dual-use capability, each with advantages and disadvantages: A. Open benchmarks (based on openly available questions and answers), which are low-cost but limited in accuracy by the need to omit security-sensitive details; and B. Closed red team evaluations (based on private evaluation by CBRN and cyber experts), which are higher-cost but can achieve higher accuracy by incorporating sensitive details. We propose a research and risk-management approach using a combination of methods, including both open benchmarks and closed red team evaluations, in a way that leverages the advantages of both. We recommend that one or more groups of researchers with sufficient resources and access to a range of near-frontier and frontier foundation models run a set of foundation models through dual-use capability evaluation benchmarks and red team evaluations, then analyze the resulting sets of models' scores on benchmark and red team evaluations to see how correlated those are. If, as we expect, there is substantial correlation between the dual-use potential benchmark scores and the red team evaluation scores, then implications include the following: the open benchmarks should be used frequently during foundation model development as a quick, low-cost measure of a model's dual-use potential; and if a particular model gets a high score on the dual-use potential benchmark, then more in-depth red team assessments of that model's dual-use capability should be performed. We also discuss limitations and mitigations for our approach, e.g., if model developers try to game benchmarks by including a version of benchmark test data in a model's training data.
[ "['Anthony M. Barrett' 'Krystal Jackson' 'Evan R. Murphy' 'Nada Madkour'\n 'Jessica Newman']" ]
null
null
2405.10987
null
null
http://arxiv.org/pdf/2405.10987v1
2024-05-16T05:58:29Z
2024-05-16T05:58:29Z
Manifold-based Incomplete Multi-view Clustering via Bi-Consistency Guidance
Incomplete multi-view clustering primarily focuses on dividing unlabeled data with missing instances into corresponding categories, and has received intensive attention due to its superiority in real applications. To account for incomplete data, existing methods mostly attempt to recover it by adding extra terms. However, for unsupervised methods, a simple recovery strategy causes errors and outlier accumulation, which degrades performance. Broadly, previous methods either have not taken the effectiveness of recovered instances into consideration, or cannot flexibly balance the discrepancies between recovered data and original data. To address these problems, we propose a novel method termed Manifold-based Incomplete Multi-view clustering via Bi-consistency guidance (MIMB), which flexibly recovers incomplete data among various views and attempts to achieve bi-consistency guidance via reverse regularization. In particular, MIMB adds reconstruction terms to representation learning by recovering missing instances, which dynamically examines the latent consensus representation. Moreover, to preserve the consistency information among multiple views, MIMB implements a bi-consistency guidance strategy with reverse regularization of the consensus representation and proposes a manifold embedding measure for exploring the hidden structure of the recovered data. Notably, MIMB aims to balance the importance of different views and introduces an adaptive weight term for each view. Finally, an optimization algorithm with an alternating iteration strategy is designed for final clustering. Extensive experimental results on 6 benchmark datasets confirm that MIMB obtains significantly superior results compared with several state-of-the-art baselines.
[ "['Huibing Wang' 'Mingze Yao' 'Yawei Chen' 'Yunqiu Xu' 'Haipeng Liu'\n 'Wei Jia' 'Xianping Fu' 'Yang Wang']" ]
null
null
2405.10988
null
null
http://arxiv.org/pdf/2405.10988v1
2024-05-16T06:05:16Z
2024-05-16T06:05:16Z
Flow Score Distillation for Diverse Text-to-3D Generation
Recent advancements in Text-to-3D generation have yielded remarkable progress, particularly through methods that rely on Score Distillation Sampling (SDS). While SDS can create impressive 3D assets, it is hindered by its inherent maximum-likelihood-seeking essence, resulting in limited diversity in generation outcomes. In this paper, we discover that the Denoising Diffusion Implicit Models (DDIM) generation process (i.e., the PF-ODE) can be succinctly expressed using an analogue of the SDS loss. One step further, one can see SDS as a generalized DDIM generation process. Following this insight, we show that the noise sampling strategy in the noise addition stage significantly restricts the diversity of generation results. To address this limitation, we present an innovative noise sampling approach and introduce a novel text-to-3D method called Flow Score Distillation (FSD). Our validation experiments across various text-to-image diffusion models demonstrate that FSD substantially enhances generation diversity without compromising quality.
[ "['Runjie Yan' 'Kailu Wu' 'Kaisheng Ma']" ]
null
null
2405.10989
null
null
http://arxiv.org/pdf/2405.10989v1
2024-05-16T08:11:08Z
2024-05-16T08:11:08Z
Learnable Privacy Neurons Localization in Language Models
Concerns that Large Language Models (LLMs) memorize and disclose private information, particularly Personally Identifiable Information (PII), have become prominent within the community. Many efforts have been made to mitigate these privacy risks. However, the mechanism through which LLMs memorize PII remains poorly understood. To bridge this gap, we introduce a pioneering method for pinpointing PII-sensitive neurons (privacy neurons) within LLMs. Our method employs learnable binary weight masks to localize, through adversarial training, the specific neurons that account for the memorization of PII in LLMs. Our investigations discover that PII is memorized by a small subset of neurons across all layers, which shows the property of PII specificity. Furthermore, we propose to validate the potential for PII risk mitigation by deactivating the localized privacy neurons. Both quantitative and qualitative experiments demonstrate the effectiveness of our neuron localization algorithm.
[ "['Ruizhe Chen' 'Tianxiang Hu' 'Yang Feng' 'Zuozhu Liu']" ]
null
null
2405.10991
null
null
http://arxiv.org/pdf/2405.10991v1
2024-05-16T08:57:00Z
2024-05-16T08:57:00Z
Relative Counterfactual Contrastive Learning for Mitigating Pretrained Stance Bias in Stance Detection
Stance detection classifies stance relations (namely, Favor, Against, or Neither) between comments and targets. Pretrained language models (PLMs) are widely used to mine the stance relation and improve the performance of stance detection through pretrained knowledge. However, PLMs also embed "bad" pretrained knowledge concerning stance into the extracted stance relation semantics, resulting in pretrained stance bias. Measuring pretrained stance bias is not trivial because it is weakly quantifiable. In this paper, we propose Relative Counterfactual Contrastive Learning (RCCL), in which pretrained stance bias is mitigated as relative stance bias instead of absolute stance bias, circumventing the difficulty of measuring bias. Firstly, we present a new structural causal model for characterizing the complicated relationships among context, PLMs, and stance relations to locate pretrained stance bias. Then, based on masked language model prediction, we present a target-aware relative stance sample generation method for obtaining relative bias. Finally, we use contrastive learning based on counterfactual theory to mitigate pretrained stance bias and preserve the context stance relation. Experiments show that the proposed method is superior to stance detection and debiasing baselines.
[ "['Jiarui Zhang' 'Shaojuan Wu' 'Xiaowang Zhang' 'Zhiyong Feng']" ]
null
null
2405.10992
null
null
http://arxiv.org/pdf/2405.10992v1
2024-05-16T10:54:46Z
2024-05-16T10:54:46Z
Overcoming Catastrophic Forgetting by Exemplar Selection in Task-oriented Dialogue System
Intelligent task-oriented dialogue systems (ToDs) are expected to continuously acquire new knowledge, a setting known as Continual Learning (CL), which is crucial to fit ever-changing user needs. However, catastrophic forgetting dramatically degrades model performance in the face of a long streamed curriculum. In this paper, we aim to overcome the forgetting problem in ToDs and propose a method (HESIT) with a hyper-gradient-based exemplar strategy, which samples influential exemplars for periodic retraining. Instead of unilaterally observing data or models, HESIT adopts a profound exemplar selection strategy that considers the general performance of the trained model when selecting exemplars for each task domain. Specifically, HESIT analyzes the influence of the training data by tracing their hyper-gradients during the optimization process. Furthermore, HESIT avoids estimating the Hessian, making it compatible with ToDs built on large pre-trained models. Experimental results show that HESIT effectively alleviates catastrophic forgetting through exemplar selection and achieves state-of-the-art performance on the largest CL benchmark of ToDs in terms of all metrics.
[ "['Chen Chen' 'Ruizhe Li' 'Yuchen Hu' 'Yuanyuan Chen' 'Chengwei Qin'\n 'Qiang Zhang']" ]
null
null
2405.10995
null
null
http://arxiv.org/pdf/2405.10995v1
2024-05-16T16:35:43Z
2024-05-16T16:35:43Z
Physics-incorporated Graph Neural Network for Multivariate Time Series Imputation
Imputing missing values is an essential but challenging problem due to the complex latent spatio-temporal correlations and the dynamic nature of time series. Owing to their outstanding performance in structure learning, Graph Neural Networks (GNNs) and Recurrent Neural Networks (RNNs) are often used to capture such complex spatio-temporal features in multivariate time series. However, these data-driven models often fail to capture the essential spatio-temporal relationships when significant signal corruption occurs. Additionally, calculating high-order neighbor nodes in these models is computationally expensive. To address these problems, we propose a novel higher-order spatio-temporal physics-incorporated GNN (HSPGNN). Firstly, a dynamic Laplacian matrix is obtained via a spatial attention mechanism. Then, the generic inhomogeneous partial differential equation (PDE) of physical dynamic systems is used to construct a dynamic higher-order spatio-temporal GNN that recovers the missing time series values. Moreover, we estimate the missing impact with Normalizing Flows (NF) to evaluate the importance of each node in the graph for better explainability. Experimental results on four benchmark datasets demonstrate the effectiveness of HSPGNN and the superior performance obtained when combining neighbor nodes of various orders. Graph-like optical flow, dynamic graphs, and missing impact can also be obtained naturally by HSPGNN, providing better dynamic analysis and explanation than traditional data-driven models. Our code is available at https://github.com/gorgen2020/HSPGNN.
[ "['Guojun Liang' 'Prayag Tiwari' 'Slawomir Nowaczyk' 'Stefan Byttner']" ]
null
null
2405.10999
null
null
http://arxiv.org/pdf/2405.10999v1
2024-05-16T21:14:32Z
2024-05-16T21:14:32Z
Large Language Models for Tuning Evolution Strategies
Large Language Models (LLMs) exhibit world knowledge and inference capabilities, making them powerful tools for various applications. This paper proposes a feedback loop mechanism that leverages these capabilities to tune Evolution Strategies (ES) parameters effectively. The mechanism involves a structured process of providing programming instructions, executing the corresponding code, and conducting a thorough analysis, designed specifically for optimizing ES parameters. The method operates through an iterative cycle, ensuring continuous refinement of the ES parameters. First, LLMs process the instructions to generate or modify the code. The code is then executed and the results are logged. Subsequent analysis of these results provides insights that drive further improvements. An experiment on tuning the learning rates of ES using the LLaMA3 model demonstrates the feasibility of this approach. This research illustrates how LLMs can be harnessed to improve the performance of ES algorithms and suggests broader applications for similar feedback loop mechanisms in various domains.
[ "['Oliver Kramer']" ]
null
null
2405.11000
null
null
http://arxiv.org/pdf/2405.11000v1
2024-05-16T21:58:10Z
2024-05-16T21:58:10Z
Data-Driven Revenue Management for Air Cargo
It is well recognized that air cargo revenue management is quite different from its passenger airline counterpart. Inherent demand volatility due to a short booking horizon and lumpy shipments, multi-dimensionality and uncertainty of capacity, as well as flexibility in routing, are a few of the challenges to be handled in air cargo revenue management. In this paper, we present a data-driven revenue management approach designed to handle the challenges associated with the air cargo industry. We present findings from simulations tailored to the air cargo setting and compare different scenarios for handling weight and volume bid prices. Our results show that running our algorithm independently to generate weight and volume bid prices and feeding their sum into price optimization works best, outperforming the other strategies by a revenue gap of more than 3%.
[ "['Ezgi Eren' 'Jiabing Li']" ]
null
null
2405.11002
null
null
http://arxiv.org/pdf/2405.11002v1
2024-05-17T02:56:31Z
2024-05-17T02:56:31Z
Large Language Models in Wireless Application Design: In-Context Learning-enhanced Automatic Network Intrusion Detection
Large language models (LLMs), especially generative pre-trained transformers (GPTs), have recently demonstrated outstanding ability in information comprehension and problem-solving. This has motivated many studies on applying LLMs to wireless communication networks. In this paper, we propose a pre-trained LLM-empowered framework to perform fully automatic network intrusion detection. Three in-context learning methods are designed and compared to enhance the performance of LLMs. With experiments on a real network intrusion detection dataset, in-context learning proves to be highly beneficial in improving task performance without requiring any further training or fine-tuning of the LLMs. We show that for GPT-4, testing accuracy and F1-Score can be improved by 90%. Moreover, pre-trained LLMs demonstrate great potential in performing wireless communication-related tasks. Specifically, the proposed framework can reach an accuracy and F1-Score of over 95% on different types of attacks with GPT-4 using only 10 in-context learning examples.
[ "['Han Zhang' 'Akram Bin Sediq' 'Ali Afana' 'Melike Erol-Kantarci']" ]
null
null
2405.11007
null
null
http://arxiv.org/pdf/2405.11007v1
2024-05-17T10:19:32Z
2024-05-17T10:19:32Z
Generative modeling of Sparse Approximate Inverse Preconditioners
We present a new deep learning paradigm for the generation of sparse approximate inverse (SPAI) preconditioners for matrix systems arising from the mesh-based discretization of elliptic differential operators. Our approach is based upon the observation that matrices generated in this manner are not arbitrary, but inherit properties from differential operators that they discretize. Consequently, we seek to represent a learnable distribution of high-performance preconditioners from a low-dimensional subspace through a carefully-designed autoencoder, which is able to generate SPAI preconditioners for these systems. The concept has been implemented on a variety of finite element discretizations of second- and fourth-order elliptic partial differential equations with highly promising results.
[ "['Mou Li' 'He Wang' 'Peter K. Jimack']" ]
null
null
2405.11008
null
null
http://arxiv.org/pdf/2405.11008v1
2024-05-17T11:09:33Z
2024-05-17T11:09:33Z
A Systematic Review and Meta-Analysis on Sleep Stage Classification and Sleep Disorder Detection Using Artificial Intelligence
Sleep is vital for people's physical and mental health, and sound sleep can help them focus on daily activities. Therefore, sleep studies covering sleep patterns and disorders are crucial to enhancing our knowledge of individuals' health status. Findings on sleep stages and sleep disorders have traditionally relied on polysomnography and self-report measures, followed by clinical assessments by expert physicians. However, the evaluation of sleep stage classification and sleep disorders has become more convenient with artificial intelligence applications, with numerous investigations on various datasets applying advanced algorithms and techniques that offer improved computational ease and accuracy. This study provides a comprehensive systematic review and meta-analysis of the recent literature to analyze the different approaches and their outcomes in sleep studies, including work on sleep stage classification and sleep disorder detection using AI. In this review, 183 articles were initially selected from different journals, of which 80 records, ranging from 2016 to 2023, were enlisted for explicit review. Brain waves were the most commonly employed body parameter for sleep staging and disorder studies. The convolutional neural network was the most widely used of the 34 distinct artificial intelligence models, comprising 27% of them; the other common models included long short-term memory, support vector machine, random forest, and recurrent neural network, at 11%, 6%, 6%, and 5%, respectively. Among performance metrics, accuracy was the most widely used, appearing in 83.75% of cases, followed by the F1 score (45%), Kappa (36.25%), sensitivity (31.25%), and specificity (30%), along with other metrics. This article should help physicians and researchers grasp AI's contribution to sleep studies and the feasibility of their intended work.
[ "['Tayab Uddin Wara' 'Ababil Hossain Fahad' 'Adri Shankar Das'\n 'Md. Mehedi Hasan Shawon']" ]
null
null
2405.11013
null
null
http://arxiv.org/pdf/2405.11013v1
2024-05-17T16:53:19Z
2024-05-17T16:53:19Z
ARDDQN: Attention Recurrent Double Deep Q-Network for UAV Coverage Path Planning and Data Harvesting
Unmanned Aerial Vehicles (UAVs) have gained popularity in data harvesting (DH) and coverage path planning (CPP) for surveying a given area efficiently and collecting data from aerial perspectives. While data harvesting aims to gather information from various Internet of Things (IoT) sensor devices, coverage path planning guarantees that every location within the designated area is visited with minimal redundancy and maximum efficiency. We propose ARDDQN (Attention-based Recurrent Double Deep Q-Network), which integrates double deep Q-networks (DDQN) with recurrent neural networks (RNNs) and an attention mechanism to generate path coverage choices that maximize data collection from IoT devices and to learn a control scheme for the UAV that generalizes across energy restrictions. We employ a structured environment map comprising a compressed global environment map and a local map showing the UAV agent's location, enabling efficient scaling to large environments. We have compared long short-term memory (LSTM), bi-directional long short-term memory (Bi-LSTM), gated recurrent units (GRU), and bi-directional gated recurrent units (Bi-GRU) as the recurrent component against results without an RNN. We propose integrating LSTM with the attention mechanism into the existing DDQN model, which performs best on the evaluation parameters, i.e., data collection, landing, and coverage ratios, for the CPP and data harvesting scenarios.
[ "['Praveen Kumar' 'Priyadarshni' 'Rajiv Misra']" ]
null
null
2405.11024
null
null
http://arxiv.org/pdf/2405.11024v1
2024-05-17T18:00:50Z
2024-05-17T18:00:50Z
GraSS: Combining Graph Neural Networks with Expert Knowledge for SAT Solver Selection
Boolean satisfiability (SAT) problems are routinely solved by SAT solvers in real-life applications, yet solving time can vary drastically between solvers for the same instance. This has motivated research into machine learning models that can predict, for a given SAT instance, which solver to select among several options. Existing SAT solver selection methods all rely on some hand-picked instance features, which are costly to compute and ignore the structural information in SAT graphs. In this paper, we present GraSS, a novel approach for automatic SAT solver selection based on tripartite graph representations of instances and a heterogeneous graph neural network (GNN) model. While GNNs have been previously adopted in other SAT-related tasks, they do not incorporate any domain-specific knowledge and ignore the runtime variation introduced by different clause orders. We enrich the graph representation with domain-specific decisions, such as a novel node feature design, positional encodings for clauses in the graph, a GNN architecture tailored to our tripartite graphs, and a runtime-sensitive loss function. Through extensive experiments, we demonstrate that this combination of raw representations and domain-specific choices leads to improvements in runtime for a pool of seven state-of-the-art solvers on both an industrial circuit design benchmark and instances from the 20-year Anniversary Track of the 2022 SAT Competition.
[ "['Zhanguang Zhang' 'Didier Chetelat' 'Joseph Cotnareanu' 'Amur Ghose'\n 'Wenyi Xiao' 'Hui-Ling Zhen' 'Yingxue Zhang' 'Jianye Hao' 'Mark Coates'\n 'Mingxuan Yuan']" ]
null
null
2405.11029
null
null
http://arxiv.org/pdf/2405.11029v1
2024-05-17T18:03:59Z
2024-05-17T18:03:59Z
Generative Artificial Intelligence: A Systematic Review and Applications
In recent years, the study of artificial intelligence (AI) has undergone a paradigm shift, propelled by the groundbreaking capabilities of generative models in both supervised and unsupervised learning scenarios. Generative AI has shown state-of-the-art performance in solving perplexing real-world conundrums in fields such as image translation, medical diagnostics, textual imagery fusion, natural language processing, and beyond. This paper documents a systematic review and analysis of recent advancements and techniques in generative AI, with a detailed discussion of their applications, including application-specific models. Indeed, the major impact that generative AI has made to date has been in language generation with the development of large language models, in the field of image translation, and in several other interdisciplinary applications. The primary contribution of this paper lies in its coherent synthesis of the latest advancements in these areas, weaving together contemporary breakthroughs in the field; in particular, it explores the future trajectory of generative AI. The paper concludes with a discussion of Responsible AI principles and the ethical considerations necessary for the sustainability and growth of these generative models.
[ "['Sandeep Singh Sengar' 'Affan Bin Hasan' 'Sanjay Kumar' 'Fiona Carroll']" ]
null
null
2405.11034
null
null
http://arxiv.org/pdf/2405.11034v1
2024-05-17T18:11:11Z
2024-05-17T18:11:11Z
Safety in Graph Machine Learning: Threats and Safeguards
Graph Machine Learning (Graph ML) has witnessed substantial advancements in recent years. With their remarkable ability to process graph-structured data, Graph ML techniques have been extensively utilized across diverse applications, including critical domains like finance, healthcare, and transportation. Despite their societal benefits, recent research highlights significant safety concerns associated with the widespread use of Graph ML models. Lacking safety-focused designs, these models can produce unreliable predictions, demonstrate poor generalizability, and compromise data confidentiality. In high-stakes scenarios such as financial fraud detection, these vulnerabilities could jeopardize both individuals and society at large. Therefore, it is imperative to prioritize the development of safety-oriented Graph ML models to mitigate these risks and enhance public confidence in their applications. In this survey paper, we explore three critical aspects vital for enhancing safety in Graph ML: reliability, generalizability, and confidentiality. We categorize and analyze threats to each aspect under three headings: model threats, data threats, and attack threats. This novel taxonomy guides our review of effective strategies to protect against these threats. Our systematic review lays a groundwork for future research aimed at developing practical, safety-centered Graph ML models. Furthermore, we highlight the significance of safe Graph ML practices and suggest promising avenues for further investigation in this crucial area.
[ "['Song Wang' 'Yushun Dong' 'Binchi Zhang' 'Zihan Chen' 'Xingbo Fu'\n 'Yinhan He' 'Cong Shen' 'Chuxu Zhang' 'Nitesh V. Chawla' 'Jundong Li']" ]
null
null
2405.11056
null
null
http://arxiv.org/pdf/2405.11056v1
2024-05-17T19:11:38Z
2024-05-17T19:11:38Z
A Comparative Study of Garment Draping Techniques
We present a comparative review that evaluates popular techniques for garment draping for 3D fashion design, virtual try-ons, and animations. A comparative study is performed between various methods for draping garments over the human body, including physics- and machine learning-based techniques, collision handling, and more. Performance evaluations and trade-offs are discussed to ensure informed decision-making when choosing the most appropriate approach. These methods aim to accurately represent deformations and fine wrinkles of digital garments while considering data requirements and efficiency to produce realistic results. The research can be insightful to researchers, designers, and developers in visualizing dynamic multi-layered 3D clothing.
[ "['Prerana Achar' 'Mayank Patel' 'Anushka Mulik' 'Neha Katre'\n 'Stevina Dias' 'Chirag Raman']" ]
null
null
2405.11059
null
null
http://arxiv.org/pdf/2405.11059v1
2024-05-17T19:23:30Z
2024-05-17T19:23:30Z
Frugal Algorithm Selection
When solving decision and optimisation problems, many competing algorithms (model and solver choices) have complementary strengths. Typically, there is no single algorithm that works well for all instances of a problem. Automated algorithm selection has been shown to work very well for choosing a suitable algorithm for a given instance. However, the cost of training can be prohibitively large due to running candidate algorithms on a representative set of training instances. In this work, we explore reducing this cost by choosing a subset of the training instances on which to train. We approach this problem in three ways: using active learning to decide based on prediction uncertainty, augmenting the algorithm predictors with a timeout predictor, and collecting training data using a progressively increasing timeout. We evaluate combinations of these approaches on six datasets from ASLib and present the reduction in labelling cost achieved by each option.
[ "['Erdem Kuş' 'Özgür Akgün' 'Nguyen Dang' 'Ian Miguel']" ]
null
null
2405.11070
null
null
http://arxiv.org/pdf/2405.11070v1
2024-05-17T19:55:57Z
2024-05-17T19:55:57Z
Jill Watson: A Virtual Teaching Assistant powered by ChatGPT
Conversational AI agents often require extensive datasets for training that are not publicly released, are limited to social chit-chat or handling a specific domain, and may not be easily extended to accommodate the latest advances in AI technologies. This paper introduces Jill Watson, a conversational Virtual Teaching Assistant (VTA) leveraging the capabilities of ChatGPT. Built on ChatGPT, Jill Watson requires no prior training and uses a modular design that allows the integration of new APIs through a skill-based architecture inspired by XiaoIce. Jill Watson is also well-suited for intelligent textbooks, as it can process and converse using multiple large documents. We exclusively utilize publicly available resources for reproducibility and extensibility. Comparative analysis shows that our system outperforms the legacy knowledge-based Jill Watson as well as the OpenAI Assistants service. We employ many safety measures that reduce instances of hallucinations and toxicity. The paper also includes real-world examples from a classroom setting that demonstrate different features of Jill Watson and its effectiveness.
[ "['Karan Taneja' 'Pratyusha Maiti' 'Sandeep Kakar' 'Pranav Guruprasad'\n 'Sanjeev Rao' 'Ashok K. Goel']" ]
null
null
2405.11079
null
null
http://arxiv.org/pdf/2405.11079v1
2024-05-17T20:22:39Z
2024-05-17T20:22:39Z
FeMLoc: Federated Meta-learning for Adaptive Wireless Indoor Localization Tasks in IoT Networks
The rapid growth of the Internet of Things fosters collaboration among connected devices for tasks like indoor localization. However, existing indoor localization solutions struggle with dynamic and harsh conditions, requiring extensive data collection and environment-specific calibration. These factors impede cooperation, scalability, and the utilization of prior research efforts. To address these challenges, we propose FeMLoc, a federated meta-learning framework for localization. FeMLoc operates in two stages: (i) collaborative meta-training, where a global meta-model is created by training on diverse localization datasets from edge devices; and (ii) rapid adaptation to new environments, where the pre-trained global meta-model initializes the localization model, requiring only minimal fine-tuning with a small amount of new data. In this paper, we provide a detailed technical overview of FeMLoc, highlighting its unique approach to privacy-preserving meta-learning in the context of indoor localization. Our performance evaluation demonstrates the superiority of FeMLoc over state-of-the-art methods, enabling swift adaptation to new indoor environments with reduced calibration effort. Specifically, FeMLoc achieves up to 80.95% improvement in localization accuracy compared to the conventional baseline neural network (NN) approach after only 100 gradient steps. Alternatively, for a target accuracy of around 5m, FeMLoc reaches that accuracy up to 82.21% faster than the baseline NN approach. This translates to FeMLoc requiring fewer training iterations, thereby significantly reducing fingerprint data collection and calibration efforts. Moreover, FeMLoc exhibits enhanced scalability, making it well-suited for location-aware massive connectivity driven by emerging wireless communication technologies.
[ "['Yaya Etiabi' 'Wafa Njima' 'El Mehdi Amhoud']" ]
null
null
2405.11083
null
null
http://arxiv.org/pdf/2405.11083v1
2024-05-17T20:30:49Z
2024-05-17T20:30:49Z
Prompt Exploration with Prompt Regression
With the advent of democratized usage of large language models (LLMs), there is a growing desire to systematize LLM prompt creation and selection processes beyond iterative trial-and-error. Prior work mainly focuses on searching the space of prompts without accounting for relations between prompt variations. Here we propose a framework, Prompt Exploration with Prompt Regression (PEPR), to predict the effect of prompt combinations given results for individual prompt elements, as well as a simple method to select an effective prompt for a given use-case. We evaluate our approach with open-source LLMs of different sizes on several different tasks.
[ "['Michael Feffer' 'Ronald Xu' 'Yuekai Sun' 'Mikhail Yurochkin']" ]
null
null
2405.11095
null
null
http://arxiv.org/pdf/2405.11095v1
2024-05-17T21:17:27Z
2024-05-17T21:17:27Z
Flattened one-bit stochastic gradient descent: compressed distributed optimization with controlled variance
We propose a novel algorithm for distributed stochastic gradient descent (SGD) with compressed gradient communication in the parameter-server framework. Our gradient compression technique, named flattened one-bit stochastic gradient descent (FO-SGD), relies on two simple algorithmic ideas: (i) a one-bit quantization procedure leveraging the technique of dithering, and (ii) a randomized fast Walsh-Hadamard transform to flatten the stochastic gradient before quantization. As a result, the approximation of the true gradient in this scheme is biased, but it prevents commonly encountered algorithmic problems, such as exploding variance in the one-bit compression regime, deterioration of performance in the case of sparse gradients, and restrictive assumptions on the distribution of the stochastic gradients. In fact, we show SGD-like convergence guarantees under mild conditions. The compression technique can be used in both directions of worker-server communication, therefore admitting distributed optimization with full communication compression.
[ "['Alexander Stollenwerk' 'Laurent Jacques']" ]
null
null
2405.11106
null
null
http://arxiv.org/pdf/2405.11106v1
2024-05-17T22:10:23Z
2024-05-17T22:10:23Z
LLM-based Multi-Agent Reinforcement Learning: Current and Future Directions
In recent years, Large Language Models (LLMs) have shown great abilities in various tasks, including question answering, arithmetic problem solving, and poem writing, among others. Although research on LLMs-as-agents has shown that they can be applied to Reinforcement Learning (RL) with decent results, extending LLM-based RL to Multi-Agent Systems (MAS) is not trivial, as many aspects, such as coordination and communication between agents, are not considered in single-agent RL frameworks. To inspire more research on LLM-based multi-agent RL (MARL), in this letter we survey the existing LLM-based single-agent and multi-agent RL frameworks and provide potential research directions for future work. In particular, we focus on cooperative tasks of multiple agents with a common goal and communication among them. We also consider human-in/on-the-loop scenarios enabled by the language component in the framework.
[ "['Chuanneng Sun' 'Songjun Huang' 'Dario Pompili']" ]
null
null
2405.11117
null
null
http://arxiv.org/pdf/2405.11117v2
2024-06-21T20:51:59Z
2024-05-17T23:18:15Z
Dynamic Embeddings with Task-Oriented prompting
This paper introduces Dynamic Embeddings with Task-Oriented prompting (DETOT), a novel approach aimed at improving the adaptability and efficiency of machine learning models by implementing a flexible embedding layer. Unlike traditional static embeddings [14], DETOT dynamically adjusts embeddings based on task-specific requirements and performance feedback, optimizing input data representation for individual tasks [4]. This method enhances both accuracy and computational performance by tailoring the representation layer to meet the unique needs of each task. The structure of DETOT is detailed, highlighting its task-specific adaptation, continuous feedback loop, and mechanisms for preventing overfitting. Empirical evaluations demonstrate its superiority over existing methods.
[ "['Allmin Balloccu' 'Jack Zhang']" ]
null
null
2405.11120
null
null
http://arxiv.org/pdf/2405.11120v1
2024-05-17T23:27:33Z
2024-05-17T23:27:33Z
Latent State Estimation Helps UI Agents to Reason
A common problem for agents operating in real-world environments is that the response of an environment to their actions may be non-deterministic and observed through noise. This renders environmental state and progress towards completing a task latent. Despite recent impressive demonstrations of LLMs' reasoning abilities on various benchmarks, whether LLMs can build estimates of latent state and leverage them for reasoning has not been explicitly studied. We investigate this problem in the real-world domain of autonomous UI agents. We establish that appropriately prompting LLMs in a zero-shot manner can be formally understood as forming point estimates of latent state in a textual space. In the context of autonomous UI agents, we then show that LLMs used in this manner are more than $76\%$ accurate at inferring various aspects of latent state, such as performed (vs. commanded) actions and task progression. Using both public and internal benchmarks and three reasoning methods (zero-shot, CoT-SC, and ReAct), we show that LLM-powered agents that explicitly estimate and reason about latent state are able to successfully complete up to 1.6x more tasks than those that do not.
[ "['William E Bishop' 'Alice Li' 'Christopher Rawles' 'Oriana Riva']" ]
null
null
2405.11124
null
null
http://arxiv.org/pdf/2405.11124v1
2024-05-17T23:52:33Z
2024-05-17T23:52:33Z
AdaWaveNet: Adaptive Wavelet Network for Time Series Analysis
Time series data analysis is a critical component in various domains such as finance, healthcare, and meteorology. Despite the progress of deep learning in time series analysis, addressing the non-stationary nature of time series data remains a challenge. Traditional models, built on the assumption of constant statistical properties over time, often struggle to capture the temporal dynamics of realistic time series, resulting in bias and error in time series analysis. This paper introduces the Adaptive Wavelet Network (AdaWaveNet), a novel approach that employs adaptive wavelet transformation for multi-scale analysis of non-stationary time series data. AdaWaveNet uses a lifting-scheme-based wavelet decomposition and reconstruction mechanism for adaptive and learnable wavelet transforms, which offers enhanced flexibility and robustness. We conduct extensive experiments on 10 datasets across 3 different tasks: forecasting, imputation, and a newly established super-resolution task. The evaluations demonstrate the effectiveness of AdaWaveNet over existing methods in all three tasks, illustrating its potential in various real-world applications.
[ "['Han Yu' 'Peikun Guo' 'Akane Sano']" ]
null
null
2405.11126
null
null
http://arxiv.org/pdf/2405.11126v2
2024-05-23T23:23:39Z
2024-05-17T23:55:51Z
Flexible Motion In-betweening with Diffusion Models
Motion in-betweening, a fundamental task in character animation, consists of generating motion sequences that plausibly interpolate user-provided keyframe constraints. It has long been recognized as a labor-intensive and challenging process. We investigate the potential of diffusion models in generating diverse human motions guided by keyframes. Unlike previous inbetweening methods, we propose a simple unified model capable of generating precise and diverse motions that conform to a flexible range of user-specified spatial constraints, as well as text conditioning. To this end, we propose Conditional Motion Diffusion In-betweening (CondMDI) which allows for arbitrary dense-or-sparse keyframe placement and partial keyframe constraints while generating high-quality motions that are diverse and coherent with the given keyframes. We evaluate the performance of CondMDI on the text-conditioned HumanML3D dataset and demonstrate the versatility and efficacy of diffusion models for keyframe in-betweening. We further explore the use of guidance and imputation-based approaches for inference-time keyframing and compare CondMDI against these methods.
[ "['Setareh Cohan' 'Guy Tevet' 'Daniele Reda' 'Xue Bin Peng'\n 'Michiel van de Panne']" ]
null
null
2405.11139
null
null
http://arxiv.org/pdf/2405.11139v1
2024-05-18T01:49:16Z
2024-05-18T01:49:16Z
RuleFuser: Injecting Rules in Evidential Networks for Robust Out-of-Distribution Trajectory Prediction
Modern neural trajectory predictors in autonomous driving are developed using imitation learning (IL) from driving logs. Although IL benefits from its ability to glean nuanced and multi-modal human driving behaviors from large datasets, the resulting predictors often struggle with out-of-distribution (OOD) scenarios and with traffic rule compliance. On the other hand, classical rule-based predictors, by design, can predict traffic rule satisfying behaviors while being robust to OOD scenarios, but these predictors fail to capture nuances in agent-to-agent interactions and human driver's intent. In this paper, we present RuleFuser, a posterior-net inspired evidential framework that combines neural predictors with classical rule-based predictors to draw on the complementary benefits of both, thereby striking a balance between performance and traffic rule compliance. The efficacy of our approach is demonstrated on the real-world nuPlan dataset where RuleFuser leverages the higher performance of the neural predictor in in-distribution (ID) scenarios and the higher safety offered by the rule-based predictor in OOD scenarios.
[ "['Jay Patrikar' 'Sushant Veer' 'Apoorva Sharma' 'Marco Pavone'\n 'Sebastian Scherer']" ]
null
null
2405.11143
null
null
http://arxiv.org/pdf/2405.11143v2
2024-06-03T12:19:18Z
2024-05-20T01:04:40Z
OpenRLHF: An Easy-to-use, Scalable and High-performance RLHF Framework
As large language models (LLMs) continue to grow following scaling laws, reinforcement learning from human feedback (RLHF) has gained significant attention due to its outstanding performance. However, unlike pretraining or fine-tuning a single model, scaling RLHF for training large language models poses coordination challenges across four models. We present OpenRLHF, an open-source framework enabling efficient RLHF scaling. Unlike existing RLHF frameworks that co-locate four models on the same GPUs, OpenRLHF re-designs scheduling for models beyond 70B parameters using Ray, vLLM, and DeepSpeed, leveraging improved resource utilization and diverse training approaches. Integrating seamlessly with Hugging Face, OpenRLHF provides an out-of-the-box solution with optimized algorithms and launch scripts, ensuring user-friendliness. OpenRLHF implements RLHF, DPO, rejection sampling, and other alignment techniques. Empowering state-of-the-art LLM development, OpenRLHF's code is available at https://github.com/OpenLLMAI/OpenRLHF.
[ "['Jian Hu' 'Xibin Wu' 'Weixun Wang' 'Xianyu' 'Dehao Zhang' 'Yu Cao']" ]
null
null
2405.11157
null
null
http://arxiv.org/pdf/2405.11157v1
2024-05-18T03:02:23Z
2024-05-18T03:02:23Z
Towards Modular LLMs by Building and Reusing a Library of LoRAs
The growing number of parameter-efficient adaptations of a base large language model (LLM) calls for studying whether we can reuse such trained adapters to improve performance for new tasks. We study how to best build a library of adapters given multi-task data and devise techniques for both zero-shot and supervised task generalization through routing in such library. We benchmark existing approaches to build this library and introduce model-based clustering, MBC, a method that groups tasks based on the similarity of their adapter parameters, indirectly optimizing for transfer across the multi-task dataset. To re-use the library, we present a novel zero-shot routing mechanism, Arrow, which enables dynamic selection of the most relevant adapters for new inputs without the need for retraining. We experiment with several LLMs, such as Phi-2 and Mistral, on a wide array of held-out tasks, verifying that MBC-based adapters and Arrow routing lead to superior generalization to new tasks. We make steps towards creating modular, adaptable LLMs that can match or outperform traditional joint training.
[ "['Oleksiy Ostapenko' 'Zhan Su' 'Edoardo Maria Ponti' 'Laurent Charlin'\n 'Nicolas Le Roux' 'Matheus Pereira' 'Lucas Caccia' 'Alessandro Sordoni']" ]
null
null
2405.11171
null
null
http://arxiv.org/pdf/2405.11171v1
2024-05-18T04:20:14Z
2024-05-18T04:20:14Z
Graph Feedback Bandits with Similar Arms
In this paper, we study the stochastic multi-armed bandit problem with graph feedback. Motivated by clinical trials and recommendation problems, we assume that two arms are connected if and only if they are similar (i.e., their means are close enough). We establish a regret lower bound for this novel feedback structure and introduce two UCB-based algorithms: D-UCB with problem-independent regret upper bounds and C-UCB with problem-dependent upper bounds. Leveraging the similarity structure, we also consider the scenario where the number of arms increases over time. Practical applications of this scenario include Q&A platforms (Reddit, Stack Overflow, Quora) and product reviews on Amazon and Flipkart: answers (or product reviews) continually appear on the website, and the goal is to display the best ones at the top. When the means of the arms are independently generated from some distribution, we provide regret upper bounds for both algorithms and discuss the sub-linearity of the bounds in relation to the distribution of means. Finally, we conduct experiments to validate the theoretical results.
[ "['Han Qi' 'Guo Fei' 'Li Zhu']" ]
null
null
2405.11179
null
null
http://arxiv.org/pdf/2405.11179v1
2024-05-18T05:13:11Z
2024-05-18T05:13:11Z
Accelerating Multilevel Markov Chain Monte Carlo Using Machine Learning Models
This work presents an efficient approach for accelerating multilevel Markov Chain Monte Carlo (MCMC) sampling for large-scale problems using low-fidelity machine learning models. While conventional techniques for large-scale Bayesian inference often substitute computationally expensive high-fidelity models with machine learning models, thereby introducing approximation errors, our approach offers a computationally efficient alternative by augmenting high-fidelity models with low-fidelity ones within a hierarchical framework. The multilevel approach utilizes the low-fidelity machine learning model (MLM) for inexpensive evaluation of proposed samples, thereby improving the acceptance of samples by the high-fidelity model. The hierarchy in our multilevel algorithm is derived from a geometric multigrid hierarchy. We utilize an MLM to accelerate the coarse-level sampling. Training the machine learning model only for the coarsest level significantly reduces the computational cost associated with generating training data and training the model. We present an MCMC algorithm that accelerates the coarsest-level sampling using the MLM and accounts for the approximation error introduced. We provide theoretical proofs of detailed balance and demonstrate that our multilevel approach constitutes a consistent MCMC algorithm. Additionally, we derive conditions on the accuracy of the machine learning model to facilitate more efficient hierarchical sampling. Our technique is demonstrated on a standard benchmark inference problem in groundwater flow, where we estimate the probability density of a quantity of interest using a four-level MCMC algorithm. Our proposed algorithm accelerates multilevel sampling by a factor of two while achieving accuracy similar to that of the standard multilevel algorithm.
[ "['Sohail Reddy' 'Hillary Fairbanks']" ]
null
null
2405.11188
null
null
http://arxiv.org/pdf/2405.11188v1
2024-05-18T05:57:52Z
2024-05-18T05:57:52Z
Wind Power Prediction across Different Locations using Deep Domain Adaptive Learning
Accurate prediction of wind power is essential for the grid integration of this intermittent renewable source and for aiding grid planners in forecasting available wind capacity. Spatial differences lead to discrepancies in climatological data distributions between two geographically dispersed regions, making the prediction task more difficult. Thus, a prediction model that learns from the data of a particular climatic region can be less robust elsewhere. A deep neural network (DNN) based domain adaptive approach is proposed to counter this drawback. Effective weather features are selected from a large set of weather parameters using a random forest approach. A model pre-trained on the source domain is utilized to perform the prediction task, assuming no source data is available during target domain prediction. Only the weights of the last few layers of the DNN model are updated throughout the task, keeping the rest of the network unchanged, which makes the model faster than traditional approaches. The proposed approach demonstrates accuracy improvements ranging from 6.14% to 28.44% compared to the traditional non-adaptive method.
[ "['Md Saiful Islam Sajol' 'Md Shazid Islam' 'A S M Jahid Hasan'\n 'Md Saydur Rahman' 'Jubair Yusuf']" ]
null
null
2405.11191
null
null
http://arxiv.org/pdf/2405.11191v1
2024-05-18T06:07:54Z
2024-05-18T06:07:54Z
Biathlon: Harnessing Model Resilience for Accelerating ML Inference Pipelines
Machine learning inference pipelines commonly encountered in data science and industries often require real-time responsiveness due to their user-facing nature. However, meeting this requirement becomes particularly challenging when certain input features require aggregating a large volume of data online. Recent literature on interpretable machine learning reveals that most machine learning models exhibit a notable degree of resilience to variations in input. This suggests that machine learning models can effectively accommodate approximate input features with minimal discernible impact on accuracy. In this paper, we introduce Biathlon, a novel ML serving system that leverages the inherent resilience of models and determines the optimal degree of approximation for each aggregation feature. This approach enables maximum speedup while ensuring a guaranteed bound on accuracy loss. We evaluate Biathlon on real pipelines from both industry applications and data science competitions, demonstrating its ability to meet real-time latency requirements by achieving 5.3x to 16.6x speedup with almost no accuracy loss.
[ "['Chaokun Chang' 'Eric Lo' 'Chunxiao Ye']" ]
null
null
2405.11195
null
null
http://arxiv.org/pdf/2405.11195v1
2024-05-18T06:14:00Z
2024-05-18T06:14:00Z
Trustworthy Actionable Perturbations
Counterfactuals, or modified inputs that lead to a different outcome, are an important tool for understanding the logic used by machine learning classifiers and how to change an undesirable classification. Even if a counterfactual changes a classifier's decision, however, it may not affect the true underlying class probabilities, i.e. the counterfactual may act like an adversarial attack and ``fool'' the classifier. We propose a new framework for creating modified inputs that change the true underlying probabilities in a beneficial way which we call Trustworthy Actionable Perturbations (TAP). This includes a novel verification procedure to ensure that TAP change the true class probabilities instead of acting adversarially. Our framework also includes new cost, reward, and goal definitions that are better suited to effectuating change in the real world. We present PAC-learnability results for our verification procedure and theoretically analyze our new method for measuring reward. We also develop a methodology for creating TAP and compare our results to those achieved by previous counterfactual methods.
[ "['Jesse Friedbaum' 'Sudarshan Adiga' 'Ravi Tandon']" ]
null
null
2405.11204
null
null
http://arxiv.org/pdf/2405.11204v1
2024-05-18T07:18:43Z
2024-05-18T07:18:43Z
Learning from Imperfect Human Feedback: a Tale from Corruption-Robust Dueling
This paper studies Learning from Imperfect Human Feedback (LIHF), motivated by humans' potential irrationality or imperfect perception of true preference. We revisit the classic dueling bandit problem as a model of learning from comparative human feedback, and enrich it by casting the imperfection in human feedback as agnostic corruption to user utilities. We start by identifying the fundamental limits of LIHF and prove a regret lower bound of $\Omega(\max\{T^{1/2}, C\})$, even when the total corruption $C$ is known and when the corruption decays gracefully over time (i.e., user feedback becomes increasingly more accurate). We then turn to design robust algorithms applicable in real-world scenarios with arbitrary corruption and unknown $C$. Our key finding is that gradient-based algorithms enjoy a smooth efficiency-robustness tradeoff under corruption by varying their learning rates. Specifically, under general concave user utility, Dueling Bandit Gradient Descent (DBGD) of Yue and Joachims (2009) can be tuned to achieve regret $O(T^{1-\alpha} + T^{\alpha} C)$ for any given parameter $\alpha \in (0, \frac{1}{4}]$. Additionally, this result enables us to pin down the regret lower bound of the standard DBGD (the $\alpha = 1/4$ case) as $\Omega(T^{3/4})$ for the first time, to the best of our knowledge. For strongly concave user utility we show a better tradeoff: there is an algorithm that achieves $O(T^{\alpha} + T^{\frac{1}{2}(1-\alpha)}C)$ for any given $\alpha \in [\frac{1}{2}, 1)$. Our theoretical insights are corroborated by extensive experiments on real-world recommendation data.
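A schematic implementation of DBGD in the spirit of Yue and Joachims (2009): perturb the current point, ask a comparison oracle which of the two is preferred, and step toward the winner. The step sizes `delta` and `gamma`, and the toy corrupted oracle, are illustrative assumptions; tuning the learning rate is what trades efficiency against robustness in the abstract above.

```python
import numpy as np

def dbgd(duel, w0, T, delta=0.1, gamma=0.01):
    """duel(w, w_probe) -> True if w_probe is preferred (feedback may be corrupted)."""
    w = np.asarray(w0, dtype=float)
    for _ in range(T):
        u = np.random.randn(len(w))
        u /= np.linalg.norm(u)       # random unit exploration direction
        w_probe = w + delta * u      # exploratory perturbation
        if duel(w, w_probe):         # comparison feedback from the user
            w = w + gamma * u        # move toward the winner
    return w

# Toy oracle: concave utility (negative distance to an optimum), with feedback
# flipped with small probability to mimic corruption.
opt = np.array([1.0, -2.0])
def noisy_duel(w, wp, corrupt_p=0.05):
    better = np.linalg.norm(wp - opt) < np.linalg.norm(w - opt)
    return (not better) if np.random.rand() < corrupt_p else better

w_hat = dbgd(noisy_duel, np.zeros(2), T=20000)
```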
[ "['Yuwei Cheng' 'Fan Yao' 'Xuefeng Liu' 'Haifeng Xu']" ]
null
null
2405.11206
null
null
http://arxiv.org/pdf/2405.11206v1
2024-05-18T07:23:44Z
2024-05-18T07:23:44Z
Towards Robust Policy: Enhancing Offline Reinforcement Learning with Adversarial Attacks and Defenses
Offline reinforcement learning (RL) addresses the challenge of expensive and high-risk data exploration inherent in RL by pre-training policies on vast amounts of offline data, enabling direct deployment or fine-tuning in real-world environments. However, this training paradigm can compromise policy robustness, leading to degraded performance in practical conditions due to observation perturbations or intentional attacks. While adversarial attacks and defenses have been extensively studied in deep learning, their application in offline RL is limited. This paper proposes a framework to enhance the robustness of offline RL models by leveraging advanced adversarial attacks and defenses. The framework attacks the actor and critic components by perturbing observations during training and using adversarial defenses as regularization to enhance the learned policy. Four attacks and two defenses are introduced and evaluated on the D4RL benchmark. The results show the vulnerability of both the actor and critic to attacks and the effectiveness of the defenses in improving policy robustness. This framework holds promise for enhancing the reliability of offline RL models in practical scenarios.
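One simple instance of the "perturb observations during training" idea is an FGSM-style attack on the critic's input; the sketch below is a generic illustration, not one of the paper's four specific attacks, and the `critic(obs, act)` signature is an assumption.

```python
import torch

def perturb_observations(critic, obs, act, epsilon=0.01):
    """FGSM-style attack: nudge observations so the critic's value drops (illustrative)."""
    obs = obs.clone().detach().requires_grad_(True)
    value = critic(obs, act).mean()
    value.backward()
    # Step against the gradient so the perturbed state looks worse to the critic.
    return (obs - epsilon * obs.grad.sign()).detach()

# A defense in the same regularization spirit could add a term such as
# ((critic(obs, act) - critic(obs_adv, act)) ** 2).mean() to the training loss.
```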
[ "['Thanh Nguyen' 'Tung M. Luu' 'Tri Ton' 'Chang D. Yoo']" ]
null
null
2405.11208
null
null
http://arxiv.org/abs/2405.11208v1
2024-05-18T07:32:02Z
2024-05-18T07:32:02Z
Discovering Physics-Informed Neural Networks Model for Solving Partial Differential Equations through Evolutionary Computation
In recent years, research on solving partial differential equations (PDEs) with artificial neural networks has attracted considerable attention. In this research, the neural network models are usually designed based on human experience or by trial and error. Despite the emergence of several model-searching methods, these methods primarily concentrate on optimizing the hyperparameters of fully connected neural network models within the framework of physics-informed neural networks (PINNs), and the corresponding search spaces are relatively restricted, thereby limiting the exploration of superior models. This article proposes an evolutionary computation method aimed at discovering PINNs models with higher approximation accuracy and faster convergence rates. In addition to searching the number of layers and neurons per hidden layer, this method concurrently explores the optimal shortcut connections between layers and novel parametric activation functions expressed as binary trees. In evolution, a strategy of dynamic population size and training epochs (DPSTE) is adopted, which significantly increases the number of models to be explored and facilitates the discovery of models with fast convergence rates. In experiments, the performance of models searched through Bayesian optimization, random search, and evolution is compared in solving the Klein-Gordon, Burgers, and Lamé equations. The experimental results affirm that the models discovered by the proposed evolutionary computation method generally exhibit superior approximation accuracy and convergence rates, and these models also show commendable generalization performance with respect to the source term, initial and boundary conditions, equation coefficients, and computational domain. The corresponding code is available at https://github.com/MathBon/Discover-PINNs-Model.
[ "['Bo Zhang' 'Chao Yang']" ]
null
null
2405.11211
null
null
http://arxiv.org/pdf/2405.11211v1
2024-05-18T07:35:38Z
2024-05-18T07:35:38Z
Excess Delay from GDP: Measurement and Causal Analysis
Ground Delay Programs (GDPs) have been widely used to resolve excessive demand-capacity imbalances at arrival airports by shifting foreseen airborne delay to pre-departure ground delay. While offering clear safety and efficiency benefits, GDPs may also create additional delay because of imperfect execution and uncertainty in predicting arrival airport capacity. This paper presents a methodology for measuring the excess delay resulting from individual GDPs and investigates the factors that influence it using regularized regression models. We measured excess delay for 1210 GDPs from 33 U.S. airports in 2019. On a per-restricted-flight basis, the mean excess delay is 35.4 min with a standard deviation of 20.6 min. In our regression analysis of the variation in excess delay, ridge regression is found to perform best. Factors affecting excess delay include time variation during gate-out and taxi-out for flights subject to the GDP, the program rate setting and its revisions, and GDP duration.
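The regression stage could look like the following scikit-learn sketch, with a cross-validated ridge penalty; the feature columns and the random placeholder data are assumptions standing in for the GDP factors listed above.

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# X: per-GDP features (e.g., gate-out/taxi-out time variation, program rate,
# number of revisions, GDP duration); y: measured excess delay in minutes.
X = np.random.rand(1210, 4)        # placeholder for the 1210 GDPs studied
y = np.random.rand(1210) * 70      # placeholder excess-delay measurements

model = make_pipeline(
    StandardScaler(),
    RidgeCV(alphas=np.logspace(-3, 3, 13)),  # cross-validated penalty strength
)
model.fit(X, y)
print(model.named_steps["ridgecv"].alpha_)   # selected regularization strength
```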
[ "['Ke Liu' 'Mark Hansen']" ]
null
null
2405.11226
null
null
http://arxiv.org/pdf/2405.11226v1
2024-05-18T08:29:15Z
2024-05-18T08:29:15Z
The Power of Active Multi-Task Learning in Reinforcement Learning from Human Feedback
Reinforcement learning from human feedback (RLHF) has contributed to performance improvements in large language models. To tackle its reliance on substantial amounts of human-labeled data, a successful approach is multi-task representation learning, which involves learning a high-quality, low-dimensional representation from a wide range of source tasks. In this paper, we formulate RLHF as the contextual dueling bandit problem and assume a common linear representation. We demonstrate that the sample complexity of source tasks in multi-task RLHF can be reduced by considering task relevance and allocating different sample sizes to source tasks with varying task relevance. We further propose an algorithm to estimate task relevance by a small number of additional data and then learn a policy. We prove that to achieve $\varepsilon$-optimality, the sample complexity of the source tasks can be significantly reduced compared to uniform sampling. Additionally, the sample complexity of the target task is only linear in the dimension of the latent space, thanks to representation learning.
[ "['Ruitao Chen' 'Liwei Wang']" ]
null
null
2405.11230
null
null
http://arxiv.org/pdf/2405.11230v1
2024-05-18T08:51:42Z
2024-05-18T08:51:42Z
OTLP: Output Thresholding Using Mixed Integer Linear Programming
Output thresholding is the technique of searching for the best threshold to use during inference for any classifier that can produce probability estimates on training and testing datasets. It is particularly useful in highly imbalanced classification problems, where the default threshold does not account for the imbalance in class distributions and fails to give the best performance. This paper proposes OTLP, a thresholding framework using mixed integer linear programming, which is model agnostic and can support different objective functions and different sets of constraints for a diverse set of problems, including both balanced and imbalanced classification. It is particularly useful in real-world applications where theoretical thresholding techniques cannot address product-related requirements and the complexity of the applications that utilize machine learning models. Using the Credit Card Fraud Detection Dataset, we evaluate the usefulness of the framework.
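A toy version of the MILP idea: pick exactly one threshold from a candidate grid so as to maximize a precomputed objective (here F1 on a validation set). The use of PuLP, the threshold grid, and the random placeholder data are assumptions for illustration; the actual OTLP framework supports richer objectives and constraints.

```python
import numpy as np
from sklearn.metrics import f1_score
from pulp import LpProblem, LpMaximize, LpVariable, lpSum

y_true = np.random.randint(0, 2, 1000)   # placeholder validation labels
y_prob = np.random.rand(1000)            # placeholder classifier scores

thresholds = np.linspace(0.05, 0.95, 19)
f1_at = [f1_score(y_true, y_prob >= t) for t in thresholds]  # precomputed objective

prob = LpProblem("otlp_toy", LpMaximize)
pick = [LpVariable(f"pick_{i}", cat="Binary") for i in range(len(thresholds))]
prob += lpSum(p * f for p, f in zip(pick, f1_at))  # objective: F1 of chosen threshold
prob += lpSum(pick) == 1                           # constraint: choose exactly one
prob.solve()

best_t = thresholds[[i for i, p in enumerate(pick) if p.value() == 1][0]]
```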
[ "['Baran Koseoglu' 'Luca Traverso' 'Mohammed Topiwalla' 'Egor Kraev'\n 'Zoltan Szopory']" ]
null
null
2405.11237
null
null
http://arxiv.org/pdf/2405.11237v1
2024-05-18T09:31:54Z
2024-05-18T09:31:54Z
Lag Selection for Univariate Time Series Forecasting using Deep Learning: An Empirical Study
Most forecasting methods use recent past observations (lags) to model the future values of univariate time series. Selecting an adequate number of lags is important for training accurate forecasting models. Several approaches and heuristics have been devised to solve this task. However, there is no consensus about what the best approach is. Besides, lag selection procedures have been developed based on local models and classical forecasting techniques such as ARIMA. We bridge this gap in the literature by carrying out an extensive empirical analysis of different lag selection methods. We focus on deep learning methods trained in a global approach, i.e., on datasets comprising multiple univariate time series. The experiments were carried out using three benchmark databases that contain a total of 2411 univariate time series. The results indicate that the lag size is a relevant parameter for accurate forecasts. In particular, excessively small or excessively large lag sizes have a considerable negative impact on forecasting performance. Cross-validation approaches show the best performance for lag selection, but this performance is comparable with simple heuristics.
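A cross-validation lag search, one of the best-performing approaches in the study, might look like the sketch below; a simple ridge model and synthetic series stand in for the deep learning forecasters and benchmark datasets actually used.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import TimeSeriesSplit, cross_val_score

def lagged_matrix(series, n_lags):
    """Turn a univariate series into (lag features, next-value targets)."""
    X = np.column_stack([series[i : len(series) - n_lags + i] for i in range(n_lags)])
    y = series[n_lags:]
    return X, y

series = np.sin(np.linspace(0, 60, 600)) + 0.1 * np.random.randn(600)

scores = {}
for n_lags in (3, 6, 12, 24, 48):
    X, y = lagged_matrix(series, n_lags)
    cv = TimeSeriesSplit(n_splits=5)   # respects temporal order
    scores[n_lags] = cross_val_score(
        Ridge(), X, y, cv=cv, scoring="neg_mean_absolute_error"
    ).mean()

best = max(scores, key=scores.get)     # lag size with lowest CV error
```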
[ "['José Leites' 'Vitor Cerqueira' 'Carlos Soares']" ]
null
null
2405.11238
null
null
http://arxiv.org/pdf/2405.11238v1
2024-05-18T09:37:04Z
2024-05-18T09:37:04Z
SimAD: A Simple Dissimilarity-based Approach for Time Series Anomaly Detection
Despite the prevalence of reconstruction-based deep learning methods, time series anomaly detection remains challenging. Existing approaches often struggle with limited temporal contexts, inadequate representation of normal patterns, and flawed evaluation metrics, hindering their effectiveness in identifying aberrant behavior. To address these issues, we introduce $\textbf{SimAD}$, a $\textbf{Sim}$ple dissimilarity-based approach for time series $\textbf{A}$nomaly $\textbf{D}$etection. SimAD incorporates an advanced feature extractor adept at processing extended temporal windows, utilizes the EmbedPatch encoder to integrate normal behavioral patterns comprehensively, and introduces an innovative ContrastFusion module designed to accentuate distributional divergences between normal and abnormal data, thereby enhancing the robustness of anomaly discrimination. Additionally, we propose two robust evaluation metrics, UAff and NAff, addressing the limitations of existing metrics and demonstrating their reliability through theoretical and experimental analyses. Experiments across $\textbf{seven}$ diverse time series datasets demonstrate SimAD's superior performance compared to state-of-the-art methods, achieving relative improvements of $\textbf{19.85\%}$ on F1, $\textbf{4.44\%}$ on Aff-F1, $\textbf{77.79\%}$ on NAff-F1, and $\textbf{9.69\%}$ on AUC on six multivariate datasets. Code and pre-trained models are available at https://github.com/EmorZz1G/SimAD.
[ "['Zhijie Zhong' 'Zhiwen Yu' 'Xing Xi' 'Yue Xu' 'Jiahui Chen'\n 'Kaixiang Yang']" ]
null
null
2405.11242
null
null
http://arxiv.org/pdf/2405.11242v1
2024-05-18T09:50:19Z
2024-05-18T09:50:19Z
Advancing fNIRS Neuroimaging through Synthetic Data Generation and Machine Learning Applications
This study presents an integrated approach for advancing functional Near-Infrared Spectroscopy (fNIRS) neuroimaging through the synthesis of data and application of machine learning models. By addressing the scarcity of high-quality neuroimaging datasets, this work harnesses Monte Carlo simulations and parametric head models to generate a comprehensive synthetic dataset, reflecting a wide spectrum of conditions. We developed a containerized environment employing Docker and Xarray for standardized and reproducible data analysis, facilitating meaningful comparisons across different signal processing modalities. Additionally, a cloud-based infrastructure is established for scalable data generation and processing, enhancing the accessibility and quality of neuroimaging data. The combination of synthetic data generation with machine learning techniques holds promise for improving the accuracy, efficiency, and applicability of fNIRS tomography, potentially revolutionizing diagnostics and treatment strategies for neurological conditions. The methodologies and infrastructure developed herein set new standards in data simulation and analysis, paving the way for future research in neuroimaging and the broader biomedical engineering field.
[ "['Eitan Waks']" ]
null
null
2405.11255
null
null
http://arxiv.org/pdf/2405.11255v1
2024-05-18T10:56:45Z
2024-05-18T10:56:45Z
WisPerMed at "Discharge Me!": Advancing Text Generation in Healthcare with Large Language Models, Dynamic Expert Selection, and Priming Techniques on MIMIC-IV
This study aims to leverage state of the art language models to automate generating the "Brief Hospital Course" and "Discharge Instructions" sections of Discharge Summaries from the MIMIC-IV dataset, reducing clinicians' administrative workload. We investigate how automation can improve documentation accuracy, alleviate clinician burnout, and enhance operational efficacy in healthcare facilities. This research was conducted within our participation in the Shared Task Discharge Me! at BioNLP @ ACL 2024. Various strategies were employed, including few-shot learning, instruction tuning, and Dynamic Expert Selection (DES), to develop models capable of generating the required text sections. Notably, utilizing an additional clinical domain-specific dataset demonstrated substantial potential to enhance clinical language processing. The DES method, which optimizes the selection of text outputs from multiple predictions, proved to be especially effective. It achieved the highest overall score of 0.332 in the competition, surpassing single-model outputs. This finding suggests that advanced deep learning methods in combination with DES can effectively automate parts of electronic health record documentation. These advancements could enhance patient care by freeing clinician time for patient interactions. The integration of text selection strategies represents a promising avenue for further research.
[ "['Hendrik Damm' 'Tabea M. G. Pakull' 'Bahadır Eryılmaz' 'Helmut Becker'\n 'Ahmad Idrissi-Yaghir' 'Henning Schäfer' 'Sergej Schultenkämper'\n 'Christoph M. Friedrich']" ]
null
null
2405.11264
null
null
http://arxiv.org/pdf/2405.11264v1
2024-05-18T11:29:19Z
2024-05-18T11:29:19Z
Cross-Language Assessment of Mathematical Capability of ChatGPT
This paper presents an evaluation of the mathematical capability of ChatGPT across diverse languages such as Hindi, Gujarati, and Marathi. ChatGPT, based on GPT-3.5 by OpenAI, has garnered significant attention for its natural language understanding and generation abilities. However, its performance in solving mathematical problems across multiple natural languages remains a comparatively unexplored area, especially in regional Indian languages. In this paper, we explore these capabilities, use chain-of-thought prompting to determine whether it increases the accuracy of responses as much as it does in English, and provide insights into the current limitations.
[ "['Gargi Sathe' 'Aneesh Shamraj' 'Aditya Surve' 'Nahush Patil'\n 'Kumkum Saxena']" ]
null
null
2405.11275
null
null
http://arxiv.org/abs/2405.11275v1
2024-05-18T12:19:16Z
2024-05-18T12:19:16Z
Predicting and Explaining Hearing Aid Usage Using Encoder-Decoder with Attention Mechanism and SHAP
It is essential to understand the personal, behavioral, environmental, and other factors that correlate with optimal hearing aid fitting and hearing aid users' experiences in order to improve hearing loss patient satisfaction and quality of life, as well as reduce societal and financial burdens. This work proposes a novel framework that uses Encoder-decoder with attention mechanism (attn-ED) for predicting future hearing aid usage and SHAP to explain the factors contributing to this prediction. It has been demonstrated in experiments that attn-ED performs well at predicting future hearing aid usage, and that SHAP can be utilized to calculate the contribution of different factors affecting hearing aid usage. This framework aims to establish confidence that AI models can be utilized in the medical domain with the use of XAI methods. Moreover, the proposed framework can also assist clinicians in determining the nature of interventions.
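The explanation step might be wired up as below; `KernelExplainer` is a model-agnostic stand-in used here for illustration, and the `predict_usage` function, factor count, and toy surrogate are assumptions, not the attn-ED model itself.

```python
import numpy as np
import shap

def predict_usage(X):
    """Placeholder for the attn-ED model's prediction function:
    rows of personal/behavioral/environmental factors -> predicted daily usage."""
    return X @ np.array([0.5, -0.2, 0.1, 0.3])   # toy linear surrogate

background = np.random.rand(50, 4)               # reference sample of factor values
explainer = shap.KernelExplainer(predict_usage, background)
shap_values = explainer.shap_values(np.random.rand(5, 4))
# shap_values[i, j]: contribution of factor j to patient i's predicted usage
```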
[ "['Qiqi Su' 'Eleftheria Iliadou']" ]
null
null
2405.11277
null
null
http://arxiv.org/pdf/2405.11277v2
2024-07-01T23:23:41Z
2024-05-18T12:26:31Z
Action Controlled Paraphrasing
Recent studies have demonstrated the potential to control paraphrase generation, such as through syntax, which has broad applications in various downstream tasks. However, these methods often require detailed parse trees or syntactic exemplars, countering human-like paraphrasing behavior in language use. Furthermore, an inference gap exists, as control specifications are only available during training but not during inference. In this work, we propose a new setup for controlled paraphrase generation. Specifically, we represent user intent as action tokens, embedding and concatenating them with text embeddings, thus flowing together into a self-attention encoder for representation fusion. To address the inference gap, we introduce an optional action token as a placeholder that encourages the model to determine the appropriate action independently when users' intended actions are not provided. Experimental results show that our method successfully enables precise action-controlled paraphrasing and preserves or even enhances performance compared to conventional uncontrolled methods when actions are not given. Our findings promote the concept of action-controlled paraphrasing for a more user-centered design.
[ "['Ning Shi' 'Zijun Wu']" ]
null
null
2405.11280
null
null
http://arxiv.org/pdf/2405.11280v1
2024-05-18T12:32:21Z
2024-05-18T12:32:21Z
Joint Analysis of Single-Cell Data across Cohorts with Missing Modalities
Joint analysis of multi-omic single-cell data across cohorts has significantly enhanced the comprehensive analysis of cellular processes. However, most of the existing approaches for this purpose require access to samples with complete modality availability, which is impractical in many real-world scenarios. In this paper, we propose (Single-Cell Cross-Cohort Cross-Category) integration, a novel framework that learns unified cell representations under domain shift without requiring full-modality reference samples. Our generative approach learns rich cross-modal and cross-domain relationships that enable imputation of these missing modalities. Through experiments on real-world multi-omic datasets, we demonstrate that offers a robust solution to single-cell tasks such as cell type clustering, cell type classification, and feature imputation.
[ "['Marianne Arriola' 'Weishen Pan' 'Manqi Zhou' 'Qiannan Zhang' 'Chang Su'\n 'Fei Wang']" ]
null
null
2405.11295
null
null
http://arxiv.org/abs/2405.11295v1
2024-05-18T13:43:43Z
2024-05-18T13:43:43Z
Medical Image Analysis for Detection, Treatment and Planning of Disease using Artificial Intelligence Approaches
X-ray is one of the most prevalent imaging modalities for detection and diagnosis in the human body, revealing the actual anatomical structure of an organ with or without disease. Segmentation of disease in chest X-ray images is essential for diagnosis and treatment. In this paper, a framework for the segmentation of X-ray images using artificial intelligence techniques is discussed. Data are pre-processed and cleaned, followed by segmentation of the X-ray images using the SegNet and Residual U-Net approaches. Finally, segmentation is evaluated using well-known metrics such as Loss, Dice Coefficient, Jaccard Coefficient, Precision, Recall, Binary Accuracy, and Validation Accuracy. The experimental results reveal that the proposed approach performs better on all of these well-known metrics with a batch size of 16 and 50 epochs. The validation accuracy, precision, and recall of the SegNet and Residual U-Net models are 0.9815, 0.9699, 0.9574 and 0.9901, 0.9864, 0.9750, respectively.
[ "['Nand Lal Yadav' 'Satyendra Singh' 'Rajesh Kumar' 'Sudhakar Singh']" ]
null
null
2405.11299
null
null
http://arxiv.org/pdf/2405.11299v2
2024-05-27T01:09:07Z
2024-05-18T14:00:04Z
The CAP Principle for LLM Serving: A Survey of Long-Context Large Language Model Serving
We survey the large language model (LLM) serving area to understand the intricate dynamics between cost-efficiency and accuracy, which is magnified by the growing need for longer contextual understanding when deploying models at a massive scale. Our findings reveal that works in this space optimize along three distinct but conflicting goals: improving serving context length (C), improving serving accuracy (A), and improving serving performance (P). Drawing inspiration from the CAP theorem in databases, we propose a CAP principle for LLM serving, which suggests that any optimization can improve at most two of these three goals simultaneously. Our survey categorizes existing works within this framework. We find the definition and continuity of user-perceived measurement metrics are crucial in determining whether a goal has been met, akin to prior CAP databases in the wild. We recognize the CAP principle for LLM serving as a guiding principle, rather than a formal theorem, to inform designers of the inherent and dynamic trade-offs in serving models. As serving accuracy and performance have been extensively studied, this survey focuses on works that extend serving context length and address the resulting challenges.
[ "['Pai Zeng' 'Zhenyu Ning' 'Jieru Zhao' 'Weihao Cui' 'Mengwei Xu'\n 'Liwei Guo' 'Xusheng Chen' 'Yizhou Shan']" ]
null
null
2405.11311
null
null
http://arxiv.org/pdf/2405.11311v1
2024-05-18T15:04:44Z
2024-05-18T15:04:44Z
A Dual Power Grid Cascading Failure Model for the Vulnerability Analysis
Considering attacks against the power grid, one of the most effective approaches is to attack the transmission lines, leading to large cascading failures. Hence, the problem of locating the most critical or vulnerable transmission lines for a Power Grid Cascading Failure (PGCF) has drawn much attention from the research community. There exist many deterministic solutions and stochastic approximation algorithms aiming to analyze power grid vulnerability. However, it has been challenging to reveal the correlations between transmission lines in order to identify the critical ones. In this paper, we propose a novel approach that learns such correlations via an attention mechanism inspired by Transformer-based models, which were originally designed to learn the correlations of words in sentences. Multiple modifications and adjustments are proposed so that the attention mechanism produces an informative correlation matrix, the Attention Matrix. With the Attention Ranking algorithm, we are able to identify the most critical lines. The proposed Dual PGCF model provides a novel and effective analysis for improving power grid resilience against cascading failure, as demonstrated by extensive experimental results.
[ "['Tianxin Zhou' 'Xiang Li' 'Haibing Lu']" ]
null
null
2405.11318
null
null
http://arxiv.org/pdf/2405.11318v2
2024-05-27T09:32:35Z
2024-05-18T15:27:14Z
Smooth Kolmogorov Arnold networks enabling structural knowledge representation
Kolmogorov-Arnold Networks (KANs) offer an efficient and interpretable alternative to traditional multi-layer perceptron (MLP) architectures due to their finite network topology. However, according to the results of Kolmogorov and Vitushkin, the representation of generic smooth functions by KAN implementations using analytic functions constrained to a finite number of cutoff points cannot be exact. Hence, the convergence of KAN throughout the training process may be limited. This paper explores the relevance of smoothness in KANs, proposing that smooth, structurally informed KANs can achieve equivalence to MLPs in specific function classes. By leveraging inherent structural knowledge, KANs may reduce the data required for training and mitigate the risk of generating hallucinated predictions, thereby enhancing model reliability and performance in computational biomedicine.
[ "['Moein E. Samadi' 'Younes Müller' 'Andreas Schuppert']" ]
null
null
2405.11320
null
null
http://arxiv.org/pdf/2405.11320v1
2024-05-18T15:30:14Z
2024-05-18T15:30:14Z
Sampling Strategies for Mitigating Bias in Face Synthesis Methods
Synthetically generated images can be used to create media content or to complement datasets for training image analysis models. Several methods have recently been proposed for the synthesis of high-fidelity face images; however, the potential biases introduced by such methods have not been sufficiently addressed. This paper examines the bias introduced by the widely popular StyleGAN2 generative model trained on the Flickr Faces HQ dataset and proposes two sampling strategies to balance the representation of selected attributes in the generated face images. We focus on two protected attributes, gender and age, and reveal that biases arise in the distribution of randomly sampled images against very young and very old age groups, as well as against female faces. These biases are also assessed at different image quality levels based on the GIQA score. To mitigate bias, we propose two alternative methods for sampling on selected lines or spheres of the latent space to increase the number of generated samples from the under-represented classes. The experimental results show a decrease in bias against under-represented groups and a more uniform distribution of the protected attributes at different levels of image quality.
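A simplified stand-in for the balancing idea: plain rejection sampling in the latent space with a per-group quota. Note this is a swapped-in simpler technique for illustration; the paper's actual strategies sample along selected lines or spheres of the latent space, and the attribute classifier here is hypothetical.

```python
import numpy as np

def predict_age_group(latent):
    """Hypothetical attribute classifier on latent codes (placeholder logic)."""
    return int(abs(latent.sum())) % 3        # toy mapping to 3 age groups

quota = {0: 100, 1: 100, 2: 100}             # equal-representation target
kept, counts = [], {k: 0 for k in quota}
while any(counts[k] < quota[k] for k in quota):
    z = np.random.randn(512)                 # StyleGAN2-sized latent vector
    g = predict_age_group(z)
    if counts[g] < quota[g]:                 # reject over-represented groups
        kept.append(z)
        counts[g] += 1
```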
[ "['Emmanouil Maragkoudakis' 'Symeon Papadopoulos' 'Iraklis Varlamis'\n 'Christos Diou']" ]
null
null
2405.11326
null
null
http://arxiv.org/pdf/2405.11326v1
2024-05-18T15:59:41Z
2024-05-18T15:59:41Z
On the Trajectory Regularity of ODE-based Diffusion Sampling
Diffusion-based generative models use stochastic differential equations (SDEs) and their equivalent ordinary differential equations (ODEs) to establish a smooth connection between a complex data distribution and a tractable prior distribution. In this paper, we identify several intriguing trajectory properties in the ODE-based sampling process of diffusion models. We characterize an implicit denoising trajectory and discuss its vital role in forming the coupled sampling trajectory with a strong shape regularity, regardless of the generated content. We also describe a dynamic programming-based scheme to make the time schedule in sampling better fit the underlying trajectory structure. This simple strategy requires minimal modification to any given ODE-based numerical solvers and incurs negligible computational cost, while delivering superior performance in image generation, especially in $5\sim 10$ function evaluations.
[ "['Defang Chen' 'Zhenyu Zhou' 'Can Wang' 'Chunhua Shen' 'Siwei Lyu']" ]
null
null
2405.11331
null
null
http://arxiv.org/pdf/2405.11331v1
2024-05-18T16:31:32Z
2024-05-18T16:31:32Z
Generalized Multi-Objective Reinforcement Learning with Envelope Updates in URLLC-enabled Vehicular Networks
We develop a novel multi-objective reinforcement learning (MORL) framework to jointly optimize wireless network selection and autonomous driving policies in a multi-band vehicular network operating on conventional sub-6GHz spectrum and Terahertz frequencies. The proposed framework is designed to (i) maximize traffic flow and (ii) minimize collisions by controlling the vehicle's motion dynamics (i.e., speed and acceleration), while enhancing ultra-reliable low-latency communication (URLLC) and minimizing handoffs (HOs). We cast this problem as a multi-objective Markov Decision Process (MOMDP) and develop solutions for both predefined and unknown preferences over the conflicting objectives. Specifically, deep Q-network and double deep Q-network based solutions are developed first, which scalarize the transportation and telecommunication rewards using predefined preferences. We then develop a novel envelope MORL solution which develops policies that address multiple objectives with preferences unknown to the agent. While this approach reduces reliance on scalar rewards, policy effectiveness varying with different preferences is a challenge. To address this, we apply a generalized version of the Bellman equation and optimize the convex envelope of multi-objective Q values to learn a unified parametric representation capable of generating optimal policies across all possible preference configurations. Following an initial learning phase, our agent can execute optimal policies under any specified preference or infer preferences from minimal data samples. Numerical results validate the efficacy of the envelope-based MORL solution and demonstrate interesting insights related to the inter-dependency of vehicle motion dynamics, HOs, and the communication data rate. The proposed policies enable autonomous vehicles to adopt safe driving behaviors with improved connectivity.
[ "['Zijiang Yan' 'Hina Tabassum']" ]
null
null
2405.11333
null
null
http://arxiv.org/pdf/2405.11333v1
2024-05-18T16:42:44Z
2024-05-18T16:42:44Z
GinAR: An End-To-End Multivariate Time Series Forecasting Model Suitable for Variable Missing
Multivariate time series forecasting (MTSF) is crucial for decision-making, precisely forecasting future values and trends based on the complex relationships identified from historical observations of multiple sequences. Recently, Spatial-Temporal Graph Neural Networks (STGNNs) have gradually become the dominant theme of MTSF models owing to their powerful capability to mine spatial-temporal dependencies, but almost all of them heavily rely on the assumption of historical data integrity. In reality, due to factors such as data-collector failures and time-consuming repairs, it is extremely challenging to collect complete historical observations without any missing variables. In this case, STGNNs can only utilize a subset of normal variables and easily suffer from incorrect spatial-temporal dependency modeling, resulting in degraded forecasting performance. To address this problem, we propose a novel Graph Interpolation Attention Recursive Network (named GinAR) to precisely model spatial-temporal dependencies over the limited collected data for forecasting. GinAR consists of two key components, interpolation attention and adaptive graph convolution, which replace the fully connected layers of simple recursive units and are thus capable of recovering all missing variables and reconstructing the correct spatial-temporal dependencies for recursive modeling of multivariate time series data. Extensive experiments conducted on five real-world datasets demonstrate that GinAR outperforms 11 SOTA baselines, and even when 90% of variables are missing, it can still accurately predict the future values of all variables.
[ "['Chengqing Yu' 'Fei Wang' 'Zezhi Shao' 'Tangwen Qian' 'Zhao Zhang'\n 'Wei Wei' 'Yongjun Xu']" ]
null
null
2405.11344
null
null
http://arxiv.org/pdf/2405.11344v3
2024-07-13T20:00:31Z
2024-05-18T17:28:29Z
LiPost: Improved Content Understanding With Effective Use of Multi-task Contrastive Learning
In enhancing LinkedIn core content recommendation models, a significant challenge lies in improving their semantic understanding capabilities. This paper addresses the problem by leveraging multi-task learning, a method that has shown promise in various domains. We fine-tune a pre-trained, transformer-based LLM using multi-task contrastive learning with data from a diverse set of semantic labeling tasks. We observe positive transfer, leading to superior performance across all tasks when compared to training independently on each. Our model outperforms the baseline on zero-shot learning and offers improved multilingual support, highlighting its potential for broader application. The specialized content embeddings produced by our model outperform generalized embeddings offered by OpenAI on LinkedIn datasets and tasks. This work provides a robust foundation for vertical teams across LinkedIn to customize and fine-tune the LLM to their specific applications. Our work offers insights and best practices for the field to build on.
[ "['Akanksha Bindal' 'Sudarshan Ramanujam' 'Dave Golland' 'TJ Hazen'\n 'Tina Jiang' 'Fengyu Zhang' 'Peng Yan']" ]
null
null
2405.11349
null
null
http://arxiv.org/pdf/2405.11349v2
2024-06-03T12:55:58Z
2024-05-18T17:38:25Z
Unlock the Power of Algorithm Features: A Generalization Analysis for Algorithm Selection
In algorithm selection research, the discussion surrounding algorithm features has been significantly overshadowed by the emphasis on problem features. Although a few empirical studies have yielded evidence regarding the effectiveness of algorithm features, the potential benefits of incorporating algorithm features into algorithm selection models and their suitability for different scenarios remain unclear. In this paper, we address this gap by proposing the first provable guarantee for algorithm selection based on algorithm features, taking a generalization perspective. We analyze the benefits and costs associated with algorithm features and investigate how the generalization error is affected by different factors. Specifically, we examine adaptive and predefined algorithm features under transductive and inductive learning paradigms, respectively, and derive upper bounds for the generalization error based on their model's Rademacher complexity. Our theoretical findings not only provide tight upper bounds, but also offer analytical insights into the impact of various factors, such as the training scale of problem instances and candidate algorithms, model parameters, feature values, and distributional differences between the training and test data. Notably, we demonstrate how models benefit from algorithm features in complex scenarios involving many algorithms, and prove the positive correlation between the generalization error bound and the $\chi^2$-divergence of distributions.
[ "['Xingyu Wu' 'Yan Zhong' 'Jibin Wu' 'Yuxiao Huang' 'Sheng-hao Wu'\n 'Kay Chen Tan']" ]
null
null
2405.11372
null
null
http://arxiv.org/pdf/2405.11372v1
2024-05-18T19:13:49Z
2024-05-18T19:13:49Z
ReModels: Quantile Regression Averaging models
Electricity price forecasts play a crucial role in making key business decisions within the electricity markets. A focal point in this domain are probabilistic predictions, which delineate future price values in a more comprehensive manner than simple point forecasts. The golden standard in probabilistic approaches to predict energy prices is the Quantile Regression Averaging (QRA) method. In this paper, we present a Python package that encompasses the implementation of QRA, along with modifications of this approach that have appeared in the literature over the past few years. The proposed package also facilitates the acquisition and preparation of data related to electricity markets, as well as the evaluation of model predictions.
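The core of QRA fits, for each quantile, a quantile regression of observed prices on a pool of point forecasts. Below is a minimal sketch using statsmodels with random placeholder data; the ReModels package itself wraps this plus the modified QRA variants and the data-preparation utilities mentioned above.

```python
import numpy as np
import statsmodels.api as sm

prices = np.random.rand(500) * 100         # placeholder observed electricity prices
forecasts = np.random.rand(500, 4) * 100   # placeholder point forecasts (4 models)

X = sm.add_constant(forecasts)             # intercept + one column per forecaster
quantile_preds = {}
for q in (0.05, 0.5, 0.95):
    fit = sm.QuantReg(prices, X).fit(q=q)  # pinball-loss regression at quantile q
    quantile_preds[q] = fit.predict(X)     # combined probabilistic forecast
```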
[ "['Grzegorz Zakrzewski' 'Kacper Skonieczka' 'Mikołaj Małkiński'\n 'Jacek Mańdziuk']" ]
null
null
2405.11377
null
null
http://arxiv.org/pdf/2405.11377v1
2024-05-18T19:54:14Z
2024-05-18T19:54:14Z
Causal Customer Churn Analysis with Low-rank Tensor Block Hazard Model
This study introduces an innovative method for analyzing the impact of various interventions on customer churn, using the potential outcomes framework. We present a new causal model, the tensorized latent factor block hazard model, which incorporates tensor completion methods for a principled causal analysis of customer churn. A crucial element of our approach is the formulation of a 1-bit tensor completion for the parameter tensor. This captures hidden customer characteristics and temporal elements from churn records, effectively addressing the binary nature of churn data and its time-monotonic trends. Our model also uniquely categorizes interventions by their similar impacts, enhancing the precision and practicality of implementing customer retention strategies. For computational efficiency, we apply a projected gradient descent algorithm combined with spectral clustering. We lay down the theoretical groundwork for our model, including its non-asymptotic properties. The efficacy and superiority of our model are further validated through comprehensive experiments on both simulated and real-world applications.
[ "['Chenyin Gao' 'Zhiming Zhang' 'Shu Yang']" ]
null
null
2405.11383
null
null
http://arxiv.org/pdf/2405.11383v2
2024-05-21T11:00:13Z
2024-05-18T20:12:16Z
Investigating KAN-Based Physics-Informed Neural Networks for EMI/EMC Simulations
The main objective of this paper is to investigate the feasibility of employing Physics-Informed Neural Networks (PINNs) techniques, in particular Kolmogorov-Arnold Networks (KANs), for facilitating Electromagnetic Interference (EMI) simulations. It introduces some common EM problem formulations and how they can be solved using AI-driven solutions instead of lengthy and complex full-wave numerical simulations. This research may open new horizons for green EMI simulation workflows with less energy consumption and feasible computational capacity.
[ "['Kun Qian' 'Mohamed Kheir']" ]
null
null
2405.11389
null
null
http://arxiv.org/pdf/2405.11389v1
2024-05-18T20:24:11Z
2024-05-18T20:24:11Z
Adjacent Leader Decentralized Stochastic Gradient Descent
This work focuses on the decentralized deep learning optimization framework. We propose Adjacent Leader Decentralized Gradient Descent (AL-DSGD) for improving final model performance, accelerating convergence, and reducing the communication overhead of decentralized deep learning optimizers. AL-DSGD relies on two main ideas. Firstly, to increase the influence of the strongest learners on the learning system, it assigns weights to different neighbor workers according to both their performance and their degree when averaging among them, and it applies a corrective force on the workers dictated by both the currently best-performing neighbor and the neighbor with the maximal degree. Secondly, to alleviate the deterioration of convergence speed and performance at nodes with lower degrees, AL-DSGD relies on dynamic communication graphs, which effectively allow the workers to communicate with more nodes while keeping node degrees low. Experiments demonstrate that AL-DSGD accelerates the convergence of decentralized state-of-the-art techniques and improves their test performance, especially in communication-constrained environments. We also theoretically prove the convergence of the proposed scheme. Finally, we release to the community a highly general and concise PyTorch-based library for distributed training of deep learning models that supports easy implementation of any distributed deep learning approach ((a)synchronous, (de)centralized).
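A loose schematic of the first idea, performance- and degree-weighted neighbor averaging plus a corrective pull toward the best-performing and highest-degree neighbors. The weighting scheme, the coefficient `lam`, and the exact mixing are assumptions for illustration, not the paper's precise update rule.

```python
import numpy as np

def al_dsgd_average(w_self, neighbors, lam=0.1):
    """neighbors: list of (params, validation_score, degree) tuples (illustrative)."""
    scores = np.array([s for _, s, _ in neighbors], dtype=float)
    degrees = np.array([d for _, _, d in neighbors], dtype=float)
    weights = scores * degrees                  # stronger, better-connected workers count more
    weights = weights / weights.sum()
    mixed = sum(wt * p for wt, (p, _, _) in zip(weights, neighbors))

    best = max(neighbors, key=lambda n: n[1])[0]   # best-performing neighbor
    hub = max(neighbors, key=lambda n: n[2])[0]    # maximal-degree neighbor
    # Convex combination: local/neighbor average plus corrective force toward the leaders.
    return (1 - 2 * lam) * 0.5 * (w_self + mixed) + lam * best + lam * hub
```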
[ "['Haoze He' 'Jing Wang' 'Anna Choromanska']" ]
null
null
2405.11397
null
null
http://arxiv.org/pdf/2405.11397v1
2024-05-18T21:32:29Z
2024-05-18T21:32:29Z
Preparing for Black Swans: The Antifragility Imperative for Machine Learning
Operating safely and reliably despite continual distribution shifts is vital for high-stakes machine learning applications. This paper builds upon the transformative concept of ``antifragility'' introduced by (Taleb, 2014) as a constructive design paradigm to not just withstand but benefit from volatility. We formally define antifragility in the context of online decision making as dynamic regret's strictly concave response to environmental variability, revealing limitations of current approaches focused on resisting rather than benefiting from nonstationarity. Our contribution lies in proposing potential computational pathways for engineering antifragility, grounding the concept in online learning theory and drawing connections to recent advancements in areas such as meta-learning, safe exploration, continual learning, multi-objective/quality-diversity optimization, and foundation models. By identifying promising mechanisms and future research directions, we aim to put antifragility on a rigorous theoretical foundation in machine learning. We further emphasize the need for clear guidelines, risk assessment frameworks, and interdisciplinary collaboration to ensure responsible application.
[ "['Ming Jin']" ]
null
null
2405.11401
null
null
http://arxiv.org/pdf/2405.11401v2
2024-05-24T01:40:41Z
2024-05-18T22:01:55Z
PDE Control Gym: A Benchmark for Data-Driven Boundary Control of Partial Differential Equations
Over the last decade, data-driven methods have surged in popularity, emerging as valuable tools for control theory. As such, neural network approximations of control feedback laws, system dynamics, and even Lyapunov functions have attracted growing attention. With the ascent of learning-based control, the need for accurate, fast, and easy-to-use benchmarks has increased. In this work, we present the first learning-based environment for boundary control of PDEs. In our benchmark, we introduce three foundational PDE problems - a 1D transport PDE, a 1D reaction-diffusion PDE, and a 2D Navier-Stokes PDE - whose solvers are bundled in a user-friendly reinforcement learning gym. With this gym, we then present the first set of model-free reinforcement learning algorithms for solving this series of benchmark problems, achieving stability, although at a higher cost compared to model-based PDE backstepping. With the set of benchmark environments and detailed examples, this work significantly lowers the barrier to entry for learning-based PDE control - a topic largely unexplored by the data-driven control community. The entire benchmark is available on GitHub along with detailed documentation, and the presented reinforcement learning models are open sourced.
[ "['Luke Bhan' 'Yuexin Bian' 'Miroslav Krstic' 'Yuanyuan Shi']" ]
null
null
2405.11404
null
null
http://arxiv.org/pdf/2405.11404v1
2024-05-18T22:13:55Z
2024-05-18T22:13:55Z
How big is Big Data?
Big data has ushered in a new wave of predictive power using machine learning models. In this work, we assess what {\it big} means in the context of typical materials-science machine-learning problems. This concerns not only data volume, but also data quality and veracity as much as infrastructure issues. With selected examples, we ask (i) how models generalize to similar datasets, (ii) how high-quality datasets can be gathered from heterogeneous sources, (iii) how the feature set and complexity of a model can affect expressivity, and (iv) what infrastructure requirements are needed to create larger datasets and train models on them. In sum, we find that big data present unique challenges along very different aspects that should serve to motivate further work.
[ "['Daniel T. Speckhard' 'Tim Bechtel' 'Luca M. Ghiringhelli' 'Martin Kuban'\n 'Santiago Rigamonti' 'Claudia Draxl']" ]
null
null
2405.11413
null
null
http://arxiv.org/pdf/2405.11413v1
2024-05-18T23:21:39Z
2024-05-18T23:21:39Z
Exploring speech style spaces with language models: Emotional TTS without emotion labels
Many frameworks for emotional text-to-speech (E-TTS) rely on human-annotated emotion labels that are often inaccurate and difficult to obtain. Learning emotional prosody implicitly presents a tough challenge due to the subjective nature of emotions. In this study, we propose a novel approach that leverages text awareness to acquire emotional styles without the need for explicit emotion labels or text prompts. We present TEMOTTS, a two-stage framework for E-TTS that is trained without emotion labels and is capable of inference without auxiliary inputs. Our proposed method performs knowledge transfer between the linguistic space learned by BERT and the emotional style space constructed by global style tokens. Our experimental results demonstrate the effectiveness of our proposed framework, showcasing improvements in emotional accuracy and naturalness. This is one of the first studies to leverage the emotional correlation between spoken content and expressive delivery for emotional TTS.
[ "['Shreeram Suresh Chandra' 'Zongyang Du' 'Berrak Sisman']" ]
null
null
2405.11416
null
null
http://arxiv.org/pdf/2405.11416v1
2024-05-19T00:09:42Z
2024-05-19T00:09:42Z
Discrete-state Continuous-time Diffusion for Graph Generation
Graph is a prevalent discrete data structure, whose generation has wide applications such as drug discovery and circuit design. Diffusion generative models, as an emerging research focus, have been applied to graph generation tasks. Overall, according to the space of states and time steps, diffusion generative models can be categorized into discrete-/continuous-state discrete-/continuous-time fashions. In this paper, we formulate the graph diffusion generation in a discrete-state continuous-time setting, which has never been studied in previous graph diffusion models. The rationale of such a formulation is to preserve the discrete nature of graph-structured data and meanwhile provide flexible sampling trade-offs between sample quality and efficiency. Analysis shows that our training objective is closely related to generation quality, and our proposed generation framework enjoys ideal invariant/equivariant properties concerning the permutation of node ordering. Our proposed model shows competitive empirical performance against state-of-the-art graph generation solutions on various benchmarks and, at the same time, can flexibly trade off the generation quality and efficiency in the sampling phase.
[ "['Zhe Xu' 'Ruizhong Qiu' 'Yuzhong Chen' 'Huiyuan Chen' 'Xiran Fan'\n 'Menghai Pan' 'Zhichen Zeng' 'Mahashweta Das' 'Hanghang Tong']" ]
null
null
2405.11417
null
null
http://arxiv.org/pdf/2405.11417v1
2024-05-19T00:19:59Z
2024-05-19T00:19:59Z
Budgeted Recommendation with Delayed Feedback
In a conventional contextual multi-armed bandit problem, the feedback (or reward) is immediately observable after an action. Nevertheless, delayed feedback arises in numerous real-life situations and is particularly crucial in time-sensitive applications. The exploration-exploitation dilemma becomes particularly challenging under such conditions, as it couples with the interplay between delays and limited resources. Besides, a limited budget often aggravates the problem by restricting the exploration potential. A motivating example is the distribution of medical supplies at the early stage of COVID-19. The delayed feedback of testing results, thus insufficient information for learning, degraded the efficiency of resource allocation. Motivated by such applications, we study the effect of delayed feedback on constrained contextual bandits. We develop a decision-making policy, delay-oriented resource allocation with learning (DORAL), to optimize the resource expenditure in a contextual multi-armed bandit problem with arm-dependent delayed feedback.
[ "['Kweiguu Liu' 'Setareh Maghsudi']" ]
null
null
2405.11422
null
null
http://arxiv.org/pdf/2405.11422v1
2024-05-19T01:43:52Z
2024-05-19T01:43:52Z
Large Language Models are Biased Reinforcement Learners
In-context learning enables large language models (LLMs) to perform a variety of tasks, including learning to make reward-maximizing choices in simple bandit tasks. Given their potential use as (autonomous) decision-making agents, it is important to understand how these models perform such reinforcement learning (RL) tasks and the extent to which they are susceptible to biases. Motivated by the fact that, in humans, it has been widely documented that the value of an outcome depends on how it compares to other local outcomes, the present study focuses on whether similar value encoding biases apply to how LLMs encode rewarding outcomes. Results from experiments with multiple bandit tasks and models show that LLMs exhibit behavioral signatures of a relative value bias. Adding explicit outcome comparisons to the prompt produces opposing effects on performance, enhancing maximization in trained choice sets but impairing generalization to new choice sets. Computational cognitive modeling reveals that LLM behavior is well-described by a simple RL algorithm that incorporates relative values at the outcome encoding stage. Lastly, we present preliminary evidence that the observed biases are not limited to fine-tuned LLMs, and that relative value processing is detectable in the final hidden layer activations of a raw, pretrained model. These findings have important implications for the use of LLMs in decision-making applications.
[ "['William M. Hayes' 'Nicolas Yax' 'Stefano Palminteri']" ]
null
null
2405.11427
null
null
http://arxiv.org/pdf/2405.11427v1
2024-05-19T02:18:04Z
2024-05-19T02:18:04Z
Quantum Neural Networks for Solving Power System Transient Simulation Problem
Quantum computing, leveraging principles of quantum mechanics, represents a transformative approach in computational methodologies, offering significant enhancements over traditional classical systems. This study tackles the complex and computationally demanding task of simulating power system transients through solving differential algebraic equations (DAEs). We introduce two novel Quantum Neural Networks (QNNs): the Sinusoidal-Friendly QNN and the Polynomial-Friendly QNN, proposing them as effective alternatives to conventional simulation techniques. Our application of these QNNs successfully simulates two small power systems, demonstrating their potential to achieve good accuracy. We further explore various configurations, including time intervals, training points, and the selection of classical optimizers, to optimize the solving of DAEs using QNNs. This research not only marks a pioneering effort in applying quantum computing to power system simulations but also expands the potential of quantum technologies in addressing intricate engineering challenges.
[ "['Mohammadreza Soltaninia' 'Junpeng Zhan']" ]
null
null
2405.11431
null
null
http://arxiv.org/pdf/2405.11431v2
2024-06-02T07:20:29Z
2024-05-19T03:15:27Z
Review of deep learning models for crypto price prediction: implementation and evaluation
There has been much interest from investors and researchers in accurate cryptocurrency price forecast models. Deep learning models are prominent machine learning techniques that have transformed various fields and shown potential in finance and economics. Although various deep learning models have been explored for cryptocurrency price forecasting, it is not clear which models are suitable given the high market volatility. In this study, we review the literature on deep learning for cryptocurrency price forecasting and evaluate novel deep learning models for cryptocurrency stock price prediction. Our deep learning models include variants of long short-term memory (LSTM) recurrent neural networks, variants of convolutional neural networks (CNNs), and the Transformer model. We evaluate univariate and multivariate approaches for multi-step-ahead prediction of cryptocurrency close prices. We also carry out volatility analysis on the four cryptocurrencies, which reveals significant fluctuations in their prices throughout the COVID-19 pandemic. Additionally, we investigate the prediction accuracy in two scenarios defined by different training sets for the models. First, we use the pre-COVID-19 datasets to model cryptocurrency close-price forecasting during the early period of COVID-19. Secondly, we utilise data from the COVID-19 period to predict prices for 2023 to 2024. Our results show that the convolutional LSTM with a multivariate approach provides the best prediction accuracy in the two major experimental settings. Our results also indicate that the multivariate deep learning models exhibit better performance in forecasting the four cryptocurrencies than the univariate models.
[ "['Jingyang Wu' 'Xinyi Zhang' 'Fangyixuan Huang' 'Haochen Zhou'\n 'Rohtiash Chandra']" ]
null
null
2405.11432
null
null
http://arxiv.org/pdf/2405.11432v1
2024-05-19T03:27:31Z
2024-05-19T03:27:31Z
On Robust Reinforcement Learning with Lipschitz-Bounded Policy Networks
This paper presents a study of robust policy networks in deep reinforcement learning. We investigate the benefits of policy parameterizations that naturally satisfy constraints on their Lipschitz bound, analyzing their empirical performance and robustness on two representative problems: pendulum swing-up and Atari Pong. We illustrate that policy networks with small Lipschitz bounds are significantly more robust to disturbances, random noise, and targeted adversarial attacks than unconstrained policies composed of vanilla multi-layer perceptrons or convolutional neural networks. Moreover, we find that choosing a policy parameterization with a non-conservative Lipschitz bound and an expressive, nonlinear layer architecture gives the user much finer control over the performance-robustness trade-off than existing state-of-the-art methods based on spectral normalization.
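One simple way to cap a policy network's Lipschitz constant is spectral normalization of each linear layer, which bounds each layer's gain by 1 so the composition with 1-Lipschitz activations is itself 1-Lipschitz. This is one baseline parameterization in the spirit of the comparison above, not the paper's proposed non-conservative method; the dimensions are illustrative.

```python
import torch.nn as nn
from torch.nn.utils import spectral_norm

def lipschitz_mlp(obs_dim, act_dim, hidden=64):
    """MLP policy whose Lipschitz bound is limited via spectral normalization."""
    return nn.Sequential(
        spectral_norm(nn.Linear(obs_dim, hidden)),
        nn.ReLU(),                      # ReLU is 1-Lipschitz
        spectral_norm(nn.Linear(hidden, hidden)),
        nn.ReLU(),
        spectral_norm(nn.Linear(hidden, act_dim)),
        nn.Tanh(),                      # bounded actions, also 1-Lipschitz
    )

policy = lipschitz_mlp(obs_dim=3, act_dim=1)   # e.g., pendulum swing-up
```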
[ "['Nicholas H. Barbara' 'Ruigang Wang' 'Ian R. Manchester']" ]
null
null
2405.11446
null
null
http://arxiv.org/pdf/2405.11446v1
2024-05-19T04:49:42Z
2024-05-19T04:49:42Z
MAML-en-LLM: Model Agnostic Meta-Training of LLMs for Improved In-Context Learning
Adapting large language models (LLMs) to unseen tasks with in-context training samples, without fine-tuning, remains an important research problem. To learn a robust LLM that adapts well to unseen tasks, multiple meta-training approaches have been proposed, such as MetaICL and MetaICT, which involve meta-training pre-trained LLMs on a wide variety of diverse tasks. These meta-training approaches essentially perform in-context multi-task fine-tuning and evaluate on a disjoint test set of tasks. Even though they achieve impressive performance, their goal is never to compute a truly general set of parameters. In this paper, we propose MAML-en-LLM, a novel method for meta-training LLMs, which can learn truly generalizable parameters that not only perform well on disjoint tasks but also adapt to unseen tasks. We see an average performance increase of 2% on unseen domains and a massive 4% improvement in adaptation performance. Furthermore, we demonstrate that MAML-en-LLM outperforms baselines in settings with a limited amount of training data on both seen and unseen domains by an average of 2%. Finally, we discuss the effects of task type, optimizers, and task complexity, an avenue barely explored in the meta-training literature. Exhaustive experiments across 7 task settings along with two data settings demonstrate that models trained with MAML-en-LLM outperform SOTA meta-training approaches.
[ "['Sanchit Sinha' 'Yuguang Yue' 'Victor Soto' 'Mayank Kulkarni'\n 'Jianhua Lu' 'Aidong Zhang']" ]
null
null
2405.11449
null
null
http://arxiv.org/pdf/2405.11449v2
2024-05-25T11:58:27Z
2024-05-19T04:58:53Z
NetMamba: Efficient Network Traffic Classification via Pre-training Unidirectional Mamba
Network traffic classification is a crucial research area aiming to enhance service quality, streamline network management, and bolster cybersecurity. To address the growing complexity of transmission encryption techniques, various machine learning and deep learning methods have been proposed. However, existing approaches face two main challenges. Firstly, they struggle with model inefficiency due to the quadratic complexity of the widely used Transformer architecture. Secondly, they suffer from inadequate traffic representation because of discarding important byte information while retaining unwanted biases. To address these challenges, we propose NetMamba, an efficient linear-time state space model equipped with a comprehensive traffic representation scheme. We adopt a specially selected and improved unidirectional Mamba architecture for the networking field, instead of the Transformer, to address efficiency issues. In addition, we design a traffic representation scheme to extract valid information from massive traffic data while removing biased information. Evaluation experiments on six public datasets encompassing three main classification tasks showcase NetMamba's superior classification performance compared to state-of-the-art baselines. It achieves an accuracy rate of nearly 99% (some over 99%) in all tasks. Additionally, NetMamba demonstrates excellent efficiency, improving inference speed by up to 60 times while maintaining comparably low memory usage. Furthermore, NetMamba exhibits superior few-shot learning abilities, achieving better classification performance with fewer labeled data. To the best of our knowledge, NetMamba is the first model to tailor the Mamba architecture for networking.
[ "['Tongze Wang' 'Xiaohui Xie' 'Wenduo Wang' 'Chuyi Wang' 'Youjian Zhao'\n 'Yong Cui']" ]
null
null
2405.11454
null
null
http://arxiv.org/pdf/2405.11454v1
2024-05-19T05:39:46Z
2024-05-19T05:39:46Z
Comparisons Are All You Need for Optimizing Smooth Functions
When optimizing machine learning models, there are various scenarios where gradient computations are challenging or even infeasible. Furthermore, in reinforcement learning (RL), preference-based RL that only compares between options has wide applications, including reinforcement learning with human feedback in large language models. In this paper, we systematically study optimization of a smooth function $f\colon\mathbb{R}^n\to\mathbb{R}$ only assuming an oracle that compares function values at two points and tells which is larger. When $f$ is convex, we give two algorithms using $\tilde{O}(n/\epsilon)$ and $\tilde{O}(n^{2})$ comparison queries to find an $\epsilon$-optimal solution, respectively. When $f$ is nonconvex, our algorithm uses $\tilde{O}(n/\epsilon^2)$ comparison queries to find an $\epsilon$-approximate stationary point. All these results match the best-known zeroth-order algorithms with function evaluation queries in $n$ dependence, thus suggesting that \emph{comparisons are all you need for optimizing smooth functions using derivative-free methods}. In addition, we also give an algorithm for escaping saddle points and reaching an $\epsilon$-second-order stationary point of a nonconvex $f$, using $\tilde{O}(n^{1.5}/\epsilon^{2.5})$ comparison queries.
[ "['Chenyi Zhang' 'Tongyang Li']" ]
null
null
2405.11457
null
null
http://arxiv.org/pdf/2405.11457v1
2024-05-19T05:58:44Z
2024-05-19T05:58:44Z
Deep Dive into Model-free Reinforcement Learning for Biological and Robotic Systems: Theory and Practice
Animals and robots exist in a physical world and must coordinate their bodies to achieve behavioral objectives. With recent developments in deep reinforcement learning, it is now possible for scientists and engineers to obtain sensorimotor strategies (policies) for specific tasks using physically simulated bodies and environments. However, the utility of these methods goes beyond the constraints of a specific task; they offer an exciting framework for understanding the organization of an animal sensorimotor system in connection to its morphology and physical interaction with the environment, as well as for deriving general design rules for sensing and actuation in robotic systems. Algorithms and code implementing both learning agents and environments are increasingly available, but the basic assumptions and choices that go into the formulation of an embodied feedback control problem using deep reinforcement learning may not be immediately apparent. Here, we present a concise exposition of the mathematical and algorithmic aspects of model-free reinforcement learning, specifically through the use of \textit{actor-critic} methods, as a tool for investigating the feedback control underlying animal and robotic behavior.
[ "['Yusheng Jiao' 'Feng Ling' 'Sina Heydari' 'Nicolas Heess' 'Josh Merel'\n 'Eva Kanso']" ]
null
null
2405.11464
null
null
http://arxiv.org/pdf/2405.11464v2
2024-07-01T14:27:51Z
2024-05-19T06:43:12Z
Efficient Prompt Tuning by Multi-Space Projection and Prompt Fusion
Prompt tuning is a promising method to fine-tune a pre-trained language model without retraining its large-scale parameters. Instead, it attaches a soft prompt to the input text, whereby downstream tasks can be well adapted by merely learning the embeddings of prompt tokens. Nevertheless, existing methods still suffer from two challenges: (i) they are hard to balance between accuracy and efficiency; a longer (shorter) soft prompt generally leads to better (worse) accuracy but at the cost of more (less) training time. (ii) The performance may not be consistent when adapting to different downstream tasks. We attribute this to the same embedding space having to serve the differing requirements of downstream tasks. To address these issues, we propose an Efficient Prompt Tuning method (EPT) based on multi-space projection and prompt fusion. Specifically, it decomposes a given soft prompt into a shorter prompt and two low-rank matrices, significantly reducing the training time. Accuracy is also enhanced by leveraging the low-rank matrices and the short prompt as additional knowledge sources to enrich the semantics of the original short prompt. In addition, we project the soft prompt into multiple subspaces to improve performance consistency, and then adaptively learn the combination weights of the different spaces through a gating network. Experiments on 13 natural language processing downstream tasks show that our method significantly and consistently outperforms 11 comparison methods, with relative improvements of up to 12.9% and training time decreased by 14%.
[ "['Pengxiang Lan' 'Enneng Yang' 'Yuting Liu' 'Guibing Guo' 'Linying Jiang'\n 'Jianzhe Zhao' 'Xingwei Wang']" ]
null
null
2405.11470
null
null
http://arxiv.org/pdf/2405.11470v1
2024-05-19T07:39:22Z
2024-05-19T07:39:22Z
VCformer: Variable Correlation Transformer with Inherent Lagged Correlation for Multivariate Time Series Forecasting
Multivariate time series (MTS) forecasting has been extensively applied across diverse domains, such as weather prediction and energy consumption. However, current studies still rely on the vanilla point-wise self-attention mechanism to capture cross-variable dependencies, which is inadequate for extracting the intricate cross-correlations implied between variables. To fill this gap, we propose the Variable Correlation Transformer (VCformer), which utilizes a Variable Correlation Attention (VCA) module to mine the correlations among variables. Specifically, based on stochastic process theory, VCA calculates and integrates the cross-correlation scores corresponding to different lags between queries and keys, thereby enhancing its ability to uncover multivariate relationships. Additionally, inspired by Koopman dynamics theory, we also develop a Koopman Temporal Detector (KTD) to better address non-stationarity in time series. These two key components enable VCformer to extract both multivariate correlations and temporal dependencies. Our extensive experiments on eight real-world datasets demonstrate the effectiveness of VCformer, achieving top-tier performance compared to other state-of-the-art baseline models. Code is available at this repository: https://github.com/CSyyn/VCformer.
[ "['Yingnan Yang' 'Qingling Zhu' 'Jianyong Chen']" ]
null
null
2405.11494
null
null
http://arxiv.org/abs/2405.11494v1
2024-05-19T09:25:55Z
2024-05-19T09:25:55Z
Automated Coastline Extraction Using Edge Detection Algorithms
We analyse the effectiveness of edge detection algorithms for the purpose of automatically extracting coastlines from satellite images. Four algorithms (Canny, Sobel, Scharr and Prewitt) are compared visually and using metrics. With an average SSIM of 0.8, Canny detected edges that were closest to the reference edges. However, the algorithm had difficulty distinguishing noisy edges, e.g. due to development, from coastline edges. In addition, histogram equalization and Gaussian blur were shown to improve the effectiveness of the edge detection algorithms by up to 1.5 and 1.6 times respectively.
[ "[\"Conor O'Sullivan\" 'Seamus Coveney' 'Xavier Monteys' 'Soumyabrata Dev']" ]
null
null
2405.11498
null
null
http://arxiv.org/abs/2405.11498v1
2024-05-19T09:51:10Z
2024-05-19T09:51:10Z
The Effectiveness of Edge Detection Evaluation Metrics for Automated Coastline Detection
We analyse the effectiveness of RMSE, PSNR, SSIM and FOM for evaluating edge detection algorithms used for automated coastline detection. Typically, the accuracy of detected coastlines is assessed visually. This can be impractical on a large scale, leading to the need for objective evaluation metrics. Hence, we conduct an experiment to find reliable metrics. We apply Canny edge detection to 95 coastline satellite images across 49 testing locations. We vary the hysteresis thresholds and compare metric values to a visual analysis of detected edges. We found that FOM was the most reliable metric for selecting the best threshold. It could select a better threshold 92.6% of the time and the best threshold 66.3% of the time. This compares to RMSE, PSNR and SSIM, which could select the best threshold 6.3%, 6.3% and 11.6% of the time respectively. We provide a reason for these results by reformulating RMSE, PSNR and SSIM in terms of confusion matrix measures. This suggests these metrics not only fail for this experiment but are not useful for evaluating edge detection in general.
[ "[\"Conor O'Sullivan\" 'Seamus Coveney' 'Xavier Monteys' 'Soumyabrata Dev']" ]
null
null
2405.11500
null
null
http://arxiv.org/abs/2405.11500v1
2024-05-19T09:57:34Z
2024-05-19T09:57:34Z
Interpreting a Semantic Segmentation Model for Coastline Detection
We interpret a deep-learning semantic segmentation model used to classify coastline satellite images into land and water. This is to build trust in the model and gain new insight into the process of coastal water body extraction. Specifically, we seek to understand which spectral bands are important for predicting segmentation masks. This is done using a permutation importance approach. Results show that NIR is the most important spectral band: permuting it led to a decrease in accuracy of 38.12 percentage points. This is followed by the Water Vapour, SWIR 1 and Blue bands, with decreases of 2.58, 0.78 and 0.19 percentage points respectively. Water Vapour is not typically used in water indices, and these results suggest it may be useful for water body extraction. Permuting the Coastal Aerosol, Green, Red, RE1, RE2, RE3, RE4 and SWIR 2 bands did not decrease accuracy. This suggests they could be excluded from future model builds, reducing complexity and computational requirements.
[ "[\"Conor O'Sullivan\" 'Seamus Coveney' 'Xavier Monteys' 'Soumyabrata Dev']" ]
null
null
2405.11519
null
null
http://arxiv.org/pdf/2405.11519v1
2024-05-19T11:17:00Z
2024-05-19T11:17:00Z
MSNER: A Multilingual Speech Dataset for Named Entity Recognition
While extensively explored in text-based tasks, Named Entity Recognition (NER) remains largely neglected in spoken language understanding. Existing resources are limited to a single, English-only dataset. This paper addresses this gap by introducing MSNER, a freely available, multilingual speech corpus annotated with named entities. It provides annotations for the VoxPopuli dataset in four languages (Dutch, French, German, and Spanish). We also release an efficient annotation tool that leverages automatic pre-annotations for faster manual refinement. This results in 590 and 15 hours of silver-annotated speech for training and validation respectively, alongside a 17-hour, manually-annotated evaluation set. We further provide an analysis comparing silver and gold annotations. Finally, we present baseline NER models to stimulate further research on this newly available dataset.
[ "['Quentin Meeus' 'Marie-Francine Moens' 'Hugo Van hamme']" ]
null
null
2405.11525
null
null
http://arxiv.org/pdf/2405.11525v1
2024-05-19T11:36:45Z
2024-05-19T11:36:45Z
Overcoming Data and Model Heterogeneities in Decentralized Federated Learning via Synthetic Anchors
Conventional Federated Learning (FL) involves collaborative training of a global model while maintaining user data privacy. One of its branches, decentralized FL, is a serverless network that allows clients to own and optimize different local models separately, which saves management and communication resources. Despite the promising advancements in decentralized FL, it may reduce model generalizability due to the lack of a global model. In this scenario, managing data and model heterogeneity among clients becomes a crucial problem, which poses a unique challenge: how can every client's local model learn generalizable representations in a decentralized manner? To address this challenge, we propose a novel decentralized FL technique that introduces Synthetic Anchors, dubbed DeSA. Based on the theory of domain adaptation and Knowledge Distillation (KD), we theoretically and empirically show that synthesizing global anchors based on the raw data distribution facilitates mutual knowledge transfer. We further design two effective regularization terms for local training: 1) a REG loss that regularizes the distribution of the client's latent embedding with the anchors, and 2) a KD loss that enables clients to learn from others. Through extensive experiments on diverse client data distributions, we showcase the effectiveness of DeSA in enhancing both inter- and intra-domain accuracy of each client.
[ "['Chun-Yin Huang' 'Kartik Srinivas' 'Xin Zhang' 'Xiaoxiao Li']" ]
null
null
2405.11530
null
null
http://arxiv.org/pdf/2405.11530v1
2024-05-19T11:55:48Z
2024-05-19T11:55:48Z
Learning More Generalized Experts by Merging Experts in Mixture-of-Experts
We observe that incorporating a shared layer in a mixture-of-experts can lead to performance degradation. This leads us to hypothesize that learning shared features poses challenges in deep learning, potentially caused by the same feature being learned as various different features. To address this issue, we track each expert's usage frequency and merge the two most frequently selected experts. We then update the least frequently selected expert using the combination of experts. This approach, combined with the subsequent learning of the router's expert selection, allows the model to determine if the most frequently selected experts have learned the same feature differently. If they have, the combined expert can be further trained to learn a more general feature. Consequently, our algorithm enhances transfer learning and mitigates catastrophic forgetting when applied to multi-domain task incremental learning.
[ "['Sejik Park']" ]
null
null
2405.11533
null
null
http://arxiv.org/pdf/2405.11533v1
2024-05-19T12:24:30Z
2024-05-19T12:24:30Z
Hierarchical Selective Classification
Deploying deep neural networks for risk-sensitive tasks necessitates an uncertainty estimation mechanism. This paper introduces hierarchical selective classification, extending selective classification to a hierarchical setting. Our approach leverages the inherent structure of class relationships, enabling models to reduce the specificity of their predictions when faced with uncertainty. In this paper, we first formalize hierarchical risk and coverage, and introduce hierarchical risk-coverage curves. Next, we develop algorithms for hierarchical selective classification (which we refer to as "inference rules"), and propose an efficient algorithm that guarantees a target accuracy constraint with high probability. Lastly, we conduct extensive empirical studies on over a thousand ImageNet classifiers, revealing that training regimes such as CLIP, pretraining on ImageNet21k and knowledge distillation boost hierarchical selective performance.
[ "['Shani Goren' 'Ido Galil' 'Ran El-Yaniv']" ]
null
null
2405.11542
null
null
http://arxiv.org/pdf/2405.11542v2
2024-05-23T02:27:10Z
2024-05-19T13:15:23Z
From Fourier to Neural ODEs: Flow Matching for Modeling Complex Systems
Modeling complex systems using standard neural ordinary differential equations (NODEs) often faces essential challenges, including high computational costs and susceptibility to local optima. To address these challenges, we propose a simulation-free framework, called Fourier NODEs (FNODEs), that effectively trains NODEs by directly matching the target vector field based on Fourier analysis. Specifically, we employ Fourier analysis to estimate temporal and potentially high-order spatial gradients from noisy observational data. We then incorporate the estimated spatial gradients as additional inputs to a neural network, and utilize the estimated temporal gradient as the optimization objective for the output of the neural network. The trained neural network then generates more data points through an ODE solver without participating in the computational graph, facilitating more accurate estimation of gradients based on Fourier analysis. These two steps form a positive feedback loop, enabling accurate dynamics modeling in our framework. Consequently, our approach outperforms state-of-the-art methods in terms of training time, dynamics prediction, and robustness. Finally, we demonstrate the superior performance of our framework using a number of representative complex systems.
[ "['Xin Li' 'Jingdong Zhang' 'Qunxi Zhu' 'Chengli Zhao' 'Xue Zhang'\n 'Xiaojun Duan' 'Wei Lin']" ]
null
null
2405.11547
null
null
http://arxiv.org/pdf/2405.11547v2
2024-06-20T15:15:15Z
2024-05-19T13:23:05Z
Certified Robust Accuracy of Neural Networks Are Bounded due to Bayes Errors
Adversarial examples pose a security threat to many critical systems built on neural networks. While certified training improves robustness, it also decreases accuracy noticeably. Despite various proposals for addressing this issue, the significant accuracy drop remains. More importantly, it is not clear whether there is a certain fundamental limit on achieving robustness whilst maintaining accuracy. In this work, we offer a novel perspective based on Bayes errors. By adopting Bayes error to robustness analysis, we investigate the limit of certified robust accuracy, taking into account data distribution uncertainties. We first show that the accuracy inevitably decreases in the pursuit of robustness due to the changed Bayes error in the altered data distribution. Subsequently, we establish an upper bound for certified robust accuracy, considering the distribution of individual classes and their boundaries. Our theoretical results are empirically evaluated on real-world datasets and are shown to be consistent with the limited success of existing certified training results; e.g., for CIFAR10, our analysis yields an upper bound (on certified robust accuracy) of 67.49%, whereas existing approaches have only been able to increase it from 53.89% in 2017 to 62.84% in 2023.
[ "['Ruihan Zhang' 'Jun Sun']" ]
null
null
2405.11548
null
null
http://arxiv.org/pdf/2405.11548v3
2024-06-22T07:37:33Z
2024-05-19T13:26:33Z
Adaptive Online Experimental Design for Causal Discovery
Causal discovery aims to uncover cause-and-effect relationships encoded in causal graphs by leveraging observational, interventional data, or their combination. The majority of existing causal discovery methods are developed assuming infinite interventional data. We focus on interventional data efficiency and formalize causal discovery from the perspective of online learning, inspired by pure exploration in bandit problems. A graph separating system, consisting of interventions that cut every edge of the graph at least once, is sufficient for learning causal graphs when infinite interventional data is available, even in the worst case. We propose a track-and-stop causal discovery algorithm that adaptively selects interventions from the graph separating system via allocation matching and learns the causal graph based on sampling history. Given any desired confidence value, the algorithm determines a termination condition and runs until it is met. We analyze the algorithm to establish a problem-dependent upper bound on the expected number of required interventional samples. Our proposed algorithm outperforms existing methods in simulations across various randomly generated causal graphs. It achieves higher accuracy, measured by the structural Hamming distance (SHD) between the learned causal graph and the ground truth, with significantly fewer samples.
[ "['Muhammad Qasim Elahi' 'Lai Wei' 'Murat Kocaoglu' 'Mahsa Ghasemi']" ]
null
null
2405.11566
null
null
http://arxiv.org/pdf/2405.11566v1
2024-05-19T14:30:57Z
2024-05-19T14:30:57Z
Uncertainty-Aware PPG-2-ECG for Enhanced Cardiovascular Diagnosis using Diffusion Models
Analyzing the cardiovascular system condition via Electrocardiography (ECG) is a common and highly effective approach, and it has been practiced and perfected over many decades. ECG sensing is non-invasive and relatively easy to acquire, and yet it is still cumbersome for Holter monitoring tests that may span over hours and even days. A possible alternative in this context is Photoplethysmography (PPG): an optically-based signal that measures blood volume fluctuations, as typically sensed by conventional "wearable devices". While PPG presents clear advantages in acquisition, convenience, and cost-effectiveness, ECG provides more comprehensive information, allowing for a more precise detection of heart conditions. This implies that a conversion from PPG to ECG, as recently discussed in the literature, inherently involves an unavoidable level of uncertainty. In this paper we introduce a novel methodology for addressing the PPG-2-ECG conversion, and offer an enhanced classification of cardiovascular conditions using the given PPG, all while taking into account the uncertainties arising from the conversion process. We provide a mathematical justification for our proposed computational approach, and present empirical studies demonstrating its superior performance compared to state-of-the-art baseline methods.
[ "['Omer Belhasin' 'Idan Kligvasser' 'George Leifman' 'Regev Cohen'\n 'Erin Rainaldi' 'Li-Fang Cheng' 'Nishant Verma' 'Paul Varghese'\n 'Ehud Rivlin' 'Michael Elad']" ]
null
null
2405.11573
null
null
http://arxiv.org/pdf/2405.11573v1
2024-05-19T14:42:19Z
2024-05-19T14:42:19Z
Quantile Activation: departing from single point estimation for better generalization across distortions
A classifier is, in essence, a function which takes an input and returns its class, implicitly assuming an underlying distribution. We argue in this article that one has to move away from this basic tenet to obtain generalisation across distributions. Specifically, the class of the sample should depend on the points from its context distribution for better generalisation across distributions. How does one achieve this? The key idea is to adapt the outputs of each neuron of the network to its context distribution. We propose quantile activation, QACT, which, in simple terms, outputs the relative quantile of the sample in its context distribution, instead of the actual values as in traditional networks. The scope of this article is to validate the proposed activation across several experimental settings, and compare it with conventional techniques. For this, we use datasets developed to test robustness against distortions (CIFAR10C, CIFAR100C, MNISTC, TinyImagenetC) and show that we achieve significantly higher generalisation across distortions than conventional classifiers, across different architectures. Although this paper is only a proof of concept, we surprisingly find that this approach outperforms DINOv2 (small) at large distortions, even though DINOv2 is trained with a far bigger network on a considerably larger dataset.
[ "['Aditya Challa' 'Sravan Danda' 'Laurent Najman' 'Snehanshu Saha']" ]
null
null
2405.11574
null
null
http://arxiv.org/pdf/2405.11574v1
2024-05-19T14:48:19Z
2024-05-19T14:48:19Z
Reproducibility Study of CDUL: CLIP-Driven Unsupervised Learning for Multi-Label Image Classification
This report is a reproducibility study of the paper "CDUL: CLIP-Driven Unsupervised Learning for Multi-Label Image Classification" (Abdelfattah et al, ICCV 2023). Our report makes the following contributions: (1) We provide a reproducible, well commented and open-sourced code implementation for the entire method specified in the original paper. (2) We try to verify the effectiveness of the novel aggregation strategy which uses the CLIP model to initialize the pseudo labels for the subsequent unsupervised multi-label image classification task. (3) We try to verify the effectiveness of the gradient-alignment training method specified in the original paper, which is used to update the network parameters and pseudo labels. The code can be found at https://github.com/cs-mshah/CDUL
[ "['Manan Shah' 'Yash Bhalgat']" ]
null
null
2405.11580
null
null
http://arxiv.org/pdf/2405.11580v1
2024-05-19T15:15:18Z
2024-05-19T15:15:18Z
Securing Health Data on the Blockchain: A Differential Privacy and Federated Learning Framework
This study proposes a framework to enhance privacy in Blockchain-based Internet of Things (BIoT) systems used in the healthcare sector. The framework addresses the challenge of leveraging health data for analytics while protecting patient privacy. To achieve this, the study integrates Differential Privacy (DP) with Federated Learning (FL) to protect sensitive health data collected by IoT nodes. The proposed framework utilizes dynamic personalization and adaptive noise distribution strategies to balance privacy and data utility. Additionally, blockchain technology ensures secure and transparent aggregation and storage of model updates. Experimental results on the SVHN dataset demonstrate that the proposed framework achieves strong privacy guarantees against various attack scenarios while maintaining high accuracy in health analytics tasks. For 15 rounds of federated learning with an epsilon value of 8.0, the model obtains an accuracy of 64.50%. The blockchain integration, utilizing Ethereum, Ganache, Web3.py, and IPFS, exhibits an average transaction latency of around 6 seconds and consistent gas consumption across rounds, validating the practicality and feasibility of the proposed approach.
[ "['Daniel Commey' 'Sena Hounsinou' 'Garth V. Crosby']" ]
null
null
2405.11590
null
null
http://arxiv.org/pdf/2405.11590v1
2024-05-19T15:50:57Z
2024-05-19T15:50:57Z
Global Convergence of Decentralized Retraction-Free Optimization on the Stiefel Manifold
Many classical and modern machine learning algorithms require solving optimization tasks under orthogonality constraints. Solving these tasks often requires calculating retraction-based gradient descent updates on the corresponding Riemannian manifold, which can be computationally expensive. Recently, Ablin et al. proposed an infeasible retraction-free algorithm, which is significantly more efficient. In this paper, we study the decentralized non-convex optimization task over a network of agents on the Stiefel manifold with retraction-free updates. We propose the \textbf{D}ecentralized \textbf{R}etraction-\textbf{F}ree \textbf{G}radient \textbf{T}racking (DRFGT) algorithm, and show that DRFGT exhibits an ergodic $\mathcal{O}(1/K)$ convergence rate, the same rate of convergence as centralized, retraction-based methods. We also provide numerical experiments demonstrating that DRFGT performs on par with state-of-the-art retraction-based methods with substantially reduced computational overhead.
[ "['Youbang Sun' 'Shixiang Chen' 'Alfredo Garcia' 'Shahin Shahrampour']" ]
null
null
2405.11601
null
null
http://arxiv.org/pdf/2405.11601v1
2024-05-19T16:10:03Z
2024-05-19T16:10:03Z
How to integrate cloud service, data analytic and machine learning technique to reduce cyber risks associated with the modern cloud based infrastructure
The combination of cloud technology, machine learning, and data visualization techniques allows hybrid enterprise networks to hold massive volumes of data and provide employees and customers easy access to these cloud data. These massive collections of complex data sets are facing security challenges. While cloud platforms are more vulnerable to security threats and traditional security technologies are unable to cope with the rapid data explosion in cloud platforms, machine-learning-powered security solutions and data visualization techniques are playing instrumental roles in detecting security threats and data breaches, and in automatically finding software vulnerabilities. The purpose of this paper is to present some of the widely used cloud services, machine learning techniques and data visualization approaches, and to demonstrate how cloud services, data analytics and machine learning techniques can be integrated to detect and reduce cyber risks associated with modern cloud-based infrastructure. In this paper, I applied a supervised machine learning classifier to design a model based on the well-known UNSW-NB15 dataset to predict network behavior metrics, and demonstrated how data analytics techniques can be integrated to visualize network traffic.
[ "['Upakar Bhatta']" ]
null
null
2405.11605
null
null
http://arxiv.org/pdf/2405.11605v2
2024-05-23T13:42:02Z
2024-05-19T16:21:04Z
Switched Flow Matching: Eliminating Singularities via Switching ODEs
Continuous-time generative models, such as Flow Matching (FM), construct probability paths to transport between one distribution and another through the simulation-free learning of neural ordinary differential equations (ODEs). During inference, however, the learned model often requires multiple neural network evaluations to accurately integrate the flow, resulting in a slow sampling speed. We attribute this to the inherent (joint) heterogeneity of the source and/or target distributions, namely the singularity problem, which poses challenges for training the neural ODEs effectively. To address this issue, we propose a more general framework, termed Switched FM (SFM), that eliminates singularities via switching ODEs, as opposed to using a uniform ODE as in FM. Importantly, we theoretically show that FM cannot transport between two simple distributions due to the existence and uniqueness of initial value problems of ODEs, while these limitations can be well tackled by SFM. From an orthogonal perspective, our framework can seamlessly integrate with existing advanced techniques, such as minibatch optimal transport, to further enhance the straightness of the flow, yielding a more efficient sampling process with reduced costs. We demonstrate the effectiveness of the newly proposed SFM through several numerical examples.
[ "['Qunxi Zhu' 'Wei Lin']" ]