Schema (column: type):
  categories: string
  doi: string
  id: string
  year: float64
  venue: string
  link: string
  updated: string
  published: string
  title: string
  abstract: string
  authors: sequence
null
null
2405.19806
null
null
http://arxiv.org/pdf/2405.19806v1
2024-05-30T08:16:22Z
2024-05-30T08:16:22Z
Preference Alignment with Flow Matching
We present Preference Flow Matching (PFM), a new framework for preference-based reinforcement learning (PbRL) that streamlines the integration of preferences into an arbitrary class of pre-trained models. Existing PbRL methods require fine-tuning pre-trained models, which presents challenges such as scalability, inefficiency, and the need for model modifications, especially with black-box APIs like GPT-4. In contrast, PFM utilizes flow matching techniques to learn directly from preference data, thereby reducing the dependency on extensive fine-tuning of pre-trained models. By leveraging flow-based models, PFM transforms less preferred data into preferred outcomes and effectively aligns model outputs with human preferences without relying on explicit or implicit reward function estimation, thus avoiding common issues like overfitting in reward models. We provide theoretical insights that support our method's alignment with standard PbRL objectives. Experimental results indicate the practical effectiveness of our method, offering a new direction in aligning a pre-trained model with preferences.
[ "['Minu Kim' 'Yongsik Lee' 'Sehyeok Kang' 'Jihwan Oh' 'Song Chong'\n 'Seyoung Yun']" ]
null
null
2405.19807
null
null
http://arxiv.org/pdf/2405.19807v1
2024-05-30T08:17:00Z
2024-05-30T08:17:00Z
MetaCURL: Non-stationary Concave Utility Reinforcement Learning
We explore online learning in episodic loop-free Markov decision processes in non-stationary environments (changing losses and probability transitions). Our focus is on the Concave Utility Reinforcement Learning problem (CURL), an extension of classical RL for handling convex performance criteria in state-action distributions induced by agent policies. While various machine learning problems can be written as CURL, its non-linearity invalidates traditional Bellman equations. Despite recent solutions to classical CURL, none address non-stationary MDPs. This paper introduces MetaCURL, the first CURL algorithm for non-stationary MDPs. It employs a meta-algorithm running multiple black-box algorithm instances over different intervals, aggregating their outputs via a sleeping expert framework. The key hurdle is partial information due to MDP uncertainty. Under partial information on the probability transitions (uncertainty and non-stationarity coming only from external noise, independent of agent state-action pairs), we achieve optimal dynamic regret without prior knowledge of MDP changes. Unlike previous approaches for RL, MetaCURL handles full adversarial losses, not just stochastic ones. We believe our approach for managing non-stationarity with experts can be of interest to the RL community.
[ "['Bianca Marin Moreno' 'Margaux Brégère' 'Pierre Gaillard' 'Nadia Oudjane']" ]
null
null
2405.19811
null
null
http://arxiv.org/pdf/2405.19811v1
2024-05-30T08:20:34Z
2024-05-30T08:20:34Z
Approximate Global Convergence of Independent Learning in Multi-Agent Systems
Independent learning (IL), despite being a popular approach in practice to achieve scalability in large-scale multi-agent systems, usually lacks global convergence guarantees. In this paper, we study two representative algorithms, independent $Q$-learning and independent natural actor-critic, within value-based and policy-based frameworks, and provide the first finite-sample analysis for approximate global convergence. The results imply a sample complexity of $\tilde{\mathcal{O}}(\epsilon^{-2})$ up to an error term that captures the dependence among agents and characterizes the fundamental limit of IL in achieving global convergence. To establish the result, we develop a novel approach for analyzing IL by constructing a separable Markov decision process (MDP) for convergence analysis and then bounding the gap due to model difference between the separable MDP and the original one. Moreover, we conduct numerical experiments using a synthetic MDP and an electric vehicle charging example to verify our theoretical findings and to demonstrate the practical applicability of IL.
[ "['Ruiyang Jin' 'Zaiwei Chen' 'Yiheng Lin' 'Jie Song' 'Adam Wierman']" ]
null
null
2405.19815
null
null
http://arxiv.org/pdf/2405.19815v1
2024-05-30T08:23:04Z
2024-05-30T08:23:04Z
Efficient Stimuli Generation using Reinforcement Learning in Design Verification
The increasing design complexity of System-on-Chips (SoCs) has led to significant verification challenges, particularly in meeting coverage targets in a timely manner. At present, coverage closure is heavily dependent on constrained random and coverage driven verification methodologies, where the randomized stimuli are bounded to verify certain scenarios and to reach coverage goals. This process tends to be exhaustive and to consume a lot of project time. In this paper, a novel methodology is proposed to generate efficient stimuli with the help of Reinforcement Learning (RL) to reach the maximum code coverage of the Design Under Verification (DUV). Additionally, an automated framework is created using metamodeling to generate a SystemVerilog testbench and an RL environment for any given design. The proposed approach is applied to various designs, and the results show that the RL agent provides effective stimuli to achieve code coverage faster in comparison with baseline random simulations. Furthermore, various RL agents and reward schemes are analyzed in our work.
[ "['Deepak Narayan Gadde' 'Thomas Nalapat' 'Aman Kumar' 'Djones Lettnin'\n 'Wolfgang Kunz' 'Sebastian Simon']" ]
null
null
2405.19823
null
null
http://arxiv.org/pdf/2405.19823v1
2024-05-30T08:31:18Z
2024-05-30T08:31:18Z
Joint Selective State Space Model and Detrending for Robust Time Series Anomaly Detection
Deep learning-based sequence models are extensively employed in Time Series Anomaly Detection (TSAD) tasks due to their effective sequential modeling capabilities. However, TSAD performance is limited by two key challenges: (i) the ability to model long-range dependencies and (ii) the generalization issue in the presence of non-stationary data. To tackle these challenges, an anomaly detector is proposed that leverages the selective state space model, known for its proficiency in capturing long-term dependencies across various domains. Additionally, a multi-stage detrending mechanism is introduced to mitigate the prominent trend component in non-stationary data and thereby address the generalization issue. Extensive experiments conducted on real-world public datasets demonstrate that the proposed methods surpass all 12 compared baseline methods.
[ "['Junqi Chen' 'Xu Tan' 'Sylwan Rahardja' 'Jiawei Yang' 'Susanto Rahardja']" ]
null
null
2405.19836
null
null
http://arxiv.org/pdf/2405.19836v1
2024-05-30T08:45:45Z
2024-05-30T08:45:45Z
The Merit of River Network Topology for Neural Flood Forecasting
Climate change exacerbates riverine floods, which occur with higher frequency and intensity than ever. The much-needed forecasting systems typically rely on accurate river discharge predictions. To this end, the SOTA data-driven approaches treat forecasting at spatially distributed gauge stations as isolated problems, even within the same river network. However, incorporating the known topology of the river network into the prediction model has the potential to leverage the adjacency relationship between gauges. Thus, we model river discharge for a network of gauging stations with GNNs and compare the forecasting performance achieved by different adjacency definitions. Our results show that the model fails to benefit from the river network topology information, both on the entire network and on small subgraphs. The learned edge weights correlate with none of the static definitions and exhibit no regular pattern. Furthermore, the GNNs struggle to predict sudden, narrow discharge spikes. Our work hints at a more general underlying phenomenon of neural prediction not always benefiting from graphical structure and may inspire a systematic study of the conditions under which this happens.
[ "['Nikolas Kirschstein' 'Yixuan Sun']" ]
null
null
2405.19864
null
null
http://arxiv.org/pdf/2405.19864v1
2024-05-30T09:14:01Z
2024-05-30T09:14:01Z
Out-of-distribution Reject Option Method for Dataset Shift Problem in Early Disease Onset Prediction
Machine learning is increasingly used to predict lifestyle-related disease onset using health and medical data. However, prediction effectiveness is hindered by dataset shift, which involves discrepancies in data distribution between the training and testing datasets, causing misclassification of out-of-distribution (OOD) data. To diminish dataset shift effects, this paper proposes the out-of-distribution reject option for prediction (ODROP), which integrates OOD detection models to preclude OOD data from the prediction phase. We investigated the efficacy of five OOD detection methods (variational autoencoder, neural network ensemble std, neural network ensemble epistemic, neural network energy, and neural network Gaussian mixture based energy measurement) across two datasets, the Hirosaki and Wakayama health checkup data, in the context of three disease onset prediction tasks: diabetes, dyslipidemia, and hypertension. To evaluate the ODROP method, we trained disease onset prediction models and OOD detection models on Hirosaki data and used AUROC-rejection curve plots from Wakayama data. The variational autoencoder method showed superior stability and magnitude of improvement in Area Under the Receiver Operating Characteristic curve (AUROC) in five cases: AUROC in the Wakayama data improved from 0.80 to 0.90 at a 31.1% rejection rate for diabetes onset and from 0.70 to 0.76 at a 34% rejection rate for dyslipidemia. We categorized dataset shifts into two types using SHAP clustering: those that considerably affect predictions and those that do not. We expect that this classification will help standardize measuring instruments. This study is the first to apply OOD detection to actual health and medical data, demonstrating its potential to substantially improve the accuracy and reliability of disease prediction models amidst dataset shift.
[ "['Taisei Tosaki' 'Eiichiro Uchino' 'Ryosuke Kojima' 'Yohei Mineharu'\n 'Mikio Arita' 'Nobuyuki Miyai' 'Yoshinori Tamada' 'Tatsuya Mikami'\n 'Koichi Murashita' 'Shigeyuki Nakaji' 'Yasushi Okuno']" ]
null
null
2405.19870
null
null
http://arxiv.org/pdf/2405.19870v1
2024-05-30T09:23:48Z
2024-05-30T09:23:48Z
On Vessel Location Forecasting and the Effect of Federated Learning
The widespread adoption of the Automatic Identification System (AIS) has motivated several maritime analytics operations. Vessel Location Forecasting (VLF) is one of the most critical operations for maritime awareness. However, accurate VLF is a challenging problem due to the complexity and dynamic nature of maritime traffic conditions. Furthermore, as privacy concerns and restrictions have grown, training data has become increasingly fragmented, resulting in dispersed databases of several isolated data silos among different organizations, which in turn decreases the quality of learning models. In this paper, we propose an efficient VLF solution based on LSTM neural networks, in two variants, namely Nautilus and FedNautilus, for the centralized and the federated learning approach, respectively. We also demonstrate the superiority of the centralized approach over the current state of the art and discuss the advantages and disadvantages of the federated against the centralized approach.
[ "['Andreas Tritsarolis' 'Nikos Pelekis' 'Konstantina Bereta'\n 'Dimitris Zissis' 'Yannis Theodoridis']" ]
null
null
2405.19874
null
null
http://arxiv.org/pdf/2405.19874v1
2024-05-30T09:28:56Z
2024-05-30T09:28:56Z
Is In-Context Learning Sufficient for Instruction Following in LLMs?
In-context learning (ICL) allows LLMs to learn from examples without changing their weights, which is a particularly promising capability for long-context LLMs that can potentially learn from many examples. Recently, Lin et al. (2024) proposed URIAL, a method using only three in-context examples to align base LLMs, achieving non-trivial instruction following performance. In this work, we show that, while effective, ICL alignment with URIAL still underperforms compared to instruction fine-tuning on established benchmarks such as MT-Bench and AlpacaEval 2.0 (LC), especially with more capable base LMs. Unlike for tasks such as classification, translation, or summarization, adding more ICL demonstrations for long-context LLMs does not systematically improve instruction following performance. To address this limitation, we derive a greedy selection approach for ICL examples that noticeably improves performance, yet without bridging the gap to instruction fine-tuning. Finally, we provide a series of ablation studies to better understand the reasons behind the remaining gap, and we show how some aspects of ICL depart from the existing knowledge and are specific to the instruction tuning setting. Overall, our work advances the understanding of ICL as an alignment technique. We provide our code at https://github.com/tml-epfl/icl-alignment.
[ "['Hao Zhao' 'Maksym Andriushchenko' 'Francesco Croce' 'Nicolas Flammarion']" ]
null
null
2405.19878
null
null
http://arxiv.org/pdf/2405.19878v1
2024-05-30T09:34:31Z
2024-05-30T09:34:31Z
Learning from Random Demonstrations: Offline Reinforcement Learning with Importance-Sampled Diffusion Models
Generative models such as diffusion models have been employed as world models in offline reinforcement learning to generate synthetic data for more effective learning. Existing work either generates the diffusion model once prior to training or requires additional interaction data to update it. In this paper, we propose a novel approach for offline reinforcement learning with closed-loop policy evaluation and world-model adaptation. It iteratively leverages a guided diffusion world model to directly evaluate the offline target policy with actions drawn from it, and then performs an importance-sampled world model update to adaptively align the world model with the updated policy. We analyze the performance of the proposed method and provide an upper bound on the return gap between our method and the real environment under an optimal policy. The result sheds light on various factors affecting learning performance. Evaluations in the D4RL environment show significant improvement over state-of-the-art baselines, especially when only random or medium-expertise demonstrations are available -- thus requiring improved alignment between the world model and offline policy evaluation.
[ "['Zeyu Fang' 'Tian Lan']" ]
null
null
2405.19883
null
null
http://arxiv.org/pdf/2405.19883v1
2024-05-30T09:42:54Z
2024-05-30T09:42:54Z
From Words to Actions: Unveiling the Theoretical Underpinnings of LLM-Driven Autonomous Systems
In this work, from a theoretical lens, we aim to understand why large language model (LLM) empowered agents are able to solve decision-making problems in the physical world. To this end, we consider a hierarchical reinforcement learning (RL) model where the LLM Planner and the Actor perform high-level task planning and low-level execution, respectively. Under this model, the LLM Planner navigates a partially observable Markov decision process (POMDP) by iteratively generating language-based subgoals via prompting. Under proper assumptions on the pretraining data, we prove that the pretrained LLM Planner effectively performs Bayesian aggregated imitation learning (BAIL) through in-context learning. Additionally, we highlight the necessity for exploration beyond the subgoals derived from BAIL by proving that naively executing the subgoals returned by the LLM leads to linear regret. As a remedy, we introduce an $\epsilon$-greedy exploration strategy to BAIL, which is proven to incur sublinear regret when the pretraining error is small. Finally, we extend our theoretical framework to include scenarios where the LLM Planner serves as a world model for inferring the transition model of the environment and to multi-agent settings, enabling coordination among multiple Actors.
[ "['Jianliang He' 'Siyu Chen' 'Fengzhuo Zhang' 'Zhuoran Yang']" ]
null
null
2405.19885
null
null
http://arxiv.org/pdf/2405.19885v2
2024-06-05T09:03:03Z
2024-05-30T09:43:59Z
Fourier Controller Networks for Real-Time Decision-Making in Embodied Learning
The Transformer has shown promise in reinforcement learning to model time-varying features for obtaining generalized low-level robot policies on diverse robotics datasets in embodied learning. However, it still suffers from low data efficiency and high inference latency. In this paper, we propose to investigate the task from a new perspective of the frequency domain. We first observe that the energy density in the frequency domain of a robot's trajectory is mainly concentrated in the low-frequency part. Then, we present the Fourier Controller Network (FCNet), a new network that uses the Short-Time Fourier Transform (STFT) to extract and encode time-varying features through frequency domain interpolation. To enable real-time decision-making, we further adopt FFT and Sliding DFT methods in the model architecture to achieve parallel training and efficient recurrent inference. Extensive results in both simulated (e.g., D4RL) and real-world environments (e.g., robot locomotion) demonstrate FCNet's substantial efficiency and effectiveness over existing methods such as the Transformer; e.g., FCNet outperforms the Transformer on multi-environmental robotics datasets of all sizes (from 1.9M to 120M). The project page and code can be found at https://thkkk.github.io/fcnet.
[ "['Hengkai Tan' 'Songming Liu' 'Kai Ma' 'Chengyang Ying' 'Xingxing Zhang'\n 'Hang Su' 'Jun Zhu']" ]
null
null
2405.19886
null
null
http://arxiv.org/pdf/2405.19886v1
2024-05-30T09:45:18Z
2024-05-30T09:45:18Z
Federated Learning with Multi-resolution Model Broadcast
In federated learning, a server must periodically broadcast a model to the agents. We propose to use multi-resolution coding and modulation (also known as non-uniform modulation) for this purpose. In the simplest instance, broadcast transmission is used, whereby all agents are targeted with one and the same transmission (typically without any particular favored beam direction), which is coded using multi-resolution coding/modulation. This enables high-SNR agents, with high path gains to the server, to receive a more accurate model than the low-SNR agents do, without consuming more downlink resources. As one implementation, we use transmission with a non-uniform 8-PSK constellation, where a high-SNR receiver (agent) can separate all 8 constellation points (hence receive 3 bits) whereas a low-SNR receiver can only separate 4 points (hence receive 2 bits). By encoding the least significant information in the third bit, the high-SNR receivers can obtain the model with higher accuracy, while the low-SNR receivers can still obtain the model, although with reduced accuracy, thereby facilitating at least some basic participation of the low-SNR receivers. We show the effectiveness of our proposed scheme via experimentation using federated learning with the MNIST dataset.
[ "['Henrik Rydén' 'Reza Moosavi' 'Erik G. Larsson']" ]
null
null
2405.19888
null
null
http://arxiv.org/pdf/2405.19888v1
2024-05-30T09:46:36Z
2024-05-30T09:46:36Z
Parrot: Efficient Serving of LLM-based Applications with Semantic Variable
The rise of large language models (LLMs) has enabled LLM-based applications (a.k.a. AI agents or co-pilots), a new software paradigm that combines the strength of LLM and conventional software. Diverse LLM applications from different tenants could design complex workflows using multiple LLM requests to accomplish one task. However, they have to use the over-simplified request-level API provided by today's public LLM services, losing essential application-level information. Public LLM services have to blindly optimize individual LLM requests, leading to sub-optimal end-to-end performance of LLM applications. This paper introduces Parrot, an LLM service system that focuses on the end-to-end experience of LLM-based applications. Parrot proposes Semantic Variable, a unified abstraction to expose application-level knowledge to public LLM services. A Semantic Variable annotates an input/output variable in the prompt of a request, and creates the data pipeline when connecting multiple LLM requests, providing a natural way to program LLM applications. Exposing Semantic Variables to the public LLM service allows it to perform conventional data flow analysis to uncover the correlation across multiple LLM requests. This correlation opens a brand-new optimization space for the end-to-end performance of LLM-based applications. Extensive evaluations demonstrate that Parrot can achieve up to an order-of-magnitude improvement for popular and practical use cases of LLM applications.
[ "['Chaofan Lin' 'Zhenhua Han' 'Chengruidong Zhang' 'Yuqing Yang' 'Fan Yang'\n 'Chen Chen' 'Lili Qiu']" ]
null
null
2405.19889
null
null
http://arxiv.org/pdf/2405.19889v1
2024-05-30T09:46:59Z
2024-05-30T09:46:59Z
Deep Joint Semantic Coding and Beamforming for Near-Space Airship-Borne Massive MIMO Network
The near-space airship-borne communication network is recognized as an indispensable component of the future integrated ground-air-space network, thanks to airships' advantage of long-term residency at stratospheric altitudes, but it urgently needs reliable and efficient Airship-to-X links. To improve transmission efficiency and capacity, this paper proposes to integrate semantic communication with massive multiple-input multiple-output (MIMO) technology. Specifically, we propose a deep joint semantic coding and beamforming (JSCBF) scheme for an airship-based massive MIMO image transmission network in space, in which semantics from both the source and the channel are fused to jointly design the semantic coding and physical-layer beamforming. First, we design two semantic extraction networks to extract semantics from the image source and channel state information, respectively. Then, we propose a semantic fusion network that can fuse these semantics into complex-valued semantic features for subsequent physical-layer transmission. To efficiently transmit the fused semantic features at the physical layer, we then propose hybrid data- and model-driven semantic-aware beamforming networks. At the receiver, a semantic decoding network is designed to reconstruct the transmitted images. Finally, we perform end-to-end deep learning to jointly train all the modules, using the image reconstruction quality at the receivers as a metric. The proposed deep JSCBF scheme fully combines the efficient source compressibility and robust error correction capability of semantic communication with the high spectral efficiency of massive MIMO, achieving a significant performance improvement over existing approaches.
[ "['Minghui Wu' 'Zhen Gao' 'Zhaocheng Wang' 'Dusit Niyato'\n 'George K. Karagiannidis' 'Sheng Chen']" ]
null
null
2405.19893
null
null
http://arxiv.org/pdf/2405.19893v1
2024-05-30T09:50:38Z
2024-05-30T09:50:38Z
Similarity is Not All You Need: Endowing Retrieval Augmented Generation with Multi Layered Thoughts
In recent years, large language models (LLMs) have made remarkable achievements in various domains. However, the untimeliness and cost of knowledge updates, coupled with hallucination issues, have curtailed their applications in knowledge-intensive tasks, where retrieval-augmented generation (RAG) can help. Nevertheless, existing retrieval-augmented models typically use similarity as a bridge between queries and documents and follow a retrieve-then-read procedure. In this work, we argue that similarity is not always a panacea and that relying solely on similarity can sometimes degrade the performance of retrieval-augmented generation. To this end, we propose MetRag, a Multi-layEred Thoughts enhanced Retrieval Augmented Generation framework. To begin with, beyond the existing similarity-oriented thought, we embrace a small-scale utility model that draws supervision from an LLM for utility-oriented thought, and further derive a smarter model by comprehensively combining the similarity- and utility-oriented thoughts. Furthermore, since the retrieved document set tends to be huge and using the documents in isolation makes it difficult to capture their commonalities and characteristics, we propose to use an LLM as a task-adaptive summarizer to endow retrieval-augmented generation with compactness-oriented thought. Finally, with multi-layered thoughts from the preceding stages, an LLM is called for knowledge-augmented generation. Extensive experiments on knowledge-intensive tasks have demonstrated the superiority of MetRag.
[ "['Chunjing Gan' 'Dan Yang' 'Binbin Hu' 'Hanxiao Zhang' 'Siyuan Li'\n 'Ziqi Liu' 'Yue Shen' 'Lin Ju' 'Zhiqiang Zhang' 'Jinjie Gu' 'Lei Liang'\n 'Jun Zhou']" ]
null
null
2405.19901
null
null
http://arxiv.org/pdf/2405.19901v1
2024-05-30T10:02:53Z
2024-05-30T10:02:53Z
Urban Air Pollution Forecasting: a Machine Learning Approach leveraging Satellite Observations and Meteorological Forecasts
Air pollution poses a significant threat to public health and well-being, particularly in urban areas. This study introduces a series of machine-learning models that integrate data from the Sentinel-5P satellite, meteorological conditions, and topological characteristics to forecast future levels of five major pollutants. The investigation delineates the process of data collection, detailing the combination of diverse data sources utilized in the study. Through experiments conducted in the Milan metropolitan area, the models demonstrate their efficacy in predicting pollutant levels for the forthcoming day, achieving a percentage error of around 30%. The proposed models are advantageous as they are independent of monitoring stations, facilitating their use in areas without existing infrastructure. Additionally, we have released the collected dataset to the public, aiming to stimulate further research in this field. This research contributes to advancing our understanding of urban air quality dynamics and emphasizes the importance of amalgamating satellite, meteorological, and topographical data to develop robust pollution forecasting models.
[ "['Giacomo Blanco' 'Luca Barco' 'Lorenzo Innocenti' 'Claudio Rossi']" ]
null
null
2405.19902
null
null
http://arxiv.org/pdf/2405.19902v1
2024-05-30T10:06:06Z
2024-05-30T10:06:06Z
Learning Discriminative Dynamics with Label Corruption for Noisy Label Detection
Label noise, commonly found in real-world datasets, has a detrimental impact on a model's generalization. To effectively detect incorrectly labeled instances, previous works have mostly relied on distinguishable training signals, such as training loss, as indicators to differentiate between clean and noisy labels. However, they have limitations in that the training signals incompletely reveal the model's behavior and are not effectively generalized to various noise types, resulting in limited detection accuracy. In this paper, we propose the DynaCor framework, which distinguishes incorrectly labeled instances from correctly labeled ones based on the dynamics of the training signals. To cope with the absence of supervision for clean and noisy labels, DynaCor first introduces a label corruption strategy that augments the original dataset with intentionally corrupted labels, enabling indirect simulation of the model's behavior on noisy labels. Then, DynaCor learns to identify clean and noisy instances by inducing two clearly distinguishable clusters from the latent representations of training dynamics. Our comprehensive experiments show that DynaCor outperforms the state-of-the-art competitors and shows strong robustness to various noise types and noise rates.
[ "['Suyeon Kim' 'Dongha Lee' 'SeongKu Kang' 'Sukang Chae' 'Sanghwan Jang'\n 'Hwanjo Yu']" ]
null
null
2405.19909
null
null
http://arxiv.org/pdf/2405.19909v3
2024-07-15T10:55:57Z
2024-05-30T10:20:55Z
Adaptive Advantage-Guided Policy Regularization for Offline Reinforcement Learning
In offline reinforcement learning, the challenge of out-of-distribution (OOD) actions is pronounced. To address this, existing methods often constrain the learned policy through policy regularization. However, these methods often suffer from unnecessary conservativeness, hampering policy improvement. This occurs due to the indiscriminate use, as constraints, of all actions from the behavior policy that generated the offline dataset. The problem becomes particularly noticeable when the quality of the dataset is suboptimal. Thus, we propose Adaptive Advantage-guided Policy Regularization (A2PR), which obtains high-advantage actions from an augmented behavior policy combined with a VAE to guide the learned policy. A2PR can select high-advantage actions that differ from those present in the dataset, while still effectively maintaining conservatism toward OOD actions. This is achieved by harnessing the VAE's capacity to generate samples matching the distribution of the data points. We theoretically prove that the improvement of the behavior policy is guaranteed. Besides, A2PR effectively mitigates value overestimation with a bounded performance gap. Empirically, we conduct a series of experiments on the D4RL benchmark, where A2PR demonstrates state-of-the-art performance. Furthermore, experimental results on additional suboptimal mixed datasets reveal that A2PR exhibits superior performance. Code is available at https://github.com/ltlhuuu/A2PR.
[ "['Tenglong Liu' 'Yang Li' 'Yixing Lan' 'Hao Gao' 'Wei Pan' 'Xin Xu']" ]
null
null
2405.19912
null
null
http://arxiv.org/pdf/2405.19912v1
2024-05-30T10:23:16Z
2024-05-30T10:23:16Z
Robust Kernel Hypothesis Testing under Data Corruption
We propose two general methods for constructing robust permutation tests under data corruption. The proposed tests effectively control the non-asymptotic type I error under data corruption, and we prove their consistency in power under minimal conditions. This contributes to the practical deployment of hypothesis tests for real-world applications with potential adversarial attacks. One of our methods inherently ensures differential privacy, further broadening its applicability to private data analysis. For the two-sample and independence settings, we show that our kernel robust tests are minimax optimal, in the sense that they are guaranteed to be non-asymptotically powerful against alternatives uniformly separated from the null in the kernel MMD and HSIC metrics at some optimal rate (tight with matching lower bound). Finally, we provide publicly available implementations and empirically illustrate the practicality of our proposed tests.
[ "['Antonin Schrab' 'Ilmun Kim']" ]
null
null
2405.19919
null
null
http://arxiv.org/pdf/2405.19919v2
2024-06-01T10:28:20Z
2024-05-30T10:30:44Z
Unraveling the Impact of Heterophilic Structures on Graph Positive-Unlabeled Learning
While Positive-Unlabeled (PU) learning is vital in many real-world scenarios, its application to graph data remains under-explored. We unveil that a critical challenge for PU learning on graphs lies in edge heterophily, which directly violates the irreducibility assumption for Class-Prior Estimation (the class prior is essential for building PU learning algorithms) and degenerates the latent label inference on unlabeled nodes during classifier training. In response to this challenge, we introduce a new method, named Graph PU Learning with Label Propagation Loss (GPL). Specifically, GPL considers learning from PU nodes along with an intermediate heterophily reduction, which helps mitigate the negative impact of the heterophilic structure. We formulate this procedure as a bilevel optimization that reduces heterophily in the inner loop and efficiently learns a classifier in the outer loop. Extensive experiments across a variety of datasets have shown that GPL significantly outperforms baseline methods, confirming its effectiveness and superiority.
[ "['Yuhao Wu' 'Jiangchao Yao' 'Bo Han' 'Lina Yao' 'Tongliang Liu']" ]
null
null
2405.19928
null
null
http://arxiv.org/pdf/2405.19928v1
2024-05-30T10:44:45Z
2024-05-30T10:44:45Z
BAN: Detecting Backdoors Activated by Adversarial Neuron Noise
Backdoor attacks on deep learning represent a recent threat that has gained significant attention in the research community. Backdoor defenses are mainly based on backdoor inversion, which has been shown to be generic, model-agnostic, and applicable to practical threat scenarios. State-of-the-art backdoor inversion recovers a mask in the feature space to locate prominent backdoor features, where benign and backdoor features can be disentangled. However, it suffers from high computational overhead, and we also find that it overly relies on prominent backdoor features that are highly distinguishable from benign features. To tackle these shortcomings, this paper improves backdoor feature inversion for backdoor detection by incorporating extra neuron activation information. In particular, we adversarially increase the loss of backdoored models with respect to weights to activate the backdoor effect, based on which we can easily differentiate backdoored and clean models. Experimental results demonstrate that our defense, BAN, is 1.37$\times$ (on CIFAR-10) and 5.11$\times$ (on ImageNet200) more efficient, with a 9.99% higher detection success rate, than the state-of-the-art defense BTI-DBF. Our code and trained models are publicly available at https://anonymous.4open.science/r/ban-4B32.
[ "['Xiaoyun Xu' 'Zhuoran Liu' 'Stefanos Koffas' 'Shujian Yu' 'Stjepan Picek']" ]
null
null
2405.19931
null
null
http://arxiv.org/pdf/2405.19931v1
2024-05-30T10:47:48Z
2024-05-30T10:47:48Z
Exploring Diffusion Models' Corruption Stage in Few-Shot Fine-tuning and Mitigating with Bayesian Neural Networks
Few-shot fine-tuning of Diffusion Models (DMs) is a key advancement, significantly reducing training costs and enabling personalized AI applications. However, we explore the training dynamics of DMs and observe an unanticipated phenomenon: during the training process, image fidelity initially improves, then unexpectedly deteriorates with the emergence of noisy patterns, only to recover later with severe overfitting. We term the stage with generated noisy patterns the corruption stage. To understand this corruption stage, we begin by theoretically modeling the one-shot fine-tuning scenario, and then extend this modeling to more general cases. Through this modeling, we identify the primary cause of the corruption stage: a narrowed learning distribution inherent in the nature of few-shot fine-tuning. To tackle this, we apply Bayesian Neural Networks (BNNs) on DMs with variational inference to implicitly broaden the learned distribution, and show that the learning target of the BNNs can be naturally regarded as an expectation of the diffusion loss plus a further regularization toward the pretrained DMs. This approach is highly compatible with current few-shot fine-tuning methods in DMs and does not introduce any extra inference costs. Experimental results demonstrate that our method significantly mitigates corruption and improves the fidelity, quality and diversity of the generated images in both object-driven and subject-driven generation tasks.
[ "['Xiaoyu Wu' 'Jiaru Zhang' 'Yang Hua' 'Bohan Lyu' 'Hao Wang' 'Tao Song'\n 'Haibing Guan']" ]
null
null
2405.19933
null
null
http://arxiv.org/pdf/2405.19933v1
2024-05-30T10:49:22Z
2024-05-30T10:49:22Z
Learning Latent Graph Structures and their Uncertainty
Within a prediction task, Graph Neural Networks (GNNs) use relational information as an inductive bias to enhance the model's accuracy. As task-relevant relations might be unknown, graph structure learning approaches have been proposed to learn them while solving the downstream prediction task. In this paper, we demonstrate that minimization of a point-prediction loss function, e.g., the mean absolute error, does not guarantee proper learning of the latent relational information and its associated uncertainty. Conversely, we prove that a suitable loss function on the stochastic model outputs simultaneously grants (i) recovery of the latent distribution of the unknown adjacency matrix and (ii) optimal performance on the prediction task. Finally, we propose a sampling-based method that solves this joint learning task. Empirical results validate our theoretical claims and demonstrate the effectiveness of the proposed approach.
[ "['Alessandro Manenti' 'Daniele Zambon' 'Cesare Alippi']" ]
null
null
2405.19950
null
null
http://arxiv.org/pdf/2405.19950v1
2024-05-30T11:14:01Z
2024-05-30T11:14:01Z
MM-Lego: Modular Biomedical Multimodal Models with Minimal Fine-Tuning
Learning holistic computational representations in physical, chemical or biological systems requires the ability to process information from different distributions and modalities within the same model. Thus, the demand for multimodal machine learning models has sharply risen for modalities that go beyond vision and language, such as sequences, graphs, time series, or tabular data. While there are many available multimodal fusion and alignment approaches, most of them require end-to-end training, scale quadratically with the number of modalities, cannot handle cases of high modality imbalance in the training set, or are highly topology-specific, making them too restrictive for many biomedical learning tasks. This paper presents Multimodal Lego (MM-Lego), a modular and general-purpose fusion and model merging framework to turn any set of encoders into a competitive multimodal model with no or minimal fine-tuning. We achieve this by introducing a wrapper for unimodal encoders that enforces lightweight dimensionality assumptions between modalities and harmonises their representations by learning features in the frequency domain to enable model merging with little signal interference. We show that MM-Lego 1) can be used as a model merging method which achieves competitive performance with end-to-end fusion models without any fine-tuning, 2) can operate on any unimodal encoder, and 3) is a model fusion method that, with minimal fine-tuning, achieves state-of-the-art results on six benchmarked multimodal biomedical tasks.
[ "['Konstantin Hemker' 'Nikola Simidjievski' 'Mateja Jamnik']" ]
null
null
2405.19954
null
null
http://arxiv.org/pdf/2405.19954v1
2024-05-30T11:18:52Z
2024-05-30T11:18:52Z
GenKubeSec: LLM-Based Kubernetes Misconfiguration Detection, Localization, Reasoning, and Remediation
A key challenge associated with Kubernetes configuration files (KCFs) is that they are often highly complex and error-prone, leading to security vulnerabilities and operational setbacks. Rule-based (RB) tools for KCF misconfiguration detection rely on static rule sets, making them inherently limited and unable to detect newly-discovered misconfigurations. RB tools also suffer from misdetection, since mistakes are likely when coding the detection rules. Recent methods for detecting and remediating KCF misconfigurations are limited in terms of their scalability and detection coverage, or due to the fact that they have high expertise requirements and do not offer automated remediation along with misconfiguration detection. Novel approaches that employ LLMs in their pipeline rely on API-based, general-purpose, and mainly commercial models. Thus, they pose security challenges, have inconsistent classification performance, and can be costly. In this paper, we propose GenKubeSec, a comprehensive and adaptive, LLM-based method, which, in addition to detecting a wide variety of KCF misconfigurations, also identifies the exact location of the misconfigurations and provides detailed reasoning about them, along with suggested remediation. When empirically compared with three industry-standard RB tools, GenKubeSec achieved equivalent precision (0.990) and superior recall (0.999). When a random sample of KCFs was examined by a Kubernetes security expert, GenKubeSec's explanations as to misconfiguration localization, reasoning and remediation were 100% correct, informative and useful. To facilitate further advancements in this domain, we share the unique dataset we collected, a unified misconfiguration index we developed for label standardization, our experimentation code, and GenKubeSec itself as an open-source tool.
[ "['Ehud Malul' 'Yair Meidan' 'Dudu Mimran' 'Yuval Elovici' 'Asaf Shabtai']" ]
null
null
2405.19961
null
null
http://arxiv.org/pdf/2405.19961v2
2024-05-31T17:18:35Z
2024-05-30T11:32:42Z
Collective Variable Free Transition Path Sampling with Generative Flow Network
Understanding transition paths between meta-stable states in molecular systems is fundamental for material design and drug discovery. However, sampling these paths via molecular dynamics simulations is computationally prohibitive due to the high-energy barriers between the meta-stable states. Recent machine learning approaches are often restricted to simple systems or rely on collective variables (CVs) extracted from expensive domain knowledge. In this work, we propose to leverage generative flow networks (GFlowNets) to sample transition paths without relying on CVs. We reformulate the problem as amortized energy-based sampling over molecular trajectories and train a bias potential by minimizing the squared log-ratio between the target distribution and the generator, derived from the flow matching objective of GFlowNets. Our evaluation on three proteins (Alanine Dipeptide, Polyproline, and Chignolin) demonstrates that our approach, called TPS-GFN, generates more realistic and diverse transition paths than the previous CV-free machine learning approach.
[ "['Kiyoung Seong' 'Seonghyun Park' 'Seonghwan Kim' 'Woo Youn Kim'\n 'Sungsoo Ahn']" ]
null
null
2405.19967
null
null
http://arxiv.org/pdf/2405.19967v2
2024-05-31T08:54:24Z
2024-05-30T11:46:42Z
Improved Out-of-Scope Intent Classification with Dual Encoding and Threshold-based Re-Classification
Detecting out-of-scope user utterances is essential for task-oriented dialogues and intent classification. Current methodologies face difficulties with the unpredictable distribution of outliers and often rely on assumptions about data distributions. We present the Dual Encoder for Threshold-Based Re-Classification (DETER) to address these challenges. This end-to-end framework efficiently detects out-of-scope intents without requiring assumptions on data distributions or additional post-processing steps. The core of DETER utilizes dual text encoders, the Universal Sentence Encoder (USE) and the Transformer-based Denoising AutoEncoder (TSDAE), to generate user utterance embeddings, which are classified through a branched neural architecture. Further, DETER generates synthetic outliers using self-supervision and incorporates out-of-scope phrases from open-domain datasets. This approach ensures a comprehensive training set for out-of-scope detection. Additionally, a threshold-based re-classification mechanism refines the model's initial predictions. Evaluations on the CLINC-150, Stackoverflow, and Banking77 datasets demonstrate DETER's efficacy. Our model outperforms previous benchmarks, improving the F1 score by up to 13% for known and 5% for unknown intents on CLINC-150 and Stackoverflow, and by 16% for known and 24% for unknown intents on Banking77. The source code has been released at https://github.com/Hossam-Mohammed-tech/Intent_Classification_OOS.
[ "['Hossam M. Zawbaa' 'Wael Rashwan' 'Sourav Dutta' 'Haytham Assem']" ]
null
null
2405.19971
null
null
http://arxiv.org/pdf/2405.19971v2
2024-06-09T14:25:34Z
2024-05-30T11:55:21Z
GasTrace: Detecting Sandwich Attack Malicious Accounts in Ethereum
The openness and transparency of Ethereum transaction data make it easy for malicious entities to exploit, executing attacks. The sandwich attack manipulates the Automated Market Maker (AMM) mechanism, profiting from manipulation of the market price through front-running or back-running transactions. To identify and prevent sandwich attacks, we propose GasTrace, a cascade classification framework. GasTrace analyzes various transaction features to detect malicious accounts, notably through the analysis and modeling of Gas features. In the initial classification, we utilize a Support Vector Machine (SVM) with the Radial Basis Function (RBF) kernel to generate the predicted probabilities of accounts, further constructing a detailed transaction network. Subsequently, the behavior features are captured by a Graph Attention Network (GAT) in the second classification. Through cascade classification, GasTrace can analyze and classify sandwich attacks. Our experimental results demonstrate that GasTrace achieves remarkable detection and generalization capability, with an accuracy of 96.73% and an F1 score of 95.71% in identifying sandwich attack accounts.
[ "['Zekai Liu' 'Xiaoqi Li' 'Hongli Peng' 'Wenkai Li']" ]
null
null
2405.19977
null
null
http://arxiv.org/pdf/2405.19977v1
2024-05-30T11:59:58Z
2024-05-30T11:59:58Z
Consistent Submodular Maximization
Maximizing monotone submodular functions under cardinality constraints is a classic optimization task with several applications in data mining and machine learning. In this paper we study this problem in a dynamic environment with consistency constraints: elements arrive in a streaming fashion and the goal is maintaining a constant approximation to the optimal solution while having a stable solution (i.e., the number of changes between two consecutive solutions is bounded). We provide algorithms in this setting with different trade-offs between consistency and approximation quality. We also complement our theoretical results with an experimental analysis showing the effectiveness of our algorithms in real-world instances.
[ "['Paul Dütting' 'Federico Fusco' 'Silvio Lattanzi' 'Ashkan Norouzi-Fard'\n 'Morteza Zadimoghaddam']" ]
null
null
2405.19978
null
null
http://arxiv.org/pdf/2405.19978v1
2024-05-30T12:01:12Z
2024-05-30T12:01:12Z
Domain Adaptation with Cauchy-Schwarz Divergence
Domain adaptation aims to use training data from one or multiple source domains to learn a hypothesis that can be generalized to a different, but related, target domain. As such, having a reliable measure for evaluating the discrepancy of both marginal and conditional distributions is crucial. We introduce Cauchy-Schwarz (CS) divergence to the problem of unsupervised domain adaptation (UDA). The CS divergence offers a theoretically tighter generalization error bound than the popular Kullback-Leibler divergence. This holds for the general case of supervised learning, including multi-class classification and regression. Furthermore, we illustrate that the CS divergence enables a simple estimator on the discrepancy of both marginal and conditional distributions between source and target domains in the representation space, without requiring any distributional assumptions. We provide multiple examples to illustrate how the CS divergence can be conveniently used in both distance metric- or adversarial training-based UDA frameworks, resulting in compelling performance.
[ "['Wenzhe Yin' 'Shujian Yu' 'Yicong Lin' 'Jie Liu' 'Jan-Jakob Sonke'\n 'Efstratios Gavves']" ]
null
null
2405.19985
null
null
http://arxiv.org/pdf/2405.19985v1
2024-05-30T12:14:25Z
2024-05-30T12:14:25Z
Targeted Sequential Indirect Experiment Design
Scientific hypotheses typically concern specific aspects of complex, imperfectly understood or entirely unknown mechanisms, such as the effect of gene expression levels on phenotypes or how microbial communities influence environmental health. Such queries are inherently causal (rather than purely associational), but in many settings, experiments cannot be conducted directly on the target variables of interest and are instead indirect: they perturb the target variable but do not remove potential confounding factors. If, additionally, the resulting experimental measurements are multi-dimensional and the studied mechanisms nonlinear, the query of interest is generally not identified. We develop an adaptive strategy to design indirect experiments that optimally inform a targeted query about the ground truth mechanism in terms of sequentially narrowing the gap between an upper and lower bound on the query. While the general formulation consists of a bi-level optimization procedure, we derive an efficiently estimable analytical kernel-based estimator of the bounds for the causal effect, a query of key interest, and demonstrate the efficacy of our approach in confounded, multivariate, nonlinear synthetic settings.
[ "['Elisabeth Ailer' 'Niclas Dern' 'Jason Hartford' 'Niki Kilbertus']" ]
null
null
2405.19988
null
null
http://arxiv.org/pdf/2405.19988v1
2024-05-30T12:18:06Z
2024-05-30T12:18:06Z
Video-Language Critic: Transferable Reward Functions for Language-Conditioned Robotics
Natural language is often the easiest and most convenient modality for humans to specify tasks for robots. However, learning to ground language to behavior typically requires impractical amounts of diverse, language-annotated demonstrations collected on each target robot. In this work, we aim to separate the problem of what to accomplish from how to accomplish it, as the former can benefit from substantial amounts of external observation-only data, and only the latter depends on a specific robot embodiment. To this end, we propose Video-Language Critic, a reward model that can be trained on readily available cross-embodiment data using contrastive learning and a temporal ranking objective, and use it to score behavior traces from a separate reinforcement learning actor. When trained on Open X-Embodiment data, our reward model enables 2x more sample-efficient policy training on Meta-World tasks than a sparse reward only, despite a significant domain gap. Using in-domain data but in a challenging task generalization setting on Meta-World, we further demonstrate more sample-efficient training than is possible with prior language-conditioned reward models that are either trained with binary classification, use static images, or do not leverage the temporal information present in video data.
[ "['Minttu Alakuijala' 'Reginald McLean' 'Isaac Woungang' 'Nariman Farsad'\n 'Samuel Kaski' 'Pekka Marttinen' 'Kai Yuan']" ]
null
null
2405.19995
null
null
http://arxiv.org/pdf/2405.19995v1
2024-05-30T12:32:18Z
2024-05-30T12:32:18Z
Symmetries in Overparametrized Neural Networks: A Mean-Field View
We develop a Mean-Field (MF) view of the learning dynamics of overparametrized Artificial Neural Networks (NN) under data symmetric in law with respect to the action of a general compact group $G$. We consider for this a class of generalized shallow NNs given by an ensemble of $N$ multi-layer units, jointly trained using stochastic gradient descent (SGD) and possibly symmetry-leveraging (SL) techniques, such as Data Augmentation (DA), Feature Averaging (FA) or Equivariant Architectures (EA). We introduce the notions of weakly and strongly invariant laws (WI and SI) on the parameter space of each single unit, corresponding, respectively, to $G$-invariant distributions, and to distributions supported on parameters fixed by the group action (which encode EA). This allows us to define symmetric models compatible with taking $N\to\infty$ and give an interpretation of the asymptotic dynamics of DA, FA and EA in terms of Wasserstein Gradient Flows describing their MF limits. When activations respect the group action, we show that, for symmetric data, DA, FA and freely-trained models obey the exact same MF dynamic, which stays in the space of WI laws and minimizes therein the population risk. We also give a counterexample to the general attainability of an optimum over SI laws. Despite this, quite remarkably, we show that the set of SI laws is also preserved by the MF dynamics even when freely trained. This sharply contrasts with the finite-$N$ setting, in which EAs are generally not preserved by unconstrained SGD. We illustrate the validity of our findings as $N$ gets larger in a teacher-student experimental setting, training a student NN to learn from a WI, SI or arbitrary teacher model through various SL schemes. We finally deduce a data-driven heuristic to discover the largest subspace of parameters supporting SI distributions for a problem, which could be used for designing EA with minimal generalization error.
[ "['Javier Maass Martínez' 'Joaquin Fontbona']" ]
null
null
2405.20003
null
null
http://arxiv.org/pdf/2405.20003v1
2024-05-30T12:42:05Z
2024-05-30T12:42:05Z
Kernel Language Entropy: Fine-grained Uncertainty Quantification for LLMs from Semantic Similarities
Uncertainty quantification in Large Language Models (LLMs) is crucial for applications where safety and reliability are important. In particular, uncertainty can be used to improve the trustworthiness of LLMs by detecting factually incorrect model responses, commonly called hallucinations. Critically, one should seek to capture the model's semantic uncertainty, i.e., the uncertainty over the meanings of LLM outputs, rather than uncertainty over lexical or syntactic variations that do not affect answer correctness. To address this problem, we propose Kernel Language Entropy (KLE), a novel method for uncertainty estimation in white- and black-box LLMs. KLE defines positive semidefinite unit trace kernels to encode the semantic similarities of LLM outputs and quantifies uncertainty using the von Neumann entropy. It considers pairwise semantic dependencies between answers (or semantic clusters), providing more fine-grained uncertainty estimates than previous methods based on hard clustering of answers. We theoretically prove that KLE generalizes the previous state-of-the-art method called semantic entropy and empirically demonstrate that it improves uncertainty quantification performance across multiple natural language generation datasets and LLM architectures.
[ "['Alexander Nikitin' 'Jannik Kossen' 'Yarin Gal' 'Pekka Marttinen']" ]
null
null
2405.20012
null
null
http://arxiv.org/pdf/2405.20012v1
2024-05-30T12:48:44Z
2024-05-30T12:48:44Z
FlexiDrop: Theoretical Insights and Practical Advances in Random Dropout Method on GNNs
Graph Neural Networks (GNNs) are powerful tools for handling graph-type data. Recently, GNNs have been widely applied in various domains, but they also face some issues, such as overfitting, over-smoothing and non-robustness. Existing research indicates that random dropout methods are an effective way to address these issues. However, random dropout methods in GNNs still face unresolved problems. Currently, the choice of dropout rate, often determined by heuristic or grid search methods, can increase the generalization error, contradicting the principal aims of dropout. In this paper, we propose a novel random dropout method for GNNs called FlexiDrop. First, we conduct a theoretical analysis of dropout in GNNs using Rademacher complexity and demonstrate that the generalization error of traditional random dropout methods is constrained by a function related to the dropout rate. Subsequently, we use this function as a regularizer to unify the dropout rate and empirical loss within a single loss function, optimizing them simultaneously. Therefore, our method enables adaptive adjustment of the dropout rate and theoretically balances the trade-off between model complexity and generalization ability. Furthermore, extensive experimental results on benchmark datasets show that FlexiDrop outperforms traditional random dropout methods in GNNs.
[ "['Zhiheng Zhou' 'Sihao Liu' 'Weichen Zhao']" ]
null
null
2405.20014
null
null
http://arxiv.org/pdf/2405.20014v1
2024-05-30T12:49:34Z
2024-05-30T12:49:34Z
subMFL: Compatible subModel Generation for Federated Learning in Device Heterogeneous Environment
Federated Learning (FL) is commonly used in systems with distributed and heterogeneous devices that have access to varying amounts of data and diverse computing and storage capacities. The FL training process enables such devices to update the weights of a shared model locally using their local data, after which a trusted central server combines all of those models to generate a global model. In this way, a global model is generated while the data remains local to the devices, preserving privacy. However, training large models such as Deep Neural Networks (DNNs) on resource-constrained devices can take a prohibitively long time and consume a large amount of energy. In the current process, low-capacity devices are excluded from the training process, although they might have access to unseen data. To overcome this challenge, we propose a model compression approach that enables heterogeneous devices with varying computing capacities to participate in the FL process. In our approach, the server shares a dense model with all devices to train it; afterwards, the trained model is gradually compressed to obtain submodels with varying levels of sparsity, used as suitable initial global models for resource-constrained devices that were not capable of training the first dense model. This results in an increased participation rate of resource-constrained devices while the transferred weights from the previous round of training are preserved. Our validation experiments show that, despite reaching about 50 per cent global sparsity, the generated submodels maintain their accuracy while increasing participation by around 50 per cent.
[ "['Zeyneddin Oz' 'Ceylan Soygul Oz' 'Abdollah Malekjafarian' 'Nima Afraz'\n 'Fatemeh Golpayegani']" ]
null
null
2405.20018
null
null
http://arxiv.org/pdf/2405.20018v1
2024-05-30T12:57:35Z
2024-05-30T12:57:35Z
Safe Multi-agent Reinforcement Learning with Natural Language Constraints
The role of natural language constraints in Safe Multi-agent Reinforcement Learning (MARL) is crucial, yet often overlooked. While Safe MARL has vast potential, especially in fields like robotics and autonomous vehicles, realizing it is hindered by the need to define constraints in pre-designed mathematical terms, which requires extensive domain expertise and reinforcement learning knowledge and limits broader adoption. To address this limitation and make Safe MARL more accessible and adaptable, we propose a novel approach named Safe Multi-agent Reinforcement Learning with Natural Language constraints (SMALL). Our method leverages fine-tuned language models to interpret and process free-form textual constraints, converting them into semantic embeddings that capture the essence of prohibited states and behaviours. These embeddings are then integrated into the multi-agent policy learning process, enabling agents to learn policies that minimize constraint violations while optimizing rewards. To evaluate the effectiveness of SMALL, we introduce LaMaSafe, a multi-task benchmark designed to assess the performance of multiple agents in adhering to natural language constraints. Empirical evaluations across various environments demonstrate that SMALL achieves comparable rewards and significantly fewer constraint violations, highlighting its effectiveness in understanding and enforcing natural language constraints.
[ "['Ziyan Wang' 'Meng Fang' 'Tristan Tomilin' 'Fei Fang' 'Yali Du']" ]
null
null
2405.20028
null
null
http://arxiv.org/pdf/2405.20028v1
2024-05-30T13:13:12Z
2024-05-30T13:13:12Z
A Simple and Adaptive Learning Rate for FTRL in Online Learning with Minimax Regret of $Θ(T^{2/3})$ and its Application to Best-of-Both-Worlds
Follow-the-Regularized-Leader (FTRL) is a powerful framework for various online learning problems. By designing its regularizer and learning rate to be adaptive to past observations, FTRL is known to work adaptively to various properties of an underlying environment. However, most existing adaptive learning rates are for online learning problems with a minimax regret of $\Theta(\sqrt{T})$ for the number of rounds $T$, and there are only a few studies on adaptive learning rates for problems with a minimax regret of $\Theta(T^{2/3})$, which include several important problems dealing with indirect feedback. To address this limitation, we establish a new adaptive learning rate framework for problems with a minimax regret of $\Theta(T^{2/3})$. Our learning rate is designed by matching the stability, penalty, and bias terms that naturally appear in regret upper bounds for problems with a minimax regret of $\Theta(T^{2/3})$. As applications of this framework, we consider two major problems dealing with indirect feedback: partial monitoring and graph bandits. We show that FTRL with our learning rate and the Tsallis entropy regularizer improves existing Best-of-Both-Worlds (BOBW) regret upper bounds, which achieve simultaneous optimality in the stochastic and adversarial regimes. The resulting learning rate is surprisingly simple compared to the existing learning rates for BOBW algorithms for problems with a minimax regret of $\Theta(T^{2/3})$.
[ "['Taira Tsuchiya' 'Shinji Ito']" ]
null
null
2405.20029
null
null
http://arxiv.org/pdf/2405.20029v2
2024-06-01T11:03:18Z
2024-05-30T13:13:48Z
A Random Forest-based Prediction Model for Turning Points in Antagonistic Event-Group Competitions
At present, most prediction studies of antagonistic event-group competitions focus on predicting competition results rather than the competition process, and therefore cannot provide real-time feedback on the athletes' state during an actual competition or analyze how the competition situation changes. To solve this problem, this paper proposes a Random Forest-based model for predicting turning points in antagonistic event-group competitions. First, a quantitative equation of competitive potential energy is proposed; second, the quantitative value of competitive potential energy is obtained using a dynamic weight-combination method, and turning points in the competition situation are marked according to the quantified time series; finally, a random forest prediction model is established based on the KM-SMOTE algorithm and grid search optimisation. The experimental analysis shows that the quantitative equation of competitive potential energy effectively reflects the dynamics of the competition, that the model effectively predicts turning points in the competition situation (with a recall of 86.13% on the test set), and that the model is of value for future studies of competition situations in antagonistic event-groups.
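A hedged sketch of the described pipeline using scikit-learn and imbalanced-learn, with synthetic data standing in for the competition features; KM-SMOTE is taken to correspond to imbalanced-learn's KMeansSMOTE, and the parameter grid and cluster settings are illustrative:

```python
from imblearn.over_sampling import KMeansSMOTE
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Synthetic imbalanced data standing in for per-moment match features,
# where the rare positive class marks labelled turning points.
X, y = make_classification(n_samples=600, weights=[0.9], random_state=0)

# Rebalance the minority class with KMeans-guided SMOTE, then grid-search
# the forest; cluster settings may need tuning on real data.
X_res, y_res = KMeansSMOTE(cluster_balance_threshold=0.1,
                           random_state=0).fit_resample(X, y)
grid = GridSearchCV(RandomForestClassifier(random_state=0),
                    param_grid={"n_estimators": [100, 300],
                                "max_depth": [None, 10]},
                    scoring="recall", cv=5)   # the paper reports recall
grid.fit(X_res, y_res)
print(grid.best_params_)
```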
[ "['Zishuo Zhu']" ]
null
null
2405.20039
null
null
http://arxiv.org/pdf/2405.20039v1
2024-05-30T13:19:49Z
2024-05-30T13:19:49Z
Task-Agnostic Machine Learning-Assisted Inference
Machine learning (ML) is playing an increasingly important role in scientific research. In conjunction with classical statistical approaches, ML-assisted analytical strategies have shown great promise in accelerating research findings. This has also opened up a whole new field of methodological research focusing on integrative approaches that leverage both ML and statistics to tackle data science challenges. One type of study that has quickly gained popularity employs ML to predict unobserved outcomes in massive samples and then uses the predicted outcomes in downstream statistical inference. However, existing methods designed to ensure the validity of this type of post-prediction inference are limited to very basic tasks such as linear regression analysis. This is because any extension of these approaches to new, more sophisticated statistical tasks requires task-specific algebraic derivations and software implementations, which ignores the massive library of existing software tools already developed for complex inference tasks and severely constrains the scope of post-prediction inference in real applications. To address this challenge, we propose a novel statistical framework for task-agnostic ML-assisted inference. It provides a post-prediction inference solution that can be easily plugged into almost any established data analysis routine. It delivers valid and efficient inference that is robust to arbitrary choices of ML models, while allowing nearly all existing analytical frameworks to be incorporated into the analysis of ML-predicted outcomes. Through extensive experiments, we showcase the validity, versatility, and superiority of our method compared to existing approaches.
[ "['Jiacheng Miao' 'Qiongshi Lu']" ]
null
null
2405.20042
null
null
http://arxiv.org/pdf/2405.20042v2
2024-05-31T14:42:52Z
2024-05-30T13:23:02Z
CycleFormer : TSP Solver Based on Language Modeling
We propose a new transformer model for the Traveling Salesman Problem (TSP) called CycleFormer. We identified distinctive characteristics that need to be considered when applying a conventional transformer model to TSP and aimed to fully incorporate these elements into the TSP-specific transformer. Unlike the token sets in typical language models, which are limited and static, the token (node) set in TSP is unlimited and dynamic. To exploit this fact to the fullest, we equated the encoder output with the decoder linear layer and directly connected the context vector of the encoder to the decoder encoding. Additionally, we added a positional encoding to the encoder tokens that reflects the two-dimensional nature of TSP, and devised a circular positional encoding for the decoder tokens that considers the cyclic properties of a tour. By incorporating these ideas, CycleFormer outperforms state-of-the-art (SOTA) transformer models for TSP from TSP-50 to TSP-500. Notably, on TSP-500, the optimality gap was reduced by approximately 2.8 times, from 3.09% to 1.10%, compared to the existing SOTA. The code will be made available at https://github.com/Giventicket/CycleFormer.
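One natural construction of a circular positional encoding for cyclic tours, in which position n wraps back to position 0; the exact formula CycleFormer uses may differ:

```python
import numpy as np

def circular_positional_encoding(n_pos: int, d_model: int) -> np.ndarray:
    """Embed position t via sin/cos of k * 2*pi*t / n_pos, so the encoding
    of position n_pos coincides with that of position 0, matching the
    cyclic structure of a tour. d_model is assumed even."""
    theta = 2.0 * np.pi * np.arange(n_pos)[:, None] / n_pos
    k = np.arange(1, d_model // 2 + 1)[None, :]   # integer frequencies
    enc = np.empty((n_pos, d_model))
    enc[:, 0::2] = np.sin(k * theta)
    enc[:, 1::2] = np.cos(k * theta)
    return enc

pe = circular_positional_encoding(50, 128)
print(pe.shape)   # (50, 128); row 0 is what a hypothetical row 50 wraps to
```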
[ "['Jieun Yook' 'Junpyo Seo' 'Joon Huh' 'Han Joon Byun' 'Byung-ro Mooon']" ]
null
null
2405.20045
null
null
http://arxiv.org/pdf/2405.20045v1
2024-05-30T13:27:17Z
2024-05-30T13:27:17Z
Iterative Learning Control of Fast, Nonlinear, Oscillatory Dynamics (Preprint)
The sudden onset of deleterious and oscillatory dynamics (often called instabilities) is a known challenge in many fluid, plasma, and aerospace systems. These dynamics are difficult to address because they are nonlinear, chaotic, and are often too fast for active control schemes. In this work, we develop an alternative active control system using an iterative, trajectory-optimization and parameter-tuning approach based on Iterative Learning Control (ILC), Time-Lagged Phase Portraits (TLPP) and Gaussian Process Regression (GPR). The novelty of this approach is that it can control a system's dynamics despite the controller being much slower than the dynamics. We demonstrate this controller on the Lorenz system of equations where it iteratively adjusts (tunes) the system's input parameters to successfully reproduce a desired oscillatory trajectory or state. Additionally, we investigate the system's dynamical sensitivity to its control parameters, identify continuous and bounded regions of desired dynamical trajectories, and demonstrate that the controller is robust to missing information and uncontrollable parameters as long as certain requirements are met. The controller presented in this work provides a framework for low-speed control of a variety of fast, nonlinear systems that may aid in instability suppression and mitigation.
[ "['John W. Brooks' 'Christine M. Greve']" ]
null
null
2405.20051
null
null
http://arxiv.org/abs/2405.20051v1
2024-05-30T13:37:53Z
2024-05-30T13:37:53Z
Threshold-Independent Fair Matching through Score Calibration
Entity Matching (EM) is a critical task in numerous fields, such as healthcare, finance, and public administration, as it identifies records that refer to the same entity within or across different databases. EM faces considerable challenges, particularly with false positives and negatives. These are typically addressed by generating matching scores and applying thresholds to balance false positives and negatives in various contexts. However, adjusting these thresholds can affect the fairness of the outcomes, a critical factor that remains largely overlooked in current fair EM research. The existing body of research on fair EM tends to concentrate on static thresholds, neglecting their critical impact on fairness. To address this, we introduce a new approach in EM using recent metrics for evaluating biases in score-based binary classification, particularly through the lens of distributional parity. This approach enables the application of various bias metrics like equalized odds, equal opportunity, and demographic parity without depending on threshold settings. Our experiments with leading matching methods reveal potential biases, and by applying a calibration technique for EM scores using Wasserstein barycenters, we not only mitigate these biases but also preserve accuracy across real-world datasets. This paper contributes to the field of fairness in data cleaning, with EM as a central task, by promoting a method for generating matching scores that reduce biases across different thresholds.
[ "['Mohammad Hossein Moslemi' 'Mostafa Milani']" ]
null
null
2405.20052
null
null
http://arxiv.org/pdf/2405.20052v2
2024-05-31T08:50:25Z
2024-05-30T13:38:28Z
Hardware-Efficient EMG Decoding for Next-Generation Hand Prostheses
Advancements in neural engineering have enabled the development of Robotic Prosthetic Hands (RPHs) aimed at restoring hand functionality. Current commercial RPHs offer limited control through basic on/off commands. Recent progress in machine learning enables finger movement decoding with higher degrees of freedom, yet the high computational complexity of such models limits their application in portable devices. Future RPH designs must balance portability, low power consumption, and high decoding accuracy to be practical for individuals with disabilities. To this end, we introduce a novel attractor-based neural network to realize on-chip movement decoding for next-generation portable RPHs. The proposed architecture comprises an encoder, an attention layer, an attractor network, and a refinement regressor. We tested our model on four healthy subjects and achieved a decoding accuracy of 80.3%. Our proposed model is over 120 and 50 times more compact compared to state-of-the-art LSTM and CNN models, respectively, with comparable (or superior) decoding accuracy. Therefore, it exhibits minimal hardware complexity and can be effectively integrated as a System-on-Chip.
[ "['Mohammad Kalbasi' 'MohammadAli Shaeri' 'Vincent Alexandre Mendez'\n 'Solaiman Shokur' 'Silvestro Micera' 'Mahsa Shoaran']" ]
null
null
2405.20053
null
null
http://arxiv.org/pdf/2405.20053v1
2024-05-30T13:38:52Z
2024-05-30T13:38:52Z
Would I Lie To You? Inference Time Alignment of Language Models using Direct Preference Heads
Pre-trained Language Models (LMs) exhibit strong zero-shot and in-context learning capabilities; however, their behaviors are often difficult to control. By utilizing Reinforcement Learning from Human Feedback (RLHF), it is possible to fine-tune unsupervised LMs to follow instructions and produce outputs that reflect human preferences. Despite its benefits, RLHF has been shown to potentially harm a language model's reasoning capabilities and introduce artifacts such as hallucinations where the model may fabricate facts. To address this issue, we introduce Direct Preference Heads (DPH), a fine-tuning framework that enables LMs to learn human preference signals through an auxiliary reward head without directly affecting the output distribution of the language modeling head. We perform a theoretical analysis of our objective function and find strong ties to Conservative Direct Preference Optimization (cDPO). Finally we evaluate our models on GLUE, RACE, and the GPT4All evaluation suite and demonstrate that our method produces models which achieve higher scores than those fine-tuned with Supervised Fine-Tuning (SFT) or Direct Preference Optimization (DPO) alone.
[ "['Avelina Asada Hadji-Kyriacou' 'Ognjen Arandjelovic']" ]
null
null
2405.20071
null
null
http://arxiv.org/pdf/2405.20071v1
2024-05-30T14:01:02Z
2024-05-30T14:01:02Z
A Staged Approach using Machine Learning and Uncertainty Quantification to Predict the Risk of Hip Fracture
Despite advancements in medical care, hip fractures impose a significant burden on individuals and healthcare systems. This paper focuses on the prediction of hip fracture risk in older and middle-aged adults, where falls and compromised bone quality are predominant factors. We propose a novel staged model that combines advanced imaging and clinical data to improve predictive performance. By using CNNs to extract features from hip DXA images, along with clinical variables, shape measurements, and texture features, our method provides a comprehensive framework for assessing fracture risk. A staged machine learning-based model was developed using two ensemble models: Ensemble 1 (clinical variables only) and Ensemble 2 (clinical variables and DXA imaging features). This staged approach used uncertainty quantification from Ensemble 1 to decide if DXA features are necessary for further prediction. Ensemble 2 exhibited the highest performance, achieving an AUC of 0.9541, an accuracy of 0.9195, a sensitivity of 0.8078, and a specificity of 0.9427. The staged model also performed well, with an AUC of 0.8486, an accuracy of 0.8611, a sensitivity of 0.5578, and a specificity of 0.9249, outperforming Ensemble 1, which had an AUC of 0.5549, an accuracy of 0.7239, a sensitivity of 0.1956, and a specificity of 0.8343. Furthermore, the staged model suggested that 54.49% of patients did not require DXA scanning. It effectively balanced accuracy and specificity, offering a robust solution when DXA data acquisition is not always feasible. Statistical tests confirmed significant differences between the models, highlighting the advantages of the advanced modeling strategies. Our staged approach can identify at-risk individuals with high accuracy while reducing unnecessary DXA scanning. It holds great promise for guiding interventions to prevent hip fractures at reduced cost and radiation exposure.
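A schematic of the staged gating logic, with the uncertainty rule, threshold, and second-stage predictor as placeholders for the paper's two ensembles:

```python
import numpy as np

def staged_predict(p_clinical, uncertainty, dxa_predict, threshold=0.2):
    """Stage 1 scores every patient from clinical variables only; patients
    whose predictive uncertainty exceeds `threshold` are routed to the
    DXA-based stage 2. `dxa_predict` and the threshold stand in for the
    paper's second ensemble and its uncertainty-quantification rule."""
    final = np.asarray(p_clinical, dtype=float).copy()
    needs_dxa = np.asarray(uncertainty) > threshold
    if needs_dxa.any():
        final[needs_dxa] = dxa_predict(needs_dxa)   # stage-2 predictions
    return final, 1.0 - needs_dxa.mean()            # fraction spared a scan

# Toy usage: three patients, one uncertain case routed to stage 2.
preds, spared = staged_predict(
    [0.1, 0.8, 0.4], [0.05, 0.1, 0.5],
    dxa_predict=lambda mask: np.full(mask.sum(), 0.7))
print(preds, spared)
```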
[ "['Anjum Shaik' 'Kristoffer Larsen' 'Nancy E. Lane' 'Chen Zhao'\n 'Kuan-Jui Su' 'Joyce H. Keyak' 'Qing Tian' 'Qiuying Sha' 'Hui Shen'\n 'Hong-Wen Deng' 'Weihua Zhou']" ]
null
null
2405.20079
null
null
http://arxiv.org/pdf/2405.20079v1
2024-05-30T14:09:43Z
2024-05-30T14:09:43Z
Student Answer Forecasting: Transformer-Driven Answer Choice Prediction for Language Learning
Intelligent Tutoring Systems (ITS) enhance personalized learning by predicting student answers to provide immediate and customized instruction. However, recent research has primarily focused on the correctness of the answer rather than the student's performance on specific answer choices, limiting insights into students' thought processes and potential misconceptions. To address this gap, we present MCQStudentBert, an answer forecasting model that leverages the capabilities of Large Language Models (LLMs) to integrate contextual understanding of students' answering history along with the text of the questions and answers. By predicting the specific answer choices students are likely to make, practitioners can easily extend the model to new answer choices or remove answer choices for the same multiple-choice question (MCQ) without retraining the model. In particular, we compare MLP, LSTM, BERT, and Mistral 7B architectures to generate embeddings from students' past interactions, which are then incorporated into a finetuned BERT's answer-forecasting mechanism. We apply our pipeline to a dataset of language learning MCQs, gathered from an ITS with over 10,000 students, to explore the predictive accuracy of MCQStudentBert, which incorporates student interaction patterns, in comparison to correct answer prediction and traditional mastery-learning feature-based approaches. This work opens the door to more personalized content, modularization, and granular support.
[ "['Elena Grazia Gado' 'Tommaso Martorella' 'Luca Zunino'\n 'Paola Mejia-Domenzain' 'Vinitra Swamy' 'Jibril Frej' 'Tanja Käser']" ]
null
null
2405.20082
null
null
http://arxiv.org/pdf/2405.20082v1
2024-05-30T14:11:29Z
2024-05-30T14:11:29Z
Segment, Shuffle, and Stitch: A Simple Mechanism for Improving Time-Series Representations
Existing approaches for learning representations of time-series keep the temporal arrangement of the time-steps intact with the presumption that the original order is optimal for learning. However, non-adjacent sections of real-world time-series may have strong dependencies. Accordingly, we raise the question: Is there an alternative arrangement for time-series which could enable more effective representation learning? To address this, we propose a simple plug-and-play mechanism called Segment, Shuffle, and Stitch (S3) designed to improve time-series representation learning of existing models. S3 works by creating non-overlapping segments from the original sequence and shuffling them in a learned manner that is optimal for the task at hand. It then re-attaches the shuffled segments back together and performs a learned weighted sum with the original input to capture both the newly shuffled sequence and the original sequence. S3 is modular and can be stacked to create various degrees of granularity, and can be added to many forms of neural architectures including CNNs or Transformers with negligible computation overhead. Through extensive experiments on several datasets and state-of-the-art baselines, we show that incorporating S3 results in significant improvements for the tasks of time-series classification and forecasting, improving performance on certain datasets by up to 68%. We also show that S3 makes the learning more stable with a smoother training loss curve and loss landscape compared to the original baseline. The code is available at https://github.com/shivam-grover/S3-TimeSeries .
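A simplified PyTorch sketch of the S3 mechanism; the paper learns the shuffle differentiably, whereas the argsort below is a non-differentiable stand-in:

```python
import torch
import torch.nn as nn

class S3(nn.Module):
    """Segment-Shuffle-Stitch sketch: split a series into n segments,
    reorder them by learned segment scores, and mix the result with the
    original input through a learned stitch weight. The argsort gives no
    gradient to the scores; the paper's learned shuffle is differentiable."""
    def __init__(self, n_segments: int):
        super().__init__()
        self.n = n_segments
        self.scores = nn.Parameter(torch.randn(n_segments))  # segment priorities
        self.alpha = nn.Parameter(torch.tensor(0.5))         # stitch weight

    def forward(self, x):                     # x: (batch, time, channels)
        b, t, c = x.shape
        seg_len = t // self.n
        x_trim = x[:, : seg_len * self.n]     # drop any remainder steps
        segs = x_trim.reshape(b, self.n, seg_len, c)
        order = torch.argsort(self.scores)    # learned segment order
        shuffled = segs[:, order].reshape(b, seg_len * self.n, c)
        return self.alpha * shuffled + (1 - self.alpha) * x_trim

out = S3(n_segments=4)(torch.randn(2, 96, 8))
print(out.shape)   # torch.Size([2, 96, 8])
```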
[ "['Shivam Grover' 'Amin Jalali' 'Ali Etemad']" ]
null
null
2405.20085
null
null
http://arxiv.org/pdf/2405.20085v2
2024-06-04T10:15:42Z
2024-05-30T14:16:19Z
Soft Partitioning of Latent Space for Semantic Channel Equalization
Semantic channel equalization has emerged as a solution to address language mismatch in multi-user semantic communications. This approach aims to align the latent spaces of an encoder and a decoder which were not jointly trained, and it relies on a partition of the semantic (latent) space into atoms based on semantic meaning. In this work we explore the role of the semantic space partition in scenarios where the task structure involves a one-to-many mapping between the semantic space and the action space. In such scenarios, partitioning based on hard inference results leads to a loss of information which degrades the equalization performance. We propose a soft criterion to derive the atoms of the partition which leverages the soft decoder's output and offers a more comprehensive understanding of the semantic space's structure. Through empirical validation, we demonstrate that soft partitioning yields a more descriptive and regular partition of the space, consequently enhancing the performance of the equalization algorithm.
[ "['Tomás Hüttebräucker' 'Mohamed Sana' 'Emilio Calvanese Strinati']" ]
null
null
2405.20086
null
null
http://arxiv.org/pdf/2405.20086v1
2024-05-30T14:16:32Z
2024-05-30T14:16:32Z
Analysis of a multi-target linear shrinkage covariance estimator
Multi-target linear shrinkage is an extension of the standard single-target linear shrinkage for covariance estimation. We combine several constant matrices - the targets - with the sample covariance matrix. We derive the oracle and a \textit{bona fide} multi-target linear shrinkage estimator with exact and empirical mean. In both settings, we prove its convergence towards the oracle under Kolmogorov asymptotics. Finally, we show empirically that it outperforms other standard estimators in various situations.
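A sketch of the estimator's form with the shrinkage weights taken as given; the paper derives oracle and bona fide weights, which are omitted here:

```python
import numpy as np

def multi_target_shrinkage(S, targets, weights):
    """Convex combination of the sample covariance S with several constant
    target matrices; the remaining weight stays on S."""
    w = np.asarray(weights, dtype=float)
    assert (w >= 0).all() and w.sum() <= 1.0
    est = (1.0 - w.sum()) * S
    for wk, Tk in zip(w, targets):
        est = est + wk * Tk
    return est

# Toy usage: shrink toward a scaled identity and a constant matrix.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 10))               # n = 40 samples, p = 10
S = np.cov(X, rowvar=False)
T1 = np.eye(10) * np.trace(S) / 10          # scaled-identity target
T2 = np.full((10, 10), S.mean())            # constant target (illustrative)
Sigma_hat = multi_target_shrinkage(S, [T1, T2], [0.2, 0.1])
print(Sigma_hat.shape)
```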
[ "['Benoit Oriol']" ]
null
null
2405.20091
null
null
http://arxiv.org/pdf/2405.20091v2
2024-05-31T09:35:36Z
2024-05-30T14:27:40Z
Visual Attention Analysis in Online Learning
In this paper, we present an approach in the Multimodal Learning Analytics field. Within this approach, we have developed a tool to visualize and analyze eye movement data collected during learning sessions in online courses. The tool is named VAAD (an acronym for Visual Attention Analysis Dashboard). These eye movement data have been gathered using an eye-tracker and subsequently processed and visualized for interpretation. The purpose of the tool is to conduct a descriptive analysis of the data by facilitating its visualization, enabling the identification of differences and learning patterns among various learner populations. Additionally, it integrates a predictive module capable of anticipating learner activities during a learning session. Consequently, VAAD holds the potential to offer valuable insights into online learning behaviors from both descriptive and predictive perspectives.
[ "['Miriam Navarro' 'Álvaro Becerra' 'Roberto Daza' 'Ruth Cobos'\n 'Aythami Morales' 'Julian Fierrez']" ]
null
null
2405.20094
null
null
http://arxiv.org/pdf/2405.20094v1
2024-05-30T14:32:06Z
2024-05-30T14:32:06Z
Low-dimensional approximations of the conditional law of Volterra processes: a non-positive curvature approach
Predicting the conditional evolution of Volterra processes with stochastic volatility is a crucial challenge in mathematical finance. While deep neural network models offer promise in approximating the conditional law of such processes, their effectiveness is hindered by the curse of dimensionality caused by the infinite dimensionality and non-smooth nature of these problems. To address this, we propose a two-step solution. Firstly, we develop a stable dimension reduction technique, projecting the law of a reasonably broad class of Volterra processes onto a low-dimensional statistical manifold of non-positive sectional curvature. Next, we introduce a sequential deep learning model tailored to the manifold's geometry, which we show can approximate the projected conditional law of the Volterra process. Our model leverages an auxiliary hypernetwork to dynamically update its internal parameters, allowing it to encode non-stationary dynamics of the Volterra process, and it can be interpreted as a gating mechanism in a mixture of expert models where each expert is specialized at a specific point in time. Our hypernetwork further allows us to achieve approximation rates that would seemingly only be possible with very large networks.
[ "['Reza Arabpour' 'John Armstrong' 'Luca Galimberti' 'Anastasis Kratsios'\n 'Giulia Livieri']" ]
null
null
2405.20114
null
null
http://arxiv.org/pdf/2405.20114v1
2024-05-30T14:51:57Z
2024-05-30T14:51:57Z
Near Optimal Decentralized Optimization with Compression and Momentum Tracking
Communication efficiency has garnered significant attention as it is considered the main bottleneck for large-scale decentralized Machine Learning applications in distributed and federated settings. In this regime, clients are restricted to transmitting small amounts of quantized information to their neighbors over a communication graph. Numerous endeavors have been made to address this challenging problem by developing algorithms with compressed communication for decentralized non-convex optimization problems. Despite considerable efforts, the current results suffer from various issues such as non-scalability with the number of clients, requirements for large batches, or bounded gradient assumption. In this paper, we introduce MoTEF, a novel approach that integrates communication compression with Momentum Tracking and Error Feedback. Our analysis demonstrates that MoTEF achieves most of the desired properties, and significantly outperforms existing methods under arbitrary data heterogeneity. We provide numerical experiments to validate our theoretical findings and confirm the practical superiority of MoTEF.
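A single-client sketch of the ingredients MoTEF combines (a contractive Top-K compressor, error feedback, and a momentum-tracking gradient estimate); the actual algorithm runs decentralized over a communication graph, and the placeholder gradient and communication functions are assumptions:

```python
import numpy as np

def top_k(x: np.ndarray, k: int) -> np.ndarray:
    """Contractive Top-K compressor: keep the k largest-magnitude entries."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]
    out[idx] = x[idx]
    return out

rng = np.random.default_rng(0)
compute_gradient = lambda step: rng.normal(size=100)  # stand-in local gradient
send_to_neighbors = lambda msg: None                  # stand-in comm step

error = np.zeros(100)   # error-feedback memory
m = np.zeros(100)       # momentum-tracking gradient estimate
beta = 0.9
for step in range(100):
    g = compute_gradient(step)
    m = beta * m + (1 - beta) * g        # momentum tracking
    msg = top_k(m + error, k=10)         # compress the corrected message
    error = (m + error) - msg            # keep what the compressor dropped
    send_to_neighbors(msg)
```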
[ "['Rustem Islamov' 'Yuan Gao' 'Sebastian U. Stich']" ]
null
null
2405.20124
null
null
http://arxiv.org/pdf/2405.20124v1
2024-05-30T15:01:18Z
2024-05-30T15:01:18Z
A Geometric Unification of Distributionally Robust Covariance Estimators: Shrinking the Spectrum by Inflating the Ambiguity Set
The state-of-the-art methods for estimating high-dimensional covariance matrices all shrink the eigenvalues of the sample covariance matrix towards a data-insensitive shrinkage target. The underlying shrinkage transformation is either chosen heuristically - without compelling theoretical justification - or optimally in view of restrictive distributional assumptions. In this paper, we propose a principled approach to construct covariance estimators without imposing restrictive assumptions. That is, we study distributionally robust covariance estimation problems that minimize the worst-case Frobenius error with respect to all data distributions close to a nominal distribution, where the proximity of distributions is measured via a divergence on the space of covariance matrices. We identify mild conditions on this divergence under which the resulting minimizers represent shrinkage estimators. We show that the corresponding shrinkage transformations are intimately related to the geometrical properties of the underlying divergence. We also prove that our robust estimators are efficiently computable and asymptotically consistent and that they enjoy finite-sample performance guarantees. We exemplify our general methodology by synthesizing explicit estimators induced by the Kullback-Leibler, Fisher-Rao, and Wasserstein divergences. Numerical experiments based on synthetic and real data show that our robust estimators are competitive with state-of-the-art estimators.
[ "['Man-Chung Yue' 'Yves Rychener' 'Daniel Kuhn' 'Viet Anh Nguyen']" ]
null
null
2405.20127
null
null
http://arxiv.org/pdf/2405.20127v1
2024-05-30T15:07:30Z
2024-05-30T15:07:30Z
SPAM: Stochastic Proximal Point Method with Momentum Variance Reduction for Non-convex Cross-Device Federated Learning
Cross-device training is a crucial subfield of federated learning, where the number of clients can reach into the billions. Standard approaches and local methods are prone to issues such as client drift and insensitivity to data similarities. We propose a novel algorithm (SPAM) for cross-device federated learning with non-convex losses, which solves both issues. We provide sharp analysis under second-order (Hessian) similarity, a condition satisfied by a variety of machine learning problems in practice. Additionally, we extend our results to the partial participation setting, where a cohort of selected clients communicate with the server at each communication round. Our method is the first of its kind that does not require smoothness of the objective and provably benefits from clients having similar data.
[ "['Avetik Karagulyan' 'Egor Shulgin' 'Abdurakhmon Sadiev' 'Peter Richtárik']" ]
null
null
2405.20139
null
null
http://arxiv.org/pdf/2405.20139v1
2024-05-30T15:14:24Z
2024-05-30T15:14:24Z
GNN-RAG: Graph Neural Retrieval for Large Language Model Reasoning
Knowledge Graphs (KGs) represent human-crafted factual knowledge in the form of triplets (head, relation, tail), which collectively form a graph. Question Answering over KGs (KGQA) is the task of answering natural-language questions by grounding the reasoning in the information provided by the KG. Large Language Models (LLMs) are the state-of-the-art models for QA tasks due to their remarkable ability to understand natural language. On the other hand, Graph Neural Networks (GNNs) have been widely used for KGQA as they can handle the complex graph information stored in the KG. In this work, we introduce GNN-RAG, a novel method for combining the language understanding abilities of LLMs with the reasoning abilities of GNNs in a retrieval-augmented generation (RAG) style. First, a GNN reasons over a dense KG subgraph to retrieve answer candidates for a given question. Second, the shortest paths in the KG that connect question entities and answer candidates are extracted to represent KG reasoning paths. The extracted paths are verbalized and given as input for LLM reasoning with RAG. In our GNN-RAG framework, the GNN acts as a dense subgraph reasoner to extract useful graph information, while the LLM leverages its natural language processing ability for the final KGQA answer. Furthermore, we develop a retrieval augmentation (RA) technique to further boost KGQA performance with GNN-RAG. Experimental results show that GNN-RAG achieves state-of-the-art performance in two widely used KGQA benchmarks (WebQSP and CWQ), outperforming or matching GPT-4 performance with a 7B tuned LLM. In addition, GNN-RAG excels on multi-hop and multi-entity questions, outperforming competing approaches by 8.9-15.5 percentage points in answer F1.
[ "['Costas Mavromatis' 'George Karypis']" ]
null
null
2405.20165
null
null
http://arxiv.org/pdf/2405.20165v1
2024-05-30T15:39:19Z
2024-05-30T15:39:19Z
Randomized Exploration for Reinforcement Learning with Multinomial Logistic Function Approximation
We study reinforcement learning with multinomial logistic (MNL) function approximation where the underlying transition probability kernel of the Markov decision processes (MDPs) is parametrized by an unknown transition core with features of state and action. For the finite horizon episodic setting with inhomogeneous state transitions, we propose provably efficient algorithms with randomized exploration having frequentist regret guarantees. For our first algorithm, $\texttt{RRL-MNL}$, we adapt optimistic sampling to ensure the optimism of the estimated value function with sufficient frequency and establish that $\texttt{RRL-MNL}$ is both statistically and computationally efficient, achieving a $\tilde{O}(\kappa^{-1} d^{\frac{3}{2}} H^{\frac{3}{2}} \sqrt{T})$ frequentist regret bound with constant-time computational cost per episode. Here, $d$ is the dimension of the transition core, $H$ is the horizon length, $T$ is the total number of steps, and $\kappa$ is a problem-dependent constant. Despite the simplicity and practicality of $\texttt{RRL-MNL}$, its regret bound scales with $\kappa^{-1}$, which is potentially large in the worst case. To improve the dependence on $\kappa^{-1}$, we propose $\texttt{ORRL-MNL}$, which estimates the value function using local gradient information of the MNL transition model. We show that its frequentist regret bound is $\tilde{O}(d^{\frac{3}{2}} H^{\frac{3}{2}} \sqrt{T} + \kappa^{-1} d^2 H^2)$. To the best of our knowledge, these are the first randomized RL algorithms for the MNL transition model that achieve both computational and statistical efficiency. Numerical experiments demonstrate the superior performance of the proposed algorithms.
[ "['Wooseong Cho' 'Taehyun Hwang' 'Joongkyu Lee' 'Min-hwan Oh']" ]
null
null
2405.20172
null
null
http://arxiv.org/abs/2405.20172v3
2024-06-05T22:28:13Z
2024-05-30T15:44:27Z
Iterative Feature Boosting for Explainable Speech Emotion Recognition
In speech emotion recognition (SER), using predefined features without considering their practical importance may lead to high-dimensional datasets that include redundant and irrelevant information. Consequently, high-dimensional learning often results in decreasing model accuracy while increasing computational complexity. Our work underlines the importance of carefully considering and analyzing features in order to build efficient SER systems. We present a new supervised SER method based on an efficient feature engineering approach. We pay particular attention to the explainability of results to evaluate feature relevance and refine feature sets. This is performed iteratively through a feature evaluation loop, using Shapley values to boost feature selection and improve overall framework performance. Our approach thus allows balancing model performance and transparency. The proposed method outperforms human-level performance (HLP) and state-of-the-art machine learning methods in emotion recognition on the TESS dataset. The source code of this paper is publicly available at https://github.com/alaaNfissi/Iterative-Feature-Boosting-for-Explainable-Speech-Emotion-Recognition.
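A generic version of the iterative feature-evaluation loop using the shap library, with a tabular dataset standing in for the paper's speech-emotion features on TESS:

```python
import numpy as np
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Fit a model, score features by mean |SHAP| value, drop the weakest
# quartile, and refit; the drop rule and round count are illustrative.
X, y = load_breast_cancer(return_X_y=True)
keep = np.arange(X.shape[1])
for round_ in range(3):
    Xtr, Xte, ytr, yte = train_test_split(X[:, keep], y, random_state=0)
    model = GradientBoostingClassifier(random_state=0).fit(Xtr, ytr)
    phi = shap.TreeExplainer(model).shap_values(Xte)  # (n_samples, n_feats)
    imp = np.abs(phi).mean(axis=0)
    keep = keep[imp > np.quantile(imp, 0.25)]
    print(f"round {round_}: kept {len(keep)} features, "
          f"acc={model.score(Xte, yte):.3f}")
```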
[ "['Alaa Nfissi' 'Wassim Bouachir' 'Nizar Bouguila' 'Brian Mishara']" ]
null
null
2405.20174
null
null
http://arxiv.org/pdf/2405.20174v1
2024-05-30T15:45:03Z
2024-05-30T15:45:03Z
Tropical Expressivity of Neural Networks
We propose an algebraic geometric framework to study the expressivity of linear activation neural networks. A particular quantity that has been actively studied in the field of deep learning is the number of linear regions, which gives an estimate of the information capacity of the architecture. To study and evaluate information capacity and expressivity, we work in the setting of tropical geometry -- a combinatorial and polyhedral variant of algebraic geometry -- where there are known connections between tropical rational maps and feedforward neural networks. Our work builds on and expands this connection to capitalize on the rich theory of tropical geometry to characterize and study various architectural aspects of neural networks. Our contributions are threefold: we provide a novel tropical geometric approach to selecting sampling domains among linear regions; an algebraic result allowing for a guided restriction of the sampling domain for network architectures with symmetries; and an open source library to analyze neural networks as tropical Puiseux rational maps. We provide a comprehensive set of proof-of-concept numerical experiments demonstrating the breadth of neural network architectures to which tropical geometric theory can be applied to reveal insights on expressivity characteristics of a network. Our work provides the foundations for the adaptation of both theory and existing software from computational tropical geometry and symbolic computation to deep learning.
[ "['Shiv Bhatia' 'Yueqi Cao' 'Paul Lezeau' 'Anthea Monod']" ]
null
null
2405.20178
null
null
http://arxiv.org/pdf/2405.20178v1
2024-05-30T15:47:48Z
2024-05-30T15:47:48Z
Non-intrusive data-driven model order reduction for circuits based on Hammerstein architectures
We demonstrate that data-driven system identification techniques can provide a basis for effective, non-intrusive model order reduction (MOR) for common circuits that are key building blocks in microelectronics. Our approach is motivated by the practical operation of these circuits and utilizes a canonical Hammerstein architecture. To demonstrate the approach we develop a parsimonious Hammerstein model for a non-linear CMOS differential amplifier. We train this model on a combination of direct current (DC) and transient Spice (Xyce) circuit simulation data using a novel sequential strategy to identify the static nonlinear and linear dynamical parts of the model. Simulation results show that the Hammerstein model is an effective surrogate for the differential amplifier circuit that accurately and efficiently reproduces its behavior over a wide range of operating points and input frequencies.
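A toy sequential identification in the spirit of the described strategy: fit the static nonlinearity from DC sweep data, then fit a linear model from transient data with the nonlinearity fixed; the polynomial/FIR forms and the synthetic data are assumptions standing in for Spice (Xyce) outputs:

```python
import numpy as np

# Step 1: static nonlinearity f from DC sweep (dynamics at steady state).
u_dc = np.linspace(-1, 1, 50)            # DC input sweep
y_dc = np.tanh(2 * u_dc)                 # stand-in for Xyce DC sweep data
coeffs = np.polyfit(u_dc, y_dc, deg=5)
f = lambda u: np.polyval(coeffs, u)

# Step 2: with f fixed, fit a linear FIR filter from transient data.
rng = np.random.default_rng(0)
u_t = rng.normal(size=500)               # transient excitation
g_true = np.array([0.5, 0.3, 0.1])       # "unknown" linear dynamics
y_t = np.convolve(f(u_t), g_true)[:500]  # stand-in transient response
v = f(u_t)
Phi = np.column_stack([np.concatenate([np.zeros(k), v[:500 - k]])
                       for k in range(len(g_true))])
g_hat, *_ = np.linalg.lstsq(Phi, y_t, rcond=None)
print(g_hat)                             # recovers ~[0.5, 0.3, 0.1]
```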
[ "['Joshua Hanson' 'Biliana Paskaleva' 'Pavel Bochev']" ]
null
null
2405.20180
null
null
http://arxiv.org/pdf/2405.20180v1
2024-05-30T15:48:04Z
2024-05-30T15:48:04Z
Transformers and Slot Encoding for Sample Efficient Physical World Modelling
World modelling, i.e. building a representation of the rules that govern the world so as to predict its evolution, is an essential ability for any agent interacting with the physical world. Recent applications of the Transformer architecture to the problem of world modelling from video input show notable improvements in sample efficiency. However, existing approaches tend to work only at the image level thus disregarding that the environment is composed of objects interacting with each other. In this paper, we propose an architecture combining Transformers for world modelling with the slot-attention paradigm, an approach for learning representations of objects appearing in a scene. We describe the resulting neural architecture and report experimental results showing an improvement over the existing solutions in terms of sample efficiency and a reduction of the variation of the performance over the training examples. The code for our architecture and experiments is available at https://github.com/torchipeppo/transformers-and-slot-encoding-for-wm
[ "['Francesco Petri' 'Luigi Asprino' 'Aldo Gangemi']" ]
null
null
2405.20194
null
null
http://arxiv.org/pdf/2405.20194v2
2024-06-17T23:51:55Z
2024-05-30T15:58:22Z
Occam Gradient Descent
Deep learning neural network models must be large enough to adapt to their problem domain, while small enough to avoid overfitting training data during gradient descent. To balance these competing demands, overprovisioned deep learning models such as transformers are trained for a single epoch on large data sets, and are hence inefficient with both computing resources and training data. In response to these inefficiencies, we exploit learning theory to derive Occam Gradient Descent, an algorithm that interleaves adaptive reduction of model size to minimize generalization error with gradient descent on model weights to minimize fitting error. In contrast, traditional gradient descent greedily minimizes fitting error without regard to generalization error. Our algorithm simultaneously descends the space of weights and topological size of any neural network without modification, and in our experiments it outperforms traditional gradient descent, with or without post-train pruning, in accuracy, compute, and model compression.
[ "['B. N. Kausik']" ]
null
null
2405.20200
null
null
http://arxiv.org/pdf/2405.20200v1
2024-05-30T16:04:35Z
2024-05-30T16:04:35Z
Unified Explanations in Machine Learning Models: A Perturbation Approach
A high-velocity paradigm shift towards Explainable Artificial Intelligence (XAI) has emerged in recent years. Highly complex Machine Learning (ML) models have flourished in many tasks of intelligence, and the questions have started to shift away from traditional metrics of validity towards something deeper: What is this model telling me about my data, and how is it arriving at these conclusions? Inconsistencies between XAI and modeling techniques can have the undesirable effect of casting doubt upon the efficacy of these explainability approaches. To address these problems, we propose a systematic, perturbation-based analysis against a popular, model-agnostic method in XAI, SHapley Additive exPlanations (Shap). We devise algorithms to generate relative feature importance in settings of dynamic inference amongst a suite of popular machine learning and deep learning methods, and metrics that allow us to quantify how well explanations generated under the static case hold. We propose a taxonomy for feature importance methodology, measure alignment, and observe quantifiable similarity amongst explanation models across several datasets.
[ "['Jacob Dineen' 'Don Kridel' 'Daniel Dolk' 'David Castillo']" ]
null
null
2405.20213
null
null
http://arxiv.org/pdf/2405.20213v1
2024-05-30T16:16:25Z
2024-05-30T16:16:25Z
PostDoc: Generating Poster from a Long Multimodal Document Using Deep Submodular Optimization
A poster from a long input document can be considered as a one-page, easy-to-read multimodal (text and images) summary presented on a nice template with good design elements. Automatic transformation of a long document into a poster is an understudied yet challenging task. It involves content summarization of the input document followed by template generation and harmonization. In this work, we propose a novel deep submodular function which can be trained on ground truth summaries to extract multimodal content from the document and explicitly ensures good coverage, diversity and alignment of text and images. Then, we use an LLM-based paraphraser and propose to generate a template with various design aspects conditioned on the input content. We show the merits of our approach through extensive automated and human evaluations.
[ "['Vijay Jaisankar' 'Sambaran Bandyopadhyay' 'Kalp Vyas' 'Varre Chaitanya'\n 'Shwetha Somasundaram']" ]
null
null
2405.20216
null
null
http://arxiv.org/pdf/2405.20216v1
2024-05-30T16:18:05Z
2024-05-30T16:18:05Z
Boost Your Own Human Image Generation Model via Direct Preference Optimization with AI Feedback
The generation of high-quality human images through text-to-image (T2I) methods is a significant yet challenging task. Distinct from general image generation, human image synthesis must satisfy stringent criteria related to human pose, anatomy, and alignment with textual prompts, making it particularly difficult to achieve realistic results. Recent advancements in T2I generation based on diffusion models have shown promise, yet challenges remain in meeting human-specific preferences. In this paper, we introduce a novel approach tailored specifically for human image generation utilizing Direct Preference Optimization (DPO). Specifically, we introduce an efficient method for constructing a specialized DPO dataset for training human image generation models without the need for costly human feedback. We also propose a modified loss function that enhances the DPO training process by minimizing artifacts and improving image fidelity. Our method demonstrates its versatility and effectiveness in generating human images, including personalized text-to-image generation. Through comprehensive evaluations, we show that our approach significantly advances the state of human image generation, achieving superior results in terms of natural anatomies, poses, and text-image alignment.
[ "['Sanghyeon Na' 'Yonggyu Kim' 'Hyunjoon Lee']" ]
null
null
2405.20230
null
null
http://arxiv.org/pdf/2405.20230v1
2024-05-23T20:44:10Z
2024-05-23T20:44:10Z
Feature Fusion for Improved Classification: Combining Dempster-Shafer Theory and Multiple CNN Architectures
Addressing uncertainty in Deep Learning (DL) is essential, as it enables the development of models that can make reliable predictions and informed decisions in complex, real-world environments where data may be incomplete or ambiguous. This paper introduces a novel algorithm leveraging Dempster-Shafer Theory (DST) to integrate multiple pre-trained models to form an ensemble capable of providing more reliable and enhanced classifications. The main steps of the proposed method include feature extraction, mass function calculation, fusion, and expected utility calculation. Several experiments have been conducted on CIFAR-10 and CIFAR-100 datasets, demonstrating superior classification accuracy of the proposed DST-based method, achieving improvements of 5.4% and 8.4%, respectively, compared to the best individual pre-trained models. Results highlight the potential of DST as a robust framework for managing uncertainties related to data when applying DL in real-world scenarios.
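A minimal implementation of Dempster's rule of combination, the fusion step at the heart of the method; how each pre-trained model's mass function is constructed in the paper differs from the toy masses below:

```python
from itertools import product

def dempster_combine(m1: dict, m2: dict) -> dict:
    """Dempster's rule of combination for mass functions whose focal
    elements are frozensets; conflicting (empty-intersection) mass is
    normalized away."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Two classifiers' beliefs over {cat, dog}; fusion sharpens agreement.
cat, dog = frozenset({"cat"}), frozenset({"dog"})
either = frozenset({"cat", "dog"})
m1 = {cat: 0.6, dog: 0.1, either: 0.3}
m2 = {cat: 0.5, dog: 0.2, either: 0.3}
print(dempster_combine(m1, m2))
```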
[ "['Ayyub Alzahem' 'Wadii Boulila' 'Maha Driss' 'Anis Koubaa']" ]
null
null
2405.20231
null
null
http://arxiv.org/pdf/2405.20231v2
2024-06-20T12:11:37Z
2024-05-30T16:32:31Z
The Empirical Impact of Neural Parameter Symmetries, or Lack Thereof
Many algorithms and observed phenomena in deep learning appear to be affected by parameter symmetries -- transformations of neural network parameters that do not change the underlying neural network function. These include linear mode connectivity, model merging, Bayesian neural network inference, metanetworks, and several other characteristics of optimization or loss-landscapes. However, theoretical analysis of the relationship between parameter space symmetries and these phenomena is difficult. In this work, we empirically investigate the impact of neural parameter symmetries by introducing new neural network architectures that have reduced parameter space symmetries. We develop two methods, with some provable guarantees, of modifying standard neural networks to reduce parameter space symmetries. With these new methods, we conduct a comprehensive experimental study consisting of multiple tasks aimed at assessing the effect of removing parameter symmetries. Our experiments reveal several interesting observations on the empirical impact of parameter symmetries; for instance, we observe linear mode connectivity between our networks without alignment of weight spaces, and we find that our networks allow for faster and more effective Bayesian neural network training.
[ "['Derek Lim' 'Moe Putterman' 'Robin Walters' 'Haggai Maron'\n 'Stefanie Jegelka']" ]
null
null
2405.20233
null
null
http://arxiv.org/pdf/2405.20233v2
2024-06-05T15:12:00Z
2024-05-30T16:35:30Z
Grokfast: Accelerated Grokking by Amplifying Slow Gradients
One puzzling artifact in machine learning, dubbed grokking, is that delayed generalization is achieved many times more iterations after near-perfect overfitting to the training data. Focusing on the long delay itself on behalf of machine learning practitioners, our goal is to accelerate generalization of a model under the grokking phenomenon. By regarding a series of gradients of a parameter over training iterations as a random signal over time, we can spectrally decompose the parameter trajectories under gradient descent into two components: the fast-varying, overfitting-yielding component and the slow-varying, generalization-inducing component. This analysis allows us to accelerate the grokking phenomenon by more than $\times 50$ with only a few lines of code that amplify the slow-varying components of gradients. The experiments show that our algorithm applies to diverse tasks involving images, languages, and graphs, enabling practical availability of this peculiar artifact of sudden generalization. Our code is available at https://github.com/ironjr/grokfast.
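The "few lines of code" amount to low-pass filtering each parameter's gradient with an exponential moving average and adding the amplified slow component back before the optimizer step, roughly as follows (hyperparameters are illustrative):

```python
import torch

def gradfilter_ema(model, grads=None, alpha=0.98, lamb=2.0):
    """Amplify the slow-varying (EMA-filtered) component of each
    parameter's gradient; call between loss.backward() and
    optimizer.step()."""
    if grads is None:
        grads = {n: p.grad.detach().clone()
                 for n, p in model.named_parameters() if p.grad is not None}
    for n, p in model.named_parameters():
        if p.grad is not None:
            grads[n] = alpha * grads[n] + (1 - alpha) * p.grad.detach()
            p.grad = p.grad + lamb * grads[n]   # boost the slow component
    return grads

# Usage inside a training loop (grads starts as None):
#   loss.backward()
#   grads = gradfilter_ema(model, grads)
#   optimizer.step()
```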
[ "['Jaerin Lee' 'Bong Gyun Kang' 'Kihoon Kim' 'Kyoung Mu Lee']" ]
null
null
2405.20236
null
null
http://arxiv.org/pdf/2405.20236v1
2024-05-30T16:40:07Z
2024-05-30T16:40:07Z
Disentangling and Mitigating the Impact of Task Similarity for Continual Learning
Continual learning of partially similar tasks poses a challenge for artificial neural networks, as task similarity presents both an opportunity for knowledge transfer and a risk of interference and catastrophic forgetting. However, it remains unclear how task similarity in input features and readout patterns influences knowledge transfer and forgetting, as well as how they interact with common algorithms for continual learning. Here, we develop a linear teacher-student model with latent structure and show analytically that high input feature similarity coupled with low readout similarity is catastrophic for both knowledge transfer and retention. Conversely, the opposite scenario is relatively benign. Our analysis further reveals that task-dependent activity gating improves knowledge retention at the expense of transfer, while task-dependent plasticity gating does not affect either retention or transfer performance at the over-parameterized limit. In contrast, weight regularization based on the Fisher information metric significantly improves retention, regardless of task similarity, without compromising transfer performance. Nevertheless, its diagonal approximation and regularization in the Euclidean space are much less robust against task similarity. We demonstrate consistent results in a permuted MNIST task with latent variables. Overall, this work provides insights into when continual learning is difficult and how to mitigate it.
[ "['Naoki Hiratani']" ]
null
null
2405.20237
null
null
http://arxiv.org/pdf/2405.20237v1
2024-05-30T16:40:28Z
2024-05-30T16:40:28Z
Training-efficient density quantum machine learning
Quantum machine learning requires powerful, flexible and efficiently trainable models to be successful in solving challenging problems. In this work, we present density quantum neural networks, a learning model incorporating randomisation over a set of trainable unitaries. These models generalise quantum neural networks using parameterised quantum circuits, and allow a trade-off between expressibility and efficient trainability, particularly on quantum hardware. We demonstrate the flexibility of the formalism by applying it to two recently proposed model families. The first are commuting-block quantum neural networks (QNNs) which are efficiently trainable but may be limited in expressibility. The second are orthogonal (Hamming-weight preserving) quantum neural networks which provide well-defined and interpretable transformations on data but are challenging to train at scale on quantum devices. Density commuting QNNs improve capacity with minimal gradient complexity overhead, and density orthogonal neural networks admit a quadratic-to-constant gradient query advantage with minimal to no performance loss. We conduct numerical experiments on synthetic translationally invariant data and MNIST image data with hyperparameter optimisation to support our findings. Finally, we discuss the connection to post-variational quantum neural networks, measurement-based quantum machine learning and the dropout mechanism.
[ "['Brian Coyle' 'El Amine Cherrat' 'Nishant Jain' 'Natansh Mathur'\n 'Snehal Raj' 'Skander Kazdaghli' 'Iordanis Kerenidis']" ]
null
null
2405.20245
null
null
http://arxiv.org/pdf/2405.20245v1
2024-05-30T16:54:42Z
2024-05-30T16:54:42Z
Retrieval Augmented Structured Generation: Business Document Information Extraction As Tool Use
Business Document Information Extraction (BDIE) is the problem of transforming a blob of unstructured information (raw text, scanned documents, etc.) into a structured format that downstream systems can parse and use. It has two main tasks: Key-Information Extraction (KIE) and Line Items Recognition (LIR). In this paper, we argue that BDIE is best modeled as a Tool Use problem, where the tools are these downstream systems. We then present Retrieval Augmented Structured Generation (RASG), a novel general framework for BDIE that achieves state of the art (SOTA) results on both KIE and LIR tasks on BDIE benchmarks. The contributions of this paper are threefold: (1) We show, with ablation benchmarks, that Large Language Models (LLMs) with RASG are already competitive with or surpass current SOTA Large Multimodal Models (LMMs) without RASG on BDIE benchmarks. (2) We propose a new metric class for Line Items Recognition, General Line Items Recognition Metric (GLIRM), that is more aligned with practical BDIE use cases compared to existing metrics, such as ANLS*, DocILE, and GriTS. (3) We provide a heuristic algorithm for backcalculating bounding boxes of predicted line items and tables without the need for vision encoders. Finally, we claim that, while LMMs might sometimes offer marginal performance benefits, LLMs + RASG is oftentimes superior given real-world applications and constraints of BDIE.
[ "['Franz Louis Cesista' 'Rui Aguiar' 'Jason Kim' 'Paolo Acilo']" ]
null
null
2405.20247
null
null
http://arxiv.org/pdf/2405.20247v3
2024-06-05T07:52:07Z
2024-05-30T16:58:34Z
KerasCV and KerasNLP: Vision and Language Power-Ups
We present the Keras domain packages KerasCV and KerasNLP, extensions of the Keras API for Computer Vision and Natural Language Processing workflows, capable of running on either JAX, TensorFlow, or PyTorch. These domain packages are designed to enable fast experimentation, with a focus on ease-of-use and performance. We adopt a modular, layered design: at the library's lowest level of abstraction, we provide building blocks for creating models and data preprocessing pipelines, and at the library's highest level of abstraction, we provide pretrained "task" models for popular architectures such as Stable Diffusion, YOLOv8, GPT2, BERT, Mistral, CLIP, Gemma, T5, etc. Task models have built-in preprocessing, pretrained weights, and can be fine-tuned on raw inputs. To enable efficient training, we support XLA compilation for all models, and run all preprocessing via a compiled graph of TensorFlow operations using the tf.data API. The libraries are fully open-source (Apache 2.0 license) and available on GitHub.
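A representative usage example: loading a pretrained task model with built-in preprocessing and generating from a raw string prompt (the preset name follows KerasNLP's published conventions):

```python
import keras_nlp

# Load a pretrained task model; preprocessing is built in, so raw strings
# can be passed directly. The backend (JAX, TensorFlow, or PyTorch) is
# whichever Keras is configured to use.
gpt2_lm = keras_nlp.models.GPT2CausalLM.from_preset("gpt2_base_en")
output = gpt2_lm.generate("The Keras ecosystem", max_length=40)
print(output)
```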
[ "['Matthew Watson' 'Divyashree Shivakumar Sreepathihalli'\n 'Francois Chollet' 'Martin Gorner' 'Kiranbir Sodhia' 'Ramesh Sampath'\n 'Tirth Patel' 'Haifeng Jin' 'Neel Kovelamudi' 'Gabriel Rasskin'\n 'Samaneh Saadat' 'Luke Wood' 'Chen Qian' 'Jonathan Bischof' 'Ian Stenbit'\n 'Abheesht Sharma' 'Anshuman Mishra']" ]
null
null
2405.20250
null
null
http://arxiv.org/pdf/2405.20250v2
2024-06-06T15:31:08Z
2024-05-30T17:02:18Z
Entropy annealing for policy mirror descent in continuous time and space
Entropy regularization has been extensively used in policy optimization algorithms to regularize the optimization landscape and accelerate convergence; however, it comes at the cost of introducing an additional regularization bias. This work quantifies the impact of entropy regularization on the convergence of policy gradient methods for stochastic exit time control problems. We analyze a continuous-time policy mirror descent dynamics, which updates the policy based on the gradient of an entropy-regularized value function and adjusts the strength of entropy regularization as the algorithm progresses. We prove that with a fixed entropy level, the dynamics converges exponentially to the optimal solution of the regularized problem. We further show that when the entropy level decays at suitable polynomial rates, the annealed flow converges to the solution of the unregularized problem at a rate of $\mathcal{O}(1/S)$ for discrete action spaces and, under suitable conditions, at a rate of $\mathcal{O}(1/\sqrt{S})$ for general action spaces, with $S$ being the gradient flow time. This paper explains how entropy regularization improves policy optimization, even with the true gradient, from the perspective of convergence rate.
[ "['Deven Sethi' 'David Šiška' 'Yufei Zhang']" ]
null
null
2405.20271
null
null
http://arxiv.org/pdf/2405.20271v1
2024-05-30T17:26:02Z
2024-05-30T17:26:02Z
ETHER: Efficient Finetuning of Large-Scale Models with Hyperplane Reflections
Parameter-efficient finetuning (PEFT) has become ubiquitous to adapt foundation models to downstream task requirements while retaining their generalization ability. However, the amount of additionally introduced parameters and compute for successful adaptation and hyperparameter searches can explode quickly, especially when deployed at scale to serve numerous individual requests. To ensure effective, parameter-efficient, and hyperparameter-robust adaptation, we propose the ETHER transformation family, which performs Efficient fineTuning via HypErplane Reflections. By design, ETHER transformations require a minimal number of parameters, are less likely to deteriorate model performance, and exhibit robustness to hyperparameter and learning rate choices. In particular, we introduce ETHER and its relaxation ETHER+, which match or outperform existing PEFT methods with significantly fewer parameters ($\sim 10$-$100$ times lower than LoRA or OFT) across multiple image synthesis and natural language tasks without exhaustive hyperparameter tuning. Finally, we investigate the recent emphasis on Hyperspherical Energy retention for adaptation and raise questions on its practical utility. The code is available at https://github.com/mwbini/ether.
[ "['Massimo Bini' 'Karsten Roth' 'Zeynep Akata' 'Anna Khoreva']" ]
null
null
2405.20272
null
null
http://arxiv.org/pdf/2405.20272v1
2024-05-30T17:27:44Z
2024-05-30T17:27:44Z
Reconstruction Attacks on Machine Unlearning: Simple Models are Vulnerable
Machine unlearning is motivated by the desire for data autonomy: a person can request to have their data's influence removed from deployed models, and those models should be updated as if they were retrained without the person's data. We show that, counter-intuitively, these updates expose individuals to high-accuracy reconstruction attacks which allow the attacker to recover their data in its entirety, even when the original models are so simple that privacy risk might not otherwise have been a concern. We show how to mount a near-perfect attack on the deleted data point from linear regression models. We then generalize our attack to other loss functions and architectures, and empirically demonstrate the effectiveness of our attacks across a wide range of datasets (capturing both tabular and image data). Our work highlights that privacy risk is significant even for extremely simple model classes when individuals can request deletion of their data from the model.
[ "['Martin Bertran' 'Shuai Tang' 'Michael Kearns' 'Jamie Morgenstern'\n 'Aaron Roth' 'Zhiwei Steven Wu']" ]
null
null
2405.20274
null
null
http://arxiv.org/pdf/2405.20274v1
2024-05-30T17:29:15Z
2024-05-30T17:29:15Z
ROAST: Review-level Opinion Aspect Sentiment Target Joint Detection
Aspect-Based Sentiment Analysis (ABSA) has experienced tremendous expansion and diversity due to various shared tasks spanning several languages and fields and organized via SemEval workshops and Germeval. Nonetheless, a few shortcomings still need to be addressed, such as the lack of low-resource language evaluations and the emphasis on sentence-level analysis. To thoroughly assess ABSA techniques in the context of complete reviews, this research presents a novel task, Review-Level Opinion Aspect Sentiment Target (ROAST). ROAST seeks to close the gap between sentence-level and text-level ABSA by identifying every ABSA constituent at the review level. We extend the available datasets to enable ROAST, addressing the drawbacks noted in previous research by incorporating low-resource languages, numerous languages, and a variety of topics. Through this effort, ABSA research will be able to cover more ground and get a deeper comprehension of the task and its practical application in a variety of languages and domains (https://github.com/RiTUAL-UH/ROAST-ABSA).
[ "['Siva Uday Sampreeth Chebolu' 'Franck Dernoncourt' 'Nedim Lipka'\n 'Thamar Solorio']" ]
null
null
2405.20278
null
null
http://arxiv.org/pdf/2405.20278v2
2024-07-11T07:55:14Z
2024-05-30T17:32:46Z
Length independent generalization bounds for deep SSM architectures
Many state-of-the-art models trained on long-range sequences, for example S4, S5 or LRU, are made of sequential blocks combining State-Space Models (SSMs) with neural networks. In this paper we provide a PAC bound that holds for this kind of architecture with stable SSM blocks and does not depend on the length of the input sequence. Imposing stability of the SSM blocks is a standard practice in the literature, and it is known to help performance. Our results provide a theoretical justification for the use of stable SSM blocks, as the proposed PAC bound decreases as the degree of stability of the SSM blocks increases.
[ "['Dániel Rácz' 'Mihály Petreczky' 'Bálint Daróczy']" ]
null
null
2405.20287
null
null
http://arxiv.org/pdf/2405.20287v1
2024-05-30T17:39:15Z
2024-05-30T17:39:15Z
Flexible SE(2) graph neural networks with applications to PDE surrogates
This paper presents a novel approach for constructing graph neural networks equivariant to 2D rotations and translations and leveraging them as PDE surrogates on non-gridded domains. We show that aligning the representations with the principal axis allows us to sidestep many constraints while preserving SE(2) equivariance. By applying our model as a surrogate for fluid flow simulations and conducting thorough benchmarks against non-equivariant models, we demonstrate significant gains in terms of both data efficiency and accuracy.
[ "['Maria Bånkestad' 'Olof Mogren' 'Aleksis Pirinen']" ]
null
null
2405.20289
null
null
http://arxiv.org/pdf/2405.20289v1
2024-05-30T17:40:11Z
2024-05-30T17:40:11Z
DITTO-2: Distilled Diffusion Inference-Time T-Optimization for Music Generation
Controllable music generation methods are critical for human-centered AI-based music creation, but are currently limited by speed, quality, and control design trade-offs. Diffusion Inference-Time T-optimization (DITTO), in particular, offers state-of-the-art results, but is over 10x slower than real-time, limiting practical use. We propose Distilled Diffusion Inference-Time T-Optimization (DITTO-2), a new method to speed up inference-time optimization-based control and unlock faster-than-real-time generation for a wide variety of applications such as music inpainting, outpainting, intensity, melody, and musical structure control. Our method works by (1) distilling a pre-trained diffusion model for fast sampling via an efficient, modified consistency or consistency trajectory distillation process, (2) performing inference-time optimization using our distilled model with one-step sampling as an efficient surrogate optimization task, and (3) running a final multi-step sampling generation (decoding) using our estimated noise latents for best-quality, fast, controllable generation. Through thorough evaluation, we find our method not only speeds up generation by over 10-20x but also improves control adherence and generation quality. Furthermore, we apply our approach to a new application of maximizing text adherence (CLAP score) and show we can convert an unconditional diffusion model without text inputs into a model that yields state-of-the-art text control. Sound examples can be found at https://ditto-music.github.io/ditto2/.
[ "['Zachary Novack' 'Julian McAuley' 'Taylor Berg-Kirkpatrick'\n 'Nicholas Bryan']" ]
null
null
2405.20291
null
null
http://arxiv.org/pdf/2405.20291v1
2024-05-30T17:41:32Z
2024-05-30T17:41:32Z
Unveiling and Mitigating Backdoor Vulnerabilities based on Unlearning Weight Changes and Backdoor Activeness
The security threat of backdoor attacks is a central concern for deep neural networks (DNNs). Recently, in settings without poisoned data, unlearning the model with clean data and then learning a pruning mask has contributed to backdoor defense. Additionally, vanilla fine-tuning with those clean data can help recover the lost clean accuracy. However, the behavior of clean unlearning is still under-explored, and vanilla fine-tuning can unintentionally reintroduce the backdoor effect. In this work, we first investigate model unlearning from the perspective of weight changes and gradient norms, and find two interesting observations in the backdoored model: 1) the weight changes between poison and clean unlearning are positively correlated, making it possible for us to identify the backdoored-related neurons without using poisoned data; 2) the neurons of the backdoored model are more active (i.e., larger changes in gradient norm) than those in the clean model, suggesting the need to suppress the gradient norm during fine-tuning. Then, we propose an effective two-stage defense method. In the first stage, an efficient Neuron Weight Change (NWC)-based Backdoor Reinitialization is proposed based on observation 1). In the second stage, based on observation 2), we design an Activeness-Aware Fine-Tuning to replace the vanilla fine-tuning. Extensive experiments, involving eight backdoor attacks on three benchmark datasets, demonstrate the superior performance of our proposed method compared to recent state-of-the-art backdoor defense approaches.
[ "['Weilin Lin' 'Li Liu' 'Shaokui Wei' 'Jianze Li' 'Hui Xiong']" ]
null
null
2405.20304
null
null
http://arxiv.org/pdf/2405.20304v1
2024-05-30T17:50:04Z
2024-05-30T17:50:04Z
Group Robust Preference Optimization in Reward-free RLHF
Adapting large language models (LLMs) for specific tasks usually involves fine-tuning through reinforcement learning with human feedback (RLHF) on preference data. While these data often come from diverse labelers' groups (e.g., different demographics, ethnicities, company teams, etc.), traditional RLHF approaches adopt a "one-size-fits-all" approach, i.e., they indiscriminately assume and optimize a single preference model, thus not being robust to unique characteristics and needs of the various groups. To address this limitation, we propose a novel Group Robust Preference Optimization (GRPO) method to align LLMs to individual groups' preferences robustly. Our approach builds upon reward-free direct preference optimization methods, but unlike previous approaches, it seeks a robust policy which maximizes the worst-case group performance. To achieve this, GRPO adaptively and sequentially weights the importance of different groups, prioritizing groups with worse cumulative loss. We theoretically study the feasibility of GRPO and analyze its convergence for the log-linear policy class. By fine-tuning LLMs with GRPO using diverse group-based global opinion data, we significantly improved performance for the worst-performing groups, reduced loss imbalances across groups, and improved probability accuracies compared to non-robust baselines.
[ "['Shyam Sundhar Ramesh' 'Yifan Hu' 'Iason Chaimalas' 'Viraj Mehta'\n 'Pier Giuseppe Sessa' 'Haitham Bou Ammar' 'Ilija Bogunovic']" ]
null
null
2405.20309
null
null
http://arxiv.org/pdf/2405.20309v1
2024-05-30T17:52:36Z
2024-05-30T17:52:36Z
Large Language Models Can Self-Improve At Web Agent Tasks
Training models to act as agents that can effectively navigate and perform actions in a complex environment, such as a web browser, has typically been challenging due to a lack of training data. Large language models (LLMs) have recently demonstrated some capability to navigate novel environments as agents in a zero-shot or few-shot fashion, purely guided by natural language instructions as prompts. Recent research has also demonstrated that LLMs have the capability to exceed their base performance through self-improvement, i.e., fine-tuning on data generated by the model itself. In this work, we explore the extent to which LLMs can self-improve their performance as agents in long-horizon tasks in a complex environment using the WebArena benchmark. In WebArena, an agent must autonomously navigate and perform actions on web pages to achieve a specified objective. We explore fine-tuning on three distinct synthetic training data mixtures and achieve a 31% improvement in task completion rate over the base model on the WebArena benchmark through a self-improvement procedure. We additionally contribute novel evaluation metrics for assessing the performance, robustness, capabilities, and quality of trajectories of our fine-tuned agent models to a greater degree than simple, aggregate-level benchmark scores currently used to measure self-improvement.
[ "['Ajay Patel' 'Markus Hofmarcher' 'Claudiu Leoveanu-Condrei'\n 'Marius-Constantin Dinu' 'Chris Callison-Burch' 'Sepp Hochreiter']" ]
null
null
2405.20313
null
null
http://arxiv.org/pdf/2405.20313v1
2024-05-30T17:53:50Z
2024-05-30T17:53:50Z
Sequence-Augmented SE(3)-Flow Matching For Conditional Protein Backbone Generation
Proteins are essential for almost all biological processes and derive their diverse functions from complex 3D structures, which are in turn determined by their amino acid sequences. In this paper, we exploit the rich biological inductive bias of amino acid sequences and introduce FoldFlow-2, a novel sequence-conditioned SE(3)-equivariant flow matching model for protein structure generation. FoldFlow-2 presents substantial new architectural features over the previous FoldFlow family of models including a protein large language model to encode sequence, a new multi-modal fusion trunk that combines structure and sequence representations, and a geometric transformer-based decoder. To increase diversity and novelty of generated samples -- crucial for de novo drug design -- we train FoldFlow-2 at scale on a new dataset that is an order of magnitude larger than PDB datasets of prior works, containing both known proteins in PDB and high-quality synthetic structures achieved through filtering. We further demonstrate the ability to align FoldFlow-2 to arbitrary rewards, e.g. increasing secondary structure diversity, by introducing a Reinforced Finetuning (ReFT) objective. We empirically observe that FoldFlow-2 outperforms previous state-of-the-art protein structure-based generative models, improving over RFDiffusion in terms of unconditional generation across all metrics including designability, diversity, and novelty across all protein lengths, as well as exhibiting generalization on the task of equilibrium conformation sampling. Finally, we demonstrate that a fine-tuned FoldFlow-2 makes progress on challenging conditional design tasks such as designing scaffolds for the VHH nanobody.
[ "['Guillaume Huguet' 'James Vuckovic' 'Kilian Fatras'\n 'Eric Thibodeau-Laufer' 'Pablo Lemos' 'Riashat Islam' 'Cheng-Hao Liu'\n 'Jarrid Rector-Brooks' 'Tara Akhound-Sadegh' 'Michael Bronstein'\n 'Alexander Tong' 'Avishek Joey Bose']" ]
null
null
2405.20318
null
null
http://arxiv.org/pdf/2405.20318v1
2024-05-30T17:55:28Z
2024-05-30T17:55:28Z
CausalQuest: Collecting Natural Causal Questions for AI Agents
Humans have an innate drive to seek out causality. Whether fuelled by curiosity or specific goals, we constantly question why things happen, how they are interconnected, and many other related phenomena. To develop AI agents capable of addressing this natural human quest for causality, we urgently need a comprehensive dataset of natural causal questions. Unfortunately, existing datasets either contain only artificially-crafted questions that do not reflect real AI usage scenarios or have limited coverage of questions from specific sources. To address this gap, we present CausalQuest, a dataset of 13,500 naturally occurring questions sourced from social networks, search engines, and AI assistants. We formalize the definition of causal questions and establish a taxonomy for finer-grained classification. Through a combined effort of human annotators and large language models (LLMs), we carefully label the dataset. We find that 42% of the questions humans ask are indeed causal, with the majority seeking to understand the causes behind given effects. Using this dataset, we train efficient classifiers (up to 2.85B parameters) for the binary task of identifying causal questions, achieving high performance with F1 scores of up to 0.877. We conclude with a rich set of future research directions that can build upon our data and models.
[ "['Roberto Ceraolo' 'Dmitrii Kharlapenko' 'Amélie Reymond' 'Rada Mihalcea'\n 'Mrinmaya Sachan' 'Bernhard Schölkopf' 'Zhijing Jin']" ]
null
null
2405.20320
null
null
http://arxiv.org/pdf/2405.20320v1
2024-05-30T17:56:04Z
2024-05-30T17:56:04Z
Improving the Training of Rectified Flows
Diffusion models have shown great promise for image and video generation, but sampling from state-of-the-art models requires expensive numerical integration of a generative ODE. One approach for tackling this problem is rectified flows, which iteratively learn smooth ODE paths that are less susceptible to truncation error. However, rectified flows still require a relatively large number of function evaluations (NFEs). In this work, we propose improved techniques for training rectified flows, allowing them to compete with knowledge distillation methods even in the low NFE setting. Our main insight is that under realistic settings, a single iteration of the Reflow algorithm for training rectified flows is sufficient to learn nearly straight trajectories; hence, the current practice of using multiple Reflow iterations is unnecessary. We thus propose techniques to improve one-round training of rectified flows, including a U-shaped timestep distribution and LPIPS-Huber premetric. With these techniques, we improve the FID of the previous 2-rectified flow by up to 72% in the 1 NFE setting on CIFAR-10. On ImageNet 64$\times$64, our improved rectified flow outperforms the state-of-the-art distillation methods such as consistency distillation and progressive distillation in both one-step and two-step settings and rivals the performance of improved consistency training (iCT) in FID. Code is available at https://github.com/sangyun884/rfpp.
[ "['Sangyun Lee' 'Zinan Lin' 'Giulia Fanti']" ]
null
null
2405.20321
null
null
http://arxiv.org/pdf/2405.20321v1
2024-05-30T17:56:54Z
2024-05-30T17:56:54Z
Vision-based Manipulation from Single Human Video with Open-World Object Graphs
We present an object-centric approach to empower robots to learn vision-based manipulation skills from human videos. We investigate the problem of imitating robot manipulation from a single human video in the open-world setting, where a robot must learn to manipulate novel objects from one video demonstration. We introduce ORION, an algorithm that tackles the problem by extracting an object-centric manipulation plan from a single RGB-D video and deriving a policy that conditions on the extracted plan. Our method enables the robot to learn from videos captured by daily mobile devices such as an iPad and generalize the policies to deployment environments with varying visual backgrounds, camera angles, spatial layouts, and novel object instances. We systematically evaluate our method on both short-horizon and long-horizon tasks, demonstrating the efficacy of ORION in learning from a single human video in the open world. Videos can be found in the project website https://ut-austin-rpl.github.io/ORION-release.
[ "['Yifeng Zhu' 'Arisrei Lim' 'Peter Stone' 'Yuke Zhu']" ]
null
null
2405.20324
null
null
http://arxiv.org/pdf/2405.20324v1
2024-05-30T17:57:26Z
2024-05-30T17:57:26Z
Don't drop your samples! Coherence-aware training benefits Conditional diffusion
Conditional diffusion models are powerful generative models that can leverage various types of conditional information, such as class labels, segmentation masks, or text captions. However, in many real-world scenarios, conditional information may be noisy or unreliable due to human annotation errors or weak alignment. In this paper, we propose Coherence-Aware Diffusion (CAD), a novel method that integrates coherence in conditional information into diffusion models, allowing them to learn from noisy annotations without discarding data. We assume that each data point has an associated coherence score that reflects the quality of the conditional information. We then condition the diffusion model on both the conditional information and the coherence score. In this way, the model learns to ignore or discount the conditioning when the coherence is low. We show that CAD is theoretically sound and empirically effective on various conditional generation tasks. Moreover, we show that leveraging coherence generates realistic and diverse samples that respect conditional information better than models trained on cleaned datasets where samples with low coherence have been discarded.
[ "['Nicolas Dufour' 'Victor Besnier' 'Vicky Kalogeiton' 'David Picard']" ]
null
null
2405.20331
null
null
http://arxiv.org/pdf/2405.20331v1
2024-05-30T17:59:04Z
2024-05-30T17:59:04Z
CoSy: Evaluating Textual Explanations of Neurons
A crucial aspect of understanding the complex nature of Deep Neural Networks (DNNs) is the ability to explain learned concepts within their latent representations. While various methods exist to connect neurons to textual descriptions of human-understandable concepts, evaluating the quality of these explanation methods presents a major challenge in the field due to a lack of unified, general-purpose quantitative evaluation. In this work, we introduce CoSy (Concept Synthesis) -- a novel, architecture-agnostic framework to evaluate the quality of textual explanations for latent neurons. Given textual explanations, our proposed framework leverages a generative model conditioned on textual input to create data points representing the textual explanation. Then, the neuron's response to these explanation data points is compared with the response to control data points, providing a quality estimate of the given explanation. We ensure the reliability of our proposed framework in a series of meta-evaluation experiments and demonstrate practical value through insights from benchmarking various concept-based textual explanation methods for Computer Vision tasks, showing that tested explanation methods significantly differ in quality.
[ "['Laura Kopf' 'Philine Lou Bommer' 'Anna Hedström' 'Sebastian Lapuschkin'\n 'Marina M. -C. Höhne' 'Kirill Bykov']" ]
null
null
2405.20341
null
null
http://arxiv.org/pdf/2405.20341v1
2024-05-30T17:59:51Z
2024-05-30T17:59:51Z
From Zero to Hero: Cold-Start Anomaly Detection
When first deploying an anomaly detection system, e.g., to detect out-of-scope queries in chatbots, there are no observed data, making data-driven approaches ineffective. Zero-shot anomaly detection methods offer a solution to such "cold-start" cases, but unfortunately they are often not accurate enough. This paper studies the realistic but underexplored cold-start setting where an anomaly detection model is initialized using zero-shot guidance, but subsequently receives a small number of contaminated observations (i.e., observations that may include anomalies). The goal is to make efficient use of both the zero-shot guidance and the observations. We propose ColdFusion, a method that effectively adapts the zero-shot anomaly detector to contaminated observations. To support future development of this new setting, we propose an evaluation suite consisting of evaluation protocols and metrics.
[ "['Tal Reiss' 'George Kour' 'Naama Zwerdling' 'Ateret Anaby-Tavor'\n 'Yedid Hoshen']" ]
null
null
2405.20343
null
null
http://arxiv.org/pdf/2405.20343v2
2024-06-13T06:36:38Z
2024-05-30T17:59:54Z
Unique3D: High-Quality and Efficient 3D Mesh Generation from a Single Image
In this work, we introduce Unique3D, a novel image-to-3D framework for efficiently generating high-quality 3D meshes from single-view images, featuring state-of-the-art generation fidelity and strong generalizability. Previous methods based on Score Distillation Sampling (SDS) can produce diversified 3D results by distilling 3D knowledge from large 2D diffusion models, but they usually suffer from long per-case optimization times and inconsistency issues. Recent works address the problem and generate better 3D results either by finetuning a multi-view diffusion model or training a fast feed-forward model. However, they still lack intricate textures and complex geometries due to inconsistency and limited generated resolution. To simultaneously achieve high fidelity, consistency, and efficiency in single image-to-3D, we propose a novel framework Unique3D that includes a multi-view diffusion model with a corresponding normal diffusion model to generate multi-view images with their normal maps, a multi-level upscale process to progressively improve the resolution of generated orthographic multi-views, as well as an instant and consistent mesh reconstruction algorithm called ISOMER, which fully integrates the color and geometric priors into mesh results. Extensive experiments demonstrate that our Unique3D significantly outperforms other image-to-3D baselines in terms of geometric and textural details.
[ "['Kailu Wu' 'Fangfu Liu' 'Zhihan Cai' 'Runjie Yan' 'Hanyang Wang'\n 'Yating Hu' 'Yueqi Duan' 'Kaisheng Ma']" ]
null
null
2405.20347
null
null
http://arxiv.org/pdf/2405.20347v1
2024-05-23T17:33:32Z
2024-05-23T17:33:32Z
Small Language Models for Application Interactions: A Case Study
We study the efficacy of Small Language Models (SLMs) in facilitating application usage through natural language interactions. Our focus here is on a particular internal application used in Microsoft for cloud supply chain fulfilment. Our experiments show that small models can outperform much larger ones in terms of both accuracy and running time, even when fine-tuned on small datasets. Alongside these results, we also highlight SLM-based system design considerations.
[ "['Beibin Li' 'Yi Zhang' 'Sébastien Bubeck' 'Jeevan Pathuri'\n 'Ishai Menache']" ]
null
null
2405.20348
null
null
http://arxiv.org/pdf/2405.20348v1
2024-05-24T15:25:09Z
2024-05-24T15:25:09Z
Personalized Adapter for Large Meteorology Model on Devices: Towards Weather Foundation Models
This paper demonstrates that pre-trained language models (PLMs) are strong foundation models for on-device meteorological variable modeling. We present LM-Weather, a generic approach for taming PLMs that have learned massive sequential knowledge from the universe of natural language databases, to acquire an immediate capability to obtain highly customized models for heterogeneous meteorological data on devices while keeping high efficiency. Concretely, we introduce a lightweight personalized adapter into PLMs and endow it with weather-pattern awareness. During communication between clients and the server, low-rank-based transmission is performed to effectively fuse global knowledge among devices while maintaining high communication efficiency and ensuring privacy. Experiments on real-world datasets show that LM-Weather outperforms the state-of-the-art results by a large margin across various tasks (e.g., forecasting and imputation at different scales). We provide extensive and in-depth analysis experiments, which verify that LM-Weather can (1) indeed leverage sequential knowledge from natural language to accurately handle meteorological sequences, (2) allow each device to obtain highly customized models under significant heterogeneity, and (3) generalize under data-limited and out-of-distribution (OOD) scenarios.
[ "['Shengchao Chen' 'Guodong Long' 'Jing Jiang' 'Chengqi Zhang']" ]
null
null
2405.20350
null
null
http://arxiv.org/pdf/2405.20350v1
2024-05-27T22:51:58Z
2024-05-27T22:51:58Z
Linear Function Approximation as a Computationally Efficient Method to Solve Classical Reinforcement Learning Challenges
Neural Network based approximations of the Value function make up the core of leading Policy Based methods such as Trust Region Policy Optimization (TRPO) and Proximal Policy Optimization (PPO). While this adds significant value when dealing with very complex environments, we note that in environments with sufficiently small state and action spaces, a computationally expensive Neural Network architecture offers marginal improvement over simpler Value approximation methods. We present an implementation of Natural Actor Critic algorithms with actor updates through Natural Policy Gradient methods. This paper proposes that Natural Policy Gradient (NPG) methods with Linear Function Approximation as a paradigm for value approximation may surpass the performance and speed of Neural Network based models such as TRPO and PPO within these environments. On the Reinforcement Learning benchmarks Cart Pole and Acrobot, we observe that our algorithm trains much faster than complex neural network architectures and obtains equivalent or better results. This allows us to recommend the use of NPG methods with Linear Function Approximation over TRPO and PPO for both traditional and sparse-reward low-dimensional problems.
[ "['Hari Srikanth']" ]
null
null
2405.20351
null
null
http://arxiv.org/pdf/2405.20351v1
2024-05-28T06:59:16Z
2024-05-28T06:59:16Z
ADR-BC: Adversarial Density Weighted Regression Behavior Cloning
Typically, traditional Imitation Learning (IL) methods first shape a reward or Q function and then use this shaped function within a reinforcement learning (RL) framework to optimize the empirical policy. However, if the shaped reward/Q function does not adequately represent the ground-truth reward/Q function, updating the policy within a multi-step RL framework may result in cumulative bias, further impacting policy learning. Although utilizing behavior cloning (BC) to learn a policy by directly mimicking a few demonstrations in a single-step updating manner can avoid cumulative bias, BC tends to greedily imitate demonstrated actions, limiting its capacity to generalize to unseen state-action pairs. To address these challenges, we propose ADR-BC, which aims to enhance behavior cloning through augmented density-based action support, optimizing the policy with this augmented support. Specifically, the objective of ADR-BC has a similar physical meaning: matching the expert distribution while diverging from the sub-optimal distribution. Therefore, ADR-BC can achieve more robust expert distribution matching. Meanwhile, as a one-step behavior cloning framework, ADR-BC avoids the cumulative bias associated with multi-step RL frameworks. To validate the performance of ADR-BC, we conduct extensive experiments. Specifically, ADR-BC showcases a 10.5% improvement over the previous state-of-the-art (SOTA) generalized IL baseline, CEIL, across all tasks in the Gym-Mujoco domain. Additionally, it achieves an 89.5% improvement over Implicit Q Learning (IQL) using real rewards across all tasks in the Adroit and Kitchen domains. Finally, we conduct extensive ablations to further demonstrate the effectiveness of ADR-BC.
[ "['Ziqi Zhang' 'Zifeng Zhuang' 'Donglin Wang' 'Jingzehua Xu' 'Miao Liu'\n 'Shuai Zhang']" ]
null
null
2405.20354
null
null
http://arxiv.org/pdf/2405.20354v1
2024-05-30T02:55:49Z
2024-05-30T02:55:49Z
Literature Filtering for Systematic Reviews with Transformers
Identifying critical research within the growing body of academic work is an essential element of quality research. Systematic review processes, used in evidence-based medicine, formalise this as a procedure that must be followed in a research program. However, it comes with an increasing burden in terms of the time required to identify the important articles of research for a given topic. In this work, we develop a method for building a general-purpose filtering system that matches a research question, posed as a natural language description of the required content, against a candidate set of articles obtained via the application of broad search terms. Our results demonstrate that transformer models, pre-trained on biomedical literature and then fine-tuned for the specific task, offer a promising solution to this problem. The model can remove large volumes of irrelevant articles for most research questions.
[ "['John Hawkins' 'David Tivey']" ]
null
null
2405.20355
null
null
http://arxiv.org/pdf/2405.20355v1
2024-05-30T05:39:27Z
2024-05-30T05:39:27Z
Enhancing Adversarial Robustness in SNNs with Sparse Gradients
Spiking Neural Networks (SNNs) have attracted great attention for their energy-efficient operations and biologically inspired structures, offering potential advantages over Artificial Neural Networks (ANNs) in terms of energy efficiency and interpretability. Nonetheless, similar to ANNs, the robustness of SNNs remains a challenge, especially when facing adversarial attacks. Existing techniques, whether adapted from ANNs or specifically designed for SNNs, exhibit limitations in training SNNs or defending against strong attacks. In this paper, we propose a novel approach to enhance the robustness of SNNs through gradient sparsity regularization. We observe that SNNs exhibit greater resilience to random perturbations compared to adversarial perturbations, even at larger scales. Motivated by this, we aim to narrow the gap between SNNs under adversarial and random perturbations, thereby improving their overall robustness. To achieve this, we theoretically prove that this performance gap is upper bounded by the gradient sparsity of the probability associated with the true label concerning the input image, laying the groundwork for a practical strategy to train robust SNNs by regularizing the gradient sparsity. We validate the effectiveness of our approach through extensive experiments on both image-based and event-based datasets. The results demonstrate notable improvements in the robustness of SNNs. Our work highlights the importance of gradient sparsity in SNNs and its role in enhancing robustness.
[ "['Yujia Liu' 'Tong Bu' 'Jianhao Ding' 'Zecheng Hao' 'Tiejun Huang'\n 'Zhaofei Yu']" ]
null
null
2405.20358
null
null
http://arxiv.org/pdf/2405.20358v2
2024-07-09T03:13:12Z
2024-05-30T07:13:08Z
Medication Recommendation via Dual Molecular Modalities and Multi-Substructure Enhancement
Medication recommendation combines patient medical history with biomedical knowledge to assist doctors in determining medication combinations more accurately and safely. Existing works based on molecular knowledge neglect the 3D geometric structure of molecules and fail to learn the high-dimensional information of medications, leading to structural confusion. Additionally, these works do not extract key substructures from a single patient visit, and thus fail to identify the medication molecules suitable for the current visit. To address the above limitations, we propose a bimodal molecular recommendation framework named BiMoRec, which introduces 3D molecular structures to obtain atomic 3D coordinates and edge indices, overcoming the inherent lack of high-dimensional molecular information in 2D molecular structures. To retain the fast training and prediction efficiency of the recommendation system, we use bimodal graph contrastive pretraining to maximize the mutual information between the two molecular modalities, achieving the fusion of 2D and 3D molecular graphs and re-evaluating substructures at the visit level. Specifically, we use deep learning networks to construct a pretraining method that acquires 2D and 3D molecular structure representations and substructure representations, and obtain mutual information through contrastive learning. We then generate fused molecular representations using the trained GNN module and re-determine the relevance of substructure representations in combination with the patient's clinical history. Finally, we generate the final medication combination based on the extracted substructure sequences. Our implementation on the MIMIC-III and MIMIC-IV datasets demonstrates that our method achieves state-of-the-art performance. Compared to the second-best baseline, our model improves accuracy by 2.07%, with DDI at the same level as the baseline.
[ "['Shi Mu' 'Shunpan Liang' 'Xiang Li']" ]
null
null
2405.20384
null
null
http://arxiv.org/pdf/2405.20384v1
2024-05-30T18:00:06Z
2024-05-30T18:00:06Z
Recurrent neural network wave functions for Rydberg atom arrays on kagome lattice
Rydberg atom array experiments have demonstrated the ability to act as powerful quantum simulators, preparing strongly-correlated phases of matter which are challenging to study for conventional computer simulations. A key direction has been the implementation of interactions on frustrated geometries, in an effort to prepare exotic many-body states such as spin liquids and glasses. In this paper, we apply two-dimensional recurrent neural network (RNN) wave functions to study the ground states of Rydberg atom arrays on the kagome lattice. We implement an annealing scheme to find the RNN variational parameters in regions of the phase diagram where exotic phases may occur, corresponding to rough optimization landscapes. For Rydberg atom array Hamiltonians studied previously on the kagome lattice, our RNN ground states show no evidence of exotic spin liquid or emergent glassy behavior. In the latter case, we argue that the presence of a non-zero Edwards-Anderson order parameter is an artifact of the long autocorrelation times experienced with quantum Monte Carlo simulations. This result emphasizes the utility of autoregressive models, such as RNNs, to explore Rydberg atom array physics on frustrated lattices and beyond.
[ "['Mohamed Hibat-Allah' 'Ejaaz Merali' 'Giacomo Torlai' 'Roger G Melko'\n 'Juan Carrasquilla']" ]
null
null
2405.20390
null
null
http://arxiv.org/pdf/2405.20390v1
2024-05-30T18:01:14Z
2024-05-30T18:01:14Z
Quantitative Convergences of Lie Group Momentum Optimizers
Explicit, momentum-based dynamics that optimize functions defined on Lie groups can be constructed via variational optimization and momentum trivialization. Structure-preserving time discretizations can then turn these dynamics into optimization algorithms. This article investigates two types of discretization: Lie Heavy-Ball, which is a known splitting scheme, and Lie NAG-SC, which is newly proposed. Their convergence rates are explicitly quantified under $L$-smoothness and local strong convexity assumptions. Lie NAG-SC provides acceleration over the momentumless case, i.e. Riemannian gradient descent, but Lie Heavy-Ball does not. When compared to existing accelerated optimizers for general manifolds, both Lie Heavy-Ball and Lie NAG-SC are computationally cheaper and easier to implement, thanks to their utilization of group structure. Only a gradient oracle and the exponential map are required, not the logarithm map or parallel transport, which are computationally costly.
[ "['Lingkai Kong' 'Molei Tao']" ]