Schema (field: dtype):
categories: string
doi: string
id: string
year: float64
venue: string
link: string
updated: string
published: string
title: string
abstract: string
authors: list

id: 2406.01782
link: http://arxiv.org/pdf/2406.01782v1
updated: 2024-06-03T20:56:12Z
published: 2024-06-03T20:56:12Z
title: Multi-agent assignment via state augmented reinforcement learning
We address the conflicting requirements of a multi-agent assignment problem through constrained reinforcement learning, emphasizing the inadequacy of standard regularization techniques for this purpose. Instead, we resort to a state augmentation approach in which the oscillation of dual variables is exploited by agents to alternate between tasks. In addition, we coordinate the actions of the multiple agents acting on their local states through these multipliers, which are gossiped through a communication network, eliminating the need to access other agent states. By these means, we propose a distributed multi-agent assignment protocol with theoretical feasibility guarantees that we corroborate in a numerical monitoring experiment.
[ "['Leopoldo Agorio' 'Sean Van Alen' 'Miguel Calvo-Fullana'\n 'Santiago Paternain' 'Juan Andres Bazerque']" ]

id: 2406.01789
link: http://arxiv.org/pdf/2406.01789v1
updated: 2024-06-03T21:13:02Z
published: 2024-06-03T21:13:02Z
title: AI-based Classification of Customer Support Tickets: State of the Art and Implementation with AutoML
Automation of support ticket classification is crucial to improving customer support performance and shortening resolution time for customer inquiries. This research aims to test the applicability of automated machine learning (AutoML) as a technology to train a machine learning model (ML model) that can classify support tickets. The model evaluation conducted in this research shows that AutoML can be used to train ML models with good classification performance. Moreover, this paper fills a research gap by providing new insights into developing AI solutions without a dedicated professional by utilizing AutoML, which makes this technology more accessible for companies without specialized AI departments and staff.
[ "['Mario Truss' 'Stephan Boehm']" ]

id: 2406.01793
link: http://arxiv.org/pdf/2406.01793v1
updated: 2024-06-03T21:18:08Z
published: 2024-06-03T21:18:08Z
title: Towards the Transferability of Rewards Recovered via Regularized Inverse Reinforcement Learning
Inverse reinforcement learning (IRL) aims to infer a reward from expert demonstrations, motivated by the idea that the reward, rather than the policy, is the most succinct and transferable description of a task [Ng et al., 2000]. However, the reward corresponding to an optimal policy is not unique, making it unclear if an IRL-learned reward is transferable to new transition laws in the sense that its optimal policy aligns with the optimal policy corresponding to the expert's true reward. Past work has addressed this problem only under the assumption of full access to the expert's policy, guaranteeing transferability when learning from two experts with the same reward but different transition laws that satisfy a specific rank condition [Rolland et al., 2022]. In this work, we show that the conditions developed under full access to the expert's policy cannot guarantee transferability in the more practical scenario where we have access only to demonstrations of the expert. Instead of a binary rank condition, we propose principal angles as a more refined measure of similarity and dissimilarity between transition laws. Based on this, we then establish two key results: 1) a sufficient condition for transferability to any transition laws when learning from at least two experts with sufficiently different transition laws, and 2) a sufficient condition for transferability to local changes in the transition law when learning from a single expert. Furthermore, we also provide a probably approximately correct (PAC) algorithm and an end-to-end analysis for learning transferable rewards from demonstrations of multiple experts.
[ "['Andreas Schlaginhaufen' 'Maryam Kamgarpour']" ]

id: 2406.01799
link: http://arxiv.org/pdf/2406.01799v2
updated: 2024-06-06T13:29:48Z
published: 2024-06-03T21:40:59Z
title: Online Control in Population Dynamics
The study of population dynamics originated with early sociological works but has since extended into many fields, including biology, epidemiology, evolutionary game theory, and economics. Most studies on population dynamics focus on the problem of prediction rather than control. Existing mathematical models for control in population dynamics are often restricted to specific, noise-free dynamics, while real-world population changes can be complex and adversarial. To address this gap, we propose a new framework based on the paradigm of online control. We first characterize a set of linear dynamical systems that can naturally model evolving populations. We then give an efficient gradient-based controller for these systems, with near-optimal regret bounds with respect to a broad class of linear policies. Our empirical evaluations demonstrate the effectiveness of the proposed algorithm for control in population dynamics even for non-linear models such as SIR and replicator dynamics.
[ "['Noah Golowich' 'Elad Hazan' 'Zhou Lu' 'Dhruv Rohatgi' 'Y. Jennifer Sun']" ]

id: 2406.01801
link: http://arxiv.org/pdf/2406.01801v1
updated: 2024-06-03T21:42:06Z
published: 2024-06-03T21:42:06Z
title: Fearless Stochasticity in Expectation Propagation
Expectation propagation (EP) is a family of algorithms for performing approximate inference in probabilistic models. The updates of EP involve the evaluation of moments -- expectations of certain functions -- which can be estimated from Monte Carlo (MC) samples. However, the updates are not robust to MC noise when performed naively, and various prior works have attempted to address this issue in different ways. In this work, we provide a novel perspective on the moment-matching updates of EP; namely, that they perform natural-gradient-based optimisation of a variational objective. We use this insight to motivate two new EP variants, with updates that are particularly well-suited to MC estimation; they remain stable and are most sample-efficient when estimated with just a single sample. These new variants combine the benefits of their predecessors and address key weaknesses. In particular, they are easier to tune, offer an improved speed-accuracy trade-off, and do not rely on the use of debiasing estimators. We demonstrate their efficacy on a variety of probabilistic inference tasks.
[ "['Jonathan So' 'Richard E. Turner']" ]

id: 2406.01805
link: http://arxiv.org/pdf/2406.01805v1
updated: 2024-06-03T21:51:13Z
published: 2024-06-03T21:51:13Z
title: TabMDA: Tabular Manifold Data Augmentation for Any Classifier using Transformers with In-context Subsetting
Tabular data is prevalent in many critical domains, yet it is often challenging to acquire in large quantities. This scarcity usually results in poor performance of machine learning models on such data. Data augmentation, a common strategy for performance improvement in vision and language tasks, typically underperforms for tabular data due to the lack of explicit symmetries in the input space. To overcome this challenge, we introduce TabMDA, a novel method for manifold data augmentation on tabular data. This method utilises a pre-trained in-context model, such as TabPFN, to map the data into a manifold space. TabMDA performs label-invariant transformations by encoding the data multiple times with varied contexts. This process explores the manifold of the underlying in-context models, thereby enlarging the training dataset. TabMDA is a training-free method, making it applicable to any classifier. We evaluate TabMDA on five standard classifiers and observe significant performance improvements across various tabular datasets. Our results demonstrate that TabMDA provides an effective way to leverage information from pre-trained in-context models to enhance the performance of downstream classifiers.
[ "['Andrei Margeloiu' 'Adrián Bazaga' 'Nikola Simidjievski' 'Pietro Liò'\n 'Mateja Jamnik']" ]

id: 2406.01808
link: http://arxiv.org/pdf/2406.01808v1
updated: 2024-06-03T21:59:21Z
published: 2024-06-03T21:59:21Z
title: In-Context Learning of Physical Properties: Few-Shot Adaptation to Out-of-Distribution Molecular Graphs
Large language models manifest the ability of few-shot adaptation to a sequence of provided examples. This behavior, known as in-context learning, allows for performing nontrivial machine learning tasks during inference only. In this work, we address the question: can we leverage in-context learning to predict out-of-distribution materials properties? However, this would not be possible for structure-property prediction tasks unless an effective method is found to pass atomic-level geometric features to the transformer model. To address this problem, we employ a compound model in which GPT-2 acts on the output of geometry-aware graph neural networks to adapt in-context information. To demonstrate our model's capabilities, we partition the QM9 dataset into sequences of molecules that share a common substructure and use them for in-context learning. This approach significantly improves the performance of the model on out-of-distribution examples, surpassing that of general graph neural network models.
[ "['Grzegorz Kaszuba' 'Amirhossein D. Naghdi' 'Dario Massa'\n 'Stefanos Papanikolaou' 'Andrzej Jaszkiewicz' 'Piotr Sankowski']" ]

id: 2406.01813
link: http://arxiv.org/pdf/2406.01813v1
updated: 2024-06-03T22:11:38Z
published: 2024-06-03T22:11:38Z
title: Diffusion Boosted Trees
Combining the merits of both denoising diffusion probabilistic models and gradient boosting, the diffusion boosting paradigm is introduced for tackling supervised learning problems. We develop Diffusion Boosted Trees (DBT), which can be viewed as both a new denoising diffusion generative model parameterized by decision trees (one single tree for each diffusion timestep), and a new boosting algorithm that combines the weak learners into a strong learner of conditional distributions without making explicit parametric assumptions on their density forms. We demonstrate through experiments the advantages of DBT over deep neural network-based diffusion models as well as the competence of DBT on real-world regression tasks, and present a business application (fraud detection) of DBT for classification on tabular data with the ability of learning to defer.
[ "['Xizewen Han' 'Mingyuan Zhou']" ]

id: 2406.01823
link: http://arxiv.org/pdf/2406.01823v1
updated: 2024-06-03T22:27:09Z
published: 2024-06-03T22:27:09Z
title: Causal Discovery with Fewer Conditional Independence Tests
Many questions in science center around the fundamental problem of understanding causal relationships. However, most constraint-based causal discovery algorithms, including the well-celebrated PC algorithm, often incur an exponential number of conditional independence (CI) tests, posing limitations in various applications. Addressing this, our work focuses on characterizing what can be learned about the underlying causal graph with a reduced number of CI tests. We show that it is possible to learn a coarser representation of the hidden causal graph with a polynomial number of tests. This coarser representation, named Causal Consistent Partition Graph (CCPG), comprises a partition of the vertices and a directed graph defined over its components. CCPG satisfies consistency of orientations and additional constraints which favor finer partitions. Furthermore, it reduces to the underlying causal graph when the causal graph is identifiable. As a consequence, our results offer the first efficient algorithm for recovering the true causal graph with a polynomial number of tests, in special cases where the causal graph is fully identifiable through observational data and potentially additional interventions.
[ "['Kirankumar Shiragur' 'Jiaqi Zhang' 'Caroline Uhler']" ]

id: 2406.01825
link: http://arxiv.org/pdf/2406.01825v2
updated: 2024-06-05T03:22:38Z
published: 2024-06-03T22:37:45Z
title: EMOE: Expansive Matching of Experts for Robust Uncertainty Based Rejection
Expansive Matching of Experts (EMOE) is a novel method that utilizes support-expanding, extrapolatory pseudo-labeling to improve prediction and uncertainty based rejection on out-of-distribution (OOD) points. We propose an expansive data augmentation technique that generates OOD instances in a latent space, and an empirical trial based approach to filter out augmented expansive points for pseudo-labeling. EMOE utilizes a diverse set of multiple base experts as pseudo-labelers on the augmented data to improve OOD performance through a shared MLP with multiple heads (one per expert). We demonstrate that EMOE achieves superior performance compared to state-of-the-art methods on tabular data.
[ "['Yunni Qu' 'James Wellnitz' 'Alexander Tropsha' 'Junier Oliva']" ]

id: 2406.01829
link: http://arxiv.org/pdf/2406.01829v1
updated: 2024-06-03T22:56:40Z
published: 2024-06-03T22:56:40Z
title: FacAID: A Transformer Model for Neuro-Symbolic Facade Reconstruction
We introduce a neuro-symbolic transformer-based model that converts flat, segmented facade structures into procedural definitions using a custom-designed split grammar. To facilitate this, we first develop a semi-complex split grammar tailored for architectural facades and then generate a dataset comprising facades alongside their corresponding procedural representations. This dataset is used to train our transformer model to convert segmented, flat facades into the procedural language of our grammar. During inference, the model applies this learned transformation to new facade segmentations, providing a procedural representation that users can adjust to generate varied facade designs. This method not only automates the conversion of static facade images into dynamic, editable procedural formats but also enhances the design flexibility, allowing for easy modifications and variations by architects and designers. Our approach sets a new standard in facade design by combining the precision of procedural generation with the adaptability of neuro-symbolic learning.
[ "['Aleksander Płocharski' 'Jan Swidzinski' 'Joanna Porter-Sobieraj'\n 'Przemyslaw Musialski']" ]

id: 2406.01833
link: http://arxiv.org/abs/2406.01833v2
updated: 2024-06-12T01:27:51Z
published: 2024-06-03T23:06:45Z
title: CAFO: Feature-Centric Explanation on Time Series Classification
In multivariate time series (MTS) classification, finding the important features (e.g., sensors) for model performance is crucial yet challenging due to the complex, high-dimensional nature of MTS data, intricate temporal dynamics, and the necessity for domain-specific interpretations. Current explanation methods for MTS mostly focus on time-centric explanations, apt for pinpointing important time periods but less effective in identifying key features. This limitation underscores the pressing need for a feature-centric approach, a vital yet often overlooked perspective that complements time-centric analysis. To bridge this gap, our study introduces a novel feature-centric explanation and evaluation framework for MTS, named CAFO (Channel Attention and Feature Orthogonalization). CAFO employs a convolution-based approach with channel attention mechanisms, incorporating a depth-wise separable channel attention module (DepCA) and a QR decomposition-based loss for promoting feature-wise orthogonality. We demonstrate that this orthogonalization enhances the separability of attention distributions, thereby refining and stabilizing the ranking of feature importance. This improvement in feature-wise ranking enhances our understanding of feature explainability in MTS. Furthermore, we develop metrics to evaluate global and class-specific feature importance. Our framework's efficacy is validated through extensive empirical analyses on two major public benchmarks and real-world datasets, both synthetic and self-collected, specifically designed to highlight class-wise discriminative features. The results confirm CAFO's robustness and informative capacity in assessing feature importance in MTS classification tasks. This study not only advances the understanding of feature-centric explanations in MTS but also sets a foundation for future explorations in feature-centric explanations.
[ "['Jaeho Kim' 'Seok-Ju Hahn' 'Yoontae Hwang' 'Junghye Lee' 'Seulki Lee']" ]

id: 2406.01838
link: http://arxiv.org/pdf/2406.01838v1
updated: 2024-06-03T23:10:35Z
published: 2024-06-03T23:10:35Z
title: Learning the Target Network in Function Space
We focus on the task of learning the value function in the reinforcement learning (RL) setting. This task is often solved by updating a pair of online and target networks while ensuring that the parameters of these two networks are equivalent. We propose Lookahead-Replicate (LR), a new value-function approximation algorithm that is agnostic to this parameter-space equivalence. Instead, the LR algorithm is designed to maintain an equivalence between the two networks in the function space. This value-based equivalence is obtained by employing a new target-network update. We show that LR leads to a convergent behavior in learning the value function. We also present empirical results demonstrating that LR-based target-network updates significantly improve deep RL on the Atari benchmark.
[ "['Kavosh Asadi' 'Yao Liu' 'Shoham Sabach' 'Ming Yin' 'Rasool Fakoor']" ]

id: 2406.01852
link: http://arxiv.org/pdf/2406.01852v3
updated: 2024-07-09T19:27:34Z
published: 2024-06-03T23:54:48Z
title: Non-uniformity is All You Need: Efficient and Timely Encrypted Traffic Classification With ECHO
With 95% of Internet traffic now encrypted, an effective approach to classifying this traffic is crucial for network security and management. This paper introduces ECHO -- a novel optimization process for ML/DL-based encrypted traffic classification. ECHO targets both classification time and memory utilization and incorporates two innovative techniques. The first component, HO (Hyperparameter Optimization of binnings), aims at creating efficient traffic representations. While previous research often uses representations that map packet sizes and packet arrival times to fixed-sized bins, we show that non-uniform binnings are significantly more efficient. These non-uniform binnings are derived by employing a hyperparameter optimization algorithm in the training stage. HO significantly improves accuracy given a required representation size, or, equivalently, achieves comparable accuracy using smaller representations. Then, we introduce EC (Early Classification of traffic), which enables faster classification using a cascade of classifiers adapted for different exit times, where classification is based on the level of confidence. EC reduces the average classification latency by up to 90%. Remarkably, this method not only maintains classification accuracy but also, in certain cases, improves it. Using three publicly available datasets, we demonstrate that the combined method, Early Classification with Hyperparameter Optimization (ECHO), leads to a significant improvement in classification efficiency.
[ "['Shilo Daum' 'Tal Shapira' 'Anat Bremler-Barr' 'David Hay']" ]

id: 2406.01853
link: http://arxiv.org/pdf/2406.01853v1
updated: 2024-06-03T23:55:20Z
published: 2024-06-03T23:55:20Z
title: Multi-Agent Reinforcement Learning Meets Leaf Sequencing in Radiotherapy
In contemporary radiotherapy planning (RTP), a key module, leaf sequencing, is predominantly addressed by optimization-based approaches. In this paper, we propose a novel deep reinforcement learning (DRL) model termed Reinforced Leaf Sequencer (RLS) in a multi-agent framework for leaf sequencing. The RLS model offers improvements to time-consuming iterative optimization steps via large-scale training and can control movement patterns through the design of reward mechanisms. We have conducted experiments on four datasets with four metrics and compared our model with a leading optimization sequencer. Our findings reveal that the proposed RLS model can achieve reduced fluence reconstruction errors and potentially faster convergence when integrated in an optimization planner. Additionally, RLS has shown promising results in a full artificial intelligence RTP pipeline. We hope this pioneering multi-agent RL leaf sequencer can foster future research on machine learning for RTP.
[ "['Riqiang Gao' 'Florin C. Ghesu' 'Simon Arberet' 'Shahab Basiri'\n 'Esa Kuusela' 'Martin Kraus' 'Dorin Comaniciu' 'Ali Kamen']" ]

id: 2406.01857
link: http://arxiv.org/pdf/2406.01857v1
updated: 2024-06-04T00:02:52Z
published: 2024-06-04T00:02:52Z
title: Neural Green's Operators for Parametric Partial Differential Equations
This work introduces neural Green's operators (NGOs), a novel neural operator network architecture that learns the solution operator for a parametric family of linear partial differential equations (PDEs). Our construction of NGOs is derived directly from the Green's formulation of such a solution operator. Similar to deep operator networks (DeepONets) and variationally mimetic operator networks (VarMiONs), NGOs constitute an expansion of the PDE solution in terms of basis functions that are returned from one sub-network, contracted with coefficients that are returned from another sub-network. However, in accordance with the Green's formulation, NGOs accept weighted averages of the input functions, rather than sampled values thereof, as is the case in DeepONets and VarMiONs. Application of NGOs to canonical linear parametric PDEs shows that, while they remain competitive with DeepONets, VarMiONs and Fourier neural operators when testing on data that lie within the training distribution, they robustly generalize when testing on finer-scale data generated outside of the training distribution. Furthermore, we show that the explicit representation of the Green's function that is returned by NGOs enables the construction of effective preconditioners for numerical solvers for PDEs.
[ "['Hugo Melchers' 'Joost Prins' 'Michael Abdelmalik']" ]

id: 2406.01870
link: http://arxiv.org/pdf/2406.01870v1
updated: 2024-06-04T00:45:37Z
published: 2024-06-04T00:45:37Z
title: Understanding Stochastic Natural Gradient Variational Inference
Stochastic natural gradient variational inference (NGVI) is a popular posterior inference method with applications in various probabilistic models. Despite its wide usage, little is known about the non-asymptotic convergence rate in the \emph{stochastic} setting. We aim to lessen this gap and provide a better understanding. For conjugate likelihoods, we prove the first $\mathcal{O}(\frac{1}{T})$ non-asymptotic convergence rate of stochastic NGVI. The complexity is no worse than stochastic gradient descent (aka black-box variational inference) and the rate likely has better constant dependency that leads to faster convergence in practice. For non-conjugate likelihoods, we show that stochastic NGVI with the canonical parameterization implicitly optimizes a non-convex objective. Thus, a global convergence rate of $\mathcal{O}(\frac{1}{T})$ is unlikely without some significant new understanding of optimizing the ELBO using natural gradients.
[ "['Kaiwen Wu' 'Jacob R. Gardner']" ]

id: 2406.01873
link: http://arxiv.org/pdf/2406.01873v2
updated: 2024-06-05T15:53:01Z
published: 2024-06-04T01:02:22Z
title: CR-UTP: Certified Robustness against Universal Text Perturbations on Large Language Models
It is imperative to ensure the stability of every prediction made by a language model; that is, a language model's prediction should remain consistent despite minor input variations, like word substitutions. In this paper, we investigate the problem of certifying a language model's robustness against Universal Text Perturbations (UTPs), which have been widely used in universal adversarial attacks and backdoor attacks. Existing certified robustness based on random smoothing has shown considerable promise in certifying the input-specific text perturbations (ISTPs), operating under the assumption that any random alteration of a sample's clean or adversarial words would negate the impact of sample-wise perturbations. However, with UTPs, masking only the adversarial words can eliminate the attack. A naive method is to simply increase the masking ratio and the likelihood of masking attack tokens, but it leads to a significant reduction in both certified accuracy and the certified radius due to input corruption by extensive masking. To solve this challenge, we introduce a novel approach, the superior prompt search method, designed to identify a superior prompt that maintains higher certified accuracy under extensive masking. Additionally, we theoretically motivate why ensembles are a particularly suitable choice as base prompts for random smoothing. We denote this method the superior prompt ensembling technique. We also empirically confirm this technique, obtaining state-of-the-art results in multiple settings. These methodologies, for the first time, enable high certified accuracy against both UTPs and ISTPs. The source code of CR-UTP is available at \url{https://github.com/UCFML-Research/CR-UTP}.
[ "['Qian Lou' 'Xin Liang' 'Jiaqi Xue' 'Yancheng Zhang' 'Rui Xie'\n 'Mengxin Zheng']" ]

id: 2406.01876
link: http://arxiv.org/pdf/2406.01876v1
updated: 2024-06-04T01:08:00Z
published: 2024-06-04T01:08:00Z
title: GRAM: Generative Retrieval Augmented Matching of Data Schemas in the Context of Data Security
Schema matching constitutes a pivotal phase in the data ingestion process for contemporary database systems. Its objective is to discern pairwise similarities between two sets of attributes, each associated with a distinct data table. This challenge emerges at the initial stages of data analytics, such as when incorporating a third-party table into existing databases to inform business insights. Given its significance in the realm of database systems, schema matching has been under investigation since the 2000s. This study revisits this foundational problem within the context of large language models. Adhering to increasingly stringent data security policies, our focus lies on the zero-shot and few-shot scenarios: the model should analyze only a minimal amount of customer data to execute the matching task, contrasting with the conventional approach of scrutinizing the entire data table. We emphasize that the zero-shot or few-shot assumption is imperative to safeguard the identity and privacy of customer data, even at the potential cost of accuracy. The capability to accurately match attributes under such stringent requirements distinguishes our work from previous literature in this domain.
[ "['Xuanqing Liu' 'Luyang Kong' 'Runhui Wang' 'Patrick Song' 'Austin Nevins'\n 'Henrik Johnson' 'Nimish Amlathe' 'Davor Golac']" ]

id: 2406.01895
link: http://arxiv.org/pdf/2406.01895v1
updated: 2024-06-04T02:00:07Z
published: 2024-06-04T02:00:07Z
title: Explicitly Encoding Structural Symmetry is Key to Length Generalization in Arithmetic Tasks
Despite the success of Transformers on language understanding, code generation, and logical reasoning, they still fail to generalize over length on basic arithmetic tasks such as addition and multiplication. A major reason behind this failure is the vast difference in structure between numbers and text; for example, numbers are typically parsed from right to left, and there is a correspondence between digits at the same position across different numbers. In contrast, for text, such symmetries are quite unnatural. In this work, we propose to encode these semantics explicitly into the model via modified number formatting and custom positional encodings. Empirically, our method allows a Transformer trained on numbers with at most 5 digits for addition and multiplication to generalize up to 50-digit numbers, without using additional data for longer sequences. We further demonstrate that traditional absolute positional encodings (APE) fail to generalize to longer sequences, even when trained with augmented data that captures task symmetries. To elucidate the importance of explicitly encoding structure, we prove that explicit incorporation of structure via positional encodings is necessary for out-of-distribution generalization. Finally, we pinpoint other challenges inherent to length generalization beyond capturing symmetries, in particular complexity of the underlying task, and propose changes in the training distribution to address them.
[ "['Mahdi Sabbaghi' 'George Pappas' 'Hamed Hassani' 'Surbhi Goel']" ]

id: 2406.01899
link: http://arxiv.org/pdf/2406.01899v1
updated: 2024-06-04T02:04:09Z
published: 2024-06-04T02:04:09Z
title: Cross-Domain Graph Data Scaling: A Showcase with Diffusion Models
Models for natural language and images benefit from data scaling behavior: the more data fed into the model, the better they perform. This 'better with more' phenomenon enables the effectiveness of large-scale pre-training on vast amounts of data. However, current graph pre-training methods struggle to scale up data due to heterogeneity across graphs. To achieve effective data scaling, we aim to develop a general model that is able to capture diverse data patterns of graphs and can be utilized to adaptively help the downstream tasks. To this end, we propose UniAug, a universal graph structure augmentor built on a diffusion model. We first pre-train a discrete diffusion model on thousands of graphs across domains to learn the graph structural patterns. In the downstream phase, we provide adaptive enhancement by conducting graph structure augmentation with the help of the pre-trained diffusion model via guided generation. By leveraging the pre-trained diffusion model for structure augmentation, we consistently achieve performance improvements across various downstream tasks in a plug-and-play manner. To the best of our knowledge, this study represents the first demonstration of a data-scaling graph structure augmentor on graphs across domains.
[ "['Wenzhuo Tang' 'Haitao Mao' 'Danial Dervovic' 'Ivan Brugere'\n 'Saumitra Mishra' 'Yuying Xie' 'Jiliang Tang']" ]

id: 2406.01901
link: http://arxiv.org/pdf/2406.01901v1
updated: 2024-06-04T02:12:27Z
published: 2024-06-04T02:12:27Z
title: Bifurcated Generative Flow Networks
Generative Flow Networks (GFlowNets), a new family of probabilistic samplers, have recently emerged as a promising framework for learning stochastic policies that generate high-quality and diverse objects proportionally to their rewards. However, existing GFlowNets often suffer from low data efficiency due to the direct parameterization of edge flows or reliance on backward policies that may struggle to scale up to large action spaces. In this paper, we introduce Bifurcated GFlowNets (BN), a novel approach that employs a bifurcated architecture to factorize the flows into separate representations for state flows and edge-based flow allocation. This factorization enables BN to learn more efficiently from data and better handle large-scale problems while maintaining the convergence guarantee. Through extensive experiments on standard evaluation benchmarks, we demonstrate that BN significantly improves learning efficiency and effectiveness compared to strong baselines.
[ "['Chunhui Li' 'Cheng-Hao Liu' 'Dianbo Liu' 'Qingpeng Cai' 'Ling Pan']" ]

id: 2406.01908
link: http://arxiv.org/pdf/2406.01908v2
updated: 2024-06-06T09:07:37Z
published: 2024-06-04T02:39:42Z
title: PDHG-Unrolled Learning-to-Optimize Method for Large-Scale Linear Programming
Solving large-scale linear programming (LP) problems is an important task in various areas such as communication networks, power systems, finance and logistics. Recently, two distinct approaches have emerged to expedite LP solving: (i) First-order methods (FOMs); (ii) Learning to optimize (L2O). In this work, we propose an FOM-unrolled neural network (NN) called PDHG-Net, and propose a two-stage L2O method to solve large-scale LP problems. The new architecture PDHG-Net is designed by unrolling the recently emerged PDHG method into a neural network, combined with channel-expansion techniques borrowed from graph neural networks. We prove that the proposed PDHG-Net can recover the PDHG algorithm and can thus approximate optimal solutions of LP instances with a polynomial number of neurons. We propose a two-stage inference approach: first use PDHG-Net to generate an approximate solution, and then apply the PDHG algorithm to further improve the solution. Experiments show that our approach can significantly accelerate LP solving, achieving up to a 3$\times$ speedup compared to FOMs for large-scale LP problems.
[ "['Bingheng Li' 'Linxin Yang' 'Yupeng Chen' 'Senmiao Wang' 'Qian Chen'\n 'Haitao Mao' 'Yao Ma' 'Akang Wang' 'Tian Ding' 'Jiliang Tang' 'Ruoyu Sun']" ]

id: 2406.01909
link: http://arxiv.org/pdf/2406.01909v1
updated: 2024-06-04T02:39:48Z
published: 2024-06-04T02:39:48Z
title: A Global Geometric Analysis of Maximal Coding Rate Reduction
The maximal coding rate reduction (MCR$^2$) objective for learning structured and compact deep representations is drawing increasing attention, especially after its recent usage in the derivation of fully explainable and highly effective deep network architectures. However, it lacks a complete theoretical justification: only the properties of its global optima are known, and its global landscape has not been studied. In this work, we give a complete characterization of the properties of all its local and global optima, as well as other types of critical points. Specifically, we show that each (local or global) maximizer of the MCR$^2$ problem corresponds to a low-dimensional, discriminative, and diverse representation, and furthermore, each critical point of the objective is either a local maximizer or a strict saddle point. Such a favorable landscape makes MCR$^2$ a natural choice of objective for learning diverse and discriminative representations via first-order optimization methods. To validate our theoretical findings, we conduct extensive experiments on both synthetic and real data sets.
[ "['Peng Wang' 'Huikang Liu' 'Druv Pai' 'Yaodong Yu' 'Zhihui Zhu' 'Qing Qu'\n 'Yi Ma']" ]

id: 2406.01913
link: http://arxiv.org/pdf/2406.01913v1
updated: 2024-06-04T02:50:19Z
published: 2024-06-04T02:50:19Z
title: Generating Synthetic Net Load Data with Physics-informed Diffusion Model
This paper presents a novel physics-informed diffusion model for generating synthetic net load data, addressing the challenges of data scarcity and privacy concerns. The proposed framework embeds physical models within denoising networks, offering a versatile approach that can be readily generalized to unforeseen scenarios. A conditional denoising neural network is designed to jointly train the parameters of the transition kernel of the diffusion model and the parameters of the physics-informed function. Utilizing the real-world smart meter data from Pecan Street, we validate the proposed method and conduct a thorough numerical study comparing its performance with state-of-the-art generative models, including generative adversarial networks, variational autoencoders, normalizing flows, and a well-calibrated baseline diffusion model. A comprehensive set of evaluation metrics is used to assess the accuracy and diversity of the generated synthetic net load data. The numerical study results demonstrate that the proposed physics-informed diffusion model outperforms state-of-the-art models across all quantitative metrics, yielding at least 20% improvement.
[ "['Shaorong Zhang' 'Yuanbin Cheng' 'Nanpeng Yu']" ]

id: 2406.01933
link: http://arxiv.org/pdf/2406.01933v1
updated: 2024-06-04T03:35:25Z
published: 2024-06-04T03:35:25Z
title: Orthogonal Causal Calibration
Estimates of causal parameters such as conditional average treatment effects and conditional quantile treatment effects play an important role in real-world decision making. Given this importance, one should ensure these estimators are calibrated. While there is a rich literature on calibrating estimators of non-causal parameters, very few methods have been derived for calibrating estimators of causal parameters, or more generally estimators of quantities involving nuisance parameters. In this work, we provide a general framework for calibrating predictors involving nuisance estimation. We consider a notion of calibration defined with respect to an arbitrary, nuisance-dependent loss $\ell$, under which we say an estimator $\theta$ is calibrated if its predictions cannot be changed on any level set to decrease loss. We prove generic upper bounds on the calibration error of any causal parameter estimate $\theta$ with respect to any loss $\ell$ using a concept called Neyman Orthogonality. Our bounds involve two decoupled terms - one measuring the error in estimating the unknown nuisance parameters, and the other representing the calibration error in a hypothetical world where the learned nuisance estimates were true. We use our bound to analyze the convergence of two sample splitting algorithms for causal calibration. One algorithm, which applies to universally orthogonalizable loss functions, transforms the data into generalized pseudo-outcomes and applies an off-the-shelf calibration procedure. The other algorithm, which applies to conditionally orthogonalizable loss functions, extends the classical uniform mass binning algorithm to include nuisance estimation. Our results are exceedingly general, showing that essentially any existing calibration algorithm can be used in causal settings, with additional loss only arising from errors in nuisance estimation.
[ "['Justin Whitehouse' 'Christopher Jung' 'Vasilis Syrgkanis' 'Bryan Wilder'\n 'Zhiwei Steven Wu']" ]

id: 2406.01939
link: http://arxiv.org/pdf/2406.01939v1
updated: 2024-06-04T03:48:08Z
published: 2024-06-04T03:48:08Z
title: Speeding up Policy Simulation in Supply Chain RL
Simulating a single trajectory of a dynamical system under some state-dependent policy is a core bottleneck in policy optimization algorithms. The many inherently serial policy evaluations that must be performed in a single simulation constitute the bulk of this bottleneck. To wit, in applying policy optimization to supply chain optimization (SCO) problems, simulating a single month of a supply chain can take several hours. We present an iterative algorithm for policy simulation, which we dub Picard Iteration. This scheme carefully assigns policy evaluation tasks to independent processes. Within an iteration, a single process evaluates the policy only on its assigned tasks while assuming a certain 'cached' evaluation for other tasks; the cache is updated at the end of the iteration. Implemented on GPUs, this scheme admits batched evaluation of the policy on a single trajectory. We prove that the structure afforded by many SCO problems allows convergence in a small number of iterations, independent of the horizon. We demonstrate practical speedups of 400x on large-scale SCO problems even with a single GPU, and also demonstrate practical efficacy in other RL environments.
[ "['Vivek Farias' 'Joren Gijsbrechts' 'Aryan Khojandi' 'Tianyi Peng'\n 'Andrew Zheng']" ]

id: 2406.01940
link: http://arxiv.org/pdf/2406.01940v1
updated: 2024-06-04T03:48:08Z
published: 2024-06-04T03:48:08Z
title: Process-Driven Autoformalization in Lean 4
Autoformalization, the conversion of natural language mathematics into formal languages, offers significant potential for advancing mathematical reasoning. However, existing efforts are limited to formal languages with substantial online corpora and struggle to keep pace with rapidly evolving languages like Lean 4. To bridge this gap, we propose a new benchmark \textbf{Form}alization for \textbf{L}ean~\textbf{4} (\textbf{name}) designed to evaluate the autoformalization capabilities of large language models (LLMs). This benchmark encompasses a comprehensive assessment of questions, answers, formal statements, and proofs. Additionally, we introduce a \textbf{P}rocess-\textbf{S}upervised \textbf{V}erifier (\textbf{PSV}) model that leverages the precise feedback from Lean 4 compilers to enhance autoformalization. Our experiments demonstrate that the PSV method improves autoformalization, enabling higher accuracy using less filtered training data. Furthermore, when fine-tuned with data containing detailed process information, PSV can leverage the data more effectively, leading to more significant improvements in autoformalization for Lean 4. Our dataset and code are available at \url{https://github.com/rookie-joe/PDA}.
[ "['Jianqiao Lu' 'Zhengying Liu' 'Yingjia Wan' 'Yinya Huang' 'Haiming Wang'\n 'Zhicheng Yang' 'Jing Tang' 'Zhijiang Guo']" ]

id: 2406.01947
link: http://arxiv.org/pdf/2406.01947v1
updated: 2024-06-04T03:58:58Z
published: 2024-06-04T03:58:58Z
title: Data-Driven Approaches for Thrust Prediction in Underwater Flapping Fin Propulsion Systems
Flapping-fin underwater vehicle propulsion systems provide an alternative to propeller-driven systems in situations that involve a constrained environment or require high maneuverability. Testing new configurations through experiments or high-fidelity simulations is an expensive process, slowing development of new systems. This is especially true when introducing new fin geometries. In this work, we propose machine learning approaches for thrust prediction given the system's fin geometries and kinematics. We introduce data-efficient fin shape parameterization strategies that enable our network to predict thrust profiles for unseen fin geometries given limited fin shapes in input data. In addition to faster development of systems, generalizable surrogate models offer fast, accurate predictions that could be used on an unmanned underwater vehicle control system.
[ "['Julian Lee' 'Kamal Viswanath' 'Alisha Sharma' 'Jason Geder'\n 'Ravi Ramamurti' 'Marius D. Pruessner']" ]

id: 2406.01950
link: http://arxiv.org/abs/2406.01950v1
updated: 2024-06-04T04:03:07Z
published: 2024-06-04T04:03:07Z
title: A Comparative Study of Sampling Methods with Cross-Validation in the FedHome Framework
This paper presents a comparative study of sampling methods within the FedHome framework, designed for personalized in-home health monitoring. FedHome leverages federated learning (FL) and generative convolutional autoencoders (GCAE) to train models on decentralized edge devices while prioritizing data privacy. A notable challenge in this domain is the class imbalance in health data, where critical events such as falls are underrepresented, adversely affecting model performance. To address this, the research evaluates six oversampling techniques using Stratified K-fold cross-validation: SMOTE, Borderline-SMOTE, Random OverSampler, SMOTE-Tomek, SVM-SMOTE, and SMOTE-ENN. These methods are tested on FedHome's public implementation over 200 training rounds with and without stratified K-fold cross-validation. The findings indicate that SMOTE-ENN achieves the most consistent test accuracy, with a standard deviation range of 0.0167-0.0176, demonstrating stable performance compared to other samplers. In contrast, SMOTE and SVM-SMOTE exhibit higher variability in performance, as reflected by their wider standard deviation ranges of 0.0157-0.0180 and 0.0155-0.0180, respectively. Similarly, the Random OverSampler method shows a significant deviation range of 0.0155-0.0176. SMOTE-Tomek, with a deviation range of 0.0160-0.0175, also shows greater stability but not as much as SMOTE-ENN. This finding highlights the potential of SMOTE-ENN to enhance the reliability and accuracy of personalized health monitoring systems within the FedHome framework.
[ "['Arash Ahmadi' 'Sarah S. Sharif' 'Yaser M. Banad']" ]

id: 2406.01959
link: http://arxiv.org/pdf/2406.01959v1
updated: 2024-06-04T04:39:51Z
published: 2024-06-04T04:39:51Z
title: Adaptive Variance Reduction for Stochastic Optimization under Weaker Assumptions
This paper explores adaptive variance reduction methods for stochastic optimization based on the STORM technique. Existing adaptive extensions of STORM rely on strong assumptions like bounded gradients and bounded function values, or suffer an additional $\mathcal{O}(\log T)$ term in the convergence rate. To address these limitations, we introduce a novel adaptive STORM method that achieves an optimal convergence rate of $\mathcal{O}(T^{-1/3})$ for non-convex functions with our newly designed learning rate strategy. Compared with existing approaches, our method requires weaker assumptions and attains the optimal convergence rate without the additional $\mathcal{O}(\log T)$ term. We also extend the proposed technique to stochastic compositional optimization, obtaining the same optimal rate of $\mathcal{O}(T^{-1/3})$. Furthermore, we investigate the non-convex finite-sum problem and develop another innovative adaptive variance reduction method that achieves an optimal convergence rate of $\mathcal{O}(n^{1/4} T^{-1/2})$, where $n$ represents the number of component functions. Numerical experiments across various tasks validate the effectiveness of our method.
[ "['Wei Jiang' 'Sifan Yang' 'Yibo Wang' 'Lijun Zhang']" ]

id: 2406.01960
link: http://arxiv.org/pdf/2406.01960v1
updated: 2024-06-04T04:43:30Z
published: 2024-06-04T04:43:30Z
title: Certifiably Byzantine-Robust Federated Conformal Prediction
Conformal prediction has shown impressive capacity in constructing statistically rigorous prediction sets for machine learning models with exchangeable data samples. The siloed datasets, coupled with the escalating privacy concerns related to local data sharing, have inspired recent innovations extending conformal prediction into federated environments with distributed data samples. However, this framework for distributed uncertainty quantification is susceptible to Byzantine failures. A minor subset of malicious clients can significantly compromise the practicality of coverage guarantees. To address this vulnerability, we introduce a novel framework Rob-FCP, which executes robust federated conformal prediction, effectively countering malicious clients capable of reporting arbitrary statistics to the conformal calibration process. We theoretically provide the conformal coverage bound of Rob-FCP in the Byzantine setting and show that the coverage of Rob-FCP is asymptotically close to the desired coverage level. We also propose a malicious client number estimator to tackle a more challenging setting where the number of malicious clients is unknown to the defender and theoretically show its effectiveness. We empirically demonstrate the robustness of Rob-FCP against diverse proportions of malicious clients under a variety of Byzantine attacks on five standard benchmark and real-world healthcare datasets.
[ "['Mintong Kang' 'Zhen Lin' 'Jimeng Sun' 'Cao Xiao' 'Bo Li']" ]

id: 2406.01967
link: http://arxiv.org/pdf/2406.01967v1
updated: 2024-06-04T04:53:05Z
published: 2024-06-04T04:53:05Z
title: DrEureka: Language Model Guided Sim-To-Real Transfer
Transferring policies learned in simulation to the real world is a promising strategy for acquiring robot skills at scale. However, sim-to-real approaches typically rely on manual design and tuning of the task reward function as well as the simulation physics parameters, rendering the process slow and human-labor intensive. In this paper, we investigate using Large Language Models (LLMs) to automate and accelerate sim-to-real design. Our LLM-guided sim-to-real approach, DrEureka, requires only the physics simulation for the target task and automatically constructs suitable reward functions and domain randomization distributions to support real-world transfer. We first demonstrate that our approach can discover sim-to-real configurations that are competitive with existing human-designed ones on quadruped locomotion and dexterous manipulation tasks. Then, we showcase that our approach is capable of solving novel robot tasks, such as quadruped balancing and walking atop a yoga ball, without iterative manual design.
[ "['Yecheng Jason Ma' 'William Liang' 'Hung-Ju Wang' 'Sam Wang' 'Yuke Zhu'\n 'Linxi Fan' 'Osbert Bastani' 'Dinesh Jayaraman']" ]

id: 2406.01969
link: http://arxiv.org/pdf/2406.01969v1
updated: 2024-06-04T05:05:27Z
published: 2024-06-04T05:05:27Z
title: Multiway Multislice PHATE: Visualizing Hidden Dynamics of RNNs through Training
Recurrent neural networks (RNNs) are a widely used tool for sequential data analysis; however, they are still often seen as black boxes of computation. Understanding the functional principles of these networks is critical to developing ideal model architectures and optimization strategies. Previous studies typically only emphasize the network representation post-training, overlooking their evolution process throughout training. Here, we present Multiway Multislice PHATE (MM-PHATE), a novel method for visualizing the evolution of RNNs' hidden states. MM-PHATE is a graph-based embedding using structured kernels across the multiple dimensions spanned by RNNs: time, training epoch, and units. We demonstrate on various datasets that MM-PHATE uniquely preserves hidden representation community structure among units and identifies information processing and compression phases during training. The embedding allows users to look under the hood of RNNs across training and provides an intuitive and comprehensive strategy for understanding the network's internal dynamics and drawing conclusions, e.g., on why and how one model outperforms another or how a specific architecture might impact an RNN's learning ability.
[ "['Jiancheng Xie' 'Lou C. Kohler Voinov' 'Noga Mudrik' 'Gal Mishne'\n 'Adam Charles']" ]

id: 2406.01975
link: http://arxiv.org/pdf/2406.01975v1
updated: 2024-06-04T05:19:32Z
published: 2024-06-04T05:19:32Z
title: Can Dense Connectivity Benefit Outlier Detection? An Odyssey with NAS
Recent advances in Out-of-Distribution (OOD) Detection are the driving force behind safe and reliable deployment of Convolutional Neural Networks (CNNs) in real world applications. However, existing studies focus on OOD detection through confidence score and deep generative model-based methods, without considering the impact of DNN structures, especially dense connectivity in architecture fabrications. In addition, existing outlier detection approaches exhibit high variance in generalization performance, lacking stability and confidence in evaluating and ranking different outlier detectors. In this work, we propose a novel paradigm, Dense Connectivity Search of Outlier Detector (DCSOD), that automatically explores the dense connectivity of CNN architectures on the near-OOD detection task using Neural Architecture Search (NAS). We introduce a hierarchical search space containing versatile convolution operators and dense connectivity, allowing a flexible exploration of CNN architectures with diverse connectivity patterns. To improve the quality of evaluation on OOD detection during search, we propose evolving distillation based on our multi-view feature learning explanation. Evolving distillation stabilizes training for OOD detection evaluation, thus improving the quality of the search. We thoroughly examine DCSOD on CIFAR benchmarks under the OOD detection protocol. Experimental results show that DCSOD achieves remarkable performance over widely used architectures and previous NAS baselines. Notably, DCSOD achieves state-of-the-art (SOTA) performance on the CIFAR benchmark, with an AUROC improvement of $\sim$1.0%.
[ "['Hao Fu' 'Tunhou Zhang' 'Hai Li' 'Yiran Chen']" ]

id: 2406.01977
link: http://arxiv.org/pdf/2406.01977v1
updated: 2024-06-04T05:30:16Z
published: 2024-06-04T05:30:16Z
title: What Improves the Generalization of Graph Transformers? A Theoretical Dive into the Self-attention and Positional Encoding
Graph Transformers, which incorporate self-attention and positional encoding, have recently emerged as a powerful architecture for various graph learning tasks. Despite their impressive performance, the complex non-convex interactions across layers and the recursive graph structure have made it challenging to establish a theoretical foundation for learning and generalization. This study introduces the first theoretical investigation of a shallow Graph Transformer for semi-supervised node classification, comprising a self-attention layer with relative positional encoding and a two-layer perceptron. Focusing on a graph data model with discriminative nodes that determine node labels and non-discriminative nodes that are class-irrelevant, we characterize the sample complexity required to achieve a desirable generalization error by training with stochastic gradient descent (SGD). This paper provides a quantitative characterization of the sample complexity and the number of iterations for convergence, as functions of the fraction of discriminative nodes, the dominant patterns, and the initial model errors. Furthermore, we demonstrate that self-attention and positional encoding enhance generalization by making the attention map sparse and promoting the core neighborhood during training, which explains the superior feature representation of Graph Transformers. Our theoretical results are supported by empirical experiments on synthetic and real-world benchmarks.
[ "['Hongkang Li' 'Meng Wang' 'Tengfei Ma' 'Sijia Liu' 'Zaixi Zhang'\n 'Pin-Yu Chen']" ]

id: 2406.01996
link: http://arxiv.org/pdf/2406.01996v1
updated: 2024-06-04T06:27:48Z
published: 2024-06-04T06:27:48Z
title: Bayesian Mesh Optimization for Graph Neural Networks to Enhance Engineering Performance Prediction
In engineering design, surrogate models are widely employed to replace computationally expensive simulations by leveraging design variables and geometric parameters from computer-aided design (CAD) models. However, these models often lose critical information when simplified to lower dimensions and face challenges in parameter definition, especially with the complex 3D shapes commonly found in industrial datasets. To address these limitations, we propose a Bayesian graph neural network (GNN) framework for a 3D deep-learning-based surrogate model that predicts engineering performance by directly learning geometric features from CAD using mesh representation. Our framework determines the optimal size of mesh elements through Bayesian optimization, resulting in a high-accuracy surrogate model. Additionally, it effectively handles the irregular and complex structures of 3D CADs, which differ significantly from the regular and uniform pixel structures of 2D images typically used in deep learning. Experimental results demonstrate that the quality of the mesh significantly impacts the prediction accuracy of the surrogate model, with an optimally sized mesh achieving superior performance. We compare the performance of models based on various 3D representations such as voxel, point cloud, and graph, and evaluate the computational costs of Monte Carlo simulation and Bayesian optimization methods to find the optimal mesh size. We anticipate that our proposed framework has the potential to be applied to mesh-based simulations across various engineering fields, leveraging physics-based information commonly used in computer-aided engineering.
[ "['Jangseop Park' 'Namwoo Kang']" ]

id: 2406.02013
link: http://arxiv.org/pdf/2406.02013v1
updated: 2024-06-04T06:49:18Z
published: 2024-06-04T06:49:18Z
title: Mamba as Decision Maker: Exploring Multi-scale Sequence Modeling in Offline Reinforcement Learning
Sequential modeling has demonstrated remarkable capabilities in offline reinforcement learning (RL), with Decision Transformer (DT) being one of the most notable representatives, achieving significant success. However, RL trajectories possess unique properties that distinguish them from conventional sequences (e.g., text or audio): (1) local correlation, where the next states in RL are theoretically determined solely by current states and actions based on the Markov Decision Process (MDP), and (2) global correlation, where each step's features are related to long-term historical information due to the time-continuous nature of trajectories. In this paper, we propose a novel action sequence predictor, named Mamba Decision Maker (MambaDM), where Mamba is expected to be a promising alternative for sequence modeling paradigms, owing to its efficient modeling of multi-scale dependencies. In particular, we introduce a novel mixer module that proficiently extracts and integrates both global and local features of the input sequence, effectively capturing interrelationships in RL datasets. Extensive experiments demonstrate that MambaDM achieves state-of-the-art performance in Atari and OpenAI Gym datasets. Furthermore, we empirically investigate the scaling laws of MambaDM, finding that increasing model size does not bring performance improvement, but scaling the dataset amount by 2x for MambaDM can obtain up to 33.7% score improvement on the Atari dataset. This paper delves into the sequence modeling capabilities of MambaDM in the RL domain, paving the way for future advancements in robust and efficient decision-making systems. Our code will be available at https://github.com/AndyCao1125/MambaDM.
[ "['Jiahang Cao' 'Qiang Zhang' 'Ziqing Wang' 'Jiaxu Wang' 'Hao Cheng'\n 'Yecheng Shao' 'Wen Zhao' 'Gang Han' 'Yijie Guo' 'Renjing Xu']" ]
null
null
2406.02014
null
null
http://arxiv.org/pdf/2406.02014v1
2024-06-04T06:53:32Z
2024-06-04T06:53:32Z
Understanding Auditory Evoked Brain Signal via Physics-informed Embedding Network with Multi-Task Transformer
In the fields of brain-computer interaction and cognitive neuroscience, effective decoding of auditory signals from task-based functional magnetic resonance imaging (fMRI) is key to understanding how the brain processes complex auditory information. Although existing methods have enhanced decoding capabilities, limitations remain in information utilization and model representation. To overcome these challenges, we propose an innovative multi-task learning model, Physics-informed Embedding Network with Multi-Task Transformer (PEMT-Net), which enhances decoding performance through physics-informed embedding and deep learning techniques. PEMT-Net consists of two principal components: feature augmentation and classification. For feature augmentation, we propose a novel approach that creates neural embedding graphs via node embedding, utilizing random walks to simulate the physical diffusion of neural information. This method captures both local and non-local information flow and introduces a position encoding based on relative physical coordinates. In the classification segment, we propose adaptive embedding fusion to maximally capture linear and non-linear characteristics. Furthermore, we propose an innovative parameter-sharing mechanism to optimize the retention and learning of extracted features. Experiments on a specific dataset demonstrate PEMT-Net's significant performance in multi-task auditory signal decoding, surpassing existing methods and offering new insights into the brain's mechanisms for processing complex auditory information.
[ "['Wanli Ma' 'Xuegang Tang' 'Jin Gu' 'Ying Wang' 'Yuling Xia']" ]
null
null
2406.02015
null
null
http://arxiv.org/pdf/2406.02015v1
2024-06-04T06:54:53Z
2024-06-04T06:54:53Z
Parameterizing Federated Continual Learning for Reproducible Research
Federated Learning (FL) systems operate in heterogeneous and ever-evolving environments that challenge their performance. Under real deployments, the learning tasks of clients can also evolve with time, which calls for the integration of methodologies such as Continual Learning. To enable research reproducibility, we propose a set of experimental best practices that precisely capture and emulate complex learning scenarios. Our framework, Freddie, is the first entirely configurable framework for Federated Continual Learning (FCL), and it can be seamlessly deployed on a large number of machines thanks to the use of Kubernetes and containerization. We demonstrate the effectiveness of Freddie on two use cases, (i) large-scale FL on CIFAR100 and (ii) heterogeneous task sequences in FCL, which highlight unaddressed performance challenges in FCL scenarios.
[ "['Bart Cox' 'Jeroen Galjaard' 'Aditya Shankar' 'Jérémie Decouchant'\n 'Lydia Y. Chen']" ]
null
null
2406.02016
null
null
http://arxiv.org/pdf/2406.02016v1
2024-06-04T06:56:41Z
2024-06-04T06:56:41Z
Adaptive and Optimal Second-order Optimistic Methods for Minimax Optimization
We propose adaptive, line-search-free second-order methods with an optimal rate of convergence for solving convex-concave min-max problems. By means of an adaptive step size, our algorithms feature a simple update rule that requires solving only one linear system per iteration, eliminating the need for line-search or backtracking mechanisms. Specifically, we base our algorithms on the optimistic method and appropriately combine it with second-order information. Moreover, distinct from common adaptive schemes, we define the step size recursively as a function of the gradient norm and the prediction error in the optimistic update. We first analyze a variant where the step size requires knowledge of the Lipschitz constant of the Hessian. Under the additional assumption of Lipschitz continuous gradients, we further design a parameter-free version by tracking the Hessian Lipschitz constant locally and ensuring the iterates remain bounded. We also evaluate the practical performance of our algorithm by comparing it to existing second-order algorithms for minimax optimization.
[ "['Ruichen Jiang' 'Ali Kavis' 'Qiujiang Jin' 'Sujay Sanghavi'\n 'Aryan Mokhtari']" ]
null
null
2406.02017
null
null
http://arxiv.org/pdf/2406.02017v1
2024-06-04T06:57:12Z
2024-06-04T06:57:12Z
On the Mode-Seeking Properties of Langevin Dynamics
The Langevin Dynamics framework, which aims to generate samples from the score function of a probability distribution, is widely used for analyzing and interpreting score-based generative modeling. While the convergence behavior of Langevin Dynamics under unimodal distributions has been extensively studied in the literature, in practice the data distribution could consist of multiple distinct modes. In this work, we investigate Langevin Dynamics in producing samples from multimodal distributions and theoretically study its mode-seeking properties. We prove that under a variety of sub-Gaussian mixtures, Langevin Dynamics is unlikely to find all mixture components within a sub-exponential number of steps in the data dimension. To reduce the mode-seeking tendencies of Langevin Dynamics, we propose Chained Langevin Dynamics, which divides the data vector into patches of constant size and generates every patch sequentially conditioned on the previous patches. We perform a theoretical analysis of Chained Langevin Dynamics by reducing it to sampling from a constant-dimensional distribution. We present the results of several numerical experiments on synthetic and real image datasets, supporting our theoretical results on the iteration complexities of sample generation from mixture distributions using the chained and vanilla Langevin Dynamics. The code is available at https://github.com/Xiwei-Cheng/Chained_LD.
[ "['Xiwei Cheng' 'Kexin Fu' 'Farzan Farnia']" ]
null
null
2406.02021
null
null
http://arxiv.org/pdf/2406.02021v1
2024-06-04T07:00:14Z
2024-06-04T07:00:14Z
MetaMixer Is All You Need
Transformer, composed of self-attention and a Feed-Forward Network (FFN), has revolutionized the landscape of network design across various vision tasks. The FFN is a versatile operator seamlessly integrated into nearly all AI models to effectively harness rich representations. Recent works also show that the FFN functions like key-value memories. Thus, akin to the query-key-value mechanism within self-attention, the FFN can be viewed as a memory network, where the input serves as the query and the two projection weights operate as keys and values, respectively. We hypothesize that the importance lies in the query-key-value framework itself rather than in self-attention. To verify this, we propose converting self-attention into a more FFN-like, efficient token mixer with only convolutions while retaining the query-key-value framework, namely FFNification. Specifically, FFNification replaces query-key and attention coefficient-value interactions with large-kernel convolutions and adopts the GELU activation function instead of softmax. The derived token mixer, FFNified attention, serves as key-value memories for detecting locally distributed spatial patterns, and operates in the opposite dimension to the ConvNeXt block within each corresponding sub-operation of the query-key-value framework. Building upon the above two modules, we present a family of Fast-Forward Networks. Our FFNet achieves remarkable performance improvements over previous state-of-the-art methods across a wide range of tasks. The strong and general performance of our proposed method validates our hypothesis and leads us to introduce MetaMixer, a general mixer architecture that does not specify sub-operations within the query-key-value framework. We show that using only simple operations like convolution and GELU in the MetaMixer can achieve superior performance.
[ "['Seokju Yun' 'Dongheon Lee' 'Youngmin Ro']" ]
null
null
2406.02024
null
null
http://arxiv.org/pdf/2406.02024v3
2024-06-30T07:44:53Z
2024-06-04T07:02:59Z
Verifying the Generalization of Deep Learning to Out-of-Distribution Domains
Deep neural networks (DNNs) play a crucial role in the field of machine learning, demonstrating state-of-the-art performance across various application domains. However, despite their success, DNN-based models may occasionally exhibit challenges with generalization, i.e., may fail to handle inputs that were not encountered during training. This limitation is a significant challenge when it comes to deploying deep learning for safety-critical tasks, as well as in real-world settings characterized by substantial variability. We introduce a novel approach for harnessing DNN verification technology to identify DNN-driven decision rules that exhibit robust generalization to previously unencountered input domains. Our method assesses generalization within an input domain by measuring the level of agreement between independently trained deep neural networks for inputs in this domain. We also efficiently realize our approach by using off-the-shelf DNN verification engines, and extensively evaluate it on both supervised and unsupervised DNN benchmarks, including a deep reinforcement learning (DRL) system for Internet congestion control -- demonstrating the applicability of our approach for real-world settings. Moreover, our research introduces a fresh objective for formal verification, offering the prospect of mitigating the challenges linked to deploying DNN-driven systems in real-world scenarios.
[ "['Guy Amir' 'Osher Maayan' 'Tom Zelazny' 'Guy Katz' 'Michael Schapira']" ]
null
null
2406.02027
null
null
http://arxiv.org/pdf/2406.02027v2
2024-06-27T05:47:55Z
2024-06-04T07:06:06Z
Inference Attacks: A Taxonomy, Survey, and Promising Directions
The prosperity of machine learning has also raised concerns about data privacy. Among the threats, inference attacks can cause privacy breaches in various MLaaS scenarios and in the model training/prediction phases. Specifically, inference attacks can perform privacy inference on undisclosed target training sets based on outputs of the target model, including but not limited to statistics, membership, semantics, and data representation; for instance, an attacker may infer whether the target data has the characteristics of AIDS. In addition, the rapid development of the machine learning community in recent years, especially the surge of model types and application scenarios, has further stimulated research on inference attacks. Thus, studying inference attacks and analyzing them in depth is urgent and significant. However, there is still a gap in the systematic discussion of inference attacks from the taxonomy, global, attack, and defense perspectives. This survey provides an in-depth and comprehensive treatment of inference attacks and corresponding countermeasures in ML-as-a-service, based on a taxonomy and the latest research. Without compromising researchers' intuition, we first propose the 3MP taxonomy based on the community research status, trying to normalize the confusing naming system of inference attacks. We also analyze the pros and cons of each type of inference attack, their workflows, countermeasures, and how they interact with other attacks. In the end, we point out several promising directions for researchers from a more comprehensive and novel perspective.
[ "['Feng Wu' 'Lei Cui' 'Shaowen Yao' 'Shui Yu']" ]
null
null
2406.02035
null
null
http://arxiv.org/pdf/2406.02035v1
2024-06-04T07:22:12Z
2024-06-04T07:22:12Z
A Unifying Framework for Action-Conditional Self-Predictive Reinforcement Learning
Learning a good representation is a crucial challenge for Reinforcement Learning (RL) agents. Self-predictive learning provides means to jointly learn a latent representation and dynamics model by bootstrapping from future latent representations (BYOL). Recent work has developed theoretical insights into these algorithms by studying a continuous-time ODE model for self-predictive representation learning under the simplifying assumption that the algorithm depends on a fixed policy (BYOL-$\Pi$); this assumption is at odds with practical instantiations of such algorithms, which explicitly condition their predictions on future actions. In this work, we take a step towards bridging the gap between theory and practice by analyzing an action-conditional self-predictive objective (BYOL-AC) using the ODE framework, characterizing its convergence properties and highlighting important distinctions between the limiting solutions of the BYOL-$\Pi$ and BYOL-AC dynamics. We show how the two representations are related by a variance equation. This connection leads to a novel variance-like action-conditional objective (BYOL-VAR) and its corresponding ODE. We unify the study of all three objectives through two complementary lenses: a model-based perspective, where each objective is shown to be equivalent to a low-rank approximation of certain dynamics, and a model-free perspective, which establishes relationships between the objectives and their respective value, Q-value, and advantage functions. Our empirical investigations, encompassing both linear function approximation and Deep RL environments, demonstrate that BYOL-AC is better overall in a variety of different settings.
[ "['Khimya Khetarpal' 'Zhaohan Daniel Guo' 'Bernardo Avila Pires'\n 'Yunhao Tang' 'Clare Lyle' 'Mark Rowland' 'Nicolas Heess' 'Diana Borsa'\n 'Arthur Guez' 'Will Dabney']" ]
null
null
2406.02040
null
null
http://arxiv.org/pdf/2406.02040v1
2024-06-04T07:24:51Z
2024-06-04T07:24:51Z
DFA-GNN: Forward Learning of Graph Neural Networks by Direct Feedback Alignment
Graph neural networks (GNNs) are recognized for their strong performance across various applications, with the backpropagation (BP) algorithm playing a central role in the development of most GNN models. However, despite its effectiveness, BP has limitations that challenge its biological plausibility and affect the efficiency, scalability and parallelism of training neural networks for graph-based tasks. While several non-BP training algorithms, such as direct feedback alignment (DFA), have been successfully applied to fully-connected and convolutional network components for handling Euclidean data, directly adapting these non-BP frameworks to manage non-Euclidean graph data in GNN models presents significant challenges. These challenges primarily arise from the violation of the i.i.d. assumption in graph data and the difficulty of accessing prediction errors for all samples (nodes) within the graph. To overcome these obstacles, in this paper we propose DFA-GNN, a novel forward learning framework tailored for GNNs, with a case study of semi-supervised learning. The proposed method breaks the limitations of BP by using a dedicated forward training mechanism. Specifically, DFA-GNN extends the principles of DFA to adapt to graph data and the unique architecture of GNNs, incorporating the information of graph topology into the feedback links to accommodate the non-Euclidean characteristics of graph data. Additionally, for semi-supervised graph learning tasks, we develop a pseudo-error generator that spreads residual errors from training data to create a pseudo error for each unlabeled node. These pseudo errors are then utilized to train GNNs using DFA. Extensive experiments on 10 public benchmarks reveal that our learning framework outperforms not only previous non-BP methods but also the standard BP methods, and it exhibits excellent robustness against various types of noise and attacks.
[ "['Gongpei Zhao' 'Tao Wang' 'Congyan Lang' 'Yi Jin' 'Yidong Li'\n 'Haibin Ling']" ]
null
null
2406.02044
null
null
http://arxiv.org/pdf/2406.02044v1
2024-06-04T07:27:36Z
2024-06-04T07:27:36Z
QROA: A Black-Box Query-Response Optimization Attack on LLMs
Large Language Models (LLMs) have surged in popularity in recent months, yet they possess concerning capabilities for generating harmful content when manipulated. This study introduces the Query-Response Optimization Attack (QROA), an optimization-based strategy designed to exploit LLMs through a black-box, query-only interaction. QROA adds an optimized trigger to a malicious instruction to compel the LLM to generate harmful content. Unlike previous approaches, QROA does not require access to the model's logit information or any other internal data and operates solely through the standard query-response interface of LLMs. Inspired by deep Q-learning and greedy coordinate descent, the method iteratively updates tokens to maximize a designed reward function. We tested our method on various LLMs such as Vicuna, Falcon, and Mistral, achieving an Attack Success Rate (ASR) over 80%. We also tested the model against Llama2-chat, the fine-tuned version of Llama2 designed to resist jailbreak attacks, achieving good ASR with a suboptimal initial trigger seed. This study demonstrates the feasibility of generating jailbreak attacks against deployed LLMs in the public domain using black-box optimization methods, enabling more comprehensive safety testing of LLMs.
[ "['Hussein Jawad' 'Nicolas J. -B. BRUNEL']" ]
null
null
2406.02049
null
null
http://arxiv.org/pdf/2406.02049v1
2024-06-04T07:30:27Z
2024-06-04T07:30:27Z
Causal Effect Identification in LiNGAM Models with Latent Confounders
We study the generic identifiability of causal effects in linear non-Gaussian acyclic models (LiNGAM) with latent variables. We consider the problem in two main settings: When the causal graph is known a priori, and when it is unknown. In both settings, we provide a complete graphical characterization of the identifiable direct or total causal effects among observed variables. Moreover, we propose efficient algorithms to certify the graphical conditions. Finally, we propose an adaptation of the reconstruction independent component analysis (RICA) algorithm that estimates the causal effects from the observational data given the causal graph. Experimental results show the effectiveness of the proposed method in estimating the causal effects.
[ "['Daniele Tramontano' 'Yaroslav Kivva' 'Saber Salehkaleybar'\n 'Mathias Drton' 'Negar Kiyavash']" ]
null
null
2406.02052
null
null
http://arxiv.org/pdf/2406.02052v1
2024-06-04T07:35:23Z
2024-06-04T07:35:23Z
PETRA: Parallel End-to-end Training with Reversible Architectures
Reversible architectures have been shown to perform on par with their non-reversible counterparts, and have been applied in deep learning for memory savings and generative modeling. In this work, we show how reversible architectures can solve challenges in parallelizing deep model training. We introduce PETRA, a novel alternative to backpropagation for parallelizing gradient computations. PETRA facilitates effective model parallelism by enabling stages (i.e., a set of layers) to compute independently on different devices, while only needing to communicate activations and gradients between each other. By decoupling the forward and backward passes and keeping a single updated version of the parameters, the need for weight stashing is also removed. We develop a custom autograd-like training framework for PETRA, and we demonstrate its effectiveness on CIFAR-10, ImageNet32, and ImageNet, achieving competitive accuracies comparable to backpropagation using ResNet-18, ResNet-34, and ResNet-50 models.
[ "['Stéphane Rivaud' 'Louis Fournier' 'Thomas Pumir' 'Eugene Belilovsky'\n 'Michael Eickenberg' 'Edouard Oyallon']" ]
null
null
2406.02056
null
null
http://arxiv.org/pdf/2406.02056v1
2024-06-04T07:37:47Z
2024-06-04T07:37:47Z
CAP: A Context-Aware Neural Predictor for NAS
Neural predictors are effective in boosting the time-consuming performance evaluation stage in neural architecture search (NAS), owing to their direct estimation of unseen architectures. Despite the effectiveness, training a powerful neural predictor with fewer annotated architectures remains a huge challenge. In this paper, we propose a context-aware neural predictor (CAP) which only needs a few annotated architectures for training based on the contextual information from the architectures. Specifically, the input architectures are encoded into graphs and the predictor infers the contextual structure around the nodes inside each graph. Then, enhanced by the proposed context-aware self-supervised task, the pre-trained predictor can obtain expressive and generalizable representations of architectures. Therefore, only a few annotated architectures are sufficient for training. Experimental results in different search spaces demonstrate the superior performance of CAP compared with state-of-the-art neural predictors. In particular, CAP can rank architectures precisely at the budget of only 172 annotated architectures in NAS-Bench-101. Moreover, CAP can help find promising architectures in both NAS-Bench-101 and DARTS search spaces on the CIFAR-10 dataset, serving as a useful navigator for NAS to explore the search space efficiently.
[ "['Han Ji' 'Yuqi Feng' 'Yanan Sun']" ]
null
null
2406.02057
null
null
http://arxiv.org/abs/2406.02057v1
2024-06-04T07:41:15Z
2024-06-04T07:41:15Z
Tabular and Deep Learning for the Whittle Index
The Whittle index policy is a heuristic that has shown remarkably good performance (with guaranteed asymptotic optimality) when applied to the class of problems known as Restless Multi-Armed Bandit Problems (RMABPs). In this paper we present QWI and QWINN, two reinforcement learning algorithms, respectively tabular and deep, to learn the Whittle index for the total discounted criterion. The key feature is the use of two time-scales, a faster one to update the state-action Q-values, and a relatively slower one to update the Whittle indices. In our main theoretical result we show that QWI, which is a tabular implementation, converges to the real Whittle indices. We then present QWINN, an adaptation of the QWI algorithm that uses neural networks to compute the Q-values on the faster time-scale, which is able to extrapolate information from one state to another and scales naturally to large state-space environments. For QWINN, we show that all local minima of the Bellman error are locally stable equilibria, which is the first result of its kind for DQN-based schemes. Numerical computations show that QWI and QWINN converge faster than the standard Q-learning algorithm, neural-network-based approximate Q-learning, and other state-of-the-art algorithms.
[ "['Francisco Robledo Relaño' 'Vivek Borkar' 'Urtzi Ayesta'\n 'Konstantin Avrachenkov']" ]
null
null
2406.02059
null
null
http://arxiv.org/pdf/2406.02059v1
2024-06-04T07:43:04Z
2024-06-04T07:43:04Z
Graph Adversarial Diffusion Convolution
This paper introduces a min-max optimization formulation for the Graph Signal Denoising (GSD) problem. In this formulation, we first maximize the second term of GSD by introducing perturbations to the graph structure based on Laplacian distance and then minimize the overall loss of the GSD. By solving the min-max optimization problem, we derive a new variant of the Graph Diffusion Convolution (GDC) architecture, called Graph Adversarial Diffusion Convolution (GADC). GADC differs from GDC by incorporating an additional term that enhances robustness against adversarial attacks on the graph structure and noise in node features. Moreover, GADC improves the performance of GDC on heterophilic graphs. Extensive experiments demonstrate the effectiveness of GADC across various datasets. Code is available at https://github.com/SongtaoLiu0823/GADC.
[ "['Songtao Liu' 'Jinghui Chen' 'Tianfan Fu' 'Lu Lin' 'Marinka Zitnik'\n 'Dinghao Wu']" ]
null
null
2406.02061
null
null
http://arxiv.org/pdf/2406.02061v4
2024-07-13T21:02:21Z
2024-06-04T07:43:33Z
Alice in Wonderland: Simple Tasks Showing Complete Reasoning Breakdown in State-Of-the-Art Large Language Models
Large Language Models (LLMs) are often described as instances of foundation models - that is, models that transfer strongly across various tasks and conditions in a few-shot or zero-shot manner, while exhibiting scaling laws that predict function improvement when increasing the pre-training scale. These claims of excelling in different functions and tasks rely on measurements taken across various sets of standardized benchmarks showing high scores for such models. We demonstrate here a dramatic breakdown of function and reasoning capabilities of state-of-the-art models trained at the largest available scales which claim strong function, using a simple, short, conventional common-sense problem (AIW problem) formulated in concise natural language, easily solvable by humans. The breakdown is dramatic, as models show strong fluctuations across even slight problem variations that should not affect problem solving, while also expressing strong overconfidence in the wrong solutions, often backed up by plausible-sounding explanation-like confabulations. Various standard interventions in an attempt to get the right solution, like various types of enhanced prompting, or urging the models to reconsider the wrong solutions again by multi-step re-evaluation, fail. We take these initial observations to the scientific and technological community to stimulate urgent re-assessment of the claimed capabilities of the current generation of LLMs. Such re-assessment also requires common action to create standardized benchmarks that would allow proper detection of such basic reasoning deficits that obviously manage to remain undiscovered by current state-of-the-art evaluation procedures and benchmarks. Code for reproducing experiments in the paper and raw experiment data can be found at https://github.com/LAION-AI/AIW
[ "['Marianna Nezhurina' 'Lucia Cipolina-Kun' 'Mehdi Cherti' 'Jenia Jitsev']" ]
null
null
2406.02064
null
null
http://arxiv.org/pdf/2406.02064v1
2024-06-04T07:45:27Z
2024-06-04T07:45:27Z
Advancing Generalized Transfer Attack with Initialization Derived Bilevel Optimization and Dynamic Sequence Truncation
Transfer attacks generate significant interest for real-world black-box applications by crafting transferable adversarial examples through surrogate models. However, existing works essentially directly optimize the single-level objective w.r.t. the surrogate model, which always leads to poor interpretability of the attack mechanism and limited generalization performance over unknown victim models. In this work, we propose the \textbf{B}il\textbf{E}vel \textbf{T}ransfer \textbf{A}ttac\textbf{K} (BETAK) framework by establishing an initialization-derived bilevel optimization paradigm, which explicitly reformulates the nested constraint relationship between the Upper-Level (UL) pseudo-victim attacker and the Lower-Level (LL) surrogate attacker. Algorithmically, we introduce the Hyper Gradient Response (HGR) estimation as an effective feedback for the transferability over pseudo-victim attackers, and propose the Dynamic Sequence Truncation (DST) technique to dynamically adjust the back-propagation path for HGR and reduce computational overhead simultaneously. Meanwhile, we conduct detailed algorithmic analysis and provide a convergence guarantee to support the non-convexity of the LL surrogate attacker. Extensive evaluations demonstrate substantial improvement of BETAK (e.g., a $\mathbf{53.41}\%$ increase of attack success rates against IncRes-v$2_{ens}$) against different victims and defense methods in targeted and untargeted attack scenarios. The source code is available at https://github.com/callous-youth/BETAK.
[ "['Yaohua Liu' 'Jiaxin Gao' 'Xuan Liu' 'Xianghao Jiao' 'Xin Fan'\n 'Risheng Liu']" ]
null
null
2406.02066
null
null
http://arxiv.org/pdf/2406.02066v1
2024-06-04T07:49:30Z
2024-06-04T07:49:30Z
Preference Optimization for Molecule Synthesis with Conditional Residual Energy-based Models
Molecule synthesis through machine learning is one of the fundamental problems in drug discovery. Current data-driven strategies employ one-step retrosynthesis models and search algorithms to predict synthetic routes in a top-down manner. Despite their effective performance, these strategies face limitations in molecule synthetic route generation due to a greedy selection of the next molecule set without any lookahead. Furthermore, existing strategies cannot control the generation of synthetic routes based on possible criteria such as material costs, yields, and step count. In this work, we propose a general and principled framework via conditional residual energy-based models (EBMs) that focuses on the quality of the entire synthetic route based on the specified criteria. By incorporating an additional energy-based function into our probabilistic model, our proposed algorithm can enhance the quality of the most probable synthetic routes (with higher probabilities) generated by various strategies in a plug-and-play fashion. Extensive experiments demonstrate that our framework can consistently boost performance across various strategies and outperforms previous state-of-the-art top-1 accuracy by a margin of 2.5%. Code is available at https://github.com/SongtaoLiu0823/CREBM.
[ "['Songtao Liu' 'Hanjun Dai' 'Yue Zhao' 'Peng Liu']" ]
null
null
2406.02075
null
null
http://arxiv.org/pdf/2406.02075v1
2024-06-04T07:54:31Z
2024-06-04T07:54:31Z
ReLU-KAN: New Kolmogorov-Arnold Networks that Only Need Matrix Addition, Dot Multiplication, and ReLU
Limited by the complexity of basis function (B-spline) calculations, Kolmogorov-Arnold Networks (KAN) suffer from restricted parallel computing capability on GPUs. This paper proposes a novel ReLU-KAN implementation that inherits the core idea of KAN. By adopting ReLU (Rectified Linear Unit) and point-wise multiplication, we simplify the design of KAN's basis function and optimize the computation process for efficient CUDA computing. The proposed ReLU-KAN architecture can be readily implemented on existing deep learning frameworks (e.g., PyTorch) for both inference and training. Experimental results demonstrate that ReLU-KAN achieves a 20x speedup compared to traditional KAN with 4-layer networks. Furthermore, ReLU-KAN exhibits a more stable training process with superior fitting ability while preserving the "catastrophic forgetting avoidance" property of KAN. The code is available at https://github.com/quiqi/relu_kan
[ "['Qi Qiu' 'Tao Zhu' 'Helin Gong' 'Liming Chen' 'Huansheng Ning']" ]
null
null
2406.02080
null
null
http://arxiv.org/pdf/2406.02080v1
2024-06-04T08:02:39Z
2024-06-04T08:02:39Z
LongSSM: On the Length Extension of State-space Models in Language Modelling
In this paper, we investigate the length extension of state-space models (SSMs) in language modeling. Length extension involves training models on short sequences and testing them on longer ones. We show that state-space models trained with zero hidden-state initialization have difficulty performing length extension. We explain this difficulty by pointing out that length extension is equivalent to polynomial extrapolation. Based on this theory, we propose a simple yet effective method - changing the hidden-state initialization scheme - to improve length extension. Moreover, our method shows that using a long training sequence length is beneficial but not necessary for length extension. Changing the hidden-state initialization enables efficient training of long-memory models with a smaller training context length.
[ "['Shida Wang']" ]
null
null
2406.02081
null
null
http://arxiv.org/pdf/2406.02081v2
2024-06-24T03:38:46Z
2024-06-04T08:04:23Z
FightLadder: A Benchmark for Competitive Multi-Agent Reinforcement Learning
Recent advances in reinforcement learning (RL) heavily rely on a variety of well-designed benchmarks, which provide environmental platforms and consistent criteria to evaluate existing and novel algorithms. Specifically, in multi-agent RL (MARL), a plethora of benchmarks based on cooperative games have spurred the development of algorithms that improve the scalability of cooperative multi-agent systems. However, for the competitive setting, a lightweight and open-sourced benchmark with challenging gaming dynamics and visual inputs has not yet been established. In this work, we present FightLadder, a real-time fighting game platform, to empower competitive MARL research. Along with the platform, we provide implementations of state-of-the-art MARL algorithms for competitive games, as well as a set of evaluation metrics to characterize the performance and exploitability of agents. We demonstrate the feasibility of this platform by training a general agent that consistently defeats 12 built-in characters in single-player mode, and expose the difficulty of training a non-exploitable agent without human knowledge and demonstrations in two-player mode. FightLadder provides meticulously designed environments to address critical challenges in competitive MARL research, aiming to catalyze a new era of discovery and advancement in the field. Videos and code at https://sites.google.com/view/fightladder/home.
[ "['Wenzhe Li' 'Zihan Ding' 'Seth Karten' 'Chi Jin']" ]
null
null
2406.02092
null
null
http://arxiv.org/pdf/2406.02092v1
2024-06-04T08:23:57Z
2024-06-04T08:23:57Z
MaskSR: Masked Language Model for Full-band Speech Restoration
Speech restoration aims at restoring high quality speech in the presence of a diverse set of distortions. Although several deep learning paradigms have been studied for this task, the power of the recently emerging language models has not been fully explored. In this paper, we propose MaskSR, a masked language model capable of restoring full-band 44.1 kHz speech jointly considering noise, reverb, clipping, and low bandwidth. MaskSR works with discrete acoustic tokens extracted using a pre-trained neural codec. During training, MaskSR is optimized to predict randomly masked tokens extracted from the high quality target speech, conditioned on the corrupted speech with various distortions. During inference, MaskSR reconstructs the target speech tokens with efficient iterative sampling. Extensive experiments show that MaskSR obtains competitive results on both the full-band speech restoration task and also on sub-tasks compared with a wide range of models.
[ "['Xu Li' 'Qirui Wang' 'Xiaoyu Liu']" ]
null
null
2406.02105
null
null
http://arxiv.org/pdf/2406.02105v2
2024-06-28T04:05:53Z
2024-06-04T08:33:56Z
Kernel vs. Kernel: Exploring How the Data Structure Affects Neural Collapse
Recently, a vast amount of literature has focused on the "Neural Collapse" (NC) phenomenon, which emerges when training neural network (NN) classifiers beyond the zero training error point. The core component of NC is the decrease in the within-class variability of the network's deepest features, dubbed NC1. The theoretical works that study NC are typically based on simplified unconstrained features models (UFMs) that mask any effect of the data on the extent of collapse. In this paper, we provide a kernel-based analysis that does not suffer from this limitation. First, given a kernel function, we establish expressions for the traces of the within- and between-class covariance matrices of the samples' features (and consequently an NC1 metric). Then, we focus on kernels associated with shallow NNs: the NN Gaussian Process kernel (NNGP), associated with the network at initialization, and the complementary Neural Tangent Kernel (NTK), associated with its training in the "lazy regime". Interestingly, we show that the NTK does not represent more collapsed features than the NNGP for prototypical data models. As NC emerges from training, we then consider an alternative to the NTK: the recently proposed adaptive kernel, which generalizes the NNGP to model the feature mapping learned from the training data. Contrasting our NC1 analysis for these two kernels enables gaining insights into the effect of the data distribution on the extent of collapse, which are empirically aligned with the behavior observed in practical training of NNs.
[ "['Vignesh Kothapalli' 'Tom Tirer']" ]
null
null
2406.02126
null
null
http://arxiv.org/pdf/2406.02126v2
2024-06-06T13:57:09Z
2024-06-04T09:10:14Z
CityLight: A Universal Model Towards Real-world City-scale Traffic Signal Control Coordination
Traffic signal control (TSC) is a promising low-cost measure to enhance transportation efficiency without affecting existing road infrastructure. While various reinforcement learning-based TSC methods have been proposed and experimentally outperform conventional rule-based methods, none of them has been deployed in the real world. An essential gap lies in the oversimplification of the scenarios in terms of intersection heterogeneity and road network intricacy. To make TSC applicable in urban traffic management, we target TSC coordination in city-scale high-authenticity road networks, aiming to solve three unique and important challenges: city-level scalability, heterogeneity of real-world intersections, and effective coordination among intricate neighbor connections. Since optimizing multiple agents in a parameter-sharing paradigm can boost the training efficiency and help achieve scalability, we propose our method, CityLight, based on the well-acknowledged optimization framework, parameter-sharing MAPPO. To ensure the unified policy network can learn to fit large-scale heterogeneous intersections and tackle the intricate between-neighbor coordination, CityLight proposes a universal representation module that consists of two key designs: heterogeneous intersection alignment and neighborhood impact alignment for coordination. To further boost coordination, CityLight adopts neighborhood-integrated rewards to transition from local optima toward the global optimum. Extensive experiments on datasets with hundreds to tens of thousands of real-world intersections and authentic traffic demands validate the surprising effectiveness and generalizability of CityLight, with an overall performance gain of 11.66% and a 22.59% improvement in transfer scenarios in terms of throughput.
[ "['Jinwei Zeng' 'Chao Yu' 'Xinyi Yang' 'Wenxuan Ao' 'Jian Yuan' 'Yong Li'\n 'Yu Wang' 'Huazhong Yang']" ]
null
null
2406.02128
null
null
http://arxiv.org/pdf/2406.02128v1
2024-06-04T09:11:46Z
2024-06-04T09:11:46Z
Iteration Head: A Mechanistic Study of Chain-of-Thought
Chain-of-Thought (CoT) reasoning is known to improve Large Language Models both empirically and in terms of theoretical approximation power. However, our understanding of the inner workings of CoT capabilities and of the conditions under which they emerge remains limited. This paper helps fill this gap by demonstrating how CoT reasoning emerges in transformers in a controlled and interpretable setting. In particular, we observe the appearance of a specialized attention mechanism dedicated to iterative reasoning, which we coin "iteration heads". We track both the emergence and the precise working of these iteration heads down to the attention level, and measure the transferability of the CoT skills to which they give rise between tasks.
[ "['Vivien Cabannes' 'Charles Arnal' 'Wassim Bouaziz' 'Alice Yang'\n 'Francois Charton' 'Julia Kempe']" ]
null
null
2406.02131
null
null
http://arxiv.org/pdf/2406.02131v3
2024-06-11T15:49:07Z
2024-06-04T09:18:20Z
CondTSF: One-line Plugin of Dataset Condensation for Time Series Forecasting
Dataset condensation is a nascent technique that generates a small dataset that can be used to train deep neural networks at lower cost. The objective of dataset condensation is to ensure that the model trained with the synthetic dataset can perform comparably to the model trained with the full dataset. However, existing methods predominantly concentrate on classification tasks, posing challenges in their adaptation to time series forecasting (TS-forecasting). This challenge arises from disparities in the evaluation of synthetic data. In classification, the synthetic data is considered well-distilled if the model trained with the full dataset and the model trained with the synthetic dataset yield identical labels for the same input, regardless of variations in the output logits distribution. Conversely, in TS-forecasting, the effectiveness of synthetic data distillation is determined by the distance between the predictions of the two models. The synthetic data is deemed well-distilled only when all data points within the predictions are similar. Consequently, TS-forecasting has a more rigorous evaluation methodology than classification. To bridge this gap, we theoretically analyze the optimization objective of dataset condensation for TS-forecasting and, based on our analysis, propose a new one-line plugin for dataset condensation designated Dataset Condensation for Time Series Forecasting (CondTSF). Plugging CondTSF into previous dataset condensation methods reduces the distance between the predictions of the model trained with the full dataset and the model trained with the synthetic dataset, thereby enhancing performance. We conduct extensive experiments on eight commonly used time series datasets. CondTSF consistently improves the performance of all previous dataset condensation methods across all datasets, particularly at low condensing ratios.
[ "['Jianrong Ding' 'Zhanyu Liu' 'Guanjie Zheng' 'Haiming Jin' 'Linghe Kong']" ]
null
null
2406.02133
null
null
http://arxiv.org/pdf/2406.02133v1
2024-06-04T09:21:31Z
2024-06-04T09:21:31Z
SimulTron: On-Device Simultaneous Speech to Speech Translation
Simultaneous speech-to-speech translation (S2ST) holds the promise of breaking down communication barriers and enabling fluid conversations across languages. However, achieving accurate, real-time translation through mobile devices remains a major challenge. We introduce SimulTron, a novel S2ST architecture designed to tackle this task. SimulTron is a lightweight direct S2ST model that leverages the strengths of the Translatotron framework while incorporating key modifications for streaming operation and an adjustable fixed delay. Our experiments show that SimulTron surpasses Translatotron 2 in offline evaluations. Furthermore, real-time evaluations reveal that SimulTron improves upon the performance achieved by Translatotron 1. Additionally, SimulTron achieves superior BLEU scores and latency compared to previous real-time S2ST methods on the MuST-C dataset. Significantly, we have successfully deployed SimulTron on a Pixel 7 Pro device, showing its potential for simultaneous on-device S2ST.
[ "['Alex Agranovich' 'Eliya Nachmani' 'Oleg Rybakov' 'Yifan Ding' 'Ye Jia'\n 'Nadav Bar' 'Heiga Zen' 'Michelle Tadmor Ramanovich']" ]
null
null
2406.02140
null
null
http://arxiv.org/pdf/2406.02140v1
2024-06-04T09:27:35Z
2024-06-04T09:27:35Z
Optimality of Matrix Mechanism on $\ell_p^p$-metric
In this paper, we introduce the $\ell_p^p$-error metric (for $p \geq 2$) when answering linear queries under the constraint of differential privacy. We characterize such an error under $(\epsilon,\delta)$-differential privacy. Before this paper, tight characterization in the hardness of privately answering linear queries was known under the $\ell_2^2$-error metric (Edmonds et al., STOC 2020) and the $\ell_p^2$-error metric for unbiased mechanisms (Nikolov and Tang, ITCS 2024). As a direct consequence of our results, we give tight bounds on answering prefix-sum and parity queries under differential privacy for all constant $p$ in terms of the $\ell_p^p$ error, generalizing the bounds in Henzinger et al. (SODA 2023) for $p=2$.
[ "['Jingcheng Liu' 'Jalaj Upadhyay' 'Zongrui Zou']" ]
null
null
2406.02146
null
null
http://arxiv.org/pdf/2406.02146v1
2024-06-04T09:34:08Z
2024-06-04T09:34:08Z
Activation Bottleneck: Sigmoidal Neural Networks Cannot Forecast a Straight Line
A neural network has an activation bottleneck if one of its hidden layers has a bounded image. We show that networks with an activation bottleneck cannot forecast unbounded sequences such as straight lines, random walks, or any sequence with a trend: The difference between prediction and ground truth becomes arbitrarily large, regardless of the training procedure. Widely used neural network architectures such as LSTM and GRU suffer from this limitation. In our analysis, we characterize activation bottlenecks and explain why they prevent sigmoidal networks from learning unbounded sequences. We experimentally validate our findings and discuss modifications to network architectures that mitigate the effects of activation bottlenecks.
[ "['Maximilian Toller' 'Hussain Hussain' 'Bernhard C Geiger']" ]
null
null
2406.02154
null
null
http://arxiv.org/pdf/2406.02154v1
2024-06-04T09:42:34Z
2024-06-04T09:42:34Z
Learning Hamiltonian neural Koopman operator and simultaneously sustaining and discovering conservation law
Accurately finding and predicting dynamics from observational data with noise perturbations is of paramount significance but remains a major challenge. Here, for Hamiltonian mechanics, we propose the Hamiltonian Neural Koopman Operator (HNKO), which integrates knowledge of mathematical physics in learning the Koopman operator and makes it automatically sustain and even discover conservation laws. We demonstrate the superior performance of the HNKO and its extension using a number of representative physical systems, even those with hundreds or thousands of degrees of freedom. Our results suggest that appropriately feeding prior knowledge of the underlying system and mathematical theory into the learning framework can reinforce the capability of machine learning in solving physical problems.
[ "['Jingdong Zhang' 'Qunxi Zhu' 'Wei Lin']" ]
null
null
2406.02156
null
null
http://arxiv.org/pdf/2406.02156v1
2024-06-04T09:44:24Z
2024-06-04T09:44:24Z
Almost linear time differentially private release of synthetic graphs
In this paper, we give almost linear time and space algorithms to sample from an exponential mechanism with an $\ell_1$-score function defined over an exponentially large non-convex set. As a direct result, on input an $n$ vertex $m$ edge graph $G$, we present the \textit{first} $\widetilde{O}(m)$ time and $O(m)$ space algorithms for differentially privately outputting an $n$ vertex $O(m)$ edge synthetic graph that approximates all the cuts and the spectrum of $G$. These are the \emph{first} private algorithms for releasing synthetic graphs that nearly match this task's time and space complexity in the non-private setting while achieving the same (or better) utility as previous works in the more practical sparse regime. Additionally, our algorithms can be extended to private graph analysis under continual observation.
[ "['Jingcheng Liu' 'Jalaj Upadhyay' 'Zongrui Zou']" ]
null
null
2406.02157
null
null
http://arxiv.org/pdf/2406.02157v1
2024-06-04T09:44:49Z
2024-06-04T09:44:49Z
Online Learning and Information Exponents: On The Importance of Batch size, and Time/Complexity Tradeoffs
We study the impact of the batch size $n_b$ on the iteration time $T$ of training two-layer neural networks with one-pass stochastic gradient descent (SGD) on multi-index target functions of isotropic covariates. We characterize the optimal batch size minimizing the iteration time as a function of the hardness of the target, as characterized by the information exponents. We show that performing gradient updates with large batches $n_b \lesssim d^{\frac{\ell}{2}}$ minimizes the training time without changing the total sample complexity, where $\ell$ is the information exponent of the target to be learned \citep{arous2021online} and $d$ is the input dimension. However, batch sizes larger than $n_b \gg d^{\frac{\ell}{2}}$ are detrimental for improving the time complexity of SGD. We provably overcome this fundamental limitation via a different training protocol, \textit{Correlation loss SGD}, which suppresses the auto-correlation terms in the loss function. We show that one can track the training progress by a system of low-dimensional ordinary differential equations (ODEs). Finally, we validate our theoretical results with numerical experiments.
[ "['Luca Arnaboldi' 'Yatin Dandi' 'Florent Krzakala' 'Bruno Loureiro'\n 'Luca Pesce' 'Ludovic Stephan']" ]
null
null
2406.02158
null
null
http://arxiv.org/pdf/2406.02158v1
2024-06-04T09:45:04Z
2024-06-04T09:45:04Z
Radar Spectra-Language Model for Automotive Scene Parsing
Radar sensors are low cost, long-range, and weather-resilient. Therefore, they are widely used for driver assistance functions, and are expected to be crucial for the success of autonomous driving in the future. In many perception tasks only pre-processed radar point clouds are considered. In contrast, radar spectra are a raw form of radar measurements and contain more information than radar point clouds. However, radar spectra are rather difficult to interpret. In this work, we aim to explore the semantic information contained in spectra in the context of automated driving, thereby moving towards better interpretability of radar spectra. To this end, we create a radar spectra-language model, allowing us to query radar spectra measurements for the presence of scene elements using free text. We overcome the scarcity of radar spectra data by matching the embedding space of an existing vision-language model (VLM). Finally, we explore the benefit of the learned representation for scene parsing, and obtain improvements in free space segmentation and object detection merely by injecting the spectra embedding into a baseline model.
[ "['Mariia Pushkareva' 'Yuri Feldman' 'Csaba Domokos' 'Kilian Rambach'\n 'Dotan Di Castro']" ]
null
null
2406.02165
null
null
http://arxiv.org/pdf/2406.02165v1
2024-06-04T09:54:55Z
2024-06-04T09:54:55Z
SaVeR: Optimal Data Collection Strategy for Safe Policy Evaluation in Tabular MDP
In this paper, we study safe data collection for the purpose of policy evaluation in tabular Markov decision processes (MDPs). In policy evaluation, we are given a \textit{target} policy and asked to estimate the expected cumulative reward it will obtain. Policy evaluation requires data, and we are interested in the question of what \textit{behavior} policy should collect the data for the most accurate evaluation of the target policy. While prior work has considered behavior policy selection, in this paper we additionally consider a safety constraint on the behavior policy. Namely, we assume there exists a known default policy that incurs a particular expected cost when run, and we enforce that the cumulative cost of all behavior policies run is better than a constant factor of the cost that would be incurred had we always run the default policy. We first show that there exists a class of intractable MDPs where no safe oracle algorithm with knowledge of the problem parameters can efficiently collect data and satisfy the safety constraints. We then define a tractability condition for an MDP such that a safe oracle algorithm can efficiently collect data, and using it we prove the first lower bound for this setting. We then introduce an algorithm, SaVeR, for this problem that approximates the safe oracle algorithm, and we bound the finite-sample mean squared error of the algorithm while ensuring it satisfies the safety constraint. Finally, we show in simulations that SaVeR produces low-MSE policy evaluation while satisfying the safety constraint.
[ "['Subhojyoti Mukherjee' 'Josiah P. Hanna' 'Robert Nowak']" ]
null
null
2406.02173
null
null
http://arxiv.org/pdf/2406.02173v1
2024-06-04T10:04:54Z
2024-06-04T10:04:54Z
Learning the Hodgkin-Huxley Model with Operator Learning Techniques
We construct and compare three operator learning architectures, DeepONet, Fourier Neural Operator, and Wavelet Neural Operator, in order to learn the operator mapping a time-dependent applied current to the transmembrane potential of the Hodgkin-Huxley ionic model. The underlying non-linearity of the Hodgkin-Huxley dynamical system, the stiffness of its solutions, and the threshold dynamics depending on the intensity of the applied current, are some of the challenges to address when exploiting artificial neural networks to learn this class of complex operators. By properly designing these operator learning techniques, we demonstrate their ability to effectively address these challenges, achieving a relative L2 error as low as 1.4% in learning the solutions of the Hodgkin-Huxley ionic model.
[ "['Edoardo Centofanti' 'Massimiliano Ghiotto' 'Luca F. Pavarino']" ]
null
null
2406.02175
null
null
http://arxiv.org/pdf/2406.02175v2
2024-06-21T13:45:51Z
2024-06-04T10:11:46Z
Branches: A Fast Dynamic Programming and Branch & Bound Algorithm for Optimal Decision Trees
Decision Tree Learning is a fundamental problem for Interpretable Machine Learning, yet it poses a formidable optimization challenge. Despite numerous efforts dating back to the early 1990s, practical algorithms have only recently emerged, primarily leveraging Dynamic Programming (DP) and Branch & Bound (B&B) techniques. These breakthroughs led to the development of two distinct approaches. Algorithms like DL8.5 and MurTree operate on the space of nodes (or branches); they are very fast but do not penalise complex Decision Trees, i.e. they do not solve for sparsity. On the other hand, algorithms like OSDT and GOSDT operate on the space of Decision Trees; they solve for sparsity but to the detriment of speed. In this work, we introduce Branches, a novel algorithm that integrates the strengths of both paradigms. Leveraging DP and B&B, Branches achieves exceptional speed while also solving for sparsity. Central to its efficiency is a novel analytical bound enabling substantial pruning of the search space. Furthermore, Branches does not necessitate binary features. Theoretical analysis demonstrates that Branches has a lower complexity bound compared to state-of-the-art methods, a claim validated through extensive empirical evaluation. Our results illustrate that Branches outperforms the state of the art in terms of speed and number of iterations while consistently yielding optimal Decision Trees.
[ "['Ayman Chaouki' 'Jesse Read' 'Albert Bifet']" ]
null
null
2406.02176
null
null
http://arxiv.org/pdf/2406.02176v2
2024-06-05T12:23:46Z
2024-06-04T10:12:09Z
AROMA: Preserving Spatial Structure for Latent PDE Modeling with Local Neural Fields
We present AROMA (Attentive Reduced Order Model with Attention), a framework designed to enhance the modeling of partial differential equations (PDEs) using local neural fields. Our flexible encoder-decoder architecture can obtain smooth latent representations of spatial physical fields from a variety of data types, including irregular-grid inputs and point clouds. This versatility eliminates the need for patching and allows efficient processing of diverse geometries. The sequential nature of our latent representation can be interpreted spatially and permits the use of a conditional transformer for modeling the temporal dynamics of PDEs. By employing a diffusion-based formulation, we achieve greater stability and enable longer rollouts compared to conventional MSE training. AROMA's superior performance in simulating 1D and 2D equations underscores the efficacy of our approach in capturing complex dynamical behaviors.
[ "['Louis Serrano' 'Thomas X Wang' 'Etienne Le Naour' 'Jean-Noël Vittaut'\n 'Patrick Gallinari']" ]
null
null
2406.02177
null
null
http://arxiv.org/pdf/2406.02177v1
2024-06-04T10:14:39Z
2024-06-04T10:14:39Z
One-Shot Federated Learning with Bayesian Pseudocoresets
Optimization-based techniques for federated learning (FL) often come with prohibitive communication cost, as high-dimensional model parameters need to be communicated repeatedly between server and clients. In this paper, we follow a Bayesian approach that allows performing FL with one-shot communication by solving the global inference problem as a product of local client posteriors. For models with multi-modal likelihoods, such as neural networks, a naive application of this scheme is hampered, since clients will capture different posterior modes, causing a destructive collapse of the posterior on the server side. Consequently, we explore approximate inference in the function-space representation of client posteriors, which suffers less, or not at all, from multi-modality. We show that distributed function-space inference is tightly related to learning Bayesian pseudocoresets and develop a tractable Bayesian FL algorithm based on this insight. We show that this approach achieves prediction performance competitive with the state of the art while showing a striking reduction in communication cost of up to two orders of magnitude. Moreover, due to its Bayesian nature, our method also delivers well-calibrated uncertainty estimates.
[ "[\"Tim d'Hondt\" 'Mykola Pechenizkiy' 'Robert Peharz']" ]
null
null
2406.02180
null
null
http://arxiv.org/pdf/2406.02180v1
2024-06-04T10:22:12Z
2024-06-04T10:22:12Z
On The Statistical Representation Properties Of The Perturb-Softmax And The Perturb-Argmax Probability Distributions
The Gumbel-Softmax probability distribution allows learning discrete tokens in generative learning, while the Gumbel-Argmax probability distribution is useful in learning discrete structures in discriminative learning. Despite the efforts invested in optimizing these probability models, their statistical properties are under-explored. In this work, we investigate their representation properties and determine for which families of parameters these probability distributions are complete, i.e., can represent any probability distribution, and minimal, i.e., can represent a probability distribution uniquely. We rely on convexity and differentiability to determine these statistical conditions and extend this framework to general probability models, such as Gaussian-Softmax and Gaussian-Argmax. We experimentally validate the qualities of these extensions, which enjoy a faster convergence rate. We conclude the analysis by identifying two sets of parameters that satisfy these assumptions and thus admit a complete and minimal representation. Our contribution is theoretical with supporting practical evaluation.
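For concreteness, minimal sketches of the two perturbation models studied; these are the standard constructions, not code from the paper.

```python
import numpy as np

def perturb_softmax(logits, tau, rng):
    """Gumbel-Softmax: perturb logits with Gumbel noise, then apply a
    temperature-controlled softmax, yielding a relaxed one-hot sample."""
    g = -np.log(-np.log(rng.uniform(1e-12, 1.0, size=logits.shape)))
    z = (logits + g) / tau
    e = np.exp(z - z.max())
    return e / e.sum()

def perturb_argmax(logits, rng):
    """Gumbel-Argmax: the hard counterpart; the argmax of Gumbel-perturbed
    logits is distributed according to softmax(logits) (Gumbel-max trick)."""
    g = -np.log(-np.log(rng.uniform(1e-12, 1.0, size=logits.shape)))
    return int(np.argmax(logits + g))
```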
[ "['Hedda Cohen Indelman' 'Tamir Hazan']" ]
null
null
2406.02187
null
null
http://arxiv.org/pdf/2406.02187v1
2024-06-04T10:31:03Z
2024-06-04T10:31:03Z
DNCs Require More Planning Steps
Many recent works use machine learning models to solve various complex algorithmic problems. However, these models attempt to reach a solution without considering the problem's required computational complexity, which can be detrimental to their ability to solve it correctly. In this work we investigate the effect of computational time and memory on the generalization of implicit algorithmic solvers. To do so, we focus on the Differentiable Neural Computer (DNC), a general problem solver that also lets us reason directly about its usage of time and memory. We argue that the number of planning steps the model is allowed to take, which we call the "planning budget", is a constraint that can cause the model to generalize poorly and hurt its ability to fully utilize its external memory. We evaluate our method on Graph Shortest Path, Convex Hull, Graph MinCut and Associative Recall, and show how the planning budget can drastically change the behavior of the learned algorithm in terms of learned time complexity, training time, stability and generalization to inputs larger than those seen during training.
[ "['Yara Shamshoum' 'Nitzan Hodos' 'Yuval Sieradzki' 'Assaf Schuster']" ]
null
null
2406.02189
null
null
http://arxiv.org/pdf/2406.02189v1
2024-06-04T10:34:40Z
2024-06-04T10:34:40Z
Fast and Scalable Multi-Kernel Encoder Classifier
This paper introduces a new kernel-based classifier by viewing kernel matrices as generalized graphs and leveraging recent progress in graph embedding techniques. The proposed method facilitates fast and scalable kernel matrix embedding, and seamlessly integrates multiple kernels to enhance the learning process. Our theoretical analysis offers a population-level characterization of this approach using random variables. Empirically, our method demonstrates superior running time compared to standard approaches such as support vector machines and two-layer neural networks, while achieving comparable classification accuracy across various simulated and real datasets.
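A rough sketch of the idea as described here, with illustrative choices (RBF and polynomial kernels, a top-eigenpair embedding); the paper's exact embedding and kernel set may differ.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel

def multi_kernel_embedding(X, n_components=10):
    """Treat each kernel matrix as a dense weighted graph, embed it via its
    leading eigenpairs, and concatenate the per-kernel embeddings."""
    blocks = []
    for K in (rbf_kernel(X), polynomial_kernel(X)):
        vals, vecs = np.linalg.eigh(K)                 # ascending eigenvalues
        top = np.argsort(vals)[::-1][:n_components]
        blocks.append(vecs[:, top] * np.sqrt(np.abs(vals[top])))
    return np.hstack(blocks)

# toy usage
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5)); y = (X[:, 0] + X[:, 1] > 0).astype(int)
clf = LogisticRegression(max_iter=1000).fit(multi_kernel_embedding(X), y)
```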
[ "['Cencheng Shen']" ]
null
null
2406.02191
null
null
http://arxiv.org/pdf/2406.02191v2
2024-06-11T17:53:39Z
2024-06-04T10:35:16Z
On the Recoverability of Causal Relations from Temporally Aggregated I.I.D. Data
We consider the effect of temporal aggregation on instantaneous (non-temporal) causal discovery in a general setting. This is motivated by the observation that the true causal time lag is often considerably shorter than the observational interval. This discrepancy leads to high aggregation, causing time-delay causality to vanish and instantaneous dependence to manifest. Although we expect such instantaneous dependence to be consistent with the true causal relation in a certain sense, so that the discovery results remain meaningful, it is unclear what type of consistency is needed and when it is satisfied. We formally propose functional consistency and conditional independence consistency, corresponding to functional causal model-based methods and conditional independence-based methods respectively, and provide the conditions under which these consistencies hold. We show theoretically and experimentally that causal discovery results may be seriously distorted by aggregation, especially in the completely nonlinear case, and we also find that the causal relationship remains recoverable from aggregated data given partial linearity or an appropriate prior. Our findings suggest that the community should take a cautious and meticulous approach when interpreting causal discovery results from such data, and show why and when aggregation will distort the performance of causal discovery methods.
[ "['Shunxing Fan' 'Mingming Gong' 'Kun Zhang']" ]
null
null
2406.02204
null
null
http://arxiv.org/pdf/2406.02204v1
2024-06-04T10:59:54Z
2024-06-04T10:59:54Z
The Deep Latent Space Particle Filter for Real-Time Data Assimilation with Uncertainty Quantification
In Data Assimilation, observations are fused with simulations to obtain an accurate estimate of the state and parameters for a given physical system. However, combining data with a model while accurately estimating uncertainty is computationally expensive and infeasible to run in real time for complex systems. Here, we present a novel particle filter methodology, the Deep Latent Space Particle Filter or D-LSPF, that uses neural network-based surrogate models to overcome this computational challenge. The D-LSPF enables filtering in a low-dimensional latent space obtained using Wasserstein autoencoders with modified vision transformer layers for dimensionality reduction, and transformers for parameterized latent-space time stepping. As we demonstrate on three test cases, including leak localization in multi-phase pipe flow and seabed identification for fully nonlinear water waves, the D-LSPF runs orders of magnitude faster than a high-fidelity particle filter and 3-5 times faster than alternative methods, while being up to an order of magnitude more accurate. The D-LSPF thus enables real-time data assimilation with uncertainty quantification for physical systems.
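A skeleton of the general latent-space filtering loop, assuming hypothetical `encode`, `step` (the learned surrogate time-stepper, vectorized over particles) and `decode` callables standing in for the trained networks.

```python
import numpy as np

def latent_particle_filter(y_obs, encode, step, decode, n_particles, noise_std, rng):
    """Bootstrap particle filter run entirely in the latent space: particles
    are propagated with the cheap surrogate and only decoded when weighting
    them against observations."""
    z0 = encode(y_obs[0])                                       # (latent_dim,)
    z = z0 + noise_std * rng.normal(size=(n_particles, z0.shape[0]))
    for y in y_obs[1:]:
        z = step(z) + noise_std * rng.normal(size=z.shape)      # propagate
        resid = decode(z) - y                                   # obs-space error
        logw = -0.5 * np.sum(resid**2, axis=-1) / noise_std**2
        w = np.exp(logw - logw.max()); w /= w.sum()             # weights
        z = z[rng.choice(n_particles, size=n_particles, p=w)]   # resample
    return z                                                    # posterior particles
```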
[ "['Nikolaj T. Mücke' 'Sander M. Bohté' 'Cornelis W. Oosterlee']" ]
null
null
2406.02213
null
null
http://arxiv.org/pdf/2406.02213v1
2024-06-04T11:11:53Z
2024-06-04T11:11:53Z
Rectifying Reinforcement Learning for Reward Matching
The Generative Flow Network (GFlowNet) is a probabilistic framework in which an agent learns a stochastic policy and flow functions to sample objects with probability proportional to an unnormalized reward function. GFlowNets bear a strong resemblance to reinforcement learning (RL), which typically aims to maximize reward, due to their sequential decision-making processes. Recent works have studied connections between GFlowNets and maximum entropy (MaxEnt) RL, which augments the standard RL objective with entropy regularization. However, a critical theoretical gap persists: despite the apparent similarities in their sequential decision-making nature, a direct link between GFlowNets and standard RL has yet to be discovered, while bridging this gap could further unlock the potential of both fields. In this paper, we establish a new connection between GFlowNets and policy evaluation for a uniform policy. Surprisingly, we find that the resulting value function for the uniform policy has a close relationship to the flows in GFlowNets. Leveraging these insights, we further propose a novel rectified policy evaluation (RPE) algorithm, which achieves the same reward-matching effect as GFlowNets, offering a new perspective. We compare RPE, MaxEnt RL, and GFlowNets in a number of benchmarks, and show that RPE achieves competitive results compared to previous approaches. This work sheds light on the previously unexplored connection between (non-MaxEnt) RL and GFlowNets, potentially opening new avenues for future research in both fields.
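The connection rests on ordinary tabular policy evaluation under the uniform policy; a minimal sketch with assumed data structures (a dict of action-to-successor maps per state and a reward per state-action pair).

```python
import numpy as np

def uniform_policy_evaluation(n_states, transitions, rewards, gamma=1.0, sweeps=500):
    """Iterate V(s) <- (1/|A(s)|) * sum_a [ r(s,a) + gamma * V(next(s,a)) ];
    the paper relates this uniform-policy value function to GFlowNet flows."""
    V = np.zeros(n_states)
    for _ in range(sweeps):
        for s in range(n_states):
            if not transitions[s]:                     # terminal state
                continue
            V[s] = np.mean([rewards[(s, a)] + gamma * V[s_next]
                            for a, s_next in transitions[s].items()])
    return V
```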
[ "['Haoran He' 'Emmanuel Bengio' 'Qingpeng Cai' 'Ling Pan']" ]
null
null
2406.02214
null
null
http://arxiv.org/pdf/2406.02214v1
2024-06-04T11:14:21Z
2024-06-04T11:14:21Z
SLTrain: a sparse plus low-rank approach for parameter and memory efficient pretraining
Large language models (LLMs) have shown impressive capabilities across various tasks. However, training LLMs from scratch requires significant computational power and extensive memory capacity. Recent studies have explored low-rank structures on weights for efficient fine-tuning in terms of parameters and memory, either through low-rank adaptation or factorization. While effective for fine-tuning, low-rank structures are generally less suitable for pretraining because they restrict parameters to a low-dimensional subspace. In this work, we propose to parameterize the weights as a sum of low-rank and sparse matrices for pretraining, which we call SLTrain. The low-rank component is learned via matrix factorization, while for the sparse component, we employ a simple strategy of uniformly selecting the sparsity support at random and learning only the non-zero entries with the fixed support. While simple, the random fixed-support sparse learning strategy significantly enhances pretraining when combined with low-rank learning. Our results show that SLTrain adds minimal extra parameters and memory costs compared to pretraining with low-rank parameterization, yet achieves substantially better performance, which is comparable to full-rank training. Remarkably, when combined with quantization and per-layer updates, SLTrain can reduce memory requirements by up to 73% when pretraining the LLaMA 7B model.
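A rough PyTorch sketch of the parameterization as the abstract describes it; hyperparameters are illustrative, and a real implementation would keep S in a sparse format rather than materializing it densely.

```python
import torch
import torch.nn as nn

class SLLinear(nn.Module):
    """Linear layer with weights W = B @ A + S: a trained low-rank factor
    pair plus a sparse matrix whose random support is fixed at init and
    whose non-zero values are the only sparse parameters learned."""
    def __init__(self, d_in, d_out, rank=8, density=0.03):
        super().__init__()
        self.A = nn.Parameter(torch.randn(rank, d_in) / d_in**0.5)
        self.B = nn.Parameter(torch.zeros(d_out, rank))
        idx = torch.randperm(d_in * d_out)[: int(density * d_in * d_out)]
        self.register_buffer("rows", idx // d_in)      # fixed random support
        self.register_buffer("cols", idx % d_in)
        self.values = nn.Parameter(torch.zeros(len(idx)))

    def forward(self, x):
        low_rank = x @ self.A.t() @ self.B.t()
        S = torch.zeros(self.B.shape[0], self.A.shape[1], device=x.device)
        S[self.rows, self.cols] = self.values          # dense only for clarity
        return low_rank + x @ S.t()
```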
[ "['Andi Han' 'Jiaxiang Li' 'Wei Huang' 'Mingyi Hong' 'Akiko Takeda'\n 'Pratik Jawanpuria' 'Bamdev Mishra']" ]
null
null
2406.02223
null
null
http://arxiv.org/abs/2406.02223v1
2024-06-04T11:33:40Z
2024-06-04T11:33:40Z
SMCL: Saliency Masked Contrastive Learning for Long-tailed Recognition
Real-world data often follow a long-tailed distribution with a high imbalance in the number of samples between classes. The problem with training from imbalanced data is that some background features, common to all classes, can be unobserved in classes with scarce samples. As a result, these background features become correlated with the "major" classes, biasing predictions toward them. In this paper, we propose saliency masked contrastive learning, a new method that uses saliency masking and contrastive learning to mitigate the problem and improve the generalizability of a model. Our key idea is to mask the important part of an image using saliency detection and use contrastive learning to move the masked image towards minor classes in the feature space, so that background features present in the masked image are no longer correlated with the original class. Experiment results show that our method achieves state-of-the-art level performance on benchmark long-tailed datasets.
[ "['Sanglee Park' 'Seung-won Hwang' 'Jungmin So']" ]
null
null
2406.02225
null
null
http://arxiv.org/pdf/2406.02225v1
2024-06-04T11:37:11Z
2024-06-04T11:37:11Z
Riemannian coordinate descent algorithms on matrix manifolds
Many machine learning applications are naturally formulated as optimization problems on Riemannian manifolds. The main idea behind Riemannian optimization is to maintain the feasibility of the variables while moving along a descent direction on the manifold. This results in updating all the variables at every iteration. In this work, we provide a general framework for developing computationally efficient coordinate descent (CD) algorithms on matrix manifolds that allows updating only a few variables at every iteration while adhering to the manifold constraint. In particular, we propose CD algorithms for various manifolds such as Stiefel, Grassmann, (generalized) hyperbolic, symplectic, and symmetric positive (semi)definite. While the cost per iteration of the proposed CD algorithms is low, we further develop a more efficient variant via a first-order approximation of the objective function. We analyze their convergence and complexity, and empirically illustrate their efficacy in several applications.
[ "['Andi Han' 'Pratik Jawanpuria' 'Bamdev Mishra']" ]
null
null
2406.02234
null
null
http://arxiv.org/pdf/2406.02234v1
2024-06-04T11:56:19Z
2024-06-04T11:56:19Z
On the Limitations of Fractal Dimension as a Measure of Generalization
Bounding and predicting the generalization gap of overparameterized neural networks remains a central open problem in theoretical machine learning. Neural network optimization trajectories have been proposed to possess fractal structure, leading to bounds and generalization measures based on notions of fractal dimension on these trajectories. Prominently, both the Hausdorff dimension and the persistent homology dimension have been proposed to correlate with generalization gap, thus serving as a measure of generalization. This work performs an extended evaluation of these topological generalization measures. We demonstrate that fractal dimension fails to predict generalization of models trained from poor initializations. We further identify that the $\ell^2$ norm of the final parameter iterate, one of the simplest complexity measures in learning theory, correlates more strongly with the generalization gap than these notions of fractal dimension. Finally, our study reveals the intriguing manifestation of model-wise double descent in persistent homology-based generalization measures. This work lays the ground for a deeper investigation of the causal relationships between fractal geometry, topological data analysis, and neural network optimization.
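The baseline measure the study singles out is easy to state; a short sketch, assuming `runs_params` and `gaps` have been collected over a family of trained models.

```python
import numpy as np
from scipy.stats import spearmanr

def l2_norm_measure(params):
    """l2 norm of the final parameter iterate, given a list of weight arrays."""
    return np.sqrt(sum(np.sum(p**2) for p in params))

# rank-correlate the measure with generalization gaps across runs:
# rho, _ = spearmanr([l2_norm_measure(p) for p in runs_params], gaps)
```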
[ "['Charlie Tan' 'Inés García-Redondo' 'Qiquan Wang' 'Michael M. Bronstein'\n 'Anthea Monod']" ]
null
null
2406.02245
null
null
http://arxiv.org/pdf/2406.02245v1
2024-06-04T12:09:44Z
2024-06-04T12:09:44Z
Description Boosting for Zero-Shot Entity and Relation Classification
Zero-shot entity and relation classification models leverage available external information of unseen classes -- e.g., textual descriptions -- to annotate input text data. Thanks to the minimum data requirement, Zero-Shot Learning (ZSL) methods have high value in practice, especially in applications where labeled data is scarce. Even though recent research in ZSL has demonstrated significant results, our analysis reveals that those methods are sensitive to the provided textual descriptions of entities (or relations). Even a minor modification of descriptions can lead to a change in the decision boundary between entity (or relation) classes. In this paper, we formally define the problem of identifying effective descriptions for zero-shot inference. We propose a strategy for generating variations of an initial description, a heuristic for ranking them, and an ensemble method capable of boosting the predictions of zero-shot models through description enhancement. Empirical results on four different entity and relation classification datasets show that our proposed method outperforms existing approaches and achieves new SOTA results on these datasets under ZSL settings. The source code of the proposed solutions and the evaluation framework are open-sourced.
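A sketch of the boosting-by-ensembling step, with `variants_fn` (the description generator) and `score_fn` (the underlying zero-shot scorer) as hypothetical stand-ins for the paper's components.

```python
import numpy as np

def ensemble_zero_shot(texts, class_descriptions, variants_fn, score_fn):
    """Score each class with several variants of its description and average,
    smoothing out the sensitivity to any single description."""
    per_class = []
    for desc in class_descriptions:
        variants = [desc] + variants_fn(desc)              # e.g. paraphrases
        per_class.append(np.mean([score_fn(v, texts) for v in variants], axis=0))
    return np.argmax(np.stack(per_class, axis=1), axis=1)  # predicted class ids
```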
[ "['Gabriele Picco' 'Leopold Fuchs' 'Marcos Martínez Galindo'\n 'Alberto Purpura' 'Vanessa López' 'Hoang Thanh Lam']" ]
null
null
2406.02255
null
null
http://arxiv.org/pdf/2406.02255v1
2024-06-04T12:21:55Z
2024-06-04T12:21:55Z
MidiCaps -- A large-scale MIDI dataset with text captions
Generative models guided by text prompts are becoming increasingly popular. However, no text-to-MIDI models currently exist, mostly due to the lack of a captioned MIDI dataset. This work aims to enable research that combines LLMs with symbolic music by presenting the first large-scale MIDI dataset with text captions that is openly available: MidiCaps. MIDI (Musical Instrument Digital Interface) files are a widely used format for encoding musical information. Their structured format captures the nuances of musical composition and has practical applications for music producers, composers, musicologists, and performers. Inspired by recent advancements in captioning techniques applied to various domains, we present a large-scale curated dataset of over 168k MIDI files accompanied by textual descriptions. Each MIDI caption succinctly describes the musical content, encompassing tempo, chord progression, time signature, instruments present, genre and mood, thereby facilitating multi-modal exploration and analysis. The dataset contains a mix of various genres, styles, and complexities, offering a rich source for training and evaluating models for tasks such as music information retrieval, music understanding and cross-modal translation. We provide detailed statistics about the dataset and have assessed the quality of the captions in an extensive listening study. We anticipate that this resource will stimulate further research at the intersection of music and natural language processing, fostering advancements in both fields.
[ "['Jan Melechovsky' 'Abhinaba Roy' 'Dorien Herremans']" ]
null
null
2406.02258
null
null
http://arxiv.org/pdf/2406.02258v1
2024-06-04T12:29:51Z
2024-06-04T12:29:51Z
Reinforcement Learning with Lookahead Information
We study reinforcement learning (RL) problems in which agents observe the reward or transition realizations at their current state before deciding which action to take. Such observations are available in many applications, including transactions, navigation and more. When the environment is known, previous work shows that this lookahead information can drastically increase the collected reward. However, outside of specific applications, existing approaches for interacting with unknown environments are not well-adapted to these observations. In this work, we close this gap and design provably-efficient learning algorithms able to incorporate lookahead information. To achieve this, we perform planning using the empirical distribution of the reward and transition observations, in contrast to vanilla approaches that only rely on estimated expectations. We prove that our algorithms achieve tight regret versus a baseline that also has access to lookahead information - linearly increasing the amount of collected reward compared to agents that cannot handle lookahead information.
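A minimal illustration of the planning principle for reward lookahead, with hypothetical inputs: the agent acts on this step's realized rewards rather than on their expectations.

```python
import numpy as np

def act_with_reward_lookahead(realized_rewards, next_states, V_hat):
    """One-step choice given lookahead: maximize the observed reward
    realization plus the estimated value of the successor state, instead
    of an expected reward as in vanilla approaches."""
    scores = [r + V_hat[s2] for r, s2 in zip(realized_rewards, next_states)]
    return int(np.argmax(scores))
```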
[ "['Nadav Merlis']" ]
null
null
2406.02268
null
null
http://arxiv.org/pdf/2406.02268v1
2024-06-04T12:47:11Z
2024-06-04T12:47:11Z
Analyzing the Benefits of Prototypes for Semi-Supervised Category Learning
Categories can be represented at different levels of abstraction, from prototypes focused on the most typical members to remembering all observed exemplars of the category. These representations have been explored in the context of supervised learning, where stimuli are presented with known category labels. We examine the benefits of prototype-based representations in a less-studied domain: semi-supervised learning, where agents must form unsupervised representations of stimuli before receiving category labels. We study this problem in a Bayesian unsupervised learning model called a variational auto-encoder, and we draw on recent advances in machine learning to implement a prior that encourages the model to use abstract prototypes to represent data. We apply this approach to image datasets and show that forming prototypes can improve semi-supervised category learning. Additionally, we study the latent embeddings of the models and show that these prototypes allow the models to form clustered representations without supervision, contributing to their success in downstream categorization performance.
[ "['Liyi Zhang' 'Logan Nelson' 'Thomas L. Griffiths']" ]
null
null
2406.02269
null
null
http://arxiv.org/pdf/2406.02269v1
2024-06-04T12:47:13Z
2024-06-04T12:47:13Z
Graph Neural Networks Do Not Always Oversmooth
Graph neural networks (GNNs) have emerged as powerful tools for processing relational data in applications. However, GNNs suffer from the problem of oversmoothing, the property that the features of all nodes exponentially converge to the same vector over layers, prohibiting the design of deep GNNs. In this work we study oversmoothing in graph convolutional networks (GCNs) by using their Gaussian process (GP) equivalence in the limit of infinitely many hidden features. By generalizing methods from conventional deep neural networks (DNNs), we can describe the distribution of features at the output layer of deep GCNs in terms of a GP: as expected, we find that typical parameter choices from the literature lead to oversmoothing. The theory, however, allows us to identify a new, nonoversmoothing phase: if the initial weights of the network have sufficiently large variance, GCNs do not oversmooth, and node features remain informative even at large depth. We demonstrate the validity of this prediction in finite-size GCNs by training a linear classifier on their output. Moreover, using the linearization of the GCN GP, we generalize the concept of propagation depth of information from DNNs to GCNs. This propagation depth diverges at the transition between the oversmoothing and non-oversmoothing phase. We test the predictions of our approach and find good agreement with finite-size GCNs. Initializing GCNs near the transition to the non-oversmoothing phase, we obtain networks which are both deep and expressive.
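A finite-size numerical sketch of the phenomenon; the paper's analysis is in the infinite-width GP limit, while this toy uses a deep tanh GCN and tracks how similar node features become layer by layer.

```python
import numpy as np

def feature_similarity_by_depth(A_hat, depth, sigma_w, d=64, seed=0):
    """Propagate random features through a deep GCN with weight std sigma_w
    and record mean pairwise cosine similarity per layer; similarity near 1
    signals oversmoothing, which a large sigma_w can avoid. A_hat is an
    assumed precomputed normalized adjacency matrix."""
    rng = np.random.default_rng(seed)
    H = rng.normal(size=(A_hat.shape[0], d))
    sims = []
    for _ in range(depth):
        W = rng.normal(scale=sigma_w / np.sqrt(d), size=(d, d))
        H = np.tanh(A_hat @ H @ W)
        Hn = H / np.linalg.norm(H, axis=1, keepdims=True)
        sims.append(float(np.mean(Hn @ Hn.T)))
    return sims
```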
[ "['Bastian Epping' 'Alexandre René' 'Moritz Helias' 'Michael T. Schaub']" ]
null
null
2406.02273
null
null
http://arxiv.org/pdf/2406.02273v1
2024-06-04T12:49:46Z
2024-06-04T12:49:46Z
A KL-based Analysis Framework with Applications to Non-Descent Optimization Methods
We propose a novel analysis framework for non-descent-type optimization methodologies in nonconvex scenarios based on the Kurdyka-Łojasiewicz property. Our framework allows covering a broad class of algorithms, including those commonly employed in stochastic and distributed optimization. Specifically, it enables the analysis of first-order methods that lack a sufficient descent property and do not require access to full (deterministic) gradient information. We leverage this framework to establish, for the first time, iterate convergence and the corresponding rates for the decentralized gradient method and federated averaging under mild assumptions. Furthermore, based on the new analysis techniques, we show the convergence of the random reshuffling and stochastic gradient descent methods without necessitating typical a priori bounded iterates assumptions.
[ "['Junwen Qiu' 'Bohao Ma' 'Xiao Li' 'Andre Milzarek']" ]
null
null
2406.02282
null
null
http://arxiv.org/pdf/2406.02282v1
2024-06-04T12:56:10Z
2024-06-04T12:56:10Z
Test-Time Regret Minimization in Meta Reinforcement Learning
Meta reinforcement learning sets a distribution over a set of tasks on which the agent can train at will, then is asked to learn an optimal policy for any test task efficiently. In this paper, we consider a finite set of tasks modeled through Markov decision processes with various dynamics. We assume that a long training phase has already taken place, from which the set of tasks is perfectly recovered, and we focus on regret minimization against the optimal policy in the unknown test task. Under a separation condition that states the existence of a state-action pair revealing a task against another, Chen et al. (2022) show that $O(M^2 \log(H))$ regret can be achieved, where $M, H$ are the number of tasks in the set and test episodes, respectively. In our first contribution, we demonstrate that the latter rate is nearly optimal by developing a novel lower bound for test-time regret minimization under separation, showing that a linear dependence on $M$ is unavoidable. Then, we present a family of stronger yet reasonable assumptions beyond separation, which we call strong identifiability, enabling algorithms achieving fast rates $\log(H)$ and sublinear dependence on $M$ simultaneously. Our paper provides a new understanding of the statistical barriers of test-time regret minimization and when fast rates can be achieved.
[ "['Mirco Mutti' 'Aviv Tamar']" ]
null
null
2406.02285
null
null
http://arxiv.org/pdf/2406.02285v1
2024-06-04T12:58:19Z
2024-06-04T12:58:19Z
Towards Supervised Performance on Speaker Verification with Self-Supervised Learning by Leveraging Large-Scale ASR Models
Recent advancements in Self-Supervised Learning (SSL) have shown promising results in Speaker Verification (SV). However, narrowing the performance gap with supervised systems remains an ongoing challenge. Several studies have observed that speech representations from large-scale ASR models contain valuable speaker information. This work explores the limitations of fine-tuning these models for SV using an SSL contrastive objective in an end-to-end approach. Then, we propose a framework to learn speaker representations in an SSL context by fine-tuning a pre-trained WavLM with a supervised loss using pseudo-labels. Initial pseudo-labels are derived from an SSL DINO-based model and are iteratively refined by clustering the model embeddings. Our method achieves 0.99% EER on VoxCeleb1-O, establishing the new state-of-the-art on self-supervised SV. As this performance is close to our supervised baseline of 0.94% EER, this contribution is a step towards supervised performance on SV with SSL.
[ "['Victor Miara' 'Theo Lepage' 'Reda Dehak']" ]
null
null
2406.02290
null
null
http://arxiv.org/pdf/2406.02290v2
2024-06-06T16:09:31Z
2024-06-04T13:05:47Z
A Study of Optimizations for Fine-tuning Large Language Models
Fine-tuning large language models is a popular choice among users trying to adapt them for specific applications. However, fine-tuning these models is a demanding task because the user has to examine several factors, such as resource budget, runtime, model size and context length, among others. A specific challenge is that fine-tuning is memory intensive, imposing constraints on the required hardware memory and the context length of training data that can be handled. In this work, we share a detailed study on a variety of fine-tuning optimizations across different fine-tuning scenarios. In particular, we assess Gradient Checkpointing, Low-Rank Adaptation, DeepSpeed's Zero Redundancy Optimizer and FlashAttention. With a focus on memory and runtime, we examine the impact of different optimization combinations on GPU memory usage and execution runtime during the fine-tuning phase. We provide our recommendation on the best default optimization for balancing memory and runtime across diverse model sizes. We share effective strategies for fine-tuning very large models with tens or hundreds of billions of parameters and enabling large context lengths during fine-tuning. Furthermore, we propose the appropriate optimization mixtures for fine-tuning under GPU resource limitations.
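A minimal sketch combining two of the assessed optimizations via Hugging Face transformers and peft; the model name and adapter hyperparameters are illustrative, not taken from the study.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("gpt2")
model.gradient_checkpointing_enable()            # trade recompute for memory

lora_cfg = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                      target_modules=["c_attn"]) # GPT-2's attention projection
model = get_peft_model(model, lora_cfg)          # train only the adapters
model.print_trainable_parameters()
```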
[ "['Arjun Singh' 'Nikhil Pandey' 'Anup Shirgaonkar' 'Pavan Manoj'\n 'Vijay Aski']" ]
null
null
2406.02292
null
null
http://arxiv.org/pdf/2406.02292v1
2024-06-04T13:11:01Z
2024-06-04T13:11:01Z
An Axiomatic Approach to Loss Aggregation and an Adapted Aggregating Algorithm
Supervised learning has gone beyond the expected risk minimization framework. Central to most of these developments is the introduction of more general aggregation functions for losses incurred by the learner. In this paper, we turn towards online learning under expert advice. Via easily justified assumptions we characterize a set of reasonable loss aggregation functions as quasi-sums. Based upon this insight, we suggest a variant of the Aggregating Algorithm tailored to these more general aggregation functions. This variant inherits most of the nice theoretical properties of the AA, such as recovery of Bayes' updating and a time-independent bound on quasi-sum regret. Finally, we argue that generalized aggregations express the attitude of the learner towards losses.
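For orientation, a sketch of the classical exponential-weights core that the Aggregating Algorithm builds on; the AA proper merges predictions through a substitution function, so the simple weighted average below is a stand-in, and the paper's variant replaces plain sums of losses with quasi-sums.

```python
import numpy as np

def exponential_weights(expert_preds, outcomes, eta, loss):
    """expert_preds: (n_experts, T) array; each round the learner merges the
    experts' predictions and every expert is reweighted by exp(-eta * loss)."""
    w = np.ones(expert_preds.shape[0]) / expert_preds.shape[0]
    learner_losses = []
    for t, y in enumerate(outcomes):
        pred = w @ expert_preds[:, t]                     # merged prediction
        learner_losses.append(loss(pred, y))
        w = w * np.exp(-eta * np.array([loss(p, y) for p in expert_preds[:, t]]))
        w = w / w.sum()                                   # renormalize weights
    return np.array(learner_losses), w
```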
[ "['Armando J. Cabrera Pacheco' 'Rabanus Derr' 'Robert C. Williamson']" ]
null
null
2406.02293
null
null
http://arxiv.org/pdf/2406.02293v1
2024-06-04T13:13:29Z
2024-06-04T13:13:29Z
Composite Quantile Regression With XGBoost Using the Novel Arctan Pinball Loss
This paper explores the use of XGBoost for composite quantile regression. XGBoost is a highly popular model renowned for its flexibility, efficiency, and capability to deal with missing data. The optimization uses a second order approximation of the loss function, complicating the use of loss functions with a zero or vanishing second derivative. Quantile regression -- a popular approach to obtain conditional quantiles when point estimates alone are insufficient -- unfortunately uses such a loss function, the pinball loss. Existing workarounds are typically inefficient and can result in severe quantile crossings. In this paper, we present a smooth approximation of the pinball loss, the arctan pinball loss, that is tailored to the needs of XGBoost. Specifically, contrary to other smooth approximations, the arctan pinball loss has a relatively large second derivative, which makes it more suitable to use in the second order approximation. Using this loss function enables the simultaneous prediction of multiple quantiles, which is more efficient and results in far fewer quantile crossings.
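A hedged sketch of such a smooth objective for a single quantile tau: approximating the pinball indicator 1{u<0} by 1/2 - arctan(u/s)/pi gives a loss whose second derivative, 2*s^3 / (pi*(s^2 + u^2)^2), is strictly positive, which is exactly the property XGBoost's second-order approximation needs (the paper's exact formula may differ).

```python
import numpy as np
import xgboost as xgb

def arctan_pinball_objective(tau, s=0.1):
    """Custom XGBoost objective for L(u) = u * (tau - 1/2 + arctan(u/s)/pi),
    with u = y - pred; gradients are taken with respect to pred."""
    def objective(preds, dtrain):
        u = dtrain.get_label() - preds
        grad = -(tau - 0.5 + np.arctan(u / s) / np.pi
                 + s * u / (np.pi * (s**2 + u**2)))       # dL/dpred = -dL/du
        hess = 2 * s**3 / (np.pi * (s**2 + u**2)**2)      # always > 0
        return grad, hess
    return objective

# booster = xgb.train(params, dtrain, obj=arctan_pinball_objective(tau=0.9))
```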
[ "['Laurens Sluijterman' 'Frank Kreuwel' 'Eric Cator' 'Tom Heskes']" ]
null
null
2406.02294
null
null
http://arxiv.org/pdf/2406.02294v1
2024-06-04T13:16:08Z
2024-06-04T13:16:08Z
Smaller Batches, Bigger Gains? Investigating the Impact of Batch Sizes on Reinforcement Learning Based Real-World Production Scheduling
Production scheduling is an essential task in manufacturing, with Reinforcement Learning (RL) emerging as a key solution. In a previous work, RL was utilized to solve an extended permutation flow shop scheduling problem (PFSSP) for a real-world production line with two stages, linked by a central buffer. The RL agent was trained to sequence equally-sized product batches to minimize setup efforts and idle times. However, the substantial impact caused by varying the size of these product batches has not yet been explored. In this follow-up study, we investigate the effects of varying batch sizes, exploring both the quality of solutions and the training dynamics of the RL agent. The results demonstrate that it is possible to methodically identify reasonable boundaries for the batch size. These boundaries are determined on one side by the increasing sample complexity associated with smaller batch sizes, and on the other side by the decreasing flexibility of the agent when dealing with larger batch sizes. This gives the practitioner the ability to make an informed decision regarding the selection of an appropriate batch size. Moreover, we introduce and investigate two new curriculum learning strategies to enable training with small batch sizes. The findings of this work offer the potential for application in several industrial use cases with comparable scheduling problems.
[ "['Arthur Müller' 'Felix Grumbach' 'Matthia Sabatelli']" ]
null
null
2406.02295
null
null
http://arxiv.org/pdf/2406.02295v1
2024-06-04T13:16:34Z
2024-06-04T13:16:34Z
How to Explore with Belief: State Entropy Maximization in POMDPs
Recent works have studied *state entropy maximization* in reinforcement learning, in which the agent's objective is to learn a policy inducing high entropy over states visitation (Hazan et al., 2019). They typically assume full observability of the state of the system, so that the entropy of the observations is maximized. In practice, the agent may only get *partial* observations, e.g., a robot perceiving the state of a physical space through proximity sensors and cameras. A significant mismatch between the entropy over observations and true states of the system can arise in those settings. In this paper, we address the problem of entropy maximization over the *true states* with a decision policy conditioned on partial observations *only*. The latter is a generalization of POMDPs, which is intractable in general. We develop a memory and computationally efficient *policy gradient* method to address a first-order relaxation of the objective defined on *belief* states, providing various formal characterizations of approximation gaps, the optimization landscape, and the *hallucination* problem. This paper aims to generalize state entropy maximization to more realistic domains that meet the challenges of applications.
[ "['Riccardo Zamboni' 'Duilio Cirino' 'Marcello Restelli' 'Mirco Mutti']" ]
null
null
2406.02296
null
null
http://arxiv.org/pdf/2406.02296v1
2024-06-04T13:17:24Z
2024-06-04T13:17:24Z
Learning-Rate-Free Stochastic Optimization over Riemannian Manifolds
In recent years, interest in gradient-based optimization over Riemannian manifolds has surged. However, a significant challenge lies in the reliance on hyperparameters, especially the learning rate, which requires meticulous tuning by practitioners to ensure convergence at a suitable rate. In this work, we introduce innovative learning-rate-free algorithms for stochastic optimization over Riemannian manifolds, eliminating the need for hand-tuning and providing a more robust and user-friendly approach. We establish high probability convergence guarantees that are optimal, up to logarithmic factors, compared to the best-known optimally tuned rate in the deterministic setting. Our approach is validated through numerical experiments, demonstrating competitive performance against learning-rate-dependent algorithms.
[ "['Daniel Dodd' 'Louis Sharrock' 'Christopher Nemeth']" ]