Dataset schema: categories (string), doi (string), id (string), year (float64), venue (string), link (string), updated (string), published (string), title (string), abstract (string), authors (list). The categories, doi, year, and venue fields are null for every row shown below, so each entry lists only the id, link, updated, published, title, abstract, and authors fields.

id: 2402.19446 | link: http://arxiv.org/pdf/2402.19446v1 | updated: 2024-02-29T18:45:56Z | published: 2024-02-29T18:45:56Z
ArCHer: Training Language Model Agents via Hierarchical Multi-Turn RL
A broad use case of large language models (LLMs) is in goal-directed decision-making tasks (or "agent" tasks), where an LLM needs to not just generate completions for a given prompt, but rather make intelligent decisions over a multi-turn interaction to accomplish a task (e.g., when interacting with the web, using tools, or providing customer support). Reinforcement learning (RL) provides a general paradigm to address such agent tasks, but current RL methods for LLMs largely focus on optimizing single-turn rewards. By construction, most single-turn RL methods cannot endow LLMs with the ability to intelligently seek information over multiple turns, perform credit assignment, or reason about their past actions -- all of which are critical in agent tasks. This raises the question: how can we design effective and efficient multi-turn RL algorithms for LLMs? In this paper, we develop a framework for building multi-turn RL algorithms for fine-tuning LLMs that preserves the flexibility of existing single-turn RL methods for LLMs (e.g., proximal policy optimization), while accommodating multiple turns, long horizons, and delayed rewards effectively. To do this, our framework adopts a hierarchical RL approach and runs two RL algorithms in parallel: a high-level off-policy value-based RL algorithm to aggregate reward over utterances, and a low-level RL algorithm that utilizes this high-level value function to train a token policy within each utterance or turn. Our hierarchical framework, Actor-Critic Framework with a Hierarchical Structure (ArCHer), can also give rise to other RL methods. Empirically, we find that ArCHer significantly improves efficiency and performance on agent tasks, attaining a sample efficiency of about 100x over existing methods, while also improving with larger model capacity (up to the 7 billion scale that we tested on).
[ "['Yifei Zhou' 'Andrea Zanette' 'Jiayi Pan' 'Sergey Levine' 'Aviral Kumar']" ]
null
null
2402.19449
null
null
http://arxiv.org/pdf/2402.19449v2
2024-07-12T05:10:32Z
2024-02-29T18:47:52Z
Heavy-Tailed Class Imbalance and Why Adam Outperforms Gradient Descent on Language Models
Adam has been shown to outperform gradient descent on large language models by a larger margin than on other tasks, but it is unclear why. We show that a key factor in this performance gap is the heavy-tailed class imbalance found in language tasks. When trained with gradient descent, the loss of infrequent words decreases more slowly than the loss of frequent ones. This leads to a slow decrease in the average loss, as most samples come from infrequent words. On the other hand, Adam and sign-based methods are less sensitive to this problem. To establish that this behavior is caused by class imbalance, we show empirically that it can be reproduced across architectures and data types, on language transformers, vision CNNs, and linear models. On a linear model with cross-entropy loss, we show that class imbalance leads to imbalanced, correlated gradients and Hessians that have been hypothesized to benefit Adam. We also prove that, in continuous time, gradient descent converges slowly on low-frequency classes while sign descent does not.
[ "['Frederik Kunstner' 'Robin Yadav' 'Alan Milligan' 'Mark Schmidt'\n 'Alberto Bietti']" ]
id: 2402.19455 | link: http://arxiv.org/pdf/2402.19455v2 | updated: 2024-06-25T22:43:54Z | published: 2024-02-29T18:50:11Z
Listening to the Noise: Blind Denoising with Gibbs Diffusion
In recent years, denoising problems have become intertwined with the development of deep generative models. In particular, diffusion models are trained like denoisers, and the distribution they model coincides with denoising priors in the Bayesian picture. However, denoising through diffusion-based posterior sampling requires the noise level and covariance to be known, preventing blind denoising. We overcome this limitation by introducing Gibbs Diffusion (GDiff), a general methodology addressing posterior sampling of both the signal and the noise parameters. Assuming arbitrary parametric Gaussian noise, we develop a Gibbs algorithm that alternates between sampling steps from a conditional diffusion model trained to map the signal prior to the family of noise distributions, and a Monte Carlo sampler to infer the noise parameters. Our theoretical analysis highlights potential pitfalls, guides diagnostic usage, and quantifies errors in the Gibbs stationary distribution caused by the diffusion model. We showcase our method for 1) blind denoising of natural images involving colored noises with unknown amplitude and spectral index, and 2) a cosmology problem, namely the analysis of cosmic microwave background data, where Bayesian inference of "noise" parameters means constraining models of the evolution of the Universe.
[ "['David Heurtel-Depeiges' 'Charles C. Margossian' 'Ruben Ohana'\n 'Bruno Régaldo-Saint Blancard']" ]
id: 2402.19460 | link: http://arxiv.org/pdf/2402.19460v1 | updated: 2024-02-29T18:52:56Z | published: 2024-02-29T18:52:56Z
Benchmarking Uncertainty Disentanglement: Specialized Uncertainties for Specialized Tasks
Uncertainty quantification, once a singular task, has evolved into a spectrum of tasks, including abstained prediction, out-of-distribution detection, and aleatoric uncertainty quantification. The latest goal is disentanglement: the construction of multiple estimators that are each tailored to one and only one task. Hence, there is a plethora of recent advances with different intentions - that often entirely deviate from practical behavior. This paper conducts a comprehensive evaluation of numerous uncertainty estimators across diverse tasks on ImageNet. We find that, despite promising theoretical endeavors, disentanglement is not yet achieved in practice. Additionally, we reveal which uncertainty estimators excel at which specific tasks, providing insights for practitioners and guiding future research toward task-centric and disentangled uncertainty estimation methods. Our code is available at https://github.com/bmucsanyi/bud.
[ "['Bálint Mucsányi' 'Michael Kirchhof' 'Seong Joon Oh']" ]
null
null
2402.19464
null
null
http://arxiv.org/pdf/2402.19464v1
2024-02-29T18:55:03Z
2024-02-29T18:55:03Z
Curiosity-driven Red-teaming for Large Language Models
Large language models (LLMs) hold great potential for many natural language applications but risk generating incorrect or toxic content. To probe when an LLM generates unwanted content, the current paradigm is to recruit a red team of human testers to design input prompts (i.e., test cases) that elicit undesirable responses from LLMs. However, relying solely on human testers is expensive and time-consuming. Recent works automate red teaming by training a separate red team LLM with reinforcement learning (RL) to generate test cases that maximize the chance of eliciting undesirable responses from the target LLM. However, current RL methods are only able to generate a small number of effective test cases, resulting in a low coverage of the span of prompts that elicit undesirable responses from the target LLM. To overcome this limitation, we draw a connection between the problem of increasing the coverage of generated test cases and the well-studied approach of curiosity-driven exploration that optimizes for novelty. Our method of curiosity-driven red teaming (CRT) achieves greater coverage of test cases while maintaining or increasing their effectiveness compared to existing methods. Our method, CRT, successfully provokes toxic responses from a LLaMA2 model that has been heavily fine-tuned using human preferences to avoid toxic outputs. Code is available at https://github.com/Improbable-AI/curiosity_redteam.
[ "['Zhang-Wei Hong' 'Idan Shenfeld' 'Tsun-Hsuan Wang' 'Yung-Sung Chuang'\n 'Aldo Pareja' 'James Glass' 'Akash Srivastava' 'Pulkit Agrawal']" ]
id: 2402.19469 | link: http://arxiv.org/pdf/2402.19469v1 | updated: 2024-02-29T18:57:37Z | published: 2024-02-29T18:57:37Z
Humanoid Locomotion as Next Token Prediction
We cast real-world humanoid control as a next token prediction problem, akin to predicting the next word in language. Our model is a causal transformer trained via autoregressive prediction of sensorimotor trajectories. To account for the multi-modal nature of the data, we perform prediction in a modality-aligned way, and for each input token predict the next token from the same modality. This general formulation enables us to leverage data with missing modalities, like video trajectories without actions. We train our model on a collection of simulated trajectories coming from prior neural network policies, model-based controllers, motion capture data, and YouTube videos of humans. We show that our model enables a full-sized humanoid to walk in San Francisco zero-shot. Our model can transfer to the real world even when trained on only 27 hours of walking data, and can generalize to commands not seen during training like walking backward. These findings suggest a promising path toward learning challenging real-world control tasks by generative modeling of sensorimotor trajectories.
[ "['Ilija Radosavovic' 'Bike Zhang' 'Baifeng Shi' 'Jathushan Rajasegaran'\n 'Sarthak Kamat' 'Trevor Darrell' 'Koushil Sreenath' 'Jitendra Malik']" ]
null
null
2402.19472
null
null
http://arxiv.org/pdf/2402.19472v1
2024-02-29T18:58:26Z
2024-02-29T18:58:26Z
Lifelong Benchmarks: Efficient Model Evaluation in an Era of Rapid Progress
Standardized benchmarks drive progress in machine learning. However, with repeated testing, the risk of overfitting grows as algorithms over-exploit benchmark idiosyncrasies. In our work, we seek to mitigate this challenge by compiling ever-expanding large-scale benchmarks called Lifelong Benchmarks. As exemplars of our approach, we create Lifelong-CIFAR10 and Lifelong-ImageNet, containing (for now) 1.69M and 1.98M test samples, respectively. While reducing overfitting, lifelong benchmarks introduce a key challenge: the high cost of evaluating a growing number of models across an ever-expanding sample set. To address this challenge, we also introduce an efficient evaluation framework: Sort & Search (S&S), which reuses previously evaluated models by leveraging dynamic programming algorithms to selectively rank and sub-select test samples, enabling cost-effective lifelong benchmarking. Extensive empirical evaluations across 31,000 models demonstrate that S&S achieves highly-efficient approximate accuracy measurement, reducing compute cost from 180 GPU days to 5 GPU hours (1000x reduction) on a single A100 GPU, with low approximation error. As such, lifelong benchmarks offer a robust, practical solution to the "benchmark exhaustion" problem.
[ "['Ameya Prabhu' 'Vishaal Udandarao' 'Philip Torr' 'Matthias Bethge'\n 'Adel Bibi' 'Samuel Albanie']" ]
id: 2402.19475 | link: http://arxiv.org/pdf/2402.19475v1 | updated: 2024-02-29T18:59:25Z | published: 2024-02-29T18:59:25Z
The Counterfeit Conundrum: Can Code Language Models Grasp the Nuances of Their Incorrect Generations?
While language models are increasingly more proficient at code generation, they still frequently generate incorrect programs. Many of these programs are obviously wrong, but others are more subtle and pass weaker correctness checks such as being able to compile. In this work, we focus on these counterfeit samples: programs sampled from a language model that 1) have a high enough log-probability to be generated at a moderate temperature and 2) pass weak correctness checks. Overall, we discover that most models have a very shallow understanding of counterfeits through three clear failure modes. First, models mistakenly classify them as correct. Second, models are worse at reasoning about the execution behaviour of counterfeits and often predict their execution results as if they were correct. Third, when asking models to fix counterfeits, the likelihood of a model successfully repairing a counterfeit is often even lower than that of sampling a correct program from scratch. Counterfeits also have very unexpected properties: first, counterfeit programs for problems that are easier for a model to solve are not necessarily easier to detect and only slightly easier to execute and repair. Second, counterfeits from a given model are just as confusing to the model itself as they are to other models. Finally, both strong and weak models are able to generate counterfeit samples that equally challenge all models. In light of our findings, we recommend that care and caution be taken when relying on models to understand their own samples, especially when no external feedback is incorporated.
[ "['Alex Gu' 'Wen-Ding Li' 'Naman Jain' 'Theo X. Olausson' 'Celine Lee'\n 'Koushik Sen' 'Armando Solar-Lezama']" ]
id: 2403.00011 | link: http://arxiv.org/pdf/2403.00011v1 | updated: 2024-02-26T20:09:44Z | published: 2024-02-26T20:09:44Z
Introducing User Feedback-based Counterfactual Explanations (UFCE)
Machine learning models are widely used in real-world applications. However, their complexity often makes it challenging to interpret the rationale behind their decisions. Counterfactual explanations (CEs) have emerged as a viable solution for generating comprehensible explanations in eXplainable Artificial Intelligence (XAI). CE provides actionable information to users on how to achieve the desired outcome with minimal modifications to the input. However, current CE algorithms usually operate within the entire feature space when optimizing changes to overturn an undesired outcome, overlooking the identification of key contributors to the outcome and disregarding the practicality of the suggested changes. In this study, we introduce a novel methodology, named user feedback-based counterfactual explanation (UFCE), which addresses these limitations and aims to bolster confidence in the provided explanations. UFCE allows for the inclusion of user constraints to determine the smallest modifications in the subset of actionable features while considering feature dependence, and evaluates the practicality of suggested changes using benchmark evaluation metrics. We conducted three experiments with five datasets, demonstrating that UFCE outperforms two well-known CE methods in terms of proximity, sparsity, and feasibility. Reported results indicate that user constraints influence the generation of feasible CEs.
[ "['Muhammad Suffian' 'Jose M. Alonso-Moral' 'Alessandro Bogliolo']" ]
null
null
2403.00012
null
null
http://arxiv.org/pdf/2403.00012v2
2024-03-12T12:59:45Z
2024-02-27T02:23:07Z
PreRoutGNN for Timing Prediction with Order Preserving Partition: Global Circuit Pre-training, Local Delay Learning and Attentional Cell Modeling
Pre-routing timing prediction has been recently studied for evaluating the quality of a candidate cell placement in chip design. It involves directly estimating the timing metrics for both pin-level (slack, slew) and edge-level (net delay, cell delay), without time-consuming routing. However, it often suffers from signal decay and error accumulation due to the long timing paths in large-scale industrial circuits. To address these challenges, we propose a two-stage approach. First, we propose global circuit training to pre-train a graph auto-encoder that learns the global graph embedding from the circuit netlist. Second, we use a novel node updating scheme for message passing on GCN, following the topological sorting sequence of the learned graph embedding and circuit graph. This scheme residually models the local time delay between two adjacent pins in the updating sequence, and extracts the lookup table information inside each cell via a new attention mechanism. To handle large-scale circuits efficiently, we introduce an order preserving partition scheme that reduces memory consumption while maintaining the topological dependencies. Experiments on 21 real-world circuits achieve a new SOTA R2 of 0.93 for slack prediction, significantly surpassing the previous SOTA of 0.59. Code will be available at: https://github.com/Thinklab-SJTU/EDA-AI.
[ "['Ruizhe Zhong' 'Junjie Ye' 'Zhentao Tang' 'Shixiong Kai' 'Mingxuan Yuan'\n 'Jianye Hao' 'Junchi Yan']" ]
null
null
2403.00013
null
null
http://arxiv.org/pdf/2403.00013v1
2024-02-27T07:15:35Z
2024-02-27T07:15:35Z
Prioritizing Informative Features and Examples for Deep Learning from Noisy Data
In this dissertation, we propose a systematic framework that prioritizes informative features and examples to enhance each stage of the development process. Specifically, we prioritize informative features and examples and improve the performance of feature learning, data labeling, and data selection. We first propose an approach to extract only informative features that are inherent to solving a target task by using auxiliary out-of-distribution data. We deactivate the noise features in the target distribution by using those in the out-of-distribution data. Next, we introduce an approach that prioritizes informative examples from unlabeled noisy data in order to reduce the labeling cost of active learning. In order to solve the purity-information dilemma, where an attempt to select informative examples induces the selection of many noisy examples, we propose a meta-model that finds the best balance between purity and informativeness. Lastly, we suggest an approach that prioritizes informative examples from labeled noisy data to preserve the performance of data selection. For noisy labeled image data, we propose a data selection method that considers the confidence of neighboring samples to maintain the performance of the state-of-the-art Re-labeling models. For noisy labeled text data, we present an instruction selection method that takes diversity into account for ranking the quality of instructions with prompting, thereby enhancing the performance of aligned large language models. Overall, our unified framework makes the deep learning development process robust to noisy data, thereby effectively mitigating noisy features and examples in real-world applications.
[ "['Dongmin Park']" ]
null
null
2403.00014
null
null
http://arxiv.org/abs/2403.00014v1
2024-02-27T09:35:54Z
2024-02-27T09:35:54Z
GIN-SD: Source Detection in Graphs with Incomplete Nodes via Positional Encoding and Attentive Fusion
Source detection in graphs has demonstrated robust efficacy in the domain of rumor source identification. Although recent solutions have enhanced performance by leveraging deep neural networks, they often require complete user data. In this paper, we address a more challenging task, rumor source detection with incomplete user data, and propose a novel framework, i.e., Source Detection in Graphs with Incomplete Nodes via Positional Encoding and Attentive Fusion (GIN-SD), to tackle this challenge. Specifically, our approach utilizes a positional embedding module to distinguish nodes that are incomplete and employs a self-attention mechanism to focus on nodes with greater information transmission capacity. To mitigate the prediction bias caused by the significant disparity between the numbers of source and non-source nodes, we also introduce a class-balancing mechanism. Extensive experiments validate the effectiveness of GIN-SD and its superiority to state-of-the-art methods.
[ "['Le Cheng' 'Peican Zhu' 'Keke Tang' 'Chao Gao' 'Zhen Wang']" ]
null
null
2403.00016
null
null
http://arxiv.org/pdf/2403.00016v1
2024-02-28T02:15:47Z
2024-02-28T02:15:47Z
Deep Sensitivity Analysis for Objective-Oriented Combinatorial Optimization
Pathogen control is a critical aspect of modern poultry farming, providing important benefits for both public health and productivity. Effective poultry management measures to reduce pathogen levels in poultry flocks promote food safety by lowering risks of food-borne illnesses. They also support animal health and welfare by preventing infectious diseases that can rapidly spread and impact flock growth, egg production, and overall health. This study frames the search for optimal management practices that minimize the presence of multiple pathogens as a combinatorial optimization problem. Specifically, we model the various possible combinations of management settings as a solution space that can be efficiently explored to identify configurations that optimally reduce pathogen levels. This design incorporates a neural network feedback-based method that combines feature explanations with global sensitivity analysis to ensure combinatorial optimization in multiobjective settings. Our preliminary experiments have promising results when applied to two real-world agricultural datasets. While further validation is still needed, these early experimental findings demonstrate the potential of the model to derive targeted feature interactions that adaptively optimize pathogen control under varying real-world constraints.
[ "['Ganga Gireesan' 'Nisha Pillai' 'Michael J Rothrock' 'Bindu Nanduri'\n 'Zhiqian Chen' 'Mahalingam Ramkumar']" ]
null
null
2403.00017
null
null
http://arxiv.org/pdf/2403.00017v1
2024-02-28T02:24:04Z
2024-02-28T02:24:04Z
Towards Interpreting Multi-Objective Feature Associations
Understanding how multiple features are associated and contribute to a specific objective is as important as understanding how each feature contributes to a particular outcome. Interpretability of a single feature in a prediction may be handled in multiple ways; however, in a multi-objective prediction, it is difficult to obtain interpretability of a combination of feature values. To address this issue, we propose an objective specific feature interaction design using multi-labels to find the optimal combination of features in agricultural settings. One of the novel aspects of this design is the identification of a method that integrates feature explanations with global sensitivity analysis in order to ensure combinatorial optimization in multi-objective settings. We have demonstrated in our preliminary experiments that an approximate combination of feature values can be found to achieve the desired outcome using two agricultural datasets: one with pre-harvest poultry farm practices for multi-drug resistance presence, and one with post-harvest poultry farm practices for food-borne pathogens. In our combinatorial optimization approach, all three pathogens are taken into consideration simultaneously to account for the interaction between conditions that favor different types of pathogen growth. These results indicate that explanation-based approaches are capable of identifying combinations of features that reduce pathogen presence in fewer iterations than a baseline.
[ "['Nisha Pillai' 'Ganga Gireesan' 'Michael J. Rothrock Jr.' 'Bindu Nanduri'\n 'Zhiqian Chen' 'Mahalingam Ramkumar']" ]
null
null
2403.00019
null
null
http://arxiv.org/pdf/2403.00019v1
2024-02-28T04:30:41Z
2024-02-28T04:30:41Z
Transformer-based Parameter Estimation in Statistics
Parameter estimation is one of the most important tasks in statistics, and is key to helping people understand the distribution behind a sample of observations. Traditionally, parameter estimation is done either by closed-form solutions (e.g., maximum likelihood estimation for the Gaussian distribution), or by iterative numerical methods such as the Newton-Raphson method when a closed-form solution does not exist (e.g., for the Beta distribution). In this paper we propose a transformer-based approach to parameter estimation. Compared with existing solutions, our approach does not require a closed-form solution or any mathematical derivations. It does not even require knowing the probability density function, which is needed by numerical methods. After the transformer model is trained, only a single inference is needed to estimate the parameters of the underlying distribution based on a sample of observations. In the empirical study we compared our approach with maximum likelihood estimation on commonly used distributions such as the normal distribution, exponential distribution and beta distribution. It is shown that our approach achieves similar or better accuracy as measured by mean squared error.
[ "['Xiaoxin Yin' 'David S. Yin']" ]
id: 2403.00023 | link: http://arxiv.org/pdf/2403.00023v1 | updated: 2024-02-28T14:51:18Z | published: 2024-02-28T14:51:18Z
Auditable Homomorphic-based Decentralized Collaborative AI with Attribute-based Differential Privacy
In recent years, the notion of federated learning (FL) has led to the new paradigm of distributed artificial intelligence (AI) with privacy preservation. However, most current FL systems suffer from data privacy issues due to the requirement of a trusted third party. Although some previous works introduce differential privacy to protect the data, it may also significantly deteriorate model performance. To address these issues, we propose a novel decentralized collaborative AI framework, named Auditable Homomorphic-based Decentralised Collaborative AI (AerisAI), to improve security with homomorphic encryption and fine-grained differential privacy. Our proposed AerisAI directly aggregates the encrypted parameters with a blockchain-based smart contract to remove the need for a trusted third party. We also propose a brand-new concept for eliminating the negative impacts of differential privacy on model performance. Moreover, the proposed AerisAI also provides broadcast-aware group key management based on ciphertext-policy attribute-based encryption (CPABE) to achieve fine-grained access control based on different service-level agreements. We provide a formal theoretical analysis of the proposed AerisAI as well as a functionality comparison with the other baselines. We also conduct extensive experiments on real datasets to evaluate the proposed approach. The experimental results indicate that our proposed AerisAI significantly outperforms the other state-of-the-art baselines.
[ "['Lo-Yao Yeh' 'Sheng-Po Tseng' 'Chia-Hsun Lu' 'Chih-Ya Shen']" ]
null
null
2403.00024
null
null
http://arxiv.org/pdf/2403.00024v2
2024-04-25T09:20:47Z
2024-02-28T15:01:59Z
FlowCyt: A Comparative Study of Deep Learning Approaches for Multi-Class Classification in Flow Cytometry Benchmarking
This paper presents FlowCyt, the first comprehensive benchmark for multi-class single-cell classification in flow cytometry data. The dataset comprises bone marrow samples from 30 patients, with each cell characterized by twelve markers. Ground truth labels identify five hematological cell types: T lymphocytes, B lymphocytes, Monocytes, Mast cells, and Hematopoietic Stem/Progenitor Cells (HSPCs). Experiments utilize supervised inductive learning and semi-supervised transductive learning on up to 1 million cells per patient. Baseline methods include Gaussian Mixture Models, XGBoost, Random Forests, Deep Neural Networks, and Graph Neural Networks (GNNs). GNNs demonstrate superior performance by exploiting spatial relationships in graph-encoded data. The benchmark allows standardized evaluation of clinically relevant classification tasks, along with exploratory analyses to gain insights into hematological cell phenotypes. This represents the first public flow cytometry benchmark with a richly annotated, heterogeneous dataset. It will empower the development and rigorous assessment of novel methodologies for single-cell analysis.
[ "['Lorenzo Bini' 'Fatemeh Nassajian Mojarrad' 'Margarita Liarou'\n 'Thomas Matthes' 'Stéphane Marchand-Maillet']" ]
null
null
2403.00025
null
null
http://arxiv.org/pdf/2403.00025v1
2024-02-28T15:19:33Z
2024-02-28T15:19:33Z
On the Challenges and Opportunities in Generative AI
The field of deep generative modeling has grown rapidly and consistently over the years. With the availability of massive amounts of training data coupled with advances in scalable unsupervised learning paradigms, recent large-scale generative models show tremendous promise in synthesizing high-resolution images and text, as well as structured data such as videos and molecules. However, we argue that current large-scale generative AI models do not sufficiently address several fundamental issues that hinder their widespread adoption across domains. In this work, we aim to identify key unresolved challenges in modern generative AI paradigms that should be tackled to further enhance their capabilities, versatility, and reliability. By identifying these challenges, we aim to provide researchers with valuable insights for exploring fruitful research directions, thereby fostering the development of more robust and accessible generative AI solutions.
[ "['Laura Manduchi' 'Kushagra Pandey' 'Robert Bamler' 'Ryan Cotterell'\n 'Sina Däubener' 'Sophie Fellenz' 'Asja Fischer' 'Thomas Gärtner'\n 'Matthias Kirchler' 'Marius Kloft' 'Yingzhen Li' 'Christoph Lippert'\n 'Gerard de Melo' 'Eric Nalisnick' 'Björn Ommer' 'Rajesh Ranganath'\n 'Maja Rudolph' 'Karen Ullrich' 'Guy Van den Broeck' 'Julia E Vogt'\n 'Yixin Wang' 'Florian Wenzel' 'Frank Wood' 'Stephan Mandt'\n 'Vincent Fortuin']" ]
null
null
2403.00026
null
null
http://arxiv.org/pdf/2403.00026v1
2024-02-28T16:02:29Z
2024-02-28T16:02:29Z
Learning to Deliver: a Foundation Model for the Montreal Capacitated Vehicle Routing Problem
In this paper, we present the Foundation Model for the Montreal Capacitated Vehicle Routing Problem (FM-MCVRP), a novel Deep Learning (DL) model that approximates high-quality solutions to a variant of the Capacitated Vehicle Routing Problem (CVRP) that characterizes many real-world applications. The so-called Montreal Capacitated Vehicle Routing Problem (MCVRP), first formally described by Bengio et al. (2021), is defined on a fixed and finite graph, which is analogous to a city. Each MCVRP instance is essentially the sub-graph connecting a randomly sampled subset of the nodes in the fixed graph, which represent a set of potential addresses in a real-world delivery problem on a given day. Our work exploits this problem structure to frame the MCVRP as an analogous Natural Language Processing (NLP) task. Specifically, we leverage a Transformer architecture embedded in a Large Language Model (LLM) framework to train our model in a supervised manner on computationally inexpensive, sub-optimal MCVRP solutions obtained algorithmically. Through comprehensive computational experiments, we show that FM-MCVRP produces better MCVRP solutions than the training data and generalizes to larger sized problem instances not seen during training. Even when compared to near-optimal solutions from state-of-the-art heuristics, FM-MCVRP yields competitive results despite being trained on inferior data. For instance, for 400-customer problems, FM-MCVRP solutions on average fall within 2% of the benchmark. Our results further demonstrate that unlike prior works in the literature, FM-MCVRP is a unified model, which performs consistently and reliably on a range of problem instance sizes and parameter values such as the vehicle capacity.
[ "['Samuel J. K. Chin' 'Matthias Winkenbach' 'Akash Srivastava']" ]
null
null
2403.00027
null
null
http://arxiv.org/pdf/2403.00027v1
2024-02-28T16:32:47Z
2024-02-28T16:32:47Z
A Quick Framework for Evaluating Worst Robustness of Complex Networks
Robustness is pivotal for comprehending, designing, optimizing, and rehabilitating networks, with simulation attacks being the prevailing evaluation method. Simulation attacks are often time-consuming or even impractical; however, a more crucial yet persistently overlooked drawback is that any attack strategy merely provides one potential paradigm of disintegration. The key concern is: in the worst-case scenario or facing the most severe attacks, what is the limit of robustness, referred to as "Worst Robustness", for a given system? Understanding a system's worst robustness is imperative for grasping its reliability limits, accurately evaluating protective capabilities, and determining associated design and security maintenance costs. To address these challenges, we introduce the concept of the Most Destruction Attack (MDA) based on the idea of knowledge stacking. MDA is employed to assess the worst robustness of networks, followed by the application of an adapted CNN algorithm for rapid worst robustness prediction. We establish the logical validity of MDA and highlight the exceptional performance of the adapted CNN algorithm in predicting the worst robustness across diverse network topologies, encompassing both model and empirical networks.
[ "['Wenjun Jiang' 'Peiyan Li' 'Tianlong Fan' 'Ting Li' 'Chuan-fu Zhang'\n 'Tao Zhang' 'Zong-fu Luo']" ]
null
null
2403.00028
null
null
http://arxiv.org/pdf/2403.00028v2
2024-04-17T23:59:19Z
2024-02-28T16:45:54Z
Lower Bounds for Differential Privacy Under Continual Observation and Online Threshold Queries
One of the most basic problems for studying the "price of privacy over time" is the so called private counter problem, introduced by Dwork et al. (2010) and Chan et al. (2010). In this problem, we aim to track the number of events that occur over time, while hiding the existence of every single event. More specifically, in every time step $t \in [T]$ we learn (in an online fashion) that $\Delta_t \geq 0$ new events have occurred, and must respond with an estimate $n_t \approx \sum_{j=1}^t \Delta_j$. The privacy requirement is that all of the outputs together, across all time steps, satisfy event level differential privacy. The main question here is how our error needs to depend on the total number of time steps $T$ and the total number of events $n$. Dwork et al. (2015) showed an upper bound of $O\left(\log(T)+\log^2(n)\right)$, and Henzinger et al. (2023) showed a lower bound of $\Omega\left(\min\{\log n, \log T\}\right)$. We show a new lower bound of $\Omega\left(\min\{n,\log T\}\right)$, which is tight w.r.t. the dependence on $T$, and is tight in the sparse case where $\log^2 n = O(\log T)$. Our lower bound has the following implications: $\bullet$ We show that our lower bound extends to the "online thresholds problem", where the goal is to privately answer many "quantile queries" when these queries are presented one-by-one. This resolves an open question of Bun et al. (2017). $\bullet$ Our lower bound implies, for the first time, a separation between the number of mistakes obtainable by a private online learner and a non-private online learner. This partially resolves a COLT'22 open question published by Sanyal and Ramponi. $\bullet$ Our lower bound also yields the first separation between the standard model of private online learning and a recently proposed relaxed variant of it, called private online prediction.
[ "['Edith Cohen' 'Xin Lyu' 'Jelani Nelson' 'Tamás Sarlós' 'Uri Stemmer']" ]
id: 2403.00030 | link: http://arxiv.org/pdf/2403.00030v2 | updated: 2024-03-05T05:34:55Z | published: 2024-02-28T20:02:55Z
GraphPub: Generation of Differential Privacy Graph with High Availability
In recent years, with the rapid development of graph neural networks (GNN), more and more graph datasets have been published for GNN tasks. However, when an upstream data owner publishes graph data, there are often many privacy concerns, because many real-world graph data contain sensitive information like a person's friend list. Differential privacy (DP) is a common method to protect privacy, but due to the complex topological structure of graph data, applying DP on graphs often affects the message passing and aggregation of GNN models, leading to a decrease in model accuracy. In this paper, we propose a novel graph edge protection framework, graph publisher (GraphPub), which can protect graph topology while ensuring that the availability of data is basically unchanged. Through reverse learning and the encoder-decoder mechanism, we search for some false edges that do not have a large negative impact on the aggregation of node features, and use them to replace some real edges. The modified graph is then published, in which real and false edges are difficult to distinguish. Extensive experiments show that our framework achieves model accuracy close to the original graph with an extremely low privacy budget.
[ "['Wanghan Xu' 'Bin Shi' 'Ao Liu' 'Jiqiang Zhang' 'Bo Dong']" ]
null
null
2403.00033
null
null
http://arxiv.org/pdf/2403.00033v4
2024-03-26T04:58:01Z
2024-02-29T04:01:38Z
Identification of Craving Maps among Marijuana Users via the Analysis of Functional Brain Networks with High-Order Attention Graph Neural Networks
The excessive consumption of marijuana can induce substantial psychological and social consequences. In this investigation, we propose an elucidative framework termed high-order graph attention neural networks (HOGANN) for the classification of Marijuana addiction, coupled with an analysis of localized brain network communities exhibiting abnormal activities among chronic marijuana users. HOGANN integrates dynamic intrinsic functional brain networks, estimated from resting-state functional magnetic resonance imaging (rs-fMRI), using long short-term memory (LSTM) to capture temporal network dynamics. We employ a high-order attention module for information fusion and message passing among neighboring nodes, enhancing the network community analysis. Our model is validated across two distinct data cohorts, yielding substantially higher classification accuracy than benchmark algorithms. Furthermore, we discern the most pertinent subnetworks and cognitive regions affected by persistent marijuana consumption, indicating adverse effects on functional brain networks, particularly within the dorsal attention and frontoparietal networks. Intriguingly, our model demonstrates superior performance in cohorts exhibiting prolonged dependence, implying that prolonged marijuana usage induces more pronounced alterations in brain networks. The model proficiently identifies craving brain maps, thereby delineating critical brain regions for analysis.
[ "['Jun-En Ding' 'Shihao Yang' 'Anna Zilverstand' 'Feng Liu']" ]
null
null
2403.00036
null
null
http://arxiv.org/pdf/2403.00036v1
2024-02-29T05:59:27Z
2024-02-29T05:59:27Z
Influencing Bandits: Arm Selection for Preference Shaping
We consider a non-stationary multi-armed bandit in which the population preferences are positively and negatively reinforced by the observed rewards. The objective of the algorithm is to shape the population preferences to maximize the fraction of the population favouring a predetermined arm. For the case of binary opinions, two types of opinion dynamics are considered -- decreasing elasticity (modeled as a Polya urn with an increasing number of balls) and constant elasticity (using the voter model). For the first case, we describe an Explore-then-commit policy and a Thompson sampling policy and analyse the regret for each of these policies. We then show that these algorithms and their analyses carry over to the constant elasticity case. We also describe a Thompson sampling based algorithm for the case when more than two types of opinions are present. Finally, we discuss the case where the presence of multiple recommendation systems gives rise to a trade-off between their popularity and opinion shaping objectives.
[ "['Viraj Nadkarni' 'D. Manjunath' 'Sharayu Moharir']" ]
null
null
2403.00041
null
null
http://arxiv.org/pdf/2403.00041v2
2024-04-03T15:16:58Z
2024-02-29T11:43:04Z
Global and Local Prompts Cooperation via Optimal Transport for Federated Learning
Prompt learning in pretrained visual-language models has shown remarkable flexibility across various downstream tasks. Leveraging its inherent lightweight nature, recent research attempted to integrate the powerful pretrained models into federated learning frameworks to simultaneously reduce communication costs and promote local training on insufficient data. Despite these efforts, current federated prompt learning methods lack specialized designs to systematically address severe data heterogeneities, e.g., data distribution with both label and feature shifts involved. To address this challenge, we present Federated Prompts Cooperation via Optimal Transport (FedOTP), which introduces efficient collaborative prompt learning strategies to capture diverse category traits on a per-client basis. Specifically, for each client, we learn a global prompt to extract consensus knowledge among clients, and a local prompt to capture client-specific category characteristics. Unbalanced Optimal Transport is then employed to align local visual features with these prompts, striking a balance between global consensus and local personalization. By relaxing one of the equality constraints, FedOTP enables prompts to focus solely on the core regions of image patches. Extensive experiments on datasets with various types of heterogeneities have demonstrated that our FedOTP outperforms the state-of-the-art methods.
[ "['Hongxia Li' 'Wei Huang' 'Jingya Wang' 'Ye Shi']" ]
null
null
2403.00043
null
null
http://arxiv.org/pdf/2403.00043v1
2024-02-29T14:50:58Z
2024-02-29T14:50:58Z
RiNALMo: General-Purpose RNA Language Models Can Generalize Well on Structure Prediction Tasks
Ribonucleic acid (RNA) plays a variety of crucial roles in fundamental biological processes. Recently, RNA has become an interesting drug target, emphasizing the need to improve our understanding of its structures and functions. Over the years, sequencing technologies have produced an enormous amount of unlabeled RNA data, which hides important knowledge and potential. Motivated by the successes of protein language models, we introduce RiboNucleic Acid Language Model (RiNALMo) to help unveil the hidden code of RNA. RiNALMo is the largest RNA language model to date with $650$ million parameters pre-trained on $36$ million non-coding RNA sequences from several available databases. RiNALMo is able to extract hidden knowledge and capture the underlying structure information implicitly embedded within the RNA sequences. RiNALMo achieves state-of-the-art results on several downstream tasks. Notably, we show that its generalization capabilities can overcome the inability of other deep learning methods for secondary structure prediction to generalize on unseen RNA families. The code has been made publicly available on https://github.com/lbcb-sci/RiNALMo.
[ "['Rafael Josip Penić' 'Tin Vlašić' 'Roland G. Huber' 'Yue Wan'\n 'Mile Šikić']" ]
null
null
2403.00044
null
null
http://arxiv.org/pdf/2403.00044v1
2024-02-29T15:19:35Z
2024-02-29T15:19:35Z
Scaling up Dynamic Edge Partition Models via Stochastic Gradient MCMC
The edge partition model (EPM) is a generative model for extracting an overlapping community structure from static graph-structured data. In the EPM, the gamma process (GaP) prior is adopted to infer the appropriate number of latent communities, and each vertex is endowed with a gamma distributed positive membership vector. Despite having many attractive properties, inference in the EPM is typically performed using Markov chain Monte Carlo (MCMC) methods that prevent it from being applied to massive network data. In this paper, we generalize the EPM to account for dynamic environments by representing each vertex with a positive membership vector constructed using a Dirichlet prior specification, and capturing the time-evolving behaviour of vertices via a Dirichlet Markov chain construction. A simple-to-implement Gibbs sampler is proposed to perform posterior computation using a negative binomial augmentation technique. For large network data, we propose a stochastic gradient Markov chain Monte Carlo (SG-MCMC) algorithm for scalable inference in the proposed model. The experimental results show that the novel methods achieve competitive performance in terms of link prediction, while being much faster.
[ "['Sikun Yang' 'Heinz Koeppl']" ]
null
null
2403.00103
null
null
http://arxiv.org/pdf/2403.00103v1
2024-02-29T20:11:47Z
2024-02-29T20:11:47Z
On Robustness and Generalization of ML-Based Congestion Predictors to Valid and Imperceptible Perturbations
There is substantial interest in the use of machine learning (ML)-based techniques throughout the electronic computer-aided design (CAD) flow, particularly methods based on deep learning. However, while deep learning methods have achieved state-of-the-art performance in several applications, recent work has demonstrated that neural networks are generally vulnerable to small, carefully chosen perturbations of their input (e.g. a single pixel change in an image). In this work, we investigate robustness in the context of ML-based EDA tools -- particularly for congestion prediction. As far as we are aware, we are the first to explore this concept in the context of ML-based EDA. We first describe a novel notion of imperceptibility designed specifically for VLSI layout problems defined on netlists and cell placements. Our definition of imperceptibility is characterized by a guarantee that a perturbation to a layout will not alter its global routing. We then demonstrate that state-of-the-art CNN and GNN-based congestion models exhibit brittleness to imperceptible perturbations. Namely, we show that when a small number of cells (e.g. 1%-5% of cells) have their positions shifted such that a measure of global congestion is guaranteed to remain unaffected, the predicted congestion can change drastically (e.g. 1% of the design adversarially shifted by 0.001% of the layout space results in a predicted decrease in congestion of up to 90%, while no change in congestion is implied by the perturbation). In other words, the quality of a predictor can be made arbitrarily poor (i.e. can be made to predict that a design is "congestion-free") for an arbitrary input layout. Next, we describe a simple technique to train predictors that improves robustness to these perturbations. Our work indicates that CAD engineers should be cautious when integrating neural network-based mechanisms in EDA flows to ensure robust and high-quality results.
[ "['Chester Holtz' 'Yucheng Wang' 'Chung-Kuan Cheng' 'Bill Lin']" ]
null
null
2403.00105
null
null
http://arxiv.org/pdf/2403.00105v1
2024-02-29T20:17:08Z
2024-02-29T20:17:08Z
Longitudinal Counterfactuals: Constraints and Opportunities
Counterfactual explanations are a common approach to providing recourse to data subjects. However, current methodology can produce counterfactuals that cannot be achieved by the subject, making the use of counterfactuals for recourse difficult to justify in practice. Though there is agreement that plausibility is an important quality when using counterfactuals for algorithmic recourse, ground truth plausibility continues to be difficult to quantify. In this paper, we propose using longitudinal data to assess and improve plausibility in counterfactuals. In particular, we develop a metric that compares longitudinal differences to counterfactual differences, allowing us to evaluate how similar a counterfactual is to prior observed changes. Furthermore, we use this metric to generate plausible counterfactuals. Finally, we discuss some of the inherent difficulties of using counterfactuals for recourse.
[ "['Alexander Asemota' 'Giles Hooker']" ]
id: 2403.00116 | link: http://arxiv.org/pdf/2403.00116v1 | updated: 2024-02-29T20:39:31Z | published: 2024-02-29T20:39:31Z
Federated Linear Contextual Bandits with Heterogeneous Clients
The demand for collaborative and private bandit learning across multiple agents is surging due to the growing quantity of data generated from distributed systems. Federated bandit learning has emerged as a promising framework for private, efficient, and decentralized online learning. However, almost all previous works rely on strong assumptions of client homogeneity, i.e., all participating clients shall share the same bandit model; otherwise, they all would suffer linear regret. This greatly restricts the application of federated bandit learning in practice. In this work, we introduce a new approach for federated bandits for heterogeneous clients, which clusters clients for collaborative bandit learning under the federated learning setting. Our proposed algorithm achieves non-trivial sub-linear regret and communication cost for all clients, subject to the communication protocol under federated learning that at any time only one model can be shared by the server.
[ "['Ethan Blaser' 'Chuanhao Li' 'Hongning Wang']" ]
null
null
2403.00128
null
null
http://arxiv.org/pdf/2403.00128v1
2024-02-29T21:09:08Z
2024-02-29T21:09:08Z
From Flies to Robots: Inverted Landing in Small Quadcopters with Dynamic Perching
Inverted landing is a routine behavior among a number of animal fliers. However, mastering this feat poses a considerable challenge for robotic fliers, especially to perform dynamic perching with rapid body rotations (or flips) and landing against gravity. Inverted landing in flies has suggested that optical flow senses are closely linked to the precise triggering and control of body flips that lead to a variety of successful landing behaviors. Building upon this knowledge, we aimed to replicate the flies' landing behaviors in small quadcopters by developing a control policy general to arbitrary ceiling-approach conditions. First, we employed reinforcement learning in simulation to optimize discrete sensory-motor pairs across a broad spectrum of ceiling-approach velocities and directions. Next, we converted the sensory-motor pairs to a two-stage control policy in a continuous augmented-optical flow space. The control policy consists of a first-stage Flip-Trigger Policy, which employs a one-class support vector machine, and a second-stage Flip-Action Policy, implemented as a feed-forward neural network. To transfer the inverted-landing policy to physical systems, we utilized domain randomization and system identification techniques for a zero-shot sim-to-real transfer. As a result, we successfully achieved a range of robust inverted-landing behaviors in small quadcopters, emulating those observed in flies.
[ "['Bryan Habas' 'Bo Cheng']" ]
null
null
2403.00131
null
null
http://arxiv.org/pdf/2403.00131v2
2024-05-29T18:11:04Z
2024-02-29T21:25:58Z
UNITS: A Unified Multi-Task Time Series Model
Advances in time series models are driving a shift from conventional deep learning methods to pre-trained foundational models. While pre-trained transformers and reprogrammed text-based LLMs report state-of-the-art results, the best-performing architectures vary significantly across tasks, and models often have limited scope, such as focusing only on time series forecasting. Models that unify predictive and generative time series tasks under a single framework remain challenging to achieve. We introduce UniTS, a multi-task time series model that uses task tokenization to express predictive and generative tasks within a single model. UniTS leverages a modified transformer block designed to obtain universal time series representations. This design induces transferability from a heterogeneous, multi-domain pre-training dataset (often with diverse dynamic patterns, sampling rates, and temporal scales) to many downstream datasets, which can also be diverse in task specifications and data domains. Across 38 datasets spanning human activity sensors, healthcare, engineering, and finance domains, the UniTS model performs favorably against 12 forecasting models, 20 classification models, 18 anomaly detection models, and 16 imputation models, including repurposed text-based LLMs. UniTS demonstrates effective few-shot and prompt learning capabilities when evaluated on new data domains and tasks. In the conventional single-task setting, UniTS outperforms strong task-specialized time series models. The source code and datasets are available at https://github.com/mims-harvard/UniTS.
[ "['Shanghua Gao' 'Teddy Koker' 'Owen Queen' 'Thomas Hartvigsen'\n 'Theodoros Tsiligkaridis' 'Marinka Zitnik']" ]
null
null
2403.00143
null
null
http://arxiv.org/pdf/2403.00143v1
2024-02-29T21:49:31Z
2024-02-29T21:49:31Z
Ensemble-Based Unsupervised Discontinuous Constituency Parsing by Tree Averaging
We address unsupervised discontinuous constituency parsing, where we observe a high variance in the performance of the only previous model. We propose to build an ensemble of different runs of the existing discontinuous parser by averaging the predicted trees, to stabilize and boost performance. To begin with, we provide comprehensive computational complexity analysis (in terms of P and NP-complete) for tree averaging under different setups of binarity and continuity. We then develop an efficient exact algorithm to tackle the task, which runs in a reasonable time for all samples in our experiments. Results on three datasets show our method outperforms all baselines in all metrics; we also provide in-depth analyses of our approach.
[ "['Behzad Shayegh' 'Yuqiao Wen' 'Lili Mou']" ]
null
null
2403.00144
null
null
http://arxiv.org/pdf/2403.00144v1
2024-02-29T21:49:31Z
2024-02-29T21:49:31Z
EBBS: An Ensemble with Bi-Level Beam Search for Zero-Shot Machine Translation
The ability of zero-shot translation emerges when we train a multilingual model with certain translation directions; the model can then directly translate in unseen directions. Alternatively, zero-shot translation can be accomplished by pivoting through a third language (e.g., English). In our work, we observe that both direct and pivot translations are noisy and achieve less satisfactory performance. We propose EBBS, an ensemble method with a novel bi-level beam search algorithm, where each ensemble component explores its own prediction step by step at the lower level but they are synchronized by a "soft voting" mechanism at the upper level. Results on two popular multilingual translation datasets show that EBBS consistently outperforms direct and pivot translations as well as existing ensemble techniques. Further, we can distill the ensemble's knowledge back to the multilingual model to improve inference efficiency; profoundly, our EBBS-based distillation does not sacrifice, or even improves, the translation quality.
[ "['Yuqiao Wen' 'Behzad Shayegh' 'Chenyang Huang' 'Yanshuai Cao' 'Lili Mou']" ]
null
null
2403.00147
null
null
http://arxiv.org/pdf/2403.00147v1
2024-02-29T21:55:17Z
2024-02-29T21:55:17Z
Analysis of Kernel Mirror Prox for Measure Optimization
By choosing a suitable function space as the dual to the non-negative measure cone, we study in a unified framework a class of functional saddle-point optimization problems, which we term the Mixed Functional Nash Equilibrium (MFNE), that underlies several existing machine learning algorithms, such as implicit generative models, distributionally robust optimization (DRO), and Wasserstein barycenters. We model the saddle-point optimization dynamics as an interacting Fisher-Rao-RKHS gradient flow when the function space is chosen as a reproducing kernel Hilbert space (RKHS). As a discrete time counterpart, we propose a primal-dual kernel mirror prox (KMP) algorithm, which uses a dual step in the RKHS, and a primal entropic mirror prox step. We then provide a unified convergence analysis of KMP in an infinite-dimensional setting for this class of MFNE problems, which establishes a convergence rate of $O(1/N)$ in the deterministic case and $O(1/\sqrt{N})$ in the stochastic case, where $N$ is the iteration counter. As a case study, we apply our analysis to DRO, providing algorithmic guarantees for DRO robustness and convergence.
[ "['Pavel Dvurechensky' 'Jia-Jie Zhu']" ]
null
null
2403.00155
null
null
http://arxiv.org/pdf/2403.00155v1
2024-02-29T22:13:12Z
2024-02-29T22:13:12Z
Towards Explaining Deep Neural Network Compression Through a Probabilistic Latent Space
Despite the impressive performance of deep neural networks (DNNs), their computational complexity and storage space consumption have led to the concept of network compression. While DNN compression techniques such as pruning and low-rank decomposition have been extensively studied, there has been insufficient attention paid to their theoretical explanation. In this paper, we propose a novel theoretical framework that leverages a probabilistic latent space of DNN weights and explains the optimal network sparsity by using the information-theoretic divergence measures. We introduce new analogous projected patterns (AP2) and analogous-in-probability projected patterns (AP3) notions for DNNs and prove that there exists a relationship between AP3/AP2 property of layers in the network and its performance. Further, we provide a theoretical analysis that explains the training process of the compressed network. The theoretical results are empirically validated through experiments conducted on standard pre-trained benchmarks, including AlexNet, ResNet50, and VGG16, using CIFAR10 and CIFAR100 datasets. Through our experiments, we highlight the relationship of AP3 and AP2 properties with fine-tuning pruned DNNs and sparsity levels.
[ "['Mahsa Mozafari-Nia' 'Salimeh Yasaei Sekeh']" ]
null
null
2403.00157
null
null
http://arxiv.org/pdf/2403.00157v1
2024-02-29T22:18:05Z
2024-02-29T22:18:05Z
Privacy-Preserving Distributed Optimization and Learning
Distributed optimization and learning has recently garnered great attention due to its wide applications in sensor networks, smart grids, machine learning, and so forth. Despite rapid development, existing distributed optimization and learning algorithms require each agent to exchange messages with its neighbors, which may expose sensitive information and raise significant privacy concerns. In this survey paper, we overview privacy-preserving distributed optimization and learning methods. We first discuss cryptography, differential privacy, and other techniques that can be used for privacy preservation and indicate their pros and cons for privacy protection in distributed optimization and learning. We believe that among these approaches, differential privacy is most promising due to its low computational and communication complexities, which are extremely appealing for modern learning based applications with high dimensions of optimization variables. We then introduce several differential-privacy algorithms that can simultaneously ensure privacy and optimization accuracy. Moreover, we provide example applications in several machine learning problems to confirm the real-world effectiveness of these algorithms. Finally, we highlight some challenges in this research domain and discuss future directions.
[ "['Ziqin Chen' 'Yongqiang Wang']" ]
null
null
2403.00158
null
null
http://arxiv.org/pdf/2403.00158v2
2024-03-08T16:26:03Z
2024-02-29T22:19:46Z
Automated Efficient Estimation using Monte Carlo Efficient Influence Functions
Many practical problems involve estimating low dimensional statistical quantities with high-dimensional models and datasets. Several approaches address these estimation tasks based on the theory of influence functions, such as debiased/double ML or targeted minimum loss estimation. This paper introduces \textit{Monte Carlo Efficient Influence Functions} (MC-EIF), a fully automated technique for approximating efficient influence functions that integrates seamlessly with existing differentiable probabilistic programming systems. MC-EIF automates efficient statistical estimation for a broad class of models and target functionals that would previously require rigorous custom analysis. We prove that MC-EIF is consistent, and that estimators using MC-EIF achieve optimal $\sqrt{N}$ convergence rates. We show empirically that estimators using MC-EIF are at parity with estimators using analytic EIFs. Finally, we demonstrate a novel capstone example using MC-EIF for optimal portfolio selection.
[ "['Raj Agrawal' 'Sam Witty' 'Andy Zane' 'Eli Bingham']" ]
null
null
2403.00165
null
null
http://arxiv.org/pdf/2403.00165v2
2024-06-16T19:10:39Z
2024-02-29T22:26:07Z
TELEClass: Taxonomy Enrichment and LLM-Enhanced Hierarchical Text Classification with Minimal Supervision
Hierarchical text classification aims to categorize each document into a set of classes in a label taxonomy. Most earlier works focus on fully or semi-supervised methods that require a large amount of human annotated data which is costly and time-consuming to acquire. To alleviate human efforts, in this paper, we work on hierarchical text classification with the minimal amount of supervision: using the sole class name of each node as the only supervision. Recently, large language models (LLM) show competitive performance on various tasks through zero-shot prompting, but this method performs poorly in the hierarchical setting, because it is ineffective to include the large and structured label space in a prompt. On the other hand, previous weakly-supervised hierarchical text classification methods only utilize the raw taxonomy skeleton and ignore the rich information hidden in the text corpus that can serve as additional class-indicative features. To tackle the above challenges, we propose TELEClass, Taxonomy Enrichment and LLM-Enhanced weakly-supervised hierarchical text Classification, which (1) automatically enriches the label taxonomy with class-indicative terms to facilitate classifier training and (2) utilizes LLMs for both data annotation and creation tailored for the hierarchical label space. Experiments show that TELEClass can outperform previous weakly-supervised methods and LLM-based zero-shot prompting methods on two public datasets.
[ "['Yunyi Zhang' 'Ruozhen Yang' 'Xueqiang Xu' 'Rui Li' 'Jinfeng Xiao'\n 'Jiaming Shen' 'Jiawei Han']" ]
null
null
2403.00172
null
null
http://arxiv.org/pdf/2403.00172v1
2024-02-29T22:42:23Z
2024-02-29T22:42:23Z
Go Beyond Black-box Policies: Rethinking the Design of Learning Agent for Interpretable and Verifiable HVAC Control
Recent research has shown the potential of Model-based Reinforcement Learning (MBRL) to enhance energy efficiency of Heating, Ventilation, and Air Conditioning (HVAC) systems. However, existing methods rely on black-box thermal dynamics models and stochastic optimizers, lacking reliability guarantees and posing risks to occupant health. In this work, we overcome the reliability bottleneck by redesigning HVAC controllers using decision trees extracted from existing thermal dynamics models and historical data. Our decision tree-based policies are deterministic, verifiable, interpretable, and more energy-efficient than current MBRL methods. First, we introduce a novel verification criterion for RL agents in HVAC control based on domain knowledge. Second, we develop a policy extraction procedure that produces a verifiable decision tree policy. We found that the high dimensionality of the thermal dynamics model input hinders the efficiency of policy extraction. To tackle the dimensionality challenge, we leverage importance sampling conditioned on historical data distributions, significantly improving policy extraction efficiency. Lastly, we present an offline verification algorithm that guarantees the reliability of a control policy. Extensive experiments show that our method saves 68.4% more energy and increases human comfort gain by 14.8% compared to the state-of-the-art method, in addition to a 1127x reduction in computation overhead. Our code and data are available at https://github.com/ryeii/Veri_HVAC
[ "['Zhiyu An' 'Xianzhong Ding' 'Wan Du']" ]
null
null
2403.00176
null
null
http://arxiv.org/abs/2403.00176v1
2024-02-29T23:04:01Z
2024-02-29T23:04:01Z
SoD$^2$: Statically Optimizing Dynamic Deep Neural Network
Though many compilation and runtime systems have been developed for DNNs in recent years, the focus has largely been on static DNNs. Dynamic DNNs, where tensor shapes and sizes and even the set of operators used are dependent upon the input and/or execution, are becoming common. This paper presents SoD$^2$, a comprehensive framework for optimizing Dynamic DNNs. The basis of our approach is a classification of common operators that form DNNs, and the use of this classification towards a Rank and Dimension Propagation (RDP) method. This framework statically determines the shapes of operators as known constants, symbolic constants, or operations on these. Next, using RDP we enable a series of optimizations, like fused code generation, execution (order) planning, and even runtime memory allocation plan generation. By evaluating the framework on 10 emerging Dynamic DNNs and comparing it against several existing systems, we demonstrate both reductions in execution latency and memory requirements, with RDP-enabled key optimizations responsible for much of the gains. Our evaluation results show that SoD$^2$ runs up to $3.9\times$ faster than these systems while saving up to $88\%$ peak memory consumption.
[ "['Wei Niu' 'Gagan Agrawal' 'Bin Ren']" ]
null
null
2403.00177
null
null
http://arxiv.org/pdf/2403.00177v2
2024-05-28T04:19:17Z
2024-02-29T23:04:42Z
Med-Real2Sim: Non-Invasive Medical Digital Twins using Physics-Informed Self-Supervised Learning
A digital twin is a virtual replica of a real-world physical phenomenon that uses mathematical modeling to characterize and simulate its defining features. By constructing digital twins for disease processes, we can perform in-silico simulations that mimic patients' health conditions and counterfactual outcomes under hypothetical interventions in a virtual setting. This eliminates the need for invasive procedures or uncertain treatment decisions. In this paper, we propose a method to identify digital twin model parameters using only noninvasive patient health data. We approach the digital twin modeling as a composite inverse problem, and observe that its structure resembles pretraining and finetuning in self-supervised learning (SSL). Leveraging this, we introduce a physics-informed SSL algorithm that initially pretrains a neural network on the pretext task of learning a differentiable simulator of a physiological process. Subsequently, the model is trained to reconstruct physiological measurements from noninvasive modalities while being constrained by the physical equations learned in pretraining. We apply our method to identify digital twins of cardiac hemodynamics using noninvasive echocardiogram videos, and demonstrate its utility in unsupervised disease detection and in-silico clinical trials.
[ "['Keying Kuang' 'Frances Dean' 'Jack B. Jedlicki' 'David Ouyang'\n 'Anthony Philippakis' 'David Sontag' 'Ahmed M. Alaa']" ]
null
null
2403.00178
null
null
http://arxiv.org/pdf/2403.00178v1
2024-02-29T23:07:07Z
2024-02-29T23:07:07Z
Causal Graph ODE: Continuous Treatment Effect Modeling in Multi-agent Dynamical Systems
Real-world multi-agent systems are often dynamic and continuous, where the agents co-evolve and undergo changes in their trajectories and interactions over time. For example, the COVID-19 transmission in the U.S. can be viewed as a multi-agent system, where states act as agents and daily population movements between them are interactions. Estimating the counterfactual outcomes in such systems enables accurate future predictions and effective decision-making, such as formulating COVID-19 policies. However, existing methods fail to model the continuous dynamic effects of treatments on the outcome, especially when multiple treatments (e.g., "stay-at-home" and "get-vaccine" policies) are applied simultaneously. To tackle this challenge, we propose Causal Graph Ordinary Differential Equations (CAG-ODE), a novel model that captures the continuous interaction among agents using a Graph Neural Network (GNN) as the ODE function. The key innovation of our model is to learn time-dependent representations of treatments and incorporate them into the ODE function, enabling precise predictions of potential outcomes. To mitigate confounding bias, we further propose two domain adversarial learning-based objectives, which enable our model to learn balanced continuous representations that are not affected by treatments or interference. Experiments on two datasets (i.e., COVID-19 and tumor growth) demonstrate the superior performance of our proposed model.
[ "['Zijie Huang' 'Jeehyun Hwang' 'Junkai Zhang' 'Jinwoo Baik'\n 'Weitong Zhang' 'Dominik Wodarz' 'Yizhou Sun' 'Quanquan Gu' 'Wei Wang']" ]
null
null
2403.00184
null
null
http://arxiv.org/abs/2403.00184v1
2024-02-29T23:24:43Z
2024-02-29T23:24:43Z
Entry-Specific Bounds for Low-Rank Matrix Completion under Highly Non-Uniform Sampling
Low-rank matrix completion concerns the problem of estimating unobserved entries in a matrix using a sparse set of observed entries. We consider the non-uniform setting where the observed entries are sampled with highly varying probabilities, potentially with different asymptotic scalings. We show that under structured sampling probabilities, it is often better and sometimes optimal to run estimation algorithms on a smaller submatrix rather than the entire matrix. In particular, we prove error upper bounds customized to each entry, which match the minimax lower bounds under certain conditions. Our bounds characterize the hardness of estimating each entry as a function of the localized sampling probabilities. We provide numerical experiments that confirm our theoretical findings.
[ "['Xumei Xi' 'Christina Lee Yu' 'Yudong Chen']" ]
null
null
2403.00188
null
null
http://arxiv.org/pdf/2403.00188v2
2024-06-21T17:11:10Z
2024-02-29T23:38:28Z
Impact of Decentralized Learning on Player Utilities in Stackelberg Games
When deployed in the world, a learning agent such as a recommender system or a chatbot often repeatedly interacts with another learning agent (such as a user) over time. In many such two-agent systems, each agent learns separately and the rewards of the two agents are not perfectly aligned. To better understand such cases, we examine the learning dynamics of the two-agent system and the implications for each agent's objective. We model these systems as Stackelberg games with decentralized learning and show that standard regret benchmarks (such as Stackelberg equilibrium payoffs) result in worst-case linear regret for at least one player. To better capture these systems, we construct a relaxed regret benchmark that is tolerant to small learning errors by agents. We show that standard learning algorithms fail to provide sublinear regret, and we develop algorithms to achieve near-optimal $O(T^{2/3})$ regret for both players with respect to these benchmarks. We further design relaxed environments under which faster learning ($O(\sqrt{T})$) is possible. Altogether, our results take a step towards assessing how two-agent interactions in sequential and decentralized learning environments affect the utility of both agents.
[ "['Kate Donahue' 'Nicole Immorlica' 'Meena Jagadeesan' 'Brendan Lucier'\n 'Aleksandrs Slivkins']" ]
null
null
2403.00194
null
null
http://arxiv.org/pdf/2403.00194v1
2024-02-29T23:46:28Z
2024-02-29T23:46:28Z
Ask Your Distribution Shift if Pre-Training is Right for You
Pre-training is a widely used approach to develop models that are robust to distribution shifts. However, in practice, its effectiveness varies: fine-tuning a pre-trained model improves robustness significantly in some cases but not at all in others (compared to training from scratch). In this work, we seek to characterize the failure modes that pre-training can and cannot address. In particular, we focus on two possible failure modes of models under distribution shift: poor extrapolation (e.g., they cannot generalize to a different domain) and biases in the training data (e.g., they rely on spurious features). Our study suggests that, as a rule of thumb, pre-training can help mitigate poor extrapolation but not dataset biases. After providing theoretical motivation and empirical evidence for this finding, we explore two of its implications for developing robust models: (1) pre-training and interventions designed to prevent exploiting biases have complementary robustness benefits, and (2) fine-tuning on a (very) small, non-diverse but de-biased dataset can result in significantly more robust models than fine-tuning on a large and diverse but biased dataset. Code is available at https://github.com/MadryLab/pretraining-distribution-shift-robustness.
[ "['Benjamin Cohen-Wang' 'Joshua Vendrow' 'Aleksander Madry']" ]
null
null
2403.00196
null
null
http://arxiv.org/pdf/2403.00196v1
2024-02-29T23:52:15Z
2024-02-29T23:52:15Z
Learning to Find Missing Video Frames with Synthetic Data Augmentation: A General Framework and Application in Generating Thermal Images Using RGB Cameras
Advanced Driver Assistance Systems (ADAS) in intelligent vehicles rely on accurate driver perception within the vehicle cabin, often leveraging a combination of sensing modalities. However, these modalities operate at varying rates, posing challenges for real-time, comprehensive driver state monitoring. This paper addresses the issue of missing data due to sensor frame rate mismatches, introducing a generative model approach to create synthetic yet realistic thermal imagery. We propose using conditional generative adversarial networks (cGANs), specifically comparing the pix2pix and CycleGAN architectures. Experimental results demonstrate that pix2pix outperforms CycleGAN, and utilizing multi-view input styles, especially stacked views, enhances the accuracy of thermal image generation. Moreover, the study evaluates the model's generalizability across different subjects, revealing the importance of individualized training for optimal performance. The findings suggest the potential of generative models in addressing missing frames, advancing driver state monitoring for intelligent vehicles, and underscoring the need for continued research in model generalization and customization.
[ "['Mathias Viborg Andersen' 'Ross Greer' 'Andreas Møgelmose'\n 'Mohan Trivedi']" ]
null
null
2403.00198
null
null
http://arxiv.org/pdf/2403.00198v1
2024-03-01T00:02:37Z
2024-03-01T00:02:37Z
AXOLOTL: Fairness through Assisted Self-Debiasing of Large Language Model Outputs
Pre-trained Large Language Models (LLMs) have significantly advanced natural language processing capabilities but are susceptible to biases present in their training data, leading to unfair outcomes in various applications. While numerous strategies have been proposed to mitigate bias, they often require extensive computational resources and may compromise model performance. In this work, we introduce AXOLOTL, a novel post-processing framework, which operates agnostically across tasks and models, leveraging public APIs to interact with LLMs without direct access to internal parameters. Through a three-step process resembling zero-shot learning, AXOLOTL identifies biases, proposes resolutions, and guides the model to self-debias its outputs. This approach minimizes computational costs and preserves model performance, making AXOLOTL a promising tool for debiasing LLM outputs with broad applicability and ease of use.
[ "['Sana Ebrahimi' 'Kaiwen Chen' 'Abolfazl Asudeh' 'Gautam Das'\n 'Nick Koudas']" ]
null
null
2403.00199
null
null
http://arxiv.org/pdf/2403.00199v3
2024-04-19T02:36:26Z
2024-03-01T00:08:20Z
Improving Socratic Question Generation using Data Augmentation and Preference Optimization
The Socratic method is a way of guiding students toward solving a problem independently without directly revealing the solution to the problem. Although this method has been shown to significantly improve student learning outcomes, it remains a complex labor-intensive task for instructors. Large language models (LLMs) can be used to augment human effort by automatically generating Socratic questions for students. However, existing methods that involve prompting these LLMs sometimes produce invalid outputs, e.g., those that directly reveal the solution to the problem or provide irrelevant or premature questions. To alleviate this problem, inspired by reinforcement learning with AI feedback (RLAIF), we first propose a data augmentation method to enrich existing Socratic questioning datasets with questions that are invalid in specific ways. Next, we propose a method to optimize open-source LLMs such as Llama 2 to prefer ground-truth questions over generated invalid ones, using direct preference optimization (DPO). Our experiments on a Socratic questions dataset for student code debugging show that a DPO-optimized 7B Llama 2 model can effectively avoid generating invalid questions, and as a result, outperforms existing state-of-the-art prompting methods.
[ "['Nischal Ashok Kumar' 'Andrew Lan']" ]
null
null
2403.00212
null
null
http://arxiv.org/pdf/2403.00212v1
2024-03-01T01:15:45Z
2024-03-01T01:15:45Z
Transcription and translation of videos using fine-tuned XLSR Wav2Vec2 on custom dataset and mBART
This research addresses the challenge of training an ASR model for personalized voices with minimal data. Utilizing just 14 minutes of custom audio from a YouTube video, we employ Retrieval-Based Voice Conversion (RVC) to create a custom Common Voice 16.0 corpus. Subsequently, a Cross-lingual Self-supervised Representations (XLSR) Wav2Vec2 model is fine-tuned on this dataset. The developed web-based GUI efficiently transcribes and translates input Hindi videos. By integrating XLSR Wav2Vec2 and mBART, the system aligns the translated text with the video timeline, delivering an accessible solution for multilingual video content transcription and translation for personalized voice.
[ "['Aniket Tathe' 'Anand Kamble' 'Suyash Kumbharkar' 'Atharva Bhandare'\n 'Anirban C. Mitra']" ]
null
null
2403.00222
null
null
http://arxiv.org/pdf/2403.00222v2
2024-05-22T09:38:22Z
2024-03-01T01:49:57Z
Efficient Reinforcement Learning for Global Decision Making in the Presence of Local Agents at Scale
We study reinforcement learning for global decision-making in the presence of many local agents, where the global decision-maker makes decisions affecting all local agents, and the objective is to learn a policy that maximizes the rewards of both the global and the local agents. Such problems find many applications, e.g. demand response, EV charging, queueing, etc. In this setting, scalability has been a long-standing challenge due to the size of the state/action space which can be exponential in the number of agents. This work proposes the $\texttt{SUB-SAMPLE-Q}$ algorithm where the global agent subsamples $k\leq n$ local agents to compute an optimal policy in time that is only exponential in $k$, providing an exponential speedup from standard methods that are exponential in $n$. We show that the learned policy converges to the optimal policy in the order of $\tilde{O}(1/\sqrt{k}+\epsilon_{k,m})$ as the number of sub-sampled agents $k$ increases, where $\epsilon_{k,m}$ is the Bellman noise, by proving a novel generalization of the Dvoretzky-Kiefer-Wolfowitz inequality to the regime of sampling without replacement. We also conduct numerical simulations in a demand-response setting and a queueing setting.
[ "['Emile Anand' 'Guannan Qu']" ]
null
null
2403.00225
null
null
http://arxiv.org/pdf/2403.00225v2
2024-03-05T06:23:41Z
2024-03-01T02:00:44Z
Robust Policy Learning via Offline Skill Diffusion
Skill-based reinforcement learning (RL) approaches have shown considerable promise, especially in solving long-horizon tasks via hierarchical structures. These skills, learned task-agnostically from offline datasets, can accelerate the policy learning process for new tasks. Yet, the application of these skills in different domains remains restricted due to their inherent dependency on the datasets, which poses a challenge when attempting to learn a skill-based policy via RL for a target domain different from the datasets' domains. In this paper, we present a novel offline skill learning framework DuSkill which employs a guided Diffusion model to generate versatile skills extended from the limited skills in datasets, thereby enhancing the robustness of policy learning for tasks in different domains. Specifically, we devise a guided diffusion-based skill decoder in conjunction with the hierarchical encoding to disentangle the skill embedding space into two distinct representations, one for encapsulating domain-invariant behaviors and the other for delineating the factors that induce domain variations in the behaviors. Our DuSkill framework enhances the diversity of skills learned offline, thus accelerating the learning of high-level policies for different domains. Through experiments, we show that DuSkill outperforms other skill-based imitation learning and RL algorithms for several long-horizon tasks, demonstrating its benefits in few-shot imitation and online RL.
[ "['Woo Kyung Kim' 'Minjong Yoo' 'Honguk Woo']" ]
null
null
2403.00229
null
null
http://arxiv.org/pdf/2403.00229v1
2024-03-01T02:20:01Z
2024-03-01T02:20:01Z
Diffraction and Scattering Aware Radio Map and Environment Reconstruction using Geometry Model-Assisted Deep Learning
Machine learning (ML) facilitates rapid channel modeling for 5G and beyond wireless communication systems. Many existing ML techniques utilize a city map to construct the radio map; however, an updated city map may not always be available. This paper proposes to employ the received signal strength (RSS) data to jointly construct the radio map and the virtual environment by exploiting the geometry structure of the environment. In contrast to many existing ML approaches that lack an environment model, we develop a virtual obstacle model and characterize the geometry relation between the propagation paths and the virtual obstacles. A multi-screen knife-edge model is adopted to extract the key diffraction features, and these features are fed into a neural network (NN) for diffraction representation. To describe the scattering, as opposed to most existing methods that directly input an entire city map, our model focuses on the geometry structure from the local area surrounding the TX-RX pair and the spatial invariance of such local geometry structure is exploited. Numerical experiments demonstrate that, in addition to reconstructing a 3D virtual environment, the proposed model outperforms the state-of-the-art methods in radio map construction with 10%-18% accuracy improvements. It can also reduce the required data by 20% and training epochs by 50% when transferred to a new environment.
[ "['Wangqian Chen' 'Junting Chen']" ]
null
null
2403.00233
null
null
http://arxiv.org/pdf/2403.00233v1
2024-03-01T02:28:49Z
2024-03-01T02:28:49Z
Causal Bandits with General Causal Models and Interventions
This paper considers causal bandits (CBs) for the sequential design of interventions in a causal system. The objective is to optimize a reward function via minimizing a measure of cumulative regret with respect to the best sequence of interventions in hindsight. The paper advances the results on CBs in three directions. First, the structural causal models (SCMs) are assumed to be unknown and drawn arbitrarily from a general class $\mathcal{F}$ of Lipschitz-continuous functions. Existing results are often focused on (generalized) linear SCMs. Second, the interventions are assumed to be generalized soft with any desired level of granularity, resulting in an infinite number of possible interventions. The existing literature, in contrast, generally adopts atomic and hard interventions. Third, we provide general upper and lower bounds on regret. The upper bounds subsume (and improve) known bounds for special cases. The lower bounds are generally hitherto unknown. These bounds are characterized as functions of the (i) graph parameters, (ii) eluder dimension of the space of SCMs, denoted by $\operatorname{dim}(\mathcal{F})$, and (iii) the covering number of the function space, denoted by ${\rm cn}(\mathcal{F})$. Specifically, the cumulative achievable regret over horizon $T$ is $\mathcal{O}(K d^{L-1}\sqrt{T\operatorname{dim}(\mathcal{F}) \log({\rm cn}(\mathcal{F}))})$, where $K$ is related to the Lipschitz constants, $d$ is the graph's maximum in-degree, and $L$ is the length of the longest causal path. The upper bound is further refined for special classes of SCMs (neural network, polynomial, and linear), and their corresponding lower bounds are provided.
[ "['Zirui Yan' 'Dennis Wei' 'Dmitriy Katz-Rogozhnikov' 'Prasanna Sattigeri'\n 'Ali Tajer']" ]
null
null
2403.00236
null
null
http://arxiv.org/pdf/2403.00236v1
2024-03-01T02:33:26Z
2024-03-01T02:33:26Z
Benchmarking zero-shot stance detection with FlanT5-XXL: Insights from training data, prompting, and decoding strategies into its near-SoTA performance
We investigate the performance of LLM-based zero-shot stance detection on tweets. Using FlanT5-XXL, an instruction-tuned open-source LLM, with the SemEval 2016 Tasks 6A, 6B, and P-Stance datasets, we study the performance and its variations under different prompts and decoding strategies, as well as the potential biases of the model. We show that the zero-shot approach can match or outperform state-of-the-art benchmarks, including fine-tuned models. We provide various insights into its performance including the sensitivity to instructions and prompts, the decoding strategies, the perplexity of the prompts, and to negations and oppositions present in prompts. Finally, we ensure that the LLM has not been trained on test datasets, and identify a positivity bias which may partially explain the performance differences across decoding strategies.
[ "['Rachith Aiyappa' 'Shruthi Senthilmani' 'Jisun An' 'Haewoon Kwak'\n 'Yong-Yeol Ahn']" ]
null
null
2403.00252
null
null
http://arxiv.org/pdf/2403.00252v2
2024-06-14T13:51:01Z
2024-03-01T03:30:38Z
EUROPA: A Legal Multilingual Keyphrase Generation Dataset
Keyphrase generation has primarily been explored within the context of academic research articles, with a particular focus on scientific domains and the English language. In this work, we present EUROPA, a dataset for multilingual keyphrase generation in the legal domain. It is derived from legal judgments from the Court of Justice of the European Union (EU), and contains instances in all 24 EU official languages. We run multilingual models on our corpus and analyze the results, showing room for improvement on a domain-specific multilingual corpus such as the one we present.
[ "['Olivier Salaün' 'Frédéric Piedboeuf' 'Guillaume Le Berre'\n 'David Alfonso Hermelo' 'Philippe Langlais']" ]
null
null
2403.00254
null
null
http://arxiv.org/pdf/2403.00254v1
2024-03-01T03:39:17Z
2024-03-01T03:39:17Z
Cloud-based Federated Learning Framework for MRI Segmentation
In contemporary rural healthcare settings, the principal challenge in diagnosing brain images is the scarcity of available data, given that most of the existing deep learning models demand extensive training data to optimize their performance, necessitating centralized processing methods that potentially compromise data privacy. This paper proposes a novel framework tailored for brain tissue segmentation in rural healthcare facilities. The framework employs a deep reinforcement learning (DRL) environment in tandem with a refinement model (RM) deployed locally at rural healthcare sites. The proposed DRL model has a reduced parameter count and practicality for implementation across distributed rural sites. To uphold data privacy and enhance model generalization without transgressing privacy constraints, we employ federated learning (FL) for cooperative model training. We demonstrate the efficacy of our approach by training the network with a limited data set and observing a substantial performance enhancement, mitigating inaccuracies and irregularities in segmentation across diverse sites. Remarkably, the DRL model attains an accuracy of up to 80%, surpassing the capabilities of conventional convolutional neural networks when confronted with data insufficiency. Incorporating our RM results in an additional accuracy improvement of at least 10%, while FL contributes to a further accuracy enhancement of up to 5%. Collectively, the framework achieves an average 92% accuracy rate within rural healthcare settings characterized by data constraints.
[ "['Rukesh Prajapati' 'Amr S. El-Wakeel']" ]
null
null
2403.00257
null
null
http://arxiv.org/pdf/2403.00257v1
2024-03-01T03:45:56Z
2024-03-01T03:45:56Z
Robust deep labeling of radiological emphysema subtypes using squeeze and excitation convolutional neural networks: The MESA Lung and SPIROMICS Studies
Pulmonary emphysema, the progressive, irreversible loss of lung tissue, is conventionally categorized into three subtypes identifiable on pathology and on lung computed tomography (CT) images. Recent work has led to the unsupervised learning of ten spatially-informed lung texture patterns (sLTPs) on lung CT, representing distinct patterns of emphysematous lung parenchyma based on both textural appearance and spatial location within the lung, and which aggregate into 6 robust and reproducible CT Emphysema Subtypes (CTES). Existing methods for sLTP segmentation, however, are slow and highly sensitive to changes in CT acquisition protocol. In this work, we present a robust 3-D squeeze-and-excitation CNN for supervised classification of sLTPs and CTES on lung CT. Our results demonstrate that this model achieves accurate and reproducible sLTP segmentation on lung CT scans, across two independent cohorts and independently of scanner manufacturer and model.
[ "['Artur Wysoczanski' 'Nabil Ettehadi' 'Soroush Arabshahi' 'Yifei Sun'\n 'Karen Hinkley Stukovsky' 'Karol E. Watson' 'MeiLan K. Han'\n 'Erin D Michos' 'Alejandro P. Comellas' 'Eric A. Hoffman'\n 'Andrew F. Laine' 'R. Graham Barr' 'Elsa D. Angelini']" ]
null
null
2403.00258
null
null
http://arxiv.org/pdf/2403.00258v1
2024-03-01T03:46:28Z
2024-03-01T03:46:28Z
"Lossless" Compression of Deep Neural Networks: A High-dimensional Neural Tangent Kernel Approach
Modern deep neural networks (DNNs) are extremely powerful; however, this comes at the price of increased depth and having more parameters per layer, making their training and inference more computationally challenging. In an attempt to address this key limitation, efforts have been devoted to the compression (e.g., sparsification and/or quantization) of these large-scale machine learning models, so that they can be deployed on low-power IoT devices. In this paper, building upon recent advances in neural tangent kernel (NTK) and random matrix theory (RMT), we provide a novel compression approach to wide and fully-connected \emph{deep} neural nets. Specifically, we demonstrate that in the high-dimensional regime where the number of data points $n$ and their dimension $p$ are both large, and under a Gaussian mixture model for the data, there exists \emph{asymptotic spectral equivalence} between the NTK matrices for a large family of DNN models. This theoretical result enables "lossless" compression of a given DNN to be performed, in the sense that the compressed network yields asymptotically the same NTK as the original (dense and unquantized) network, with its weights and activations taking values \emph{only} in $\{0, \pm 1\}$ up to a scaling. Experiments on both synthetic and real-world data are conducted to support the advantages of the proposed compression scheme, with code available at \url{https://github.com/Model-Compression/Lossless_Compression}.
[ "['Lingyu Gu' 'Yongqi Du' 'Yuan Zhang' 'Di Xie' 'Shiliang Pu'\n 'Robert C. Qiu' 'Zhenyu Liao']" ]
null
null
2403.00259
null
null
http://arxiv.org/pdf/2403.00259v1
2024-03-01T03:50:03Z
2024-03-01T03:50:03Z
Deciphering diffuse scattering with machine learning and the equivariant foundation model: The case of molten FeO
Bridging the gap between diffuse x-ray or neutron scattering measurements and predicted structures derived from atom-atom pair potentials in disordered materials has been a longstanding challenge in condensed matter physics. This perspective gives a brief overview of the traditional approaches employed over the past several decades, namely the use of approximate interatomic pair potentials that relate 3-dimensional structural models to the measured structure factor and its associated pair distribution function. The use of machine learned interatomic potentials has grown in the past few years, and has been particularly successful in the cases of ionic and oxide systems. Recent advances in large scale sampling, along with a direct integration of scattering measurements into the model development, have provided improved agreement between experiments and large-scale models calculated with quantum mechanical accuracy. However, details of local polyhedral bonding and connectivity in meta-stable disordered systems still require improvement. Here we leverage MACE-MP-0, a newly introduced equivariant foundation model, and validate the results against high-quality experimental scattering data for the case of molten iron(II) oxide (FeO). These preliminary results suggest that the emerging foundation model has the potential to surpass the traditional limitations of classical interatomic potentials.
[ "['Ganesh Sivaraman' 'Chris J. Benmore']" ]
null
null
2403.00269
null
null
http://arxiv.org/pdf/2403.00269v2
2024-03-11T05:58:55Z
2024-03-01T04:16:08Z
Large Convolutional Model Tuning via Filter Subspace
Efficient fine-tuning methods are critical to address the high computational and parameter complexity while adapting large pre-trained models to downstream tasks. Our study is inspired by prior research that represents each convolution filter as a linear combination of a small set of filter subspace elements, referred to as filter atoms. In this paper, we propose to fine-tune pre-trained models by adjusting only filter atoms, which are responsible for spatial-only convolution, while preserving spatially-invariant channel combination knowledge in atom coefficients. In this way, we bring a new filter subspace view for model tuning. Furthermore, each filter atom can be recursively decomposed as a combination of another set of atoms, which naturally expands the number of tunable parameters in the filter subspace. By only adapting filter atoms constructed by a small number of parameters, while maintaining the rest of model parameters constant, the proposed approach is highly parameter-efficient. It effectively preserves the capabilities of pre-trained models and prevents overfitting to downstream tasks. Extensive experiments show that such a simple scheme surpasses previous tuning baselines for both discriminative and generative tasks.
[ "['Wei Chen' 'Zichen Miao' 'Qiang Qiu']" ]
null
null
2403.00272
null
null
http://arxiv.org/pdf/2403.00272v1
2024-03-01T04:20:13Z
2024-03-01T04:20:13Z
Dual Pose-invariant Embeddings: Learning Category and Object-specific Discriminative Representations for Recognition and Retrieval
In the context of pose-invariant object recognition and retrieval, we demonstrate that it is possible to achieve significant improvements in performance if both the category-based and the object-identity-based embeddings are learned simultaneously during training. In hindsight, that sounds intuitive because learning about the categories is more fundamental than learning about the individual objects that correspond to those categories. However, to the best of our knowledge, no prior work in pose-invariant learning has demonstrated this effect. This paper presents an attention-based dual-encoder architecture with specially designed loss functions that optimize the inter- and intra-class distances simultaneously in two different embedding spaces, one for the category embeddings and the other for the object-level embeddings. The loss functions we have proposed are pose-invariant ranking losses that are designed to minimize the intra-class distances and maximize the inter-class distances in the dual representation spaces. We demonstrate the power of our approach with three challenging multi-view datasets, ModelNet-40, ObjectPI, and FG3D. With our dual approach, for single-view object recognition, we outperform the previous best by 20.0% on ModelNet40, 2.0% on ObjectPI, and 46.5% on FG3D. On the other hand, for single-view object retrieval, we outperform the previous best by 33.7% on ModelNet40, 18.8% on ObjectPI, and 56.9% on FG3D.
[ "['Rohan Sarkar' 'Avinash Kak']" ]
null
null
2403.00273
null
null
http://arxiv.org/pdf/2403.00273v1
2024-03-01T04:25:39Z
2024-03-01T04:25:39Z
ARED: Argentina Real Estate Dataset
The Argentinian real estate market presents a unique case study characterized by its unstable and rapidly shifting macroeconomic circumstances over the past decades. Despite the existence of a few datasets for price prediction, there is a lack of mixed modality datasets specifically focused on Argentina. In this paper, the first edition of ARED is introduced: a comprehensive real estate price prediction dataset series designed for the Argentinian market. This edition contains information solely for Jan-Feb 2024. It was found that despite the short time range captured by this zeroth edition (44 days), time-dependent phenomena have been occurring mostly on a market level (market as a whole). Nevertheless, future editions of this dataset will most likely contain historical data. Each listing in ARED comprises descriptive features, and variable-length sets of images.
[ "['Iván Belenky']" ]
null
null
2403.00276
null
null
http://arxiv.org/pdf/2403.00276v1
2024-03-01T04:38:51Z
2024-03-01T04:38:51Z
Graph Construction with Flexible Nodes for Traffic Demand Prediction
Graph neural networks (GNNs) have been widely applied in traffic demand prediction, and transportation modes can be divided into station-based mode and free-floating traffic mode. Existing research in traffic graph construction primarily relies on map matching to construct graphs based on the road network. However, the complexity and inhomogeneity of data distribution in free-floating traffic demand forecasting make road network matching inflexible. To tackle these challenges, this paper introduces a novel graph construction method tailored to free-floating traffic mode. We propose a novel density-based clustering algorithm (HDPC-L) to determine the flexible positioning of nodes in the graph, overcoming the computational bottlenecks of traditional clustering algorithms and enabling effective handling of large-scale datasets. Furthermore, we extract valuable information from ridership data to initialize the edge weights of GNNs. Comprehensive experiments on two real-world datasets, the Shenzhen bike-sharing dataset and the Haikou ride-hailing dataset, show that the method significantly improves the performance of the model. On average, our models show an improvement in accuracy of around 25% and 19.5% on the two datasets. Additionally, it significantly enhances computational efficiency, reducing training time by approximately 12% and 32.5% on the two datasets. We make our code available at https://github.com/houjinyan/HDPC-L-ODInit.
[ "['Jinyan Hou' 'Shan Liu' 'Ya Zhang' 'Haotong Qin']" ]
null
null
2403.00278
null
null
http://arxiv.org/pdf/2403.00278v2
2024-06-12T04:08:27Z
2024-03-01T04:50:04Z
Shifted Interpolation for Differential Privacy
Noisy gradient descent and its variants are the predominant algorithms for differentially private machine learning. It is a fundamental question to quantify their privacy leakage, yet tight characterizations remain open even in the foundational setting of convex losses. This paper improves over previous analyses by establishing (and refining) the "privacy amplification by iteration" phenomenon in the unifying framework of $f$-differential privacy--which tightly captures all aspects of the privacy loss and immediately implies tighter privacy accounting in other notions of differential privacy, e.g., $(\varepsilon,\delta)$-DP and Rényi DP. Our key technical insight is the construction of shifted interpolated processes that unravel the popular shifted-divergences argument, enabling generalizations beyond divergence-based relaxations of DP. Notably, this leads to the first exact privacy analysis in the foundational setting of strongly convex optimization. Our techniques extend to many settings: convex/strongly convex, constrained/unconstrained, full/cyclic/stochastic batches, and all combinations thereof. As an immediate corollary, we recover the $f$-DP characterization of the exponential mechanism for strongly convex optimization in Gopi et al. (2022), and moreover extend this result to more general settings.
[ "['Jinho Bok' 'Weijie Su' 'Jason M. Altschuler']" ]
null
null
2403.00282
null
null
http://arxiv.org/pdf/2403.00282v2
2024-05-31T07:19:03Z
2024-03-01T04:57:13Z
Conflict-Averse Gradient Aggregation for Constrained Multi-Objective Reinforcement Learning
In many real-world applications, a reinforcement learning (RL) agent should consider multiple objectives and adhere to safety guidelines. To address these considerations, we propose a constrained multi-objective RL algorithm named Constrained Multi-Objective Gradient Aggregator (CoMOGA). In the field of multi-objective optimization, managing conflicts between the gradients of the multiple objectives is crucial to prevent policies from converging to local optima. It is also essential to efficiently handle safety constraints for stable training and constraint satisfaction. We address these challenges straightforwardly by treating the maximization of multiple objectives as a constrained optimization problem (COP), where the constraints are defined to improve the original objectives. Existing safety constraints are then integrated into the COP, and the policy is updated using a linear approximation, which ensures the avoidance of gradient conflicts. Despite its simplicity, CoMOGA guarantees optimal convergence in tabular settings. Through various experiments, we have confirmed that preventing gradient conflicts is critical, and the proposed method achieves constraint satisfaction across all tasks.
[ "['Dohyeong Kim' 'Mineui Hong' 'Jeongho Park' 'Songhwai Oh']" ]
null
null
2403.00284
null
null
http://arxiv.org/pdf/2403.00284v2
2024-04-06T07:02:46Z
2024-03-01T05:04:00Z
A Survey of Route Recommendations: Methods, Applications, and Opportunities
Nowadays, with advanced information technologies deployed citywide, large data volumes and powerful computational resources are intelligentizing modern city development. As an important part of intelligent transportation, route recommendation and its applications are widely used, directly influencing citizens' travel habits. Developing smart and efficient travel routes based on big data (possibly multi-modal) has become a central challenge in route recommendation research. Our survey offers a comprehensive review of route recommendation work based on urban computing. It is organized into the following three parts: 1) Methodology-wise. We categorize a large volume of traditional machine learning and modern deep learning methods. Also, we discuss their historical relations and reveal the cutting-edge progress. 2) Application-wise. We present numerous novel applications related to route recommendation within urban computing scenarios. 3) We discuss current problems and challenges and envision several promising research directions. We believe that this survey can help relevant researchers quickly familiarize themselves with the current state of route recommendation research and then direct them to future research trends.
[ "['Shiming Zhang' 'Zhipeng Luo' 'Li Yang' 'Fei Teng' 'Tianrui Li']" ]
null
null
2403.00289
null
null
http://arxiv.org/abs/2403.00289v2
2024-06-20T06:13:18Z
2024-03-01T05:19:59Z
Optimization of array encoding for ultrasound imaging
Objective: The transmit encoding model for synthetic aperture imaging is a robust and flexible framework for understanding the effects of acoustic transmission on ultrasound image reconstruction. Our objective is to use machine learning (ML) to construct scanning sequences, parameterized by time delays and apodization weights, that produce high-quality B-mode images. Approach: We use a custom ML model in PyTorch with simulated RF data from Field II to probe the space of possible encoding sequences for those that minimize a loss function that describes image quality. This approach is made computationally feasible by a novel formulation of the derivative for delay-and-sum beamforming. Main Results: When trained for a specified experimental setting (imaging domain, hardware restrictions, etc.), our ML model produces optimized encoding sequences that, when deployed in the REFoCUS imaging framework, improve a number of standard quality metrics over conventional sequences including resolution, field of view, and contrast. We demonstrate these results experimentally on both wire targets and a tissue-mimicking phantom. Significance: This work demonstrates that the set of commonly used encoding schemes represent only a narrow subset of those available. Additionally, it demonstrates the value for ML tasks in synthetic transmit aperture imaging to consider the beamformer within the model, instead of purely as a post-processing step.
[ "['Jacob Spainhour' 'Korben Smart' 'Stephen Becker' 'Nick Bottenus']" ]
null
null
2403.00290
null
null
http://arxiv.org/pdf/2403.00290v1
2024-03-01T05:20:16Z
2024-03-01T05:20:16Z
Semantic Text Transmission via Prediction with Small Language Models: Cost-Similarity Trade-off
We consider the communication of natural language text from a source to a destination over noiseless and character-erasure channels. We exploit language's inherent correlations and predictability to constrain transmission costs by allowing the destination to predict or complete words with potential dissimilarity with the source text. Concretely, our objective is to obtain achievable $(\bar{c}, \bar{s})$ pairs, where $\bar{c}$ is the average transmission cost at the source and $\bar{s}$ is the average semantic similarity measured via cosine similarity between vector embedding of words at the source and those predicted/completed at the destination. We obtain $(\bar{c}, \bar{s})$ pairs for neural language and first-order Markov chain-based small language models (SLM) for prediction, using both a threshold policy that transmits a word if its cosine similarity with that predicted/completed at the destination is below a threshold, and a periodic policy, which transmits words after a specific interval and predicts/completes the words in between, at the destination. We adopt an SLM for word completion. We demonstrate that, when communication occurs over a noiseless channel, the threshold policy achieves a higher $\bar{s}$ for a given $\bar{c}$ than the periodic policy and that the $\bar{s}$ achieved with the neural SLM is greater than or equal to that of the Markov chain-based algorithm for the same $\bar{c}$. The improved performance comes with a higher complexity in terms of time and computing requirements. However, when communication occurs over a character-erasure channel, all prediction algorithms and scheduling policies perform poorly. Furthermore, if character-level Huffman coding is used, the required $\bar{c}$ to achieve a given $\bar{s}$ is reduced, but the above observations still apply.
[ "['Bhavani A Madhabhavi' 'Gangadhar Karevvanavar' 'Rajshekhar V Bhat'\n 'Nikolaos Pappas']" ]
null
null
2403.00293
null
null
http://arxiv.org/pdf/2403.00293v1
2024-03-01T05:32:14Z
2024-03-01T05:32:14Z
Efficient Adapter Tuning of Pre-trained Speech Models for Automatic Speaker Verification
With excellent generalization ability, self-supervised speech models have shown impressive performance on various downstream speech tasks in the pre-training and fine-tuning paradigm. However, as the size of pre-trained models grows, fine-tuning becomes practically infeasible due to heavy computation and storage overhead, as well as the risk of overfitting. Adapters are lightweight modules inserted into pre-trained models to facilitate parameter-efficient adaptation. In this paper, we propose an effective adapter framework designed for adapting self-supervised speech models to the speaker verification task. With a parallel adapter design, our proposed framework inserts two types of adapters into the pre-trained model, allowing the adaptation of latent features within intermediate Transformer layers and output embeddings from all Transformer layers. We conduct comprehensive experiments to validate the efficiency and effectiveness of the proposed framework. Experimental results on the VoxCeleb1 dataset demonstrate that the proposed adapters surpass fine-tuning and other parameter-efficient transfer learning methods, achieving superior performance while updating only 5% of the parameters.
[ "['Mufan Sang' 'John H. L. Hansen']" ]
null
null
2403.00299
null
null
http://arxiv.org/pdf/2403.00299v1
2024-03-01T05:57:08Z
2024-03-01T05:57:08Z
Universal Auto-encoder Framework for MIMO CSI Feedback
Existing auto-encoder (AE)-based channel state information (CSI) frameworks have focused on a specific configuration of user equipment (UE) and base station (BS), and thus the input and output sizes of the AE are fixed. However, in the real-world scenario, the input and output sizes may vary depending on the number of antennas of the BS and UE and the allocated resource block in the frequency dimension. A naive approach to support the different input and output sizes is to use multiple AE models, which is impractical for the UE due to the limited HW resources. In this paper, we propose a universal AE framework that can support different input sizes and multiple compression ratios. The proposed AE framework significantly reduces the HW complexity while providing comparable performance in terms of compression ratio-distortion trade-off compared to the naive and state-of-the-art approaches.
[ "['Jinhyun So' 'Hyukjoon Kwon']" ]
null
null
2403.00315
null
null
http://arxiv.org/pdf/2403.00315v1
2024-03-01T06:28:53Z
2024-03-01T06:28:53Z
Axe the X in XAI: A Plea for Understandable AI
In a recent paper, Erasmus et al. (2021) defend the idea that the ambiguity of the term "explanation" in explainable AI (XAI) can be solved by adopting any of four different extant accounts of explanation in the philosophy of science: the Deductive Nomological, Inductive Statistical, Causal Mechanical, and New Mechanist models. In this chapter, I show that the authors' claim that these accounts can be applied to deep neural networks as they would to any natural phenomenon is mistaken. I also provide a more general argument as to why the notion of explainability as it is currently used in the XAI literature bears little resemblance to the traditional concept of scientific explanation. It would be more fruitful to use the label "understandable AI" to avoid the confusion that surrounds the goal and purposes of XAI. In the second half of the chapter, I argue for a pragmatic conception of understanding that is better suited to play the central role attributed to explanation in XAI. Following Kuorikoski & Ylikoski (2015), the conditions of satisfaction for understanding an ML system are fleshed out in terms of an agent's success in using the system, in drawing correct inferences from it.
[ "['Andrés Páez']" ]
null
null
2403.00318
null
null
http://arxiv.org/pdf/2403.00318v1
2024-03-01T06:40:02Z
2024-03-01T06:40:02Z
Deep Reinforcement Learning for Solving Management Problems: Towards A Large Management Model
We introduce a deep reinforcement learning (DRL) approach for solving management problems including inventory management, dynamic pricing, and recommendation. This DRL approach has the potential to lead to a large management model based on certain transformer neural network structures, resulting in an artificial general intelligence paradigm for various management tasks. Traditional methods have limitations for solving complex real-world problems, and we demonstrate how DRL can surpass existing heuristic approaches for solving management tasks. We aim to solve the problems in a unified framework, considering the interconnections between different tasks. Central to our methodology is the development of a foundational decision model coordinating decisions across the different domains through generative decision-making. Our experimental results affirm the effectiveness of our DRL-based framework in complex and dynamic business environments. This work opens new pathways for the application of DRL in management problems, highlighting its potential to revolutionize traditional business management.
[ "['Jinyang Jiang' 'Xiaotian Liu' 'Tao Ren' 'Qinghao Wang' 'Yi Zheng'\n 'Yufu Du' 'Yijie Peng' 'Cheng Zhang']" ]
null
null
2403.00321
null
null
http://arxiv.org/pdf/2403.00321v1
2024-03-01T06:48:58Z
2024-03-01T06:48:58Z
DEEP-IoT: Downlink-Enhanced Efficient-Power Internet of Things
At the heart of the Internet of Things (IoT) -- a domain witnessing explosive growth -- the imperative for energy efficiency and the extension of device lifespans have never been more pressing. This paper presents DEEP-IoT, a revolutionary communication paradigm poised to redefine how IoT devices communicate. Through a pioneering "listen more, transmit less" strategy, DEEP-IoT challenges and transforms the traditional transmitter (IoT devices)-centric communication model to one where the receiver (the access point) plays a pivotal role, thereby cutting down energy use and boosting device longevity. We not only conceptualize DEEP-IoT but also actualize it by integrating deep learning-enhanced feedback channel codes within a narrow-band system. Simulation results show a significant enhancement in the operational lifespan of IoT cells -- surpassing traditional systems using Turbo and Polar codes by up to 52.71%. This leap signifies a paradigm shift in IoT communications, setting the stage for a future where IoT devices boast unprecedented efficiency and durability.
[ "['Yulin Shao']" ]
null
null
2403.00323
null
null
http://arxiv.org/pdf/2403.00323v1
2024-03-01T06:57:09Z
2024-03-01T06:57:09Z
Softened Symbol Grounding for Neuro-symbolic Systems
Neuro-symbolic learning generally consists of two separate worlds, i.e., neural network training and symbolic constraint solving, whose success hinges on symbol grounding, a fundamental problem in AI. This paper presents a novel, softened symbol grounding process, bridging the gap between the two worlds, and resulting in an effective and efficient neuro-symbolic learning framework. Technically, the framework features (1) modeling of symbol solution states as a Boltzmann distribution, which avoids expensive state searching and facilitates mutually beneficial interactions between network training and symbolic reasoning; (2) a new MCMC technique leveraging projection and SMT solvers, which efficiently samples from disconnected symbol solution spaces; (3) an annealing mechanism that can escape from being trapped in sub-optimal symbol groundings. Experiments with three representative neuro-symbolic learning tasks demonstrate that, owing to its superior symbol grounding capability, our framework successfully solves problems well beyond the frontier of existing proposals.
[ "['Zenan Li' 'Yuan Yao' 'Taolue Chen' 'Jingwei Xu' 'Chun Cao' 'Xiaoxing Ma'\n 'Jian Lü']" ]
null
null
2403.00329
null
null
http://arxiv.org/pdf/2403.00329v1
2024-03-01T07:17:20Z
2024-03-01T07:17:20Z
Learning with Logical Constraints but without Shortcut Satisfaction
Recent studies in neuro-symbolic learning have explored the integration of logical knowledge into deep learning via encoding logical constraints as an additional loss function. However, existing approaches tend to vacuously satisfy logical constraints through shortcuts, failing to fully exploit the knowledge. In this paper, we present a new framework for learning with logical constraints. Specifically, we address the shortcut satisfaction issue by introducing dual variables for logical connectives, encoding how the constraint is satisfied. We further propose a variational framework where the encoded logical constraint is expressed as a distributional loss that is compatible with the model's original training loss. The theoretical analysis shows that the proposed approach bears salient properties, and the experimental evaluations demonstrate its superior performance in both model generalizability and constraint satisfaction.
[ "['Zenan Li' 'Zehua Liu' 'Yuan Yao' 'Jingwei Xu' 'Taolue Chen'\n 'Xiaoxing Ma' 'Jian Lü']" ]
null
null
2403.00337
null
null
http://arxiv.org/pdf/2403.00337v1
2024-03-01T08:01:27Z
2024-03-01T08:01:27Z
Nonlinear Sheaf Diffusion in Graph Neural Networks
This work focuses on exploring the potential benefits of introducing a nonlinear Laplacian in Sheaf Neural Networks for graph-related tasks. The primary aim is to understand the impact of such nonlinearity on diffusion dynamics, signal propagation, and performance of neural network architectures in discrete-time settings. The study primarily emphasizes experimental analysis, using real-world and synthetic datasets to validate the practical effectiveness of different versions of the model. This approach shifts the focus from an initial theoretical exploration to demonstrating the practical utility of the proposed model.
[ "['Olga Zaghen']" ]
null
null
2403.00344
null
null
http://arxiv.org/pdf/2403.00344v2
2024-04-01T08:29:44Z
2024-03-01T08:15:18Z
Robustifying a Policy in Multi-Agent RL with Diverse Cooperative Behaviors and Adversarial Style Sampling for Assistive Tasks
Autonomous assistance of people with motor impairments is one of the most promising applications of autonomous robotic systems. Recent studies have reported encouraging results using deep reinforcement learning (RL) in the healthcare domain. Previous studies showed that assistive tasks can be formulated as multi-agent RL, wherein there are two agents: a caregiver and a care-receiver. However, policies trained in multi-agent RL are often sensitive to the policies of other agents. In such a case, a trained caregiver's policy may not work for different care-receivers. To alleviate this issue, we propose a framework that learns a robust caregiver's policy by training it for diverse care-receiver responses. In our framework, diverse care-receiver responses are autonomously learned through trial and error. In addition, to robustify the caregiver's policy, we propose a strategy for sampling a care-receiver's response in an adversarial manner during training. We evaluated the proposed method using tasks in Assistive Gym. We demonstrate that policies trained with a popular deep RL method are vulnerable to changes in the policies of other agents and that the proposed framework improves robustness against such changes.
[ "['Takayuki Osa' 'Tatsuya Harada']" ]
null
null
2403.00352
null
null
http://arxiv.org/pdf/2403.00352v1
2024-03-01T08:31:58Z
2024-03-01T08:31:58Z
Revisiting Disentanglement in Downstream Tasks: A Study on Its Necessity for Abstract Visual Reasoning
In representation learning, a disentangled representation is highly desirable as it encodes generative factors of data in a separable and compact pattern. Researchers have advocated leveraging disentangled representations to complete downstream tasks with encouraging empirical evidence. This paper further investigates the necessity of disentangled representation in downstream applications. Specifically, we show that dimension-wise disentangled representations are unnecessary on a fundamental downstream task, abstract visual reasoning. We provide extensive empirical evidence against the necessity of disentanglement, covering multiple datasets, representation learning methods, and downstream network architectures. Furthermore, our findings suggest that the informativeness of representations is a better indicator of downstream performance than disentanglement. Finally, the positive correlation between informativeness and disentanglement explains the claimed usefulness of disentangled representations in previous works. The source code is available at https://github.com/Richard-coder-Nai/disentanglement-lib-necessity.git.
[ "['Ruiqian Nai' 'Zixin Wen' 'Ji Li' 'Yuanzhi Li' 'Yang Gao']" ]
null
null
2403.00376
null
null
http://arxiv.org/pdf/2403.00376v2
2024-06-03T07:09:39Z
2024-03-01T09:01:53Z
Spurious Feature Eraser: Stabilizing Test-Time Adaptation for Vision-Language Foundation Model
Vision-language foundation models have exhibited remarkable success across a multitude of downstream tasks due to their scalability on extensive image-text paired data. However, these models also display significant limitations when applied to downstream tasks, such as fine-grained image classification, as a result of "decision shortcuts" that hinder their generalization capabilities. In this work, we find that the CLIP model possesses a rich set of features, encompassing both desired invariant causal features and undesired decision shortcuts. Moreover, the underperformance of CLIP on downstream tasks originates from its inability to effectively utilize pre-trained features in accordance with specific task requirements. To address this challenge, we propose a simple yet effective method, Spurious Feature Eraser (SEraser), to alleviate the decision shortcuts by erasing the spurious features. Specifically, we introduce a test-time prompt tuning paradigm that optimizes a learnable prompt, thereby compelling the model to exploit invariant features while disregarding decision shortcuts during the inference phase. The proposed method effectively alleviates excessive dependence on potentially misleading spurious information. We conduct a comparative analysis of the proposed method against various approaches, which validates its significant superiority.
[ "['Huan Ma' 'Yan Zhu' 'Changqing Zhang' 'Peilin Zhao' 'Baoyuan Wu'\n 'Long-Kai Huang' 'Qinghua Hu' 'Bingzhe Wu']" ]
null
null
2403.00381
null
null
http://arxiv.org/pdf/2403.00381v1
2024-03-01T09:09:37Z
2024-03-01T09:09:37Z
Structured Deep Neural Networks-Based Backstepping Trajectory Tracking Control for Lagrangian Systems
Deep neural networks (DNN) are increasingly being used to learn controllers due to their excellent approximation capabilities. However, their black-box nature poses significant challenges to closed-loop stability guarantees and performance analysis. In this paper, we introduce a structured DNN-based controller for the trajectory tracking control of Lagrangian systems using backstepping techniques. By properly designing neural network structures, the proposed controller can ensure closed-loop stability for any compatible neural network parameters. In addition, improved control performance can be achieved by further optimizing neural network parameters. Moreover, we provide explicit upper bounds on tracking errors in terms of controller parameters, which allows us to achieve the desired tracking performance by properly selecting the controller parameters. Furthermore, when system models are unknown, we propose an improved Lagrangian neural network (LNN) structure to learn the system dynamics and design the controller. We show that in the presence of model approximation errors and external disturbances, the closed-loop stability and tracking control performance can still be guaranteed. The effectiveness of the proposed approach is demonstrated through simulations.
[ "['Jiajun Qian' 'Liang Xu' 'Xiaoqiang Ren' 'Xiaofan Wang']" ]
null
null
2403.00403
null
null
http://arxiv.org/pdf/2403.00403v1
2024-03-01T09:49:53Z
2024-03-01T09:49:53Z
Fractal interpolation in the context of prediction accuracy optimization
This paper focuses on the hypothesis of optimizing time series predictions using fractal interpolation techniques. In general, the accuracy of machine learning model predictions is closely related to the quality and quantitative aspects of the data used, following the principle of garbage in, garbage out. In order to quantitatively and qualitatively augment datasets, one of the most prevalent concerns of data scientists is to generate synthetic data, which should follow as closely as possible the actual pattern of the original data. This study proposes three different data augmentation strategies based on fractal interpolation, namely the Closest Hurst Strategy, Closest Values Strategy, and Formula Strategy. To validate the strategies, we used four public datasets from the literature, as well as a private dataset obtained from meteorological records in the city of Brasov, Romania. The prediction results obtained with the LSTM model using the presented interpolation strategies showed a significant accuracy improvement compared to the raw datasets, thus providing a possible answer to practical problems in the field of remote sensing and sensor sensitivity. Moreover, our methodologies answer some optimization-related open questions for the fractal interpolation step using the Optuna framework.
[ "['Alexandra Baicoianu' 'Cristina Gabriela Gavrilă'\n 'Cristina Maria Pacurar' 'Victor Dan Pacurar']" ]
null
null
2403.00409
null
null
http://arxiv.org/pdf/2403.00409v2
2024-04-12T01:09:37Z
2024-03-01T09:55:18Z
Provably Robust DPO: Aligning Language Models with Noisy Feedback
Learning from preference-based feedback has recently gained traction as a promising approach to align language models with human interests. While these aligned generative models have demonstrated impressive capabilities across various tasks, their dependence on high-quality human preference data poses a bottleneck in practical applications. Specifically, noisy (incorrect and ambiguous) preference pairs in the dataset might restrict the language models from capturing human intent accurately. While practitioners have recently proposed heuristics to mitigate the effect of noisy preferences, a complete theoretical understanding of their workings remains elusive. In this work, we aim to bridge this gap by introducing a general framework for policy optimization in the presence of random preference flips. We focus on the direct preference optimization (DPO) algorithm in particular since it assumes that preferences adhere to the Bradley-Terry-Luce (BTL) model, raising concerns about the impact of noisy data on the learned policy. We design a novel loss function, which de-biases the effect of noise on average, making a policy trained by minimizing that loss robust to the noise. Under log-linear parameterization of the policy class and assuming good feature coverage of the SFT policy, we prove that the sub-optimality gap of the proposed robust DPO (rDPO) policy compared to the optimal policy is of the order $O\left(\frac{1}{1-2\epsilon}\sqrt{\frac{d}{n}}\right)$, where $\epsilon < 1/2$ is the label flip rate, $d$ is the policy parameter dimension, and $n$ is the dataset size. Our experiments on IMDb sentiment generation and Anthropic's helpful-harmless dataset show that rDPO is robust to noise in preference labels compared to vanilla DPO and other heuristics proposed by practitioners.
[ "['Sayak Ray Chowdhury' 'Anush Kini' 'Nagarajan Natarajan']" ]
null
null
2403.00420
null
null
http://arxiv.org/pdf/2403.00420v1
2024-03-01T10:16:46Z
2024-03-01T10:16:46Z
Robust Deep Reinforcement Learning Through Adversarial Attacks and Training : A Survey
Deep Reinforcement Learning (DRL) is an approach for training autonomous agents across various complex environments. Despite its strong performance in well-known environments, it remains susceptible to minor variations in conditions, raising concerns about its reliability in real-world applications. To improve usability, DRL must demonstrate trustworthiness and robustness. One way to improve the robustness of DRL to unknown changes in conditions is Adversarial Training: training the agent against well-suited adversarial attacks on the dynamics of the environment. Addressing this critical issue, our work presents an in-depth analysis of contemporary adversarial attack methodologies, systematically categorizing them and comparing their objectives and operational mechanisms. This classification offers detailed insight into how adversarial attacks can be used to evaluate the resilience of DRL agents, thereby paving the way for enhancing their robustness.
[ "['Lucas Schott' 'Josephine Delas' 'Hatem Hajri' 'Elies Gherbi'\n 'Reda Yaich' 'Nora Boulahia-Cuppens' 'Frederic Cuppens'\n 'Sylvain Lamprier']" ]
null
null
2403.00423
null
null
http://arxiv.org/pdf/2403.00423v2
2024-06-24T13:43:10Z
2024-03-01T10:19:32Z
Validation of ML-UQ calibration statistics using simulated reference values: a sensitivity analysis
Some popular Machine Learning Uncertainty Quantification (ML-UQ) calibration statistics do not have predefined reference values and are mostly used in comparative studies. In consequence, calibration is almost never validated and the diagnostic is left to the appreciation of the reader. Simulated reference values, based on synthetic calibrated datasets derived from actual uncertainties, have been proposed to palliate this problem. As the generative probability distribution for the simulation of synthetic errors is often not constrained, the sensitivity of simulated reference values to the choice of generative distribution might be problematic, casting doubt on the calibration diagnostic. This study explores various facets of this problem and shows that some statistics are excessively sensitive to the choice of generative distribution to be used for validation when the generative distribution is unknown. This is the case, for instance, of the correlation coefficient between absolute errors and uncertainties (CC) and of the expected normalized calibration error (ENCE). A robust validation workflow to deal with simulated reference values is proposed.
[ "['Pascal Pernot']" ]
null
null
2403.00425
null
null
http://arxiv.org/pdf/2403.00425v2
2024-06-10T15:21:41Z
2024-03-01T10:21:52Z
HALC: Object Hallucination Reduction via Adaptive Focal-Contrast Decoding
While large vision-language models (LVLMs) have demonstrated impressive capabilities in interpreting multi-modal contexts, they invariably suffer from object hallucinations (OH). We introduce HALC, a novel decoding algorithm designed to mitigate OH in LVLMs. HALC leverages distinct fine-grained optimal visual information in vision-language tasks and operates on both local and global contexts simultaneously. Specifically, HALC integrates a robust auto-focal grounding mechanism (locally) to correct hallucinated tokens on the fly, and a specialized beam search algorithm (globally) to significantly reduce OH while preserving text generation quality. Additionally, HALC can be integrated into any LVLM as a plug-and-play module without extra training. Extensive experimental studies demonstrate the effectiveness of HALC in reducing OH, outperforming state-of-the-art methods across four benchmarks.
[ "['Zhaorun Chen' 'Zhuokai Zhao' 'Hongyin Luo' 'Huaxiu Yao' 'Bo Li'\n 'Jiawei Zhou']" ]
null
null
2403.00437
null
null
http://arxiv.org/pdf/2403.00437v1
2024-03-01T10:46:47Z
2024-03-01T10:46:47Z
LoMOE: Localized Multi-Object Editing via Multi-Diffusion
Recent developments in the field of diffusion models have demonstrated an exceptional capacity to generate high-quality prompt-conditioned image edits. Nevertheless, previous approaches have primarily relied on textual prompts for image editing, which tend to be less effective when making precise edits to specific objects or fine-grained regions within a scene containing single/multiple objects. We introduce a novel framework for zero-shot localized multi-object editing through a multi-diffusion process to overcome this challenge. This framework empowers users to perform various operations on objects within an image, such as adding, replacing, or editing many objects in a complex scene in one pass. Our approach leverages foreground masks and corresponding simple text prompts that exert localized influences on the target regions resulting in high-fidelity image editing. A combination of cross-attention and background preservation losses within the latent space ensures that the characteristics of the object being edited are preserved while simultaneously achieving a high-quality, seamless reconstruction of the background with fewer artifacts compared to the current methods. We also curate and release a dataset dedicated to multi-object editing, named LoMOE-Bench. Our experiments against existing state-of-the-art methods demonstrate the improved effectiveness of our approach in terms of both image editing quality and inference speed.
[ "['Goirik Chakrabarty' 'Aditya Chandrasekar' 'Ramya Hebbalaguppe'\n 'Prathosh AP']" ]
null
null
2403.00446
null
null
http://arxiv.org/pdf/2403.00446v1
2024-03-01T11:03:17Z
2024-03-01T11:03:17Z
Safe Hybrid-Action Reinforcement Learning-Based Decision and Control for Discretionary Lane Change
Autonomous lane-change, a key feature of advanced driver-assistance systems, can enhance traffic efficiency and reduce the incidence of accidents. However, safe driving of autonomous vehicles remains challenging in complex environments. How to perform safe and appropriate lane changes is a popular topic of research in the field of autonomous driving. Currently, few papers consider the safety of reinforcement learning in autonomous lane-change scenarios. We introduce safe hybrid-action reinforcement learning into discretionary lane change for the first time and propose the Parameterized Soft Actor-Critic with PID Lagrangian (PASAC-PIDLag) algorithm. Furthermore, we conduct a comparative analysis of the Parameterized Soft Actor-Critic (PASAC), which is an unsafe version of PASAC-PIDLag. Both algorithms are employed to train the lane-change strategy of autonomous vehicles to output discrete lane-change decisions and longitudinal vehicle acceleration. Our simulation results indicate that at a traffic density of 15 vehicles per kilometer (15 veh/km), the PASAC-PIDLag algorithm exhibits superior safety with a collision rate of 0%, outperforming the PASAC algorithm, which has a collision rate of 1%. The outcomes of the generalization assessments reveal that at low traffic density levels, both the PASAC-PIDLag and PASAC algorithms are proficient in attaining a 0% collision rate. Under conditions of high traffic flow density, the PASAC-PIDLag algorithm surpasses PASAC in terms of both safety and optimality.
[ "['Ruichen Xu' 'Xiao Liu' 'Jinming Xu' 'Yuan Lin']" ]
null
null
2403.00470
null
null
http://arxiv.org/pdf/2403.00470v1
2024-03-01T11:54:25Z
2024-03-01T11:54:25Z
Autonomous Robotic Arm Manipulation for Planetary Missions using Causal Machine Learning
Autonomous robotic arm manipulators have the potential to make planetary exploration and in-situ resource utilization missions more time efficient and productive, as the manipulator can handle the objects itself and perform goal-specific actions. We train a manipulator to autonomously study objects of which it has no prior knowledge, such as planetary rocks. This is achieved using causal machine learning in a simulated planetary environment. Here, the manipulator interacts with objects, and classifies them based on differing causal factors. These are parameters, such as mass or friction coefficient, that causally determine the outcomes of its interactions. Through reinforcement learning, the manipulator learns to interact in ways that reveal the underlying causal factors. We show that this method works even without any prior knowledge of the objects, or any previously-collected training data. We carry out the training in planetary exploration conditions, with realistic manipulator models.
[ "['C. McDonnell' 'M. Arana-Catania' 'S. Upadhyay']" ]
null
null
2403.00485
null
null
http://arxiv.org/pdf/2403.00485v1
2024-03-01T12:13:04Z
2024-03-01T12:13:04Z
A Survey of Geometric Graph Neural Networks: Data Structures, Models and Applications
A geometric graph is a special kind of graph with geometric features, which is vital for modeling many scientific problems. Unlike generic graphs, geometric graphs often exhibit physical symmetries of translations, rotations, and reflections, which current Graph Neural Networks (GNNs) cannot process effectively. To tackle this issue, researchers have proposed a variety of Geometric Graph Neural Networks equipped with invariant/equivariant properties to better characterize the geometry and topology of geometric graphs. Given the current progress in this field, it is imperative to conduct a comprehensive survey of data structures, models, and applications related to geometric GNNs. In this paper, based on the necessary but concise mathematical preliminaries, we provide a unified view of existing models from the geometric message passing perspective. Additionally, we summarize the applications as well as the related datasets to facilitate later research on methodology development and experimental evaluation. We also discuss the challenges and potential future directions of Geometric GNNs at the end of this survey.
[ "['Jiaqi Han' 'Jiacheng Cen' 'Liming Wu' 'Zongzhao Li' 'Xiangzhe Kong'\n 'Rui Jiao' 'Ziyang Yu' 'Tingyang Xu' 'Fandi Wu' 'Zihe Wang' 'Hongteng Xu'\n 'Zhewei Wei' 'Yang Liu' 'Yu Rong' 'Wenbing Huang']" ]
null
null
2403.00504
null
null
http://arxiv.org/pdf/2403.00504v1
2024-03-01T13:05:38Z
2024-03-01T13:05:38Z
Learning and Leveraging World Models in Visual Representation Learning
Joint-Embedding Predictive Architecture (JEPA) has emerged as a promising self-supervised approach that learns by leveraging a world model. While JEPA has previously been limited to predicting missing parts of an input, we explore how to generalize the JEPA prediction task to a broader set of corruptions. We introduce Image World Models, an approach that goes beyond masked image modeling and learns to predict the effect of global photometric transformations in latent space. We study the recipe of learning performant IWMs and show that it relies on three key aspects: conditioning, prediction difficulty, and capacity. Additionally, we show that the predictive world model learned by IWM can be adapted through finetuning to solve diverse tasks; a fine-tuned IWM world model matches or surpasses the performance of previous self-supervised methods. Finally, we show that learning with an IWM allows one to control the abstraction level of the learned representations, learning invariant representations, as contrastive methods do, or equivariant representations, as masked image modelling does.
[ "['Quentin Garrido' 'Mahmoud Assran' 'Nicolas Ballas' 'Adrien Bardes'\n 'Laurent Najman' 'Yann LeCun']" ]
null
null
2403.00514
null
null
http://arxiv.org/pdf/2403.00514v2
2024-06-19T11:32:01Z
2024-03-01T13:25:10Z
Overestimation, Overfitting, and Plasticity in Actor-Critic: the Bitter Lesson of Reinforcement Learning
Recent advancements in off-policy Reinforcement Learning (RL) have significantly improved sample efficiency, primarily due to the incorporation of various forms of regularization that enable more gradient update steps than traditional agents. However, many of these techniques have been tested in limited settings, often on tasks from single simulation benchmarks and against well-known algorithms rather than a range of regularization approaches. This limits our understanding of the specific mechanisms driving RL improvements. To address this, we implemented over 60 different off-policy agents, each integrating established regularization techniques from recent state-of-the-art algorithms. We tested these agents across 14 diverse tasks from 2 simulation benchmarks, measuring training metrics related to overestimation, overfitting, and plasticity loss -- issues that motivate the examined regularization techniques. Our findings reveal that while the effectiveness of a specific regularization setup varies with the task, certain combinations consistently demonstrate robust and superior performance. Notably, a simple Soft Actor-Critic agent, appropriately regularized, reliably finds a better-performing policy within the training regime, which previously was achieved mainly through model-based approaches.
[ "['Michal Nauman' 'Michał Bortkiewicz' 'Piotr Miłoś' 'Tomasz Trzciński'\n 'Mateusz Ostaszewski' 'Marek Cygan']" ]
null
null
2403.00529
null
null
http://arxiv.org/pdf/2403.00529v1
2024-03-01T13:39:56Z
2024-03-01T13:39:56Z
VoxGenesis: Unsupervised Discovery of Latent Speaker Manifold for Speech Synthesis
Achieving nuanced and accurate emulation of human voice has been a longstanding goal in artificial intelligence. Although significant progress has been made in recent years, mainstream speech synthesis models still rely on supervised speaker modeling and explicit reference utterances. However, there are many aspects of human voice, such as emotion, intonation, and speaking style, for which it is hard to obtain accurate labels. In this paper, we propose VoxGenesis, a novel unsupervised speech synthesis framework that can discover a latent speaker manifold and meaningful voice editing directions without supervision. VoxGenesis is conceptually simple. Instead of mapping speech features to waveforms deterministically, VoxGenesis transforms a Gaussian distribution into speech distributions conditioned and aligned by semantic tokens. This forces the model to learn a speaker distribution disentangled from the semantic content. During inference, sampling from the Gaussian distribution enables the creation of novel speakers with distinct characteristics. More importantly, the exploration of latent space uncovers human-interpretable directions associated with specific speaker characteristics such as gender attributes, pitch, tone, and emotion, allowing for voice editing by manipulating the latent codes along these identified directions. We conduct extensive experiments to evaluate the proposed VoxGenesis using both subjective and objective metrics, finding that it produces significantly more diverse and realistic speakers with distinct characteristics than the previous approaches. We also show that latent space manipulation produces consistent and human-identifiable effects that are not detrimental to the speech quality, which was not possible with previous approaches. Audio samples of VoxGenesis can be found at: https://bit.ly/VoxGenesis.
[ "['Weiwei Lin' 'Chenhang He' 'Man-Wai Mak' 'Jiachen Lian' 'Kong Aik Lee']" ]
null
null
2403.00540
null
null
http://arxiv.org/pdf/2403.00540v2
2024-05-04T14:04:48Z
2024-03-01T13:53:44Z
Epsilon-Greedy Thompson Sampling to Bayesian Optimization
Bayesian optimization (BO) has become a powerful tool for solving simulation-based engineering optimization problems thanks to its ability to integrate physical and mathematical understandings, consider uncertainty, and address the exploitation--exploration dilemma. Thompson sampling (TS) is a preferred solution for BO to handle the exploitation--exploration trade-off. While it prioritizes exploration by generating and minimizing random sample paths from probabilistic models -- a fundamental ingredient of BO -- TS weakly manages exploitation by gathering information about the true objective function after it obtains new observations. In this work, we improve the exploitation of TS by incorporating the $\varepsilon$-greedy policy, a well-established selection strategy in reinforcement learning. We first delineate two extremes of TS, namely the generic TS and the sample-average TS. The former promotes exploration, while the latter favors exploitation. We then adopt the $\varepsilon$-greedy policy to randomly switch between these two extremes. Small and large values of $\varepsilon$ govern exploitation and exploration, respectively. By minimizing two benchmark functions and solving an inverse problem of a steel cantilever beam, we empirically show that $\varepsilon$-greedy TS equipped with an appropriate $\varepsilon$ is more robust than its two extremes, matching or outperforming the better of the generic TS and the sample-average TS.
[ "['Bach Do' 'Taiwo Adebiyi' 'Ruda Zhang']" ]
null
null
2403.00542
null
null
http://arxiv.org/pdf/2403.00542v1
2024-03-01T13:56:36Z
2024-03-01T13:56:36Z
Machine Learning Training Optimization using the Barycentric Correction Procedure
Machine learning (ML) algorithms are competitive predictive algorithms with many human-impact applications. However, the issue of long execution time in high-dimensional spaces remains unsolved in the literature. This study proposes combining ML algorithms with an efficient methodology known as the barycentric correction procedure (BCP) to address this issue. The study uses synthetic data and an educational dataset from a private university to show the benefits of the proposed method. It was found that this combination provides significant time-related benefits on synthetic and real data, without losing accuracy, as the number of instances and dimensions increases. Additionally, for high-dimensional spaces, it was shown that BCP and linear support vector classification (LinearSVC), after an estimated feature map for the Gaussian radial basis function (RBF) kernel, were infeasible in terms of computational time and accuracy.
[ "['Sofia Ramos-Pulido' 'Neil Hernandez-Gress' 'Hector G. Ceballos-Cancino']" ]
null
null
2403.00550
null
null
http://arxiv.org/pdf/2403.00550v1
2024-03-01T14:18:46Z
2024-03-01T14:18:46Z
Imitation Learning Datasets: A Toolkit For Creating Datasets, Training Agents and Benchmarking
The imitation learning field requires expert data to train agents on a task. Most often, this learning approach suffers from the absence of available data, which results in each technique being tested on its own dataset. Creating datasets is a cumbersome process that requires researchers to train expert agents from scratch, record their interactions, and test each benchmark method with newly created data. Moreover, creating new datasets for each new technique results in a lack of consistency in the evaluation process, since each dataset can vary drastically in state and action distribution. In response, this work aims to address these issues by creating Imitation Learning Datasets, a toolkit that allows for: (i) curated expert policies with multithreaded support for faster dataset creation; (ii) readily available datasets and techniques with precise measurements; and (iii) sharing implementations of common imitation learning techniques. Demonstration link: https://nathangavenski.github.io/#/il-datasets-video
[ "['Nathan Gavenski' 'Michael Luck' 'Odinaldo Rodrigues']" ]
null
null
2403.00563
null
null
http://arxiv.org/pdf/2403.00563v1
2024-03-01T14:41:51Z
2024-03-01T14:41:51Z
Indirectly Parameterized Concrete Autoencoders
Feature selection is a crucial task in settings where data is high-dimensional or acquiring the full set of features is costly. Recent developments in neural network-based embedded feature selection show promising results across a wide range of applications. Concrete Autoencoders (CAEs), considered state-of-the-art in embedded feature selection, may struggle to achieve stable joint optimization, hurting their training time and generalization. In this work, we identify that this instability is correlated with the CAE learning duplicate selections. To remedy this, we propose a simple and effective improvement: Indirectly Parameterized CAEs (IP-CAEs). IP-CAEs learn an embedding and a mapping from it to the Gumbel-Softmax distributions' parameters. Despite being simple to implement, IP-CAE exhibits significant and consistent improvements over CAE in both generalization and training time across several datasets for reconstruction and classification. Unlike CAE, IP-CAE effectively leverages non-linear relationships and does not require retraining the jointly optimized decoder. Furthermore, our approach is, in principle, generalizable to Gumbel-Softmax distributions beyond feature selection.
[ "['Alfred Nilsson' 'Klas Wijk' 'Sai bharath chandra Gutha' 'Erik Englesson'\n 'Alexandra Hotti' 'Carlo Saccardi' 'Oskar Kviman' 'Jens Lagergren'\n 'Ricardo Vinuesa' 'Hossein Azizpour']" ]
null
null
2403.00564
null
null
http://arxiv.org/pdf/2403.00564v1
2024-03-01T14:42:25Z
2024-03-01T14:42:25Z
EfficientZero V2: Mastering Discrete and Continuous Control with Limited Data
Sample efficiency remains a crucial challenge in applying Reinforcement Learning (RL) to real-world tasks. While recent algorithms have made significant strides in improving sample efficiency, none have achieved consistently superior performance across diverse domains. In this paper, we introduce EfficientZero V2, a general framework designed for sample-efficient RL algorithms. We have expanded the performance of EfficientZero to multiple domains, encompassing both continuous and discrete actions, as well as visual and low-dimensional inputs. With a series of improvements we propose, EfficientZero V2 outperforms the current state-of-the-art (SOTA) by a significant margin in diverse tasks under the limited data setting. EfficientZero V2 exhibits a notable advancement over the prevailing general algorithm, DreamerV3, achieving superior outcomes in 50 of 66 evaluated tasks across diverse benchmarks, such as Atari 100k, Proprio Control, and Vision Control.
[ "['Shengjie Wang' 'Shaohuai Liu' 'Weirui Ye' 'Jiacheng You' 'Yang Gao']" ]
null
null
2403.00570
null
null
http://arxiv.org/pdf/2403.00570v1
2024-03-01T14:47:46Z
2024-03-01T14:47:46Z
Rethinking cluster-conditioned diffusion models
We present a comprehensive experimental study on image-level conditioning for diffusion models using cluster assignments. We elucidate how individual components regarding image clustering impact image synthesis across three datasets. By combining recent advancements from image clustering and diffusion models, we show that, given the optimal cluster granularity with respect to image synthesis (visual groups), cluster-conditioning can achieve state-of-the-art FID (i.e. 1.67, 2.17 on CIFAR10 and CIFAR100 respectively), while attaining a strong training sample efficiency. Finally, we propose a novel method to derive an upper cluster bound that reduces the search space of the visual groups using solely feature-based clustering. Unlike existing approaches, we find no significant connection between clustering and cluster-conditional image generation. The code and cluster assignments will be released.
[ "['Nikolas Adaloglou' 'Tim Kaiser' 'Felix Michels' 'Markus Kollmann']" ]
null
null
2403.00574
null
null
http://arxiv.org/pdf/2403.00574v1
2024-03-01T14:55:22Z
2024-03-01T14:55:22Z
Beyond Single-Model Views for Deep Learning: Optimization versus Generalizability of Stochastic Optimization Algorithms
Despite an extensive body of literature on deep learning optimization, our current understanding of what makes an optimization algorithm effective is fragmented. In particular, we do not understand well whether enhanced optimization translates to improved generalizability. Current research overlooks the inherent stochastic nature of stochastic gradient descent (SGD) and its variants, resulting in a lack of comprehensive benchmarking and insight into their statistical performance. This paper aims to address this gap by adopting a novel approach. Rather than solely evaluating the endpoint of individual optimization trajectories, we draw from an ensemble of trajectories to estimate the stationary distribution of stochastic optimizers. Our investigation encompasses a wide array of techniques, including SGD and its variants, flat-minima optimizers, and new algorithms we propose under the Basin Hopping framework. Through our evaluation, which encompasses synthetic functions with known minima and real-world problems in computer vision and natural language processing, we emphasize fair benchmarking under a statistical framework, comparing stationary distributions and establishing statistical significance. Our study uncovers several key findings regarding the relationship between training loss and hold-out accuracy, as well as the comparable performance of SGD, noise-enabled variants, and novel optimizers utilizing the BH framework. Notably, these algorithms demonstrate performance on par with flat-minima optimizers like SAM, albeit with half the gradient evaluations. We anticipate that our work will catalyze further exploration in deep learning optimization, encouraging a shift away from single-model approaches towards methodologies that acknowledge and leverage the stochastic nature of optimizers.
[ "['Toki Tahmid Inan' 'Mingrui Liu' 'Amarda Shehu']" ]