categories (string) | doi (string) | id (string) | year (float64) | venue (string) | link (string) | updated (string) | published (string) | title (string) | abstract (string) | authors (list)
---|---|---|---|---|---|---|---|---|---|---
null | null | 2402.17886 | null | null | http://arxiv.org/pdf/2402.17886v3 | 2024-05-26T08:14:30Z | 2024-02-27T21:00:00Z | Zeroth-Order Sampling Methods for Non-Log-Concave Distributions: Alleviating Metastability by Denoising Diffusion | This paper considers the problem of sampling from a non-log-concave distribution, based on queries of its unnormalized density. It first describes a framework, Diffusion Monte Carlo (DMC), based on the simulation of a denoising diffusion process with its score function approximated by a generic Monte Carlo estimator. DMC is an oracle-based meta-algorithm, where its oracle is the assumed access to samples that generate a Monte Carlo score estimator. Then we provide an implementation of this oracle, based on rejection sampling, and this turns DMC into a true algorithm, termed Zeroth-Order Diffusion Monte Carlo (ZOD-MC). We provide convergence analyses by first constructing a general framework, i.e. a performance guarantee for DMC, without assuming the target distribution to be log-concave or satisfying any isoperimetric inequality. Then we prove that ZOD-MC admits an inverse polynomial dependence on the desired sampling accuracy, albeit still suffering from the curse of dimensionality. Consequently, for low-dimensional distributions, ZOD-MC is a very efficient sampler, with performance exceeding the latest samplers, including the also-denoising-diffusion-based RDMC and RS-DMC. Lastly, we experimentally demonstrate the insensitivity of ZOD-MC to increasingly higher barriers between modes or discontinuity in non-convex potentials. | ['Ye He' 'Kevin Rojas' 'Molei Tao']
null | null | 2402.17888 | null | null | http://arxiv.org/pdf/2402.17888v1 | 2024-02-27T21:02:47Z | 2024-02-27T21:02:47Z | ConjNorm: Tractable Density Estimation for Out-of-Distribution Detection | Post-hoc out-of-distribution (OOD) detection has garnered intensive attention in reliable machine learning. Many efforts have been dedicated to deriving score functions based on logits, distances, or rigorous data distribution assumptions to identify low-scoring OOD samples. Nevertheless, these estimated scores may fail to accurately reflect the true data density or impose impractical constraints. To provide a unified perspective on density-based score design, we propose a novel theoretical framework grounded in Bregman divergence, which extends distribution considerations to encompass an exponential family of distributions. Leveraging the conjugation constraint revealed in our theorem, we introduce a \textsc{ConjNorm} method, reframing density function design as a search for the optimal norm coefficient $p$ against the given dataset. In light of the computational challenges of normalization, we devise an unbiased and analytically tractable estimator of the partition function using the Monte Carlo-based importance sampling technique. Extensive experiments across OOD detection benchmarks empirically demonstrate that our proposed \textsc{ConjNorm} has established a new state of the art in a variety of OOD detection setups, outperforming the current best method by up to 13.25\% and 28.19\% (FPR95) on CIFAR-100 and ImageNet-1K, respectively. | ['Bo Peng' 'Yadan Luo' 'Yonggang Zhang' 'Yixuan Li' 'Zhen Fang']
null | null | 2402.17890 | null | null | http://arxiv.org/pdf/2402.17890v2 | 2024-06-05T02:38:37Z | 2024-02-27T21:06:42Z | From Inverse Optimization to Feasibility to ERM | Inverse optimization involves inferring unknown parameters of an optimization problem from known solutions and is widely used in fields such as transportation, power systems, and healthcare. We study the contextual inverse optimization setting that utilizes additional contextual information to better predict the unknown problem parameters. We focus on contextual inverse linear programming (CILP), addressing the challenges posed by the non-differentiable nature of LPs. For a linear prediction model, we reduce CILP to a convex feasibility problem allowing the use of standard algorithms such as alternating projections. The resulting algorithm for CILP is equipped with theoretical convergence guarantees without additional assumptions such as degeneracy or interpolation. Next, we reduce CILP to empirical risk minimization (ERM) on a smooth, convex loss that satisfies the Polyak-Lojasiewicz condition. This reduction enables the use of scalable first-order optimization methods to solve large non-convex problems while maintaining theoretical guarantees in the convex setting. Subsequently, we use the reduction to ERM to quantify the generalization performance of the proposed algorithm on previously unseen instances. Finally, we experimentally validate our approach on synthetic and real-world problems and demonstrate improved performance compared to existing methods. | ['Saurabh Mishra' 'Anant Raj' 'Sharan Vaswani']
null | null | 2402.17898 | null | null | http://arxiv.org/pdf/2402.17898v1 | 2024-02-27T21:28:08Z | 2024-02-27T21:28:08Z | Exoplanets Prediction in Multi-Planetary Systems and Determining the Correlation Between the Parameters of Planets and Host Stars Using Artificial Intelligence | The number of extrasolar planets discovered is increasing, so that more than five thousand exoplanets have been confirmed to date. Now we have an opportunity to test the validity of the laws governing planetary systems and take steps to discover the relationships between the physical parameters of planets and stars. Firstly, we present the results of a search for additional exoplanets in 229 multi-planetary systems that host three or more confirmed planets, employing a logarithmic spacing between planets in our Solar System known as the Titius-Bode (TB) relation. We find that the planets in $\sim 53\%$ of these systems adhere to a logarithmic spacing relation remarkably better than the Solar System planets. We predict the presence of 426 additional exoplanets, 47 of which are located within the habitable zone (HZ), and five of the 47 planets have a maximum mass limit of 0.1-2 $M_{\oplus}$ and a maximum radius lower than 1.25 $R_{\oplus}$. Secondly, we employ efficient machine learning approaches to analyze a dataset comprising 762 confirmed exoplanets and eight Solar System planets, aiming to characterize their fundamental quantities. We classify the data into two main classes: 'small' and 'giant' planets, with cut-off values at $R_{p}=8.13R_{\oplus}$ and $M_{p}=52.48M_{\oplus}$. Giant planets have lower densities, suggesting higher H-He mass fractions, while small planets are denser, composed mainly of heavier elements. We highlight that planetary mass, orbital period, and stellar mass play crucial roles in predicting exoplanet radius. Notably, our study reveals a noteworthy result: for giant planets, we observe a strong correlation between planetary radius and the mass of their host stars, which might provide intriguing insights into the relationship between giant planet formation and stellar characteristics. | ['Mahdiyar Mousavi-Sadr']
null | null | 2402.17902 | null | null | http://arxiv.org/pdf/2402.17902v1 | 2024-02-27T21:42:18Z | 2024-02-27T21:42:18Z | SequentialAttention++ for Block Sparsification: Differentiable Pruning Meets Combinatorial Optimization | Neural network pruning is a key technique towards engineering large yet scalable, interpretable, and generalizable models. Prior work on the subject has developed largely along two orthogonal directions: (1) differentiable pruning for efficiently and accurately scoring the importance of parameters, and (2) combinatorial optimization for efficiently searching over the space of sparse models. We unite the two approaches, both theoretically and empirically, to produce a coherent framework for structured neural network pruning in which differentiable pruning guides combinatorial optimization algorithms to select the most important sparse set of parameters. Theoretically, we show how many existing differentiable pruning techniques can be understood as nonconvex regularization for group sparse optimization, and prove that for a wide class of nonconvex regularizers, the global optimum is unique, group-sparse, and provably yields an approximate solution to a sparse convex optimization problem. The resulting algorithm that we propose, SequentialAttention++, advances the state of the art in large-scale neural network block-wise pruning tasks on the ImageNet and Criteo datasets. | ['Taisuke Yasuda' 'Kyriakos Axiotis' 'Gang Fu' 'MohammadHossein Bateni' 'Vahab Mirrokni']
null | null | 2402.17905 | null | null | http://arxiv.org/pdf/2402.17905v3 | 2024-04-22T13:07:38Z | 2024-02-27T21:43:14Z | Using Graph Neural Networks to Predict Local Culture | Urban research has long recognized that neighbourhoods are dynamic and relational. However, lack of data, methodologies, and computer processing power have hampered a formal quantitative examination of neighbourhood relational dynamics. To make progress on this issue, this study proposes a graph neural network (GNN) approach that permits combining and evaluating multiple sources of information about internal characteristics of neighbourhoods, their past characteristics, and flows of groups among them, potentially providing greater expressive power in predictive models. By exploring a public large-scale dataset from Yelp, we show the potential of our approach for considering structural connectedness in predicting neighbourhood attributes, specifically to predict local culture. Results are promising from a substantive and methodological point of view. Substantively, we find that either local area information (e.g. area demographics) or group profiles (tastes of Yelp reviewers) give the best results in predicting local culture, and they are nearly equivalent in all studied cases. Methodologically, exploring group profiles could be a helpful alternative where finding local information for specific areas is challenging, since they can be extracted automatically from many forms of online data. Thus, our approach could empower researchers and policy-makers to use a range of data sources when other local area information is lacking. | ['Thiago H Silva' 'Daniel Silver']
null | null | 2402.17906 | null | null | http://arxiv.org/pdf/2402.17906v1 | 2024-02-27T21:47:06Z | 2024-02-27T21:47:06Z | Representation learning in multiplex graphs: Where and how to fuse information? | In recent years, unsupervised and self-supervised graph representation learning has gained popularity in the research community. However, most proposed methods are focused on homogeneous networks, whereas real-world graphs often contain multiple node and edge types. Multiplex graphs, a special type of heterogeneous graphs, possess richer information, provide better modeling capabilities and integrate more detailed data from potentially different sources. The diverse edge types in multiplex graphs provide more context and insights into the underlying processes of representation learning. In this paper, we tackle the problem of learning representations for nodes in multiplex networks in an unsupervised or self-supervised manner. To that end, we explore diverse information fusion schemes performed at different levels of the graph processing pipeline. The detailed analysis and experimental evaluation of various scenarios inspired us to propose improvements in how to construct GNN architectures that deal with multiplex graphs. | ['Piotr Bielak' 'Tomasz Kajdanowicz']
null | null | 2402.17911 | null | null | http://arxiv.org/pdf/2402.17911v1 | 2024-02-27T21:53:32Z | 2024-02-27T21:53:32Z | Demonstration of Robust and Efficient Quantum Property Learning with Shallow Shadows | Extracting information efficiently from quantum systems is a major component of quantum information processing tasks. Randomized measurements, or classical shadows, enable predicting many properties of arbitrary quantum states using few measurements. While random single qubit measurements are experimentally friendly and suitable for learning low-weight Pauli observables, they perform poorly for nonlocal observables. Prepending a shallow random quantum circuit before measurements maintains this experimental friendliness, but also has favorable sample complexities for observables beyond low-weight Paulis, including high-weight Paulis and global low-rank properties such as fidelity. However, in realistic scenarios, quantum noise accumulated with each additional layer of the shallow circuit biases the results. To address these challenges, we propose the robust shallow shadows protocol. Our protocol uses Bayesian inference to learn the experimentally relevant noise model and mitigate it in postprocessing. This mitigation introduces a bias-variance trade-off: correcting for noise-induced bias comes at the cost of a larger estimator variance. Despite this increased variance, as we demonstrate on a superconducting quantum processor, our protocol correctly recovers state properties such as expectation values, fidelity, and entanglement entropy, while maintaining a lower sample complexity compared to the random single qubit measurement scheme. We also theoretically analyze the effects of noise on sample complexity and show how the optimal choice of the shallow shadow depth varies with noise strength. This combined theoretical and experimental analysis positions the robust shallow shadow protocol as a scalable, robust, and sample-efficient protocol for characterizing quantum states on current quantum computing platforms. | ['Hong-Ye Hu' 'Andi Gu' 'Swarnadeep Majumder' 'Hang Ren' 'Yipei Zhang' 'Derek S. Wang' 'Yi-Zhuang You' 'Zlatko Minev' 'Susanne F. Yelin' 'Alireza Seif']
null | null | 2402.17913 | null | null | http://arxiv.org/pdf/2402.17913v1 | 2024-02-27T22:00:50Z | 2024-02-27T22:00:50Z | Using AI libraries for Incompressible Computational Fluid Dynamics | Recently, there has been a huge effort focused on developing highly efficient open source libraries to perform Artificial Intelligence (AI) related computations on different computer architectures (for example, CPUs, GPUs and new AI processors). This has not only made the algorithms based on these libraries highly efficient and portable between different architectures, but also has substantially simplified the entry barrier to develop methods using AI. Here, we present a novel methodology to bring the power of both AI software and hardware into the field of numerical modelling by repurposing AI methods, such as Convolutional Neural Networks (CNNs), for the standard operations required in the numerical solution of Partial Differential Equations (PDEs). The aim of this work is to bring high performance, architecture agnosticism, and ease of use to the numerical solution of PDEs. We use the proposed methodology to solve the advection-diffusion equation, the non-linear Burgers equation, and incompressible flow past a bluff body. For the latter, a convolutional neural network is used as a multigrid solver in order to enforce the incompressibility constraint. We show that the presented methodology can solve all these problems using repurposed AI libraries in an efficient way, and presents a new avenue to explore in the development of methods to solve PDEs and Computational Fluid Dynamics problems with implicit methods. | ['Boyang Chen' 'Claire E. Heaney' 'Christopher C. Pain']
null | null | 2402.17917 | null | null | http://arxiv.org/pdf/2402.17917v1 | 2024-02-27T22:10:51Z | 2024-02-27T22:10:51Z | Collaborative learning of common latent representations in routinely collected multivariate ICU physiological signals | In Intensive Care Units (ICU), the abundance of multivariate time series presents an opportunity for machine learning (ML) to enhance patient phenotyping. In contrast to previous research focused on electronic health records (EHR), here we propose an ML approach for phenotyping using routinely collected physiological time series data. Our new algorithm integrates Long Short-Term Memory (LSTM) networks with collaborative filtering concepts to identify common physiological states across patients. Tested on real-world ICU clinical data for intracranial hypertension (IH) detection in patients with brain injury, our method achieved an area under the curve (AUC) of 0.889 and average precision (AP) of 0.725. Moreover, our algorithm outperforms autoencoders in learning more structured latent representations of the physiological signals. These findings highlight the promise of our methodology for patient phenotyping, leveraging routinely collected multivariate time series to improve clinical care practices. | ['Hollan Haule' 'Ian Piper' 'Patricia Jones' 'Tsz-Yan Milly Lo' 'Javier Escudero']
null | null | 2402.17918 | null | null | http://arxiv.org/pdf/2402.17918v1 | 2024-02-27T22:14:01Z | 2024-02-27T22:14:01Z | The Seeker's Dilemma: Realistic Formulation and Benchmarking for Hardware Trojan Detection | This work focuses on advancing security research in the hardware design space by formally defining the realistic problem of Hardware Trojan (HT) detection. The goal is to model HT detection more closely to the real world, i.e., describing the problem as "The Seeker's Dilemma" (an extension of Hide&Seek on a graph), where a detecting agent is unaware of whether circuits are infected by HTs or not. Using this theoretical problem formulation, we create a benchmark that consists of a mixture of HT-free and HT-infected restructured circuits while preserving their original functionalities. The restructured circuits are randomly infected by HTs, causing a situation where the defender is uncertain if a circuit is infected or not. We believe that our innovative dataset will help the community better judge the detection quality of different methods by comparing their success rates in circuit classification. We use our developed benchmark to evaluate three state-of-the-art HT detection tools to show baseline results for this approach. We use Principal Component Analysis to assess the strength of our benchmark, where we observe that some restructured HT-infected circuits are mapped closely to HT-free circuits, leading to significant label misclassification by detectors. | ['Amin Sarihi' 'Ahmad Patooghy' 'Abdel-Hameed A. Badawy' 'Peter Jamieson']
null | null | 2402.17926 | null | null | http://arxiv.org/pdf/2402.17926v2 | 2024-03-01T19:48:53Z | 2024-02-27T22:49:33Z | Certain and Approximately Certain Models for Statistical Learning | Real-world data is often incomplete and contains missing values. To train accurate models over real-world datasets, users need to spend a substantial amount of time and resources imputing and finding proper values for missing data items. In this paper, we demonstrate that it is possible to learn accurate models directly from data with missing values for certain training data and target models. We propose a unified approach for checking the necessity of data imputation to learn accurate models across various widely-used machine learning paradigms. We build efficient algorithms with theoretical guarantees to check this necessity and return accurate models in cases where imputation is unnecessary. Our extensive experiments indicate that our proposed algorithms significantly reduce the amount of time and effort needed for data imputation without imposing considerable computational overhead. | ['Cheng Zhen' 'Nischal Aryal' 'Arash Termehchy' 'Alireza Aghasi' 'Amandeep Singh Chabada']
null | null | 2402.17930 | null | null | http://arxiv.org/pdf/2402.17930v1 | 2024-02-27T23:06:53Z | 2024-02-27T23:06:53Z | Pragmatic Instruction Following and Goal Assistance via Cooperative Language-Guided Inverse Planning | People often give instructions whose meaning is ambiguous without further context, expecting that their actions or goals will disambiguate their intentions. How can we build assistive agents that follow such instructions in a flexible, context-sensitive manner? This paper introduces cooperative language-guided inverse plan search (CLIPS), a Bayesian agent architecture for pragmatic instruction following and goal assistance. Our agent assists a human by modeling them as a cooperative planner who communicates joint plans to the assistant, then performs multimodal Bayesian inference over the human's goal from actions and language, using large language models (LLMs) to evaluate the likelihood of an instruction given a hypothesized plan. Given this posterior, our assistant acts to minimize expected goal achievement cost, enabling it to pragmatically follow ambiguous instructions and provide effective assistance even when uncertain about the goal. We evaluate these capabilities in two cooperative planning domains (Doors, Keys & Gems and VirtualHome), finding that CLIPS significantly outperforms GPT-4V, LLM-based literal instruction following and unimodal inverse planning in both accuracy and helpfulness, while closely matching the inferences and assistive judgments provided by human raters. | ['Tan Zhi-Xuan' 'Lance Ying' 'Vikash Mansinghka' 'Joshua B. Tenenbaum']
null | null | 2402.17943 | null | null | http://arxiv.org/pdf/2402.17943v1 | 2024-02-27T23:52:58Z | 2024-02-27T23:52:58Z | Sequential transport maps using SoS density estimation and $α$-divergences | Transport-based density estimation methods are receiving growing interest because of their ability to efficiently generate samples from the approximated density. We further investigate the sequential transport maps framework proposed in arXiv:2106.04170 and arXiv:2303.02554, which builds on a sequence of composed Knothe-Rosenblatt (KR) maps. Each of those maps is built by first estimating an intermediate density of moderate complexity, and then by computing the exact KR map from a reference density to the precomputed approximate density. In our work, we explore the use of Sum-of-Squares (SoS) densities and $\alpha$-divergences for approximating the intermediate densities. Combining SoS densities with $\alpha$-divergences interestingly yields convex optimization problems which can be efficiently solved using semidefinite programming. The main advantage of $\alpha$-divergences is to enable working with unnormalized densities, which provides benefits both numerically and theoretically. In particular, we provide two new convergence analyses of the sequential transport maps: one based on a triangle-like inequality and the second on information-geometric properties of $\alpha$-divergences for unnormalized densities. The choice of intermediate densities is also crucial for the efficiency of the method. While tempered (or annealed) densities are the state of the art, we introduce diffusion-based intermediate densities which permit approximating densities known only from samples. Such intermediate densities are well-established in machine learning for generative modeling. Finally, we propose and try different low-dimensional maps (or lazy maps) for dealing with high-dimensional problems and numerically demonstrate our methods on several benchmarks, including Bayesian inference problems and unsupervised learning tasks. | ['Benjamin Zanger' 'Tiangang Cui' 'Martin Schreiber' 'Olivier Zahm']
null | null | 2402.17966 | null | null | http://arxiv.org/pdf/2402.17966v2 | 2024-05-24T00:19:33Z | 2024-02-28T01:15:30Z | STC-ViT: Spatio Temporal Continuous Vision Transformer for Weather Forecasting | Operational weather forecasting systems rely on computationally expensive physics-based models. Recently, transformer-based models have shown remarkable potential in weather forecasting, achieving state-of-the-art results. However, transformers are discrete models, which limits their ability to learn the continuous spatio-temporal features of the dynamical weather system. We address this issue with STC-ViT, a Spatio-Temporal Continuous Vision Transformer for weather forecasting. STC-ViT incorporates continuous-time Neural ODE layers with a multi-head attention mechanism to learn the continuous weather evolution over time. The attention mechanism is encoded as a differentiable function in the transformer architecture to model the complex weather dynamics. We evaluate STC-ViT against an operational Numerical Weather Prediction (NWP) model and several deep learning based weather forecasting models. STC-ViT performs competitively with current data-driven methods in global forecasting while being trained only on lower-resolution data and with less compute power. | ['Hira Saleem' 'Flora Salim' 'Cormac Purcell']
null | null | 2402.17967 | null | null | http://arxiv.org/pdf/2402.17967v1 | 2024-02-28T01:19:42Z | 2024-02-28T01:19:42Z | Imitation-regularized Optimal Transport on Networks: Provable Robustness and Application to Logistics Planning | Network systems form the foundation of modern society, playing a critical role in various applications. However, these systems are at significant risk of being adversely affected by unforeseen circumstances, such as disasters. Considering this, there is a pressing need for research to enhance the robustness of network systems. Recently, in reinforcement learning, the relationship between acquiring robustness and regularizing entropy has been identified. Additionally, imitation learning is used within this framework to reflect experts' behavior. However, there are no comprehensive studies on the use of a similar imitation framework for optimal transport on networks. Therefore, in this study, imitation-regularized optimal transport (I-OT) on networks was investigated. It encodes prior knowledge on the network by imitating a given prior distribution. The I-OT solution demonstrated robustness in terms of the cost defined on the network. Moreover, we applied the I-OT to a logistics planning problem using real data. We also examined the imitation and a priori risk-information scenarios to demonstrate the usefulness and implications of the proposed method. | ['Koshi Oishi' 'Yota Hashizume' 'Tomohiko Jimbo' 'Hirotaka Kaji' 'Kenji Kashima']
null | null | 2402.17975 | null | null | http://arxiv.org/pdf/2402.17975v1 | 2024-02-28T01:41:34Z | 2024-02-28T01:41:34Z | Sample-Efficient Preference-based Reinforcement Learning with Dynamics Aware Rewards | Preference-based reinforcement learning (PbRL) aligns a robot behavior with human preferences via a reward function learned from binary feedback over agent behaviors. We show that dynamics-aware reward functions improve the sample efficiency of PbRL by an order of magnitude. In our experiments we iterate between: (1) learning a dynamics-aware state-action representation $z^{sa}$ via a self-supervised temporal consistency task, and (2) bootstrapping the preference-based reward function from $z^{sa}$, which results in faster policy learning and better final policy performance. For example, on quadruped-walk, walker-walk, and cheetah-run, with 50 preference labels we achieve the same performance as existing approaches with 500 preference labels, and we recover 83% and 66% of ground truth reward policy performance versus only 38% and 21%. The performance gains demonstrate the benefits of explicitly learning a dynamics-aware reward model. Repo: \texttt{https://github.com/apple/ml-reed}. | ['Katherine Metcalf' 'Miguel Sarabia' 'Natalie Mackraz' 'Barry-John Theobald']
null | null | 2402.17978 | null | null | http://arxiv.org/pdf/2402.17978v2 | 2024-03-01T11:08:48Z | 2024-02-28T01:45:01Z | Imagine, Initialize, and Explore: An Effective Exploration Method in Multi-Agent Reinforcement Learning | Effective exploration is crucial to discovering optimal strategies for multi-agent reinforcement learning (MARL) in complex coordination tasks. Existing methods mainly utilize intrinsic rewards to enable committed exploration or use role-based learning for decomposing joint action spaces instead of directly conducting a collective search in the entire action-observation space. However, they often face challenges obtaining specific joint action sequences to reach successful states in long-horizon tasks. To address this limitation, we propose Imagine, Initialize, and Explore (IIE), a novel method that offers a promising solution for efficient multi-agent exploration in complex scenarios. IIE employs a transformer model to imagine how the agents reach a critical state that can influence each other's transition functions. Then, we initialize the environment at this state using a simulator before the exploration phase. We formulate the imagination as a sequence modeling problem, where the states, observations, prompts, actions, and rewards are predicted autoregressively. The prompt consists of timestep-to-go, return-to-go, influence value, and one-shot demonstration, specifying the desired state and trajectory as well as guiding the action generation. By initializing agents at the critical states, IIE significantly increases the likelihood of discovering potentially important under-explored regions. Despite its simplicity, empirical results demonstrate that our method outperforms multi-agent exploration baselines on the StarCraft Multi-Agent Challenge (SMAC) and SMACv2 environments. Particularly, IIE shows improved performance in the sparse-reward SMAC tasks and produces more effective curricula over the initialized states than other generative methods, such as CVAE-GAN and diffusion models. | ['Zeyang Liu' 'Lipeng Wan' 'Xinrui Yang' 'Zhuoran Chen' 'Xingyu Chen' 'Xuguang Lan']
null | null | 2402.17979 | null | null | http://arxiv.org/pdf/2402.17979v1 | 2024-02-28T01:48:54Z | 2024-02-28T01:48:54Z | Ensemble Methodology: Innovations in Credit Default Prediction Using LightGBM, XGBoost, and LocalEnsemble | In the realm of consumer lending, accurate credit default prediction stands as a critical element in risk mitigation and lending decision optimization. Extensive research has sought continuous improvement in existing models to enhance customer experiences and ensure the sound economic functioning of lending institutions. This study responds to the evolving landscape of credit default prediction, challenging conventional models and introducing innovative approaches. By building upon foundational research and recent innovations, our work aims to redefine the standards of accuracy in credit default prediction, setting a new benchmark for the industry. To overcome these challenges, we present an Ensemble Methods framework comprising LightGBM, XGBoost, and LocalEnsemble modules, each making unique contributions to amplify diversity and improve generalization. By utilizing distinct feature sets, our methodology directly tackles limitations identified in previous studies, with the overarching goal of establishing a novel standard for credit default prediction accuracy. Our experimental findings validate the effectiveness of the ensemble model on the dataset, signifying substantial contributions to the field. This innovative approach not only addresses existing obstacles but also sets a precedent for advancing the accuracy and robustness of credit default prediction models. | ['Mengran Zhu' 'Ye Zhang' 'Yulu Gong' 'Kaijuan Xing' 'Xu Yan' 'Jintong Song']
null | null | 2402.17985 | null | null | http://arxiv.org/pdf/2402.17985v1 | 2024-02-28T02:00:34Z | 2024-02-28T02:00:34Z | FlattenQuant: Breaking Through the Inference Compute-bound for Large Language Models with Per-tensor Quantization | Large language models (LLMs) have demonstrated state-of-the-art performance across various tasks. However, the latency of inference and the large GPU memory consumption of LLMs restrict their deployment performance. Recently, there have been some efficient attempts to quantize LLMs, yet inference with large batch size or long sequence still has the issue of being compute-bound. Fine-grained quantization methods have showcased their proficiency in achieving low-bit quantization for LLMs, while requiring FP16 data type for linear layer computations, which is time-consuming when dealing with large batch size or long sequence. In this paper, we introduce a method called FlattenQuant, which significantly reduces the maximum value of the tensor by flattening the large channels in the tensor, to achieve low-bit per-tensor quantization with minimal accuracy loss. Our experiments show that FlattenQuant can directly use 4 bits to achieve 48.29% of the linear layer calculation in LLMs, with the remaining layers using 8 bits. The 4-bit matrix multiplication introduced in the FlattenQuant method can effectively address the compute-bound issue caused by large matrix calculation. Our work achieves up to 2$\times$ speedup and 2.3$\times$ memory reduction for LLMs with negligible loss in accuracy. | ['Yi Zhang' 'Fei Yang' 'Shuang Peng' 'Fangyu Wang' 'Aimin Pan']
null | null | 2402.17987 | null | null | http://arxiv.org/pdf/2402.17987v2 | 2024-03-08T17:47:21Z | 2024-02-28T02:11:47Z | Multistatic-Radar RCS-Signature Recognition of Aerial Vehicles: A Bayesian Fusion Approach | Radar Automated Target Recognition (RATR) for Unmanned Aerial Vehicles (UAVs) involves transmitting Electromagnetic Waves (EMWs) and performing target type recognition on the received radar echo, crucial for defense and aerospace applications. Previous studies highlighted the advantages of multistatic radar configurations over monostatic ones in RATR. However, fusion methods in multistatic radar configurations often suboptimally combine classification vectors from individual radars probabilistically. To address this, we propose a fully Bayesian RATR framework employing Optimal Bayesian Fusion (OBF) to aggregate classification probability vectors from multiple radars. OBF, based on expected 0-1 loss, updates a Recursive Bayesian Classification (RBC) posterior distribution for target UAV type, conditioned on historical observations across multiple time steps. We evaluate the approach using simulated random walk trajectories for seven drones, correlating target aspect angles to Radar Cross Section (RCS) measurements in an anechoic chamber. Comparing against single radar Automated Target Recognition (ATR) systems and suboptimal fusion methods, our empirical results demonstrate that the OBF method integrated with RBC significantly enhances classification accuracy compared to other fusion methods and single radar configurations. | ['Michael Potter' 'Murat Akcakaya' 'Marius Necsoiu' 'Gunar Schirner' 'Deniz Erdogmus' 'Tales Imbiriba']
null | null | 2402.17988 | null | null | http://arxiv.org/pdf/2402.17988v1 | 2024-02-28T02:12:47Z | 2024-02-28T02:12:47Z | Constrained Decoding for Code Language Models via Efficient Left and Right Quotienting of Context-Sensitive Grammars | Large Language Models are powerful tools for program synthesis and advanced auto-completion, but come with no guarantee that their output code is syntactically correct. This paper contributes an incremental parser that allows early rejection of syntactically incorrect code, as well as efficient detection of complete programs for fill-in-the-middle (FItM) tasks. We develop Earley-style parsers that operate over left and right quotients of arbitrary context-free grammars, and we extend our incremental parsing and quotient operations to several context-sensitive features present in the grammars of many common programming languages. The result of these contributions is an efficient, general, and well-grounded method for left and right quotient parsing. To validate our theoretical contributions -- and the practical effectiveness of certain design decisions -- we evaluate our method on the particularly difficult case of FItM completion for Python 3. Our results demonstrate that constrained generation can significantly reduce the incidence of syntax errors in recommended code. | ['Daniel Melcer' 'Nathan Fulton' 'Sanjay Krishna Gouda' 'Haifeng Qian']
null | null | 2402.17992 | null | null | http://arxiv.org/pdf/2402.17992v3 | 2024-04-29T14:47:42Z | 2024-02-28T02:16:03Z | Physics-Informed Machine Learning for Seismic Response Prediction of Nonlinear Steel Moment Resisting Frame Structures | There is growing interest in using machine learning (ML) methods for structural metamodeling due to the substantial computational cost of traditional simulations. Purely data-driven strategies often face limitations in model robustness, interpretability, and dependency on extensive data. To address these challenges, this paper introduces a novel physics-informed machine learning (PiML) method that integrates scientific principles and physical laws into deep neural networks to model seismic responses of nonlinear structures. The approach constrains the ML model's solution space within known physical bounds through three main features: dimensionality reduction via combined model order reduction and wavelet analysis, long short-term memory (LSTM) networks, and Newton's second law. Dimensionality reduction addresses structural systems' redundancy and boosts efficiency while extracting essential features through wavelet analysis. LSTM networks capture temporal dependencies for accurate time-series predictions. Manipulating the equation of motion helps learn system nonlinearities and confines solutions within physically interpretable results. These attributes allow for model training with sparse data, enhancing accuracy, interpretability, and robustness. Furthermore, a dataset of archetype steel moment resisting frames under seismic loading, available in the DesignSafe-CI Database [1], is considered for evaluation. The resulting metamodel handles complex data better than existing physics-guided LSTM models and outperforms other non-physics data-driven networks. | ['R. Bailey Bond' 'Pu Ren' 'Jerome F. Hajjar' 'Hao Sun']
null | null | 2402.18002 | null | null | http://arxiv.org/pdf/2402.18002v2 | 2024-04-29T19:00:56Z | 2024-02-28T02:30:59Z | Symmetry-aware Reinforcement Learning for Robotic Assembly under Partial Observability with a Soft Wrist | This study tackles the representative yet challenging contact-rich peg-in-hole task of robotic assembly, using a soft wrist that can operate more safely and tolerate lower-frequency control signals than a rigid one. Previous studies often use a fully observable formulation, requiring external setups or estimators for the peg-to-hole pose. In contrast, we use a partially observable formulation and deep reinforcement learning from demonstrations to learn a memory-based agent that acts purely on haptic and proprioceptive signals. Moreover, previous works do not incorporate potential domain symmetry and thus must search for solutions in a bigger space. Instead, we propose to leverage the symmetry for sample efficiency by augmenting the training data and constructing auxiliary losses to force the agent to adhere to the symmetry. Results in simulation with five different symmetric peg shapes show that our proposed agent can be comparable to or even outperform a state-based agent. In particular, the sample efficiency also allows us to learn directly on the real robot within 3 hours. | ['Hai Nguyen' 'Tadashi Kozuno' 'Cristian C. Beltran-Hernandez' 'Masashi Hamaya']
null | null | 2402.18007 | null | null | http://arxiv.org/pdf/2402.18007v2 | 2024-03-02T03:32:40Z | 2024-02-28T02:45:58Z | Mixer is more than just a model | Recently, MLP structures have regained popularity, with MLP-Mixer standing out as a prominent example. In the field of computer vision, MLP-Mixer is noted for its ability to extract data information from both channel and token perspectives, effectively acting as a fusion of channel and token information. Indeed, Mixer represents a paradigm for information extraction that amalgamates channel and token information. The essence of Mixer lies in its ability to blend information from diverse perspectives, epitomizing the true concept of "mixing" in the realm of neural network architectures. Beyond channel and token considerations, it is possible to create more tailored mixers from various perspectives to better suit specific task requirements. This study focuses on the domain of audio recognition, introducing a novel model named Audio Spectrogram Mixer with Roll-Time and Hermit FFT (ASM-RH) that incorporates insights from both time and frequency domains. Experimental results demonstrate that ASM-RH is particularly well-suited for audio data and yields promising outcomes across multiple classification tasks. The models and optimal weights files will be published. | ['Qingfeng Ji' 'Yuxin Wang' 'Letong Sun']
null | null | 2402.18012 | null | null | http://arxiv.org/pdf/2402.18012v2 | 2024-04-30T03:32:33Z | 2024-02-28T03:09:12Z | Diffusion Models as Constrained Samplers for Optimization with Unknown Constraints | Addressing real-world optimization problems becomes particularly challenging when analytic objective functions or constraints are unavailable. While numerous studies have addressed the issue of unknown objectives, limited research has focused on scenarios where feasibility constraints are not given explicitly. Overlooking these constraints can lead to spurious solutions that are unrealistic in practice. To deal with such unknown constraints, we propose to perform optimization within the data manifold using diffusion models. To constrain the optimization process to the data manifold, we reformulate the original optimization problem as a sampling problem from the product of the Boltzmann distribution defined by the objective function and the data distribution learned by the diffusion model. To enhance sampling efficiency, we propose a two-stage framework that begins with a guided diffusion process for warm-up, followed by a Langevin dynamics stage for further correction. Theoretical analysis shows that the initial stage results in a distribution focused on feasible solutions, thereby providing a better initialization for the later stage. Comprehensive experiments on a synthetic dataset, six real-world black-box optimization datasets, and a multi-objective optimization dataset show that our method achieves better or comparable performance with previous state-of-the-art baselines. | ['Lingkai Kong' 'Yuanqi Du' 'Wenhao Mu' 'Kirill Neklyudov' 'Valentin De Bortoli' 'Haorui Wang' 'Dongxia Wu' 'Aaron Ferber' 'Yi-An Ma' 'Carla P. Gomes' 'Chao Zhang']
null | null | 2402.18018 | null | null | http://arxiv.org/pdf/2402.18018v1 | 2024-02-28T03:27:10Z | 2024-02-28T03:27:10Z | Communication Efficient ConFederated Learning: An Event-Triggered SAGA Approach | Federated learning (FL) is a machine learning paradigm that targets model training without gathering the local data dispersed over various data sources. Standard FL, which employs a single server, can only support a limited number of users, leading to degraded learning capability. In this work, we consider a multi-server FL framework, referred to as \emph{Confederated Learning} (CFL), in order to accommodate a larger number of users. A CFL system is composed of multiple networked edge servers, with each server connected to an individual set of users. Decentralized collaboration among servers is leveraged to harness all users' data for model training. Due to the potentially massive number of users involved, it is crucial to reduce the communication overhead of the CFL system. We propose a stochastic gradient method for distributed learning in the CFL framework. The proposed method incorporates a conditionally-triggered user selection (CTUS) mechanism as the central component to effectively reduce communication overhead. Relying on a delicately designed triggering condition, the CTUS mechanism allows each server to select only a small number of users to upload their gradients, without significantly jeopardizing the convergence performance of the algorithm. Our theoretical analysis reveals that the proposed algorithm enjoys a linear convergence rate. Simulation results show that it achieves substantial improvement over state-of-the-art algorithms in terms of communication efficiency. | ['Bin Wang' 'Jun Fang' 'Hongbin Li' 'Yonina C. Eldar']
null | null | 2402.18040 | null | null | http://arxiv.org/pdf/2402.18040v1 | 2024-02-28T04:34:15Z | 2024-02-28T04:34:15Z | Automated Discovery of Integral with Deep Learning | Recent advancements in the realm of deep learning, particularly in the development of large language models (LLMs), have demonstrated AI's ability to tackle complex mathematical problems or solve programming challenges. However, the capability to solve well-defined problems based on extensive training data differs significantly from the nuanced process of making scientific discoveries. Trained on almost all available human knowledge, today's sophisticated LLMs basically learn to predict sequences of tokens. They generate mathematical derivations and write code in a similar way as writing an essay, and do not have the ability to pioneer scientific discoveries in the manner a human scientist would. In this study we delve into the potential of using deep learning to rediscover a fundamental mathematical concept: integrals. By defining integrals as the area under the curve, we illustrate how AI can deduce the integral of a given function, exemplified by inferring $\int_{0}^{x} t^2 \, dt = \frac{x^3}{3}$ and $\int_{0}^{x} ae^{bt} \, dt = \frac{a}{b} e^{bx} - \frac{a}{b}$. Our experiments show that deep learning models can approach the task of inferring integrals either through a sequence-to-sequence model, akin to language translation, or by uncovering the rudimentary principles of integration, such as $\int_{0}^{x} t^n \, dt = \frac{x^{n+1}}{n+1}$. | ['Xiaoxin Yin']
null | null | 2402.18046 | null | null | http://arxiv.org/pdf/2402.18046v1 | 2024-02-28T04:47:32Z | 2024-02-28T04:47:32Z | Data augmentation method for modeling health records with applications to clopidogrel treatment failure detection | We present a novel data augmentation method to address the challenge of data scarcity in modeling longitudinal patterns in Electronic Health Records (EHR) of patients using natural language processing (NLP) algorithms. The proposed method generates augmented data by rearranging the order of medical records within a visit, where the order of elements, if any, is not obvious. Applying the proposed method to the clopidogrel treatment failure detection task enabled up to 5.3% absolute improvement in terms of ROC-AUC (from 0.908 without augmentation to 0.961 with augmentation) when it was used during the pre-training procedure. It was also shown that the augmentation helped to improve performance during fine-tuning procedures, especially when the amount of labeled training data is limited. | ['Sunwoong Choi' 'Samuel Kim']
null | null | 2402.18059 | null | null | http://arxiv.org/pdf/2402.18059v3 | 2024-06-06T04:49:37Z | 2024-02-28T05:43:22Z | Token-Specific Watermarking with Enhanced Detectability and Semantic Coherence for Large Language Models | Large language models generate high-quality responses with potential misinformation, underscoring the need for regulation by distinguishing AI-generated and human-written texts. Watermarking is pivotal in this context: it involves embedding hidden markers in texts during the LLM inference phase that are imperceptible to humans. Achieving both the detectability of inserted watermarks and the semantic quality of generated texts is challenging. While current watermarking algorithms have made promising progress in this direction, there remains significant scope for improvement. To address these challenges, we introduce a novel multi-objective optimization (MOO) approach for watermarking that utilizes lightweight networks to generate token-specific watermarking logits and splitting ratios. By leveraging MOO to optimize for both detection and semantic objective functions, our method simultaneously achieves detectability and semantic integrity. Experimental results show that our method outperforms current watermarking techniques in enhancing the detectability of texts generated by LLMs while maintaining their semantic coherence. Our code is available at https://github.com/mignonjia/TS_watermark. | ['Mingjia Huo' 'Sai Ashish Somayajula' 'Youwei Liang' 'Ruisi Zhang' 'Farinaz Koushanfar' 'Pengtao Xie']
null | null | 2402.18064 | null | null | http://arxiv.org/pdf/2402.18064v3 | 2024-03-07T11:21:04Z | 2024-02-28T05:49:08Z | Automated Testing of Spatially-Dependent Environmental Hypotheses through Active Transfer Learning | The efficient collection of samples is an important factor in outdoor information gathering applications on account of high sampling costs such as time, energy, and potential destruction to the environment. Utilization of available a priori data can be a powerful tool for increasing efficiency. However, the relationships of this data with the quantity of interest are often not known ahead of time, limiting the ability to leverage this knowledge for improved planning efficiency. To this end, this work combines transfer learning and active learning through a Multi-Task Gaussian Process and an information-based objective function. Through this combination it can explore the space of hypothetical inter-quantity relationships and evaluate these hypotheses in real-time, allowing this new knowledge to be immediately exploited for future plans. The performance of the proposed method is evaluated against synthetic data and is shown to evaluate multiple hypotheses correctly. Its effectiveness is also demonstrated on real datasets. The technique is able to identify and leverage hypotheses which show a medium or strong correlation to reduce prediction error by a factor of 1.4--3.4 within the first 7 samples, and poor hypotheses are quickly identified and rejected, eventually having no adverse effect. | ['Nicholas Harrison' 'Nathan Wallace' 'Salah Sukkarieh']
null | null | 2402.18096 | null | null | http://arxiv.org/pdf/2402.18096v1 | 2024-02-28T06:34:54Z | 2024-02-28T06:34:54Z | No Token Left Behind: Reliable KV Cache Compression via Importance-Aware Mixed Precision Quantization | Key-Value (KV) Caching has become an essential technique for accelerating the inference speed and throughput of generative Large Language Models (LLMs). However, the memory footprint of the KV cache poses a critical bottleneck in LLM deployment as the cache size grows with batch size and sequence length, often surpassing even the size of the model itself. Although recent methods were proposed to select and evict unimportant KV pairs from the cache to reduce memory consumption, the potential ramifications of eviction on the generative process are yet to be thoroughly examined. In this paper, we examine the detrimental impact of cache eviction and observe that unforeseen risks arise as the information contained in the KV pairs is exhaustively discarded, resulting in safety breaches, hallucinations, and context loss. Surprisingly, we find that preserving even a small amount of information contained in the evicted KV pairs via reduced precision quantization substantially recovers the incurred degradation. On the other hand, we observe that the important KV pairs must be kept at a relatively higher precision to safeguard the generation quality. Motivated by these observations, we propose \textit{Mixed-precision KV cache} (MiKV), a reliable cache compression method that simultaneously preserves the context details by retaining the evicted KV pairs in low precision and ensures generation quality by keeping the important KV pairs in high precision. Experiments on diverse benchmarks and LLM backbones show that our proposed method offers a state-of-the-art trade-off between compression ratio and performance, compared to other baselines. | ['June Yong Yang' 'Byeongwook Kim' 'Jeongin Bae' 'Beomseok Kwon' 'Gunho Park' 'Eunho Yang' 'Se Jung Kwon' 'Dongsoo Lee']
null | null | 2402.18112 | null | null | http://arxiv.org/abs/2402.18112v2 | 2024-03-20T10:08:28Z | 2024-02-28T07:02:08Z | Simple But Effective: Rethinking the Ability of Deep Learning in fNIRS to Exclude Abnormal Input | Functional near-infrared spectroscopy (fNIRS) is a non-invasive technique for monitoring brain activity. To better understand the brain, researchers often use deep learning to address the classification challenges of fNIRS data. Our study shows that while current networks in fNIRS are highly accurate for predictions within their training distribution, they falter at identifying and excluding abnormal, out-of-distribution data, affecting their reliability. We propose integrating metric learning and supervised methods into fNIRS research to improve networks' capability to identify and exclude out-of-distribution outliers. This method is simple yet effective. In our experiments, it significantly enhances the performance of various networks in fNIRS, particularly transformer-based ones, which show great improvement in reliability. We will make our experiment data available on GitHub. | ['Zhihao Cao']
null | null | 2402.18117 | null | null | http://arxiv.org/pdf/2402.18117v1 | 2024-02-28T07:10:37Z | 2024-02-28T07:10:37Z | PRCL: Probabilistic Representation Contrastive Learning for Semi-Supervised Semantic Segmentation | Tremendous breakthroughs have been made in Semi-Supervised Semantic Segmentation (S4) through contrastive learning. However, due to limited annotations, the guidance on unlabeled images is generated by the model itself, which inevitably contains noise that disturbs the unsupervised training process. To address this issue, we propose a robust contrastive-based S4 framework, termed the Probabilistic Representation Contrastive Learning (PRCL) framework, to enhance the robustness of the unsupervised training process. We model the pixel-wise representation as Probabilistic Representations (PR) via multivariate Gaussian distribution and tune the contribution of the ambiguous representations to tolerate the risk of inaccurate guidance in contrastive learning. Furthermore, we introduce Global Distribution Prototypes (GDP) by gathering all PRs throughout the whole training process. Since the GDP contains the information of all representations with the same class, it is robust to the instant noise in representations and bears the intra-class variance of representations. In addition, we generate Virtual Negatives (VNs) based on GDP to involve the contrastive learning process. Extensive experiments on two public benchmarks demonstrate the superiority of our PRCL framework. | ['Haoyu Xie' 'Changqi Wang' 'Jian Zhao' 'Yang Liu' 'Jun Dan' 'Chong Fu' 'Baigui Sun']
null | null | 2402.18127 | null | null | http://arxiv.org/pdf/2402.18127v1 | 2024-02-28T07:36:16Z | 2024-02-28T07:36:16Z | Hierarchical Multi-Relational Graph Representation Learning for Large-Scale Prediction of Drug-Drug Interactions | Most existing methods for predicting drug-drug interactions (DDI) predominantly concentrate on capturing the explicit relationships among drugs, overlooking the valuable implicit correlations present between drug pairs (DPs), which leads to weak predictions. To address this issue, this paper introduces a hierarchical multi-relational graph representation learning (HMGRL) approach. Within the framework of HMGRL, we leverage a wealth of drug-related heterogeneous data sources to construct heterogeneous graphs, where nodes represent drugs and edges denote clear and various associations. The relational graph convolutional network (RGCN) is employed to capture diverse explicit relationships between drugs from these heterogeneous graphs. Additionally, a multi-view differentiable spectral clustering (MVDSC) module is developed to capture multiple valuable implicit correlations between DPs. Within the MVDSC, we utilize multiple DP features to construct graphs, where nodes represent DPs and edges denote different implicit correlations. Subsequently, multiple DP representations are generated through graph cutting, each emphasizing distinct implicit correlations. The graph-cutting strategy enables our HMGRL to identify strongly connected communities of graphs, thereby reducing the fusion of irrelevant features. By combining every representation view of a DP, we create high-level DP representations for predicting DDIs. Two genuine datasets spanning three distinct tasks are adopted to gauge the efficacy of our HMGRL. Experimental outcomes unequivocally indicate that HMGRL surpasses several leading-edge methods in performance. | ['Mengying Jiang' 'Guizhong Liu' 'Yuanchao Su' 'Weiqiang Jin' 'Biao Zhao']
null | null |
2402.18128
| null | null |
http://arxiv.org/pdf/2402.18128v1
|
2024-02-28T07:37:26Z
|
2024-02-28T07:37:26Z
|
Downstream Task Guided Masking Learning in Masked Autoencoders Using
Multi-Level Optimization
|
Masked Autoencoder (MAE) is a notable method for self-supervised pretraining in visual representation learning. It operates by randomly masking image patches and reconstructing these masked patches using the unmasked ones. A key limitation of MAE lies in its disregard for the varying informativeness of different patches, as it uniformly selects patches to mask. To overcome this, some approaches propose masking based on patch informativeness. However, these methods often do not consider the specific requirements of downstream tasks, potentially leading to suboptimal representations for these tasks. In response, we introduce the Multi-level Optimized Mask Autoencoder (MLO-MAE), a novel framework that leverages end-to-end feedback from downstream tasks to learn an optimal masking strategy during pretraining. Our experimental findings highlight MLO-MAE's significant advancements in visual representation learning. Compared to existing methods, it demonstrates remarkable improvements across diverse datasets and tasks, showcasing its adaptability and efficiency. Our code is available at: https://github.com/Alexiland/MLOMAE
|
[
"['Han Guo' 'Ramtin Hosseini' 'Ruiyi Zhang' 'Sai Ashish Somayajula'\n 'Ranak Roy Chowdhury' 'Rajesh K. Gupta' 'Pengtao Xie']"
] |
null | null |
2402.18129
| null | null |
http://arxiv.org/pdf/2402.18129v2
|
2024-06-20T09:35:02Z
|
2024-02-28T07:39:58Z
|
On the Inductive Biases of Demographic Parity-based Fair Learning
Algorithms
|
Fair supervised learning algorithms assigning labels with little dependence on a sensitive attribute have attracted great attention in the machine learning community. While the demographic parity (DP) notion has been frequently used to measure a model's fairness in training fair classifiers, several studies in the literature suggest potential impacts of enforcing DP in fair learning algorithms. In this work, we analytically study the effect of standard DP-based regularization methods on the conditional distribution of the predicted label given the sensitive attribute. Our analysis shows that an imbalanced training dataset with a non-uniform distribution of the sensitive attribute could lead to a classification rule biased toward the sensitive attribute outcome holding the majority of training data. To control such inductive biases in DP-based fair learning, we propose a sensitive attribute-based distributionally robust optimization (SA-DRO) method improving robustness against the marginal distribution of the sensitive attribute. Finally, we present several numerical results on the application of DP-based learning methods to standard centralized and distributed learning problems. The empirical findings support our theoretical results on the inductive biases in DP-based fair learning algorithms and the debiasing effects of the proposed SA-DRO method.
|
[
"['Haoyu Lei' 'Amin Gohari' 'Farzan Farnia']"
] |
null | null |
2402.18133
| null | null |
http://arxiv.org/pdf/2402.18133v2
|
2024-03-13T03:07:08Z
|
2024-02-28T07:54:50Z
|
Classes Are Not Equal: An Empirical Study on Image Recognition Fairness
|
In this paper, we present an empirical study on image recognition fairness, i.e., extreme class accuracy disparity on balanced data like ImageNet. We experimentally demonstrate that classes are not equal and the fairness issue is prevalent for image classification models across various datasets, network architectures, and model capacities. Moreover, several intriguing properties of fairness are identified. First, the unfairness lies in problematic representation rather than classifier bias. Second, with the proposed concept of Model Prediction Bias, we investigate the origins of problematic representation during optimization. Our findings reveal that models tend to exhibit greater prediction biases for classes that are more challenging to recognize. This means that more classes tend to be confused with the harder classes. The False Positives (FPs) then dominate the learning during optimization, leading to poor accuracy for these classes. Further, we conclude that data augmentation and representation learning algorithms improve overall performance by promoting fairness to some degree in image classification. The Code is available at https://github.com/dvlab-research/Parametric-Contrastive-Learning.
|
[
"['Jiequan Cui' 'Beier Zhu' 'Xin Wen' 'Xiaojuan Qi' 'Bei Yu'\n 'Hanwang Zhang']"
] |
null | null |
2402.18137
| null | null |
http://arxiv.org/pdf/2402.18137v2
|
2024-05-24T03:31:50Z
|
2024-02-28T07:58:24Z
|
DecisionNCE: Embodied Multimodal Representations via Implicit Preference
Learning
|
Multimodal pretraining is an effective strategy for the trinity of goals of representation learning in autonomous robots: 1) extracting both local and global task progressions; 2) enforcing temporal consistency of visual representation; 3) capturing trajectory-level language grounding. Most existing methods approach these via separate objectives, which often reach sub-optimal solutions. In this paper, we propose a universal unified objective that can simultaneously extract meaningful task progression information from image sequences and seamlessly align them with language instructions. We discover that via implicit preferences, where a visual trajectory inherently aligns better with its corresponding language instruction than mismatched pairs, the popular Bradley-Terry model can transform into representation learning through proper reward reparameterizations. The resulting framework, DecisionNCE, mirrors an InfoNCE-style objective but is distinctively tailored for decision-making tasks, providing an embodied representation learning framework that elegantly extracts both local and global task progression features, with temporal consistency enforced through implicit time contrastive learning, while ensuring trajectory-level instruction grounding via multimodal joint encoding. Evaluation on both simulated and real robots demonstrates that DecisionNCE effectively facilitates diverse downstream policy learning tasks, offering a versatile solution for unified representation and reward learning. Project Page: https://2toinf.github.io/DecisionNCE/
|
[
"['Jianxiong Li' 'Jinliang Zheng' 'Yinan Zheng' 'Liyuan Mao' 'Xiao Hu'\n 'Sijie Cheng' 'Haoyi Niu' 'Jihao Liu' 'Yu Liu' 'Jingjing Liu'\n 'Ya-Qin Zhang' 'Xianyuan Zhan']"
] |
null | null |
2402.18149
| null | null |
http://arxiv.org/pdf/2402.18149v1
|
2024-02-28T08:24:06Z
|
2024-02-28T08:24:06Z
|
Provably Efficient Partially Observable Risk-Sensitive Reinforcement
Learning with Hindsight Observation
|
This work pioneers regret analysis of risk-sensitive reinforcement learning in partially observable environments with hindsight observation, addressing a gap in theoretical exploration. We introduce a novel formulation that integrates hindsight observations into a Partially Observable Markov Decision Process (POMDP) framework, where the goal is to optimize accumulated reward under the entropic risk measure. We develop the first provably efficient RL algorithm tailored for this setting. We also prove by rigorous analysis that our algorithm achieves polynomial regret $\tilde{O}\left(\frac{e^{|\gamma|H}-1}{|\gamma|H}H^2\sqrt{KHS^2OA}\right)$, which outperforms or matches existing upper bounds when the model degenerates to risk-neutral or fully observable settings. We adopt the method of change-of-measure and develop a novel analytical tool of beta vectors to streamline mathematical derivations. These techniques are of particular interest to the theoretical study of reinforcement learning.
|
[
"['Tonghe Zhang' 'Yu Chen' 'Longbo Huang']"
] |
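The entropic risk measure that the objective above optimizes has a standard textbook form (this definition is general knowledge, not quoted from the paper):

```latex
% Entropic risk of a return X with risk parameter \gamma \neq 0;
% the risk-neutral expectation is recovered in the \gamma \to 0 limit.
\rho_{\gamma}(X) = \frac{1}{\gamma}\,\log \mathbb{E}\!\left[e^{\gamma X}\right],
\qquad \lim_{\gamma \to 0} \rho_{\gamma}(X) = \mathbb{E}[X].
```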
null | null |
2402.18153
| null | null |
http://arxiv.org/pdf/2402.18153v1
|
2024-02-28T08:34:23Z
|
2024-02-28T08:34:23Z
|
Diffusion-based Neural Network Weights Generation
|
Transfer learning is a topic of significant interest in recent deep learning research because it enables faster convergence and improved performance on new tasks. While the performance of transfer learning depends on the similarity of the source data to the target data, it is costly to train a model on a large number of datasets. Therefore, pretrained models are generally blindly selected with the hope that they will achieve good performance on the given task. To tackle such suboptimality of the pretrained models, we propose an efficient and adaptive transfer learning scheme through dataset-conditioned pretrained weights sampling. Specifically, we use a latent diffusion model with a variational autoencoder that can reconstruct the neural network weights, to learn the distribution of a set of pretrained weights conditioned on each dataset for transfer learning on unseen datasets. By learning the distribution of a neural network on a variety of pretrained models, our approach enables adaptively sampling weights for unseen datasets, achieving faster convergence and reaching competitive performance.
|
[
"['Bedionita Soro' 'Bruno Andreis' 'Hayeon Lee' 'Song Chong' 'Frank Hutter'\n 'Sung Ju Hwang']"
] |
null | null |
2402.18159
| null | null |
http://arxiv.org/pdf/2402.18159v1
|
2024-02-28T08:43:18Z
|
2024-02-28T08:43:18Z
|
Provable Risk-Sensitive Distributional Reinforcement Learning with
General Function Approximation
|
In the realm of reinforcement learning (RL), accounting for risk is crucial for making decisions under uncertainty, particularly in applications where safety and reliability are paramount. In this paper, we introduce a general framework on Risk-Sensitive Distributional Reinforcement Learning (RS-DisRL), with static Lipschitz Risk Measures (LRM) and general function approximation. Our framework covers a broad class of risk-sensitive RL, and facilitates analysis of the impact of estimation functions on the effectiveness of RSRL strategies and evaluation of their sample complexity. We design two innovative meta-algorithms: \texttt{RS-DisRL-M}, a model-based strategy for model-based function approximation, and \texttt{RS-DisRL-V}, a model-free approach for general value function approximation. With our novel estimation techniques via Least Squares Regression (LSR) and Maximum Likelihood Estimation (MLE) in distributional RL with augmented Markov Decision Process (MDP), we derive the first $\widetilde{\mathcal{O}}(\sqrt{K})$ dependency of the regret upper bound for RSRL with static LRM, marking a pioneering contribution towards statistically efficient algorithms in this domain.
|
[
"['Yu Chen' 'Xiangcheng Zhang' 'Siwei Wang' 'Longbo Huang']"
] |
null | null |
2402.18164
| null | null |
http://arxiv.org/pdf/2402.18164v1
|
2024-02-28T08:53:20Z
|
2024-02-28T08:53:20Z
|
Autoencoder-based General Purpose Representation Learning for Customer
Embedding
|
In recent years, exploiting the domain-specific underlying structure of data and its generative factors for representation learning has shown success in various use-case agnostic applications. However, the diversity and complexity of tabular data have made it challenging to represent these structures in a latent space through multi-dimensional vectors. We design an autoencoder-based framework for building general-purpose embeddings, assess the performance of different autoencoder architectures, and show that simpler models outperform complex ones in embedding highly complex tabular data. We apply our framework to produce plug-and-play, rich, and anonymized embeddings representing AWS customers for usage in any model, saving up to 45% of development time, and observe significant improvements in downstream models. Moreover, we propose a significant improvement to the calculation of reconstruction loss for multi-layer contractive autoencoders (CAE) by calculating the Jacobian of the entire encoder, leading to a 15% improvement in reconstruction quality when compared to a stacked CAE.
|
[
"['Jan Henrik Bertrand' 'Jacopo Pio Gargano' 'Laurent Mombaerts'\n 'Jonathan Taws']"
] |
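The abstract's key computational change, taking the Jacobian of the entire encoder rather than layer by layer, can be sketched as below. This is a minimal sketch under assumed PyTorch modules; the penalty weight `lam` and the single-sample Jacobian are illustrative simplifications, not the paper's exact implementation.

```python
# Contractive-autoencoder loss with the Jacobian of the *full* encoder.
import torch
from torch.autograd.functional import jacobian

def cae_loss(encoder, decoder, x, lam=1e-4):
    z = encoder(x)
    recon = torch.nn.functional.mse_loss(decoder(z), x)
    # Frobenius norm of the end-to-end encoder Jacobian for one input row
    # (restricted to a single sample here to limit cost).
    J = jacobian(lambda v: encoder(v.unsqueeze(0)).squeeze(0), x[0])
    return recon + lam * J.pow(2).sum()
```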
null | null |
2402.18167
| null | null |
http://arxiv.org/pdf/2402.18167v1
|
2024-02-28T08:56:00Z
|
2024-02-28T08:56:00Z
|
Decentralised Traffic Incident Detection via Network Lasso
|
Traffic incident detection plays a key role in intelligent transportation systems, which has gained great attention in transport engineering. In the past, traditional machine learning (ML) based detection methods achieved good performance under a centralised computing paradigm, where all data are transmitted to a central server for building ML models therein. Nowadays, deep neural networks based federated learning (FL) has become a mainstream detection approach to enable the model training in a decentralised manner while warranting local data governance. Such neural networks-centred techniques, however, have overshadowed the utility of well-established ML-based detection methods. In this work, we aim to explore the potential of potent conventional ML-based detection models in modern traffic scenarios featured by distributed data. We leverage an elegant but less explored distributed optimisation framework named Network Lasso, with guaranteed global convergence for convex problem formulations, integrate the potent convex ML model with it, and compare it with centralised learning, local learning, and federated learning methods atop a well-known traffic incident detection dataset. Experimental results show that the proposed network lasso-based approach provides a promising alternative to the FL-based approach in data-decentralised traffic scenarios, with a strong convergence guarantee while rekindling the significance of conventional ML-based detection methods.
|
[
"['Qiyuan Zhu' 'A. K. Qin' 'Prabath Abeysekara' 'Hussein Dia'\n 'Hanna Grzybowska']"
] |
null | null |
2402.18198
| null | null |
http://arxiv.org/abs/2402.18198v1
|
2024-02-28T09:40:36Z
|
2024-02-28T09:40:36Z
|
Automated Machine Learning for Multi-Label Classification
|
Automated machine learning (AutoML) aims to select and configure machine learning algorithms and combine them into machine learning pipelines tailored to a dataset at hand. For supervised learning tasks, most notably binary and multinomial classification, aka single-label classification (SLC), such AutoML approaches have shown promising results. However, the task of multi-label classification (MLC), where data points are associated with a set of class labels instead of a single class label, has received much less attention so far. In the context of multi-label classification, the data-specific selection and configuration of multi-label classifiers are challenging even for experts in the field, as it is a high-dimensional optimization problem with multi-level hierarchical dependencies. While for SLC, the space of machine learning pipelines is already huge, the size of the MLC search space exceeds that of SLC by several orders of magnitude. In the first part of this thesis, we devise a novel AutoML approach for single-label classification tasks optimizing pipelines of machine learning algorithms, consisting of two algorithms at most. This approach is then extended first to optimize pipelines of unlimited length and eventually configure the complex hierarchical structures of multi-label classification methods. Furthermore, we investigate how well AutoML approaches that form the state of the art for single-label classification tasks scale with the increased problem complexity of AutoML for multi-label classification. In the second part, we explore how methods for SLC and MLC could be configured more flexibly to achieve better generalization performance and how to increase the efficiency of execution-based AutoML systems.
|
[
"['Marcel Wever']"
] |
null | null |
2402.18211
| null | null |
http://arxiv.org/pdf/2402.18211v1
|
2024-02-28T10:01:44Z
|
2024-02-28T10:01:44Z
|
Catastrophic Overfitting: A Potential Blessing in Disguise
|
Fast Adversarial Training (FAT) has gained increasing attention within the research community owing to its efficacy in improving adversarial robustness. Particularly noteworthy is the challenge posed by catastrophic overfitting (CO) in this field. Although existing FAT approaches have made strides in mitigating CO, the ascent of adversarial robustness occurs with a non-negligible decline in classification accuracy on clean samples. To tackle this issue, we initially employ the feature activation differences between clean and adversarial examples to analyze the underlying causes of CO. Intriguingly, our findings reveal that CO can be attributed to the feature coverage induced by a few specific pathways. By intentionally manipulating feature activation differences in these pathways with well-designed regularization terms, we can effectively mitigate and induce CO, providing further evidence for this observation. Notably, models trained stably with these terms exhibit superior performance compared to prior FAT work. On this basis, we harness CO to achieve `attack obfuscation', aiming to bolster model performance. Consequently, the models suffering from CO can attain optimal classification accuracy on both clean and adversarial data when adding random noise to inputs during evaluation. We also validate their robustness against transferred adversarial examples and the necessity of inducing CO to improve robustness. Hence, CO may not be a problem that has to be solved.
|
[
"['Mengnan Zhao' 'Lihe Zhang' 'Yuqiu Kong' 'Baocai Yin']"
] |
null | null |
2402.18213
| null | null |
http://arxiv.org/pdf/2402.18213v2
|
2024-06-19T12:15:20Z
|
2024-02-28T10:09:04Z
|
Multi-objective Differentiable Neural Architecture Search
|
Pareto front profiling in multi-objective optimization (MOO), i.e. finding a diverse set of Pareto optimal solutions, is challenging, especially with expensive objectives like neural network training. Typically, in MOO neural architecture search (NAS), we aim to balance performance and hardware metrics across devices. Prior NAS approaches simplify this task by incorporating hardware constraints into the objective function, but profiling the Pareto front necessitates a computationally expensive search for each constraint. In this work, we propose a novel NAS algorithm that encodes user preferences for the trade-off between performance and hardware metrics, and yields representative and diverse architectures across multiple devices in just one search run. To this end, we parameterize the joint architectural distribution across devices and multiple objectives via a hypernetwork that can be conditioned on hardware features and preference vectors, enabling zero-shot transferability to new devices. Extensive experiments with up to 19 hardware devices and 3 objectives showcase the effectiveness and scalability of our method. Finally, we show that, without extra costs, our method outperforms existing MOO NAS methods across a broad range of qualitatively different search spaces and datasets, including MobileNetV3 on ImageNet-1k, an encoder-decoder transformer space for machine translation and a decoder-only transformer space for language modelling.
|
[
"['Rhea Sanjay Sukthanker' 'Arber Zela' 'Benedikt Staffler' 'Samuel Dooley'\n 'Josif Grabocka' 'Frank Hutter']"
] |
null | null |
2402.18225
| null | null |
http://arxiv.org/pdf/2402.18225v1
|
2024-02-28T10:43:54Z
|
2024-02-28T10:43:54Z
|
CogBench: a large language model walks into a psychology lab
|
Large language models (LLMs) have significantly advanced the field of artificial intelligence. Yet, evaluating them comprehensively remains challenging. We argue that this is partly due to the predominant focus on performance metrics in most benchmarks. This paper introduces CogBench, a benchmark that includes ten behavioral metrics derived from seven cognitive psychology experiments. This novel approach offers a toolkit for phenotyping LLMs' behavior. We apply CogBench to 35 LLMs, yielding a rich and diverse dataset. We analyze this data using statistical multilevel modeling techniques, accounting for the nested dependencies among fine-tuned versions of specific LLMs. Our study highlights the crucial role of model size and reinforcement learning from human feedback (RLHF) in improving performance and aligning with human behavior. Interestingly, we find that open-source models are less risk-prone than proprietary models and that fine-tuning on code does not necessarily enhance LLMs' behavior. Finally, we explore the effects of prompt-engineering techniques. We discover that chain-of-thought prompting improves probabilistic reasoning, while take-a-step-back prompting fosters model-based behaviors.
|
[
"['Julian Coda-Forno' 'Marcel Binz' 'Jane X. Wang' 'Eric Schulz']"
] |
null | null |
2402.18241
| null | null |
http://arxiv.org/pdf/2402.18241v1
|
2024-02-28T11:12:47Z
|
2024-02-28T11:12:47Z
|
Affective State Detection using fNIRs and Machine Learning
|
Affective states regulate our day-to-day functioning and have a tremendous effect on mental and physical health. Detection of affective states is of utmost importance for mental health monitoring, smart entertainment selection and dynamic workload management. In this paper, we discuss relevant literature on affective state detection using physiology data, the benefits and limitations of different sensors and methods used for collecting physiology data, and our rationale for selecting functional near-infrared spectroscopy. We present the design of an experiment involving nine subjects to evoke the affective states of meditation, amusement and cognitive load, and the results of the attempt to classify them using machine learning. A mean accuracy of 83.04% was achieved in three-class classification with an individual model; 84.39% accuracy was achieved for a group model and 60.57% accuracy was achieved for a subject-independent model using leave-one-out cross validation. It was found that prediction accuracy for cognitive load (evoked using a pen-and-paper task) was higher than for the other two classes (evoked using computer-based tasks). To verify that this discrepancy was not due to motor skills involved in the pen-and-paper task, a second experiment was conducted using four participants, and the results of that experiment are also presented in the paper.
|
[
"['Ritam Ghosh']"
] |
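The subject-independent result above uses leave-one-out cross-validation; with per-subject grouping this is typically leave-one-subject-out, as in the hedged sketch below. `X`, `y`, and `subject_ids` are assumed to be given arrays, and `RandomForestClassifier` merely stands in for whatever model the study actually used.

```python
# Leave-one-subject-out evaluation: each fold holds out all trials of one subject.
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import LeaveOneGroupOut

scores = []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=subject_ids):
    clf = RandomForestClassifier().fit(X[train_idx], y[train_idx])
    scores.append(accuracy_score(y[test_idx], clf.predict(X[test_idx])))
print(f"subject-independent accuracy: {sum(scores) / len(scores):.4f}")
```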
null | null |
2402.18260
| null | null |
http://arxiv.org/pdf/2402.18260v2
|
2024-04-15T15:40:06Z
|
2024-02-28T11:47:15Z
|
Efficiently Computable Safety Bounds for Gaussian Processes in Active
Learning
|
Active learning of physical systems must commonly respect practical safety constraints, which restricts the exploration of the design space. Gaussian Processes (GPs) and their calibrated uncertainty estimations are widely used for this purpose. In many technical applications the design space is explored via continuous trajectories, along which the safety needs to be assessed. This is particularly challenging for strict safety requirements in GP methods, as it requires computationally expensive Monte-Carlo sampling of high quantiles. We address these challenges by providing provable safety bounds based on the adaptively sampled median of the supremum of the posterior GP. Our method significantly reduces the number of samples required for estimating high safety probabilities, resulting in faster evaluation without sacrificing accuracy and exploration speed. The effectiveness of our safe active learning approach is demonstrated through extensive simulations and validated using a real-world engine example.
|
[
"['Jörn Tebbe' 'Christoph Zimmer' 'Ansgar Steland' 'Markus Lange-Hegermann'\n 'Fabian Mies']"
] |
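For orientation, the quantity being bounded is the probability that the posterior GP exceeds a safety threshold somewhere along a trajectory; a brute-force Monte-Carlo estimate of it is sketched below, which the paper's adaptively sampled median bound replaces with something far more sample-efficient. `X_train`, `y_train`, and the threshold `T` are assumed given.

```python
# Naive MC estimate of P(sup_x f(x) > T) along a 1-D candidate trajectory.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

gp = GaussianProcessRegressor().fit(X_train, y_train)
traj = np.linspace(0.0, 1.0, 200).reshape(-1, 1)   # trajectory discretization
samples = gp.sample_y(traj, n_samples=1000)        # shape (200, 1000)
p_violation = (samples.max(axis=0) > T).mean()     # fraction of unsafe draws
```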
null | null |
2402.18285
| null | null |
http://arxiv.org/pdf/2402.18285v2
|
2024-05-14T17:23:13Z
|
2024-02-28T12:24:27Z
|
PiShield: A PyTorch Package for Learning with Requirements
|
Deep learning models have shown their strengths in various application domains, however, they often struggle to meet safety requirements for their outputs. In this paper, we introduce PiShield, the first package ever allowing for the integration of the requirements into the neural networks' topology. PiShield guarantees compliance with these requirements, regardless of input. Additionally, it allows for integrating requirements both at inference and/or training time, depending on the practitioners' needs. Given the widespread application of deep learning, there is a growing need for frameworks allowing for the integration of the requirements across various domains. Here, we explore three application scenarios: functional genomics, autonomous driving, and tabular data generation.
|
[
"['Mihaela Cătălina Stoian' 'Alex Tatomir' 'Thomas Lukasiewicz'\n 'Eleonora Giunchiglia']"
] |
null | null |
2402.18286
| null | null |
http://arxiv.org/pdf/2402.18286v1
|
2024-02-28T12:25:01Z
|
2024-02-28T12:25:01Z
|
Self-Supervised Learning in Electron Microscopy: Towards a Foundation
Model for Advanced Image Analysis
|
In this work, we explore the potential of self-supervised learning from unlabeled electron microscopy datasets, taking a step toward building a foundation model in this field. We show how self-supervised pretraining facilitates efficient fine-tuning for a spectrum of downstream tasks, including semantic segmentation, denoising, noise & background removal, and super-resolution. Experimentation with varying model complexities and receptive field sizes reveals the remarkable phenomenon that fine-tuned models of lower complexity consistently outperform more complex models with random weight initialization. We demonstrate the versatility of self-supervised pretraining across various downstream tasks in the context of electron microscopy, allowing faster convergence and better performance. We conclude that self-supervised pretraining serves as a powerful catalyst, being especially advantageous when limited annotated data are available and efficient scaling of computational cost is important.
|
[
"['Bashir Kazimi' 'Karina Ruzaeva' 'Stefan Sandfeld']"
] |
null | null |
2402.18292
| null | null |
http://arxiv.org/pdf/2402.18292v2
|
2024-05-16T07:44:48Z
|
2024-02-28T12:37:30Z
|
FSL-Rectifier: Rectify Outliers in Few-Shot Learning via Test-Time
Augmentation
|
Few-shot learning (FSL) commonly requires a model to identify images (queries) that belong to classes unseen during training, based on a few labelled samples of the new classes (support set) as reference. As the test classes are novel, FSL suffers from high generalization error with respect to the novel classes, and outlier query or support images during inference exacerbate the error further. So far, many algorithms have relied on training data augmentation to improve the generalization capability of FSL models. In contrast, inspired by the fact that test samples are more relevant to the target domain, we believe that test-time augmentation may be more useful than training augmentation for FSL. In this work, to reduce the bias caused by unconventional test samples, we generate new test samples by combining them with similar train-class samples. Averaged representations of the test-time augmentation are then considered for few-shot classification. According to our experiments, by augmenting the support set and query with a few additional generated samples, we can achieve improvements for trained FSL models. Importantly, our method is universally compatible with different off-the-shelf FSL models, whose performance can be improved without extra datasets or further training of the models themselves. Codes are available at https://github.com/WendyBaiYunwei/FSL-Rectifier.
|
[
"['Yunwei Bai' 'Ying Kiat Tan' 'Tsuhan Chen']"
] |
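A minimal sketch of the averaging step described above: embed the original test image together with its generated variants and use the mean feature for classification. Here `generate_augmentations` is a hypothetical placeholder for the paper's combination with similar train-class samples.

```python
# Test-time augmentation averaging for a single query or support image.
import torch

def rectified_embedding(encoder, image, generate_augmentations, k=3):
    views = [image] + generate_augmentations(image, k)  # hypothetical helper
    feats = torch.stack([encoder(v.unsqueeze(0)).squeeze(0) for v in views])
    return feats.mean(dim=0)  # averaged representation for classification
```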
null | null |
2402.18296
| null | null |
http://arxiv.org/pdf/2402.18296v1
|
2024-02-28T12:41:06Z
|
2024-02-28T12:41:06Z
|
Comparative Analysis of XGBoost and MiniRocket Algorithms for Human
Activity Recognition
|
Human Activity Recognition (HAR) has been extensively studied, with recent emphasis on the implementation of advanced Machine Learning (ML) and Deep Learning (DL) algorithms for accurate classification. This study investigates the efficacy of two ML algorithms, eXtreme Gradient Boosting (XGBoost) and MiniRocket, in the realm of HAR using data collected from smartphone sensors. The experiments are conducted on a dataset obtained from the UCI repository, comprising accelerometer and gyroscope signals captured from 30 volunteers performing various activities while wearing a smartphone. The dataset undergoes preprocessing, including noise filtering and feature extraction, before being utilized for training and testing the classifiers. Monte Carlo cross-validation is employed to evaluate the models' robustness. The findings reveal that both XGBoost and MiniRocket attain accuracy, F1 score, and AUC values as high as 0.99 in activity classification. XGBoost exhibits a slightly superior performance compared to MiniRocket. Notably, both algorithms surpass the performance of other ML and DL algorithms reported in the literature for HAR tasks. Additionally, the study compares the computational efficiency of the two algorithms, revealing XGBoost's advantage in terms of training time. Furthermore, the performance of MiniRocket, which achieves accuracy and F1 values of 0.94, and an AUC value of 0.96 using raw data and utilizing only one channel from the sensors, highlights the potential of directly leveraging unprocessed signals. It also suggests potential advantages that could be gained by utilizing sensor fusion or channel fusion techniques. Overall, this research sheds light on the effectiveness and computational characteristics of XGBoost and MiniRocket in HAR tasks, providing insights for future studies in activity recognition using smartphone sensor data.
|
[
"['Celal Alagoz']"
] |
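The Monte Carlo cross-validation mentioned above amounts to repeated random subsampling, which scikit-learn exposes as `ShuffleSplit`; a brief sketch follows. The split count, test fraction, and scoring metric are illustrative assumptions, and `X`, `y` are assumed to be the preprocessed features and labels.

```python
# Monte Carlo CV: many independent random train/test splits.
from sklearn.model_selection import ShuffleSplit, cross_val_score
from xgboost import XGBClassifier  # assumes the xgboost package is installed

mc_cv = ShuffleSplit(n_splits=30, test_size=0.3, random_state=0)
scores = cross_val_score(XGBClassifier(), X, y, cv=mc_cv, scoring="f1_macro")
print(f"mean F1: {scores.mean():.3f} +/- {scores.std():.3f}")
```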
null | null |
2402.18311
| null | null |
http://arxiv.org/pdf/2402.18311v1
|
2024-02-28T13:11:06Z
|
2024-02-28T13:11:06Z
|
Escaping Local Optima in Global Placement
|
Placement is crucial in the physical design, as it greatly affects power, performance, and area metrics. Recent advancements in analytical methods, such as DREAMPlace, have demonstrated impressive performance in global placement. However, DREAMPlace has some limitations, e.g., may not guarantee legalizable placements under the same settings, leading to fragile and unpredictable results. This paper highlights the main issue as being stuck in local optima, and proposes a hybrid optimization framework to efficiently escape the local optima, by perturbing the placement result iteratively. The proposed framework achieves significant improvements compared to state-of-the-art methods on two popular benchmarks.
|
[
"['Ke Xue' 'Xi Lin' 'Yunqi Shi' 'Shixiong Kai' 'Siyuan Xu' 'Chao Qian']"
] |
null | null |
2402.18312
| null | null |
http://arxiv.org/pdf/2402.18312v2
|
2024-05-06T09:16:15Z
|
2024-02-28T13:14:20Z
|
How to think step-by-step: A mechanistic understanding of
chain-of-thought reasoning
|
Despite superior reasoning prowess demonstrated by Large Language Models (LLMs) with Chain-of-Thought (CoT) prompting, a lack of understanding prevails around the internal mechanisms of the models that facilitate CoT generation. This work investigates the neural sub-structures within LLMs that manifest CoT reasoning from a mechanistic point of view. From an analysis of Llama-2 7B applied to multistep reasoning over fictional ontologies, we demonstrate that LLMs deploy multiple parallel pathways of answer generation for step-by-step reasoning. These parallel pathways provide sequential answers from the input question context as well as the generated CoT. We observe a functional rift in the middle layers of the LLM. Token representations in the initial half remain strongly biased towards the pretraining prior, with the in-context prior taking over in the later half. This internal phase shift manifests in different functional components: attention heads that write the answer token appear in the later half, attention heads that move information along ontological relationships appear in the initial half, and so on. To the best of our knowledge, this is the first attempt towards mechanistic investigation of CoT reasoning in LLMs.
|
[
"['Subhabrata Dutta' 'Joykirat Singh' 'Soumen Chakrabarti'\n 'Tanmoy Chakraborty']"
] |
null | null |
2402.18329
| null | null |
http://arxiv.org/pdf/2402.18329v1
|
2024-02-28T13:49:23Z
|
2024-02-28T13:49:23Z
|
Living-off-The-Land Reverse-Shell Detection by Informed Data
Augmentation
|
The living-off-the-land (LOTL) offensive methodologies rely on the perpetration of malicious actions through chains of commands executed by legitimate applications, identifiable exclusively by analysis of system logs. LOTL techniques are well hidden inside the stream of events generated by common legitimate activities; moreover, threat actors often camouflage activity through obfuscation, making them particularly difficult to detect without incurring numerous false alarms, even using machine learning. To improve the performance of models in such a harsh environment, we propose an augmentation framework to enhance and diversify the presence of LOTL malicious activity inside legitimate logs. Guided by threat intelligence, we generate a dataset by injecting attack templates known to be employed in the wild, further enriched by malleable patterns of legitimate activities to replicate the behavior of evasive threat actors. We conduct an extensive ablation study to understand which models better handle our augmented dataset, also manipulated to mimic the presence of model-agnostic evasion and poisoning attacks. Our results suggest that augmentation is needed to maintain high predictive capabilities, that robustness to attack is achieved through specific hardening techniques like adversarial training, and that it is possible to deploy near-real-time models with almost-zero false alarms.
|
[
"['Dmitrijs Trizna' 'Luca Demetrio' 'Battista Biggio' 'Fabio Roli']"
] |
null | null |
2402.18334
| null | null |
http://arxiv.org/pdf/2402.18334v2
|
2024-06-06T13:50:26Z
|
2024-02-28T13:54:57Z
|
Learning to Generate Instruction Tuning Datasets for Zero-Shot Task
Adaptation
|
We introduce Bonito, an open-source model for conditional task generation that converts unannotated text into task-specific training datasets for instruction tuning. We aim to enable zero-shot task adaptation of large language models on users' specialized, private data. We train Bonito by fine-tuning a pretrained large language model on a new large-scale dataset with 1.65M examples created by remixing existing instruction tuning datasets into meta-templates. The meta-templates for a dataset produce training examples where the input is the unannotated text and the task attribute and the output consists of the instruction and the response. We use Bonito to generate synthetic tasks for seven datasets from specialized domains with unannotated text across three task types -- yes-no question answering, extractive question answering, and natural language inference -- and adapt language models. We show that Bonito significantly improves the average performance of pretrained and instruction tuned models over the de facto self-supervised baseline. For example, adapting Mistral-Instruct-v2 and instruction tuned variants of Mistral and Llama2 with Bonito improves the strong zero-shot performance by 22.1 F1 points whereas the next word prediction objective undoes some of the benefits of instruction tuning and reduces the average performance by 0.8 F1 points. We conduct additional experiments with Bonito to understand the effects of the domain, the size of the training set, and the choice of alternative synthetic task generators. Overall, we show that learning with synthetic instruction tuning datasets is an effective way to adapt language models to new domains. The model, dataset, and code are available at https://github.com/BatsResearch/bonito.
|
[
"['Nihal V. Nayak' 'Yiyang Nan' 'Avi Trost' 'Stephen H. Bach']"
] |
null | null |
2402.18337
| null | null |
http://arxiv.org/pdf/2402.18337v1
|
2024-02-28T13:59:20Z
|
2024-02-28T13:59:20Z
|
Probabilistic Bayesian optimal experimental design using conditional
normalizing flows
|
Bayesian optimal experimental design (OED) seeks to conduct the most informative experiment under budget constraints to update the prior knowledge of a system to its posterior from the experimental data in a Bayesian framework. Such problems are computationally challenging because of (1) the expensive and repeated evaluation of some optimality criterion that typically involves a double integration with respect to both the system parameters and the experimental data, (2) the curse of dimensionality when the system parameters and design variables are high-dimensional, and (3) combinatorial and highly non-convex optimization when the design variables are binary, often leading to non-robust designs. To make the solution of the Bayesian OED problem efficient, scalable, and robust for practical applications, we propose a novel joint optimization approach. This approach performs simultaneous (1) training of a scalable conditional normalizing flow (CNF) to efficiently maximize the expected information gain (EIG) of a jointly learned experimental design, and (2) optimization of a probabilistic formulation of the binary experimental design with a Bernoulli distribution. We demonstrate the performance of our proposed method for a practical MRI data acquisition problem, one of the most challenging Bayesian OED problems that has high-dimensional (320 $\times$ 320) parameters at high image resolution, high-dimensional (640 $\times$ 386) observations, and binary mask designs to select the most informative observations.
|
[
"['Rafael Orozco' 'Felix J. Herrmann' 'Peng Chen']"
] |
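The "double integration" the abstract refers to is the expected information gain; its standard form (textbook material, not quoted from the paper) is:

```latex
% Expected information gain of a design d: a nested expectation over
% parameters \theta and data y, which makes Bayesian OED expensive.
\mathrm{EIG}(d) = \mathbb{E}_{p(\theta)\,p(y \mid \theta, d)}
\left[ \log \frac{p(y \mid \theta, d)}{p(y \mid d)} \right].
```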
null | null |
2402.18354
| null | null |
http://arxiv.org/pdf/2402.18354v1
|
2024-02-28T14:26:16Z
|
2024-02-28T14:26:16Z
|
SuperdropNet: a Stable and Accurate Machine Learning Proxy for
Droplet-based Cloud Microphysics
|
Cloud microphysics has important consequences for climate and weather phenomena, and inaccurate representations can limit forecast accuracy. While atmospheric models increasingly resolve storms and clouds, the accuracy of the underlying microphysics remains limited by computationally expedient bulk moment schemes based on simplifying assumptions. Droplet-based Lagrangian schemes are more accurate but are underutilized due to their large computational overhead. Machine learning (ML) based schemes can bridge this gap by learning from vast droplet-based simulation datasets, but have so far struggled to match the accuracy and stability of bulk moment schemes. To address this challenge, we developed SuperdropNet, an ML-based emulator of the Lagrangian superdroplet simulations. To improve accuracy and stability, we employ multi-step autoregressive prediction during training, impose physical constraints, and carefully control stochasticity in the training data. SuperdropNet predicted hydrometeor states and cloud-to-rain transition times more accurately than previous ML emulators, and matched or outperformed bulk moment schemes in many cases. We further carried out detailed analyses to reveal how multistep autoregressive training improves performance, and how the performance of SuperdropNet and other microphysical schemes depends on hydrometeors' mass, number and size distributions. Together our results suggest that ML models can effectively emulate cloud microphysics, in a manner consistent with droplet-based simulations.
|
[
"['Shivani Sharma' 'David Greenberg']"
] |
null | null |
2402.18372
| null | null |
http://arxiv.org/pdf/2402.18372v2
|
2024-03-01T21:53:26Z
|
2024-02-27T15:53:15Z
|
FedUV: Uniformity and Variance for Heterogeneous Federated Learning
|
Federated learning is a promising framework to train neural networks with widely distributed data. However, performance degrades heavily with heterogeneously distributed data. Recent work has shown this is due to the final layer of the network being most prone to local bias, with some finding success in freezing the final layer as an orthogonal classifier. We investigate the training dynamics of the classifier by applying SVD to the weights, motivated by the observation that freezing weights results in constant singular values. We find that there are differences when training in IID and non-IID settings. Based on this finding, we introduce two regularization terms for local training to continuously emulate IID settings: (1) variance in the dimension-wise probability distribution of the classifier and (2) hyperspherical uniformity of representations of the encoder. These regularizations promote local models to act as if they were in an IID setting regardless of the local data distribution, thus offsetting proneness to bias while remaining flexible to the data. In extensive experiments in both label-shift and feature-shift settings, we verify that our method achieves the highest performance by a large margin, especially in highly non-IID cases, in addition to being scalable to larger models and datasets.
|
[
"['Ha Min Son' 'Moon-Hyun Kim' 'Tai-Myoung Chung' 'Chao Huang' 'Xin Liu']"
] |
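The two regularizers named in the record above might look roughly as follows; the exact functional forms are assumptions on my part (the uniformity term follows the common Wang-and-Isola formulation), so consult the paper for the authors' definitions.

```python
# Sketch of (1) classifier probability variance and (2) hyperspherical
# uniformity of encoder representations, as local-training regularizers.
import torch
import torch.nn.functional as F

def variance_reg(logits: torch.Tensor) -> torch.Tensor:
    # Encourage dimension-wise variance in predicted class probabilities;
    # returning the negative mean variance makes minimization increase it.
    return -logits.softmax(dim=1).var(dim=0).mean()

def uniformity_reg(z: torch.Tensor, t: float = 2.0) -> torch.Tensor:
    # Push normalized representations toward uniformity on the hypersphere.
    z = F.normalize(z, dim=1)
    return torch.pdist(z).pow(2).mul(-t).exp().mean().log()
```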
null | null |
2402.18377
| null | null |
http://arxiv.org/pdf/2402.18377v2
|
2024-06-07T23:38:25Z
|
2024-02-28T14:52:58Z
|
Out-of-Domain Generalization in Dynamical Systems Reconstruction
|
In science we are interested in finding the governing equations, the dynamical rules, underlying empirical phenomena. While traditionally scientific models are derived through cycles of human insight and experimentation, recently deep learning (DL) techniques have been advanced to reconstruct dynamical systems (DS) directly from time series data. State-of-the-art dynamical systems reconstruction (DSR) methods show promise in capturing invariant and long-term properties of observed DS, but their ability to generalize to unobserved domains remains an open challenge. Yet, this is a crucial property we would expect from any viable scientific theory. In this work, we provide a formal framework that addresses generalization in DSR. We explain why and how out-of-domain (OOD) generalization (OODG) in DSR profoundly differs from OODG considered elsewhere in machine learning. We introduce mathematical notions based on topological concepts and ergodic theory to formalize the idea of learnability of a DSR model. We formally prove that black-box DL techniques, without adequate structural priors, generally will not be able to learn a generalizing DSR model. We also show this empirically, considering major classes of DSR algorithms proposed so far, and illustrate where and why they fail to generalize across the whole phase space. Our study provides the first comprehensive mathematical treatment of OODG in DSR, and gives a deeper conceptual understanding of where the fundamental problems in OODG lie and how they could possibly be addressed in practice.
|
[
"['Niclas Göring' 'Florian Hess' 'Manuel Brenner' 'Zahra Monfared'\n 'Daniel Durstewitz']"
] |
null | null |
2402.18381
| null | null |
http://arxiv.org/pdf/2402.18381v1
|
2024-02-28T15:02:17Z
|
2024-02-28T15:02:17Z
|
Large Language Models As Evolution Strategies
|
Large Transformer models are capable of implementing a plethora of so-called in-context learning algorithms. These include gradient descent, classification, sequence completion, transformation, and improvement. In this work, we investigate whether large language models (LLMs), which never explicitly encountered the task of black-box optimization, are in principle capable of implementing evolutionary optimization algorithms. While previous works have solely focused on language-based task specification, we move forward and focus on the zero-shot application of LLMs to black-box optimization. We introduce a novel prompting strategy, consisting of least-to-most sorting of discretized population members and querying the LLM to propose an improvement to the mean statistic, i.e. perform a type of black-box recombination operation. Empirically, we find that our setup allows the user to obtain an LLM-based evolution strategy, which we call `EvoLLM', that robustly outperforms baseline algorithms such as random search and Gaussian Hill Climbing on synthetic BBOB functions as well as small neuroevolution tasks. Hence, LLMs can act as `plug-in' in-context recombination operators. We provide several comparative studies of the LLM's model size, prompt strategy, and context construction. Finally, we show that one can flexibly improve EvoLLM's performance by providing teacher algorithm information via instruction fine-tuning on previously collected teacher optimization trajectories.
|
[
"['Robert Tjarko Lange' 'Yingtao Tian' 'Yujin Tang']"
] |
null | null |
2402.18392
| null | null |
http://arxiv.org/pdf/2402.18392v1
|
2024-02-28T15:12:24Z
|
2024-02-28T15:12:24Z
|
Unveiling the Potential of Robustness in Evaluating Causal Inference
Models
|
The growing demand for personalized decision-making has led to a surge of interest in estimating the Conditional Average Treatment Effect (CATE). The intersection of machine learning and causal inference has yielded various effective CATE estimators. However, deploying these estimators in practice is often hindered by the absence of counterfactual labels, making it challenging to select the desirable CATE estimator using conventional model selection procedures like cross-validation. Existing approaches for CATE estimator selection, such as plug-in and pseudo-outcome metrics, face two inherent challenges. Firstly, they are required to determine the metric form and the underlying machine learning models for fitting nuisance parameters or plug-in learners. Secondly, they lack a specific focus on selecting a robust estimator. To address these challenges, this paper introduces a novel approach, the Distributionally Robust Metric (DRM), for CATE estimator selection. The proposed DRM not only eliminates the need to fit additional models but also excels at selecting a robust CATE estimator. Experimental studies demonstrate the efficacy of the DRM method, showcasing its consistent effectiveness in identifying superior estimators while mitigating the risk of selecting inferior ones.
|
[
"['Yiyan Huang' 'Cheuk Hang Leung' 'Siyi Wang' 'Yijun Li' 'Qi Wu']"
] |
null | null |
2402.18396
| null | null |
http://arxiv.org/pdf/2402.18396v1
|
2024-02-28T15:15:23Z
|
2024-02-28T15:15:23Z
|
Deep Confident Steps to New Pockets: Strategies for Docking
Generalization
|
Accurate blind docking has the potential to lead to new biological breakthroughs, but for this promise to be realized, docking methods must generalize well across the proteome. Existing benchmarks, however, fail to rigorously assess generalizability. Therefore, we develop DockGen, a new benchmark based on the ligand-binding domains of proteins, and we show that existing machine learning-based docking models have very weak generalization abilities. We carefully analyze the scaling laws of ML-based docking and show that, by scaling data and model size, as well as integrating synthetic data strategies, we are able to significantly increase the generalization capacity and set new state-of-the-art performance across benchmarks. Further, we propose Confidence Bootstrapping, a new training paradigm that solely relies on the interaction between diffusion and confidence models and exploits the multi-resolution generation process of diffusion models. We demonstrate that Confidence Bootstrapping significantly improves the ability of ML-based docking methods to dock to unseen protein classes, edging closer to accurate and generalizable blind docking methods.
|
[
"['Gabriele Corso' 'Arthur Deng' 'Benjamin Fry' 'Nicholas Polizzi'\n 'Regina Barzilay' 'Tommi Jaakkola']"
] |
null | null |
2402.18419
| null | null |
http://arxiv.org/pdf/2402.18419v1
|
2024-02-28T15:39:53Z
|
2024-02-28T15:39:53Z
|
Can GPT Improve the State of Prior Authorization via Guideline Based
Automated Question Answering?
|
Health insurance companies have a defined process called prior authorization (PA) which is a health plan cost-control process that requires doctors and other healthcare professionals to get clearance in advance from a health plan before performing a particular procedure on a patient in order to be eligible for payment coverage. For health insurance companies, approving PA requests for patients in the medical domain is a time-consuming and challenging task. One of those key challenges is validating whether a request matches certain criteria such as age, gender, etc. In this work, we evaluate whether GPT can validate numerous key factors, in turn helping health plans reach a decision drastically faster. We frame it as a question answering task, prompting GPT to answer a question from a patient's electronic health record. We experiment with different conventional prompting techniques as well as introduce our own novel prompting technique. Moreover, we report a qualitative assessment by humans of the natural language generation outputs from our approach. Results show that our method achieves superior performance with a mean weighted F1 score of 0.61 as compared to its standard counterparts.
|
[
"['Shubham Vatsal' 'Ayush Singh' 'Shabnam Tafreshi']"
] |
null | null |
2402.18424
| null | null |
http://arxiv.org/pdf/2402.18424v1
|
2024-02-28T15:46:09Z
|
2024-02-28T15:46:09Z
|
Emotion Classification in Low and Moderate Resource Languages
|
It is important to be able to analyze the emotional state of people around the globe. There are 7100+ active languages spoken around the world, and building emotion classification for each language is labor intensive. Particularly for low-resource and endangered languages, building emotion classification can be quite challenging. We present a cross-lingual emotion classifier, where we train an emotion classifier with resource-rich languages (i.e. \textit{English} in our work) and transfer the learning to low- and moderate-resource languages. We compare and contrast two approaches to transfer learning from a high-resource language to a low- or moderate-resource language. One approach projects the annotation from a high-resource language to low- and moderate-resource languages in parallel corpora, and the other uses direct transfer from the high-resource language to the other languages. We show the efficacy of our approaches on 6 languages: Farsi, Arabic, Spanish, Ilocano, Odia, and Azerbaijani. Our results indicate that our approaches outperform random baselines and transfer emotions across languages successfully. For all languages, the direct cross-lingual transfer of emotion yields better results. We also create annotated emotion-labeled resources for four languages: Farsi, Azerbaijani, Ilocano and Odia.
|
[
"['Shabnam Tafreshi' 'Shubham Vatsal' 'Mona Diab']"
] |
null | null |
2402.18426
| null | null |
http://arxiv.org/pdf/2402.18426v1
|
2024-02-28T15:51:05Z
|
2024-02-28T15:51:05Z
|
A Relational Inductive Bias for Dimensional Abstraction in Neural
Networks
|
The human cognitive system exhibits remarkable flexibility and generalization capabilities, partly due to its ability to form low-dimensional, compositional representations of the environment. In contrast, standard neural network architectures often struggle with abstract reasoning tasks, overfitting, and requiring extensive data for training. This paper investigates the impact of the relational bottleneck -- a mechanism that focuses processing on relations among inputs -- on the learning of factorized representations conducive to compositional coding and the attendant flexibility of processing. We demonstrate that such a bottleneck not only improves generalization and learning efficiency, but also aligns network performance with human-like behavioral biases. Networks trained with the relational bottleneck developed orthogonal representations of feature dimensions latent in the dataset, reflecting the factorized structure thought to underlie human cognitive flexibility. Moreover, the relational network mimics human biases towards regularity without pre-specified symbolic primitives, suggesting that the bottleneck fosters the emergence of abstract representations that confer flexibility akin to symbols.
|
[
"['Declan Campbell' 'Jonathan D. Cohen']"
] |
null | null |
2402.18434
| null | null |
http://arxiv.org/pdf/2402.18434v1
|
2024-02-28T16:00:25Z
|
2024-02-28T16:00:25Z
|
Graph Regularized Encoder Training for Extreme Classification
|
Deep extreme classification (XC) aims to train an encoder architecture and an accompanying classifier architecture to tag a data point with the most relevant subset of labels from a very large universe of labels. XC applications in ranking, recommendation and tagging routinely encounter tail labels for which the amount of training data is exceedingly small. Graph convolutional networks (GCN) present a convenient but computationally expensive way to leverage task metadata and enhance model accuracies in these settings. This paper formally establishes that in several use cases, the steep computational cost of GCNs is entirely avoidable by replacing GCNs with non-GCN architectures. The paper notices that in these settings, it is much more effective to use graph data to regularize encoder training than to implement a GCN. Based on these insights, an alternative paradigm RAMEN is presented to utilize graph metadata in XC settings that offers significant performance boosts with zero increase in inference computational costs. RAMEN scales to datasets with up to 1M labels and offers prediction accuracy up to 15% higher on benchmark datasets than state of the art methods, including those that use graph metadata to train GCNs. RAMEN also offers 10% higher accuracy over the best baseline on a proprietary recommendation dataset sourced from click logs of a popular search engine. Code for RAMEN will be released publicly.
|
[
"['Anshul Mittal' 'Shikhar Mohan' 'Deepak Saini' 'Suchith C. Prabhu'\n 'Jain jiao' 'Sumeet Agarwal' 'Soumen Chakrabarti' 'Purushottam Kar'\n 'Manik Varma']"
] |
null | null |
2402.18443
| null | null |
http://arxiv.org/pdf/2402.18443v1
|
2024-02-28T16:13:44Z
|
2024-02-28T16:13:44Z
|
LeMo-NADe: Multi-Parameter Neural Architecture Discovery with LLMs
|
Building efficient neural network architectures can be a time-consuming task requiring extensive expert knowledge. This task becomes particularly challenging for edge devices because one has to consider parameters such as power consumption during inferencing, model size, inferencing speed, and CO2 emissions. In this article, we introduce a novel framework designed to automatically discover new neural network architectures based on user-defined parameters, an expert system, and an LLM trained on a large amount of open-domain knowledge. The introduced framework (LeMo-NADe) is tailored to be used by non-AI experts, does not require a predetermined neural architecture search space, and considers a large set of edge device-specific parameters. We implement and validate this proposed neural architecture discovery framework using CIFAR-10, CIFAR-100, and ImageNet16-120 datasets while using GPT-4 Turbo and Gemini as the LLM component. We observe that the proposed framework can rapidly (within hours) discover intricate neural network models that perform extremely well across a diverse set of application settings defined by the user.
|
[
"['Md Hafizur Rahman' 'Prabuddha Chakraborty']"
] |
null | null |
2402.18449
| null | null |
http://arxiv.org/pdf/2402.18449v1
|
2024-02-28T16:21:02Z
|
2024-02-28T16:21:02Z
|
HOP to the Next Tasks and Domains for Continual Learning in NLP
|
Continual Learning (CL) aims to learn a sequence of problems (i.e., tasks and domains) by transferring knowledge acquired on previous problems, whilst avoiding forgetting of past ones. Different from previous approaches, which focused on CL for one NLP task or domain in a specific use-case, in this paper, we address a more general CL setting to learn from a sequence of problems in a unique framework. Our method, HOP, makes it possible to hop across tasks and domains by addressing the CL problem along three directions: (i) we employ a set of adapters to generalize a large pre-trained model to unseen problems, (ii) we compute high-order moments over the distribution of embedded representations to distinguish independent and correlated statistics across different tasks and domains, (iii) we process this enriched information with auxiliary heads specialized for each end problem. An extensive experimental campaign on 4 NLP applications, 5 benchmarks and 2 CL setups demonstrates the effectiveness of our HOP.
|
[
"['Umberto Michieli' 'Mete Ozay']"
] |
null | null |
2402.18477
| null | null |
http://arxiv.org/pdf/2402.18477v2
|
2024-06-11T16:37:51Z
|
2024-02-28T16:58:31Z
|
Signature Kernel Conditional Independence Tests in Causal Discovery for
Stochastic Processes
|
Inferring the causal structure underlying stochastic dynamical systems from observational data holds great promise in domains ranging from science and health to finance. Such processes can often be accurately modeled via stochastic differential equations (SDEs), which naturally imply causal relationships via "which variables enter the differential of which other variables". In this paper, we develop a kernel-based test of conditional independence (CI) on "path-space" -- e.g., solutions to SDEs, but applicable beyond that -- by leveraging recent advances in signature kernels. We demonstrate strictly superior performance of our proposed CI test compared to existing approaches on path-space and provide theoretical consistency results. Then, we develop constraint-based causal discovery algorithms for acyclic stochastic dynamical systems (allowing for self-loops) that leverage temporal information to recover the entire directed acyclic graph. Assuming faithfulness and a CI oracle, we show that our algorithms are sound and complete. We empirically verify that our developed CI test in conjunction with the causal discovery algorithms outperform baselines across a range of settings.
|
[
"['Georg Manten' 'Cecilia Casolo' 'Emilio Ferrucci' 'Søren Wengel Mogensen'\n 'Cristopher Salvi' 'Niki Kilbertus']"
] |
null | null |
2402.18484
| null | null |
http://arxiv.org/pdf/2402.18484v1
|
2024-02-28T17:06:19Z
|
2024-02-28T17:06:19Z
|
A non-intrusive machine learning framework for debiasing long-time
coarse resolution climate simulations and quantifying rare events statistics
|
Due to the rapidly changing climate, the frequency and severity of extreme weather is expected to increase over the coming decades. As fully-resolved climate simulations remain computationally intractable, policy makers must rely on coarse models to quantify risk for extremes. However, coarse models suffer from inherent bias due to the ignored "sub-grid" scales. We propose a framework to non-intrusively debias coarse-resolution climate predictions using neural-network (NN) correction operators. Previous efforts have attempted to train such operators using loss functions that match statistics. However, this approach falls short for events with a longer return period than the training data, since the reference statistics have not converged. Here, the scope is to formulate a learning method that allows for the correction of dynamics and the quantification of extreme events with longer return periods than the training data. The key obstacle is the chaotic nature of the underlying dynamics. To overcome this challenge, we introduce a dynamical systems approach where the correction operator is trained using reference data and a coarse model simulation nudged towards that reference. The method is demonstrated on debiasing an under-resolved quasi-geostrophic model and the Energy Exascale Earth System Model (E3SM). For the former, our method enables the quantification of events with return periods two orders of magnitude longer than the training data. For the latter, when trained on 8 years of ERA5 data, our approach is able to correct the coarse E3SM output to closely reflect the 36-year ERA5 statistics for all prognostic variables and significantly reduce their spatial biases.
|
[
"['Benedikt Barthel Sorensen' 'Alexis Charalampopoulos' 'Shixuan Zhang'\n 'Bryce Harrop' 'Ruby Leung' 'Themistoklis Sapsis']"
] |
null | null |
2402.18487
| null | null |
http://arxiv.org/pdf/2402.18487v1
|
2024-02-28T17:10:22Z
|
2024-02-28T17:10:22Z
|
Human-Centric Aware UAV Trajectory Planning in Search and Rescue
Missions Employing Multi-Objective Reinforcement Learning with AHP and
Similarity-Based Experience Replay
|
The integration of Unmanned Aerial Vehicles (UAVs) into Search and Rescue (SAR) missions presents a promising avenue for enhancing operational efficiency and effectiveness. However, the success of these missions is not solely dependent on the technical capabilities of the drones but also on their acceptance and interaction with humans on the ground. This paper explores the effect of human-centric factors in UAV trajectory planning for SAR missions. We introduce a novel approach based on reinforcement learning augmented with the Analytic Hierarchy Process and a novel similarity-based experience replay to optimize UAV trajectories, balancing operational objectives with human comfort and safety considerations. Additionally, through a comprehensive survey, we investigate the impact of gender cues and anthropomorphism in UAV design on public acceptance and trust, revealing significant implications for drone interaction strategies in SAR. Our contributions include (1) a reinforcement learning framework for UAV trajectory planning that dynamically integrates multi-objective considerations, (2) an analysis of human perceptions towards gendered and anthropomorphized drones in SAR contexts, and (3) the application of similarity-based experience replay for enhanced learning efficiency in complex SAR scenarios. The findings offer valuable insights into designing UAV systems that are not only technically proficient but also aligned with human-centric values.
|
[
"['Mahya Ramezani' 'Jose Luis Sanchez-Lopez']"
] |
null | null |
2402.18491
| null | null |
http://arxiv.org/pdf/2402.18491v1
|
2024-02-28T17:19:26Z
|
2024-02-28T17:19:26Z
|
Dynamical Regimes of Diffusion Models
|
Using statistical physics methods, we study generative diffusion models in the regime where the dimension of space and the number of data are large, and the score function has been trained optimally. Our analysis reveals three distinct dynamical regimes during the backward generative diffusion process. The generative dynamics, starting from pure noise, first encounters a 'speciation' transition where the gross structure of data is unraveled, through a mechanism similar to symmetry breaking in phase transitions. It is followed at a later time by a 'collapse' transition where the trajectories of the dynamics become attracted to one of the memorized data points, through a mechanism which is similar to the condensation in a glass phase. For any dataset, the speciation time can be found from a spectral analysis of the correlation matrix, and the collapse time can be found from the estimation of an 'excess entropy' in the data. The dependence of the collapse time on the dimension and number of data provides a thorough characterization of the curse of dimensionality for diffusion models. Analytical solutions for simple models like high-dimensional Gaussian mixtures substantiate these findings and provide a theoretical framework, while extensions to more complex scenarios and numerical validations with real datasets confirm the theoretical predictions.
|
[
"['Giulio Biroli' 'Tony Bonnaire' 'Valentin de Bortoli' 'Marc Mézard']"
] |
null | null |
2402.18495
| null | null |
http://arxiv.org/pdf/2402.18495v2
|
2024-02-29T13:02:50Z
|
2024-02-28T17:25:06Z
|
ROG$_{PL}$: Robust Open-Set Graph Learning via Region-Based Prototype
Learning
|
Open-set graph learning is a practical task that aims to classify the known class nodes and to identify unknown class samples as unknowns. Conventional node classification methods usually perform unsatisfactorily in open-set scenarios due to the complex data they encounter, such as out-of-distribution (OOD) data and in-distribution (IND) noise. OOD data are samples that do not belong to any known classes. They are outliers if they occur in training (OOD noise), and open-set samples if they occur in testing. IND noise refers to training samples that are assigned incorrect labels. IND noise and OOD noise are prevalent and usually cause the ambiguity problem, including the intra-class variety problem and the inter-class confusion problem. Exploring robust open-set learning methods is therefore necessary and difficult, and it becomes even more difficult for non-IID graph data. To this end, we propose a unified framework named ROG$_{PL}$ to achieve robust open-set learning on complex noisy graph data, by introducing prototype learning. Specifically, ROG$_{PL}$ consists of two modules, i.e., denoising via label propagation and region-based open-set prototype learning. The first module corrects noisy labels through similarity-based label propagation and removes low-confidence samples, to solve the intra-class variety problem caused by noise. The second module learns open-set prototypes for each known class via non-overlapped regions and retains both interior and border prototypes to remedy the inter-class confusion problem. The two modules are iteratively updated under the constraints of classification loss and prototype diversity loss. To the best of our knowledge, the proposed ROG$_{PL}$ is the first robust open-set node classification method for graph data with complex noise.
|
[
"['Qin Zhang' 'Xiaowei Li' 'Jiexin Lu' 'Liping Qiu' 'Shirui Pan'\n 'Xiaojun Chen' 'Junyang Chen']"
] |
null | null |
2402.18505
| null | null |
http://arxiv.org/pdf/2402.18505v1
|
2024-02-28T17:34:21Z
|
2024-02-28T17:34:21Z
|
Evolving machine learning workflows through interactive AutoML
|
Automatic workflow composition (AWC) is a relevant problem in automated machine learning (AutoML) that allows finding suitable sequences of preprocessing and prediction models together with their optimal hyperparameters. This problem can be solved using evolutionary algorithms and, in particular, grammar-guided genetic programming (G3P). Current G3P approaches to AWC define a fixed grammar that formally specifies how workflow elements can be combined and which algorithms can be included. In this paper we present our method, an interactive G3P algorithm that allows users to dynamically modify the grammar to prune the search space and focus on their regions of interest. Our proposal is the first to combine the advantages of a G3P method with ideas from interactive optimisation and human-guided machine learning, an area little explored in the context of AutoML. To evaluate our approach, we present an experimental study in which 20 participants interact with our method to evolve workflows according to their preferences. Our results confirm that the collaboration between our method and humans allows us to find high-performance workflows in terms of accuracy that require less tuning time than those found without human intervention.
|
[
"['Rafael Barbudo' 'Aurora Ramírez' 'José Raúl Romero']"
] |
null | null |
2402.18508
| null | null |
http://arxiv.org/pdf/2402.18508v2
|
2024-05-24T05:51:52Z
|
2024-02-28T17:36:45Z
|
Orchid: Flexible and Data-Dependent Convolution for Sequence Modeling
|
In the rapidly evolving field of deep learning, the demand for models that are both expressive and computationally efficient has never been more critical. This paper introduces Orchid, a novel architecture designed to address the quadratic complexity of traditional attention mechanisms without compromising the ability to capture long-range dependencies and in-context learning. At the core of this architecture lies a new data-dependent global convolution layer, which contextually adapts its kernel conditioned on input sequence using a dedicated conditioning neural network. We design two simple conditioning networks that maintain shift equivariance in our data-dependent convolution operation. The dynamic nature of the proposed convolution kernel grants Orchid high expressivity while maintaining quasilinear scalability for long sequences. We evaluate the proposed model across multiple domains, including language modeling and image classification, to highlight its performance and generality. Our experiments demonstrate that this architecture not only outperforms traditional attention-based architectures such as BERT and Vision Transformers with smaller model sizes, but also extends the feasible sequence length beyond the limitations of the dense attention layers. This achievement represents a significant step towards more efficient and scalable deep learning models for sequence modeling.
|
[
"['Mahdi Karami' 'Ali Ghodsi']"
] |
null | null |
2402.18510
| null | null |
http://arxiv.org/pdf/2402.18510v3
|
2024-05-10T08:55:21Z
|
2024-02-28T17:38:06Z
|
RNNs are not Transformers (Yet): The Key Bottleneck on In-context
Retrieval
|
This paper investigates the gap in representation powers of Recurrent Neural Networks (RNNs) and Transformers in the context of solving algorithmic problems. We focus on understanding whether RNNs, known for their memory efficiency in handling long sequences, can match the performance of Transformers, particularly when enhanced with Chain-of-Thought (CoT) prompting. Our theoretical analysis reveals that CoT improves RNNs but is insufficient to close the gap with Transformers. A key bottleneck lies in the inability of RNNs to perfectly retrieve information from the context, even with CoT: for several tasks that explicitly or implicitly require this capability, such as associative recall and determining if a graph is a tree, we prove that RNNs are not expressive enough to solve the tasks while Transformers can solve them with ease. Conversely, we prove that adopting techniques to enhance the in-context retrieval capability of RNNs, including Retrieval-Augmented Generation (RAG) and adding a single Transformer layer, can elevate RNNs to be capable of solving all polynomial-time solvable problems with CoT, hence closing the representation gap with Transformers.
|
[
"['Kaiyue Wen' 'Xingyu Dang' 'Kaifeng Lyu']"
] |
null | null |
2402.18512
| null | null |
http://arxiv.org/pdf/2402.18512v2
|
2024-06-11T13:35:52Z
|
2024-02-28T17:40:05Z
|
Log Neural Controlled Differential Equations: The Lie Brackets Make a
Difference
|
The vector field of a controlled differential equation (CDE) describes the relationship between a control path and the evolution of a solution path. Neural CDEs (NCDEs) treat time series data as observations from a control path, parameterise a CDE's vector field using a neural network, and use the solution path as a continuously evolving hidden state. As their formulation makes them robust to irregular sampling rates, NCDEs are a powerful approach for modelling real-world data. Building on neural rough differential equations (NRDEs), we introduce Log-NCDEs, a novel, effective, and efficient method for training NCDEs. The core component of Log-NCDEs is the Log-ODE method, a tool from the study of rough paths for approximating a CDE's solution. Log-NCDEs are shown to outperform NCDEs, NRDEs, the linear recurrent unit, S5, and MAMBA on a range of multivariate time series datasets with up to $50{,}000$ observations.
|
[
"['Benjamin Walker' 'Andrew D. McLeod' 'Tiexin Qin' 'Yichuan Cheng'\n 'Haoliang Li' 'Terry Lyons']"
] |
null | null |
2402.18527
| null | null |
http://arxiv.org/pdf/2402.18527v1
|
2024-02-28T18:07:47Z
|
2024-02-28T18:07:47Z
|
Defect Detection in Tire X-Ray Images: Conventional Methods Meet Deep
Structures
|
This paper introduces a robust approach for automated defect detection in tire X-ray images by harnessing traditional feature extraction methods such as Local Binary Pattern (LBP) and Gray Level Co-Occurrence Matrix (GLCM) features, as well as Fourier and Wavelet-based features, complemented by advanced machine learning techniques. Recognizing the challenges inherent in the complex patterns and textures of tire X-ray images, the study emphasizes the significance of feature engineering to enhance the performance of defect detection systems. By meticulously integrating combinations of these features with a Random Forest (RF) classifier and comparing them against advanced models like YOLOv8, the research not only benchmarks the performance of traditional features in defect detection but also explores the synergy between classical and modern approaches. The experimental results demonstrate that these traditional features, when fine-tuned and combined with machine learning models, can significantly improve the accuracy and reliability of tire defect detection, aiming to set a new standard in automated quality assurance in tire manufacturing.
|
[
"['Andrei Cozma' 'Landon Harris' 'Hairong Qi' 'Ping Ji' 'Wenpeng Guo'\n 'Song Yuan']"
] |
null | null |
2402.18540
| null | null |
http://arxiv.org/pdf/2402.18540v1
|
2024-02-28T18:23:49Z
|
2024-02-28T18:23:49Z
|
Keeping LLMs Aligned After Fine-tuning: The Crucial Role of Prompt
Templates
|
Public LLMs such as the Llama 2-Chat have driven huge activity in LLM research. These models underwent alignment training and were considered safe. Recently Qi et al. (2023) reported that even benign fine-tuning (e.g., on seemingly safe datasets) can give rise to unsafe behaviors in the models. The current paper is about methods and best practices to mitigate such loss of alignment. Through extensive experiments on several chat models (Meta's Llama 2-Chat, Mistral AI's Mistral 7B Instruct v0.2, and OpenAI's GPT-3.5 Turbo), this paper uncovers that the prompt templates used during fine-tuning and inference play a crucial role in preserving safety alignment, and proposes the "Pure Tuning, Safe Testing" (PTST) principle -- fine-tune models without a safety prompt, but include it at test time. Fine-tuning experiments on GSM8K, ChatDoctor, and OpenOrca show that PTST significantly reduces the rise of unsafe behaviors, and even almost eliminates them in some cases.
|
[
"['Kaifeng Lyu' 'Haoyu Zhao' 'Xinran Gu' 'Dingli Yu' 'Anirudh Goyal'\n 'Sanjeev Arora']"
] |
null | null |
2402.18546
| null | null |
http://arxiv.org/pdf/2402.18546v3
|
2024-03-19T21:54:05Z
|
2024-02-28T18:29:25Z
|
Generalizability Under Sensor Failure: Tokenization + Transformers
Enable More Robust Latent Spaces
|
A major goal in neuroscience is to discover neural data representations that generalize. This goal is challenged by variability along recording sessions (e.g. environment), subjects (e.g. varying neural structures), and sensors (e.g. sensor noise), among others. Recent work has begun to address generalization across sessions and subjects, but few study robustness to sensor failure which is highly prevalent in neuroscience experiments. In order to address these generalizability dimensions we first collect our own electroencephalography dataset with numerous sessions, subjects, and sensors, then study two time series models: EEGNet (Lawhern et al., 2018) and TOTEM (Talukder et al., 2024). EEGNet is a widely used convolutional neural network, while TOTEM is a discrete time series tokenizer and transformer model. We find that TOTEM outperforms or matches EEGNet across all generalizability cases. Finally through analysis of TOTEM's latent codebook we observe that tokenization enables generalization.
|
[
"['Geeling Chau' 'Yujin An' 'Ahamed Raffey Iqbal' 'Soon-Jo Chung'\n 'Yisong Yue' 'Sabera Talukder']"
] |
null | null |
2402.18551
| null | null |
http://arxiv.org/pdf/2402.18551v1
|
2024-02-28T18:34:53Z
|
2024-02-28T18:34:53Z
|
Implicit Bias of Next-Token Prediction
|
Next-token prediction (NTP), the go-to paradigm for training large language models, involves predicting the next token in a sequence. Departing from traditional one-hot classification, in NTP, multiple tokens with varying frequencies follow each given context. This work frames NTP training as cross-entropy minimization over distinct contexts, each associated with a sparse empirical probability vector across a finite vocabulary. It then addresses the following question: do gradient-based optimizers exhibit a bias towards solutions with specific structure as the NTP training loss reaches its lower bound (entropy)? Specifically, for linear NTP models trained using gradient descent (GD), we make the following contributions: Firstly, we determine NTP-separability conditions on the data, under which GD can attain its lower bound. We also demonstrate that these conditions hold under overparameterization. Secondly, we establish that the parameters of GD projected onto an appropriate data subspace converge to the unique solution of a system of linear equations, which requires the logits' difference of in-support tokens to be equal to the log-ratio of their respective probabilities. Meanwhile, on the orthogonal subspace, the parameters diverge and converge in the direction of the solution of a max-margin quadratic program, minimizing the Euclidean norm of parameters satisfying the NTP-separability conditions. Akin to prior research on the implicit bias of one-hot classification, our work opens exciting avenues for future research that can lead to a better understanding of the optimization, generalization and robustness principles of models trained with NTP.
|
[
"['Christos Thrampoulidis']"
] |
null | null |
2402.18563
| null | null |
http://arxiv.org/pdf/2402.18563v1
|
2024-02-28T18:54:18Z
|
2024-02-28T18:54:18Z
|
Approaching Human-Level Forecasting with Language Models
|
Forecasting future events is important for policy and decision making. In this work, we study whether language models (LMs) can forecast at the level of competitive human forecasters. Towards this goal, we develop a retrieval-augmented LM system designed to automatically search for relevant information, generate forecasts, and aggregate predictions. To facilitate our study, we collect a large dataset of questions from competitive forecasting platforms. Under a test set published after the knowledge cut-offs of our LMs, we evaluate the end-to-end performance of our system against the aggregates of human forecasts. On average, the system nears the crowd aggregate of competitive forecasters, and in some settings surpasses it. Our work suggests that using LMs to forecast the future could provide accurate predictions at scale and help to inform institutional decision making.
|
[
"['Danny Halawi' 'Fred Zhang' 'Chen Yueh-Han' 'Jacob Steinhardt']"
] |
null | null |
2402.18567
| null | null |
http://arxiv.org/pdf/2402.18567v1
|
2024-02-28T18:57:56Z
|
2024-02-28T18:57:56Z
|
Diffusion Language Models Are Versatile Protein Learners
|
This paper introduces diffusion protein language model (DPLM), a versatile protein language model that demonstrates strong generative and predictive capabilities for protein sequences. We first pre-train scalable DPLMs from evolutionary-scale protein sequences within a generative self-supervised discrete diffusion probabilistic framework, which generalizes language modeling for proteins in a principled way. After pre-training, DPLM exhibits the ability to generate structurally plausible, novel, and diverse protein sequences for unconditional generation. We further demonstrate the proposed diffusion generative pre-training makes DPLM possess a better understanding of proteins, making it a superior representation learner, which can be fine-tuned for various predictive tasks, comparing favorably to ESM2 (Lin et al., 2022). Moreover, DPLM can be tailored for various needs, which showcases its prowess of conditional generation in several ways: (1) conditioning on partial peptide sequences, e.g., generating scaffolds for functional motifs with high success rate; (2) incorporating other modalities as conditioner, e.g., structure-conditioned generation for inverse folding; and (3) steering sequence generation towards desired properties, e.g., satisfying specified secondary structures, through a plug-and-play classifier guidance.
|
[
"['Xinyou Wang' 'Zaixiang Zheng' 'Fei Ye' 'Dongyu Xue' 'Shujian Huang'\n 'Quanquan Gu']"
] |
null | null |
2402.18571
| null | null |
http://arxiv.org/pdf/2402.18571v3
|
2024-03-06T08:07:02Z
|
2024-02-28T18:58:25Z
|
Arithmetic Control of LLMs for Diverse User Preferences: Directional
Preference Alignment with Multi-Objective Rewards
|
Fine-grained control over large language models (LLMs) remains a significant challenge, hindering their adaptability to diverse user needs. While Reinforcement Learning from Human Feedback (RLHF) shows promise in aligning LLMs, its reliance on scalar rewards often limits its ability to capture diverse user preferences in real-world applications. To address this limitation, we introduce the Directional Preference Alignment (DPA) framework. Unlike the scalar-reward RLHF, DPA incorporates multi-objective reward modeling to represent diverse preference profiles. Additionally, DPA models user preferences as directions (i.e., unit vectors) in the reward space to achieve user-dependent preference control. Our method involves training a multi-objective reward model and then fine-tuning the LLM with a preference-conditioned variant of Rejection Sampling Finetuning (RSF), an RLHF method adopted by Llama 2. This method enjoys a better performance trade-off across various reward objectives. In comparison with the scalar-reward RLHF, DPA offers users intuitive control over LLM generation: they can arithmetically specify their desired trade-offs (e.g., more helpfulness with less verbosity). We also validate the effectiveness of DPA with real-world alignment experiments on Mistral-7B. Our method provides straightforward arithmetic control over the trade-off between helpfulness and verbosity while maintaining competitive performance with strong baselines such as Direct Preference Optimization (DPO).
|
[
"['Haoxiang Wang' 'Yong Lin' 'Wei Xiong' 'Rui Yang' 'Shizhe Diao'\n 'Shuang Qiu' 'Han Zhao' 'Tong Zhang']"
] |
null | null |
2402.18575
| null | null |
http://arxiv.org/pdf/2402.18575v1
|
2023-12-13T03:39:05Z
|
2023-12-13T03:39:05Z
|
DiffuseRAW: End-to-End Generative RAW Image Processing for Low-Light
Images
|
Imaging under extremely low-light conditions presents a significant challenge and is an ill-posed problem due to the low signal-to-noise ratio (SNR) caused by minimal photon capture. Previously, diffusion models have been used for multiple kinds of generative and image-to-image tasks; however, these models work as a post-processing step, trained on and learning from already-processed images. Such approaches are often not well-suited for extremely low-light tasks. Unlike the task of low-light image enhancement or image-to-image enhancement, we tackle the task of learning the entire image-processing pipeline, from the RAW image to a processed image. For this task, a traditional image processing pipeline often consists of multiple specialized parts that are overly reliant on the downstream tasks. Unlike these, we develop a new generative ISP that relies on fine-tuning latent diffusion models on RAW images and generating processed long-exposure images, which allows for the apt use of priors from large text-to-image generation models. We evaluate our approach on popular end-to-end low-light datasets, for which we see promising results, and set a new SoTA on the See-in-Dark (SID) dataset. Furthermore, with this work, we hope to pave the way for more generative and diffusion-based image processing and other problems on RAW data.
|
[
"['Rishit Dagli']"
] |
null | null |
2402.18576
| null | null |
http://arxiv.org/abs/2402.18576v1
|
2024-01-10T01:15:33Z
|
2024-01-10T01:15:33Z
|
Improved Forecasting Using a PSO-RDV Framework to Enhance Artificial
Neural Network
|
Decision making and planning have long relied heavily on AI-driven forecasts. The government and the general public are working to minimize the risks while maximizing benefits in the face of potential future public health uncertainties. This study used an improved forecasting method utilizing the Random Descending Velocity Inertia Weight (RDV IW) technique to improve the convergence of Particle Swarm Optimization (PSO) and the accuracy of Artificial Neural Network (ANN). The IW technique, inspired by the motion of a golf ball, modified the particles' velocities into a parabolically descending structure as they approached the solution point. Simulation results revealed that the proposed forecasting model with the [0.4, 0.9] combination of alpha and alpha_dump exhibits a 6.36% improvement in position error and an 11.75% improvement in computational time compared to the old model, thus improving its convergence. It reached the optimum level in minimal steps, a 12.50% improvement over the old model, since it provides better velocity averages when speed stabilization occurs at the 24th iteration. Meanwhile, the computed p-values for NRMSE (0.04889174), MAE (0.02829063), MAPE (0.02226053), WAPE (0.01701545), and R2 (0.00000021) of the proposed algorithm are less than the set 0.05 level of significance, indicating a significant result in terms of accuracy performance. Applying the modified ANN-PSO with the RDV IW technique thus greatly improved the new HIV/AIDS forecasting model compared with the two previous models.
|
[
"['Sales Aribe Jr']"
] |
null | null |
2402.18583
| null | null |
http://arxiv.org/pdf/2402.18583v1
|
2024-01-15T00:34:00Z
|
2024-01-15T00:34:00Z
|
Binding-Adaptive Diffusion Models for Structure-Based Drug Design
|
Structure-based drug design (SBDD) aims to generate 3D ligand molecules that bind to specific protein targets. Existing 3D deep generative models, including diffusion models, have shown great promise for SBDD. However, it is challenging to capture the essential protein-ligand interactions exactly in 3D space for molecular generation. To address this problem, we propose a novel framework, namely Binding-Adaptive Diffusion Models (BindDM). In BindDM, we adaptively extract the subcomplex, the essential part of the binding site responsible for protein-ligand interactions. The selected protein-ligand subcomplex is then processed with SE(3)-equivariant neural networks and transmitted back to each atom of the complex, augmenting the target-aware 3D molecule diffusion generation with binding interaction information. We iterate this hierarchical complex-subcomplex process with a cross-hierarchy interaction node to adequately fuse global binding context between the complex and its corresponding subcomplex. Empirical studies on the CrossDocked2020 dataset show BindDM can generate molecules with more realistic 3D structures and higher binding affinities towards the protein targets, with up to -5.92 Avg. Vina Score, while maintaining proper molecular properties. Our code is available at https://github.com/YangLing0818/BindDM
|
[
"['Zhilin Huang' 'Ling Yang' 'Zaixi Zhang' 'Xiangxin Zhou' 'Yu Bao'\n 'Xiawu Zheng' 'Yuwei Yang' 'Yu Wang' 'Wenming Yang']"
] |
null | null |
2402.18587
| null | null |
http://arxiv.org/abs/2402.18587v1
|
2024-02-02T06:23:25Z
|
2024-02-02T06:23:25Z
|
At the Dawn of Generative AI Era: A Tutorial-cum-Survey on New Frontiers
in 6G Wireless Intelligence
|
The majority of data-driven wireless research leans heavily on discriminative AI (DAI) that requires vast real-world datasets. Unlike DAI, Generative AI (GenAI) pertains to generative models (GMs) capable of discerning the underlying data distribution, patterns, and features of the input data. This makes GenAI a crucial asset in the wireless domain, wherein real-world data is often scarce, incomplete, costly to acquire, and hard to model or comprehend. With these appealing attributes, GenAI can replace or supplement DAI methods in various capacities. Accordingly, this combined tutorial-survey paper commences with preliminaries of 6G and wireless intelligence by outlining candidate 6G applications and services, presenting a taxonomy of state-of-the-art DAI models, exemplifying prominent DAI use cases, and elucidating the multifaceted ways through which GenAI enhances DAI. Subsequently, we present a tutorial on GMs by spotlighting seminal examples such as generative adversarial networks, variational autoencoders, flow-based GMs, diffusion-based GMs, generative transformers, large language models, to name a few. Contrary to the prevailing belief that GenAI is a nascent trend, our exhaustive review of approximately 120 technical papers demonstrates the scope of research across core wireless research areas, including physical layer design; network optimization, organization, and management; network traffic analytics; cross-layer network security; and localization & positioning. Furthermore, we outline the central role of GMs in pioneering areas of 6G network research, including semantic/THz/near-field communications, ISAC, extremely large antenna arrays, digital twins, AI-generated content services, mobile edge computing and edge AI, adversarial ML, and trustworthy AI. Lastly, we shed light on the multifarious challenges ahead, suggesting potential strategies and promising remedies.
|
[
"['Abdulkadir Celik' 'Ahmed M. Eltawil']"
] |
null | null |
2402.18589
| null | null |
http://arxiv.org/pdf/2402.18589v1
|
2024-02-09T10:25:01Z
|
2024-02-09T10:25:01Z
|
Verif.ai: Towards an Open-Source Scientific Generative
Question-Answering System with Referenced and Verifiable Answers
|
In this paper, we present the current progress of the project Verif.ai, an open-source scientific generative question-answering system with referenced and verified answers. The components of the system are (1) an information retrieval system combining semantic and lexical search techniques over scientific papers (PubMed), (2) a fine-tuned generative model (Mistral 7B) taking top answers and generating answers with references to the papers from which the claim was derived, and (3) a verification engine that cross-checks the generated claim and the abstract or paper from which the claim was derived, verifying whether there may have been any hallucinations in generating the claim. We are reinforcing the generative model by providing the abstract in context, but in addition, an independent set of methods and models are verifying the answer and checking for hallucinations. Therefore, we believe that by using our method, we can make scientists more productive, while building trust in the use of generative language models in scientific environments, where hallucinations and misinformation cannot be tolerated.
|
[
"['Miloš Košprdić' 'Adela Ljajić' 'Bojana Bašaragin' 'Darija Medvecki'\n 'Nikola Milošević']"
] |
null | null |
2402.18591
| null | null |
http://arxiv.org/pdf/2402.18591v1
|
2024-02-12T06:56:13Z
|
2024-02-12T06:56:13Z
|
Stochastic contextual bandits with graph feedback: from independence
number to MAS number
|
We consider contextual bandits with graph feedback, a class of interactive learning problems with richer structures than vanilla contextual bandits, where taking an action reveals the rewards for all neighboring actions in the feedback graph under all contexts. Unlike the multi-armed bandits setting where a growing literature has painted a near-complete understanding of graph feedback, much remains unexplored in the contextual bandits counterpart. In this paper, we make inroads into this inquiry by establishing a regret lower bound $\Omega(\sqrt{\beta_M(G) T})$, where $M$ is the number of contexts, $G$ is the feedback graph, and $\beta_M(G)$ is our proposed graph-theoretical quantity that characterizes the fundamental learning limit for this class of problems. Interestingly, $\beta_M(G)$ interpolates between $\alpha(G)$ (the independence number of the graph) and $\mathsf{m}(G)$ (the maximum acyclic subgraph (MAS) number of the graph) as the number of contexts $M$ varies. We also provide algorithms that achieve near-optimal regrets for important classes of context sequences and/or feedback graphs, such as transitively closed graphs that find applications in auctions and inventory control. In particular, with many contexts, our results show that the MAS number completely characterizes the statistical complexity for contextual bandits, as opposed to the independence number in multi-armed bandits.
|
[
"['Yuxiao Wen' 'Yanjun Han' 'Zhengyuan Zhou']"
] |
null | null |
2402.18595
| null | null |
http://arxiv.org/pdf/2402.18595v1
|
2024-02-25T09:35:30Z
|
2024-02-25T09:35:30Z
|
EncodingNet: A Novel Encoding-based MAC Design for Efficient Neural
Network Acceleration
|
Deep neural networks (DNNs) have achieved great breakthroughs in many fields such as image classification and natural language processing. However, executing DNNs requires massive numbers of multiply-accumulate (MAC) operations on hardware and thus incurs large power consumption. To address this challenge, we propose a novel digital MAC design based on encoding. In this new design, the multipliers are replaced by simple logic gates that project the results onto a wide bit representation. These bits carry individual position weights, which can be trained for specific neural networks to enhance inference accuracy. The outputs of the new multipliers are added by bit-wise weighted accumulation, and the accumulation results are compatible with existing computing platforms accelerating neural networks with either uniform or non-uniform quantization. Since the multiplication function is replaced by simple logic projection, the critical paths in the resulting circuits become much shorter. Correspondingly, pipelining stages in the MAC array can be reduced, leading to a significantly smaller area as well as better power efficiency. The proposed design has been synthesized and verified with ResNet18-Cifar10, ResNet20-Cifar100 and ResNet50-ImageNet. The experimental results confirm a reduction of circuit area by up to 79.63% and a reduction of the power consumption of executing DNNs by up to 70.18%, while the accuracy of the neural networks is still well maintained.
|
[
"['Bo Liu' 'Grace Li Zhang' 'Xunzhao Yin' 'Ulf Schlichtmann' 'Bing Li']"
] |
null | null |
2402.18599
| null | null |
http://arxiv.org/pdf/2402.18599v1
|
2024-02-27T21:15:40Z
|
2024-02-27T21:15:40Z
|
Meta-Tasks: An alternative view on Meta-Learning Regularization
|
Few-shot learning (FSL) is a challenging machine learning problem due to a scarcity of labeled data. The ability to generalize effectively on both novel and training tasks is a significant barrier to FSL. This paper proposes a novel solution that can generalize to both training and novel tasks while also utilizing unlabeled samples. The method refines the embedding model before updating the outer loop using unsupervised techniques as ``meta-tasks''. The experimental results show that our proposed method performs well on novel and training tasks, with faster and better convergence and lower generalization and standard deviation errors, indicating its potential for practical applications in FSL. In particular, the proposed method outperforms prototypical networks by 3.9%.
|
[
"['Mohammad Rostami' 'Atik Faysal' 'Huaxia Wang' 'Avimanyu Sahoo'\n 'Ryan Antle']"
] |
null | null |
2402.18603
| null | null |
http://arxiv.org/pdf/2402.18603v4
|
2024-03-14T12:10:43Z
|
2024-02-28T08:29:42Z
|
MMSR: Symbolic Regression is a Multimodal Task
|
Mathematical formulas are the crystallization of human wisdom in exploring the laws of nature for thousands of years. Describing the complex laws of nature with a concise mathematical formula is a constant pursuit of scientists and a great challenge for artificial intelligence. This field is called symbolic regression (SR). Symbolic regression was originally formulated as a combinatorial optimization problem, and GP and reinforcement learning algorithms were used to solve it. However, GP is sensitive to hyperparameters, and both types of algorithms are inefficient. To solve this problem, researchers have treated the mapping from data to expressions as a translation problem and introduced corresponding large-scale pre-trained models. However, data and expression skeletons do not have the clear word correspondences that two natural languages do. Instead, they are more like two modalities (e.g., image and text). Therefore, in this paper, we propose MMSR, which solves the SR problem as a pure multimodal problem and introduces contrastive learning into the training process for modal alignment, facilitating later modal feature fusion. Notably, to better promote modal feature fusion, we adopt the strategy of training the contrastive learning loss and the other losses at the same time, which requires only one-step training, instead of training the contrastive learning loss first and then the other losses; our experiments show that joint training lets the feature extraction and feature fusion modules adapt to each other better. Experimental results show that, compared with multiple large-scale pre-training baselines, MMSR achieves the most advanced results on multiple mainstream datasets, including SRBench.
|
[
"['Yanjie Li' 'Jingyi Liu' 'Weijun Li' 'Lina Yu' 'Min Wu' 'Wenqiang Li'\n 'Meilan Hao' 'Su Wei' 'Yusong Deng']"
] |
null | null |
2402.18605
| null | null |
http://arxiv.org/pdf/2402.18605v2
|
2024-05-31T21:34:33Z
|
2024-02-28T10:57:30Z
|
FORML: A Riemannian Hessian-free Method for Meta-learning on Stiefel
Manifolds
|
The meta-learning problem is usually formulated as a bi-level optimization in which the task-specific and the meta-parameters are updated in the inner and outer loops of optimization, respectively. However, performing the optimization in the Riemannian space, where the parameters and meta-parameters are located on Riemannian manifolds, is computationally intensive. Unlike Euclidean methods, Riemannian backpropagation requires computing second-order derivatives that include backward computations through Riemannian operators such as retraction and orthogonal projection. This paper introduces a Hessian-free approach that uses a first-order approximation of derivatives on the Stiefel manifold. Our method significantly reduces the computational load and memory footprint. We show how using a Stiefel fully-connected layer, which enforces an orthogonality constraint on the parameters of the last classification layer as the head of the backbone network, strengthens the representation reuse of gradient-based meta-learning methods. Our experimental results across various few-shot learning datasets demonstrate the superiority of our proposed method compared to state-of-the-art methods, especially MAML, its Euclidean counterpart.
|
[
"['Hadi Tabealhojeh' 'Soumava Kumar Roy' 'Peyman Adibi' 'Hossein Karshenas']"
] |
null | null |
2402.18606
| null | null |
http://arxiv.org/pdf/2402.18606v1
|
2024-02-28T11:13:53Z
|
2024-02-28T11:13:53Z
|
Impact of network topology on the performance of Decentralized Federated
Learning
|
Fully decentralized learning is gaining momentum for training AI models at the Internet's edge, addressing infrastructure challenges and privacy concerns. In a decentralized machine learning system, data is distributed across multiple nodes, with each node training a local model based on its respective dataset. The local models are then shared and combined to form a global model capable of making accurate predictions on new data. Our exploration focuses on how different types of network structures influence the spreading of knowledge - the process by which nodes incorporate insights gained from learning patterns in data available on other nodes across the network. Specifically, this study investigates the intricate interplay between network structure and learning performance using three network topologies and six data distribution methods. These methods consider different vertex properties, including degree centrality, betweenness centrality, and clustering coefficient, along with whether nodes exhibit high or low values of these metrics. Our findings underscore the significance of global centrality metrics (degree, betweenness) in correlating with learning performance, while local clustering proves less predictive. We highlight the challenges in transferring knowledge from peripheral to central nodes, attributed to a dilution effect during model aggregation. Additionally, we observe that central nodes exert a pull effect, facilitating the spread of knowledge. In examining degree distribution, hubs in Barabasi-Albert networks positively impact learning for central nodes but exacerbate dilution when knowledge originates from peripheral nodes. Finally, we demonstrate the formidable challenge of knowledge circulation outside of segregated communities.
|
[
"['Luigi Palmieri' 'Chiara Boldrini' 'Lorenzo Valerio' 'Andrea Passarella'\n 'Marco Conti']"
] |
null | null |
2402.18607
| null | null |
http://arxiv.org/pdf/2402.18607v2
|
2024-03-04T03:51:35Z
|
2024-02-28T12:21:12Z
|
Exploring Privacy and Fairness Risks in Sharing Diffusion Models: An
Adversarial Perspective
|
Diffusion models have recently gained significant attention in both academia and industry due to their impressive generative performance in terms of both sampling quality and distribution coverage. Accordingly, proposals are made for sharing pre-trained diffusion models across different organizations, as a way of improving data utilization while enhancing privacy protection by avoiding sharing private data directly. However, the potential risks associated with such an approach have not been comprehensively examined. In this paper, we take an adversarial perspective to investigate the potential privacy and fairness risks associated with the sharing of diffusion models. Specifically, we investigate the circumstances in which one party (the sharer) trains a diffusion model using private data and provides another party (the receiver) black-box access to the pre-trained model for downstream tasks. We demonstrate that the sharer can execute fairness poisoning attacks to undermine the receiver's downstream models by manipulating the training data distribution of the diffusion model. Meanwhile, the receiver can perform property inference attacks to reveal the distribution of sensitive features in the sharer's dataset. Our experiments conducted on real-world datasets demonstrate remarkable attack performance on different types of diffusion models, which highlights the critical importance of robust data auditing and privacy protection protocols in pertinent applications.
|
[
"['Xinjian Luo' 'Yangfan Jiang' 'Fei Wei' 'Yuncheng Wu' 'Xiaokui Xiao'\n 'Beng Chin Ooi']"
] |
null | null |
2402.18609
| null | null |
http://arxiv.org/pdf/2402.18609v4
|
2024-05-08T18:05:43Z
|
2024-02-28T15:06:25Z
|
ICE-SEARCH: A Language Model-Driven Feature Selection Approach
|
This study unveils the In-Context Evolutionary Search (ICE-SEARCH) method, one of the first works to meld large language models (LLMs) with evolutionary algorithms for feature selection (FS) tasks, and demonstrates its effectiveness in Medical Predictive Analytics (MPA) applications. ICE-SEARCH harnesses the crossover and mutation capabilities inherent in LLMs within an evolutionary framework, significantly improving FS through the model's comprehensive world knowledge and its adaptability to a variety of roles. Our evaluation of this methodology spans three crucial MPA tasks: stroke, cardiovascular disease, and diabetes, where ICE-SEARCH outperforms traditional FS methods in pinpointing essential features for medical applications. ICE-SEARCH achieves State-of-the-Art (SOTA) performance in stroke prediction and diabetes prediction; the Decision-Randomized ICE-SEARCH ranks as SOTA in cardiovascular disease prediction. The study emphasizes the critical role of incorporating domain-specific insights, illustrating ICE-SEARCH's robustness, generalizability, and convergence. This opens avenues for further research into comprehensive and intricate FS landscapes, marking a significant stride in the application of artificial intelligence in medical predictive analytics.
|
[
"['Tianze Yang' 'Tianyi Yang' 'Fuyuan Lyu' 'Shaoshan Liu' 'Xue' 'Liu']"
] |