categories: string
doi: string
id: string
year: float64
venue: string
link: string
updated: string
published: string
title: string
abstract: string
authors: list
null
null
2403.03183
null
null
http://arxiv.org/pdf/2403.03183v1
2024-03-05T18:20:10Z
2024-03-05T18:20:10Z
How Well Can Transformers Emulate In-context Newton's Method?
Transformer-based models have demonstrated remarkable in-context learning capabilities, prompting extensive research into their underlying mechanisms. Recent studies have suggested that Transformers can implement first-order optimization algorithms for in-context learning and even second-order ones for the case of linear regression. In this work, we study whether Transformers can perform higher-order optimization methods, beyond the case of linear regression. We establish that linear attention Transformers with ReLU layers can approximate second-order optimization algorithms for the task of logistic regression and achieve $\epsilon$ error with a number of additional layers that is only logarithmic in the error. As a by-product, we demonstrate the ability of even linear attention-only Transformers to implement a single step of Newton's iteration for matrix inversion with merely two layers. These results suggest the ability of the Transformer architecture to implement complex algorithms, beyond gradient descent.
[ "['Angeliki Giannou' 'Liu Yang' 'Tianhao Wang' 'Dimitris Papailiopoulos'\n 'Jason D. Lee']" ]
null
null
2403.03185
null
null
http://arxiv.org/pdf/2403.03185v1
2024-03-05T18:22:15Z
2024-03-05T18:22:15Z
Preventing Reward Hacking with Occupancy Measure Regularization
Reward hacking occurs when an agent performs very well with respect to a "proxy" reward function (which may be hand-specified or learned), but poorly with respect to the unknown true reward. Since ensuring good alignment between the proxy and true reward is extremely difficult, one approach to prevent reward hacking is to optimize the proxy conservatively. Prior work has particularly focused on constraining the learned policy to behave similarly to a "safe" policy by penalizing the KL divergence between their action distributions (AD). However, AD regularization does not always work well, since a small change in action distribution at a single state can lead to potentially calamitous outcomes, while large changes might not be indicative of any dangerous activity. Our insight is that when reward hacking occurs, the agent visits drastically different states from those reached by the safe policy, causing large deviations in state occupancy measure (OM). Thus, we propose regularizing based on the OM divergence between policies instead of the AD divergence to prevent reward hacking. We theoretically establish that OM regularization can more effectively avoid large drops in true reward. Then, we empirically demonstrate in a variety of realistic environments that OM divergence is superior to AD divergence for preventing reward hacking by regularizing towards a safe policy. Furthermore, we show that occupancy measure divergence can also regularize learned policies away from reward hacking behavior. Our code and data are available at https://github.com/cassidylaidlaw/orpo
[ "['Cassidy Laidlaw' 'Shivam Singhal' 'Anca Dragan']" ]
null
null
2403.03187
null
null
http://arxiv.org/pdf/2403.03187v1
2024-03-05T18:22:33Z
2024-03-05T18:22:33Z
Reliable, Adaptable, and Attributable Language Models with Retrieval
Parametric language models (LMs), which are trained on vast amounts of web data, exhibit remarkable flexibility and capability. However, they still face practical challenges such as hallucinations, difficulty in adapting to new data distributions, and a lack of verifiability. In this position paper, we advocate for retrieval-augmented LMs to replace parametric LMs as the next generation of LMs. By incorporating large-scale datastores during inference, retrieval-augmented LMs can be more reliable, adaptable, and attributable. Despite their potential, retrieval-augmented LMs have yet to be widely adopted due to several obstacles: specifically, current retrieval-augmented LMs struggle to leverage helpful text beyond knowledge-intensive tasks such as question answering, have limited interaction between retrieval and LM components, and lack the infrastructure for scaling. To address these, we propose a roadmap for developing general-purpose retrieval-augmented LMs. This involves a reconsideration of datastores and retrievers, the exploration of pipelines with improved retriever-LM interaction, and significant investment in infrastructure for efficient training and inference.
[ "['Akari Asai' 'Zexuan Zhong' 'Danqi Chen' 'Pang Wei Koh'\n 'Luke Zettlemoyer' 'Hannaneh Hajishirzi' 'Wen-tau Yih']" ]
null
null
2403.03208
null
null
http://arxiv.org/pdf/2403.03208v2
2024-05-29T05:20:54Z
2024-03-05T18:46:50Z
Active Statistical Inference
Inspired by the concept of active learning, we propose active inference – a methodology for statistical inference with machine-learning-assisted data collection. Assuming a budget on the number of labels that can be collected, the methodology uses a machine learning model to identify which data points would be most beneficial to label, thus effectively utilizing the budget. It operates on a simple yet powerful intuition: prioritize the collection of labels for data points where the model exhibits uncertainty, and rely on the model's predictions where it is confident. Active inference constructs provably valid confidence intervals and hypothesis tests while leveraging any black-box machine learning model and handling any data distribution. The key point is that it achieves the same level of accuracy with far fewer samples than existing baselines relying on non-adaptively-collected data. This means that for the same number of collected samples, active inference enables smaller confidence intervals and more powerful p-values. We evaluate active inference on datasets from public opinion research, census analysis, and proteomics.
[ "['Tijana Zrnic' 'Emmanuel J. Candès']" ]
null
null
2403.03218
null
null
http://arxiv.org/pdf/2403.03218v7
2024-05-15T19:16:09Z
2024-03-05T18:59:35Z
The WMDP Benchmark: Measuring and Reducing Malicious Use With Unlearning
The White House Executive Order on Artificial Intelligence highlights the risks of large language models (LLMs) empowering malicious actors in developing biological, cyber, and chemical weapons. To measure these risks of malicious use, government institutions and major AI labs are developing evaluations for hazardous capabilities in LLMs. However, current evaluations are private, preventing further research into mitigating risk. Furthermore, they focus on only a few, highly specific pathways for malicious use. To fill these gaps, we publicly release the Weapons of Mass Destruction Proxy (WMDP) benchmark, a dataset of 3,668 multiple-choice questions that serve as a proxy measurement of hazardous knowledge in biosecurity, cybersecurity, and chemical security. WMDP was developed by a consortium of academics and technical consultants, and was stringently filtered to eliminate sensitive information prior to public release. WMDP serves two roles: first, as an evaluation for hazardous knowledge in LLMs, and second, as a benchmark for unlearning methods to remove such hazardous knowledge. To guide progress on unlearning, we develop RMU, a state-of-the-art unlearning method based on controlling model representations. RMU reduces model performance on WMDP while maintaining general capabilities in areas such as biology and computer science, suggesting that unlearning may be a concrete path towards reducing malicious use from LLMs. We release our benchmark and code publicly at https://wmdp.ai
[ "['Nathaniel Li' 'Alexander Pan' 'Anjali Gopal' 'Summer Yue'\n 'Daniel Berrios' 'Alice Gatti' 'Justin D. Li' 'Ann-Kathrin Dombrowski'\n 'Shashwat Goel' 'Long Phan' 'Gabriel Mukobi' 'Nathan Helm-Burger'\n 'Rassin Lababidi' 'Lennart Justen' 'Andrew B. Liu' 'Michael Chen'\n 'Isabelle Barrass' 'Oliver Zhang' 'Xiaoyuan Zhu' 'Rishub Tamirisa'\n 'Bhrugu Bharathi' 'Adam Khoja' 'Zhenqi Zhao' 'Ariel Herbert-Voss'\n 'Cort B. Breuer' 'Samuel Marks' 'Oam Patel' 'Andy Zou' 'Mantas Mazeika'\n 'Zifan Wang' 'Palash Oswal' 'Weiran Lin' 'Adam A. Hunt'\n 'Justin Tienken-Harder' 'Kevin Y. Shih' 'Kemper Talley' 'John Guan'\n 'Russell Kaplan' 'Ian Steneker' 'David Campbell' 'Brad Jokubaitis'\n 'Alex Levinson' 'Jean Wang' 'William Qian' 'Kallol Krishna Karmakar'\n 'Steven Basart' 'Stephen Fitz' 'Mindy Levine' 'Ponnurangam Kumaraguru'\n 'Uday Tupakula' 'Vijay Varadharajan' 'Ruoyu Wang' 'Yan Shoshitaishvili'\n 'Jimmy Ba' 'Kevin M. Esvelt' 'Alexandr Wang' 'Dan Hendrycks']" ]
null
null
2403.03219
null
null
http://arxiv.org/pdf/2403.03219v2
2024-04-03T21:49:42Z
2024-03-05T18:59:47Z
LC-Tsallis-INF: Generalized Best-of-Both-Worlds Linear Contextual Bandits
This study considers the linear contextual bandit problem with independent and identically distributed (i.i.d.) contexts. For this problem, existing studies have proposed Best-of-Both-Worlds (BoBW) algorithms whose regret satisfies $O(\log^2(T))$ for the number of rounds $T$ in a stochastic regime with a suboptimality gap lower-bounded by a positive constant, while satisfying $O(\sqrt{T})$ in an adversarial regime. However, the dependency on $T$ has room for improvement, and the suboptimality-gap assumption can be relaxed. To address this issue, this study proposes an algorithm whose regret satisfies $O(\log(T))$ in the setting where the suboptimality gap is lower-bounded. Furthermore, we introduce a margin condition, a milder assumption on the suboptimality gap, which characterizes the problem difficulty linked to the suboptimality gap using a parameter $\beta \in (0, \infty]$. We then show that the algorithm's regret satisfies $O\left(\left\{\log(T)\right\}^{\frac{1+\beta}{2+\beta}}T^{\frac{1}{2+\beta}}\right)$. Here, $\beta = \infty$ corresponds to the case in the existing studies where a lower bound on the suboptimality gap exists, and our regret satisfies $O(\log(T))$ in that case. Our proposed algorithm is based on Follow-The-Regularized-Leader with Tsallis entropy and is referred to as the $\alpha$-Linear-Contextual (LC)-Tsallis-INF.
[ "['Masahiro Kato' 'Shinji Ito']" ]
null
null
2403.03222
null
null
http://arxiv.org/pdf/2403.03222v1
2024-02-15T01:52:44Z
2024-02-15T01:52:44Z
Knowledge-guided EEG Representation Learning
Self-supervised learning has produced impressive results in multimedia domains of audio, vision and speech. This paradigm is equally, if not more, relevant for the domain of biosignals, owing to the scarcity of labelled data in such scenarios. The ability to leverage large-scale unlabelled data to learn robust representations could help improve the performance of numerous inference tasks on biosignals. Given the inherent domain differences between multimedia modalities and biosignals, the established objectives for self-supervised learning may not translate well to this domain. Hence, there is an unmet need to adapt these methods to biosignal analysis. In this work we propose a self-supervised model for EEG, which provides robust performance and remarkable parameter efficiency by using a state-space-based deep learning architecture. We also propose a novel knowledge-guided pre-training objective that accounts for the idiosyncrasies of the EEG signal. The results indicate improved embedding representation learning and downstream performance compared to prior works on exemplary tasks. Also, the proposed objective significantly reduces the amount of pre-training data required to obtain performance equivalent to prior works.
[ "['Aditya Kommineni' 'Kleanthis Avramidis' 'Richard Leahy'\n 'Shrikanth Narayanan']" ]
null
null
2403.03223
null
null
http://arxiv.org/pdf/2403.03223v2
2024-03-07T06:12:56Z
2024-02-15T17:41:02Z
Exact Enforcement of Temporal Continuity in Sequential Physics-Informed Neural Networks
The use of deep learning methods in scientific computing represents a potential paradigm shift in engineering problem solving. One of the most prominent developments is Physics-Informed Neural Networks (PINNs), in which neural networks are trained to satisfy partial differential equations (PDEs). While this method shows promise, the standard version has been shown to struggle in accurately predicting the dynamic behavior of time-dependent problems. To address this challenge, methods have been proposed that decompose the time domain into multiple segments, employing a distinct neural network in each segment and directly incorporating continuity between them in the loss function of the minimization problem. In this work we introduce a method to exactly enforce continuity between successive time segments via a solution ansatz. This hard constrained sequential PINN (HCS-PINN) method is simple to implement and eliminates the need for any loss terms associated with temporal continuity. The method is tested for a number of benchmark problems involving both linear and non-linear PDEs. Examples include various first order time dependent problems in which traditional PINNs struggle, namely advection, Allen-Cahn, and Korteweg-de Vries equations. Furthermore, second and third order time-dependent problems are demonstrated via wave and Jerky dynamics examples, respectively. Notably, the Jerky dynamics problem is chaotic, making the problem especially sensitive to temporal accuracy. The numerical experiments conducted with the proposed method demonstrated superior convergence and accuracy over both traditional PINNs and the soft-constrained counterparts.
[ "['Pratanu Roy' 'Stephen Castonguay']" ]
null
null
2403.03224
null
null
http://arxiv.org/pdf/2403.03224v1
2024-02-25T16:46:15Z
2024-02-25T16:46:15Z
Reinforcement Learning Jazz Improvisation: When Music Meets Game Theory
Live performances of music are always charming, with the unpredictability of improvisation due to the dynamic between musicians and interactions with the audience. Jazz improvisation is a particularly noteworthy example for further investigation from a theoretical perspective. Here, we introduce a novel mathematical game theory model for jazz improvisation, providing a framework for studying music theory and improvisational methodologies. We use computational modeling, mainly reinforcement learning, to explore diverse stochastic improvisational strategies and their paired performance on improvisation. We find that the most effective pairing combines a strategy that reacts to the most recent payoff (Stepwise Changes) with a reinforcement learning strategy limited to notes in the given chord (Chord-Following Reinforcement Learning). Conversely, a pairing in which one player reacts to the partner's last note and attempts to harmonize with it (Harmony Prediction) yields the lowest non-control payoff and the highest standard deviation, indicating that picking notes based on immediate reactions to the partner player can yield inconsistent outcomes. On average, the Chord-Following Reinforcement Learning strategy demonstrates the highest mean payoff, while Harmony Prediction exhibits the lowest. Our work lays the foundation for promising applications beyond jazz: including the use of artificial intelligence (AI) models to extract data from audio clips to refine musical reward systems, and training machine learning (ML) models on existing jazz solos to further refine strategies within the game.
[ "['Vedant Tapiavala' 'Joshua Piesner' 'Sourjyamoy Barman' 'Feng Fu']" ]
null
null
2403.03229
null
null
http://arxiv.org/pdf/2403.03229v2
2024-03-12T16:03:03Z
2024-03-04T12:36:31Z
Embracing Uncertainty Flexibility: Harnessing a Supervised Tree Kernel to Empower Ensemble Modelling for 2D Echocardiography-Based Prediction of Right Ventricular Volume
Deterioration of right ventricular (RV) function strongly predicts clinical outcomes in numerous circumstances. To boost the clinical deployment of ensemble regression methods that quantify RV volumes using tabular data from the widely available two-dimensional echocardiography (2DE), we propose to complement the volume predictions with uncertainty scores. To this end, we employ an instance-based method which uses the learned tree structure to identify the nearest training samples to a target instance and then uses a number of distribution types to more flexibly model the output. The probabilistic and point-prediction performances of the proposed framework are evaluated on a relatively small-scale dataset, comprising 100 end-diastolic and end-systolic RV volumes. The reference values for point performance were obtained from MRI. The results demonstrate that our flexible approach yields improved probabilistic and point performances over other state-of-the-art methods. The appropriateness of the proposed framework is showcased by providing exemplar cases. The estimated uncertainty embodies both aleatoric and epistemic types. This work aligns with trustworthy artificial intelligence since it can be used to enhance the decision-making process and reduce risks. The feature importance scores of our framework can be exploited to reduce the number of required 2DE views, which could enhance the proposed pipeline's clinical application.
[ "['Tuan A. Bohoran' 'Polydoros N. Kampaktsis' 'Laura McLaughlin' 'Jay Leb'\n 'Gerry P. McCann' 'Archontis Giannakidis']" ]
null
null
2403.03231
null
null
http://arxiv.org/pdf/2403.03231v1
2024-03-04T19:04:41Z
2024-03-04T19:04:41Z
Machine and deep learning methods for predicting 3D genome organization
Three-Dimensional (3D) chromatin interactions, such as enhancer-promoter interactions (EPIs), loops, Topologically Associating Domains (TADs), and A/B compartments play critical roles in a wide range of cellular processes by regulating gene expression. Recent development of chromatin conformation capture technologies has enabled genome-wide profiling of various 3D structures, even with single cells. However, current catalogs of 3D structures remain incomplete and unreliable due to differences in technology, tools, and low data resolution. Machine learning methods have emerged as an alternative to obtain missing 3D interactions and/or improve resolution. Such methods frequently use genome annotation data (ChIP-seq, DNAse-seq, etc.), DNA sequencing information (k-mers, Transcription Factor Binding Site (TFBS) motifs), and other genomic properties to learn the associations between genomic features and chromatin interactions. In this review, we discuss computational tools for predicting three types of 3D interactions (EPIs, chromatin interactions, TAD boundaries) and analyze their pros and cons. We also point out obstacles of computational prediction of 3D interactions and suggest future research directions.
[ "['Brydon P. G. Wall' 'My Nguyen' 'J. Chuck Harrell' 'Mikhail G. Dozmorov']" ]
null
null
2403.03233
null
null
http://arxiv.org/pdf/2403.03233v1
2024-03-04T20:40:50Z
2024-03-04T20:40:50Z
From Displacements to Distributions: A Machine-Learning Enabled Framework for Quantifying Uncertainties in Parameters of Computational Models
This work presents novel extensions for combining two frameworks for quantifying both aleatoric (i.e., irreducible) and epistemic (i.e., reducible) sources of uncertainties in the modeling of engineered systems. The data-consistent (DC) framework poses an inverse problem and solution for quantifying aleatoric uncertainties in terms of pullback and push-forward measures for a given Quantity of Interest (QoI) map. Unfortunately, a pre-specified QoI map is not always available a priori to the collection of data associated with system outputs. The data themselves are often polluted with measurement errors (i.e., epistemic uncertainties), which complicates the process of specifying a useful QoI. The Learning Uncertain Quantities (LUQ) framework defines a formal three-step machine-learning enabled process for transforming noisy datasets into samples of a learned QoI map to enable DC-based inversion. We develop a robust filtering step in LUQ that can learn the most useful quantitative information present in spatio-temporal datasets. The learned QoI map transforms simulated and observed datasets into distributions to perform DC-based inversion. We also develop a DC-based inversion scheme that iterates over time as new spatial datasets are obtained and utilizes quantitative diagnostics to identify both the quality and impact of inversion at each iteration. Reproducing Kernel Hilbert Space theory is leveraged to mathematically analyze the learned QoI map and develop a quantitative sufficiency test for evaluating the filtered data. An illustrative example is utilized throughout while the final two examples involve the manufacturing of shells of revolution to demonstrate various aspects of the presented frameworks.
[ "['Taylor Roper' 'Harri Hakula' 'Troy Butler']" ]
null
null
2403.03234
null
null
http://arxiv.org/pdf/2403.03234v2
2024-06-05T21:02:37Z
2024-03-05T01:42:51Z
Caduceus: Bi-Directional Equivariant Long-Range DNA Sequence Modeling
Large-scale sequence modeling has sparked rapid advances that now extend into biology and genomics. However, modeling genomic sequences introduces challenges such as the need to model long-range token interactions, the effects of upstream and downstream regions of the genome, and the reverse complementarity (RC) of DNA. Here, we propose an architecture motivated by these challenges that builds off the long-range Mamba block, and extends it to a BiMamba component that supports bi-directionality, and to a MambaDNA block that additionally supports RC equivariance. We use MambaDNA as the basis of Caduceus, the first family of RC equivariant bi-directional long-range DNA language models, and we introduce pre-training and fine-tuning strategies that yield Caduceus DNA foundation models. Caduceus outperforms previous long-range models on downstream benchmarks; on a challenging long-range variant effect prediction task, Caduceus exceeds the performance of 10x larger models that do not leverage bi-directionality or equivariance.
[ "['Yair Schiff' 'Chia-Hsiang Kao' 'Aaron Gokaslan' 'Tri Dao' 'Albert Gu'\n 'Volodymyr Kuleshov']" ]
null
null
2403.03240
null
null
http://arxiv.org/pdf/2403.03240v1
2024-03-05T18:29:36Z
2024-03-05T18:29:36Z
Triple/Debiased Lasso for Statistical Inference of Conditional Average Treatment Effects
This study investigates estimation and statistical inference for Conditional Average Treatment Effects (CATEs), which have garnered attention as a metric representing individualized causal effects. In our data-generating process, we assume linear models for the outcomes associated with binary treatments and define the CATE as a difference between the expected outcomes of these linear models. This study allows the linear models to be high-dimensional, and our interest lies in consistent estimation and statistical inference for the CATE. In high-dimensional linear regression, one typical approach is to assume sparsity. However, in our study, we do not assume sparsity directly. Instead, we consider sparsity only in the difference of the linear models. We first use a doubly robust estimator to approximate this difference and then regress the difference on covariates with Lasso regularization. Although this regression estimator is consistent for the CATE, we further reduce the bias using the techniques in double/debiased machine learning (DML) and debiased Lasso, leading to $\sqrt{n}$-consistency and confidence intervals. We refer to the debiased estimator as the triple/debiased Lasso (TDL), applying both DML and debiased Lasso techniques. We confirm the soundness of our proposed method through simulation studies.
[ "['Masahiro Kato']" ]
null
null
2403.03245
null
null
http://arxiv.org/pdf/2403.03245v1
2024-03-05T19:00:01Z
2024-03-05T19:00:01Z
Neural Network Learning and Quantum Gravity
The landscape of low-energy effective field theories stemming from string theory is too vast for a systematic exploration. However, the meadows of the string landscape may be fertile ground for the application of machine learning techniques. Employing neural network learning may allow for inferring novel, undiscovered properties that consistent theories in the landscape should possess, or checking conjectural statements about alleged characteristics thereof. The aim of this work is to describe to what extent the string landscape can be explored with neural network-based learning. Our analysis is motivated by recent studies that show that the string landscape is characterized by finiteness properties, emerging from its underlying tame, o-minimal structures. Indeed, employing these results, we illustrate that any low-energy effective theory of string theory is endowed with certain statistical learnability properties. Consequently, several learning problems therein formulated, including interpolations and multi-class classification problems, can be concretely addressed with machine learning, delivering results with sufficiently high accuracy.
[ "['Stefano Lanza']" ]
null
null
2403.03273
null
null
http://arxiv.org/pdf/2403.03273v1
2024-03-05T19:13:45Z
2024-03-05T19:13:45Z
DINOv2 based Self Supervised Learning For Few Shot Medical Image Segmentation
Deep learning models have emerged as the cornerstone of medical image segmentation, but their efficacy hinges on the availability of extensive manually labeled datasets, and their adaptability to unforeseen categories remains a challenge. Few-shot segmentation (FSS) offers a promising solution by endowing models with the capacity to learn novel classes from limited labeled examples. A leading method for FSS is ALPNet, which compares features between the query image and the few available support segmented images. A key question about using ALPNet is how to design its features. In this work, we delve into the potential of using features from DINOv2, which is a foundational self-supervised learning model in computer vision. Leveraging the strengths of ALPNet and harnessing the feature extraction capabilities of DINOv2, we present a novel approach to few-shot segmentation that not only enhances performance but also paves the way for more robust and adaptable medical image analysis.
[ "['Lev Ayzenberg' 'Raja Giryes' 'Hayit Greenspan']" ]
null
null
2403.03274
null
null
http://arxiv.org/pdf/2403.03274v1
2024-03-05T19:13:57Z
2024-03-05T19:13:57Z
From Noise to Signal: Unveiling Treatment Effects from Digital Health Data through Pharmacology-Informed Neural-SDE
Digital health technologies (DHT), such as wearable devices, provide personalized, continuous, and real-time monitoring of patients. These technologies are contributing to the development of novel therapies and personalized medicine. Gaining insight from these technologies requires appropriate modeling techniques to capture clinically relevant changes in disease state. The data generated from these devices is stochastic in nature, may have missing elements, and exhibits considerable inter-individual variability, thereby making it difficult to analyze using traditional longitudinal modeling techniques. We present a novel pharmacology-informed neural stochastic differential equation (SDE) model capable of addressing these challenges. Using synthetic data, we demonstrate that our approach is effective in identifying treatment effects and learning causal relationships from stochastic data, thereby enabling counterfactual simulation.
[ "['Samira Pakravan' 'Nikolaos Evangelou' 'Maxime Usdin' 'Logan Brooks'\n 'James Lu']" ]
null
null
2403.03276
null
null
http://arxiv.org/pdf/2403.03276v1
2024-03-05T19:15:17Z
2024-03-05T19:15:17Z
ARNN: Attentive Recurrent Neural Network for Multi-channel EEG Signals to Identify Epileptic Seizures
We propose an Attentive Recurrent Neural Network (ARNN), which recurrently applies attention layers along a sequence and has linear complexity with respect to the sequence length. The proposed model operates on multi-channel EEG signals rather than single-channel signals and leverages parallel computation. In this cell, the attention layer is a computational unit that efficiently applies self-attention and cross-attention mechanisms to compute a recurrent function over a large number of state vectors and input signals. Our architecture is inspired in part by the attention layer and long short-term memory (LSTM) cells, and it uses long-short-style gates, but it scales this typical cell up by several orders to parallelize over multi-channel EEG signals. It inherits the advantages of attention layers and LSTM gates while avoiding their respective drawbacks. We evaluated the model's effectiveness through extensive experiments with heterogeneous datasets, including the CHB-MIT and the UPenn and Mayo Clinic seizure datasets. The empirical findings suggest that the ARNN model outperforms baseline methods such as LSTM, Vision Transformer (ViT), Compact Convolution Transformer (CCT), and R-Transformer (RT), showcasing superior performance and faster processing capabilities across a wide range of tasks. The code has been made publicly accessible at https://github.com/Salim-Lysiun/ARNN
[ "['Salim Rukhsar' 'Anil Kumar Tiwari']" ]
null
null
2403.03281
null
null
http://arxiv.org/pdf/2403.03281v1
2024-03-05T19:25:55Z
2024-03-05T19:25:55Z
Credibility-Aware Multi-Modal Fusion Using Probabilistic Circuits
We consider the problem of late multi-modal fusion for discriminative learning. Motivated by noisy, multi-source domains that require understanding the reliability of each data source, we explore the notion of credibility in the context of multi-modal fusion. We propose a combination function that uses probabilistic circuits (PCs) to combine predictive distributions over individual modalities. We also define a probabilistic measure to evaluate the credibility of each modality via inference queries over the PC. Our experimental evaluation demonstrates that our fusion method can reliably infer credibility while maintaining competitive performance with the state-of-the-art.
[ "['Sahil Sidheekh' 'Pranuthi Tenali' 'Saurabh Mathur' 'Erik Blasch'\n 'Kristian Kersting' 'Sriraam Natarajan']" ]
null
null
2403.03292
null
null
http://arxiv.org/pdf/2403.03292v1
2024-03-05T19:47:51Z
2024-03-05T19:47:51Z
Averaging Rate Scheduler for Decentralized Learning on Heterogeneous Data
State-of-the-art decentralized learning algorithms typically require the data distribution to be Independent and Identically Distributed (IID). However, in practical scenarios, the data distribution across the agents can have significant heterogeneity. In this work, we propose averaging rate scheduling as a simple yet effective way to reduce the impact of heterogeneity in decentralized learning. Our experiments illustrate the superiority of the proposed method (~3% improvement in test accuracy) compared to the conventional approach of employing a constant averaging rate.
[ "['Sai Aparna Aketi' 'Sakshi Choudhary' 'Kaushik Roy']" ]
null
null
2403.03295
null
null
http://arxiv.org/pdf/2403.03295v1
2024-03-05T19:49:44Z
2024-03-05T19:49:44Z
Proper vs Improper Quantum PAC learning
A basic question in the PAC model of learning is whether proper learning is harder than improper learning. In the classical case, there are examples of concept classes with VC dimension $d$ that have sample complexity $\Omega\left(\frac{d}{\epsilon}\log\frac{1}{\epsilon}\right)$ for proper learning with error $\epsilon$, while the complexity for improper learning is $O\!\left(\frac{d}{\epsilon}\right)$. One such example arises from the Coupon Collector problem. Motivated by the efficiency of proper versus improper learning with quantum samples, Arunachalam, Belovs, Childs, Kothari, Rosmanis, and de Wolf (TQC 2020) studied an analogue, the Quantum Coupon Collector problem. Curiously, they discovered that for learning size $k$ subsets of $[n]$ the problem has sample complexity $\Theta(k\log\min\{k,n-k+1\})$, in contrast with the complexity of $\Theta(k\log k)$ for Coupon Collector. This effectively negates the possibility of a separation between the two modes of learning via the quantum problem, and Arunachalam et al. posed the possibility of such a separation as an open question. In this work, we first present an algorithm for the Quantum Coupon Collector problem with sample complexity that matches the sharper lower bound of $(1-o_k(1))k\ln\min\{k,n-k+1\}$ shown recently by Bab Hadiashar, Nayak, and Sinha (IEEE TIT 2024), for the entire range of the parameter $k$. Next, we devise a variant of the problem, the Quantum Padded Coupon Collector. We prove that its sample complexity matches that of the classical Coupon Collector problem for both modes of learning, thereby exhibiting the same asymptotic separation between proper and improper quantum learning as mentioned above. The techniques we develop in the process can be directly applied to any form of padded quantum data. We hope that padding can more generally lift other forms of classical learning behaviour to the quantum setting.
[ "['Ashwin Nayak' 'Pulkit Sinha']" ]
null
null
2403.03304
null
null
http://arxiv.org/pdf/2403.03304v2
2024-06-12T19:21:33Z
2024-03-05T20:07:42Z
Large Language Models for Document-Level Event-Argument Data Augmentation for Challenging Role Types
Event Argument Extraction (EAE) is an extremely difficult information extraction problem -- with significant limitations in few-shot cross-domain (FSCD) settings. A common solution to FSCD modeling is data augmentation. Unfortunately, existing augmentation methods are not well-suited to a variety of real-world EAE contexts, including (i) the need to model long documents (10+ sentences) and (ii) the need to model zero- and few-shot roles (i.e., event roles with little to no training representation). In this work, we introduce two novel LLM-powered data augmentation frameworks for synthesizing extractive document-level EAE samples using zero in-domain training data. Our highest performing methods provide a 16-point increase in F1 score on extraction of zero-shot role types. To better facilitate analysis of cross-domain EAE, we additionally introduce a new metric, Role-Depth F1 (RDF1), which uses statistical depth to identify roles in the target domain which are semantic outliers with respect to roles observed in the source domain. Our experiments show that LLM-based augmentation can boost RDF1 performance by up to 11 F1 points compared to baseline methods.
[ "['Joseph Gatto' 'Parker Seegmiller' 'Omar Sharif' 'Sarah M. Preum']" ]
null
null
2403.03310
null
null
http://arxiv.org/pdf/2403.03310v1
2024-03-05T20:23:25Z
2024-03-05T20:23:25Z
Graph Learning for Parameter Prediction of Quantum Approximate Optimization Algorithm
In recent years, quantum computing has emerged as a transformative force in the field of combinatorial optimization, offering novel approaches to tackling complex problems that have long challenged classical computational methods. Among these, the Quantum Approximate Optimization Algorithm (QAOA) stands out for its potential to efficiently solve the Max-Cut problem, a quintessential example of combinatorial optimization. However, practical application faces challenges due to current limitations on quantum computational resources. Our work optimizes QAOA initialization, using Graph Neural Networks (GNNs) as a warm-start technique. This trades affordable classical computation for reduced quantum resource overhead, enhancing QAOA's effectiveness. Experiments with various GNN architectures demonstrate the adaptability and stability of our framework, highlighting the synergy between quantum algorithms and machine learning. Our findings show GNNs' potential in improving QAOA performance, opening new avenues for hybrid quantum-classical approaches in quantum computing and contributing to practical applications.
[ "['Zhiding Liang' 'Gang Liu' 'Zheyuan Liu' 'Jinglei Cheng' 'Tianyi Hao'\n 'Kecheng Liu' 'Hang Ren' 'Zhixin Song' 'Ji Liu' 'Fanny Ye' 'Yiyu Shi']" ]
null
null
2403.03314
null
null
http://arxiv.org/pdf/2403.03314v2
2024-04-25T19:12:02Z
2024-03-05T20:36:26Z
Collision Avoidance Verification of Multiagent Systems with Learned Policies
For many multiagent control problems, neural networks (NNs) have enabled promising new capabilities. However, many of these systems lack formal guarantees (e.g., collision avoidance, robustness), which prevents leveraging these advances in safety-critical settings. While there is recent work on formal verification of NN-controlled systems, most existing techniques cannot handle scenarios with more than one agent. To address this research gap, this paper presents a backward reachability-based approach for verifying the collision avoidance properties of Multi-Agent Neural Feedback Loops (MA-NFLs). Given the dynamics models and trained control policies of each agent, the proposed algorithm computes relative backprojection sets by (simultaneously) solving a series of Mixed Integer Linear Programs (MILPs) offline for each pair of agents. We account for state measurement uncertainties, making it well aligned with real-world scenarios. Using those results, the agents can quickly check for collision avoidance online by solving low-dimensional Linear Programs (LPs). We demonstrate the proposed algorithm can verify collision-free properties of a MA-NFL with agents trained to imitate a collision avoidance algorithm (Reciprocal Velocity Obstacles). We further demonstrate the computational scalability of the approach on systems with up to 10 agents.
[ "['Zihao Dong' 'Shayegan Omidshafiei' 'Michael Everett']" ]
null
null
2403.03322
null
null
http://arxiv.org/pdf/2403.03322v2
2024-07-02T22:28:00Z
2024-03-05T21:05:16Z
Deep Configuration Performance Learning: A Systematic Survey and Taxonomy
Performance is arguably the most crucial attribute that reflects the quality of a configurable software system. However, given the increasing scale and complexity of modern software, modeling and predicting how various configurations can impact performance becomes one of the major challenges in software maintenance. As such, performance is often modeled without having a thorough knowledge of the software system, but relying mainly on data, which fits precisely with the purpose of deep learning. In this paper, we conduct a comprehensive review exclusively on the topic of deep learning for performance learning of configurable software, covering 1,206 searched papers spanning six indexing services, based on which 99 primary papers were extracted and analyzed. Our results outline key statistics, taxonomy, strengths, weaknesses, and optimal usage scenarios for techniques related to the preparation of configuration data, the construction of deep learning performance models, the evaluation of these models, and their utilization in various software configuration-related tasks. We also identify the good practices and potentially problematic phenomena from the studies surveyed, together with a comprehensive summary of actionable suggestions and insights into future opportunities within the field. To promote open science, all the raw results of this survey can be accessed at our repository: https://github.com/ideas-labo/DCPL-SLR.
[ "['Jingzhi Gong' 'Tao Chen']" ]
null
null
2403.03328
null
null
http://arxiv.org/pdf/2403.03328v1
2024-03-05T21:12:10Z
2024-03-05T21:12:10Z
An Ensemble Framework for Explainable Geospatial Machine Learning Models
Analyzing spatially varying effects is pivotal in geographic analysis. Yet, accurately capturing and interpreting this variability is challenging due to the complexity and non-linearity of geospatial data. Herein, we introduce an integrated framework that merges a local spatial weighting scheme, Explainable Artificial Intelligence (XAI), and cutting-edge machine learning technologies to bridge the gap between traditional geographic analysis models and general machine learning approaches. Through tests on synthetic datasets, this framework is verified to enhance the interpretability and accuracy of predictions in both geographic regression and classification by elucidating spatial variability. It significantly boosts prediction precision, offering a novel approach to understanding spatial phenomena.
[ "['Lingbo Liu']" ]
null
null
2403.03333
null
null
http://arxiv.org/pdf/2403.03333v2
2024-06-21T12:43:12Z
2024-03-05T21:34:23Z
Federated Learning over Connected Modes
Statistical heterogeneity in federated learning poses two major challenges: slow global training due to conflicting gradient signals, and the need for personalization for local distributions. In this work, we tackle both challenges by leveraging recent advances in linear mode connectivity -- identifying a linearly connected low-loss region in the weight space of neural networks, which we call a solution simplex. We propose federated learning over connected modes (Floco), where clients are assigned local subregions in this simplex based on their gradient signals, and together learn the shared global solution simplex. This allows personalization of the client models to fit their local distributions within the degrees of freedom in the solution simplex and homogenizes the update signals for the global simplex training. Our experiments show that Floco accelerates the global training process and significantly improves local accuracy with minimal computational overhead.
[ "['Dennis Grinwald' 'Philipp Wiesner' 'Shinichi Nakajima']" ]
null
null
2403.03353
null
null
http://arxiv.org/pdf/2403.03353v2
2024-03-11T14:37:42Z
2024-03-05T22:42:29Z
Hypothesis Spaces for Deep Learning
This paper introduces a hypothesis space for deep learning that employs deep neural networks (DNNs). By treating a DNN as a function of two variables, the physical variable and the parameter variable, we consider the primitive set of DNNs with the parameter variable located in a set of weight matrices and biases determined by a prescribed depth and widths of the DNNs. We then complete the linear span of the primitive DNN set in a weak* topology to construct a Banach space of functions of the physical variable. We prove that the Banach space so constructed is a reproducing kernel Banach space (RKBS) and construct its reproducing kernel. We investigate two learning models, regularized learning and the minimum interpolation problem, in the resulting RKBS by establishing representer theorems for solutions of the learning models. The representer theorems show that solutions of these learning models can be expressed as a linear combination of a finite number of kernel sessions determined by the given data and the reproducing kernel.
[ "['Rui Wang' 'Yuesheng Xu' 'Mingsong Yan']" ]
null
null
2403.03359
null
null
http://arxiv.org/pdf/2403.03359v2
2024-03-15T10:32:45Z
2024-03-05T23:03:56Z
RACE-SM: Reinforcement Learning Based Autonomous Control for Social On-Ramp Merging
Autonomous parallel-style on-ramp merging in human-controlled traffic continues to be an open issue for autonomous vehicle control. Existing non-learning-based solutions for vehicle control rely primarily on rules and optimization, and these methods have been seen to present significant challenges. Recent advancements in Deep Reinforcement Learning have shown promise and have received significant academic interest; however, the available learning-based approaches show inadequate attention to other highway vehicles and often rely on inaccurate road traffic assumptions. In addition, the parallel-style case is rarely considered. We propose a novel learning-based model for acceleration and lane-change decision making that explicitly considers the utility to both the ego vehicle and its surrounding vehicles, which may be cooperative or uncooperative, to produce behaviour that is socially acceptable. The novel reward function makes use of Social Value Orientation to weight the vehicle's level of social cooperation and is divided into ego-vehicle and surrounding-vehicle utility, which are weighted according to the model's designated Social Value Orientation. A two-lane highway with an on-ramp divided into a taper-style and a parallel-style section is considered. Simulation results indicate the importance of considering surrounding vehicles in reward function design and show that the proposed model matches or surpasses those in the literature in terms of collisions, while also introducing socially courteous behaviour that avoids near misses and anti-social behaviour through direct consideration of the effect of merging on surrounding vehicles.
[ "['Jordan Poots']" ]
null
null
2403.03361
null
null
http://arxiv.org/pdf/2403.03361v1
2024-03-05T23:08:18Z
2024-03-05T23:08:18Z
Chained Information-Theoretic bounds and Tight Regret Rate for Linear Bandit Problems
This paper studies the Bayesian regret of a variant of the Thompson-Sampling algorithm for bandit problems. It builds upon the information-theoretic framework of [Russo and Van Roy, 2015] and, more specifically, on the rate-distortion analysis from [Dong and Van Roy, 2020], where they proved a bound with regret rate of $O(d\sqrt{T \log(T)})$ for the $d$-dimensional linear bandit setting. We focus on bandit problems with a metric action space and, using a chaining argument, we establish new bounds that depend on the metric entropy of the action space for a variant of Thompson-Sampling. Under a suitable continuity assumption on the rewards, our bound offers a tight rate of $O(d\sqrt{T})$ for $d$-dimensional linear bandit problems.
[ "['Amaury Gouverneur' 'Borja Rodríguez-Gálvez' 'Tobias J. Oechtering'\n 'Mikael Skoglund']" ]
null
null
2403.03362
null
null
http://arxiv.org/pdf/2403.03362v1
2024-03-05T23:16:13Z
2024-03-05T23:16:13Z
Level Set Teleportation: An Optimization Perspective
We study level set teleportation, an optimization sub-routine which seeks to accelerate gradient methods by maximizing the gradient norm on a level set of the objective function. Since the descent lemma implies that gradient descent (GD) decreases the objective proportional to the squared norm of the gradient, level set teleportation maximizes this one-step progress guarantee. For convex functions satisfying Hessian stability, we prove that GD with level set teleportation obtains a combined sub-linear/linear convergence rate which is strictly faster than standard GD when the optimality gap is small. This is in sharp contrast to the standard (strongly) convex setting, where we show level set teleportation neither improves nor worsens convergence rates. To evaluate teleportation in practice, we develop a projected-gradient-type method requiring only Hessian-vector products. We use this method to show that gradient methods with access to a teleportation oracle uniformly outperform their standard versions on a variety of learning problems.
[ "['Aaron Mishkin' 'Alberto Bietti' 'Robert M. Gower']" ]
null
null
2403.03368
null
null
http://arxiv.org/pdf/2403.03368v1
2024-03-05T23:31:07Z
2024-03-05T23:31:07Z
Leveraging Federated Learning for Automatic Detection of Clopidogrel Treatment Failures
The effectiveness of clopidogrel, a widely used antiplatelet medication, varies significantly among individuals, necessitating the development of precise predictive models to optimize patient care. In this study, we leverage federated learning strategies to address clopidogrel treatment failure detection. Our research harnesses the collaborative power of multiple healthcare institutions, allowing them to jointly train machine learning models while safeguarding sensitive patient data. Utilizing the UK Biobank dataset, which encompasses a vast and diverse population, we partitioned the data based on geographic centers and evaluated the performance of federated learning. Our results show that while centralized training achieves higher Area Under the Curve (AUC) values and faster convergence, federated learning approaches can substantially narrow this performance gap. Our findings underscore the potential of federated learning in addressing clopidogrel treatment failure detection, offering a promising avenue for enhancing patient care through personalized treatment strategies while respecting data privacy. This study contributes to the growing body of research on federated learning in healthcare and lays the groundwork for secure and privacy-preserving predictive models for various medical conditions.
[ "['Samuel Kim' 'Min Sang Kim']" ]
null
null
2403.03372
null
null
http://arxiv.org/pdf/2403.03372v1
2024-03-05T23:37:43Z
2024-03-05T23:37:43Z
TartanAviation: Image, Speech, and ADS-B Trajectory Datasets for Terminal Airspace Operations
We introduce TartanAviation, an open-source multi-modal dataset focused on terminal-area airspace operations. TartanAviation provides a holistic view of the airport environment by concurrently collecting image, speech, and ADS-B trajectory data using setups installed inside airport boundaries. The datasets were collected at both towered and non-towered airfields across multiple months to capture diversity in aircraft operations, seasons, aircraft types, and weather conditions. In total, TartanAviation provides 3.1M images, 3374 hours of Air Traffic Control speech data, and 661 days of ADS-B trajectory data. The data was filtered, processed, and validated to create a curated dataset. In addition to the dataset, we also open-source the code-base used to collect and pre-process the dataset, further enhancing accessibility and usability. We believe this dataset has many potential use cases and would be particularly vital in allowing AI and machine learning technologies to be integrated into air traffic control systems and advance the adoption of autonomous aircraft in the airspace.
[ "['Jay Patrikar' 'Joao Dantas' 'Brady Moon' 'Milad Hamidi' 'Sourish Ghosh'\n 'Nikhil Keetha' 'Ian Higgins' 'Atharva Chandak' 'Takashi Yoneyama'\n 'Sebastian Scherer']" ]
null
null
2403.03375
null
null
http://arxiv.org/pdf/2403.03375v2
2024-06-16T15:43:33Z
2024-03-05T23:54:00Z
Complexity Matters: Dynamics of Feature Learning in the Presence of Spurious Correlations
Existing research often posits spurious features as easier to learn than core features in neural network optimization, but the impact of their relative simplicity remains under-explored. Moreover, studies mainly focus on end performance rather than the dynamics of feature learning. In this paper, we propose a theoretical framework and an associated synthetic dataset grounded in boolean function analysis. This setup allows for fine-grained control over the relative complexity (compared to core features) and correlation strength (with respect to the label) of spurious features to study the dynamics of feature learning under spurious correlations. Our findings uncover several interesting phenomena: (1) stronger spurious correlations or simpler spurious features slow down the learning of the core features, (2) two distinct subnetworks are formed to learn core and spurious features separately, (3) learning phases of spurious and core features are not always separable, (4) spurious features are not forgotten even after core features are fully learned. We demonstrate that our findings justify the success of retraining the last layer to remove spurious correlations and also identify limitations of popular debiasing algorithms that exploit early learning of spurious features. We support our empirical findings with theoretical analyses for the case of learning XOR features with a one-hidden-layer ReLU network.
[ "['GuanWen Qiu' 'Da Kuang' 'Surbhi Goel']" ]
null
null
2403.03390
null
null
http://arxiv.org/pdf/2403.03390v1
2024-03-06T00:59:51Z
2024-03-06T00:59:51Z
Performance Evaluation of Semi-supervised Learning Frameworks for Multi-Class Weed Detection
Effective weed control plays a crucial role in optimizing crop yield and enhancing agricultural product quality. However, the reliance on herbicide application not only poses a critical threat to the environment but also promotes the emergence of resistant weeds. Fortunately, recent advances in precision weed management enabled by ML and DL provide a sustainable alternative. Despite great progress, existing algorithms are mainly developed based on supervised learning approaches, which typically demand large-scale datasets with manually labeled annotations, which are time-consuming and labor-intensive to produce. As such, label-efficient learning methods, especially semi-supervised learning, have gained increased attention in the broader domain of computer vision and have demonstrated promising performance. These methods aim to utilize a small number of labeled data samples along with a great number of unlabeled samples to develop high-performing models comparable to the supervised learning counterpart trained on a large amount of labeled data samples. In this study, we assess the effectiveness of a semi-supervised learning framework for multi-class weed detection, employing two well-known object detection frameworks, namely FCOS and Faster-RCNN. Specifically, we evaluate a generalized student-teacher framework with an improved pseudo-label generation module to produce reliable pseudo-labels for the unlabeled data. To enhance generalization, an ensemble student network is employed to facilitate the training process. Experimental results show that the proposed approach is able to achieve approximately 76% and 96% of the detection accuracy of the supervised methods with only 10% of labeled data on CottonWeedDet3 and CottonWeedDet12, respectively. We offer access to the source code, contributing a valuable resource for ongoing semi-supervised learning research in weed detection and beyond.
[ "['Jiajia Li' 'Dong Chen' 'Xunyuan Yin' 'Zhaojian Li']" ]
null
null
2403.03391
null
null
http://arxiv.org/pdf/2403.03391v2
2024-03-07T06:03:49Z
2024-03-05T16:55:06Z
CoRMF: Criticality-Ordered Recurrent Mean Field Ising Solver
We propose an RNN-based efficient Ising model solver, the Criticality-ordered Recurrent Mean Field (CoRMF), for forward Ising problems. At its core, a criticality-ordered spin sequence of an $N$-spin Ising model is introduced by sorting mission-critical edges with a greedy algorithm, such that an autoregressive mean-field factorization can be utilized and optimized with Recurrent Neural Networks (RNNs). Our method has two notable characteristics: (i) by leveraging the approximated tree structure of the underlying Ising graph, the newly obtained criticality order enables the unification of variational mean-field and RNNs, allowing the generally intractable Ising model to be efficiently probed with probabilistic inference; (ii) it is well-modularized and model-independent while at the same time expressive enough, and hence fully applicable to any forward Ising inference problem with minimal effort. Computationally, by using a variance-reduced Monte Carlo gradient estimator, CoRMF solves Ising problems in a self-training fashion without data/evidence, and inference tasks can be executed by directly sampling from the RNN. Theoretically, we establish a provably tighter error bound than naive mean-field by using matrix cut decomposition machinery. Numerically, we demonstrate the utility of this framework on a series of Ising datasets.
[ "['Zhenyu Pan' 'Ammar Gilani' 'En-Jui Kuo' 'Zhuo Liu']" ]
null
null
2403.03401
null
null
http://arxiv.org/pdf/2403.03401v1
2024-03-06T01:56:17Z
2024-03-06T01:56:17Z
BAIT: Benchmarking (Embedding) Architectures for Interactive Theorem-Proving
Artificial Intelligence for Theorem Proving has given rise to a plethora of benchmarks and methodologies, particularly in Interactive Theorem Proving (ITP). Research in the area is fragmented, with a diverse set of approaches being spread across several ITP systems. This presents a significant challenge to the comparison of methods, which are often complex and difficult to replicate. Addressing this, we present BAIT, a framework for fair and streamlined comparison of learning approaches in ITP. We demonstrate BAIT's capabilities with an in-depth comparison, across several ITP benchmarks, of state-of-the-art architectures applied to the problem of formula embedding. We find that Structure Aware Transformers perform particularly well, improving on techniques associated with the original problem sets. BAIT also allows us to assess the end-to-end proving performance of systems built on interactive environments. This unified perspective reveals a novel end-to-end system that improves on prior work. We also provide a qualitative analysis, illustrating that improved performance is associated with more semantically-aware embeddings. By streamlining the implementation and comparison of Machine Learning algorithms in the ITP context, we anticipate BAIT will be a springboard for future research.
[ "['Sean Lamont' 'Michael Norrish' 'Amir Dezfouli' 'Christian Walder'\n 'Paul Montague']" ]
null
null
2403.03406
null
null
http://arxiv.org/pdf/2403.03406v1
2024-03-06T02:09:50Z
2024-03-06T02:09:50Z
An EnKF-LSTM Assimilation Algorithm for Crop Growth Model
Accurate and timely prediction of crop growth is of great significance for ensuring crop yields, and researchers have developed several crop models for the prediction of crop growth. However, there are large differences between the simulation results obtained by crop models and the actual results; thus, in this paper, we propose to combine the simulation results with the collected crop data for data assimilation so that the accuracy of prediction is improved. We propose an EnKF-LSTM data assimilation method for various crops by combining an ensemble Kalman filter and an LSTM neural network, which effectively avoids the overfitting problem of existing data assimilation methods and eliminates the uncertainty of the measured data. The verification of the proposed EnKF-LSTM method and its comparison with other data assimilation methods were performed using datasets collected by sensor equipment deployed on a farm.
[ "['Siqi Zhou' 'Ling Wang' 'Jie Liu' 'Jinshan Tang']" ]
null
null
2403.03410
null
null
http://arxiv.org/pdf/2403.03410v1
2024-03-06T02:37:26Z
2024-03-06T02:37:26Z
Prediction Of Cryptocurrency Prices Using LSTM, SVM And Polynomial Regression
The rapid development of information technology, especially the Internet, has provided users with a quick and easy way to seek information. With the convenience offered by internet services, many individuals who initially invested in gold and precious metals are now shifting to digital investments in the form of cryptocurrencies. However, investments in crypto coins are subject to uncertainty and daily fluctuations, posing significant challenges for investors that could result in substantial losses. The uncertainty of the value of these crypto coins is a critical issue in the field of coin investment. Forecasting is one of the methods used to predict the future value of these crypto coins. By utilizing Long Short Term Memory, Support Vector Machine, and Polynomial Regression models for forecasting, a performance comparison is conducted to determine which algorithm is most suitable for predicting cryptocurrency prices. Mean squared error is employed as the benchmark for the comparison. Among the three constructed models, the Support Vector Machine with a linear kernel produces the smallest mean squared error, 0.02, compared to the Long Short Term Memory and Polynomial Regression models. Keywords: Cryptocurrency, Forecasting, Long Short Term Memory, Mean Square Error, Polynomial Regression, Support Vector Machine
[ "['Novan Fauzi Al Giffary' 'Feri Sulianta']" ]
null
null
2403.03412
null
null
http://arxiv.org/pdf/2403.03412v1
2024-03-06T02:39:22Z
2024-03-06T02:39:22Z
Advancing Out-of-Distribution Detection through Data Purification and Dynamic Activation Function Design
In the dynamic realms of machine learning and deep learning, the robustness and reliability of models are paramount, especially in critical real-world applications. A fundamental challenge in this sphere is managing Out-of-Distribution (OOD) samples, significantly increasing the risks of model misclassification and uncertainty. Our work addresses this challenge by enhancing the detection and management of OOD samples in neural networks. We introduce OOD-R (Out-of-Distribution-Rectified), a meticulously curated collection of open-source datasets with enhanced noise reduction properties. In-Distribution (ID) noise in existing OOD datasets can lead to inaccurate evaluation of detection algorithms. Recognizing this, OOD-R incorporates noise filtering technologies to refine the datasets, ensuring a more accurate and reliable evaluation of OOD detection algorithms. This approach not only improves the overall quality of data but also aids in better distinguishing between OOD and ID samples, resulting in up to a 2.5% improvement in model accuracy and a minimum 3.2% reduction in false positives. Furthermore, we present ActFun, an innovative method that fine-tunes the model's response to diverse inputs, thereby improving the stability of feature extraction and minimizing specificity issues. ActFun addresses the common problem of model overconfidence in OOD detection by strategically reducing the influence of hidden units, which enhances the model's capability to estimate OOD uncertainty more accurately. Implementing ActFun in the OOD-R dataset has led to significant performance enhancements, including an 18.42% increase in AUROC of the GradNorm method and a 16.93% decrease in FPR95 of the Energy method. Overall, our research not only advances the methodologies in OOD detection but also emphasizes the importance of dataset integrity for accurate algorithm evaluation.
[ "['Yingrui Ji' 'Yao Zhu' 'Zhigang Li' 'Jiansheng Chen' 'Yunlong Kong'\n 'Jingbo Chen']" ]
null
null
2403.03414
null
null
http://arxiv.org/pdf/2403.03414v1
2024-03-06T02:46:17Z
2024-03-06T02:46:17Z
Leveraging The Finite States of Emotion Processing to Study Late-Life Mental Health
Traditional approaches in mental health research apply General Linear Models (GLMs) to describe the longitudinal dynamics of observed psycho-behavioral measurements (questionnaire summary scores). Similarly, GLMs are also applied to characterize relationships between neurobiological measurements (regional fMRI signals) and perceptual stimuli or other regional signals. While these methods are useful for exploring linear correlations among the isolated signals of those constructs (i.e., summary scores or fMRI signals), these classical frameworks fall short in providing insights into the comprehensive system-level dynamics underlying observable changes. Hidden Markov Models (HMMs) are statistical models that enable us to describe the sequential relations among multiple observable constructs, and, when applied through the lens of Finite State Automata (FSA), can provide a more integrated and intuitive framework for modeling and understanding the underlying controller (the prescription for how to respond to inputs) that fundamentally defines any system, as opposed to linearly correlating output signals produced by the controller. We present a simple and intuitive HMM processing pipeline, vcHMM (see Preliminary Data), that highlights FSA theory and is applicable to both behavioral analysis of questionnaire data and fMRI data. HMMs offer theoretical promise as they are computationally equivalent to the FSA, the control processor of a Turing Machine (TM). The dynamic programming Viterbi algorithm is used to leverage the HMM model; it efficiently identifies the most likely sequence of hidden states. The vcHMM pipeline leverages this grammar to understand how behavior and neural activity relate to depression.
[ "['Yuanzhe Huang' 'Saurab Faruque' 'Minjie Wu' 'Akiko Mizuno'\n 'Eduardo Diniz' 'Shaolin Yang' 'George Dewitt Stetten' 'Noah Schweitzer'\n 'Hecheng Jin' 'Linghai Wang' 'Howard J. Aizenstein']" ]
null
null
2403.03421
null
null
http://arxiv.org/pdf/2403.03421v1
2024-03-06T03:08:20Z
2024-03-06T03:08:20Z
LEAD: Learning Decomposition for Source-free Universal Domain Adaptation
Universal Domain Adaptation (UniDA) targets knowledge transfer in the presence of both covariate and label shifts. Recently, Source-free Universal Domain Adaptation (SF-UniDA) has emerged to achieve UniDA without access to source data, which tends to be more practical due to data protection policies. The main challenge lies in determining whether covariate-shifted samples belong to target-private unknown categories. Existing methods tackle this either through hand-crafted thresholding or by developing time-consuming iterative clustering strategies. In this paper, we propose a new idea of LEArning Decomposition (LEAD), which decouples features into source-known and -unknown components to identify target-private data. Technically, LEAD initially leverages orthogonal decomposition analysis for feature decomposition. Then, LEAD builds instance-level decision boundaries to adaptively identify target-private data. Extensive experiments across various UniDA scenarios have demonstrated the effectiveness and superiority of LEAD. Notably, in the OPDA scenario on the VisDA dataset, LEAD outperforms GLC by 3.5% in overall H-score and reduces the time needed to derive pseudo-labeling decision boundaries by 75%. Besides, LEAD is also appealing in that it is complementary to most existing methods. The code is available at https://github.com/ispc-lab/LEAD.
[ "['Sanqing Qu' 'Tianpei Zou' 'Lianghua He' 'Florian Röhrbein' 'Alois Knoll'\n 'Guang Chen' 'Changjun Jiang']" ]
null
null
2403.03425
null
null
http://arxiv.org/pdf/2403.03425v1
2024-03-06T03:15:25Z
2024-03-06T03:15:25Z
Sculpting Molecules in 3D: A Flexible Substructure Aware Framework for Text-Oriented Molecular Optimization
The integration of deep learning, particularly AI-Generated Content, with high-quality data derived from ab initio calculations has emerged as a promising avenue for transforming the landscape of scientific research. However, the challenge of designing molecular drugs or materials that incorporate multi-modality prior knowledge remains a critical and complex undertaking. Specifically, achieving a practical molecular design necessitates not only meeting the diversity requirements but also addressing structural and textural constraints with various symmetries outlined by domain experts. In this article, we present an innovative approach to tackle this inverse design problem by formulating it as a multi-modality guidance generation/optimization task. Our proposed solution involves a textural-structure alignment symmetric diffusion framework for the implementation of molecular generation/optimization tasks, namely 3DToMolo. 3DToMolo aims to harmonize diverse modalities, aligning them seamlessly to produce molecular structures that adhere to the symmetric structural and textural constraints specified by experts in the field. Experimental trials across three guidance generation settings have shown superior hit-generation performance compared to state-of-the-art methodologies. Moreover, 3DToMolo demonstrates the capability to generate novel molecules incorporating specified target substructures, without the need for prior knowledge. This work not only holds general significance for the advancement of deep learning methodologies but also paves the way for a transformative shift in molecular design strategies. 3DToMolo creates opportunities for a more nuanced and effective exploration of the vast chemical space, opening new frontiers in the development of molecular entities with tailored properties and functionalities.
[ "['Kaiwei Zhang' 'Yange Lin' 'Guangcheng Wu' 'Yuxiang Ren' 'Xuecang Zhang'\n 'Bo wang' 'Xiaoyu Zhang' 'Weitao Du']" ]
null
null
2403.03427
null
null
http://arxiv.org/pdf/2403.03427v1
2024-03-06T03:16:47Z
2024-03-06T03:16:47Z
Single Transit Detection In Kepler With Machine Learning And Onboard Spacecraft Diagnostics
Exoplanet discovery at long orbital periods requires reliably detecting individual transits without additional information about the system. Techniques like phase-folding of light curves and periodogram analysis of radial velocity data are more sensitive to planets with shorter orbital periods, leaving a dearth of planet discoveries at long periods. We present a novel technique using an ensemble of Convolutional Neural Networks incorporating the onboard spacecraft diagnostics of Kepler to classify transits within a light curve. We create a pipeline to recover the location of individual transits, and the period of the orbiting planet, which maintains $>80\%$ transit recovery sensitivity out to an 800-day orbital period. Our neural network pipeline has the potential to discover additional planets in the Kepler dataset, and crucially, within the $\eta$-Earth regime. We report our first candidate from this pipeline, KOI 1271.02. KOI 1271.01 is known to exhibit strong Transit Timing Variations (TTVs), and so we jointly model the TTVs and transits of both transiting planets to constrain the orbital configuration and planetary parameters, and conclude with a series of potential parameters for KOI 1271.02, as there is not currently enough data to uniquely constrain the system. We conclude that KOI 1271.02 has a radius of $5.32 \pm 0.20$ $R_\oplus$ and a mass of $28.94^{+0.23}_{-0.47}$ $M_\oplus$. Future constraints on the nature of KOI 1271.02 require measuring additional TTVs of KOI 1271.01 or observing a second transit of KOI 1271.02.
[ "['Matthew T. Hansen' 'Jason A. Dittmann']" ]
null
null
2403.03434
null
null
http://arxiv.org/pdf/2403.03434v2
2024-03-09T05:01:03Z
2024-03-06T03:36:07Z
An AI-enabled Agent-Based Model and Its Application in Measles Outbreak Simulation for New Zealand
Agent Based Models (ABMs) have emerged as a powerful tool for investigating complex social interactions, particularly in the context of public health and infectious disease investigation. In an effort to enhance the conventional ABM, enabling automated model calibration and reducing the computational resources needed for scaling up the model, we have developed a tensorized and differentiable agent-based model by coupling a Graph Neural Network (GNN) and a Long Short-Term Memory (LSTM) network. The model was employed to investigate the 2019 measles outbreak that occurred in New Zealand, demonstrating a promising ability to accurately simulate the outbreak dynamics, particularly during the peak period of repeated cases. This paper shows that by leveraging the latest Artificial Intelligence (AI) technology and the capabilities of traditional ABMs, we gain deeper insights into the dynamics of infectious disease outbreaks. This, in turn, helps us make more informed decisions when developing effective strategies that strike a balance between managing outbreaks and minimizing disruptions to everyday life.
[ "['Sijin Zhang' 'Alvaro Orsi' 'Lei Chen']" ]
null
null
2403.03444
null
null
http://arxiv.org/pdf/2403.03444v1
2024-03-06T04:02:30Z
2024-03-06T04:02:30Z
Uncertainty Quantification for DeepONets with Ensemble Kalman Inversion
In recent years, operator learning, particularly the DeepONet, has received much attention for efficiently learning complex mappings between input and output functions across diverse fields. However, in practical scenarios with limited and noisy data, assessing the uncertainty in DeepONet predictions becomes essential, especially in mission-critical or safety-critical applications. Existing methods, either computationally intensive or yielding unsatisfactory uncertainty quantification, leave room for developing efficient and informative uncertainty quantification (UQ) techniques tailored for DeepONets. In this work, we propose a novel inference approach for efficient UQ for operator learning by harnessing the power of the Ensemble Kalman Inversion (EKI) approach. EKI, known for being derivative-free, noise-robust, and highly parallelizable, has demonstrated its advantages for UQ in physics-informed neural networks [28]. Our innovative application of EKI enables us to efficiently train ensembles of DeepONets while obtaining informative uncertainty estimates for the output of interest. We deploy a mini-batch variant of EKI to accommodate larger datasets, mitigating the computational demand due to large datasets during the training stage. Furthermore, we introduce a heuristic method to estimate the artificial dynamics covariance, thereby improving our uncertainty estimates. Finally, we demonstrate the effectiveness and versatility of our proposed methodology across various benchmark problems, showcasing its potential to address the pressing challenges of uncertainty quantification in DeepONets, especially for practical applications with limited and noisy data.
[ "['Andrew Pensoneault' 'Xueyu Zhu']" ]
null
null
2403.03448
null
null
http://arxiv.org/abs/2403.03448v1
2024-03-06T04:24:43Z
2024-03-06T04:24:43Z
Kernel Correlation-Dissimilarity for Multiple Kernel k-Means Clustering
The main objective of the Multiple Kernel k-Means (MKKM) algorithm is to extract non-linear information and achieve optimal clustering by optimizing base kernel matrices. Current methods enhance information diversity and reduce redundancy by exploiting interdependencies among multiple kernels based on correlations or dissimilarities. Nevertheless, relying solely on a single metric, such as correlation or dissimilarity, to define kernel relationships introduces bias and incomplete characterization. Consequently, this limitation hinders efficient information extraction, ultimately compromising clustering performance. To tackle this challenge, we introduce a novel method that systematically integrates both kernel correlation and dissimilarity. Our approach comprehensively captures kernel relationships, facilitating more efficient classification information extraction and improving clustering performance. By emphasizing the coherence between kernel correlation and dissimilarity, our method offers a more objective and transparent strategy for extracting non-linear information and significantly improving clustering precision, supported by theoretical rationale. We assess the performance of our algorithm on 13 challenging benchmark datasets, demonstrating its superiority over contemporary state-of-the-art MKKM techniques.
[ "['Rina Su' 'Yu Guo' 'Caiying Wu' 'Qiyu Jin' 'Tieyong Zeng']" ]
null
null
2403.03449
null
null
http://arxiv.org/abs/2403.03449v1
2024-03-06T04:27:10Z
2024-03-06T04:27:10Z
SalienTime: User-driven Selection of Salient Time Steps for Large-Scale Geospatial Data Visualization
The voluminous nature of geospatial temporal data from physical monitors and simulation models poses challenges to efficient data access, often resulting in cumbersome temporal selection experiences in web-based data portals. Thus, selecting a subset of time steps for prioritized visualization and pre-loading is highly desirable. Addressing this issue, this paper establishes a multifaceted definition of salient time steps via extensive need-finding studies with domain experts to understand their workflows. Building on this, we propose a novel approach that leverages autoencoders and dynamic programming to facilitate user-driven temporal selections. Structural features, statistical variations, and distance penalties are incorporated to make more flexible selections. User-specified priorities, spatial regions, and aggregations are used to combine different perspectives. We design and implement a web-based interface to enable efficient and context-aware selection of time steps and evaluate its efficacy and usability through case studies, quantitative evaluations, and expert interviews.
[ "['Juntong Chen' 'Haiwen Huang' 'Huayuan Ye' 'Zhong Peng' 'Chenhui Li'\n 'Changbo Wang']" ]
null
null
2403.03454
null
null
http://arxiv.org/pdf/2403.03454v2
2024-03-14T20:09:17Z
2024-03-06T04:43:22Z
Learning Constrained Optimization with Deep Augmented Lagrangian Methods
Learning to Optimize (LtO) is a problem setting in which a machine learning (ML) model is trained to emulate a constrained optimization solver. Learning to produce optimal and feasible solutions subject to complex constraints is a difficult task, but is often made possible by restricting the input space to a limited distribution of related problems. Most LtO methods focus on directly learning solutions to the primal problem, and applying correction schemes or loss function penalties to encourage feasibility. This paper proposes an alternative approach, in which the ML model is trained instead to predict dual solution estimates directly, from which primal estimates are constructed to form dual-feasible solution pairs. This enables an end-to-end training scheme in which the dual objective is maximized as a loss function, and solution estimates iterate toward primal feasibility, emulating a Dual Ascent method. First, it is shown that the poor convergence properties of classical Dual Ascent are reflected in poor convergence of the proposed training scheme. Then, by incorporating techniques from practical Augmented Lagrangian methods, we show how the training scheme can be improved to learn highly accurate constrained optimization solvers, for both convex and nonconvex problems.
[ "['James Kotary' 'Ferdinando Fioretto']" ]
null
null
2403.03458
null
null
http://arxiv.org/pdf/2403.03458v2
2024-06-02T23:04:43Z
2024-03-06T04:49:02Z
Slot Abstractors: Toward Scalable Abstract Visual Reasoning
Abstract visual reasoning is a characteristically human ability, allowing the identification of relational patterns that are abstracted away from object features, and the systematic generalization of those patterns to unseen problems. Recent work has demonstrated strong systematic generalization in visual reasoning tasks involving multi-object inputs, through the integration of slot-based methods used for extracting object-centric representations coupled with strong inductive biases for relational abstraction. However, this approach was limited to problems containing a single rule, and was not scalable to visual reasoning problems containing a large number of objects. Other recent work proposed Abstractors, an extension of Transformers that incorporates strong relational inductive biases, thereby inheriting the Transformer's scalability and multi-head architecture, but it has yet to be demonstrated how this approach might be applied to multi-object visual inputs. Here we combine the strengths of the above approaches and propose Slot Abstractors, an approach to abstract visual reasoning that can be scaled to problems involving a large number of objects and multiple relations among them. The approach displays state-of-the-art performance across four abstract visual reasoning tasks, as well as an abstract reasoning task involving real-world images.
[ "['Shanka Subhra Mondal' 'Jonathan D. Cohen' 'Taylor W. Webb']" ]
null
null
2403.03459
null
null
http://arxiv.org/pdf/2403.03459v1
2024-03-06T04:49:18Z
2024-03-06T04:49:18Z
TGPT-PINN: Nonlinear model reduction with transformed GPT-PINNs
We introduce the Transformed Generative Pre-Trained Physics-Informed Neural Networks (TGPT-PINN) for accomplishing nonlinear model order reduction (MOR) of transport-dominated partial differential equations in an MOR-integrating PINNs framework. Building on the recent development of the GPT-PINN that is a network-of-networks design achieving snapshot-based model reduction, we design and test a novel paradigm for nonlinear model reduction that can effectively tackle problems with parameter-dependent discontinuities. Through incorporation of a shock-capturing loss function component as well as a parameter-dependent transform layer, the TGPT-PINN overcomes the limitations of linear model reduction in the transport-dominated regime. We demonstrate this new capability for nonlinear model reduction in the PINNs framework by several nontrivial parametric partial differential equations.
[ "['Yanlai Chen' 'Yajie Ji' 'Akil Narayan' 'Zhenli Xu']" ]
null
null
2403.03462
null
null
http://arxiv.org/pdf/2403.03462v1
2024-03-06T04:55:39Z
2024-03-06T04:55:39Z
Interactive Continual Learning Architecture for Long-Term Personalization of Home Service Robots
For robots to perform assistive tasks in unstructured home environments, they must learn and reason over the semantic knowledge of those environments. Despite a resurgence in the development of semantic reasoning architectures, these methods assume that all the training data is available a priori. However, each user's environment is unique and can continue to change over time, which makes these methods unsuitable for personalized home service robots. Although research in continual learning develops methods that can learn and adapt over time, most of these methods are tested in the narrow context of object classification on static image datasets. In this paper, we combine ideas from the continual learning, semantic reasoning, and interactive machine learning literature and develop a novel interactive architecture for the continual learning of semantic knowledge in a home environment through human-robot interaction. The architecture builds on core cognitive principles of learning and memory for efficient, real-time learning of new knowledge from humans. We integrate our architecture with a physical mobile manipulator robot and perform extensive system evaluations in a laboratory environment over two months. Our results demonstrate the effectiveness of our architecture in allowing a physical robot to continually adapt to changes in the environment from limited data provided by the users (experimenters), and to use the learned knowledge to perform object fetching tasks.
[ "['Ali Ayub' 'Chrystopher Nehaniv' 'Kerstin Dautenhahn']" ]
null
null
2403.03465
null
null
http://arxiv.org/pdf/2403.03465v1
2024-03-06T05:00:31Z
2024-03-06T05:00:31Z
Self-Attention Empowered Graph Convolutional Network for Structure Learning and Node Embedding
In representation learning on graph-structured data, many popular graph neural networks (GNNs) fail to capture long-range dependencies, leading to performance degradation. Furthermore, this weakness is magnified when the concerned graph is characterized by heterophily (low homophily). To solve this issue, this paper proposes a novel graph learning framework called the graph convolutional network with self-attention (GCN-SA). The proposed scheme exhibits exceptional generalization capability in node-level representation learning. GCN-SA contains two enhancements corresponding to edges and node features. For edges, we utilize a self-attention mechanism to design a stable and effective graph-structure-learning module that can capture the internal correlation between any pair of nodes; this module can identify reliable neighbors for each node from the entire graph. Regarding node features, we modify the transformer block to make it more applicable to graph data, enabling the GCN to fuse valuable information from the entire graph. These two enhancements work in distinct ways to help GCN-SA capture long-range dependencies, enabling it to perform representation learning on graphs with varying levels of homophily. The experimental results on benchmark datasets demonstrate the effectiveness of the proposed GCN-SA. Compared to other outstanding GNN counterparts, GCN-SA is competitive.
[ "['Mengying Jiang' 'Guizhong Liu' 'Yuanchao Su' 'Xinliang Wu']" ]
null
null
2403.03472
null
null
http://arxiv.org/pdf/2403.03472v1
2024-03-06T05:13:23Z
2024-03-06T05:13:23Z
Boosting Meta-Training with Base Class Information for Few-Shot Learning
Few-shot learning, a challenging task in machine learning, aims to learn a classifier adaptable to recognize new, unseen classes with limited labeled examples. Meta-learning has emerged as a prominent framework for few-shot learning, with training frameworks that were originally task-level learning methods, such as Model-Agnostic Meta-Learning (MAML) and Prototypical Networks. A recently proposed training paradigm called Meta-Baseline, which consists of sequential pre-training and meta-training stages, achieves state-of-the-art performance. However, as a non-end-to-end training method, meaning the meta-training stage can only begin after the completion of pre-training, Meta-Baseline suffers from higher training cost and suboptimal performance due to the inherent conflicts between the two training stages. To address these limitations, we propose an end-to-end training paradigm consisting of two alternating loops. In the outer loop, we calculate the cross-entropy loss on the entire training set while updating only the final linear layer. In the inner loop, we employ the original meta-learning training mode to calculate the loss and incorporate gradients from the outer loss to guide the parameter updates. This training paradigm not only converges quickly but also outperforms existing baselines, indicating that information from the overall training set and the meta-learning training paradigm can mutually reinforce one another. Moreover, being model-agnostic, our framework achieves significant performance gains, surpassing the baseline systems by approximately 1%.
[ "['Weihao Jiang' 'Guodong Liu' 'Di He' 'Kun He']" ]
null
null
2403.03473
null
null
http://arxiv.org/pdf/2403.03473v2
2024-04-28T10:52:32Z
2024-03-06T05:13:28Z
Inverse-Free Fast Natural Gradient Descent Method for Deep Learning
Second-order optimization techniques have the potential to achieve faster convergence rates than first-order methods through the incorporation of second-order derivatives or statistics. However, their utilization in deep learning is limited due to their computational inefficiency. Various approaches have been proposed to address this issue, primarily centered on minimizing the size of the matrix to be inverted. Nevertheless, the necessity of performing the inverse operation iteratively persists. In this work, we present a fast natural gradient descent (FNGD) method that only requires inversion during the first epoch. Specifically, we reveal that natural gradient descent (NGD) is essentially a weighted sum of per-sample gradients. Our novel approach further proposes to share these weighted coefficients across epochs without affecting empirical performance. Consequently, FNGD resembles the average sum in first-order methods, making the computational complexity of FNGD comparable to that of first-order methods. Extensive experiments on image classification and machine translation tasks demonstrate the efficiency of the proposed FNGD. For training ResNet-18 on CIFAR-100, FNGD achieves a speedup of 2.07$\times$ compared with KFAC. For training a Transformer on Multi30K, FNGD outperforms AdamW by 24 BLEU points while requiring almost the same training time.
[ "['Xinwei Ou' 'Ce Zhu' 'Xiaolin Huang' 'Yipeng Liu']" ]
null
null
2403.03483
null
null
http://arxiv.org/pdf/2403.03483v1
2024-03-06T05:52:13Z
2024-03-06T05:52:13Z
A Teacher-Free Graph Knowledge Distillation Framework with Dual Self-Distillation
Recent years have witnessed great success in handling graph-related tasks with Graph Neural Networks (GNNs). Despite their great academic success, Multi-Layer Perceptrons (MLPs) remain the primary workhorse for practical industrial applications. One reason for such an academic-industry gap is the neighborhood-fetching latency incurred by data dependency in GNNs. To reduce their gaps, Graph Knowledge Distillation (GKD) is proposed, usually based on a standard teacher-student architecture, to distill knowledge from a large teacher GNN into a lightweight student GNN or MLP. However, we found in this paper that neither teachers nor GNNs are necessary for graph knowledge distillation. We propose a Teacher-Free Graph Self-Distillation (TGS) framework that does not require any teacher model or GNNs during both training and inference. More importantly, the proposed TGS framework is purely based on MLPs, where structural information is only implicitly used to guide dual knowledge self-distillation between the target node and its neighborhood. As a result, TGS enjoys the benefits of graph topology awareness in training but is free from data dependency in inference. Extensive experiments have shown that the performance of vanilla MLPs can be greatly improved with dual self-distillation, e.g., TGS improves over vanilla MLPs by 15.54% on average and outperforms state-of-the-art GKD algorithms on six real-world datasets. In terms of inference speed, TGS infers 75X-89X faster than existing GNNs and 16X-25X faster than classical inference acceleration methods.
[ "['Lirong Wu' 'Haitao Lin' 'Zhangyang Gao' 'Guojiang Zhao' 'Stan Z. Li']" ]
null
null
2403.03507
null
null
http://arxiv.org/pdf/2403.03507v2
2024-06-02T21:24:12Z
2024-03-06T07:29:57Z
GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection
Training Large Language Models (LLMs) presents significant memory challenges, predominantly due to the growing size of weights and optimizer states. Common memory-reduction approaches, such as low-rank adaptation (LoRA), add a trainable low-rank matrix to the frozen pre-trained weight in each layer, reducing trainable parameters and optimizer states. However, such approaches typically underperform training with full-rank weights in both the pre-training and fine-tuning stages, since they limit the parameter search to a low-rank subspace and alter the training dynamics, and, further, may require a full-rank warm start. In this work, we propose Gradient Low-Rank Projection (GaLore), a training strategy that allows full-parameter learning but is more memory-efficient than common low-rank adaptation methods such as LoRA. Our approach reduces memory usage by up to 65.5% in optimizer states while maintaining both efficiency and performance for pre-training on LLaMA 1B and 7B architectures on the C4 dataset with up to 19.7B tokens, and for fine-tuning RoBERTa on GLUE tasks. Our 8-bit GaLore further reduces optimizer memory by up to 82.5% and total training memory by 63.3%, compared to a BF16 baseline. Notably, we demonstrate, for the first time, the feasibility of pre-training a 7B model on consumer GPUs with 24GB memory (e.g., NVIDIA RTX 4090) without model-parallel, checkpointing, or offloading strategies.
[ "['Jiawei Zhao' 'Zhenyu Zhang' 'Beidi Chen' 'Zhangyang Wang'\n 'Anima Anandkumar' 'Yuandong Tian']" ]
null
null
2403.03508
null
null
http://arxiv.org/pdf/2403.03508v1
2024-03-06T07:34:47Z
2024-03-06T07:34:47Z
Probing the Robustness of Time-series Forecasting Models with CounterfacTS
A common issue for machine learning models applied to time-series forecasting is the temporal evolution of the data distributions (i.e., concept drift). Because most of the training data does not reflect such changes, the models present poor performance on the new out-of-distribution scenarios and, therefore, the impact of such events cannot be reliably anticipated ahead of time. We present and publicly release CounterfacTS, a tool to probe the robustness of deep learning models in time-series forecasting tasks via counterfactuals. CounterfacTS has a user-friendly interface that allows the user to visualize, compare and quantify time series data and their forecasts, for a number of datasets and deep learning models. Furthermore, the user can apply various transformations to the time series and explore the resulting changes in the forecasts in an interpretable manner. Through example cases, we illustrate how CounterfacTS can be used to i) identify the main features characterizing and differentiating sets of time series, ii) assess how the model performance depends on these characteristics, and iii) guide transformations of the original time series to create counterfactuals with desired properties for training and increasing the forecasting performance in new regions of the data distribution. We discuss the importance of visualizing and considering the location of the data in a projected feature space to transform time series and create effective counterfactuals for training the models. Overall, CounterfacTS aids in creating counterfactuals to efficiently explore the impact of hypothetical scenarios not covered by the original data in time-series forecasting tasks.
[ "['Håkon Hanisch Kjærnli' 'Lluis Mas-Ribas' 'Aida Ashrafi' 'Gleb Sizov'\n 'Helge Langseth' 'Odd Erik Gundersen']" ]
null
null
2403.03522
null
null
http://arxiv.org/pdf/2403.03522v2
2024-03-13T09:50:40Z
2024-03-06T08:03:05Z
Non-verbal information in spontaneous speech -- towards a new framework of analysis
Non-verbal signals in speech are encoded by prosody and carry information that ranges from conversation action to attitude and emotion. Despite its importance, the principles that govern prosodic structure are not yet adequately understood. This paper offers an analytical schema and a technological proof-of-concept for the categorization of prosodic signals and their association with meaning. The schema interprets surface representations of multi-layered prosodic events. As a first step towards implementation, we present a classification process that disentangles prosodic phenomena of three orders. It relies on fine-tuning a pre-trained speech recognition model, enabling simultaneous multi-class/multi-label detection. It generalizes over a large variety of spontaneous data, performing on a par with, or superior to, human annotation. In addition to a standardized formalization of prosody, disentangling prosodic patterns can direct a theory of communication and speech organization. A welcome by-product is an interpretation of prosody that will enhance speech- and language-related technologies.
[ "['Tirza Biron' 'Moshe Barboy' 'Eran Ben-Artzy' 'Alona Golubchik'\n 'Yanir Marmor' 'Smadar Szekely' 'Yaron Winter' 'David Harel']" ]
null
null
2403.03526
null
null
http://arxiv.org/pdf/2403.03526v1
2024-03-06T08:05:53Z
2024-03-06T08:05:53Z
FingerNet: EEG Decoding of A Fine Motor Imagery with Finger-tapping Task Based on A Deep Neural Network
Brain-computer interface (BCI) technology facilitates communication between the human brain and computers, primarily utilizing electroencephalography (EEG) signals to discern human intentions. Although EEG-based BCI systems have been developed for individuals with paralysis, ongoing studies explore systems for speech imagery and motor imagery (MI). This study introduces FingerNet, a specialized network for fine MI classification, departing from conventional gross MI studies. The proposed FingerNet extracts spatial and temporal features from EEG signals, improving classification accuracy within the same hand. The experimental results demonstrated significantly higher accuracy in classifying five finger-tapping tasks, encompassing thumb, index, middle, ring, and little finger movements. FingerNet demonstrated dominant performance compared to the conventional baseline models EEGNet and DeepConvNet. The average accuracy for FingerNet was 0.3049, whereas EEGNet and DeepConvNet exhibited lower accuracies of 0.2196 and 0.2533, respectively. Statistical validation also demonstrates the predominance of FingerNet over the baseline networks. To address biased predictions, particularly for the thumb and index classes, we adapted weighted cross-entropy, a method conventionally employed to mitigate class imbalance. Future work on FingerNet involves optimizing the network structure, improving performance, and exploring applications beyond fine MI. Moreover, the weighted cross-entropy approach employed to address such biased predictions appears to have broader applicability and relevance across various domains involving multi-class classification tasks. We believe that effective execution of motor imagery can be achieved not only for fine MI, but also for local muscle MI.
[ "['Young-Min Go' 'Seong-Hyun Yu' 'Hyeong-Yeong Park' 'Minji Lee'\n 'Ji-Hoon Jeong']" ]
null
null
2403.03535
null
null
http://arxiv.org/pdf/2403.03535v1
2024-03-06T08:29:45Z
2024-03-06T08:29:45Z
Task Attribute Distance for Few-Shot Learning: Theoretical Analysis and Applications
Few-shot learning (FSL) aims to learn novel tasks with very few labeled samples by leveraging experience from related training tasks. In this paper, we try to understand FSL by delving into two key questions: (1) How can we quantify the relationship between training and novel tasks? (2) How does this relationship affect the adaptation difficulty on novel tasks for different models? To answer these questions, we introduce Task Attribute Distance (TAD), built upon attributes, as a metric to quantify task relatedness. Unlike many existing metrics, TAD is model-agnostic, making it applicable to different FSL models. We then utilize the TAD metric to establish a theoretical connection between task relatedness and task adaptation difficulty. By deriving the generalization error bound on a novel task, we discover how TAD measures the adaptation difficulty on novel tasks for FSL models. To validate our TAD metric and theoretical findings, we conduct experiments on three benchmarks. Our experimental results confirm that the TAD metric effectively quantifies task relatedness and reflects the adaptation difficulty on novel tasks for various FSL methods, even if some of them do not learn attributes explicitly or human-annotated attributes are not available. Finally, we present two applications of the proposed TAD metric: data augmentation and test-time intervention, which further verify its effectiveness and general applicability. The source code is available at https://github.com/hu-my/TaskAttributeDistance.
[ "['Minyang Hu' 'Hong Chang' 'Zong Guo' 'Bingpeng Ma' 'Shiguan Shan'\n 'Xilin Chen']" ]
null
null
2403.03539
null
null
http://arxiv.org/pdf/2403.03539v1
2024-03-06T08:35:29Z
2024-03-06T08:35:29Z
Gadolinium dose reduction for brain MRI using conditional deep learning
Recently, deep learning (DL)-based methods have been proposed for the computational reduction of gadolinium-based contrast agents (GBCAs) to mitigate adverse side effects while preserving diagnostic value. Currently, the two main challenges for these approaches are the accurate prediction of contrast enhancement and the synthesis of realistic images. In this work, we address both challenges by utilizing the contrast signal encoded in the subtraction images of pre-contrast and post-contrast image pairs. To avoid synthesizing any noise or artifacts and to focus solely on contrast signal extraction and enhancement from low-dose subtraction images, we train our DL model using noise-free standard-dose subtraction images as targets. As a result, our model predicts only the contrast enhancement signal, thereby enabling the synthesis of images beyond the standard dose. Furthermore, we adapt the embedding idea of recent diffusion-based models to condition our model on physical parameters affecting the contrast enhancement behavior. We demonstrate the effectiveness of our approach on synthetic and real datasets using various scanners, field strengths, and contrast agents.
[ "['Thomas Pinetz' 'Erich Kobler' 'Robert Haase' 'Julian A. Luetkens'\n 'Mathias Meetschen' 'Johannes Haubold' 'Cornelius Deuschl'\n 'Alexander Radbruch' 'Katerina Deike' 'Alexander Effland']" ]
null
null
2403.03542
null
null
http://arxiv.org/pdf/2403.03542v4
2024-05-07T01:57:00Z
2024-03-06T08:38:34Z
DPOT: Auto-Regressive Denoising Operator Transformer for Large-Scale PDE Pre-Training
Pre-training has been investigated to improve the efficiency and performance of training neural operators in data-scarce settings. However, it is largely in its infancy due to the inherent complexity and diversity, such as long trajectories, multiple scales and varying dimensions of partial differential equations (PDEs) data. In this paper, we present a new auto-regressive denoising pre-training strategy, which allows for more stable and efficient pre-training on PDE data and generalizes to various downstream tasks. Moreover, by designing a flexible and scalable model architecture based on Fourier attention, we can easily scale up the model for large-scale pre-training. We train our PDE foundation model with up to 0.5B parameters on 10+ PDE datasets with more than 100k trajectories. Extensive experiments show that we achieve SOTA on these benchmarks and validate the strong generalizability of our model to significantly enhance performance on diverse downstream PDE tasks like 3D data. Code is available at https://github.com/thu-ml/DPOT.
[ "['Zhongkai Hao' 'Chang Su' 'Songming Liu' 'Julius Berner' 'Chengyang Ying'\n 'Hang Su' 'Anima Anandkumar' 'Jian Song' 'Jun Zhu']" ]
null
null
2403.03545
null
null
http://arxiv.org/pdf/2403.03545v1
2024-03-06T08:47:31Z
2024-03-06T08:47:31Z
Diffusion-based Generative Prior for Low-Complexity MIMO Channel Estimation
This work proposes a novel channel estimator based on diffusion models (DMs), one of the currently top-rated generative models. Contrary to related works utilizing generative priors, a lightweight convolutional neural network (CNN) with positional embedding of the signal-to-noise ratio (SNR) information is designed by learning the channel distribution in the sparse angular domain. Combined with an estimation strategy that avoids stochastic resampling and truncates reverse diffusion steps that account for lower SNR than the given pilot observation, the resulting DM estimator has both low complexity and memory overhead. Numerical results exhibit better performance than state-of-the-art channel estimators utilizing generative priors.
[ "['Benedikt Fesl' 'Michael Baur' 'Florian Strasser' 'Michael Joham'\n 'Wolfgang Utschick']" ]
null
null
2403.03551
null
null
http://arxiv.org/pdf/2403.03551v1
2024-03-06T08:51:09Z
2024-03-06T08:51:09Z
Low-Dose CT Image Reconstruction by Fine-Tuning a UNet Pretrained for Gaussian Denoising for the Downstream Task of Image Enhancement
Computed Tomography (CT) is a widely used medical imaging modality, and as it is based on ionizing radiation, it is desirable to minimize the radiation dose. However, a reduced radiation dose comes with reduced image quality, and reconstruction from low-dose CT (LDCT) data is still a challenging task that is the subject of ongoing research. According to the LoDoPaB-CT benchmark, a benchmark for LDCT reconstruction, many state-of-the-art methods use pipelines involving UNet-type architectures. Specifically, the top-ranking method, ItNet, employs a three-stage process involving filtered backprojection (FBP), a UNet trained on CT data, and an iterative refinement step. In this paper, we propose a less complex two-stage method. The first stage also employs FBP, while the novelty lies in the training strategy for the second stage, characterized as the CT image enhancement stage. The crucial point of our approach is that the neural network is pretrained on a distinctly different pretraining task with non-CT data, namely Gaussian noise removal on a variety of natural grayscale images (photographs). We then fine-tune this network for the downstream task of CT image enhancement using pairs of LDCT images and corresponding normal-dose CT (NDCT) images. Despite being notably simpler than the state of the art, as the pretraining did not depend on domain-specific CT data and no further iterative refinement step was necessary, the proposed two-stage method achieves competitive results. The proposed method achieves a shared top ranking in the LoDoPaB-CT challenge and a first position with respect to the SSIM metric.
[ "['Tim Selig' 'Thomas März' 'Martin Storath' 'Andreas Weinmann']" ]
null
null
2403.03552
null
null
http://arxiv.org/pdf/2403.03552v1
2024-03-06T08:55:34Z
2024-03-06T08:55:34Z
Population-aware Online Mirror Descent for Mean-Field Games by Deep Reinforcement Learning
Mean Field Games (MFGs) have the ability to handle large-scale multi-agent systems, but learning Nash equilibria in MFGs remains a challenging task. In this paper, we propose a deep reinforcement learning (DRL) algorithm that achieves population-dependent Nash equilibrium without the need for averaging or sampling from history, inspired by Munchausen RL and Online Mirror Descent. Through the design of an additional inner-loop replay buffer, the agents can effectively learn to achieve Nash equilibrium from any distribution, mitigating catastrophic forgetting. The resulting policy can be applied to various initial distributions. Numerical experiments on four canonical examples demonstrate our algorithm has better convergence properties than SOTA algorithms, in particular a DRL version of Fictitious Play for population-dependent policies.
[ "['Zida Wu' 'Mathieu Lauriere' 'Samuel Jia Cong Chua' 'Matthieu Geist'\n 'Olivier Pietquin' 'Ankur Mehta']" ]
null
null
2403.03562
null
null
http://arxiv.org/pdf/2403.03562v1
2024-03-06T09:14:24Z
2024-03-06T09:14:24Z
Efficient Algorithms for Empirical Group Distributional Robust Optimization and Beyond
We investigate the empirical counterpart of group distributionally robust optimization (GDRO), which aims to minimize the maximal empirical risk across $m$ distinct groups. We formulate empirical GDRO as a two-level finite-sum convex-concave minimax optimization problem and develop a stochastic variance-reduced mirror prox algorithm. Unlike existing methods, we construct the stochastic gradient by a per-group sampling technique and perform variance reduction for all groups, which fully exploits the two-level finite-sum structure of empirical GDRO. Furthermore, we compute the snapshot and mirror snapshot point by a one-index-shifted weighted average, which distinguishes us from the naive ergodic average. Our algorithm also supports non-constant learning rates, which is different from existing literature. We establish convergence guarantees both in expectation and with high probability, demonstrating a complexity of $\mathcal{O}\left(\frac{m\sqrt{\bar{n}\ln{m}}}{\varepsilon}\right)$, where $\bar{n}$ is the average number of samples among the $m$ groups. Remarkably, our approach outperforms the state-of-the-art method by a factor of $\sqrt{m}$. Furthermore, we extend our methodology to deal with the empirical minimax excess risk optimization (MERO) problem and manage to give the expectation bound and the high-probability bound accordingly. The complexity of our empirical MERO algorithm matches that of empirical GDRO at $\mathcal{O}\left(\frac{m\sqrt{\bar{n}\ln{m}}}{\varepsilon}\right)$, significantly surpassing the bounds of existing methods.
[ "['Dingzhi Yu' 'Yunuo Cai' 'Wei Jiang' 'Lijun Zhang']" ]
null
null
2403.03563
null
null
http://arxiv.org/abs/2403.03563v1
2024-03-06T09:15:53Z
2024-03-06T09:15:53Z
Multimodal Anomaly Detection based on Deep Auto-Encoder for Object Slip Perception of Mobile Manipulation Robots
Object slip perception is essential for mobile manipulation robots to perform manipulation tasks reliably in the dynamic real-world. Traditional approaches to robot arms' slip perception use tactile or vision sensors. However, mobile robots still have to deal with noise in their sensor signals caused by the robot's movement in a changing environment. To solve this problem, we present an anomaly detection method that utilizes multisensory data based on a deep autoencoder model. The proposed framework integrates heterogeneous data streams collected from various robot sensors, including RGB and depth cameras, a microphone, and a force-torque sensor. The integrated data is used to train a deep autoencoder to construct latent representations of the multisensory data that indicate the normal status. Anomalies can then be identified by error scores measured by the difference between the trained encoder's latent values and the latent values of reconstructed input data. In order to evaluate the proposed framework, we conducted an experiment that mimics an object slip by a mobile service robot operating in a real-world environment with diverse household objects and different moving patterns. The experimental results verified that the proposed framework reliably detects anomalies in object slip situations despite various object types and robot behaviors, and visual and auditory noise in the environment.
[ "['Youngjae Yoo' 'Chung-Yeon Lee' 'Byoung-Tak Zhang']" ]
null
null
2403.03569
null
null
http://arxiv.org/pdf/2403.03569v1
2024-03-06T09:25:22Z
2024-03-06T09:25:22Z
On Transfer in Classification: How Well do Subsets of Classes Generalize?
In classification, it is common to observe that models trained on a given set of classes can generalize to previously unseen ones, suggesting an ability to learn beyond the initial task. This ability is often leveraged in the context of transfer learning, where a pretrained model can be used to process new classes, with or without fine-tuning. Surprisingly, only a few papers have looked at the theoretical roots behind this phenomenon. In this work, we are interested in laying the foundations of such a theoretical framework for transferability between sets of classes. Namely, we establish a partially ordered set of subsets of classes. This tool allows us to represent which subsets of classes can generalize to others. In a more practical setting, we explore the ability of our framework to predict which subset of classes can lead to the best performance when testing on all of them. We also explore few-shot learning, where transfer is the gold standard. Our work contributes to a better understanding of transfer mechanics and model generalization.
[ "['Raphael Baena' 'Lucas Drumetz' 'Vincent Gripon']" ]
null
null
2403.03581
null
null
http://arxiv.org/abs/2403.03581v1
2024-03-06T09:57:42Z
2024-03-06T09:57:42Z
Enhancing ASD detection accuracy: a combined approach of machine learning and deep learning models with natural language processing
Purpose: Our study explored the use of artificial intelligence (AI) to diagnose autism spectrum disorder (ASD). It focused on machine learning (ML) and deep learning (DL) to detect ASD from text inputs on social media, addressing challenges in traditional ASD diagnosis. Methods: We used natural language processing (NLP), ML, and DL models (including decision trees, XGB, KNN, RNN, LSTM, Bi-LSTM, BERT, and BERTweet) to analyze 404,627 tweets, classifying them based on ASD or non-ASD authors. A subset of 90,000 tweets was used for model training and testing. Results: Our AI models showed high accuracy, with an 88% success rate in identifying texts from individuals with ASD. Conclusion: The study demonstrates AI's potential in improving ASD diagnosis, especially in children, highlighting the importance of early detection.
[ "['Sergio Rubio-Martín' 'María Teresa García-Ordás'\n 'Martín Bayón-Gutiérrez' 'Natalia Prieto-Fernández'\n 'José Alberto Benítez-Andrades']" ]
null
null
2403.03585
null
null
http://arxiv.org/pdf/2403.03585v1
2024-03-06T10:01:35Z
2024-03-06T10:01:35Z
RouteExplainer: An Explanation Framework for Vehicle Routing Problem
The Vehicle Routing Problem (VRP) is a widely studied combinatorial optimization problem and has been applied to various practical problems. While explainability is significant for improving the reliability and interactivity of practical VRP applications, it remains unexplored for VRP. In this paper, we propose RouteExplainer, a post-hoc explanation framework that explains the influence of each edge in a generated route. Our framework realizes this by rethinking a route as a sequence of actions and extending counterfactual explanations based on the action influence model to VRP. To enhance the explanation, we additionally propose an edge classifier that infers the intentions of each edge, a loss function to train the edge classifier, and explanation-text generation by Large Language Models (LLMs). We quantitatively evaluate our edge classifier on four different VRPs. The results demonstrate its rapid computation while maintaining reasonable accuracy, thereby highlighting its potential for deployment in practical applications. Moreover, on the subject of a tourist route, we qualitatively evaluate the explanations generated by our framework. This evaluation not only validates our framework but also shows the synergy between explanation frameworks and LLMs. See https://ntt-dkiku.github.io/xai-vrp for our code, datasets, models, and demo.
[ "['Daisuke Kikuta' 'Hiroki Ikeuchi' 'Kengo Tajiri' 'Yuusuke Nakano']" ]
null
null
2403.03589
null
null
http://arxiv.org/pdf/2403.03589v2
2024-06-18T18:20:08Z
2024-03-06T10:24:44Z
Active Adaptive Experimental Design for Treatment Effect Estimation with Covariate Choices
This study designs an adaptive experiment for efficiently estimating average treatment effects (ATEs). In each round of our adaptive experiment, an experimenter sequentially samples an experimental unit, assigns a treatment, and observes the corresponding outcome immediately. At the end of the experiment, the experimenter estimates an ATE using the gathered samples. The objective is to estimate the ATE with a smaller asymptotic variance. Existing studies have designed experiments that adaptively optimize the propensity score (treatment-assignment probability). As a generalization of such an approach, we propose optimizing the covariate density as well as the propensity score. First, we derive the efficient covariate density and propensity score that minimize the semiparametric efficiency bound and find that optimizing both covariate density and propensity score minimizes the semiparametric efficiency bound more effectively than optimizing only the propensity score. Next, we design an adaptive experiment using the efficient covariate density and propensity score sequentially estimated during the experiment. Lastly, we propose an ATE estimator whose asymptotic variance aligns with the minimized semiparametric efficiency bound.
[ "['Masahiro Kato' 'Akihiro Oga' 'Wataru Komatsubara' 'Ryo Inokuchi']" ]
null
null
2403.03590
null
null
http://arxiv.org/pdf/2403.03590v1
2024-03-06T10:24:47Z
2024-03-06T10:24:47Z
DeepEclipse: How to Break White-Box DNN-Watermarking Schemes
Deep Learning (DL) models have become crucial in digital transformation, thus raising concerns about their intellectual property rights. Different watermarking techniques have been developed to protect Deep Neural Networks (DNNs) from IP infringement, creating a competitive field for DNN watermarking and removal methods. The predominant watermarking schemes use white-box techniques, which involve modifying weights by adding a unique signature to specific DNN layers. On the other hand, existing attacks on white-box watermarking usually require knowledge of the specific deployed watermarking scheme or access to the underlying data for further training and fine-tuning. We propose DeepEclipse, a novel and unified framework designed to remove white-box watermarks. We present obfuscation techniques that significantly differ from the existing white-box watermarking removal schemes. DeepEclipse can evade watermark detection without prior knowledge of the underlying watermarking scheme, additional data, or training and fine-tuning. Our evaluation reveals that DeepEclipse excels in breaking multiple white-box watermarking schemes, reducing watermark detection to random guessing while maintaining model accuracy similar to the original. Our framework showcases a promising solution to the ongoing challenges of DNN watermark protection and removal.
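To make the setting concrete, here is a hedged sketch of one generic functionality-preserving obfuscation, not DeepEclipse's specific construction (which also handles nonlinearities and other layer types): conjugating two consecutive linear maps by an invertible matrix scrambles the per-layer weight values, where a white-box signature may reside, while leaving the composed function unchanged.

```python
# Generic weight obfuscation for a purely linear two-layer stack.
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.normal(size=(8, 4))                    # first linear layer
W2 = rng.normal(size=(3, 8))                    # second layer (no activation between, for simplicity)
P = rng.normal(size=(8, 8))                     # random matrix, invertible with high probability
W1_obf, W2_obf = P @ W1, W2 @ np.linalg.inv(P)  # scrambled weights
x = rng.normal(size=4)
assert np.allclose(W2 @ (W1 @ x), W2_obf @ (W1_obf @ x))  # same function, different weights
```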
[ "['Alessandro Pegoraro' 'Carlotta Segna' 'Kavita Kumari'\n 'Ahmad-Reza Sadeghi']" ]
null
null
2403.03599
null
null
http://arxiv.org/pdf/2403.03599v1
2024-03-06T10:36:56Z
2024-03-06T10:36:56Z
Learning Invariant Representations of Graph Neural Networks via Cluster Generalization
Graph neural networks (GNNs) have become increasingly popular in modeling graph-structured data due to their ability to learn node representations by aggregating local structure information. However, it is widely acknowledged that the test graph structure may differ from the training graph structure, resulting in a structure shift. In this paper, we experimentally find that the performance of GNNs drops significantly when the structure shift happens, suggesting that the learned models may be biased towards specific structure patterns. To address this challenge, we propose the Cluster Information Transfer (CIT) mechanism (Code available at https://github.com/BUPT-GAMMA/CITGNN), which can learn invariant representations for GNNs, thereby improving their generalization ability to various and unknown test graphs with structure shift. The CIT mechanism achieves this by combining different cluster information with the nodes while preserving their cluster-independent information. By generating nodes across different clusters, the mechanism significantly enhances the diversity of the nodes and helps GNNs learn the invariant representations. We provide a theoretical analysis of the CIT mechanism, showing that the impact of changing clusters during structure shift can be mitigated after transfer. Additionally, the proposed mechanism is a plug-in that can be easily used to improve existing GNNs. We comprehensively evaluate our proposed method on three typical structure shift scenarios, demonstrating its effectiveness in enhancing GNNs' performance.
[ "['Donglin Xia' 'Xiao Wang' 'Nian Liu' 'Chuan Shi']" ]
null
null
2403.03606
null
null
http://arxiv.org/pdf/2403.03606v1
2024-03-06T10:53:12Z
2024-03-06T10:53:12Z
Enhancing Price Prediction in Cryptocurrency Using Transformer Neural Network and Technical Indicators
This study presents an innovative approach for predicting cryptocurrency time series, specifically focusing on Bitcoin, Ethereum, and Litecoin. The methodology integrates technical indicators, a Performer neural network, and a BiLSTM (Bidirectional Long Short-Term Memory) to capture temporal dynamics and extract significant features from raw cryptocurrency data. The application of technical indicators facilitates the extraction of intricate patterns such as momentum, volatility, and trends. The Performer neural network, employing Fast Attention Via positive Orthogonal Random features (FAVOR+), has demonstrated superior computational efficiency and scalability compared to the traditional multi-head attention mechanism in Transformer models. Additionally, the integration of BiLSTM in the feedforward network enhances the model's capacity to capture temporal dynamics in the data, processing it in both forward and backward directions. This is particularly advantageous for time series data, where past and future data points can influence the current state. The proposed method has been applied to the hourly and daily timeframes of the major cryptocurrencies, and its performance has been benchmarked against other methods documented in the literature. The results underscore the potential of the proposed method to outperform existing models, marking a significant progression in the field of cryptocurrency price prediction.
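For illustration, a small sketch of the kind of technical indicators such a pipeline might feed to the Performer/BiLSTM model; the 14-period window, synthetic prices, and column names are assumptions, not the paper's exact feature set.

```python
# Two common technical indicators on a synthetic price series.
import numpy as np
import pandas as pd

prices = pd.Series(np.cumsum(np.random.default_rng(0).normal(size=200)) + 100.0)
sma_14 = prices.rolling(14).mean()              # simple moving average (trend)

delta = prices.diff()
gain = delta.clip(lower=0).rolling(14).mean()
loss = (-delta.clip(upper=0)).rolling(14).mean()
rsi_14 = 100 - 100 / (1 + gain / loss)          # relative strength index (momentum)

features = pd.DataFrame({"price": prices, "sma14": sma_14, "rsi14": rsi_14})
print(features.tail())
```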
[ "['Mohammad Ali Labbaf Khaniki' 'Mohammad Manthouri']" ]
null
null
2403.03608
null
null
http://arxiv.org/pdf/2403.03608v1
2024-03-06T10:55:50Z
2024-03-06T10:55:50Z
GSNeRF: Generalizable Semantic Neural Radiance Fields with Enhanced 3D Scene Understanding
Utilizing multi-view inputs to synthesize novel-view images, Neural Radiance Fields (NeRF) have emerged as a popular research topic in 3D vision. In this work, we introduce a Generalizable Semantic Neural Radiance Field (GSNeRF), which uniquely takes image semantics into the synthesis process so that both novel view images and the associated semantic maps can be produced for unseen scenes. Our GSNeRF is composed of two stages: Semantic Geo-Reasoning and Depth-Guided Visual rendering. The former is able to observe multi-view image inputs to extract semantic and geometry features from a scene. Guided by the resulting image geometry information, the latter performs both image and semantic rendering with improved performance. Our experiments not only confirm that GSNeRF performs favorably against prior works on both novel-view image and semantic segmentation synthesis, but also verify the effectiveness of our sampling strategy for visual rendering.
[ "['Zi-Ting Chou' 'Sheng-Yu Huang' 'I-Jieh Liu' 'Yu-Chiang Frank Wang']" ]
null
null
2403.03631
null
null
http://arxiv.org/pdf/2403.03631v1
2024-03-06T11:38:08Z
2024-03-06T11:38:08Z
Tackling Missing Values in Probabilistic Wind Power Forecasting: A Generative Approach
Machine learning techniques have been successfully used in probabilistic wind power forecasting. However, the issue of missing values within datasets, due to sensor failure for instance, has long been overlooked. Although it is natural to consider addressing this issue by imputing missing values before model estimation and forecasting, we suggest treating missing values and forecasting targets without distinction and predicting all unknown values simultaneously based on observations. In this paper, we offer an efficient probabilistic forecasting approach by estimating the joint distribution of features and targets based on a generative model. It is free of preprocessing and thus avoids introducing potential errors. Compared with the traditional "impute, then predict" pipeline, the proposed approach achieves better performance in terms of continuous ranked probability score.
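A minimal sketch of the "treat missing features and targets alike" idea, with a joint Gaussian standing in for the paper's generative model: fit the joint distribution once, then condition on whatever happens to be observed and predict all unknown entries in one pass.

```python
# Joint prediction of missing features and targets via Gaussian conditioning.
import numpy as np

rng = np.random.default_rng(0)
cov_true = np.array([[1.0, 0.8, 0.6],
                     [0.8, 1.0, 0.7],
                     [0.6, 0.7, 1.0]])
train = rng.multivariate_normal(np.zeros(3), cov_true, size=5000)  # complete history
mu, cov = train.mean(axis=0), np.cov(train.T)

obs_idx, unk_idx = [0], [1, 2]            # feature 0 observed; feature 1 missing, target unknown
x_obs = np.array([1.5])
cov_uo = cov[np.ix_(unk_idx, obs_idx)]
cov_oo = cov[np.ix_(obs_idx, obs_idx)]
cond_mean = mu[unk_idx] + cov_uo @ np.linalg.solve(cov_oo, x_obs - mu[obs_idx])
print(cond_mean)                          # all unknowns predicted simultaneously
```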
[ "['Honglin Wen' 'Pierre Pinson' 'Jie Gu' 'Zhijian Jin']" ]
null
null
2403.03636
null
null
http://arxiv.org/pdf/2403.03636v1
2024-03-06T11:48:08Z
2024-03-06T11:48:08Z
SheetAgent: A Generalist Agent for Spreadsheet Reasoning and Manipulation via Large Language Models
Spreadsheet manipulation is ubiquitous in daily work and significantly improves working efficiency. Large language models (LLMs) have recently been applied to automatic spreadsheet manipulation but have not yet been investigated in complicated and realistic tasks where reasoning challenges exist (e.g., long-horizon manipulation with multi-step reasoning and ambiguous requirements). To bridge the gap with real-world requirements, we introduce $\textbf{SheetRM}$, a benchmark featuring long-horizon and multi-category tasks with reasoning-dependent manipulation caused by real-life challenges. To mitigate the above challenges, we further propose $\textbf{SheetAgent}$, a novel autonomous agent that utilizes the power of LLMs. SheetAgent consists of three collaborative modules: $\textit{Planner}$, $\textit{Informer}$, and $\textit{Retriever}$, achieving both advanced reasoning and accurate manipulation over spreadsheets without human interaction through iterative task reasoning and reflection. Extensive experiments demonstrate that SheetAgent delivers 20-30% pass rate improvements on multiple benchmarks over baselines, achieving enhanced precision in spreadsheet manipulation and demonstrating superior table reasoning abilities. More details and visualizations are available at https://sheetagent.github.io.
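A loose sketch of how the three collaborative modules might fit together in one step; `llm`, `retrieve_examples`, and `execute` are hypothetical interfaces, not SheetAgent's actual API.

```python
# Hypothetical one-step Planner/Informer/Retriever loop.
def sheet_agent_step(task, sheet_state, llm, retrieve_examples, execute):
    examples = retrieve_examples(task)                        # Retriever: fetch relevant exemplars
    schema = llm(f"Summarize the columns relevant to: {task}\n{sheet_state}")  # Informer
    plan = llm(f"Propose the next manipulation for: {task}\n"
               f"Schema: {schema}\nExamples: {examples}")     # Planner
    return execute(plan, sheet_state)                         # apply; errors feed the next reflection round
```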
[ "['Yibin Chen' 'Yifu Yuan' 'Zeyu Zhang' 'Yan Zheng' 'Jinyi Liu' 'Fei Ni'\n 'Jianye Hao']" ]
null
null
2403.03642
null
null
http://arxiv.org/pdf/2403.03642v1
2024-03-06T12:02:07Z
2024-03-06T12:02:07Z
Generative Active Learning with Variational Autoencoder for Radiology Data Generation in Veterinary Medicine
Recently, with increasing interest in pet healthcare, the demand for computer-aided diagnosis (CAD) systems in veterinary medicine has increased. The development of veterinary CAD has stagnated due to a lack of sufficient radiology data. To overcome this challenge, we propose a generative active learning framework based on a variational autoencoder. This approach aims to alleviate the scarcity of reliable data for CAD systems in veterinary medicine. This study utilizes datasets comprising cardiomegaly radiograph data. After removing annotations and standardizing images, we employed a framework for data augmentation, which consists of a data generation phase and a query phase for filtering the generated data. The experimental results revealed that as the data generated through this framework was added to the training data of the generative model, the Fréchet inception distance consistently decreased from 84.14 to 50.75 on the radiographs. Subsequently, when the generated data were incorporated into the training of the classification model, the false-positive measure of the confusion matrix also improved from 0.16 to 0.66 on the radiographs. The proposed framework has the potential to address the challenges of data scarcity in medical CAD, contributing to its advancement.
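Schematically, the two-phase augmentation loop described above might look as follows; `vae.sample` and `classifier.confidence` are hypothetical stand-ins for the trained VAE and the query criterion.

```python
# Hypothetical generate-then-filter augmentation loop.
def augment_training_set(vae, classifier, train_set, n_candidates=256, keep=32):
    candidates = [vae.sample() for _ in range(n_candidates)]   # data generation phase
    ranked = sorted(candidates, key=classifier.confidence)     # query phase: rank candidates
    train_set.extend(ranked[:keep])                            # keep the least-confident images
    return train_set
```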
[ "['In-Gyu Lee' 'Jun-Young Oh' 'Hee-Jung Yu' 'Jae-Hwan Kim' 'Ki-Dong Eom'\n 'Ji-Hoon Jeong']" ]
null
null
2403.03643
null
null
http://arxiv.org/pdf/2403.03643v2
2024-03-07T02:05:28Z
2024-03-06T12:05:56Z
A Survey on Applications of Reinforcement Learning in Spatial Resource Allocation
The challenge of spatial resource allocation is pervasive across various domains such as transportation, industry, and daily life. As the scale of real-world problems continues to expand and demands for real-time solutions increase, traditional algorithms face significant computational pressure, struggling to achieve optimal efficiency and real-time capability. In recent years, with the escalating computational power of computers, the remarkable achievements of reinforcement learning in domains like Go and robotics have demonstrated its robust learning and sequential decision-making capabilities. Given these advancements, there has been a surge in novel methods employing reinforcement learning to tackle spatial resource allocation problems. These methods exhibit advantages such as rapid solution convergence and strong model generalization, offering a new perspective on spatial resource allocation problems. This paper therefore summarizes and reviews recent theoretical and applied research that utilizes reinforcement learning to address spatial resource allocation, providing a comprehensive overview of fundamental principles, related methodologies, and applications. It also highlights several unresolved issues in this direction that urgently require future attention.
[ "['Di Zhang' 'Moyang Wang' 'Joseph Mango' 'Xiang Li' 'Xianrui Xu']" ]
null
null
2403.03659
null
null
http://arxiv.org/pdf/2403.03659v1
2024-03-06T12:29:13Z
2024-03-06T12:29:13Z
Robust Graph Structure Learning under Heterophily
A graph is a fundamental mathematical structure for characterizing relations between different objects and has been widely used in various learning tasks. Most methods implicitly assume a given graph to be accurate and complete. However, real data are inevitably noisy and sparse, which leads to inferior results. Despite the remarkable success of recent graph representation learning methods, they inherently presume that the graph is homophilic and largely overlook heterophily, where most connected nodes are from different classes. In this regard, we propose a novel robust graph structure learning method to obtain a high-quality graph from heterophilic data for downstream tasks. We first apply a high-pass filter to make each node more distinctive from its neighbors by encoding structure information into the node features. Then, we learn a robust graph with an adaptive norm characterizing different levels of noise. Afterwards, we propose a novel regularizer to further refine the graph structure. Clustering and semi-supervised classification experiments on heterophilic graphs verify the effectiveness of our method.
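A minimal sketch of the high-pass filtering step on a toy graph, assuming the common symmetric normalization; the paper's full method additionally learns a robust graph with an adaptive norm and a refinement regularizer.

```python
# High-pass filter: each node's features minus its normalized neighborhood average.
import numpy as np

A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]], dtype=float)        # toy adjacency matrix
X = np.array([[1.0, 0.0],
              [0.9, 0.1],
              [0.2, 0.8]])                    # toy node features
d = A.sum(axis=1)
A_norm = A / np.sqrt(np.outer(d, d))          # D^{-1/2} A D^{-1/2}
H = (np.eye(3) - A_norm) @ X                  # nodes become more distinctive from neighbors
print(H)
```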
[ "['Xuanting Xie' 'Zhao Kang' 'Wenyu Chen']" ]
null
null
2403.03664
null
null
http://arxiv.org/pdf/2403.03664v1
2024-03-06T12:34:50Z
2024-03-06T12:34:50Z
Environmental Insights: Democratizing Access to Ambient Air Pollution Data and Predictive Analytics with an Open-Source Python Package
Ambient air pollution is a pervasive issue with wide-ranging effects on human health, ecosystem vitality, and economic structures. Utilizing data on ambient air pollution concentrations, researchers can perform comprehensive analyses to uncover the multifaceted impacts of air pollution across society. To this end, we introduce Environmental Insights, an open-source Python package designed to democratize access to air pollution concentration data. This tool enables users to easily retrieve historical air pollution data and employ a Machine Learning model for forecasting potential future conditions. Moreover, Environmental Insights includes a suite of tools aimed at facilitating the dissemination of analytical findings and enhancing user engagement through dynamic visualizations. This comprehensive approach ensures that the package caters to the diverse needs of individuals looking to explore and understand air pollution trends and their implications.
[ "['Liam J Berrisford' 'Ronaldo Menezes']" ]
null
null
2403.03666
null
null
http://arxiv.org/pdf/2403.03666v1
2024-03-06T12:37:49Z
2024-03-06T12:37:49Z
Provable Filter for Real-world Graph Clustering
Graph clustering, an important unsupervised problem, has been shown to be more resistant to advances in Graph Neural Networks (GNNs). In addition, almost all clustering methods focus on homophilic graphs and ignore heterophily. This significantly limits their applicability in practice, since real-world graphs exhibit structural disparity and cannot simply be classified as homophilic or heterophilic. Thus, a principled way to handle practical graphs is urgently needed. To fill this gap, we provide a novel solution with theoretical support. Interestingly, we find that most homophilic and heterophilic edges can be correctly identified on the basis of neighbor information. Motivated by this finding, we construct two graphs that are highly homophilic and heterophilic, respectively. They are used to build low-pass and high-pass filters to capture holistic information. Important features are further enhanced by a squeeze-and-excitation block. We validate our approach through extensive experiments on both homophilic and heterophilic graphs. Empirical results demonstrate the superiority of our method compared to state-of-the-art clustering methods.
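A toy sketch of the complementary-filter idea on a path graph: a low-pass filter smooths features over (homophilic) edges while a high-pass filter sharpens differences over (heterophilic) ones. The filter forms below are common choices, assumed here for illustration rather than taken from the paper.

```python
# Complementary low-pass and high-pass graph filters.
import numpy as np

A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)        # toy path graph
X = np.array([[1.0], [0.5], [0.0]])
d = A.sum(axis=1)
A_norm = A / np.sqrt(np.outer(d, d))
low = (np.eye(3) + A_norm) / 2 @ X            # smooths over edges (homophily)
high = (np.eye(3) - A_norm) / 2 @ X           # sharpens differences (heterophily)
print(low.ravel(), high.ravel())
```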
[ "['Xuanting Xie' 'Erlin Pan' 'Zhao Kang' 'Wenyu Chen' 'Bingheng Li']" ]
null
null
2403.03669
null
null
http://arxiv.org/pdf/2403.03669v2
2024-03-07T09:20:35Z
2024-03-06T12:43:53Z
Spectral Algorithms on Manifolds through Diffusion
The existing research on spectral algorithms, applied within a Reproducing Kernel Hilbert Space (RKHS), has primarily focused on general kernel functions, often neglecting the inherent structure of the input feature space. Our paper introduces a new perspective, asserting that input data are situated within a low-dimensional manifold embedded in a higher-dimensional Euclidean space. We study the convergence performance of spectral algorithms in the RKHSs, specifically those generated by the heat kernels, known as diffusion spaces. Incorporating the manifold structure of the input, we employ integral operator techniques to derive tight convergence upper bounds concerning generalized norms, which indicates that the estimators converge to the target function in a strong sense, entailing the simultaneous convergence of the function itself and its derivatives. These bounds offer two significant advantages: firstly, they are exclusively contingent on the intrinsic dimension of the input manifolds, thereby providing a more focused analysis. Secondly, they enable the efficient derivation of convergence rates for derivatives of any k-th order, all of which can be accomplished within the ambit of the same spectral algorithms. Furthermore, we establish minimax lower bounds to demonstrate the asymptotic optimality of these conclusions in specific contexts. Our study confirms that the spectral algorithms are practically significant in the broader context of high-dimensional approximation.
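As a reminder of the general object under study (standard material, not the paper's specific diffusion-space construction), a spectral algorithm applies a filter function to the empirical integral operator:

```latex
% With $\hat{T} = \frac{1}{n}\sum_{i=1}^{n} K_{x_i} \otimes K_{x_i}$ and
% $\hat{g} = \frac{1}{n}\sum_{i=1}^{n} y_i K_{x_i}$ in the RKHS, a spectral
% algorithm with filter $g_\lambda$ outputs
\[
\hat{f}_\lambda = g_\lambda(\hat{T})\,\hat{g},
\qquad \text{e.g. } g_\lambda(t) = \frac{1}{t+\lambda}
\ \text{(kernel ridge regression)},
\]
% with other choices of $g_\lambda$ recovering gradient flow, spectral
% cut-off, and iterated Tikhonov regularization.
```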
[ "['Weichun Xia' 'Lei Shi']" ]
null
null
2403.03670
null
null
http://arxiv.org/pdf/2403.03670v1
2024-03-06T12:47:14Z
2024-03-06T12:47:14Z
CDC: A Simple Framework for Complex Data Clustering
In today's data-driven digital era, both the amount and the complexity of collected data, such as multi-view, non-Euclidean, and multi-relational data, are growing exponentially or even faster. Clustering, which extracts valid knowledge from data without supervision, is extremely useful in practice. However, existing methods are independently developed to handle one particular challenge at the expense of the others. In this work, we propose a simple but effective framework for complex data clustering (CDC) that can efficiently process different types of data with linear complexity. We first utilize graph filtering to fuse geometric structure and attribute information. We then reduce the complexity with high-quality anchors that are adaptively learned via a novel similarity-preserving regularizer. We illustrate the cluster-ability of our proposed method theoretically and experimentally. In particular, we deploy CDC to graph data of size 111M.
[ "['Zhao Kang' 'Xuanting Xie' 'Bingheng Li' 'Erlin Pan']" ]
null
null
2403.03672
null
null
http://arxiv.org/pdf/2403.03672v2
2024-03-20T08:50:24Z
2024-03-06T12:49:08Z
Learning Adversarial MDPs with Stochastic Hard Constraints
We study online learning problems in constrained Markov decision processes (CMDPs) with adversarial losses and stochastic hard constraints. We consider two different scenarios. In the first one, we address general CMDPs, where we design an algorithm that attains sublinear regret and cumulative positive constraint violation. In the second scenario, under the mild assumption that a policy strictly satisfying the constraints exists and is known to the learner, we design an algorithm that achieves sublinear regret while ensuring that the constraints are satisfied at every episode with high probability. To the best of our knowledge, our work is the first to study CMDPs involving both adversarial losses and hard constraints. Indeed, previous works either focus on much weaker soft constraints--allowing for positive violation to cancel out negative ones--or are restricted to stochastic losses. Thus, our algorithms can deal with general non-stationary environments subject to requirements much stricter than those manageable with state-of-the-art algorithms. This enables their adoption in a much wider range of real-world applications, ranging from autonomous driving to online advertising and recommender systems.
[ "['Francesco Emanuele Stradi' 'Matteo Castiglioni' 'Alberto Marchesi'\n 'Nicola Gatti']" ]
null
null
2403.03676
null
null
http://arxiv.org/pdf/2403.03676v1
2024-03-06T12:57:48Z
2024-03-06T12:57:48Z
Simplified PCNet with Robustness
Graph Neural Networks (GNNs) have garnered significant attention for their success in learning the representation of homophilic or heterophilic graphs. However, they cannot generalize well to real-world graphs with different levels of homophily. In response, the Poisson-Charlier Network (PCNet) \cite{li2024pc}, a previous work, allows graph representation to be learned from heterophily to homophily. Although PCNet alleviates the heterophily issue, there remain some challenges in further improving its efficacy and efficiency. In this paper, we simplify PCNet and enhance its robustness. We first extend the filter order to continuous values and reduce its parameters. Two variants with adaptive neighborhood sizes are implemented. Theoretical analysis shows our model's robustness to graph structure perturbations or adversarial attacks. We validate our approach through semi-supervised learning tasks on various datasets representing both homophilic and heterophilic graphs.
[ "['Bingheng Li' 'Xuanting Xie' 'Haoxiang Lei' 'Ruiyi Fang' 'Zhao Kang']" ]
null
null
2403.03695
null
null
http://arxiv.org/pdf/2403.03695v1
2024-03-06T13:23:55Z
2024-03-06T13:23:55Z
Spectral Phase Transition and Optimal PCA in Block-Structured Spiked models
We discuss the inhomogeneous spiked Wigner model, a theoretical framework recently introduced to study structured noise in various learning scenarios, through the prism of random matrix theory, with a specific focus on its spectral properties. Our primary objective is to find an optimal spectral method and to extend the celebrated BBP phase transition criterion \cite{BBP} -- well-known in the homogeneous case -- to our inhomogeneous, block-structured Wigner model. We provide a thorough rigorous analysis of a transformed matrix and show that the transitions for the appearance of 1) an outlier outside the bulk of the limiting spectral distribution and 2) a positive overlap between the associated eigenvector and the signal occur precisely at the optimal threshold, making the proposed spectral method optimal within the class of iterative methods for the inhomogeneous Wigner problem.
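For reference, the homogeneous facts being generalized (standard BBP-type results, stated with the bulk edge normalized to 2): for $M = \theta vv^\top + W/\sqrt{N}$ with Wigner noise $W$ and unit spike $v$,

```latex
\[
\lambda_{\max}(M) \xrightarrow[N\to\infty]{}
\begin{cases}
\theta + \dfrac{1}{\theta}, & \theta > 1,\\[4pt]
2, & \theta \le 1,
\end{cases}
\qquad
\langle v, \hat{v}_{\max}\rangle^{2} \xrightarrow[N\to\infty]{}
\begin{cases}
1 - \dfrac{1}{\theta^{2}}, & \theta > 1,\\[4pt]
0, & \theta \le 1.
\end{cases}
\]
```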
[ "['Pierre Mergny' 'Justin Ko' 'Florent Krzakala']" ]
null
null
2403.03698
null
null
http://arxiv.org/pdf/2403.03698v1
2024-03-06T13:27:34Z
2024-03-06T13:27:34Z
Towards Controllable Time Series Generation
Time Series Generation (TSG) has emerged as a pivotal technique in synthesizing data that accurately mirrors real-world time series, becoming indispensable in numerous applications. Despite significant advancements in TSG, its efficacy frequently hinges on having large training datasets. This dependency presents a substantial challenge in data-scarce scenarios, especially when dealing with rare or unique conditions. To confront these challenges, we explore a new problem of Controllable Time Series Generation (CTSG), aiming to produce synthetic time series that can adapt to various external conditions, thereby tackling the data scarcity issue. In this paper, we propose \textbf{C}ontrollable \textbf{T}ime \textbf{S}eries (\textsf{CTS}), an innovative VAE-agnostic framework tailored for CTSG. A key feature of \textsf{CTS} is that it decouples the mapping process from standard VAE training, enabling precise learning of a complex interplay between latent features and external conditions. Moreover, we develop a comprehensive evaluation scheme for CTSG. Extensive experiments across three real-world time series datasets showcase \textsf{CTS}'s exceptional capabilities in generating high-quality, controllable outputs. This underscores its adeptness in seamlessly integrating latent features with external conditions. Extending \textsf{CTS} to the image domain highlights its remarkable potential for explainability and further reinforces its versatility across different modalities.
[ "['Yifan Bao' 'Yihao Ang' 'Qiang Huang' 'Anthony K. H. Tung'\n 'Zhiyong Huang']" ]
null
null
2403.03699
null
null
http://arxiv.org/pdf/2403.03699v1
2024-03-06T13:29:00Z
2024-03-06T13:29:00Z
Model Parallelism on Distributed Infrastructure: A Literature Review from Theory to LLM Case-Studies
Neural networks have become a cornerstone of machine learning. As these models grow ever more complex, so does the underlying hardware and software infrastructure for training and deployment. In this survey we answer three research questions: "What types of model parallelism exist?", "What are the challenges of model parallelism?", and "What is a modern use-case of model parallelism?" We answer the first question by looking at how neural networks can be parallelised and expressing them as operator graphs while exploring the available dimensions. The dimensions along which neural networks can be parallelised are intra-operator and inter-operator. We answer the second question by collecting and listing both the implementation challenges for each type of parallelism and the problem of optimally partitioning the operator graph. We answer the last question by collecting and listing how parallelism is applied in modern multi-billion parameter transformer networks, to the extent that this is possible with the limited information shared about these networks.
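A toy illustration of the intra-operator dimension on a single matmul: the weight is sharded column-wise across two "devices" and the shard outputs concatenated, whereas inter-operator parallelism would instead place whole operators on different devices.

```python
# Column-sharded (intra-operator / tensor-parallel) matrix multiply.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                   # activations
W = rng.normal(size=(8, 6))                   # one operator's weight
W0, W1 = W[:, :3], W[:, 3:]                   # shard columns across two devices
Y0, Y1 = X @ W0, X @ W1                       # each device computes independently
assert np.allclose(np.concatenate([Y0, Y1], axis=1), X @ W)
```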
[ "['Felix Brakel' 'Uraz Odyurt' 'Ana-Lucia Varbanescu']" ]
null
null
2403.03702
null
null
http://arxiv.org/pdf/2403.03702v1
2024-03-06T13:36:31Z
2024-03-06T13:36:31Z
Online model error correction with neural networks: application to the Integrated Forecasting System
In recent years, there has been significant progress in the development of fully data-driven global numerical weather prediction models. These machine learning weather prediction models have their strengths, notably accuracy and low computational requirements, but also their weaknesses: they struggle to represent fundamental dynamical balances, and they are far from being suitable for data assimilation experiments. Hybrid modelling emerges as a promising approach to address these limitations. Hybrid models integrate a physics-based core component with a statistical component, typically a neural network, to enhance prediction capabilities. In this article, we propose to develop a model error correction for the operational Integrated Forecasting System (IFS) of the European Centre for Medium-Range Weather Forecasts using a neural network. The neural network is initially pre-trained offline using a large dataset of operational analyses and analysis increments. Subsequently, the trained network is integrated into the IFS within the Object-Oriented Prediction System (OOPS) so that it can be used in data assimilation and forecast experiments. It is then further trained online using a recently developed variant of weak-constraint 4D-Var. The results show that the pre-trained neural network already provides a reliable model error correction, which translates into reduced forecast errors in many conditions, and that online training further improves the accuracy of the hybrid model.
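Schematically, the hybrid forecast amounts to the loop below; `physics_step` and `error_nn` are hypothetical stand-ins for the IFS core and the trained correction network, and the additive coupling is one simple choice among several.

```python
# Hybrid rollout: physics-based step plus learned model error correction.
def hybrid_forecast(x0, physics_step, error_nn, n_steps):
    x, trajectory = x0, [x0]
    for _ in range(n_steps):
        x = physics_step(x) + error_nn(x)   # correction applied at every step
        trajectory.append(x)
    return trajectory
```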
[ "['Alban Farchi' 'Marcin Chrust' 'Marc Bocquet' 'Massimo Bonavita']" ]
null
null
2403.03726
null
null
http://arxiv.org/pdf/2403.03726v1
2024-03-06T14:15:20Z
2024-03-06T14:15:20Z
Diffusion on language model embeddings for protein sequence generation
Protein design requires a deep understanding of the inherent complexities of the protein universe. While many efforts lean towards conditional generation or focus on specific families of proteins, the foundational task of unconditional generation remains underexplored and undervalued. Here, we explore this pivotal domain, introducing DiMA, a model that leverages continuous diffusion on embeddings derived from the protein language model, ESM-2, to generate amino acid sequences. DiMA surpasses leading solutions, including autoregressive transformer-based and discrete diffusion models, and we quantitatively illustrate the impact of the design choices that lead to its superior performance. We extensively evaluate the quality, diversity, distribution similarity, and biological relevance of the generated sequences using multiple metrics across various modalities. Our approach consistently produces novel, diverse protein sequences that accurately reflect the inherent structural and functional diversity of the protein space. This work advances the field of protein design and sets the stage for conditional models by providing a robust framework for scalable and high-quality protein sequence generation.
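A minimal sketch of the forward (noising) half of continuous diffusion on embeddings, with an illustrative cosine schedule and toy shapes standing in for ESM-2 representations; the learned denoiser that inverts this step is omitted.

```python
# Forward diffusion on a toy per-residue embedding.
import numpy as np

rng = np.random.default_rng(0)
z0 = rng.normal(size=(128, 320))             # toy embedding: 128 residues, dim 320
t = 0.3                                      # diffusion time in [0, 1]
alpha_bar = np.cos(0.5 * np.pi * t) ** 2     # cosine schedule (a common choice)
noise = rng.normal(size=z0.shape)
zt = np.sqrt(alpha_bar) * z0 + np.sqrt(1.0 - alpha_bar) * noise  # input to the denoiser
```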
[ "['Viacheslav Meshchaninov' 'Pavel Strashnov' 'Andrey Shevtsov'\n 'Fedor Nikolaev' 'Nikita Ivanisenko' 'Olga Kardymon' 'Dmitry Vetrov']" ]
null
null
2403.03728
null
null
http://arxiv.org/pdf/2403.03728v1
2024-03-06T14:18:24Z
2024-03-06T14:18:24Z
Bridging Diversity and Uncertainty in Active learning with Self-Supervised Pre-Training
This study addresses the integration of diversity-based and uncertainty-based sampling strategies in active learning, particularly within the context of self-supervised pre-trained models. We introduce a straightforward heuristic called TCM that mitigates the cold start problem while maintaining strong performance across various data levels. By initially applying TypiClust for diversity sampling and subsequently transitioning to uncertainty sampling with Margin, our approach effectively combines the strengths of both strategies. Our experiments demonstrate that TCM consistently outperforms existing methods across various datasets in both low and high data regimes.
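A compact sketch of the TCM switch; `typiclust_select` and `margin_select` are hypothetical stand-ins for TypiClust and Margin, and the switch point is a tunable assumption.

```python
# Diversity first (TypiClust), uncertainty later (Margin).
def tcm_query(pool, model, n_labeled, batch_size,
              typiclust_select, margin_select, switch_at=1000):
    if n_labeled < switch_at:
        return typiclust_select(pool, batch_size)      # diversity phase
    return margin_select(pool, model, batch_size)      # uncertainty phase
```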
[ "['Paul Doucet' 'Benjamin Estermann' 'Till Aczel' 'Roger Wattenhofer']" ]
null
null
2403.03730
null
null
http://arxiv.org/pdf/2403.03730v1
2024-03-06T14:19:11Z
2024-03-06T14:19:11Z
Learning 3D object-centric representation through prediction
As part of human core knowledge, the representation of objects is the building block of mental representation that supports high-level concepts and symbolic reasoning. While humans develop the ability to perceive objects situated in 3D environments without supervision, models that learn the same set of abilities under constraints similar to those faced by human infants are lacking. Towards this end, we developed a novel network architecture that simultaneously learns to 1) segment objects from discrete images, 2) infer their 3D locations, and 3) perceive depth, all while using only information directly available to the brain as training data, namely: sequences of images and self-motion. The core idea is treating objects as latent causes of visual input, which the brain uses to make efficient predictions of future scenes. This results in object representations being learned as an essential byproduct of learning to predict.
[ "['John Day' 'Tushar Arora' 'Jirui Liu' 'Li Erran Li' 'Ming Bo Cai']" ]
null
null
2403.03736
null
null
http://arxiv.org/pdf/2403.03736v1
2024-03-06T14:27:02Z
2024-03-06T14:27:02Z
Unifying Generation and Compression: Ultra-low bitrate Image Coding Via Multi-stage Transformer
Recent progress in generative compression technology has significantly improved the perceptual quality of compressed data. However, these advancements primarily focus on producing high-frequency details, often overlooking the ability of generative models to capture the prior distribution of image content, thus impeding further bitrate reduction in extreme compression scenarios (<0.05 bpp). Motivated by the capabilities of predictive language models for lossless compression, this paper introduces a novel Unified Image Generation-Compression (UIGC) paradigm, merging the processes of generation and compression. A key feature of the UIGC framework is the adoption of vector-quantized (VQ) image models for tokenization, alongside a multi-stage transformer designed to exploit spatial contextual information for modeling the prior distribution. As such, the dual-purpose framework effectively utilizes the learned prior for entropy estimation and assists in the regeneration of lost tokens. Extensive experiments demonstrate the superiority of the proposed UIGC framework over existing codecs in perceptual quality and human perception, particularly in ultra-low bitrate scenarios (<=0.03 bpp), pioneering a new direction in generative compression.
[ "['Naifu Xue' 'Qi Mao' 'Zijian Wang' 'Yuan Zhang' 'Siwei Ma']" ]
null
null
2403.03737
null
null
http://arxiv.org/pdf/2403.03737v1
2024-03-06T14:27:29Z
2024-03-06T14:27:29Z
Probabilistic Topic Modelling with Transformer Representations
Topic modelling was mostly dominated by Bayesian graphical models during the last decade. With the rise of transformers in Natural Language Processing, however, several successful models that rely on straightforward clustering approaches in transformer-based embedding spaces have emerged and consolidated the notion of topics as clusters of embedding vectors. We propose the Transformer-Representation Neural Topic Model (TNTM), which combines the benefits of topic representations in transformer-based embedding spaces and probabilistic modelling. Therefore, this approach unifies the powerful and versatile notion of topics based on transformer embeddings with fully probabilistic modelling, as in models such as Latent Dirichlet Allocation (LDA). We utilize the variational autoencoder (VAE) framework for improved inference speed and modelling flexibility. Experimental results show that our proposed model achieves results on par with various state-of-the-art approaches in terms of embedding coherence while maintaining almost perfect topic diversity. The corresponding source code is available at https://github.com/ArikReuter/TNTM.
[ "['Arik Reuter' 'Anton Thielmann' 'Christoph Weisser' 'Benjamin Säfken'\n 'Thomas Kneib']" ]
null
null
2403.03739
null
null
http://arxiv.org/pdf/2403.03739v1
2024-03-06T14:28:49Z
2024-03-06T14:28:49Z
A&B BNN: Add&Bit-Operation-Only Hardware-Friendly Binary Neural Network
Binary neural networks utilize 1-bit quantized weights and activations to reduce both the model's storage demands and computational burden. However, advanced binary architectures still incorporate millions of inefficient and non-hardware-friendly full-precision multiplication operations. A&B BNN is proposed to directly remove part of the multiplication operations in a traditional BNN and replace the rest with an equal number of bit operations, introducing the mask layer and the quantized RPReLU structure based on the normalizer-free network architecture. The mask layer can be removed during inference by leveraging the intrinsic characteristics of BNNs, using straightforward mathematical transformations to avoid the associated multiplication operations. The quantized RPReLU structure enables more efficient bit operations by constraining its slope to integer powers of 2. Experiments achieved accuracies of 92.30%, 69.35%, and 66.89% on the CIFAR-10, CIFAR-100, and ImageNet datasets, respectively, which are competitive with the state of the art. Ablation studies verified the efficacy of the quantized RPReLU structure, which leads to a 1.14% improvement on ImageNet compared with a fixed-slope RLeakyReLU. The proposed add&bit-operation-only BNN offers an innovative approach to hardware-friendly network architecture.
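A small sketch of a quantized RPReLU of the kind described: the negative-branch slope is fixed to an integer power of two, so on integer hardware the multiply reduces to a bit shift; the shift amount and thresholds here are illustrative assumptions.

```python
# RPReLU with a power-of-two negative slope (2**-shift).
import numpy as np

def quantized_rprelu(x, gamma=0.0, zeta=0.0, shift=3):
    neg = (x - gamma) / (1 << shift)      # slope 2**-shift: a right shift on integer hardware
    return np.where(x > gamma, x - gamma, neg) + zeta

print(quantized_rprelu(np.linspace(-4, 4, 9)))
```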
[ "['Ruichen Ma' 'Guanchao Qiao' 'Yian Liu' 'Liwei Meng' 'Ning Ning'\n 'Yang Liu' 'Shaogang Hu']" ]
null
null
2403.03741
null
null
http://arxiv.org/pdf/2403.03741v1
2024-03-06T14:30:09Z
2024-03-06T14:30:09Z
SUPClust: Active Learning at the Boundaries
Active learning is a machine learning paradigm designed to optimize model performance in a setting where labeled data is expensive to acquire. In this work, we propose a novel active learning method called SUPClust that seeks to identify points at the decision boundary between classes. By targeting these points, SUPClust aims to gather information that is most informative for refining the model's prediction of complex decision regions. We demonstrate experimentally that labeling these points leads to strong model performance. This improvement is observed even in scenarios characterized by strong class imbalance.
[ "['Yuta Ono' 'Till Aczel' 'Benjamin Estermann' 'Roger Wattenhofer']" ]
null
null
2403.03761
null
null
http://arxiv.org/pdf/2403.03761v1
2024-03-06T14:53:24Z
2024-03-06T14:53:24Z
Parameterized quantum comb and simpler circuits for reversing unknown qubit-unitary operations
Quantum comb is an essential tool for characterizing complex quantum protocols in quantum information processing. In this work, we introduce PQComb, a framework leveraging parameterized quantum circuits to explore the capabilities of quantum combs for general quantum process transformation tasks and beyond. By optimizing PQComb for time-reversal simulations of unknown unitary evolutions, we develop a simpler protocol for unknown qubit unitary inversion that reduces the ancilla qubit overhead from 6 to 3 compared to the existing method in [Yoshida, Soeda, Murao, PRL 131, 120602, 2023]. This demonstrates the utility of quantum comb structures and showcases PQComb's potential for solving complex quantum tasks. Our results pave the way for broader PQComb applications in quantum computing and quantum information, emphasizing its versatility for tackling diverse problems in quantum machine learning.
[ "['Yin Mo' 'Lei Zhang' 'Yu-Ao Chen' 'Yingjian Liu' 'Tengxiang Lin'\n 'Xin Wang']" ]
null
null
2403.03767
null
null
http://arxiv.org/abs/2403.03767v1
2024-03-06T15:03:04Z
2024-03-06T15:03:04Z
Predicting the Temperature Dependence of Surfactant CMCs Using Graph Neural Networks
The critical micelle concentration (CMC) of surfactant molecules is an essential property for surfactant applications in industry. Recently, classical QSPR and Graph Neural Networks (GNNs), a deep learning technique, have been successfully applied to predict the CMC of surfactants at room temperature. However, these models have not yet considered the temperature dependence of the CMC, which is highly relevant for practical applications. We herein develop a GNN model for temperature-dependent CMC prediction of surfactants. We collect about 1400 data points from public sources for all surfactant classes, i.e., ionic, nonionic, and zwitterionic, at multiple temperatures. We test the predictive quality of the model for the following scenarios: i) CMC data for a surfactant are present in the training at at least one other temperature, and ii) CMC data for a surfactant are not present in the training, i.e., generalizing to unseen surfactants. In both test scenarios, our model exhibits a high predictive performance of R$^2 \geq$ 0.94 on test data. We also find that the model performance varies by surfactant class. Finally, we evaluate the model for sugar-based surfactants with complex molecular structures, as these represent a more sustainable alternative to synthetic surfactants and are therefore of great interest for future applications in the personal and home care industries.
[ "['Christoforos Brozos' 'Jan G. Rittig' 'Sandip Bhattacharya' 'Elie Akanny'\n 'Christina Kohlmann' 'Alexander Mitsos']" ]