Schema (column: dtype):
  categories: string
  doi: string
  id: string
  year: float64
  venue: string
  link: string
  updated: string
  published: string
  title: string
  abstract: string
  authors: list

In every record below, the categories, doi, year, and venue fields are null, so they are omitted from the per-record listings.
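The schema reads like a pandas dtype listing, suggesting the records were dumped from a DataFrame of arXiv metadata. A minimal loading sketch under that assumption (the file name and JSON-lines layout are hypothetical):

```python
import pandas as pd

# Hypothetical source file; the dump's actual origin is not given.
df = pd.read_json("arxiv_metadata.jsonl", lines=True)

# Enforce the dtypes declared in the schema above; 'authors' stays a
# Python list per row (pandas object dtype).
df = df.astype({"categories": "string", "doi": "string", "id": "string",
                "year": "float64", "venue": "string", "link": "string",
                "updated": "string", "published": "string",
                "title": "string", "abstract": "string"})
print(df[["id", "title"]].head())
```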

id: 2403.12206
link: http://arxiv.org/pdf/2403.12206v1
updated: 2024-03-18T19:43:00Z
published: 2024-03-18T19:43:00Z
title: Useful Compact Representations for Data-Fitting
abstract: For minimization problems without second-derivative information, methods that estimate Hessian matrices can be very effective. However, conventional techniques generate dense matrices that are prohibitive for large problems. Limited-memory compact representations express the dense arrays in terms of a low-rank representation and have become the state of the art for software implementations on large deterministic problems. We develop new compact representations that are parameterized by a choice of vectors and that reduce to existing well-known formulas for special choices. We demonstrate the effectiveness of the compact representations for large eigenvalue computations, tensor factorizations, and nonlinear regressions.
authors: Johannes J. Brust

id: 2403.12210
link: http://arxiv.org/pdf/2403.12210v1
updated: 2024-03-18T19:51:17Z
published: 2024-03-18T19:51:17Z
title: Decomposing Control Lyapunov Functions for Efficient Reinforcement Learning
abstract: Recent methods using Reinforcement Learning (RL) have proven to be successful for training intelligent agents in unknown environments. However, RL has not been applied widely in real-world robotics scenarios. This is because current state-of-the-art RL methods require large amounts of data to learn a specific task, leading to unreasonable costs when deploying the agent to collect data in real-world applications. In this paper, we build on existing work that reshapes the reward function in RL by introducing a Control Lyapunov Function (CLF), which is demonstrated to reduce the sample complexity. However, this formulation requires knowing a CLF of the system and, due to the lack of a general method, identifying a suitable CLF is often a challenge. Existing work can compute low-dimensional CLFs via a Hamilton-Jacobi reachability procedure. However, this class of methods becomes intractable on high-dimensional systems, a problem that we address by using a system decomposition technique to compute what we call Decomposed Control Lyapunov Functions (DCLFs). We use the computed DCLF for reward shaping, which we show improves RL performance. Through multiple examples, we demonstrate the effectiveness of this approach, where our method finds a policy to successfully land a quadcopter using less than half the real-world data required by the state-of-the-art Soft Actor-Critic algorithm.
authors: Antonio Lopez, David Fridovich-Keil

id: 2403.12212
link: http://arxiv.org/pdf/2403.12212v1
updated: 2024-03-18T19:53:56Z
published: 2024-03-18T19:53:56Z
title: Evaluating Named Entity Recognition: Comparative Analysis of Mono- and Multilingual Transformer Models on Brazilian Corporate Earnings Call Transcriptions
abstract: Named Entity Recognition (NER) is a Natural Language Processing technique for extracting information from textual documents. However, much of the existing research on NER has been centered around English-language documents, leaving a gap in the availability of datasets tailored to the financial domain in Portuguese. This study addresses the need for NER within the financial domain, focusing on Portuguese-language texts extracted from earnings call transcriptions of Brazilian banks. By curating a comprehensive dataset comprising 384 transcriptions and leveraging weak supervision techniques for annotation, we evaluate the performance of monolingual models trained on Portuguese (BERTimbau and PTT5) and multilingual models (mBERT and mT5). Notably, we introduce a novel approach that reframes the token classification task as a text generation problem, enabling fine-tuning and evaluation of T5 models. Following the fine-tuning of the models, we conduct an evaluation on the test dataset, employing performance and error metrics. Our findings reveal that BERT-based models consistently outperform T5-based models. Furthermore, while the multilingual models exhibit comparable macro F1-scores, BERTimbau demonstrates superior performance over PTT5. A manual analysis of sentences generated by PTT5 and mT5 unveils a degree of similarity between the original and generated sentences ranging from 0.89 to 1.0. However, critical errors emerge as both models exhibit discrepancies, such as alterations to monetary and percentage values, underscoring the importance of accuracy and consistency in the financial domain. Despite these challenges, PTT5 and mT5 achieve impressive macro F1-scores of 98.52% and 98.85%, respectively, with our proposed approach. Furthermore, our study sheds light on notable disparities in memory and time consumption for inference across the models.
authors: Ramon Abilio, Guilherme Palermo Coelho, Ana Estela Antunes da Silva

id: 2403.12213
link: http://arxiv.org/pdf/2403.12213v2
updated: 2024-04-18T17:35:16Z
published: 2024-03-18T19:54:59Z
title: Private graphon estimation via sum-of-squares
abstract: We develop the first pure node-differentially-private algorithms for learning stochastic block models and for graphon estimation with polynomial running time for any constant number of blocks. The statistical utility guarantees match those of the previous best information-theoretic (exponential-time) node-private mechanisms for these problems. The algorithm is based on an exponential mechanism for a score function defined in terms of a sum-of-squares relaxation whose level depends on the number of blocks. The key ingredients of our results are (1) a characterization of the distance between the block graphons in terms of a quadratic optimization over the polytope of doubly stochastic matrices, (2) a general sum-of-squares convergence result for polynomial optimization over arbitrary polytopes, and (3) a general approach to perform Lipschitz extensions of score functions as part of the sum-of-squares algorithmic paradigm.
authors: Hongjie Chen, Jingqiu Ding, Tommaso d'Orsi, Yiding Hua, Chih-Hung Liu, David Steurer

id: 2403.12226
link: http://arxiv.org/pdf/2403.12226v1
updated: 2024-03-18T20:18:32Z
published: 2024-03-18T20:18:32Z
title: Large-scale flood modeling and forecasting with FloodCast
abstract: Large-scale hydrodynamic models generally rely on fixed-resolution spatial grids and model parameters as well as incurring a high computational cost. This limits their ability to accurately forecast flood crests and issue time-critical hazard warnings. In this work, we build a fast, stable, accurate, resolution-invariant, and geometry-adaptive flood modeling and forecasting framework that can perform at large scales, namely FloodCast. The framework comprises two main modules: multi-satellite observation and hydrodynamic modeling. In the multi-satellite observation module, a real-time unsupervised change detection method and a rainfall processing and analysis tool are proposed to harness the full potential of multi-satellite observations in large-scale flood prediction. In the hydrodynamic modeling module, a geometry-adaptive physics-informed neural solver (GeoPINS) is introduced, benefiting from the absence of a requirement for training data in physics-informed neural networks and featuring a fast, accurate, and resolution-invariant architecture with Fourier neural operators. GeoPINS demonstrates impressive performance on popular PDEs across regular and irregular domains. Building upon GeoPINS, we propose a sequence-to-sequence GeoPINS model to handle long-term temporal series and extensive spatial domains in large-scale flood modeling. Next, we establish a benchmark dataset in the 2022 Pakistan flood to assess various flood prediction methods. Finally, we validate the model in three dimensions - flood inundation range, depth, and transferability of spatiotemporal downscaling. Traditional hydrodynamics and sequence-to-sequence GeoPINS exhibit exceptional agreement during high water levels, while comparative assessments with SAR-based flood depth data show that sequence-to-sequence GeoPINS outperforms traditional hydrodynamics, with smaller prediction errors.
authors: Qingsong Xu, Yilei Shi, Jonathan Bamber, Chaojun Ouyang, Xiao Xiang Zhu

id: 2403.12236
link: http://arxiv.org/pdf/2403.12236v2
updated: 2024-03-29T06:41:07Z
published: 2024-03-18T20:33:44Z
title: Improving Generalization via Meta-Learning on Hard Samples
abstract: Learned reweighting (LRW) approaches to supervised learning use an optimization criterion to assign weights for training instances, in order to maximize performance on a representative validation dataset. We pose and formalize the problem of optimized selection of the validation set used in LRW training, to improve classifier generalization. In particular, we show that using hard-to-classify instances in the validation set has both a theoretical connection to, and strong empirical evidence of, improved generalization. We provide an efficient algorithm for training this meta-optimized model, as well as a simple train-twice heuristic for careful comparative study. We demonstrate that LRW with easy validation data performs consistently worse than LRW with hard validation data, establishing the validity of our meta-optimization problem. Our proposed algorithm outperforms a wide range of baselines on a range of datasets and domain shift challenges (Imagenet-1K, CIFAR-100, Clothing-1M, CAMELYON, WILDS, etc.), with ~1% gains using ViT-B on Imagenet. We also show that using naturally hard examples for validation (Imagenet-R / Imagenet-A) in LRW training for Imagenet improves performance on both clean and naturally hard test instances by 1-2%. Secondary analyses show that using hard validation data in an LRW framework improves margins on test data, hinting at the mechanism underlying our empirical gains. We believe this work opens up new research directions for the meta-optimization of meta-learning in a supervised learning context.
authors: Nishant Jain, Arun S. Suggala, Pradeep Shenoy

id: 2403.12237
link: http://arxiv.org/pdf/2403.12237v2
updated: 2024-05-01T21:39:21Z
published: 2024-03-18T20:35:35Z
title: Efficient Transformer-based Hyper-parameter Optimization for Resource-constrained IoT Environments
abstract: The hyper-parameter optimization (HPO) process is imperative for finding the best-performing Convolutional Neural Networks (CNNs). The automation process of HPO is characterized by its sizable computational footprint and its lack of transparency; both are important factors in a resource-constrained Internet of Things (IoT) environment. In this paper, we address these problems by proposing a novel approach that combines a transformer architecture and an actor-critic Reinforcement Learning (RL) model, TRL-HPO, equipped with multi-headed attention that enables parallelization and progressive generation of layers. The approach is validated empirically by evaluating TRL-HPO on the MNIST dataset and comparing it with state-of-the-art approaches that build CNN models from scratch. The results show that TRL-HPO outperforms the classification results of these approaches by 6.8% within the same time frame, demonstrating the efficiency of TRL-HPO for the HPO process. The analysis of the results identifies the stacking of fully connected layers as the main culprit behind performance degradation. This paper identifies new avenues for improving RL-based HPO processes in resource-constrained environments.
authors: Ibrahim Shaer, Soodeh Nikan, Abdallah Shami

id: 2403.12242
link: http://arxiv.org/pdf/2403.12242v2
updated: 2024-06-17T15:33:37Z
published: 2024-03-18T20:47:10Z
title: Reference-based Metrics Disprove Themselves in Question Generation
abstract: Reference-based metrics such as BLEU and BERTScore are widely used to evaluate question generation (QG). In this study, on QG benchmarks such as SQuAD and HotpotQA, we find that using human-written references cannot guarantee the effectiveness of the reference-based metrics. Most QG benchmarks have only one reference; we replicated the annotation process and collected another reference. A good metric is expected to grade a human-validated question no worse than generated questions. However, the results of reference-based metrics on our newly collected reference disproved the metrics themselves. We propose a reference-free metric consisting of multi-dimensional criteria such as naturalness, answerability, and complexity, utilizing large language models. These criteria are not constrained to the syntax or semantics of a single reference question, and the metric does not require a diverse set of references. Experiments reveal that our metric accurately distinguishes between high-quality questions and flawed ones, and achieves state-of-the-art alignment with human judgment.
authors: Bang Nguyen, Mengxia Yu, Yun Huang, Meng Jiang

id: 2403.12254
link: http://arxiv.org/pdf/2403.12254v1
updated: 2024-03-18T21:07:57Z
published: 2024-03-18T21:07:57Z
title: Adaptive LPD Radar Waveform Design with Generative Deep Learning
abstract: We propose a novel, learning-based method for adaptively generating low probability of detection (LPD) radar waveforms that blend into their operating environment. Our waveforms are designed to follow a distribution that is indistinguishable from the ambient radio frequency (RF) background -- while still being effective at ranging and sensing. To do so, we use an unsupervised, adversarial learning framework; our generator network produces waveforms designed to confuse a critic network, which is optimized to differentiate generated waveforms from the background. To ensure our generated waveforms are still effective for sensing, we introduce and minimize an ambiguity function-based loss on the generated waveforms. We evaluate the performance of our method by comparing the single-pulse detectability of our generated waveforms with traditional LPD waveforms using a separately trained detection neural network. We find that our method can generate LPD waveforms that reduce detectability by up to 90% while simultaneously offering improved ambiguity function (sensing) characteristics. Our framework also provides a mechanism to trade-off detectability and sensing performance.
authors: Matthew R. Ziemann, Christopher A. Metzler

id: 2403.12267
link: http://arxiv.org/pdf/2403.12267v2
updated: 2024-03-20T01:46:13Z
published: 2024-03-18T21:32:58Z
title: Data-Efficient Contrastive Language-Image Pretraining: Prioritizing Data Quality over Quantity
abstract: Contrastive Language-Image Pre-training (CLIP) on large-scale image-caption datasets learns representations that can achieve remarkable zero-shot generalization. However, such models require a massive amount of pre-training data. Improving the quality of the pre-training data has been shown to be much more effective in improving CLIP's performance than increasing its volume. Nevertheless, finding small subsets of training data that provably generalize the best has remained an open question. In this work, we propose the first theoretically rigorous data selection method for CLIP. We show that subsets that closely preserve the cross-covariance of the images and captions of the full data provably achieve a superior generalization performance. Our extensive experiments on ConceptualCaptions3M and ConceptualCaptions12M demonstrate that subsets found by our method achieve over 2.7x and 1.4x the accuracy of the next best baseline on ImageNet and its shifted versions. Moreover, we show that our subsets obtain 1.5x the average accuracy of the next best baseline across 11 downstream datasets. The code is available at: https://github.com/BigML-CS-UCLA/clipcov-data-efficient-clip.
authors: Siddharth Joshi, Arnav Jain, Ali Payani, Baharan Mirzasoleiman

id: 2403.12278
link: http://arxiv.org/pdf/2403.12278v1
updated: 2024-03-18T21:53:56Z
published: 2024-03-18T21:53:56Z
title: Stochastic Rounding Implicitly Regularizes Tall-and-Thin Matrices
abstract: Motivated by the popularity of stochastic rounding in the context of machine learning and the training of large-scale deep neural network models, we consider stochastic nearness rounding of real matrices $\mathbf{A}$ with many more rows than columns. We provide novel theoretical evidence, supported by extensive experimental evaluation that, with high probability, the smallest singular value of a stochastically rounded matrix is well bounded away from zero -- regardless of how close $\mathbf{A}$ is to being rank deficient and even if $\mathbf{A}$ is rank-deficient. In other words, stochastic rounding implicitly regularizes tall and skinny matrices $\mathbf{A}$ so that the rounded version has full column rank. Our proofs leverage powerful results in random matrix theory, and the idea that stochastic rounding errors do not concentrate in low-dimensional column spaces.
authors: Gregory Dexter, Christos Boutsikas, Linkai Ma, Ilse C. F. Ipsen, Petros Drineas

id: 2403.12285
link: http://arxiv.org/pdf/2403.12285v1
updated: 2024-03-18T22:11:00Z
published: 2024-03-18T22:11:00Z
title: FinLlama: Financial Sentiment Classification for Algorithmic Trading Applications
abstract: There are multiple sources of financial news online which influence market movements and traders' decisions. This highlights the need for accurate sentiment analysis, in addition to having appropriate algorithmic trading techniques, to arrive at better informed trading decisions. Standard lexicon-based sentiment approaches have demonstrated their power in aiding financial decisions. However, they are known to suffer from issues related to context sensitivity and word ordering. Large Language Models (LLMs) can also be used in this context, but they are not finance-specific and tend to require significant computational resources. To facilitate a finance-specific LLM framework, we introduce a novel approach based on the Llama 2 7B foundational model, in order to benefit from its generative nature and comprehensive language manipulation. This is achieved by fine-tuning the Llama 2 7B model on a small portion of supervised financial sentiment analysis data, so as to jointly handle the complexities of financial lexicon and context, and further equipping it with a neural network based decision mechanism. Such a generator-classifier scheme, referred to as FinLlama, is trained not only to classify the sentiment valence but also to quantify its strength, thus offering traders a nuanced insight into financial news articles. Complementing this, the implementation of parameter-efficient fine-tuning through LoRA optimises trainable parameters, thus minimising computational and memory requirements, without sacrificing accuracy. Simulation results demonstrate the ability of the proposed FinLlama to provide a framework for enhanced portfolio management decisions and increased market returns. These results underpin the ability of FinLlama to construct high-return portfolios which exhibit enhanced resilience, even during volatile periods and unpredictable market events.
authors: Thanos Konstantinidis, Giorgos Iacovides, Mingxue Xu, Tony G. Constantinides, Danilo Mandic

id: 2403.12307
link: http://arxiv.org/pdf/2403.12307v1
updated: 2024-03-18T23:16:17Z
published: 2024-03-18T23:16:17Z
title: Molecular Classification Using Hyperdimensional Graph Classification
abstract: Our work introduces an innovative approach to graph learning by leveraging Hyperdimensional Computing. Graphs serve as a widely embraced method for conveying information, and their utilization in learning has gained significant attention. This is notable in the field of chemoinformatics, where learning from graph representations plays a pivotal role. An important application within this domain involves the identification of cancerous cells across diverse molecular structures. We propose an HDC-based model that demonstrates comparable Area Under the Curve results when compared to state-of-the-art models like Graph Neural Networks (GNNs) or the Weisfeiler-Lehman graph kernel (WL). Moreover, it outperforms previously proposed hyperdimensional computing graph learning methods. Furthermore, it achieves noteworthy speed enhancements, boasting a 40x acceleration in the training phase and a 15x improvement in inference time compared to GNN and WL models. This not only underscores the efficacy of the HDC-based method, but also highlights its potential for expedited and resource-efficient graph learning.
authors: Pere Verges, Igor Nunes, Mike Heddes, Tony Givargis, Alexandru Nicolau

id: 2403.12309
link: http://arxiv.org/pdf/2403.12309v2
updated: 2024-06-26T02:44:18Z
published: 2024-03-18T23:18:27Z
title: Reinforcement Learning from Delayed Observations via World Models
abstract: In standard reinforcement learning settings, agents typically assume immediate feedback about the effects of their actions after taking them. However, in practice, this assumption may not hold true due to physical constraints and can significantly impact the performance of learning algorithms. In this paper, we address observation delays in partially observable environments. We propose leveraging world models, which have shown success in integrating past observations and learning dynamics, to handle observation delays. By reducing delayed POMDPs to delayed MDPs with world models, our methods can effectively handle partial observability, where existing approaches achieve sub-optimal performance or degrade quickly as observability decreases. Experiments suggest that one of our methods can outperform a naive model-based approach by up to 250%. Moreover, we evaluate our methods on visual delayed environments, showcasing for the first time delay-aware reinforcement learning for continuous control with visual observations.
authors: Armin Karamzade, Kyungmin Kim, Montek Kalsi, Roy Fox

id: 2403.12313
link: http://arxiv.org/pdf/2403.12313v1
updated: 2024-03-18T23:20:08Z
published: 2024-03-18T23:20:08Z
title: Improving LoRA in Privacy-preserving Federated Learning
abstract: Low-rank adaptation (LoRA) is one of the most popular task-specific parameter-efficient fine-tuning (PEFT) methods on pre-trained language models for its good performance and computational efficiency. LoRA injects a product of two trainable rank decomposition matrices on top of each frozen pre-trained model module. However, when applied in the setting of privacy-preserving federated learning (FL), LoRA may become unstable due to the following facts: 1) the effects of data heterogeneity and multi-step local updates are non-negligible, 2) additive noise enforced on updating gradients to guarantee differential privacy (DP) can be amplified, and 3) the final performance is susceptible to hyper-parameters. A key factor leading to these phenomena is the discordance between jointly optimizing the two low-rank matrices by local clients and separately aggregating them by the central server. Thus, this paper proposes an efficient and effective version of LoRA, Federated Freeze A LoRA (FFA-LoRA), to alleviate these challenges and further halve the communication cost of federated fine-tuning LLMs. The core idea of FFA-LoRA is to fix the randomly initialized non-zero matrices and only fine-tune the zero-initialized matrices. Compared to LoRA, FFA-LoRA is motivated by practical and theoretical benefits in privacy-preserved FL. Our experiments demonstrate that FFA-LoRA provides more consistent performance with better computational efficiency over vanilla LoRA in various FL tasks.
authors: Youbang Sun, Zitao Li, Yaliang Li, Bolin Ding

id: 2403.12320
link: http://arxiv.org/pdf/2403.12320v1
updated: 2024-03-18T23:23:50Z
published: 2024-03-18T23:23:50Z
title: Approximated Likelihood Ratio: A Forward-Only and Parallel Framework for Boosting Neural Network Training
abstract: Efficient and biologically plausible alternatives to backpropagation in neural network training remain a challenge due to issues such as high computational complexity and additional assumptions about neural networks, which limit scalability to deeper networks. The likelihood ratio method offers a promising gradient estimation strategy but is constrained by significant memory consumption, especially when deploying multiple copies of data to reduce estimation variance. In this paper, we introduce an approximation technique for the likelihood ratio (LR) method to alleviate computational and memory demands in gradient estimation. By exploiting the natural parallelism during the backward pass using LR, we further provide a high-performance training strategy, which pipelines both the forward and backward pass, to make it more suitable for the computation on specialized hardware. Extensive experiments demonstrate the effectiveness of the approximation technique in neural network training. This work underscores the potential of the likelihood ratio method in achieving high-performance neural network training, suggesting avenues for further exploration.
authors: Zeliang Zhang, Jinyang Jiang, Zhuo Liu, Susan Liang, Yijie Peng, Chenliang Xu

id: 2403.12323
link: http://arxiv.org/pdf/2403.12323v1
updated: 2024-03-18T23:32:08Z
published: 2024-03-18T23:32:08Z
title: Enhanced Detection of Transdermal Alcohol Levels Using Hyperdimensional Computing on Embedded Devices
abstract: Alcohol consumption has a significant impact on individuals' health, with even more pronounced consequences when consumption becomes excessive. One approach to promoting healthier drinking habits is implementing just-in-time interventions, where timely notifications indicating intoxication are sent during heavy drinking episodes. However, the complexity or invasiveness of an intervention mechanism may deter an individual from using it in practice. Previous research tackled this challenge using collected motion data and conventional Machine Learning (ML) algorithms to classify heavy drinking episodes, but with impractical accuracy and computational efficiency for mobile devices. Consequently, we have elected to use Hyperdimensional Computing (HDC) to design a just-in-time intervention approach that is practical for smartphones, smart wearables, and IoT deployment. HDC is a framework that has proven results in processing real-time sensor data efficiently. This approach offers several advantages, including low latency, minimal power consumption, and high parallelism. We explore various HDC encoding designs and combine them with various HDC learning models to create an optimal and feasible approach for mobile devices. Our findings indicate an accuracy rate of 89%, which represents a substantial 12% improvement over the current state-of-the-art.
authors: Manuel E. Segura, Pere Verges, Justin Tian Jin Chen, Ramesh Arangott, Angela Kristine Garcia, Laura Garcia Reynoso, Alexandru Nicolau, Tony Givargis, Sergio Gago-Masague

id: 2403.12326
link: http://arxiv.org/pdf/2403.12326v2
updated: 2024-07-15T01:32:38Z
published: 2024-03-18T23:42:04Z
title: Removing Undesirable Concepts in Text-to-Image Diffusion Models with Learnable Prompts
abstract: Diffusion models have shown remarkable capability in generating visually impressive content from textual descriptions. However, these models are trained on vast internet data, much of which contains undesirable elements such as sensitive content, copyrighted material, and unethical or harmful concepts. Therefore, beyond generating high-quality content, it is crucial to ensure these models do not propagate these undesirable elements. To address this issue, we propose a novel method to remove undesirable concepts from text-to-image diffusion models by incorporating a learnable prompt into the cross-attention module. This learnable prompt acts as additional memory, capturing the knowledge of undesirable concepts and reducing their dependency on the model parameters and corresponding textual inputs. By transferring this knowledge to the prompt, erasing undesirable concepts becomes more stable and has minimal negative impact on other concepts. We demonstrate the effectiveness of our method on the Stable Diffusion model, showcasing its superiority over state-of-the-art erasure methods in removing undesirable content while preserving unrelated elements.
authors: Anh Bui, Khanh Doan, Trung Le, Paul Montague, Tamas Abraham, Dinh Phung

id: 2403.12327
link: http://arxiv.org/pdf/2403.12327v1
updated: 2024-03-18T23:45:18Z
published: 2024-03-18T23:45:18Z
title: GT-Rain Single Image Deraining Challenge Report
abstract: This report reviews the results of the GT-Rain challenge on single image deraining at the UG2+ workshop at CVPR 2023. The aim of this competition is to study the rainy weather phenomenon in real world scenarios, provide a novel real world rainy image dataset, and to spark innovative ideas that will further the development of single image deraining methods on real images. Submissions were trained on the GT-Rain dataset and evaluated on an extension of the dataset consisting of 15 additional scenes. Scenes in GT-Rain comprise a real rainy image and a ground-truth image captured moments after the rain stopped. 275 participants were registered in the challenge and 55 competed in the final testing phase.
authors: Howard Zhang, Yunhao Ba, Ethan Yang, Rishi Upadhyay, Alex Wong, Achuta Kadambi, Yun Guo, Xueyao Xiao, Xiaoxiong Wang, Yi Li, Yi Chang, Luxin Yan, Chaochao Zheng, Luping Wang, Bin Liu, Sunder Ali Khowaja, Jiseok Yoon, Ik-Hyun Lee, Zhao Zhang, Yanyan Wei, Jiahuan Ren, Suiyi Zhao, Huan Zheng

id: 2403.12328
link: http://arxiv.org/pdf/2403.12328v1
updated: 2024-03-18T23:48:33Z
published: 2024-03-18T23:48:33Z
title: Methods for Generating Drift in Text Streams
abstract: Systems and individuals produce data continuously. On the Internet, people share their knowledge, sentiments, and opinions, provide reviews about services and products, and so on. Automatically learning from these textual data can provide insights to organizations and institutions, thus preventing financial impacts, for example. To learn from textual data over time, the machine learning system must account for concept drift. Concept drift is a frequent phenomenon in real-world datasets and corresponds to changes in data distribution over time. For instance, a concept drift occurs when sentiments change or a word's meaning is adjusted over time. Although concept drift is frequent in real-world applications, benchmark datasets with labeled drifts are rare in the literature. To bridge this gap, this paper provides four textual drift generation methods to ease the production of datasets with labeled drifts. These methods were applied to Yelp and Airbnb datasets and tested using incremental classifiers respecting the stream mining paradigm to evaluate their ability to recover from the drifts. Results show that all methods have their performance degraded right after the drifts, and the incremental SVM is the fastest to run and to recover previous performance levels in terms of accuracy and Macro F1-Score.
authors: Cristiano Mesquita Garcia, Alessandro Lameiras Koerich, Alceu de Souza Britto Jr, Jean Paul Barddal

id: 2403.12329
link: http://arxiv.org/pdf/2403.12329v1
updated: 2024-03-19T00:03:40Z
published: 2024-03-19T00:03:40Z
title: FedFisher: Leveraging Fisher Information for One-Shot Federated Learning
abstract: Standard federated learning (FL) algorithms typically require multiple rounds of communication between the server and the clients, which has several drawbacks, including requiring constant network connectivity, repeated investment of computational resources, and susceptibility to privacy attacks. One-Shot FL is a new paradigm that aims to address this challenge by enabling the server to train a global model in a single round of communication. In this work, we present FedFisher, a novel algorithm for one-shot FL that makes use of Fisher information matrices computed on local client models, motivated by a Bayesian perspective of FL. First, we theoretically analyze FedFisher for two-layer over-parameterized ReLU neural networks and show that the error of our one-shot FedFisher global model becomes vanishingly small as the width of the neural networks and amount of local training at clients increases. Next, we propose practical variants of FedFisher using the diagonal Fisher and K-FAC approximation for the full Fisher and highlight their communication and compute efficiency for FL. Finally, we conduct extensive experiments on various datasets, which show that these variants of FedFisher consistently improve over competing baselines.
authors: Divyansh Jhunjhunwala, Shiqiang Wang, Gauri Joshi

id: 2403.12335
link: http://arxiv.org/pdf/2403.12335v1
updated: 2024-03-19T00:48:25Z
published: 2024-03-19T00:48:25Z
title: Temporally-Consistent Koopman Autoencoders for Forecasting Dynamical Systems
abstract: The absence of sufficiently high-quality data often poses a key challenge in data-driven modeling of high-dimensional spatio-temporal dynamical systems. Koopman Autoencoders (KAEs) harness the expressivity of deep neural networks (DNNs), the dimension reduction capabilities of autoencoders, and the spectral properties of the Koopman operator to learn a reduced-order feature space with simpler, linear dynamics. However, the effectiveness of KAEs is hindered by limited and noisy training datasets, leading to poor generalizability. To address this, we introduce the Temporally-Consistent Koopman Autoencoder (tcKAE), designed to generate accurate long-term predictions even with constrained and noisy training data. This is achieved through a consistency regularization term that enforces prediction coherence across different time steps, thus enhancing the robustness and generalizability of tcKAE over existing models. We provide analytical justification for this approach based on Koopman spectral theory and empirically demonstrate tcKAE's superior performance over state-of-the-art KAE models across a variety of test cases, including simple pendulum oscillations, kinetic plasmas, fluid flows, and sea surface temperature data.
authors: Indranil Nayak, Debdipta Goswami, Mrinal Kumar, Fernando Teixeira

id: 2403.12338
link: http://arxiv.org/pdf/2403.12338v2
updated: 2024-04-12T19:14:59Z
published: 2024-03-19T01:07:35Z
title: Stochastic Halpern iteration in normed spaces and applications to reinforcement learning
abstract: We analyze the oracle complexity of the stochastic Halpern iteration with variance reduction, where we aim to approximate fixed points of nonexpansive and contractive operators in a normed finite-dimensional space. We show that if the underlying stochastic oracle has uniformly bounded variance, our method exhibits an overall oracle complexity of $\tilde{O}(\varepsilon^{-5})$, improving recent rates established for the stochastic Krasnoselskii-Mann iteration. Also, we establish a lower bound of $\Omega(\varepsilon^{-3})$, which applies to a wide range of algorithms, including all averaged iterations even with minibatching. Using a suitable modification of our approach, we derive a $O(\varepsilon^{-2}(1-\gamma)^{-3})$ complexity bound in the case in which the operator is a $\gamma$-contraction. As an application, we propose new synchronous algorithms for average reward and discounted reward Markov decision processes. In particular, for the average reward, our method improves on the best-known sample complexity.
authors: Mario Bravo, Juan Pablo Contreras

id: 2403.12350
link: http://arxiv.org/pdf/2403.12350v1
updated: 2024-03-19T01:39:33Z
published: 2024-03-19T01:39:33Z
title: Friendly Sharpness-Aware Minimization
abstract: Sharpness-Aware Minimization (SAM) has been instrumental in improving deep neural network training by minimizing both training loss and loss sharpness. Despite the practical success, the mechanisms behind SAM's generalization enhancements remain elusive, limiting its progress in deep learning optimization. In this work, we investigate SAM's core components for generalization improvement and introduce "Friendly-SAM" (F-SAM) to further enhance SAM's generalization. Our investigation reveals the key role of batch-specific stochastic gradient noise within the adversarial perturbation, i.e., the current minibatch gradient, which significantly influences SAM's generalization performance. By decomposing the adversarial perturbation in SAM into full gradient and stochastic gradient noise components, we discover that relying solely on the full gradient component degrades generalization while excluding it leads to improved performance. The possible reason is that the full gradient component increases the sharpness loss for the entire dataset, creating inconsistencies with the subsequent sharpness minimization step performed solely on the current minibatch data. Inspired by these insights, F-SAM aims to mitigate the negative effects of the full gradient component. It removes the full gradient component, estimated by an exponentially moving average (EMA) of historical stochastic gradients, and then leverages stochastic gradient noise for improved generalization. Moreover, we provide theoretical validation for the EMA approximation and prove the convergence of F-SAM on non-convex problems. Extensive experiments demonstrate the superior generalization performance and robustness of F-SAM over vanilla SAM. Code is available at https://github.com/nblt/F-SAM.
authors: Tao Li, Pan Zhou, Zhengbao He, Xinwen Cheng, Xiaolin Huang

id: 2403.12354
link: http://arxiv.org/pdf/2403.12354v2
updated: 2024-06-14T23:35:36Z
published: 2024-03-19T01:58:14Z
title: Sim2Real in Reconstructive Spectroscopy: Deep Learning with Augmented Device-Informed Data Simulation
abstract: This work proposes a deep learning (DL)-based framework, namely Sim2Real, for spectral signal reconstruction in reconstructive spectroscopy, focusing on efficient data sampling and fast inference time. The work focuses on the challenge of reconstructing real-world spectral signals under the extreme setting where only device-informed simulated data are available for training. Such device-informed simulated data are much easier to collect than real-world data but exhibit large distribution shifts from their real-world counterparts. To leverage such simulated data effectively, a hierarchical data augmentation strategy is introduced to mitigate the adverse effects of this domain shift, and a corresponding neural network for the spectral signal reconstruction with our augmented data is designed. Experiments using a real dataset measured from our spectrometer device demonstrate that Sim2Real achieves significant speed-up during the inference while attaining on-par performance with the state-of-the-art optimization-based methods.
authors: Jiyi Chen, Pengyu Li, Yutong Wang, Pei-Cheng Ku, Qing Qu

id: 2403.12362
link: http://arxiv.org/pdf/2403.12362v1
updated: 2024-03-19T02:16:32Z
published: 2024-03-19T02:16:32Z
title: DMAD: Dual Memory Bank for Real-World Anomaly Detection
abstract: Training a unified model is considered to be more suitable for practical industrial anomaly detection scenarios due to its generalization ability and storage efficiency. However, this multi-class setting, which exclusively uses normal data, overlooks the few but important accessible annotated anomalies in the real world. To address the challenge of real-world anomaly detection, we propose a new framework named Dual Memory bank enhanced representation learning for Anomaly Detection (DMAD). This framework handles both unsupervised and semi-supervised scenarios in a unified (multi-class) setting. DMAD employs a dual memory bank to calculate feature distance and feature attention between normal and abnormal patterns, thereby encapsulating knowledge about normal and abnormal instances. This knowledge is then used to construct an enhanced representation for anomaly score learning. We evaluated DMAD on the MVTec-AD and VisA datasets. The results show that DMAD surpasses current state-of-the-art methods, highlighting DMAD's capability in handling the complexities of real-world anomaly detection scenarios.
authors: Jianlong Hu, Xu Chen, Zhenye Gan, Jinlong Peng, Shengchuan Zhang, Jiangning Zhang, Yabiao Wang, Chengjie Wang, Liujuan Cao, Rongrong Ji

id: 2403.12366
link: http://arxiv.org/pdf/2403.12366v1
updated: 2024-03-19T02:23:12Z
published: 2024-03-19T02:23:12Z
title: U-Net Kalman Filter (UNetKF): An Example of Machine Learning-assisted Ensemble Data Assimilation
abstract: Machine learning techniques have seen a tremendous rise in popularity in weather and climate sciences. Data assimilation (DA), which combines observations and numerical models, has great potential to incorporate machine learning and artificial intelligence (ML/AI) techniques. In this paper, we use U-Net, a type of convolutional neural network (CNN), to predict the localized ensemble covariances for the Ensemble Kalman Filter (EnKF) algorithm. Using a 2-layer quasi-geostrophic model, U-Nets are trained using data from EnKF DA experiments. The trained U-Nets are then used to predict the flow-dependent localized error covariance matrices in U-Net Kalman Filter (UNetKF) experiments, which are compared to traditional 3-dimensional variational (3DVar), ensemble 3DVar (En3DVar) and EnKF methods. The performance of UNetKF can match or exceed that of 3DVar, En3DVar or EnKF. We also demonstrate that trained U-Nets can be transferred to a higher-resolution model for UNetKF implementation, which again performs competitively with 3DVar and EnKF, particularly for small ensemble sizes.
authors: Feiyu Lu

id: 2403.12367
link: http://arxiv.org/pdf/2403.12367v1
updated: 2024-03-19T02:24:16Z
published: 2024-03-19T02:24:16Z
title: Semisupervised score based matching algorithm to evaluate the effect of public health interventions
abstract: Multivariate matching algorithms "pair" similar study units in an observational study to remove potential bias and confounding effects caused by the absence of randomization. In one-to-one multivariate matching, a large number of "pairs" to be matched means both information from a large sample and a large number of matching tasks; an efficient matching algorithm that needs only comparatively limited auxiliary matching knowledge, provided through a "training" set of paired units by domain experts, is therefore practically appealing. We propose a novel one-to-one matching algorithm based on a quadratic score function $S_{\beta}(x_i, x_j) = \beta^T (x_i - x_j)(x_i - x_j)^T \beta$. The weights $\beta$, which can be interpreted as a variable importance measure, are designed to minimize the score difference between paired training units while maximizing the score difference between unpaired training units. Further, in the typical but intricate case where the training set is much smaller than the unpaired set, we propose a semisupervised companion one-to-one matching algorithm (SCOTOMA) that makes the best use of the unpaired units. The proposed weight estimator is proved to be consistent when the true matching criterion is indeed the quadratic score function. When the model assumptions are violated, we demonstrate that the proposed algorithm still outperforms some popular competing matching algorithms through a series of simulations. We applied the proposed algorithm to a real-world study to investigate the effect of in-person schooling on community Covid-19 transmission rates for policy-making purposes.
authors: Hongzhe Zhang, Jiasheng Shi, Jing Huang

id: 2403.12371
link: http://arxiv.org/pdf/2403.12371v1
updated: 2024-03-19T02:32:24Z
published: 2024-03-19T02:32:24Z
title: Advancing Time Series Classification with Multimodal Language Modeling
abstract: Scrutinizing previous studies on time series classification, most existing methods adopt a common learning-to-classify paradigm: a time series classifier learns the relation between sequence inputs and a target label encoded as a one-hot distribution. Although effective, this paradigm has two inherent limitations: (1) encoding target categories with a one-hot distribution fails to reflect the comparability and similarity between labels, and (2) it is very difficult to learn a transferable model across domains, which greatly hinders the development of a universal serving paradigm. In this work, we propose InstructTime, a novel attempt to reshape time series classification as a learning-to-generate paradigm. Relying on the powerful generative capacity of the pre-trained language model, the core idea is to formulate the classification of time series as a multimodal understanding task, in which both task-specific instructions and raw time series are treated as multimodal inputs while the label information is represented by texts. To accomplish this goal, three distinct designs are developed in InstructTime. First, a time series discretization module converts continuous time series into a sequence of hard tokens to solve the inconsistency issue across modal inputs. To address the modality representation gap, we introduce an alignment projection layer before feeding the transformed tokens of time series into the language model, and we highlight the necessity of auto-regressive pre-training across domains, which facilitates the transferability of the language model and boosts generalization performance. Extensive experiments are conducted over benchmark datasets, whose results uncover the superior performance of InstructTime and the potential for a universal foundation model in time series classification.
authors: Mingyue Cheng, Yiheng Chen, Qi Liu, Zhiding Liu, Yucong Luo

id: 2403.12372
link: http://arxiv.org/pdf/2403.12372v1
updated: 2024-03-19T02:32:47Z
published: 2024-03-19T02:32:47Z
title: Learning Transferable Time Series Classifier with Cross-Domain Pre-training from Language Model
abstract: Advances in self-supervised pre-training (SSL) have significantly advanced the field of learning transferable time series representations, which can be very useful in enhancing downstream tasks. Despite being effective, most existing works struggle to achieve cross-domain SSL pre-training, missing valuable opportunities to integrate patterns and features from different domains. The main challenge lies in the significant differences in the characteristics of time-series data across different domains, such as variations in the number of channels and temporal resolution scales. To address this challenge, we propose CrossTimeNet, a novel cross-domain SSL learning framework to learn transferable knowledge from various domains to largely benefit the target downstream task. One of the key characteristics of CrossTimeNet is the newly designed time series tokenization module, which could effectively convert the raw time series into a sequence of discrete tokens based on a reconstruction optimization process. Besides, we highlight that predicting a high proportion of corrupted tokens can be very helpful for extracting informative patterns across different domains during SSL pre-training, which has been largely overlooked in past years. Furthermore, unlike previous works, our work treats the pre-trained language model (PLM) as the initialization of the encoder network, investigating the feasibility of transferring the knowledge learned by the PLM to the time series area. Through these efforts, the path to cross-domain pre-training of a generic time series model can be effectively paved. We conduct extensive experiments in a real-world scenario across various time series classification domains. The experimental results clearly confirm CrossTimeNet's superior performance.
authors: Mingyue Cheng, Xiaoyu Tao, Qi Liu, Hao Zhang, Yiheng Chen, Chenyi Lei

id: 2403.12382
link: http://arxiv.org/pdf/2403.12382v1
updated: 2024-03-19T02:47:33Z
published: 2024-03-19T02:47:33Z
title: Low-Trace Adaptation of Zero-shot Self-supervised Blind Image Denoising
abstract: Deep learning-based denoisers have been the focus of recent developments in image denoising. In the past few years, there has been increasing interest in developing self-supervised denoising networks that only require noisy images, without the need for clean ground truth for training. However, a performance gap remains between current self-supervised methods and their supervised counterparts. Additionally, these methods commonly depend on assumptions about noise characteristics, thereby constraining their applicability in real-world scenarios. Inspired by the properties of the Frobenius norm expansion, we discover that incorporating a trace term reduces the optimization goal disparity between self-supervised and supervised methods, thereby enhancing the performance of self-supervised learning. To exploit this insight, we propose a trace-constraint loss function and design the low-trace adaptation Noise2Noise (LoTA-N2N) model that bridges the gap between self-supervised and supervised learning. Furthermore, we have discovered that several existing self-supervised denoising frameworks naturally fall within the proposed trace-constraint loss as subcases. Extensive experiments conducted on natural and confocal image datasets indicate that our method achieves state-of-the-art performance within the realm of zero-shot self-supervised image denoising approaches, without relying on any assumptions regarding the noise.
authors: Jintong Hu, Bin Xia, Bingchen Li, Wenming Yang

id: 2403.12384
link: http://arxiv.org/pdf/2403.12384v3
updated: 2024-05-21T11:51:03Z
published: 2024-03-19T02:49:32Z
title: An Aligning and Training Framework for Multimodal Recommendations
abstract: With the development of multimedia applications, multimodal recommendations play an essential role, as they can leverage rich contexts beyond user and item interactions. Existing methods mainly use them to help learn ID features; however, there exist semantic gaps between multimodal content features and ID features. Directly using multimodal information as an auxiliary would lead to misalignment in items' and users' representations. In this paper, we first systematically investigate the misalignment issue in multimodal recommendations, and propose a solution named AlignRec. In AlignRec, the recommendation objective is decomposed into three alignments, namely alignment within contents, alignment between content and categorical ID, and alignment between users and items. Each alignment is characterized by a distinct objective function. To effectively train AlignRec, we propose starting from pre-training the first alignment to obtain unified multimodal features and subsequently training the following two alignments together. As it is essential to analyze whether each multimodal feature helps in training, we design three new classes of metrics to evaluate intermediate performance. Our extensive experiments on three real-world datasets consistently verify the superiority of AlignRec compared to nine baselines. We also find that the multimodal features generated by our framework are better than currently used ones, which are to be open-sourced.
authors: Yifan Liu, Kangning Zhang, Xiangyuan Ren, Yanhua Huang, Jiarui Jin, Yingjie Qin, Ruilong Su, Ruiwen Xu, Weinan Zhang

id: 2403.12391
link: http://arxiv.org/pdf/2403.12391v1
updated: 2024-03-19T02:59:50Z
published: 2024-03-19T02:59:50Z
title: FairSTG: Countering performance heterogeneity via collaborative sample-level optimization
abstract: Spatiotemporal learning plays a crucial role in mobile computing techniques to empower smart cities. While existing research has made great efforts to achieve accurate predictions on the overall dataset, it still neglects the significant performance heterogeneity across samples. In this work, we designate the performance heterogeneity as the reason for unfair spatiotemporal learning, which not only degrades the practical functions of models, but also brings serious potential risks to real-world urban applications. To fix this gap, we propose a model-independent Fairness-aware framework for SpatioTemporal Graph learning (FairSTG), which inherits the idea of transferring the advantages of well-learned samples to challenging ones with collaborative mix-up. Specifically, FairSTG consists of a spatiotemporal feature extractor for model initialization, a collaborative representation enhancement for knowledge transfer between well-learned samples and challenging ones, and fairness objectives for immediately suppressing sample-level performance heterogeneity. Experiments on four spatiotemporal datasets demonstrate that our FairSTG significantly improves the fairness quality while maintaining comparable forecasting accuracy. Case studies show FairSTG can counter both spatial and temporal performance heterogeneity by our sample-level retrieval and compensation, and our work can potentially alleviate the risks in spatiotemporal resource allocation for underrepresented urban regions.
authors: Gengyu Lin, Zhengyang Zhou, Qihe Huang, Kuo Yang, Shifen Cheng, Yang Wang

id: 2403.12399
link: http://arxiv.org/pdf/2403.12399v1
updated: 2024-03-19T03:14:24Z
published: 2024-03-19T03:14:24Z
title: Electioneering the Network: Dynamic Multi-Step Adversarial Attacks for Community Canvassing
abstract: The problem of online social network manipulation for community canvassing is of real concern in today's world. Motivated by the study of voter models, opinion and polarization dynamics on networks, we model community canvassing as a dynamic process over a network enabled via gradient-based attacks on GNNs. Existing attacks on GNNs are all single-step and do not account for the dynamic cascading nature of information diffusion in networks. We consider the realistic scenario where an adversary uses a GNN as a proxy to predict and manipulate voter preferences, especially uncertain voters. Gradient-based attacks on the GNN inform the adversary of strategic manipulations that can be made to proselytize targeted voters. In particular, we explore minimum budget attacks for community canvassing (MBACC). We show that the MBACC problem is NP-Hard and propose Dynamic Multi-Step Adversarial Community Canvassing (MAC) to address it. MAC makes dynamic local decisions based on the heuristic of low budget and high second-order influence to convert and perturb target voters. MAC is a dynamic multi-step attack that discovers low-budget and high-influence targets from which efficient cascading attacks can happen. We evaluate MAC against single-step baselines on the MBACC problem with multiple underlying networks and GNN models. Our experiments show the superiority of MAC, which is able to discover efficient multi-hop attacks for adversarial community canvassing. Our code implementation and data are available at https://github.com/saurabhsharma1993/mac.
authors: Saurabh Sharma, Ambuj Singh

id: 2403.12400
link: http://arxiv.org/pdf/2403.12400v1
updated: 2024-03-19T03:16:52Z
published: 2024-03-19T03:16:52Z
title: Finding the Missing Data: A BERT-inspired Approach Against Package Loss in Wireless Sensing
abstract: Despite the development of various deep learning methods for Wi-Fi sensing, package loss often results in noncontinuous estimation of the Channel State Information (CSI), which negatively impacts the performance of the learning models. To overcome this challenge, we propose a deep learning model based on Bidirectional Encoder Representations from Transformers (BERT) for CSI recovery, named CSI-BERT. CSI-BERT can be trained in a self-supervised manner on the target dataset without the need for additional data. Furthermore, unlike traditional interpolation methods that focus on one subcarrier at a time, CSI-BERT captures the sequential relationships across different subcarriers. Experimental results demonstrate that CSI-BERT achieves lower error rates and faster speed compared to traditional interpolation methods, even when facing high loss rates. Moreover, by harnessing the recovered CSI obtained from CSI-BERT, other deep learning models like Residual Network and Recurrent Neural Network can achieve an average increase in accuracy of approximately 15% in Wi-Fi sensing tasks. The collected dataset WiGesture and code for our model are publicly available at https://github.com/RS2002/CSI-BERT.
authors: Zijian Zhao, Tingwei Chen, Fanyi Meng, Hang Li, Xiaoyang Li, Guangxu Zhu
null
null
2403.12403
null
null
http://arxiv.org/pdf/2403.12403v2
2024-05-08T02:47:36Z
2024-03-19T03:22:35Z
Towards Interpretable Hate Speech Detection using Large Language Model-extracted Rationales
Although social media platforms are a prominent arena for users to engage in interpersonal discussions and express opinions, the facade and anonymity offered by social media may allow users to spew hate speech and offensive content. Given the massive scale of such platforms, there arises a need to automatically identify and flag instances of hate speech. Although several hate speech detection methods exist, most of these black-box methods are not interpretable or explainable by design. To address the lack of interpretability, in this paper, we propose to use state-of-the-art Large Language Models (LLMs) to extract features in the form of rationales from the input text, to train a base hate speech classifier, thereby enabling faithful interpretability by design. Our framework effectively combines the textual understanding capabilities of LLMs and the discriminative power of state-of-the-art hate speech classifiers to make these classifiers faithfully interpretable. Our comprehensive evaluation on a variety of English language social media hate speech datasets demonstrate: (1) the goodness of the LLM-extracted rationales, and (2) the surprising retention of detector performance even after training to ensure interpretability. All code and data will be made available at https://github.com/AmritaBh/shield.
[ "['Ayushi Nirmal' 'Amrita Bhattacharjee' 'Paras Sheth' 'Huan Liu']" ]
null
null
2403.12404
null
null
http://arxiv.org/pdf/2403.12404v2
2024-05-29T09:59:36Z
2024-03-19T03:27:01Z
Understanding and Improving Training-free Loss-based Diffusion Guidance
Adding additional control to pretrained diffusion models has become an increasingly popular research area, with extensive applications in computer vision, reinforcement learning, and AI for science. Recently, several studies have proposed training-free loss-based guidance by using off-the-shelf networks pretrained on clean images. This approach enables zero-shot conditional generation for universal control formats, which appears to offer a free lunch in diffusion guidance. In this paper, we aim to develop a deeper understanding of training-free guidance, as well as overcome its limitations. We offer a theoretical analysis that supports training-free guidance from the perspective of optimization, distinguishing it from classifier-based (or classifier-free) guidance. To elucidate their drawbacks, we theoretically demonstrate that training-free guidance is more susceptible to adversarial gradients and exhibits slower convergence rates compared to classifier guidance. We then introduce a collection of techniques designed to overcome the limitations, accompanied by theoretical rationale and empirical evidence. Our experiments in image and motion generation confirm the efficacy of these techniques.
[ "['Yifei Shen' 'Xinyang Jiang' 'Yezhen Wang' 'Yifan Yang' 'Dongqi Han'\n 'Dongsheng Li']" ]
null
null
2403.12406
null
null
http://arxiv.org/pdf/2403.12406v1
2024-03-19T03:34:23Z
2024-03-19T03:34:23Z
Offline Imitation of Badminton Player Behavior via Experiential Contexts and Brownian Motion
Among the dynamic and rapid tactical exchanges of turn-based sports, badminton stands out as an intrinsic paradigm that requires interdependent, alternating decision-making by players. While the advancement of learning from offline expert data in sequential decision-making has been witnessed in various domains, imitating the rally-level behaviors of human players from offline badminton matches has remained underexplored. Replicating opponents' behavior benefits players by allowing them to undergo strategic development with direction before matches. However, directly applying existing methods suffers from the inherent hierarchy of the match and the compounding effect due to the turn-based nature of players alternately taking actions. In this paper, we propose RallyNet, a novel hierarchical offline imitation learning model for badminton player behaviors: (i) RallyNet captures players' decision dependencies by modeling decision-making processes as a contextual Markov decision process. (ii) RallyNet leverages experience to generate context as the agent's intent in the rally. (iii) To generate more realistic behavior, RallyNet leverages Geometric Brownian Motion (GBM) to model the interactions between players by introducing a valuable inductive bias for learning player behaviors. In this manner, RallyNet links player intents with GBM-based interaction models, providing an understanding of interactions for sports analytics. We extensively validate RallyNet with the largest available real-world badminton dataset consisting of men's and women's singles, demonstrating its ability to imitate player behaviors. Results reveal RallyNet's superiority over offline imitation learning methods and state-of-the-art turn-based approaches, outperforming them by at least 16% in mean rule-based agent normalization score. Furthermore, we discuss various practical use cases to highlight RallyNet's applicability.
[ "['Kuang-Da Wang' 'Wei-Yao Wang' 'Ping-Chun Hsieh' 'Wen-Chih Peng']" ]
null
null
2403.12417
null
null
http://arxiv.org/abs/2403.12417v1
2024-03-19T04:02:31Z
2024-03-19T04:02:31Z
On Predictive planning and counterfactual learning in active inference
Given the rapid advancement of artificial intelligence, understanding the foundations of intelligent behaviour is increasingly important. Active inference, regarded as a general theory of behaviour, offers a principled approach to probing the basis of sophistication in planning and decision-making. In this paper, we examine two decision-making schemes in active inference based on 'planning' and 'learning from experience'. Furthermore, we also introduce a mixed model that navigates the data-complexity trade-off between these strategies, leveraging the strengths of both to facilitate balanced decision-making. We evaluate our proposed model in a challenging grid-world scenario that requires adaptability from the agent. Additionally, our model provides the opportunity to analyze the evolution of various parameters, offering valuable insights and contributing to an explainable framework for intelligent decision-making.
[ "['Aswin Paul' 'Takuya Isomura' 'Adeel Razi']" ]
null
null
2403.12418
null
null
http://arxiv.org/pdf/2403.12418v4
2024-05-18T11:58:16Z
2024-03-19T04:02:57Z
STG-Mamba: Spatial-Temporal Graph Learning via Selective State Space Model
Spatial-Temporal Graph (STG) data is characterized as dynamic, heterogeneous, and non-stationary, leading to the continuous challenge of spatial-temporal graph learning. In the past few years, various GNN-based methods have been proposed that solely focus on mimicking the relationships among node individuals of the STG network, ignoring the significance of modeling the intrinsic features that exist in the STG system over time. In contrast, modern Selective State Space Models (SSSMs) present a new approach which treats the STG network as a system, and meticulously explores the STG system's dynamic state evolution across the temporal dimension. In this work, we introduce Spatial-Temporal Graph Mamba (STG-Mamba) as the first exploration of leveraging the powerful selective state space models for STG learning by treating the STG network as a system, and employing the Spatial-Temporal Selective State Space Module (ST-S3M) to precisely focus on the selected STG latent features. Furthermore, to strengthen the GNN's ability to model STG data under the setting of selective state space models, we propose Kalman Filtering Graph Neural Networks (KFGN) to dynamically integrate and upgrade the STG embeddings from different temporal granularities through a learnable Kalman-filtering-based statistical approach. Extensive empirical studies are conducted on three benchmark STG forecasting datasets, demonstrating the performance superiority and computational efficiency of STG-Mamba. It not only surpasses existing state-of-the-art methods in terms of STG forecasting performance, but also effectively alleviates the computational bottleneck of large-scale graph networks, reducing the computational cost in FLOPs and test inference time. The implementation code is available at: \url{https://github.com/LincanLi98/STG-Mamba}.
[ "['Lincan Li' 'Hanchen Wang' 'Wenjie Zhang' 'Adelle Coster']" ]
null
null
2403.12422
null
null
http://arxiv.org/pdf/2403.12422v1
2024-03-19T04:09:11Z
2024-03-19T04:09:11Z
Jetfire: Efficient and Accurate Transformer Pretraining with INT8 Data Flow and Per-Block Quantization
Pretraining transformers is generally time-consuming. Fully quantized training (FQT) is a promising approach to speed up pretraining. However, most FQT methods adopt a quantize-compute-dequantize procedure, which often leads to suboptimal speedup and significant performance degradation when used in transformers due to the high memory access overheads and low-precision computations. In this work, we propose Jetfire, an efficient and accurate INT8 training method specific to transformers. Our method features an INT8 data flow to optimize memory access and a per-block quantization method to maintain the accuracy of pretrained transformers. Extensive experiments demonstrate that our INT8 FQT method achieves comparable accuracy to the FP16 training baseline and outperforms the existing INT8 training works for transformers. Moreover, for a standard transformer block, our method offers an end-to-end training speedup of 1.42x and a 1.49x memory reduction compared to the FP16 baseline.
[ "['Haocheng Xi' 'Yuxiang Chen' 'Kang Zhao' 'Kaijun Zheng' 'Jianfei Chen'\n 'Jun Zhu']" ]
null
null
2403.12428
null
null
http://arxiv.org/pdf/2403.12428v1
2024-03-19T04:35:59Z
2024-03-19T04:35:59Z
Transfer in Sequential Multi-armed Bandits via Reward Samples
We consider a sequential stochastic multi-armed bandit problem where the agent interacts with the bandit over multiple episodes. The reward distribution of the arms remains constant throughout an episode but can change over different episodes. We propose an algorithm based on UCB to transfer the reward samples from previous episodes and improve the cumulative regret performance over all the episodes. We provide regret analysis and empirical results for our algorithm, which show significant improvement over the standard UCB algorithm without transfer.
[ "['Rahul N R' 'Vaibhav Katewa']" ]
null
null
2403.12429
null
null
http://arxiv.org/pdf/2403.12429v1
2024-03-19T04:36:41Z
2024-03-19T04:36:41Z
TransformMix: Learning Transformation and Mixing Strategies from Data
Data augmentation improves the generalization power of deep learning models by synthesizing more training samples. Sample-mixing is a popular data augmentation approach that creates additional data by combining existing samples. Recent sample-mixing methods, like Mixup and Cutmix, adopt simple mixing operations to blend multiple inputs. Although such a heuristic approach shows certain performance gains in some computer vision tasks, it mixes the images blindly and does not adapt to different datasets automatically. A mixing strategy that is effective for a particular dataset often does not generalize well to other datasets. If not properly configured, the methods may create misleading mixed images, which jeopardize the effectiveness of sample-mixing augmentations. In this work, we propose an automated approach, TransformMix, to learn better transformation and mixing augmentation strategies from data. In particular, TransformMix applies learned transformations and mixing masks to create compelling mixed images that contain correct and important information for the target tasks. We demonstrate the effectiveness of TransformMix on multiple datasets in transfer learning, classification, object detection, and knowledge distillation settings. Experimental results show that our method achieves better performance as well as efficiency when compared with strong sample-mixing baselines.
[ "['Tsz-Him Cheung' 'Dit-Yan Yeung']" ]
null
null
2403.12448
null
null
http://arxiv.org/pdf/2403.12448v1
2024-03-19T05:17:47Z
2024-03-19T05:17:47Z
Do Generated Data Always Help Contrastive Learning?
Contrastive Learning (CL) has emerged as one of the most successful paradigms for unsupervised visual representation learning, yet it often depends on intensive manual data augmentations. With the rise of generative models, especially diffusion models, the ability to generate realistic images close to the real data distribution has been well recognized. These generated high-quality images have been successfully applied to enhance contrastive representation learning, a technique termed ``data inflation''. However, we find that the generated data (even from a good diffusion model like DDPM) may sometimes even harm contrastive learning. We investigate the causes behind this failure from the perspective of both data inflation and data augmentation. For the first time, we reveal the complementary roles of the two: stronger data inflation should be accompanied by weaker augmentations, and vice versa. We also provide rigorous theoretical explanations for these phenomena by deriving generalization bounds under data inflation. Drawing from these insights, we propose Adaptive Inflation (AdaInf), a purely data-centric strategy without introducing any extra computation cost. On benchmark datasets, AdaInf can bring significant improvements for various contrastive learning methods. Notably, without using external data, AdaInf obtains 94.70% linear accuracy on CIFAR-10 with SimCLR, setting a new record that surpasses many sophisticated methods. Code is available at https://github.com/PKU-ML/adainf.
[ "['Yifei Wang' 'Jizhe Zhang' 'Yisen Wang']" ]
null
null
2403.12459
null
null
http://arxiv.org/pdf/2403.12459v3
2024-04-22T21:28:17Z
2024-03-19T05:30:50Z
Non-negative Contrastive Learning
Deep representations have shown promising performance when transferred to downstream tasks in a black-box manner. Yet, their inherent lack of interpretability remains a significant challenge, as these features are often opaque to human understanding. In this paper, we propose Non-negative Contrastive Learning (NCL), a renaissance of Non-negative Matrix Factorization (NMF) aimed at deriving interpretable features. The power of NCL lies in its enforcement of non-negativity constraints on features, reminiscent of NMF's capability to extract features that align closely with sample clusters. NCL not only aligns mathematically well with an NMF objective but also preserves NMF's interpretability attributes, resulting in a sparser and more disentangled representation compared to standard contrastive learning (CL). Theoretically, we establish guarantees on the identifiability and downstream generalization of NCL. Empirically, we show that these advantages enable NCL to outperform CL significantly on feature disentanglement, feature selection, as well as downstream classification tasks. Finally, we show that NCL can be easily extended to other learning scenarios and benefit supervised learning as well. Code is available at https://github.com/PKU-ML/non_neg.
[ "['Yifei Wang' 'Qi Zhang' 'Yaoyu Guo' 'Yisen Wang']" ]
null
null
2403.12469
null
null
http://arxiv.org/pdf/2403.12469v1
2024-03-19T06:01:02Z
2024-03-19T06:01:02Z
When Do "More Contexts" Help with Sarcasm Recognition?
Sarcasm recognition is challenging because it needs an understanding of the true intention, which is opposite to or different from the literal meaning of the words. Prior work has addressed this challenge by developing a series of methods that provide richer $contexts$, e.g., sentiment or cultural nuances, to models. While shown to be effective individually, no study has systematically evaluated their collective effectiveness. As a result, it remains unclear to what extent additional contexts can improve sarcasm recognition. In this work, we explore the improvements that existing methods bring by incorporating more contexts into a model. To this end, we develop a framework where we can integrate multiple contextual cues and test different approaches. In evaluation with four approaches on three sarcasm recognition benchmarks, we achieve existing state-of-the-art performances and also demonstrate the benefits of sequentially adding more contexts. We also identify inherent drawbacks of using more contexts, highlighting that in the pursuit of even better results, the model may need to adopt societal biases.
[ "['Ojas Nimase' 'Sanghyun Hong']" ]
null
null
2403.12474
null
null
http://arxiv.org/pdf/2403.12474v1
2024-03-19T06:22:58Z
2024-03-19T06:22:58Z
FairSIN: Achieving Fairness in Graph Neural Networks through Sensitive Information Neutralization
Despite the remarkable success of graph neural networks (GNNs) in modeling graph-structured data, like other machine learning models, GNNs are also susceptible to making biased predictions based on sensitive attributes, such as race and gender. For fairness consideration, recent state-of-the-art (SOTA) methods propose to filter out sensitive information from inputs or representations, e.g., edge dropping or feature masking. However, we argue that such filtering-based strategies may also filter out some non-sensitive feature information, leading to a sub-optimal trade-off between predictive performance and fairness. To address this issue, we unveil an innovative neutralization-based paradigm, where additional Fairness-facilitating Features (F3) are incorporated into node features or representations before message passing. The F3 are expected to statistically neutralize the sensitive bias in node representations and provide additional nonsensitive information. We also provide theoretical explanations for our rationale, concluding that F3 can be realized by emphasizing the features of each node's heterogeneous neighbors (neighbors with different sensitive attributes). We name our method FairSIN, and present three implementation variants from both data-centric and model-centric perspectives. Experimental results on five benchmark datasets with three different GNN backbones show that FairSIN significantly improves fairness metrics while maintaining high prediction accuracies.
[ "['Cheng Yang' 'Jixi Liu' 'Yunhe Yan' 'Chuan Shi']" ]
null
null
2403.12481
null
null
http://arxiv.org/pdf/2403.12481v1
2024-03-19T06:36:42Z
2024-03-19T06:36:42Z
TT-BLIP: Enhancing Fake News Detection Using BLIP and Tri-Transformer
Detecting fake news has received a lot of attention. Many previous methods concatenate independently encoded unimodal data, ignoring the benefits of integrated multimodal information. Also, the absence of specialized feature extraction for text and images further limits these methods. This paper introduces an end-to-end model called TT-BLIP that applies the bootstrapping language-image pretraining for unified vision-language understanding and generation (BLIP) for three types of information: BERT and BLIP\textsubscript{Txt} for text, ResNet and BLIP\textsubscript{Img} for images, and bidirectional BLIP encoders for multimodal information. The Multimodal Tri-Transformer fuses tri-modal features using three types of multi-head attention mechanisms, ensuring integrated modalities for enhanced representations and improved multimodal data analysis. The experiments are performed using two fake news datasets, Weibo and Gossipcop. The results indicate TT-BLIP outperforms the state-of-the-art models.
[ "['Eunjee Choi' 'Jong-Kook Kim']" ]
null
null
2403.12486
null
null
http://arxiv.org/pdf/2403.12486v1
2024-03-19T06:43:46Z
2024-03-19T06:43:46Z
NTK-Guided Few-Shot Class Incremental Learning
While anti-amnesia learners for few-shot class-incremental learning (FSCIL) often excel in incremental sessions, they tend to prioritize mitigating knowledge attrition over harnessing the model's potential for knowledge acquisition. In this paper, we delve into the foundations of model generalization in FSCIL through the lens of the Neural Tangent Kernel (NTK). Our primary design focus revolves around ensuring optimal NTK convergence and NTK-related generalization error, serving as the theoretical bedrock for exceptional generalization. To attain globally optimal NTK convergence, we employ a meta-learning mechanism grounded in mathematical principles to guide the optimization process within an expanded network. Furthermore, to reduce the NTK-related generalization error, we commence from the foundational level, optimizing the relevant factors constituting its generalization loss. Specifically, we initiate self-supervised pre-training on the base session to shape the initial network weights. These weights are then carefully refined through curricular alignment, followed by the application of dual NTK regularization tailored specifically for both convolutional and linear layers. Through the combined effects of these measures, our network acquires robust NTK properties, significantly enhancing its foundational generalization. On popular FSCIL benchmark datasets, our NTK-FSCIL surpasses contemporary state-of-the-art approaches, elevating end-session accuracy by 2.9% to 8.7%.
[ "['Jingren Liu' 'Zhong Ji' 'Yanwei Pang' 'YunLong Yu']" ]
null
null
2403.12493
null
null
http://arxiv.org/pdf/2403.12493v1
2024-03-19T07:02:06Z
2024-03-19T07:02:06Z
A Trainable Feature Extractor Module for Deep Neural Networks and Scanpath Classification
Scanpath classification is an area in eye tracking research with possible applications in medicine and manufacturing, as well as in training systems for students in various domains. In this paper we propose a trainable feature extraction module for deep neural networks. The purpose of this module is to transform a scanpath into a feature vector which is directly usable by the deep neural network architecture. Based on the backpropagated error of the deep neural network, the feature extraction module adapts its parameters to improve the classification performance. Therefore, our feature extraction module is jointly trainable with the deep neural network. The motivation for this feature extraction module comes from classical histogram-based approaches, which usually compute distributions over a scanpath. We evaluated our module on three public datasets and compared it to state-of-the-art approaches.
[ "['Wolfgang Fuhl']" ]
null
null
2403.12503
null
null
http://arxiv.org/pdf/2403.12503v1
2024-03-19T07:10:58Z
2024-03-19T07:10:58Z
Securing Large Language Models: Threats, Vulnerabilities and Responsible Practices
Large language models (LLMs) have significantly transformed the landscape of Natural Language Processing (NLP). Their impact extends across a diverse spectrum of tasks, revolutionizing how we approach language understanding and generation. Nevertheless, alongside their remarkable utility, LLMs introduce critical security and risk considerations. These challenges warrant careful examination to ensure responsible deployment and safeguard against potential vulnerabilities. This research paper thoroughly investigates security and privacy concerns related to LLMs from five thematic perspectives: security and privacy concerns, vulnerabilities to adversarial attacks, potential harms caused by misuse of LLMs, mitigation strategies to address these challenges, and the limitations of current strategies. Lastly, the paper recommends promising avenues for future research to enhance the security and risk management of LLMs.
[ "['Sara Abdali' 'Richard Anarfi' 'CJ Barberan' 'Jia He']" ]
null
null
2403.12510
null
null
http://arxiv.org/pdf/2403.12510v1
2024-03-19T07:24:54Z
2024-03-19T07:24:54Z
Generalized Consistency Trajectory Models for Image Manipulation
Diffusion-based generative models excel in unconditional generation, as well as on applied tasks such as image editing and restoration. The success of diffusion models lies in the iterative nature of diffusion: diffusion breaks down the complex process of mapping noise to data into a sequence of simple denoising tasks. Moreover, we are able to exert fine-grained control over the generation process by injecting guidance terms into each denoising step. However, the iterative process is also computationally intensive, often taking from tens up to thousands of function evaluations. Although consistency trajectory models (CTMs) enable traversal between any time points along the probability flow ODE (PFODE) and score inference with a single function evaluation, CTMs only allow translation from Gaussian noise to data. Thus, this work aims to unlock the full potential of CTMs by proposing generalized CTMs (GCTMs), which translate between arbitrary distributions via ODEs. We discuss the design space of GCTMs and demonstrate their efficacy in various image manipulation tasks such as image-to-image translation, restoration, and editing. Code: \url{https://github.com/1202kbs/GCTM}
[ "['Beomsu Kim' 'Jaemin Kim' 'Jeongsol Kim' 'Jong Chul Ye']" ]
null
null
2403.12511
null
null
http://arxiv.org/pdf/2403.12511v1
2024-03-19T07:25:36Z
2024-03-19T07:25:36Z
Forward Gradient-Based Frank-Wolfe Optimization for Memory Efficient Deep Neural Network Training
Training a deep neural network using gradient-based methods necessitates the calculation of gradients at each layer. However, calculating these gradients using backpropagation, or reverse-mode differentiation, requires significant memory consumption, rendering backpropagation an inefficient method for computing gradients. This paper focuses on analyzing the performance of the well-known Frank-Wolfe algorithm, a.k.a. the conditional gradient algorithm, when given access to the forward mode of automatic differentiation to compute gradients. We provide in-depth technical details showing that the proposed algorithm converges to the optimal solution with a sub-linear rate of convergence when given access to the noisy estimate of the true gradient obtained in the forward mode of automatic differentiation, referred to as the Projected Forward Gradient. In contrast, the standard Frank-Wolfe algorithm, when provided with access to the Projected Forward Gradient, fails to converge to the optimal solution. We demonstrate the convergence attributes of our proposed algorithms using a numerical example.
[ "['M. Rostami' 'S. S. Kia']" ]
null
null
2403.12529
null
null
http://arxiv.org/pdf/2403.12529v2
2024-05-22T09:02:33Z
2024-03-19T08:05:49Z
Contextualized Messages Boost Graph Representations
Graph neural networks (GNNs) have gained significant attention in recent years for their ability to process data that may be represented as graphs. This success has prompted several studies to explore the representational capability of GNNs based on the graph isomorphism task. These works inherently assume a countable node feature representation, potentially limiting their applicability. Interestingly, only a few theoretical works study GNNs with uncountable node feature representation. This paper presents a novel perspective on the representational capability of GNNs across all levels - node-level, neighborhood-level, and graph-level - when the space of node feature representation is uncountable. Specifically, it relaxes the injective requirement in previous works by employing an implicit pseudometric distance on the space of input to create a soft-injective function. This allows distinct inputs to produce similar outputs only if the pseudometric deems the inputs to be sufficiently similar on some representation, which is often useful in practice. As a consequence, a novel soft-isomorphic relational graph convolution network (SIR-GCN) that emphasizes non-linear and contextualized transformation of neighborhood feature representations is proposed. A mathematical discussion on the relationship between SIR-GCN and widely used GNNs is then laid out to put the contribution in context, establishing SIR-GCN as a generalization of classical GNN methodologies. Experiments on synthetic and benchmark datasets demonstrate the relative superiority of SIR-GCN, outperforming comparable models in node and graph property prediction tasks.
[ "['Brian Godwin Lim' 'Galvin Brice Lim' 'Renzo Roel Tan' 'Kazushi Ikeda']" ]
null
null
2403.12544
null
null
http://arxiv.org/pdf/2403.12544v1
2024-03-19T08:40:21Z
2024-03-19T08:40:21Z
AffineQuant: Affine Transformation Quantization for Large Language Models
The significant resource requirements associated with Large-scale Language Models (LLMs) have generated considerable interest in the development of techniques aimed at compressing and accelerating neural networks. Among these techniques, Post-Training Quantization (PTQ) has emerged as a subject of considerable interest due to its noteworthy compression efficiency and cost-effectiveness in the context of training. Existing PTQ methods for LLMs limit the optimization scope to scaling transformations between pre- and post-quantization weights. In this paper, we advocate for direct optimization using equivalent Affine transformations in PTQ (AffineQuant). This approach extends the optimization scope and thus significantly reduces quantization errors. Additionally, by employing the corresponding inverse matrix, we can ensure equivalence between the pre- and post-quantization outputs of PTQ, thereby maintaining its efficiency and generalization capabilities. To ensure the invertibility of the transformation during optimization, we further introduce a gradual mask optimization method. This method initially focuses on optimizing the diagonal elements and gradually extends to the other elements. Such an approach aligns with the Lévy-Desplanques theorem, theoretically ensuring invertibility of the transformation. As a result, significant performance improvements are evident across different LLMs on diverse datasets. To illustrate, we attain a C4 perplexity of 15.76 (2.26 lower vs 18.02 in OmniQuant) on the LLaMA2-7B model with W4A4 quantization without overhead. On zero-shot tasks, AffineQuant achieves an average accuracy of 58.61% (1.98 higher vs 56.63 in OmniQuant) when using 4/4-bit quantization for LLaMA-30B, setting a new state-of-the-art benchmark for PTQ in LLMs.
[ "['Yuexiao Ma' 'Huixia Li' 'Xiawu Zheng' 'Feng Ling' 'Xuefeng Xiao'\n 'Rui Wang' 'Shilei Wen' 'Fei Chao' 'Rongrong Ji']" ]
null
null
2403.12553
null
null
http://arxiv.org/pdf/2403.12553v2
2024-04-05T16:28:18Z
2024-03-19T08:56:20Z
Pretraining Codomain Attention Neural Operators for Solving Multiphysics PDEs
Existing neural operator architectures face challenges when solving multiphysics problems with coupled partial differential equations (PDEs), due to complex geometries, interactions between physical variables, and the lack of large amounts of high-resolution training data. To address these issues, we propose Codomain Attention Neural Operator (CoDA-NO), which tokenizes functions along the codomain or channel space, enabling self-supervised learning or pretraining of multiple PDE systems. Specifically, we extend positional encoding, self-attention, and normalization layers to the function space. CoDA-NO can learn representations of different PDE systems with a single model. We evaluate CoDA-NO's potential as a backbone for learning multiphysics PDEs over multiple systems by considering few-shot learning settings. On complex downstream tasks with limited data, such as fluid flow simulations and fluid-structure interactions, we found CoDA-NO to outperform existing methods on the few-shot learning task by over 36%. The code is available at https://github.com/ashiq24/CoDA-NO.
[ "['Md Ashiqur Rahman' 'Robert Joseph George' 'Mogab Elleithy'\n 'Daniel Leibovici' 'Zongyi Li' 'Boris Bonev' 'Colin White'\n 'Julius Berner' 'Raymond A. Yeh' 'Jean Kossaifi' 'Kamyar Azizzadenesheli'\n 'Anima Anandkumar']" ]
null
null
2403.12559
null
null
http://arxiv.org/pdf/2403.12559v1
2024-03-19T09:14:52Z
2024-03-19T09:14:52Z
Confidence Self-Calibration for Multi-Label Class-Incremental Learning
The partial label challenge in Multi-Label Class-Incremental Learning (MLCIL) arises when only the new classes are labeled during training, while past and future labels remain unavailable. This issue leads to a proliferation of false-positive errors due to erroneously high-confidence multi-label predictions, exacerbating catastrophic forgetting within the disjoint label space. In this paper, we aim to refine multi-label confidence calibration in MLCIL and propose a Confidence Self-Calibration (CSC) approach. Firstly, for label relationship calibration, we introduce a class-incremental graph convolutional network that bridges the isolated label spaces by constructing a learnable, dynamically extended label relationship graph. Then, for confidence calibration, we present a max-entropy regularization for each multi-label increment, facilitating confidence self-calibration through the penalization of over-confident output distributions. Our approach attains new state-of-the-art results in MLCIL tasks on both MS-COCO and PASCAL VOC datasets, with the calibration of label confidences confirmed through our methodology.
[ "['Kaile Du' 'Yifan Zhou' 'Fan Lyu' 'Yuyang Li' 'Chen Lu' 'Guangcan Liu']" ]
null
null
2403.12562
null
null
http://arxiv.org/pdf/2403.12562v1
2024-03-19T09:17:18Z
2024-03-19T09:17:18Z
Equity through Access: A Case for Small-scale Deep Learning
The recent advances in deep learning (DL) have been accelerated by access to large-scale data and compute. These large-scale resources have been used to train progressively larger models which are resource intensive in terms of compute, data, energy, and carbon emissions. These costs are becoming a new type of entry barrier to researchers and practitioners with limited access to resources at such scale, particularly in the Global South. In this work, we take a comprehensive look at the landscape of existing DL models for vision tasks and demonstrate their usefulness in settings where resources are limited. To account for the resource consumption of DL models, we introduce a novel measure to estimate the performance per resource unit, which we call the PePR score. Using a diverse family of 131 unique DL architectures (spanning 1M to 130M trainable parameters) and three medical image datasets, we capture trends about the performance-resource trade-offs. In applications like medical image analysis, we argue that small-scale, specialized models are better than striving for large-scale models. Furthermore, we show that using pretrained models can significantly reduce the computational resources and data required. We hope this work will encourage the community to focus on improving AI equity by developing methods and models with smaller resource footprints.
[ "['Raghavendra Selvan' 'Bob Pepin' 'Christian Igel' 'Gabrielle Samuel'\n 'Erik B Dam']" ]
null
null
2403.12588
null
null
http://arxiv.org/abs/2403.12588v2
2024-06-02T17:18:40Z
2024-03-19T09:47:54Z
Machine Learning of the Prime Distribution
In the present work we use maximum entropy methods to derive several theorems in probabilistic number theory, including a version of the Hardy-Ramanujan Theorem. We also provide a theoretical argument explaining the experimental observations of Yang-Hui He about the learnability of primes, and posit that the Erdős-Kac law would be very unlikely to be discovered by current machine learning techniques. Numerical experiments that we perform corroborate our theoretical findings.
[ "['Alexander Kolpakov' 'A. Alistair Rocke']" ]
null
null
2403.12599
null
null
http://arxiv.org/pdf/2403.12599v1
2024-03-19T10:09:41Z
2024-03-19T10:09:41Z
Preventing Eviction-Caused Homelessness through ML-Informed Distribution of Rental Assistance
Rental assistance programs provide individuals with financial assistance to prevent housing instabilities caused by evictions and avert homelessness. Since these programs operate under resource constraints, they must decide who to prioritize. Typically, funding is distributed by a reactive or first-come-first-served allocation process that does not systematically consider risk of future homelessness. We partnered with Allegheny County, PA to explore a proactive allocation approach that prioritizes individuals facing eviction based on their risk of future homelessness. Our ML system that uses state and county administrative data to accurately identify individuals in need of support outperforms simpler prioritization approaches by at least 20% while being fair and equitable across race and gender. Furthermore, our approach would identify 28% of individuals who are overlooked by the current process and end up homeless. Beyond improvements to the rental assistance program in Allegheny County, this study can inform the development of evidence-based decision support tools in similar contexts, including lessons about data needs, model design, evaluation, and field validation.
[ "['Catalina Vajiac' 'Arun Frey' 'Joachim Baumann' 'Abigail Smith'\n 'Kasun Amarasinghe' 'Alice Lai' 'Kit Rodolfa' 'Rayid Ghani']" ]
null
null
2403.12606
null
null
http://arxiv.org/pdf/2403.12606v1
2024-03-19T10:17:26Z
2024-03-19T10:17:26Z
On the Effectiveness of Heterogeneous Ensemble Methods for Re-identification
In this contribution, we introduce a novel ensemble method for the re-identification of industrial entities, using images of chipwood pallets and galvanized metal plates as dataset examples. Our algorithms replace commonly used, complex siamese neural networks with an ensemble of simplified, rudimentary models, providing wider applicability, especially in hardware-restricted scenarios. Each ensemble sub-model uses different types of extracted features of the given data as its input, allowing for the creation of effective ensembles in a fraction of the training duration needed for more complex state-of-the-art models. We reach state-of-the-art performance on our task, with a Rank-1 accuracy of over 77% and a Rank-10 accuracy of over 99%. We introduce five distinct feature extraction approaches and study their combination using different ensemble methods.
[ "['Simon Klüttermann' 'Jérôme Rutinowski' 'Anh Nguyen' 'Britta Grimme'\n 'Moritz Roidl' 'Emmanuel Müller']" ]
null
null
2403.12609
null
null
http://arxiv.org/pdf/2403.12609v1
2024-03-19T10:24:15Z
2024-03-19T10:24:15Z
SUN Team's Contribution to ABAW 2024 Competition: Audio-visual Valence-Arousal Estimation and Expression Recognition
As emotions play a central role in human communication, automatic emotion recognition has attracted increasing attention in the last two decades. While multimodal systems enjoy high performance on lab-controlled data, they are still far from providing ecological validity on non-lab-controlled, namely 'in-the-wild', data. This work investigates audiovisual deep learning approaches to the in-the-wild emotion recognition problem. We particularly explore the effectiveness of architectures based on fine-tuned Convolutional Neural Networks (CNN) and the Public Dimensional Emotion Model (PDEM), for the video and audio modalities, respectively. We compare alternative temporal modeling and fusion strategies using the embeddings from these multi-stage trained modality-specific Deep Neural Networks (DNN). We report results on the AffWild2 dataset under the Affective Behavior Analysis in-the-Wild 2024 (ABAW'24) challenge protocol.
[ "['Denis Dresvyanskiy' 'Maxim Markitantov' 'Jiawei Yu' 'Peitong Li'\n 'Heysem Kaya' 'Alexey Karpov']" ]
null
null
2403.12636
null
null
http://arxiv.org/pdf/2403.12636v1
2024-03-19T11:16:14Z
2024-03-19T11:16:14Z
A Practical Guide to Statistical Distances for Evaluating Generative Models in Science
Generative models are invaluable in many fields of science because of their ability to capture high-dimensional and complicated distributions, such as photo-realistic images, protein structures, and connectomes. How do we evaluate the samples these models generate? This work aims to provide an accessible entry point to understanding popular notions of statistical distances, requiring only foundational knowledge in mathematics and statistics. We focus on four commonly used notions of statistical distances representing different methodologies: Using low-dimensional projections (Sliced-Wasserstein; SW), obtaining a distance using classifiers (Classifier Two-Sample Tests; C2ST), using embeddings through kernels (Maximum Mean Discrepancy; MMD), or neural networks (Fréchet Inception Distance; FID). We highlight the intuition behind each distance and explain their merits, scalability, complexity, and pitfalls. To demonstrate how these distances are used in practice, we evaluate generative models from different scientific domains, namely a model of decision making and a model generating medical images. We showcase that distinct distances can give different results on similar data. Through this guide, we aim to help researchers to use, interpret, and evaluate statistical distances for generative models in science.
[ "['Sebastian Bischoff' 'Alana Darcher' 'Michael Deistler' 'Richard Gao'\n 'Franziska Gerken' 'Manuel Gloeckler' 'Lisa Haxel' 'Jaivardhan Kapoor'\n 'Janne K Lappalainen' 'Jakob H Macke' 'Guy Moss' 'Matthijs Pals'\n 'Felix Pei' 'Rachel Rapp' 'A Erdem Sağtekin' 'Cornelius Schröder'\n 'Auguste Schulz' 'Zinovia Stefanidi' 'Shoji Toyota' 'Linda Ulmer'\n 'Julius Vetter']" ]
null
null
2403.12641
null
null
http://arxiv.org/pdf/2403.12641v1
2024-03-19T11:24:14Z
2024-03-19T11:24:14Z
Automated Contrastive Learning Strategy Search for Time Series
In recent years, Contrastive Learning (CL) has become a predominant representation learning paradigm for time series. Most existing methods in the literature focus on manually building specific Contrastive Learning Strategies (CLS) by human heuristics for certain datasets and tasks. However, manually developing CLS usually requires excessive prior knowledge about the datasets and tasks, e.g., professional cognition of the medical time series in healthcare, as well as huge human labor and massive experiments to determine the detailed learning configurations. In this paper, we present an Automated Machine Learning (AutoML) practice at Microsoft, which automatically learns to contrastively learn representations for various time series datasets and tasks, namely Automated Contrastive Learning (AutoCL). We first construct a principled universal search space of size over 3x10^12, covering data augmentation, embedding transformation, contrastive pair construction and contrastive losses. Further, we introduce an efficient reinforcement learning algorithm, which optimizes CLS from the performance on the validation tasks, to obtain more effective CLS within the space. Experimental results on various real-world tasks and datasets demonstrate that AutoCL could automatically find the suitable CLS for a given dataset and task. From the candidate CLS found by AutoCL on several public datasets/tasks, we compose a transferable Generally Good Strategy (GGS), which has strong performance for other datasets. We also provide empirical analysis as a guidance for future design of CLS.
[ "['Baoyu Jing' 'Yansen Wang' 'Guoxin Sui' 'Jing Hong' 'Jingrui He'\n 'Yuqing Yang' 'Dongsheng Li' 'Kan Ren']" ]
null
null
2403.12646
null
null
http://arxiv.org/pdf/2403.12646v1
2024-03-19T11:30:30Z
2024-03-19T11:30:30Z
Prompt-fused framework for Inductive Logical Query Answering
Answering logical queries on knowledge graphs (KG) poses a significant challenge for machine reasoning. The primary obstacle in this task stems from the inherent incompleteness of KGs. Existing research has predominantly focused on addressing the issue of missing edges in KGs, thereby neglecting another aspect of incompleteness: the emergence of new entities. Furthermore, most of the existing methods tend to reason over each logical operator separately, rather than comprehensively analyzing the query as a whole during the reasoning process. In this paper, we propose a query-aware prompt-fused framework named Pro-QE, which could incorporate existing query embedding methods and address the embedding of emerging entities through contextual information aggregation. Additionally, a query prompt, which is generated by encoding the symbolic query, is introduced to gather information relevant to the query from a holistic perspective. To evaluate the efficacy of our model in the inductive setting, we introduce two new challenging benchmarks. Experimental results demonstrate that our model successfully handles the issue of unseen entities in logical queries. Furthermore, the ablation study confirms the efficacy of the aggregator and prompt components.
[ "['Zezhong Xu' 'Peng Ye' 'Lei Liang' 'Huajun Chen' 'Wen Zhang']" ]
null
null
2403.12650
null
null
http://arxiv.org/pdf/2403.12650v1
2024-03-19T11:34:40Z
2024-03-19T11:34:40Z
Adaptive Multilevel Neural Networks for Parametric PDEs with Error Estimation
To solve high-dimensional parameter-dependent partial differential equations (pPDEs), a neural network architecture is presented. It is constructed to map parameters of the model data to corresponding finite element solutions. To improve training efficiency and to enable control of the approximation error, the network mimics an adaptive finite element method (AFEM). It outputs a coarse grid solution and a series of corrections as produced in an AFEM, allowing a tracking of the error decay over successive layers of the network. The observed errors are measured by a reliable residual based a posteriori error estimator, enabling the reduction to only few parameters for the approximation in the output of the network. This leads to a problem adapted representation of the solution on locally refined grids. Furthermore, each solution of the AFEM is discretized in a hierarchical basis. For the architecture, convolutional neural networks (CNNs) are chosen. The hierarchical basis then allows to handle sparse images for finely discretized meshes. Additionally, as corrections on finer levels decrease in amplitude, i.e., importance for the overall approximation, the accuracy of the network approximation is allowed to decrease successively. This can either be incorporated in the number of generated high fidelity samples used for training or the size of the network components responsible for the fine grid outputs. The architecture is described and preliminary numerical examples are presented.
[ "['Janina E. Schütte' 'Martin Eigel']" ]
null
null
2403.12659
null
null
http://arxiv.org/pdf/2403.12659v2
2024-03-28T14:19:21Z
2024-03-19T11:49:08Z
Graph Neural Networks for Carbon Dioxide Adsorption Prediction in Aluminium-Exchanged Zeolites
The ability to efficiently predict adsorption properties of zeolites can be of large benefit in accelerating the design process of novel materials. The existing configuration space for these materials is wide, while existing molecular simulation methods are computationally expensive. In this work, we propose a model which is 4 to 5 orders of magnitude faster at predicting adsorption properties than molecular simulations. To validate the model, we generated datasets containing various aluminium configurations for the MOR, MFI, RHO and ITW zeolites along with their heats of adsorption and Henry coefficients for CO$_2$, obtained from Monte Carlo simulations. The predictions obtained from the Machine Learning model are in agreement with the values obtained from the Monte Carlo simulations, confirming that the model can be used for property prediction. Furthermore, we show that the model can be used for identifying adsorption sites. Finally, we evaluate the capability of our model for generating novel zeolite configurations by using it in combination with a genetic algorithm.
[ "['Marko Petković' 'José Manuel Vicent-Luna' 'Vlado Menkovski'\n 'Sofía Calero']" ]
null
null
2403.12664
null
null
http://arxiv.org/pdf/2403.12664v1
2024-03-19T11:56:21Z
2024-03-19T11:56:21Z
Deciphering AutoML Ensembles: cattleia's Assistance in Decision-Making
In many applications, model ensembling proves to be better than a single predictive model. Hence, it is the most common post-processing technique in Automated Machine Learning (AutoML). The most popular frameworks use ensembles at the expense of reducing the interpretability of the final models. In our work, we propose cattleia - an application that deciphers the ensembles for regression, multiclass, and binary classification tasks. This tool works with models built by three AutoML packages: auto-sklearn, AutoGluon, and FLAML. The given ensemble is analyzed from different perspectives. We conduct a predictive performance investigation through evaluation metrics of the ensemble and its component models. We extend the validation perspective by introducing new measures to assess the diversity and complementarity of the model predictions. Moreover, we apply explainable artificial intelligence (XAI) techniques to examine the importance of variables. Summarizing obtained insights, we can investigate and adjust the weights with a modification tool to tune the ensemble in the desired way. The application provides the aforementioned aspects through dedicated interactive visualizations, making it accessible to a diverse audience. We believe the cattleia can support users in decision-making and deepen the comprehension of AutoML frameworks.
[ "['Anna Kozak' 'Dominik Kędzierski' 'Jakub Piwko' 'Malwina Wojewoda'\n 'Katarzyna Woźnica']" ]
null
null
2403.12672
null
null
http://arxiv.org/pdf/2403.12672v1
2024-03-19T12:13:52Z
2024-03-19T12:13:52Z
Improving Interpretability of Scores in Anomaly Detection Based on Gaussian-Bernoulli Restricted Boltzmann Machine
Gaussian-Bernoulli restricted Boltzmann machines (GBRBMs) are often used for semi-supervised anomaly detection, where they are trained using only normal data points. In GBRBM-based anomaly detection, normal and anomalous data are classified based on a score that is identical to an energy function of the marginal GBRBM. However, the classification threshold is difficult to set to an appropriate value, as this score cannot be interpreted. In this study, we propose a measure that improves the score's interpretability based on its cumulative distribution, and establish a guideline for setting the threshold using this interpretable measure. The results of numerical experiments show that the guideline is reasonable when setting the threshold solely using normal data points. Moreover, because identifying the measure involves a computationally infeasible evaluation of the minimum score value, we also propose an evaluation method for the minimum score based on simulated annealing, which is widely used for optimization problems. The proposed evaluation method was also validated using numerical experiments.
[ "['Kaiji Sekimoto' 'Muneki Yasuda']" ]
null
null
2403.12687
null
null
http://arxiv.org/pdf/2403.12687v2
2024-03-29T12:45:27Z
2024-03-19T12:45:52Z
Audio-Visual Compound Expression Recognition Method based on Late Modality Fusion and Rule-based Decision
This paper presents the results of the SUN team for the Compound Expressions Recognition Challenge of the 6th ABAW Competition. We propose a novel audio-visual method for compound expression recognition. Our method relies on emotion recognition models that fuse modalities at the emotion probability level, while decisions regarding the prediction of compound expressions are based on predefined rules. Notably, our method does not use any training data specific to the target task. Thus, the problem is a zero-shot classification task. The method is evaluated in multi-corpus training and cross-corpus validation setups. Our proposed method achieves an F1-score of 22.01% on the C-EXPR-DB test subset. Our findings from the challenge demonstrate that the proposed method can potentially form a basis for developing intelligent tools for annotating audio-visual data in the context of humans' basic and compound emotions.
[ "['Elena Ryumina' 'Maxim Markitantov' 'Dmitry Ryumin' 'Heysem Kaya'\n 'Alexey Karpov']" ]
null
null
2403.12688
null
null
http://arxiv.org/pdf/2403.12688v1
2024-03-19T12:47:43Z
2024-03-19T12:47:43Z
SEVEN: Pruning Transformer Model by Reserving Sentinels
Large-scale Transformer models (TM) have demonstrated outstanding performance across various tasks. However, their considerable parameter size restricts their applicability, particularly on mobile devices. Due to the dynamic and intricate nature of gradients in TM compared to Convolutional Neural Networks, commonly used pruning methods tend to retain weights with larger gradient noise. This results in pruned models that are sensitive to sparsity and datasets, exhibiting suboptimal performance. Symbolic Descent (SD) is a general approach for training and fine-tuning TM. In this paper, we attempt to describe the noisy batch gradient sequences on TM through the cumulative process of SD. We utilize this design to dynamically assess the importance scores of weights. We introduce SEVEN, which particularly favors weights with consistently high sensitivity, i.e., weights with small gradient noise; such weights tend to be preserved by SEVEN. Extensive experiments on various TM in natural language, question-answering, and image classification domains are conducted to validate the effectiveness of SEVEN. The results demonstrate significant improvements of SEVEN in multiple pruning scenarios and across different sparsity levels. Additionally, SEVEN exhibits robust performance under various fine-tuning strategies. The code is publicly available at https://github.com/xiaojinying/SEVEN.
[ "['Jinying Xiao' 'Ping Li' 'Jie Nie' 'Zhe Tang']" ]
null
null
2403.12690
null
null
http://arxiv.org/pdf/2403.12690v2
2024-03-20T11:06:34Z
2024-03-19T12:49:09Z
LNPT: Label-free Network Pruning and Training
Pruning before training enables the deployment of neural networks on smart devices. By retaining weights conducive to generalization, pruned networks can be accommodated on resource-constrained smart devices. It is commonly held that the distance on weight norms between the initialized and the fully-trained networks correlates with generalization performance. However, as we have uncovered, this metric is inconsistent with generalization during training, which poses an obstacle to determining the pruned structures on smart devices in advance. In this paper, we introduce the concept of the learning gap, emphasizing its accurate correlation with generalization. Experiments show that the learning gap, in the form of feature maps from the penultimate layer of networks, aligns with variations in generalization performance. We propose a novel learning framework, LNPT, which enables mature networks on the cloud to provide online guidance for network pruning and learning on smart devices with unlabeled data. Our results demonstrate the superiority of this approach over supervised training.
[ "['Jinying Xiao' 'Ping Li' 'Zhe Tang' 'Jie Nie']" ]
null
null
2403.12695
null
null
http://arxiv.org/pdf/2403.12695v1
2024-03-19T12:52:38Z
2024-03-19T12:52:38Z
Federated Semi-supervised Learning for Medical Image Segmentation with intra-client and inter-client Consistency
Medical image segmentation plays a vital role in clinical disease diagnosis and medical image analysis. However, labeling medical images for segmentation tasks is difficult due to the indispensable domain expertise of radiologists. Furthermore, considering the privacy and sensitivity of medical images, it is impractical to build a centralized segmentation dataset from different medical institutions. Federated learning aims to train a shared model of isolated clients without local data exchange, which aligns well with the scarcity and privacy characteristics of medical data. To ease the labeling burden, many advanced semi-supervised methods have been proposed in a centralized data setting. As for federated learning, how to conduct semi-supervised learning under this distributed scenario is worth investigating. In this work, we propose a novel federated semi-supervised learning framework for medical image segmentation. Intra-client and inter-client consistency learning are introduced to smooth predictions at the data level and avoid confirmation bias of local models. They are achieved with the assistance of a Variational Autoencoder (VAE) trained collaboratively by clients. The added VAE model plays three roles: 1) extracting latent low-dimensional features of all labeled and unlabeled data; 2) performing a novel type of data augmentation in calculating the intra-client consistency loss; 3) utilizing its own generative ability to conduct inter-client consistency distillation. The proposed framework is compared with other federated semi-supervised or self-supervised learning methods. The experimental results illustrate that our method outperforms the state-of-the-art method while avoiding a lot of computation and communication overhead.
[ "['Yubin Zheng' 'Peng Tang' 'Tianjie Ju' 'Weidong Qiu' 'Bo Yan']" ]
null
null
2403.12710
null
null
http://arxiv.org/pdf/2403.12710v1
2024-03-19T13:17:26Z
2024-03-19T13:17:26Z
Selective, Interpretable, and Motion Consistent Privacy Attribute Obfuscation for Action Recognition
Concerns for the privacy of individuals captured in public imagery have led to privacy-preserving action recognition. Existing approaches often suffer from issues arising through obfuscation being applied globally and a lack of interpretability. Global obfuscation hides privacy sensitive regions, but also contextual regions important for action recognition. Lack of interpretability erodes trust in these new technologies. We highlight the limitations of current paradigms and propose a solution: Human selected privacy templates that yield interpretability by design, an obfuscation scheme that selectively hides attributes and also induces temporal consistency, which is important in action recognition. Our approach is architecture agnostic and directly modifies input imagery, while existing approaches generally require architecture training. Our approach offers more flexibility, as no retraining is required, and outperforms alternatives on three widely used datasets.
[ "['Filip Ilic' 'He Zhao' 'Thomas Pock' 'Richard P. Wildes']" ]
null
null
2403.12712
null
null
http://arxiv.org/pdf/2403.12712v1
2024-03-19T13:19:41Z
2024-03-19T13:19:41Z
Addressing Source Scale Bias via Image Warping for Domain Adaptation
In visual recognition, scale bias is a key challenge due to the imbalance of object and image size distributions inherent in real scene datasets. Conventional solutions involve injecting scale-invariance priors, oversampling the dataset at different scales during training, or adjusting scale at inference. While these strategies mitigate scale bias to some extent, their ability to adapt across diverse datasets is limited. Besides, they increase computational load during training and latency during inference. In this work, we use adaptive attentional processing -- oversampling salient object regions by warping images in-place during training. Discovering that shifting the source scale distribution improves backbone features, we developed an instance-level warping guidance aimed at object region sampling to mitigate source scale bias in domain adaptation. Our approach improves adaptation across geographies, lighting and weather conditions, and is agnostic to the task, domain adaptation algorithm, saliency guidance, and underlying model architecture. Highlights include +6.1 mAP50 for BDD100K Clear $\rightarrow$ DENSE Foggy, +3.7 mAP50 for BDD100K Day $\rightarrow$ Night, +3.0 mAP50 for BDD100K Clear $\rightarrow$ Rainy, and +6.3 mIoU for Cityscapes $\rightarrow$ ACDC. Our approach adds minimal memory during training and has no additional latency at inference time. Please see the Appendix for more results and analysis.
[ "['Shen Zheng' 'Anurag Ghosh' 'Srinivasa G. Narasimhan']" ]
null
null
2403.12719
null
null
http://arxiv.org/pdf/2403.12719v1
2024-03-19T13:28:03Z
2024-03-19T13:28:03Z
Bilevel Hypergraph Networks for Multi-Modal Alzheimer's Diagnosis
Early detection of the precursor stages of Alzheimer's disease is imperative for significantly enhancing patient outcomes and quality of life. This challenge is tackled through a semi-supervised multi-modal diagnosis framework. In particular, we introduce a new hypergraph framework that enables higher-order relations between multi-modal data, while utilising minimal labels. We first introduce a bilevel hypergraph optimisation framework that jointly learns a graph augmentation policy and a semi-supervised classifier. This dual learning strategy is hypothesised to enhance the robustness and generalisation capabilities of the model by fostering new pathways for information propagation. Secondly, we introduce a novel strategy for generating pseudo-labels more effectively via a gradient-driven flow. Our experimental results demonstrate the superior performance of our framework over current techniques in diagnosing Alzheimer's disease.
[ "['Angelica I. Aviles-Rivero' 'Chun-Wun Cheng' 'Zhongying Deng'\n 'Zoe Kourtzi' 'Carola-Bibiane Schönlieb']" ]
null
null
2403.12729
null
null
http://arxiv.org/pdf/2403.12729v1
2024-03-18T17:46:07Z
2024-03-18T17:46:07Z
Posterior Uncertainty Quantification in Neural Networks using Data Augmentation
In this paper, we approach the problem of uncertainty quantification in deep learning through a predictive framework, which captures uncertainty in model parameters by specifying our assumptions about the predictive distribution of unseen future data. Under this view, we show that deep ensembling (Lakshminarayanan et al., 2017) is a fundamentally mis-specified model class, since it assumes that future data are supported on existing observations only -- a situation rarely encountered in practice. To address this limitation, we propose MixupMP, a method that constructs a more realistic predictive distribution using popular data augmentation techniques. MixupMP operates as a drop-in replacement for deep ensembles, where each ensemble member is trained on a random simulation from this predictive distribution. Grounded in the recently-proposed framework of Martingale posteriors (Fong et al., 2023), MixupMP returns samples from an implicitly defined Bayesian posterior. Our empirical analysis showcases that MixupMP achieves superior predictive performance and uncertainty quantification on various image classification datasets, when compared with existing Bayesian and non-Bayesian approaches.
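Since MixupMP trains each ensemble member on a random simulation from a mixup-based predictive distribution, one such draw can be sketched as below; the Beta hyperparameter and soft-label handling are assumptions for illustration:

```python
import numpy as np
import torch

def mixup_sample(x: torch.Tensor, y: torch.Tensor, alpha: float = 1.0):
    """One random draw from a mixup-style predictive distribution: convex
    combinations of observed pairs stand in for plausible unseen future data."""
    lam = float(np.random.beta(alpha, alpha))
    idx = torch.randperm(x.size(0))
    x_mix = lam * x + (1 - lam) * x[idx]
    y_mix = lam * y + (1 - lam) * y[idx]   # soft labels; one-hot y assumed
    return x_mix, y_mix

# Drop-in replacement for deep ensembles: each member is trained on its own
# simulated dataset, e.g. members[i].fit(*mixup_sample(x_train, y_train)).
```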
[ "['Luhuan Wu' 'Sinead Williamson']" ]
null
null
2403.12732
null
null
http://arxiv.org/pdf/2403.12732v1
2024-03-19T13:47:35Z
2024-03-19T13:47:35Z
Tighter Confidence Bounds for Sequential Kernel Regression
Confidence bounds are an essential tool for rigorously quantifying the uncertainty of predictions. In this capacity, they can inform the exploration-exploitation trade-off and form a core component in many sequential learning and decision-making algorithms. Tighter confidence bounds give rise to algorithms with better empirical performance and better performance guarantees. In this work, we use martingale tail bounds and finite-dimensional reformulations of infinite-dimensional convex programs to establish new confidence bounds for sequential kernel regression. We prove that our new confidence bounds are always tighter than existing ones in this setting. We apply our confidence bounds to the kernel bandit problem, where future actions depend on the previous history. When our confidence bounds replace existing ones, the KernelUCB (GP-UCB) algorithm has better empirical performance, a matching worst-case performance guarantee and comparable computational cost. Our new confidence bounds can be used as a generic tool to design improved algorithms for other kernelised learning and decision-making problems.
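For context, a generic KernelUCB arm-selection step looks like the sketch below; the paper's tighter confidence bounds would enter solely through the width multiplier `beta`. All names are illustrative, not the authors' code:

```python
import numpy as np

def kernel_ucb_choose(K, y, k_cross, k_self, lam=1.0, beta=2.0):
    """Generic KernelUCB step: a kernel-ridge mean plus a confidence width.
    K: Gram matrix of past actions; y: past rewards;
    k_cross[i]: kernel vector k(a_i, x_1..n) for candidate arm a_i;
    k_self[i]: k(a_i, a_i). Tighter bounds shrink the effective beta."""
    A_inv = np.linalg.inv(K + lam * np.eye(K.shape[0]))
    ucbs = []
    for k_a, kaa in zip(k_cross, k_self):
        mean = k_a @ A_inv @ y
        var = max(kaa - k_a @ A_inv @ k_a, 0.0)   # clip numerical negatives
        ucbs.append(mean + beta * np.sqrt(var))
    return int(np.argmax(ucbs))                   # index of the chosen arm
```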
[ "['Hamish Flynn' 'David Reeb']" ]
null
null
2403.12764
null
null
http://arxiv.org/pdf/2403.12764v1
2024-03-19T14:30:56Z
2024-03-19T14:30:56Z
Neural Parameter Regression for Explicit Representations of PDE Solution Operators
We introduce Neural Parameter Regression (NPR), a novel framework specifically developed for learning solution operators in Partial Differential Equations (PDEs). Tailored for operator learning, this approach surpasses traditional DeepONets (Lu et al., 2021) by employing Physics-Informed Neural Network (PINN, Raissi et al., 2019) techniques to regress Neural Network (NN) parameters. By parametrizing each solution based on specific initial conditions, it effectively approximates a mapping between function spaces. Our method enhances parameter efficiency by incorporating low-rank matrices, thereby boosting computational efficiency and scalability. The framework shows remarkable adaptability to new initial and boundary conditions, allowing for rapid fine-tuning and inference, even in cases of out-of-distribution examples.
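A hedged sketch of the NPR idea of regressing low-rank network parameters from an encoded initial condition; the trunk architecture, sizes, and conditioning encoder are assumptions, not the paper's configuration:

```python
import torch
import torch.nn as nn

class LowRankParamRegressor(nn.Module):
    """Hypothetical NPR-style sketch: map an encoded initial condition to the
    low-rank factors (U, V) of a target network's weight matrix W ~= U @ V."""
    def __init__(self, cond_dim, out_rows, out_cols, rank=8):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(cond_dim, 128), nn.Tanh())
        self.to_u = nn.Linear(128, out_rows * rank)
        self.to_v = nn.Linear(128, rank * out_cols)
        self.shape = (out_rows, out_cols, rank)

    def forward(self, cond):
        r, c, k = self.shape
        h = self.trunk(cond)
        U = self.to_u(h).view(-1, r, k)
        V = self.to_v(h).view(-1, k, c)
        return U @ V   # batched weight matrices for the solution network
```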
[ "['Konrad Mundinger' 'Max Zimmer' 'Sebastian Pokutta']" ]
null
null
2403.12818
null
null
http://arxiv.org/pdf/2403.12818v1
2024-03-19T15:17:23Z
2024-03-19T15:17:23Z
Dynamic Survival Analysis for Early Event Prediction
This study advances Early Event Prediction (EEP) in healthcare through Dynamic Survival Analysis (DSA), offering a novel approach by integrating risk localization into alarm policies to enhance clinical event metrics. By adapting and evaluating DSA models against traditional EEP benchmarks, our research demonstrates their ability to match EEP models on a time-step level and significantly improve event-level metrics through a new alarm prioritization scheme (up to 11% AuPRC difference). This approach represents a significant step forward in predictive healthcare, providing a more nuanced and actionable framework for early event prediction and management.
[ "['Hugo Yèche' 'Manuel Burger' 'Dinara Veshchezerova' 'Gunnar Rätsch']" ]
null
null
2403.12820
null
null
http://arxiv.org/pdf/2403.12820v2
2024-03-27T07:35:47Z
2024-03-19T15:21:00Z
A Physics-embedded Deep Learning Framework for Cloth Simulation
Delicate cloth simulations have long been desired in computer graphics. Various methods have been proposed to improve engaged force interactions, collision handling, and numerical integration. Deep learning has the potential to achieve fast and real-time simulation, but common neural network structures often demand many parameters to capture cloth dynamics. This paper proposes a physics-embedded learning framework that directly encodes physical features of cloth simulation. A convolutional neural network is used to represent spatial correlations of the mass-spring system, after which three branches are designed to learn the linear, nonlinear, and time-derivative features of cloth physics. The framework can also integrate with other external forces and collision handling through either traditional simulators or sub-networks. The model is tested across different cloth animation cases without training on new data. Agreement with baselines and predictive realism successfully validate its generalization ability. The inference efficiency of the proposed model also surpasses that of traditional physics simulation. The framework is additionally designed to integrate easily with other visual refinement techniques, such as wrinkle carving, which leaves significant room to incorporate prevailing machine learning techniques in 3D cloth animation.
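A minimal sketch of the described three-branch design, with all layer choices assumed for illustration (the paper's exact architecture may differ):

```python
import torch
import torch.nn as nn

class PhysicsEmbeddedCloth(nn.Module):
    """Hypothetical sketch: a shared CNN encodes the mass-spring grid, then
    separate heads model linear, nonlinear, and time-derivative contributions."""
    def __init__(self, ch=3, hidden=32):
        super().__init__()
        self.encoder = nn.Conv2d(ch, hidden, 3, padding=1)          # spatial correlations
        self.linear = nn.Conv2d(ch, ch, 3, padding=1, bias=False)   # linear spring-like operator
        self.nonlinear = nn.Sequential(
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.Tanh(),
            nn.Conv2d(hidden, ch, 1))
        self.time_deriv = nn.Conv2d(2 * ch, ch, 1)                  # from (x_t, x_{t-1})

    def forward(self, x_t, x_prev, dt=0.01):
        h = torch.relu(self.encoder(x_t))
        vel = self.time_deriv(torch.cat([x_t, x_prev], dim=1))
        # Explicit-Euler-style update combining the three physics branches.
        return x_t + dt * (self.linear(x_t) + self.nonlinear(h) + vel)
```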
[ "['Zhiwei Zhao']" ]
null
null
2403.12821
null
null
http://arxiv.org/pdf/2403.12821v2
2024-03-21T10:02:39Z
2024-03-19T15:21:10Z
FlowerFormer: Empowering Neural Architecture Encoding using a Flow-aware Graph Transformer
The success of a specific neural network architecture is closely tied to the dataset and task it tackles; there is no one-size-fits-all solution. Thus, considerable efforts have been made to quickly and accurately estimate the performances of neural architectures, without full training or evaluation, for given tasks and datasets. Neural architecture encoding has played a crucial role in this estimation, and graph-based methods, which treat an architecture as a graph, have shown prominent performance. For enhanced representation learning of neural architectures, we introduce FlowerFormer, a powerful graph transformer that incorporates the information flows within a neural architecture. FlowerFormer consists of two key components: (a) bidirectional asynchronous message passing, inspired by the flows; (b) global attention built on flow-based masking. Our extensive experiments demonstrate the superiority of FlowerFormer over existing neural encoding methods, and its effectiveness extends beyond computer vision models to include graph neural networks and automatic speech recognition models. Our code is available at http://github.com/y0ngjaenius/CVPR2024_FLOWERFormer.
[ "['Dongyeong Hwang' 'Hyunju Kim' 'Sunwoo Kim' 'Kijung Shin']" ]
null
null
2403.12830
null
null
http://arxiv.org/pdf/2403.12830v2
2024-04-30T23:20:41Z
2024-03-19T15:37:27Z
Towards Lifecycle Unlearning Commitment Management: Measuring Sample-level Approximate Unlearning Completeness
By adopting a more flexible definition of unlearning and adjusting the model distribution to simulate training without the targeted data, approximate machine unlearning provides a less resource-demanding alternative to the more laborious exact unlearning methods. Yet, the unlearning completeness of target samples -- even when the approximate algorithms are executed faithfully without external threats -- remains largely unexamined, raising questions about those approximate algorithms' ability to fulfill their commitment to unlearning during the lifecycle. In this paper, we introduce the task of Lifecycle Unlearning Commitment Management (LUCM) for approximate unlearning and outline its primary challenges. We propose an efficient metric designed to assess the sample-level unlearning completeness. Our empirical results demonstrate its superiority over membership inference techniques in two key areas: the strong correlation of its measurements with unlearning completeness across various unlearning tasks, and its computational efficiency, making it suitable for real-time applications. Additionally, we show that this metric is able to serve as a tool for monitoring unlearning anomalies throughout the unlearning lifecycle, including both under-unlearning and over-unlearning. We apply this metric to evaluate the unlearning commitments of current approximate algorithms. Our analysis, conducted across multiple unlearning benchmarks, reveals that these algorithms inconsistently fulfill their unlearning commitments due to two main issues: 1) unlearning new data can significantly affect the unlearning utility of previously requested data, and 2) approximate algorithms fail to ensure equitable unlearning utility across different groups. These insights emphasize the crucial importance of LUCM throughout the unlearning lifecycle. We will soon open-source our newly developed benchmark.
[ "['Cheng-Long Wang' 'Qi Li' 'Zihang Xiang' 'Yinzhi Cao' 'Di Wang']" ]
null
null
2403.12844
null
null
http://arxiv.org/pdf/2403.12844v2
2024-03-20T09:06:08Z
2024-03-19T15:51:21Z
MELTing point: Mobile Evaluation of Language Transformers
Transformers have revolutionized the machine learning landscape, gradually making their way into everyday tasks and equipping our computers with ``sparks of intelligence''. However, their runtime requirements have prevented them from being broadly deployed on mobile. As personal devices become increasingly powerful and prompt privacy becomes an ever more pressing issue, we explore the current state of mobile execution of Large Language Models (LLMs). To achieve this, we have created our own automation infrastructure, MELT, which supports the headless execution and benchmarking of LLMs on device, supporting different models, devices and frameworks, including Android, iOS and Nvidia Jetson devices. We evaluate popular instruction fine-tuned LLMs and leverage different frameworks to measure their end-to-end and granular performance, tracing their memory and energy requirements along the way. Our analysis is the first systematic study of on-device LLM execution, quantifying performance, energy efficiency and accuracy across various state-of-the-art models, and showcases the state of on-device intelligence in the era of hyperscale models. Results highlight the performance heterogeneity across targets and corroborate that LLM inference is largely memory-bound. Quantization drastically reduces memory requirements and renders execution viable, but at a non-negligible accuracy cost. Drawing from its energy footprint and thermal behavior, the continuous execution of LLMs remains elusive, as both factors negatively affect user experience. Lastly, our experience shows that the ecosystem is still in its infancy, and algorithmic as well as hardware breakthroughs can significantly shift the execution cost. We expect NPU acceleration and framework-hardware co-design to be the biggest bets towards efficient standalone execution, with the alternative of offloading tailored towards edge deployments.
[ "['Stefanos Laskaridis' 'Kleomenis Katevas' 'Lorenzo Minto' 'Hamed Haddadi']" ]
null
null
2403.12847
null
null
http://arxiv.org/pdf/2403.12847v3
2024-03-28T11:46:02Z
2024-03-19T15:54:38Z
Policy Bifurcation in Safe Reinforcement Learning
Safe reinforcement learning (RL) offers advanced solutions to constrained optimal control problems. Existing studies in safe RL implicitly assume continuity in policy functions, where policies map states to actions in a smooth, uninterrupted manner; however, our research finds that in some scenarios the feasible policy should be discontinuous or multi-valued, and interpolating between discontinuous local optima inevitably leads to constraint violations. We are the first to identify the generating mechanism of such a phenomenon, and we employ topological analysis to rigorously prove the existence of policy bifurcation in safe RL, which corresponds to the contractibility of the reachable tuple. Our theorem reveals that in scenarios where the obstacle-free state space is non-simply connected, a feasible policy is required to be bifurcated, meaning its output action needs to change abruptly in response to the varying state. To train such a bifurcated policy, we propose a safe RL algorithm called multimodal policy optimization (MUPO), which utilizes a Gaussian mixture distribution as the policy output. The bifurcated behavior can be achieved by selecting the Gaussian component with the highest mixing coefficient. Besides, MUPO also integrates spectral normalization and forward KL divergence to enhance the policy's capability of exploring different modes. Experiments with vehicle control tasks show that our algorithm successfully learns the bifurcated policy and ensures satisfactory safety, while a continuous policy suffers from inevitable constraint violations.
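A sketch of a Gaussian-mixture policy head in the spirit of MUPO, where deployment selects the component with the highest mixing coefficient; network sizes are assumptions, and spectral normalization and the forward KL term are omitted:

```python
import torch
import torch.nn as nn

class MixturePolicy(nn.Module):
    """Gaussian-mixture policy head sketch: at deployment the component with
    the highest mixing coefficient is chosen, so the action can jump abruptly
    as the argmax component switches with the state (the bifurcated behavior)."""
    def __init__(self, obs_dim, act_dim, n_comp=4, hidden=64):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.logits = nn.Linear(hidden, n_comp)             # mixing coefficients
        self.means = nn.Linear(hidden, n_comp * act_dim)    # per-component means
        self.n_comp, self.act_dim = n_comp, act_dim

    def forward(self, obs):
        h = self.trunk(obs)
        w = torch.softmax(self.logits(h), dim=-1)
        mu = self.means(h).view(-1, self.n_comp, self.act_dim)
        best = w.argmax(dim=-1)                              # bifurcated choice
        return mu[torch.arange(mu.shape[0]), best]
```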
[ "['Wenjun Zou' 'Yao Lyu' 'Jie Li' 'Yujie Yang' 'Shengbo Eben Li'\n 'Jingliang Duan' 'Xianyuan Zhan' 'Jingjing Liu' 'Yaqin Zhang'\n 'Keqiang Li']" ]
null
null
2403.12856
null
null
http://arxiv.org/pdf/2403.12856v1
2024-03-19T16:01:25Z
2024-03-19T16:01:25Z
Equivariant Ensembles and Regularization for Reinforcement Learning in Map-based Path Planning
In reinforcement learning (RL), exploiting environmental symmetries can significantly enhance efficiency, robustness, and performance. However, ensuring that the deep RL policy and value networks are respectively equivariant and invariant to exploit these symmetries is a substantial challenge. Related works try to design networks that are equivariant and invariant by construction, limiting them to a very restricted library of components, which in turn hampers the expressiveness of the networks. This paper proposes a method to construct equivariant policies and invariant value functions without specialized neural network components, which we term equivariant ensembles. We further add a regularization term for adding inductive bias during training. In a map-based path planning case study, we show how equivariant ensembles and regularization benefit sample efficiency and performance.
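Because the construction needs no specialized layers, an equivariant ensemble can be sketched as plain orbit-averaging over a finite symmetry group; the transform callables below are assumed interfaces, not the authors' API:

```python
import torch

def equivariant_policy(policy, obs, transforms, inv_actions):
    """Equivariant ensemble sketch: transform the observation by each group
    element g, query the base policy, map the action back with g^{-1}, and
    average. The averaged policy is equivariant by construction."""
    actions = [g_inv(policy(g(obs))) for g, g_inv in zip(transforms, inv_actions)]
    return torch.stack(actions).mean(dim=0)

def invariant_value(value_fn, obs, transforms):
    """Averaging the value over the group orbit makes the estimate invariant."""
    return torch.stack([value_fn(g(obs)) for g in transforms]).mean(dim=0)
```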
[ "['Mirco Theile' 'Hongpeng Cao' 'Marco Caccamo'\n 'Alberto L. Sangiovanni-Vincentelli']" ]
null
null
2403.12859
null
null
http://arxiv.org/pdf/2403.12859v1
2024-03-19T16:03:03Z
2024-03-19T16:03:03Z
Primal Methods for Variational Inequality Problems with Functional Constraints
Constrained variational inequality problems are recognized for their broad applications across various fields including machine learning and operations research. First-order methods have emerged as the standard approach for solving these problems due to their simplicity and scalability. However, they typically rely on projection or linear minimization oracles to navigate the feasible set, which becomes computationally expensive in practical scenarios featuring multiple functional constraints. Existing efforts to tackle such functional constrained variational inequality problems have centered on primal-dual algorithms grounded in the Lagrangian function. These algorithms along with their theoretical analysis often require the existence and prior knowledge of the optimal Lagrange multipliers. In this work, we propose a simple primal method, termed Constrained Gradient Method (CGM), for addressing functional constrained variational inequality problems, without necessitating any information on the optimal Lagrange multipliers. We establish a non-asymptotic convergence analysis of the algorithm for variational inequality problems with monotone operators under smooth constraints. Remarkably, our algorithms match the complexity of projection-based methods in terms of operator queries for both monotone and strongly monotone settings, while utilizing significantly cheaper oracles based on quadratic programming. Furthermore, we provide several numerical examples to evaluate the efficacy of our algorithms.
[ "['Liang Zhang' 'Niao He' 'Michael Muehlebach']" ]
null
null
2403.12861
null
null
http://arxiv.org/pdf/2403.12861v1
2024-03-19T16:05:51Z
2024-03-19T16:05:51Z
D-Cubed: Latent Diffusion Trajectory Optimisation for Dexterous Deformable Manipulation
Mastering dexterous robotic manipulation of deformable objects is vital for overcoming the limitations of parallel grippers in real-world applications. Current trajectory optimisation approaches often struggle to solve such tasks due to the large search space and the limited task information available from a cost function. In this work, we propose D-Cubed, a novel trajectory optimisation method using a latent diffusion model (LDM) trained on a task-agnostic play dataset to solve dexterous deformable object manipulation tasks. D-Cubed learns a skill-latent space that encodes short-horizon actions in the play dataset using a VAE and trains an LDM to compose the skill latents into a skill trajectory, representing a long-horizon action trajectory in the dataset. To optimise a trajectory for a target task, we introduce a novel gradient-free guided sampling method that employs the Cross-Entropy method within the reverse diffusion process. In particular, D-Cubed samples a small number of noisy skill trajectories using the LDM for exploration and evaluates the trajectories in simulation. Then, D-Cubed selects the trajectory with the lowest cost for the subsequent reverse process. This effectively explores promising solution areas and optimises the sampled trajectories towards a target task throughout the reverse diffusion process. Through empirical evaluation on a public benchmark of dexterous deformable object manipulation tasks, we demonstrate that D-Cubed outperforms traditional trajectory optimisation and competitive baseline approaches by a significant margin. We further demonstrate that trajectories found by D-Cubed readily transfer to a real-world LEAP hand on a folding task.
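A hedged sketch of the gradient-free guided sampling step: at each reverse-diffusion step, a few candidate latents are rolled out in simulation and the cheapest is kept. `reverse_sample` and `cost_fn` are assumed interfaces:

```python
import torch

def guided_reverse_step(ldm, z_t, t, cost_fn, n_samples=8):
    """Cross-Entropy-style guided sampling sketch: draw several candidate
    skill-trajectory latents from the LDM's reverse kernel, evaluate each in
    simulation, and continue the reverse process from the lowest-cost one."""
    candidates = [ldm.reverse_sample(z_t, t) for _ in range(n_samples)]  # assumed API
    costs = [float(cost_fn(z)) for z in candidates]                      # simulated rollouts
    return candidates[min(range(n_samples), key=costs.__getitem__)]
```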
[ "['Jun Yamada' 'Shaohong Zhong' 'Jack Collins' 'Ingmar Posner']" ]
null
null
2403.12864
null
null
http://arxiv.org/abs/2403.12864v2
2024-05-17T09:28:40Z
2024-03-19T16:08:27Z
A Comparison of Deep Learning Architectures for Spacecraft Anomaly Detection
Spacecraft operations are highly critical, demanding impeccable reliability and safety. Ensuring the optimal performance of a spacecraft requires the early detection and mitigation of anomalies, which could otherwise result in unit or mission failures. With the advent of deep learning, a surge of interest has been seen in leveraging these sophisticated algorithms for anomaly detection in space operations. This study aims to compare the efficacy of various deep learning architectures in detecting anomalies in spacecraft data. The deep learning models under investigation include Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Long Short-Term Memory (LSTM) networks, and Transformer-based architectures. Each of these models was trained and validated using a comprehensive dataset sourced from multiple spacecraft missions, encompassing diverse operational scenarios and anomaly types. Initial results indicate that while CNNs excel in identifying spatial patterns and may be effective for some classes of spacecraft data, LSTMs and RNNs show a marked proficiency in capturing temporal anomalies seen in time-series spacecraft telemetry. The Transformer-based architectures, given their ability to focus on both local and global contexts, have showcased promising results, especially in scenarios where anomalies are subtle and span over longer durations. Additionally, considerations such as computational efficiency, ease of deployment, and real-time processing capabilities were evaluated. While CNNs and LSTMs demonstrated a balance between accuracy and computational demands, Transformer architectures, though highly accurate, require significant computational resources. In conclusion, the choice of deep learning architecture for spacecraft anomaly detection is highly contingent on the nature of the data, the type of anomalies, and operational constraints.
[ "['Daniel Lakey' 'Tim Schlippe']" ]
null
null
2403.12871
null
null
http://arxiv.org/pdf/2403.12871v1
2024-03-19T16:15:44Z
2024-03-19T16:15:44Z
Wildfire danger prediction optimization with transfer learning
Convolutional Neural Networks (CNNs) have proven instrumental across various computer science domains, enabling advancements in object detection, classification, and anomaly detection. This paper explores the application of CNNs to analyze geospatial data specifically for identifying wildfire-affected areas. Leveraging transfer learning techniques, we fine-tuned CNN hyperparameters and integrated the Canadian Fire Weather Index (FWI) to assess moisture conditions. The study establishes a methodology for computing wildfire risk levels on a scale of 0 to 5, dynamically linked to weather patterns. Notably, through the integration of transfer learning, the CNN model achieved an impressive accuracy of 95% in identifying burnt areas. This research sheds light on the inner workings of CNNs and their practical, real-time utility in predicting and mitigating wildfires. By combining transfer learning and CNNs, this study contributes a robust approach to assess burnt areas, facilitating timely interventions and preventative measures against conflagrations.
[ "['Spiros Maggioros' 'Nikos Tsalkitzis']" ]
null
null
2403.12873
null
null
http://arxiv.org/pdf/2403.12873v1
2024-03-19T16:17:21Z
2024-03-19T16:17:21Z
Short-Term Solar Irradiance Forecasting Under Data Transmission Constraints
We report a data-parsimonious machine learning model for short-term forecasting of solar irradiance. The model inputs include sky camera images that are reduced to scalar features to meet data transmission constraints. The output irradiance values are transformed to focus on unknown short-term dynamics. Inspired by control theory, a noise input is used to reflect unmeasured variables and is shown to improve model predictions, often considerably. Five years of data from the NREL Solar Radiation Research Laboratory were used to create three rolling train-validate sets and determine the best representations for time, the optimal span of input measurements, and the most impactful model input data (features). For the chosen test data, the model achieves a mean absolute error of 74.34 $W/m^2$ compared to a baseline 134.35 $W/m^2$ using the persistence of cloudiness model.
[ "['Joshua Edward Hammond' 'Ricardo A. Lara Orozco' 'Michael Baldea'\n 'Brian A. Korgel']" ]
null
null
2403.12887
null
null
http://arxiv.org/pdf/2403.12887v1
2024-03-19T16:34:31Z
2024-03-19T16:34:31Z
Understanding the training of infinitely deep and wide ResNets with Conditional Optimal Transport
We study the convergence of gradient flow for the training of deep neural networks. While Residual Neural Networks are a popular example of very deep architectures, their training constitutes a challenging optimization problem, due notably to the non-convexity and the non-coercivity of the objective. Yet, in applications, those tasks are successfully solved by simple optimization algorithms such as gradient descent. To better understand this phenomenon, we focus here on a ``mean-field'' model of infinitely deep and arbitrarily wide ResNet, parameterized by probability measures over the product set of layers and parameters and with constant marginal on the set of layers. Indeed, in the case of shallow neural networks, mean-field models have proven to benefit from simplified loss landscapes and good theoretical guarantees when trained with gradient flow for the Wasserstein metric on the set of probability measures. Motivated by this approach, we propose to train our model with gradient flow w.r.t. the conditional Optimal Transport distance: a restriction of the classical Wasserstein distance which enforces our marginal condition. Relying on the theory of gradient flows in metric spaces, we first show the well-posedness of the gradient flow equation and its consistency with the training of ResNets at finite width. Performing a local Polyak-Łojasiewicz analysis, we then show convergence of the gradient flow for well-chosen initializations: if the number of features is finite but sufficiently large and the risk is sufficiently small at initialization, the gradient flow converges towards a global minimizer. This is the first result of this type for infinitely deep and arbitrarily wide ResNets.
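In commonly used notation (assumed here, not quoted from the paper), the mean-field ResNet model integrates a layer-indexed parameter measure:

```latex
% Mean-field ResNet forward dynamics: \mu is a probability measure over
% layers s in [0,1] and parameters \theta, with uniform marginal in s.
\dot{x}(s) \;=\; \int_{\Theta} \sigma\bigl(x(s), \theta\bigr)\, \mathrm{d}\mu_s(\theta),
\qquad x(0) = x_{\mathrm{in}}, \quad s \in [0,1].
```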
[ "['Raphaël Barboni' 'Gabriel Peyré' 'François-Xavier Vialard']" ]
null
null
2403.12900
null
null
http://arxiv.org/pdf/2403.12900v1
2024-03-19T16:53:53Z
2024-03-19T16:53:53Z
Toward Sustainable GenAI using Generation Directives for Carbon-Friendly Large Language Model Inference
The rapid advancement of Generative Artificial Intelligence (GenAI) across diverse sectors raises significant environmental concerns, notably the carbon emissions from their cloud and high performance computing (HPC) infrastructure. This paper presents Sprout, an innovative framework designed to address these concerns by reducing the carbon footprint of generative Large Language Model (LLM) inference services. Sprout leverages the innovative concept of "generation directives" to guide the autoregressive generation process, thereby enhancing carbon efficiency. Our proposed method meticulously balances the need for ecological sustainability with the demand for high-quality generation outcomes. Employing a directive optimizer for the strategic assignment of generation directives to user prompts and an original offline quality evaluator, Sprout demonstrates a significant reduction in carbon emissions by over 40% in real-world evaluations using the Llama2 LLM and global electricity grid data. This research marks a critical step toward aligning AI technology with sustainable practices, highlighting the potential for mitigating environmental impacts in the rapidly expanding domain of generative artificial intelligence.
[ "['Baolin Li' 'Yankai Jiang' 'Vijay Gadepally' 'Devesh Tiwari']" ]
null
null
2403.12910
null
null
http://arxiv.org/pdf/2403.12910v1
2024-03-19T17:08:24Z
2024-03-19T17:08:24Z
Yell At Your Robot: Improving On-the-Fly from Language Corrections
Hierarchical policies that combine language and low-level control have been shown to perform impressively on long-horizon robotic tasks, by leveraging either zero-shot high-level planners like pretrained language and vision-language models (LLMs/VLMs) or models trained on annotated robotic demonstrations. However, for complex and dexterous skills, attaining high success rates on long-horizon tasks still represents a major challenge -- the longer the task is, the more likely it is that some stage will fail. Can humans help the robot to continuously improve its long-horizon task performance through intuitive and natural feedback? In this paper, we make the following observation: high-level policies that index into sufficiently rich and expressive low-level language-conditioned skills can be readily supervised with human feedback in the form of language corrections. We show that even fine-grained corrections, such as small movements ("move a bit to the left"), can be effectively incorporated into high-level policies, and that such corrections can be readily obtained from humans observing the robot and making occasional suggestions. This framework enables robots not only to rapidly adapt to real-time language feedback, but also to incorporate this feedback into an iterative training scheme that improves the high-level policy's ability to correct errors in both low-level execution and high-level decision-making purely from verbal feedback. Our evaluation on real hardware shows that this leads to significant performance improvement in long-horizon, dexterous manipulation tasks without the need for any additional teleoperation. Videos and code are available at https://yay-robot.github.io/.
[ "['Lucy Xiaoyang Shi' 'Zheyuan Hu' 'Tony Z. Zhao' 'Archit Sharma'\n 'Karl Pertsch' 'Jianlan Luo' 'Sergey Levine' 'Chelsea Finn']" ]
null
null
2403.12918
null
null
http://arxiv.org/pdf/2403.12918v1
2024-03-19T17:21:29Z
2024-03-19T17:21:29Z
Generalizable and Stable Finetuning of Pretrained Language Models on Low-Resource Texts
Pretrained Language Models (PLMs) have advanced Natural Language Processing (NLP) tasks significantly, but finetuning PLMs on low-resource datasets poses significant challenges such as instability and overfitting. Previous methods tackle these issues by finetuning a strategically chosen sub-network on a downstream task, while keeping the remaining weights fixed to the pretrained weights. However, they rely on suboptimal criteria for sub-network selection, leading to suboptimal solutions. To address these limitations, we propose a regularization method based on attention-guided weight mixup for finetuning PLMs. Our approach represents each network weight as a mixup of a task-specific weight and the pretrained weight, controlled by a learnable attention parameter, providing finer control over sub-network selection. Furthermore, we employ a bi-level optimization (BLO) based framework on two separate splits of the training dataset, improving generalization and combating overfitting. We validate the efficacy of our proposed method through extensive experiments, demonstrating its superiority over previous methods, particularly in the context of finetuning PLMs on low-resource datasets.
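A minimal sketch of the attention-guided weight mixup for a single linear layer; the sigmoid gating parametrization is an assumption, and the bi-level optimization over two data splits is omitted:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightMixupLinear(nn.Module):
    """Sketch: each effective weight is a learnable convex combination of a
    frozen pretrained weight and a trainable task-specific weight,
    W = a * W_task + (1 - a) * W_pre, with a the attention parameter."""
    def __init__(self, pretrained: nn.Linear):
        super().__init__()
        self.register_buffer("w_pre", pretrained.weight.detach().clone())  # frozen
        self.w_task = nn.Parameter(pretrained.weight.detach().clone())
        self.attn = nn.Parameter(torch.zeros_like(self.w_pre))             # learnable attention
        self.bias = nn.Parameter(pretrained.bias.detach().clone())         # assumes a bias exists

    def forward(self, x):
        a = torch.sigmoid(self.attn)                     # per-weight gate in [0, 1]
        w = a * self.w_task + (1 - a) * self.w_pre
        return F.linear(x, w, self.bias)
```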
[ "['Sai Ashish Somayajula' 'Youwei Liang' 'Abhishek Singh' 'Li Zhang'\n 'Pengtao Xie']" ]
null
null
2403.12938
null
null
http://arxiv.org/pdf/2403.12938v1
2024-03-19T17:43:57Z
2024-03-19T17:43:57Z
Neural Differential Algebraic Equations
Differential-Algebraic Equations (DAEs) describe the temporal evolution of systems that obey both differential and algebraic constraints. Of particular interest are systems that contain implicit relationships between their components, such as conservation relationships. Here, we present Neural Differential-Algebraic Equations (NDAEs) suitable for data-driven modeling of DAEs. This methodology is built upon the concept of the Universal Differential Equation; that is, a model constructed as a system of Neural Ordinary Differential Equations informed by theory from particular science domains. In this work, we show that the proposed NDAEs abstraction is suitable for relevant system-theoretic data-driven modeling tasks. Presented examples include (i) the inverse problem of tank-manifold dynamics and (ii) discrepancy modeling of a network of pumps, tanks, and pipes. Our experiments demonstrate the proposed method's robustness to noise and extrapolation ability to (i) learn the behaviors of the system components and their interaction physics and (ii) disambiguate between data trends and mechanistic relationships contained in the system.
[ "['James Koch' 'Madelyn Shapiro' 'Himanshu Sharma' 'Draguna Vrabie'\n 'Jan Drgona']" ]
null
null
2403.12946
null
null
http://arxiv.org/pdf/2403.12946v2
2024-06-27T03:16:30Z
2024-03-19T17:48:42Z
Sample Complexity of Offline Distributionally Robust Linear Markov Decision Processes
In offline reinforcement learning (RL), the absence of active exploration calls for attention on the model robustness to tackle the sim-to-real gap, where the discrepancy between the simulated and deployed environments can significantly undermine the performance of the learned policy. To endow the learned policy with robustness in a sample-efficient manner in the presence of high-dimensional state-action space, this paper considers the sample complexity of distributionally robust linear Markov decision processes (MDPs) with an uncertainty set characterized by the total variation distance using offline data. We develop a pessimistic model-based algorithm and establish its sample complexity bound under minimal data coverage assumptions, which outperforms prior art by at least $\widetilde{O}(d)$, where $d$ is the feature dimension. We further improve the performance guarantee of the proposed algorithm by incorporating a carefully-designed variance estimator.
[ "['He Wang' 'Laixi Shi' 'Yuejie Chi']" ]
null
null
2403.12948
null
null
http://arxiv.org/pdf/2403.12948v1
2024-03-19T17:50:32Z
2024-03-19T17:50:32Z
On Safety in Safe Bayesian Optimization
Optimizing an unknown function under safety constraints is a central task in robotics, biomedical engineering, and many other disciplines, and increasingly safe Bayesian Optimization (BO) is used for this. Due to the safety-critical nature of these applications, it is of utmost importance that theoretical safety guarantees for these algorithms translate into the real world. In this work, we investigate three safety-related issues of the popular class of SafeOpt-type algorithms. First, these algorithms critically rely on frequentist uncertainty bounds for Gaussian Process (GP) regression, but concrete implementations typically utilize heuristics that invalidate all safety guarantees. We provide a detailed analysis of this problem and introduce Real-$\beta$-SafeOpt, a variant of the SafeOpt algorithm that leverages recent GP bounds and thus retains all theoretical guarantees. Second, we identify the assumption of an upper bound on the reproducing kernel Hilbert space (RKHS) norm of the target function, a key technical assumption in SafeOpt-like algorithms, as a central obstacle to real-world usage. To overcome this challenge, we introduce the Lipschitz-only Safe Bayesian Optimization (LoSBO) algorithm, which guarantees safety without an assumption on the RKHS bound, and empirically show that this algorithm is not only safe, but also exhibits superior performance compared to the state-of-the-art on several function classes. Third, SafeOpt and derived algorithms rely on a discrete search space, making them difficult to apply to higher-dimensional problems. To widen the applicability of these algorithms, we introduce Lipschitz-only GP-UCB (LoS-GP-UCB), a variant of LoSBO applicable to moderately high-dimensional problems, while retaining safety.
[ "['Christian Fiedler' 'Johanna Menn' 'Lukas Kreisköther' 'Sebastian Trimpe']" ]
null
null
2403.12950
null
null
http://arxiv.org/pdf/2403.12950v1
2024-03-19T17:50:55Z
2024-03-19T17:50:55Z
Optimal and Adaptive Non-Stationary Dueling Bandits Under a Generalized Borda Criterion
In dueling bandits, the learner receives preference feedback between arms, and the regret of an arm is defined in terms of its suboptimality to a winner arm. The more challenging and practically motivated non-stationary variant of dueling bandits, where preferences change over time, has been the focus of several recent works (Saha and Gupta, 2022; Buening and Saha, 2023; Suk and Agarwal, 2023). The goal is to design algorithms without foreknowledge of the amount of change. The bulk of known results here studies the Condorcet winner setting, where an arm preferred over any other exists at all times. Yet, such a winner may not exist and, to contrast, the Borda version of this problem (which is always well-defined) has received little attention. In this work, we establish the first optimal and adaptive Borda dynamic regret upper bound, which highlights fundamental differences in the learnability of severe non-stationarity between Condorcet vs. Borda regret objectives in dueling bandits. Surprisingly, our techniques for non-stationary Borda dueling bandits also yield improved rates within the Condorcet winner setting, and reveal new preference models where tighter notions of non-stationarity are adaptively learnable. This is accomplished through a novel generalized Borda score framework which unites the Borda and Condorcet problems, thus allowing reduction of Condorcet regret to a Borda-like task. Such a generalization was not previously known and is likely to be of independent interest.
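For reference, a standard Borda formulation consistent with the abstract (notation assumed): the Borda score of an arm averages its win probability over the other arms, and dynamic regret compares against the per-round Borda winner.

```latex
% Borda score of arm a at round t, over K arms with time-varying
% preference probabilities P_t, and the associated dynamic regret.
b_t(a) \;=\; \frac{1}{K-1}\sum_{a' \neq a} \mathbb{P}_t\!\left(a \succ a'\right),
\qquad
\mathrm{DR}(T) \;=\; \sum_{t=1}^{T}\Bigl(\max_{a}\, b_t(a) - b_t(a_t)\Bigr).
```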
[ "['Joe Suk' 'Arpit Agarwal']" ]
null
null
2403.12952
null
null
http://arxiv.org/pdf/2403.12952v1
2024-03-19T17:54:34Z
2024-03-19T17:54:34Z
Just Shift It: Test-Time Prototype Shifting for Zero-Shot Generalization with Vision-Language Models
Advancements in vision-language models (VLMs) have propelled the field of computer vision, particularly in the zero-shot learning setting. Despite their promise, the effectiveness of these models often diminishes due to domain shifts in test environments. To address this, we introduce the Test-Time Prototype Shifting (TPS) framework, a pioneering approach designed to adapt VLMs to test datasets using unlabeled test inputs. Our method is based on the notion of modulating per-class prototypes in the shared embedding space. By pre-computing and caching prototypes generated with the pre-trained text encoder, TPS not only facilitates optimization-free prototype reuse for subsequent predictions but also enables seamless integration with current advancements in prompt engineering. At test-time, TPS dynamically learns shift vectors for each prototype based solely on the given test sample, effectively bridging the domain gap and enhancing classification accuracy. A notable aspect of our framework is its significantly reduced memory and computational demands when compared to conventional text-prompt tuning methods. Extensive evaluations across 15 datasets involving natural distribution shifts and cross-dataset generalization demonstrate TPS's superior performance, achieving state-of-the-art results while reducing resource requirements.
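A hedged sketch of the test-time prediction path: cached text-encoder prototypes are shifted per sample and the image is classified by scaled cosine similarity. A `shift_net` returning one shift vector per class is an assumed interface:

```python
import torch
import torch.nn.functional as F

def tps_predict(image_feat, prototypes, shift_net, tau=0.01):
    """Test-time prototype shifting sketch.
    image_feat: (d,) image embedding; prototypes: (C, d) cached class
    prototypes from the pre-trained text encoder; shift_net(image_feat)
    is assumed to return per-class shift vectors of shape (C, d)."""
    shifted = prototypes + shift_net(image_feat)          # bridge the domain gap
    shifted = F.normalize(shifted, dim=-1)
    image_feat = F.normalize(image_feat, dim=-1)
    return (image_feat @ shifted.T) / tau                 # class logits
```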
[ "['Elaine Sui' 'Xiaohan Wang' 'Serena Yeung-Levy']" ]