Schema (column: dtype):
categories: string
doi: string
id: string
year: float64
venue: string
link: string
updated: string
published: string
title: string
abstract: string
authors: list

Note: categories, doi, year, and venue are null in every record below, so those fields are omitted from the per-record listings.

id: 2402.15555
link: http://arxiv.org/pdf/2402.15555v2
updated: 2024-06-06T18:33:15Z
published: 2024-02-23T18:59:31Z
title: Deep Networks Always Grok and Here is Why
Grokking, or delayed generalization, is a phenomenon where generalization in a deep neural network (DNN) occurs long after achieving near zero training error. Previous studies have reported the occurrence of grokking in specific controlled settings, such as DNNs initialized with large-norm parameters or transformers trained on algorithmic datasets. We demonstrate that grokking is actually much more widespread and materializes in a wide range of practical settings, such as training of a convolutional neural network (CNN) on CIFAR10 or a Resnet on Imagenette. We introduce the new concept of delayed robustness, whereby a DNN groks adversarial examples and becomes robust, long after interpolation and/or generalization. We develop an analytical explanation for the emergence of both delayed generalization and delayed robustness based on the local complexity of a DNN's input-output mapping. Our local complexity measures the density of so-called linear regions (aka, spline partition regions) that tile the DNN input space and serves as a utile progress measure for training. We provide the first evidence that, for classification problems, the linear regions undergo a phase transition during training whereafter they migrate away from the training samples (making the DNN mapping smoother there) and towards the decision boundary (making the DNN mapping less smooth there). Grokking occurs post phase transition as a robust partition of the input space thanks to the linearization of the DNN mapping around the training points. Website: https://bit.ly/grok-adversarial
[ "['Ahmed Imtiaz Humayun' 'Randall Balestriero' 'Richard Baraniuk']" ]

id: 2402.15561
link: http://arxiv.org/pdf/2402.15561v1
updated: 2024-02-23T19:02:24Z
published: 2024-02-23T19:02:24Z
title: Fair Multivariate Adaptive Regression Splines for Ensuring Equity and Transparency
Predictive analytics is widely used in various domains, including education, to inform decision-making and improve outcomes. However, many predictive models are proprietary and inaccessible for evaluation or modification by researchers and practitioners, limiting their accountability and ethical design. Moreover, predictive models are often opaque and incomprehensible to the officials who use them, reducing their trust and utility. Furthermore, predictive models may introduce or exacerbate bias and inequity, as they have done in many sectors of society. Therefore, there is a need for transparent, interpretable, and fair predictive models that can be easily adopted and adapted by different stakeholders. In this paper, we propose a fair predictive model based on multivariate adaptive regression splines (MARS) that incorporates fairness measures in the learning process. MARS is a non-parametric regression model that performs feature selection, handles non-linear relationships, generates interpretable decision rules, and derives optimal splitting criteria on the variables. Specifically, we integrate fairness into the knot optimization algorithm and provide theoretical and empirical evidence of how it results in a fair knot placement. We apply our fairMARS model to real-world data and demonstrate its effectiveness in terms of accuracy and equity. Our paper contributes to the advancement of responsible and ethical predictive analytics for social good.
[ "['Parian Haghighat' \"Denisa G'andara\" 'Lulu Kang' 'Hadis Anahideh']" ]

id: 2402.15566
link: http://arxiv.org/pdf/2402.15566v1
updated: 2024-02-23T19:07:53Z
published: 2024-02-23T19:07:53Z
title: Closing the AI generalization gap by adjusting for dermatology condition distribution differences across clinical settings
Recently, there has been great progress in the ability of artificial intelligence (AI) algorithms to classify dermatological conditions from clinical photographs. However, little is known about the robustness of these algorithms in real-world settings where several factors can lead to a loss of generalizability. Understanding and overcoming these limitations will permit the development of generalizable AI that can aid in the diagnosis of skin conditions across a variety of clinical settings. In this retrospective study, we demonstrate that differences in skin condition distribution, rather than in demographics or image capture mode, are the main source of errors when an AI algorithm is evaluated on data from a previously unseen source. We demonstrate a series of steps to close this generalization gap, requiring progressively more information about the new source, ranging from the condition distribution to training data enriched for data less frequently seen during training. Our results also suggest comparable performance from end-to-end fine-tuning versus fine-tuning solely the classification layer on top of a frozen embedding model. Our approach can inform the adaptation of AI algorithms to new settings, based on the information and resources available.
[ "['Rajeev V. Rikhye' 'Aaron Loh' 'Grace Eunhae Hong' 'Preeti Singh'\n 'Margaret Ann Smith' 'Vijaytha Muralidharan' 'Doris Wong' 'Rory Sayres'\n 'Michelle Phung' 'Nicolas Betancourt' 'Bradley Fong'\n 'Rachna Sahasrabudhe' 'Khoban Nasim' 'Alec Eschholz' 'Basil Mustafa'\n 'Jan Freyberg' 'Terry Spitz' 'Yossi Matias' 'Greg S. Corrado'\n 'Katherine Chou' 'Dale R. Webster' 'Peggy Bui' 'Yuan Liu' 'Yun Liu'\n 'Justin Ko' 'Steven Lin']" ]

id: 2402.15567
link: http://arxiv.org/pdf/2402.15567v2
updated: 2024-05-26T17:44:52Z
published: 2024-02-23T19:09:10Z
title: Foundation Policies with Hilbert Representations
Unsupervised and self-supervised objectives, such as next token prediction, have enabled pre-training generalist models from large amounts of unlabeled data. In reinforcement learning (RL), however, finding a truly general and scalable unsupervised pre-training objective for generalist policies from offline data remains a major open question. While a number of methods have been proposed to enable generic self-supervised RL, based on principles such as goal-conditioned RL, behavioral cloning, and unsupervised skill learning, such methods remain limited in terms of either the diversity of the discovered behaviors, the need for high-quality demonstration data, or the lack of a clear adaptation mechanism for downstream tasks. In this work, we propose a novel unsupervised framework to pre-train generalist policies that capture diverse, optimal, long-horizon behaviors from unlabeled offline data such that they can be quickly adapted to any arbitrary new tasks in a zero-shot manner. Our key insight is to learn a structured representation that preserves the temporal structure of the underlying environment, and then to span this learned latent space with directional movements, which enables various zero-shot policy "prompting" schemes for downstream tasks. Through our experiments on simulated robotic locomotion and manipulation benchmarks, we show that our unsupervised policies can solve goal-conditioned and general RL tasks in a zero-shot fashion, even often outperforming prior methods designed specifically for each setting. Our code and videos are available at https://seohong.me/projects/hilp/.
[ "['Seohong Park' 'Tobias Kreiman' 'Sergey Levine']" ]

id: 2402.15569
link: http://arxiv.org/pdf/2402.15569v1
updated: 2024-02-23T19:12:41Z
published: 2024-02-23T19:12:41Z
title: Toward Fully Self-Supervised Multi-Pitch Estimation
Multi-pitch estimation is a decades-long research problem involving the detection of pitch activity associated with concurrent musical events within multi-instrument mixtures. Supervised learning techniques have demonstrated solid performance on more narrow characterizations of the task, but suffer from limitations concerning the shortage of large-scale and diverse polyphonic music datasets with multi-pitch annotations. We present a suite of self-supervised learning objectives for multi-pitch estimation, which encourage the concentration of support around harmonics, invariance to timbral transformations, and equivariance to geometric transformations. These objectives are sufficient to train an entirely convolutional autoencoder to produce multi-pitch salience-grams directly, without any fine-tuning. Despite training exclusively on a collection of synthetic single-note audio samples, our fully self-supervised framework generalizes to polyphonic music mixtures, and achieves performance comparable to supervised models trained on conventional multi-pitch datasets.
[ "['Frank Cwitkowitz' 'Zhiyao Duan']" ]

id: 2402.15583
link: http://arxiv.org/pdf/2402.15583v1
updated: 2024-02-23T19:43:01Z
published: 2024-02-23T19:43:01Z
title: Cohere3D: Exploiting Temporal Coherence for Unsupervised Representation Learning of Vision-based Autonomous Driving
Due to the lack of depth cues in images, multi-frame inputs are important for the success of vision-based perception, prediction, and planning in autonomous driving. Observations from different angles enable the recovery of 3D object states from 2D image inputs if we can identify the same instance in different input frames. However, the dynamic nature of autonomous driving scenes leads to significant changes in the appearance and shape of each instance captured by the camera at different time steps. To this end, we propose a novel contrastive learning algorithm, Cohere3D, to learn coherent instance representations in a long-term input sequence robust to the change in distance and perspective. The learned representation aids in instance-level correspondence across multiple input frames in downstream tasks. In the pretraining stage, the raw point clouds from LiDAR sensors are utilized to construct the long-term temporal correspondence for each instance, which serves as guidance for the extraction of instance-level representation from the vision-based bird's eye-view (BEV) feature map. Cohere3D encourages a consistent representation for the same instance at different frames but distinguishes between representations of different instances. We evaluate our algorithm by finetuning the pretrained model on various downstream perception, prediction, and planning tasks. Results show a notable improvement in both data efficiency and task performance.
[ "['Yichen Xie' 'Hongge Chen' 'Gregory P. Meyer' 'Yong Jae Lee'\n 'Eric M. Wolff' 'Masayoshi Tomizuka' 'Wei Zhan' 'Yuning Chai' 'Xin Huang']" ]

id: 2402.15584
link: http://arxiv.org/pdf/2402.15584v3
updated: 2024-04-18T15:29:14Z
published: 2024-02-23T19:51:55Z
title: State Space Models for Event Cameras
Today, state-of-the-art deep neural networks that process event-camera data first convert a temporal window of events into dense, grid-like input representations. As such, they exhibit poor generalizability when deployed at higher inference frequencies (i.e., smaller temporal windows) than the ones they were trained on. We address this challenge by introducing state-space models (SSMs) with learnable timescale parameters to event-based vision. This design adapts to varying frequencies without the need to retrain the network at different frequencies. Additionally, we investigate two strategies to counteract aliasing effects when deploying the model at higher frequencies. We comprehensively evaluate our approach against existing methods based on RNN and Transformer architectures across various benchmarks, including Gen1 and 1 Mpx event camera datasets. Our results demonstrate that SSM-based models train 33% faster and also exhibit minimal performance degradation when tested at higher frequencies than the training input. Traditional RNN and Transformer models exhibit performance drops of more than 20 mAP, with SSMs having a drop of 3.76 mAP, highlighting the effectiveness of SSMs in event-based vision tasks.
[ "['Nikola Zubić' 'Mathias Gehrig' 'Davide Scaramuzza']" ]

id: 2402.15589
link: http://arxiv.org/pdf/2402.15589v1
updated: 2024-02-23T20:14:16Z
published: 2024-02-23T20:14:16Z
title: Prompting LLMs to Compose Meta-Review Drafts from Peer-Review Narratives of Scholarly Manuscripts
One of the most important yet onerous tasks in the academic peer-reviewing process is composing meta-reviews, which involves understanding the core contributions, strengths, and weaknesses of a scholarly manuscript based on peer-review narratives from multiple experts and then summarizing those multiple experts' perspectives into a concise holistic overview. Given the latest major developments in generative AI, especially Large Language Models (LLMs), it is very compelling to rigorously study the utility of LLMs in generating such meta-reviews in an academic peer-review setting. In this paper, we perform a case study with three popular LLMs, i.e., GPT-3.5, LLaMA2, and PaLM2, to automatically generate meta-reviews by prompting them with different types/levels of prompts based on the recently proposed TELeR taxonomy. Finally, we perform a detailed qualitative study of the meta-reviews generated by the LLMs and summarize our findings and recommendations for prompting LLMs for this complex task.
[ "['Shubhra Kanti Karmaker Santu' 'Sanjeev Kumar Sinha' 'Naman Bansal'\n 'Alex Knipper' 'Souvika Sarkar' 'John Salvador' 'Yash Mahajan'\n 'Sri Guttikonda' 'Mousumi Akter' 'Matthew Freestone'\n 'Matthew C. Williams Jr']" ]

id: 2402.15592
link: http://arxiv.org/pdf/2402.15592v1
updated: 2024-02-23T20:19:06Z
published: 2024-02-23T20:19:06Z
title: Neural optimal controller for stochastic systems via pathwise HJB operator
The aim of this work is to develop deep learning-based algorithms for high-dimensional stochastic control problems based on physics-informed learning and dynamic programming. Unlike classical deep learning-based methods relying on a probabilistic representation of the solution to the Hamilton--Jacobi--Bellman (HJB) equation, we introduce a pathwise operator associated with the HJB equation so that we can define a problem of physics-informed learning. According to whether the optimal control has an explicit representation, two numerical methods are proposed to solve the physics-informed learning problem. We provide an error analysis on how the truncation, approximation and optimization errors affect the accuracy of these methods. Numerical results on various applications are presented to illustrate the performance of the proposed algorithms.
[ "['Zhe Jiao' 'Xiaoyan Luo' 'Xinlei Yi']" ]

id: 2402.15602
link: http://arxiv.org/pdf/2402.15602v1
updated: 2024-02-23T20:51:31Z
published: 2024-02-23T20:51:31Z
title: Minimax Optimality of Score-based Diffusion Models: Beyond the Density Lower Bound Assumptions
We study the asymptotic error of score-based diffusion model sampling in large-sample scenarios from a non-parametric statistics perspective. We show that a kernel-based score estimator achieves an optimal mean square error of $\widetilde{O}\left(n^{-1} t^{-\frac{d+2}{2}}(t^{\frac{d}{2}} \vee 1)\right)$ for the score function of $p_0*\mathcal{N}(0,t\boldsymbol{I}_d)$, where $n$ and $d$ represent the sample size and the dimension, $t$ is bounded above and below by polynomials of $n$, and $p_0$ is an arbitrary sub-Gaussian distribution. As a consequence, this yields an $\widetilde{O}\left(n^{-1/2} t^{-\frac{d}{4}}\right)$ upper bound for the total variation error of the distribution of the sample generated by the diffusion model under a mere sub-Gaussian assumption. If in addition, $p_0$ belongs to the nonparametric family of the $\beta$-Sobolev space with $\beta\le 2$, by adopting an early stopping strategy, we obtain that the diffusion model is nearly (up to log factors) minimax optimal. This removes the crucial lower bound assumption on $p_0$ in previous proofs of the minimax optimality of the diffusion model for nonparametric families.
[ "['Kaihong Zhang' 'Heqi Yin' 'Feng Liang' 'Jingbo Liu']" ]

id: 2402.15603
link: http://arxiv.org/pdf/2402.15603v2
updated: 2024-05-17T19:22:49Z
published: 2024-02-23T20:52:59Z
title: Differentially Private Fair Binary Classifications
In this work, we investigate binary classification under the constraints of both differential privacy and fairness. We first propose an algorithm based on the decoupling technique for learning a classifier with only fairness guarantee. This algorithm takes in classifiers trained on different demographic groups and generates a single classifier satisfying statistical parity. We then refine this algorithm to incorporate differential privacy. The performance of the final algorithm is rigorously examined in terms of privacy, fairness, and utility guarantees. Empirical evaluations conducted on the Adult and Credit Card datasets illustrate that our algorithm outperforms the state-of-the-art in terms of fairness guarantees, while maintaining the same level of privacy and utility.
[ "['Hrad Ghoukasian' 'Shahab Asoodeh']" ]

id: 2402.15607
link: http://arxiv.org/pdf/2402.15607v3
updated: 2024-06-16T04:02:28Z
published: 2024-02-23T21:07:20Z
title: How Do Nonlinear Transformers Learn and Generalize in In-Context Learning?
Transformer-based large language models have displayed impressive in-context learning (ICL) capabilities, where a pre-trained model can handle new tasks without fine-tuning by simply augmenting the query with some input-output examples from that task. Despite the empirical success, the mechanics of how to train a Transformer to achieve ICL and the corresponding ICL capacity are mostly elusive due to the technical challenges of analyzing the nonconvex training problems resulting from the nonlinear self-attention and nonlinear activation in Transformers. To the best of our knowledge, this paper provides the first theoretical analysis of the training dynamics of Transformers with nonlinear self-attention and nonlinear MLP, together with the ICL generalization capability of the resulting model. Focusing on a group of binary classification tasks, we train Transformers using data from a subset of these tasks and quantify the impact of various factors on the ICL generalization performance on the remaining unseen tasks with and without data distribution shifts. We also analyze how different components in the learned Transformers contribute to the ICL performance. Furthermore, we provide the first theoretical analysis of how model pruning affects ICL performance and prove that proper magnitude-based pruning can have a minimal impact on ICL while reducing inference costs. These theoretical findings are justified through numerical experiments.
[ "['Hongkang Li' 'Meng Wang' 'Songtao Lu' 'Xiaodong Cui' 'Pin-Yu Chen']" ]

id: 2402.15608
link: http://arxiv.org/pdf/2402.15608v1
updated: 2024-02-23T21:11:17Z
published: 2024-02-23T21:11:17Z
title: Machine Learning-Based Completions Sequencing for Well Performance Optimization
Establishing accurate field development parameters to optimize long-term oil production takes time and effort due to the complexity of oil well development, and the uncertainty in estimating long-term well production. Traditionally, oil and gas companies use simulation software that are inherently computationally expensive to forecast production. Thus, machine learning approaches are recently utilized in literature as an efficient alternative to optimize well developments by enhancing completion conditions. The primary goal of this project is to develop effective machine-learning models that can integrate the effects of multidimensional predictive variables (i.e., completion conditions) to predict 12-Month Cumulative Production accurately. Three predictive regression machine learning models are implemented for predicting 12-month cumulative oil production: Random Forest, Gradient Boosting, and Long Short-Term Memory Models. All three models yielded cumulative production predictions with root mean squared error (RMSE) values ranging from 7.35 to 20.01 thousand barrels of oil. Although we hypothesized that all models would yield accurate predictions, the results indicated a crucial need for further refinement to create reliable and rational predictive tools in the subsurface. While this study did not produce optimal models for completion sequencing to maximize long-term production, we established that machine learning models alone are not self-sufficient for problems of this nature. Hence, there is potential for significant improvement, including comprehensive feature engineering, and a recommendation of exploring the use of hybrid or surrogate models (i.e., coupling physics reduced models and machine learning models), to ascertain significant contribution to the progress of completion sequencing workflows.
[ "['Anjie Liu' 'Jinglang W. Sun' 'Anh Ngo' 'Ademide O. Mabadeje'\n 'Jose L. Hernandez-Mejia']" ]

id: 2402.15611
link: http://arxiv.org/pdf/2402.15611v1
updated: 2024-02-23T21:21:16Z
published: 2024-02-23T21:21:16Z
title: Data/moment-driven approaches for fast predictive control of collective dynamics
Feedback control synthesis for large-scale particle systems is reviewed in the framework of model predictive control (MPC). The high-dimensional character of collective dynamics hampers the performance of traditional MPC algorithms based on fast online dynamic optimization at every time step. Two alternatives to MPC are proposed. First, the use of supervised learning techniques for the offline approximation of optimal feedback laws is discussed. Then, a procedure based on sequential linearization of the dynamics based on macroscopic quantities of the particle ensemble is reviewed. Both approaches circumvent the online solution of optimal control problems enabling fast, real-time, feedback synthesis for large-scale particle systems. Numerical experiments assess the performance of the proposed algorithms.
[ "['Giacomo Albi' 'Sara Bicego' 'Michael Herty' 'Yuyang Huang'\n 'Dante Kalise' 'Chiara Segala']" ]

id: 2402.15613
link: http://arxiv.org/pdf/2402.15613v1
updated: 2024-02-23T21:28:59Z
published: 2024-02-23T21:28:59Z
title: Towards Efficient Active Learning in NLP via Pretrained Representations
Fine-tuning Large Language Models (LLMs) is now a common approach for text classification in a wide range of applications. When labeled documents are scarce, active learning helps save annotation efforts but requires retraining of massive models on each acquisition iteration. We drastically expedite this process by using pretrained representations of LLMs within the active learning loop and, once the desired amount of labeled data is acquired, fine-tuning that or even a different pretrained LLM on this labeled data to achieve the best performance. As verified on common text classification benchmarks with pretrained BERT and RoBERTa as the backbone, our strategy yields similar performance to fine-tuning all the way through the active learning loop but is orders of magnitude less computationally expensive. The data acquired with our procedure generalizes across pretrained networks, allowing flexibility in choosing the final model or updating it as newer versions get released.
[ "['Artem Vysogorets' 'Achintya Gopal']" ]

id: 2402.15623
link: http://arxiv.org/pdf/2402.15623v1
updated: 2024-02-23T21:58:50Z
published: 2024-02-23T21:58:50Z
title: Language-Based User Profiles for Recommendation
Most conventional recommendation methods (e.g., matrix factorization) represent user profiles as high-dimensional vectors. Unfortunately, these vectors lack interpretability and steerability, and often perform poorly in cold-start settings. To address these shortcomings, we explore the use of user profiles that are represented as human-readable text. We propose the Language-based Factorization Model (LFM), which is essentially an encoder/decoder model where both the encoder and the decoder are large language models (LLMs). The encoder LLM generates a compact natural-language profile of the user's interests from the user's rating history. The decoder LLM uses this summary profile to complete predictive downstream tasks. We evaluate our LFM approach on the MovieLens dataset, comparing it against matrix factorization and an LLM model that directly predicts from the user's rating history. In cold-start settings, we find that our method can have higher accuracy than matrix factorization. Furthermore, we find that generating a compact and human-readable summary often performs comparably with or better than direct LLM prediction, while enjoying better interpretability and shorter model input length. Our results motivate a number of future research directions and potential improvements.
[ "['Joyce Zhou' 'Yijia Dai' 'Thorsten Joachims']" ]

id: 2402.15625
link: http://arxiv.org/pdf/2402.15625v1
updated: 2024-02-23T22:03:12Z
published: 2024-02-23T22:03:12Z
title: Learning Cyclic Causal Models from Incomplete Data
Causal learning is a fundamental problem in statistics and science, offering insights into predicting the effects of unseen treatments on a system. Despite recent advances in this topic, most existing causal discovery algorithms operate under two key assumptions: (i) the underlying graph is acyclic, and (ii) the available data is complete. These assumptions can be problematic as many real-world systems contain feedback loops (e.g., biological systems), and practical scenarios frequently involve missing data. In this work, we propose a novel framework, named MissNODAGS, for learning cyclic causal graphs from partially missing data. Under the additive noise model, MissNODAGS learns the causal graph by alternating between imputing the missing data and maximizing the expected log-likelihood of the visible part of the data in each training step, following the principles of the expectation-maximization (EM) framework. Through synthetic experiments and real-world single-cell perturbation data, we demonstrate improved performance when compared to using state-of-the-art imputation techniques followed by causal learning on partially missing interventional data.
[ "['Muralikrishnna G. Sethuraman' 'Faramarz Fekri']" ]

id: 2402.15627
link: http://arxiv.org/pdf/2402.15627v1
updated: 2024-02-23T22:10:59Z
published: 2024-02-23T22:10:59Z
title: MegaScale: Scaling Large Language Model Training to More Than 10,000 GPUs
We present the design, implementation and engineering experience in building and deploying MegaScale, a production system for training large language models (LLMs) at the scale of more than 10,000 GPUs. Training LLMs at this scale brings unprecedented challenges to training efficiency and stability. We take a full-stack approach that co-designs the algorithmic and system components across model block and optimizer design, computation and communication overlapping, operator optimization, data pipeline, and network performance tuning. Maintaining high efficiency throughout the training process (i.e., stability) is an important consideration in production given the long extent of LLM training jobs. Many hard stability issues only emerge at large scale, and in-depth observability is the key to address them. We develop a set of diagnosis tools to monitor system components and events deep in the stack, identify root causes, and derive effective techniques to achieve fault tolerance and mitigate stragglers. MegaScale achieves 55.2% Model FLOPs Utilization (MFU) when training a 175B LLM model on 12,288 GPUs, improving the MFU by 1.34x compared to Megatron-LM. We share our operational experience in identifying and fixing failures and stragglers. We hope by articulating the problems and sharing our experience from a systems perspective, this work can inspire future LLM systems research.
[ "['Ziheng Jiang' 'Haibin Lin' 'Yinmin Zhong' 'Qi Huang' 'Yangrui Chen'\n 'Zhi Zhang' 'Yanghua Peng' 'Xiang Li' 'Cong Xie' 'Shibiao Nong'\n 'Yulu Jia' 'Sun He' 'Hongmin Chen' 'Zhihao Bai' 'Qi Hou' 'Shipeng Yan'\n 'Ding Zhou' 'Yiyao Sheng' 'Zhuo Jiang' 'Haohan Xu' 'Haoran Wei'\n 'Zhang Zhang' 'Pengfei Nie' 'Leqi Zou' 'Sida Zhao' 'Liang Xiang'\n 'Zherui Liu' 'Zhe Li' 'Xiaoying Jia' 'Jianxi Ye' 'Xin Jin' 'Xin Liu']" ]

id: 2402.15635
link: http://arxiv.org/pdf/2402.15635v1
updated: 2024-02-23T22:36:07Z
published: 2024-02-23T22:36:07Z
title: Bagged Deep Image Prior for Recovering Images in the Presence of Speckle Noise
We investigate both the theoretical and algorithmic aspects of likelihood-based methods for recovering a complex-valued signal from multiple sets of measurements, referred to as looks, affected by speckle (multiplicative) noise. Our theoretical contributions include establishing the first existing theoretical upper bound on the Mean Squared Error (MSE) of the maximum likelihood estimator under the deep image prior hypothesis. Our theoretical results capture the dependence of MSE upon the number of parameters in the deep image prior, the number of looks, the signal dimension, and the number of measurements per look. On the algorithmic side, we introduce the concept of bagged Deep Image Priors (Bagged-DIP) and integrate them with projected gradient descent (PGD). Furthermore, we show how employing the Newton-Schulz algorithm for calculating matrix inverses within the iterations of PGD reduces the computational complexity of the algorithm. We show that this method achieves state-of-the-art performance.
[ "['Xi Chen' 'Zhewen Hou' 'Christopher A. Metzler' 'Arian Maleki'\n 'Shirin Jalali']" ]

id: 2402.15636
link: http://arxiv.org/pdf/2402.15636v1
updated: 2024-02-23T22:38:45Z
published: 2024-02-23T22:38:45Z
title: Smooth and Sparse Latent Dynamics in Operator Learning with Jerk Regularization
Spatiotemporal modeling is critical for understanding complex systems across various scientific and engineering disciplines, but governing equations are often not fully known or computationally intractable due to inherent system complexity. Data-driven reduced-order models (ROMs) offer a promising approach for fast and accurate spatiotemporal forecasting by computing solutions in a compressed latent space. However, these models often neglect temporal correlations between consecutive snapshots when constructing the latent space, leading to suboptimal compression, jagged latent trajectories, and limited extrapolation ability over time. To address these issues, this paper introduces a continuous operator learning framework that incorporates jerk regularization into the learning of the compressed latent space. This jerk regularization promotes smoothness and sparsity of latent space dynamics, which not only yields enhanced accuracy and convergence speed but also helps identify intrinsic latent space coordinates. Consisting of an implicit neural representation (INR)-based autoencoder and a neural ODE latent dynamics model, the framework allows for inference at any desired spatial or temporal resolution. The effectiveness of this framework is demonstrated through a two-dimensional unsteady flow problem governed by the Navier-Stokes equations, highlighting its potential to expedite high-fidelity simulations in various scientific and engineering applications.
[ "['Xiaoyu Xie' 'Saviz Mowlavi' 'Mouhacine Benosman']" ]

id: 2402.15638
link: http://arxiv.org/pdf/2402.15638v2
updated: 2024-07-02T02:09:32Z
published: 2024-02-23T22:46:14Z
title: Fair Resource Allocation in Multi-Task Learning
By jointly learning multiple tasks, multi-task learning (MTL) can leverage the shared knowledge across tasks, resulting in improved data efficiency and generalization performance. However, a major challenge in MTL lies in the presence of conflicting gradients, which can hinder the fair optimization of some tasks and subsequently impede MTL's ability to achieve better overall performance. Inspired by fair resource allocation in communication networks, we formulate the optimization of MTL as a utility maximization problem, where the loss decreases across tasks are maximized under different fairness measurements. To solve this problem, we propose FairGrad, a novel MTL optimization method. FairGrad not only enables flexible emphasis on certain tasks but also achieves a theoretical convergence guarantee. Extensive experiments demonstrate that our method can achieve state-of-the-art performance among gradient manipulation methods on a suite of multi-task benchmarks in supervised learning and reinforcement learning. Furthermore, we incorporate the idea of $\alpha$-fairness into loss functions of various MTL methods. Extensive empirical studies demonstrate that their performance can be significantly enhanced. Code is provided at \url{https://github.com/OptMN-Lab/fairgrad}.
[ "['Hao Ban' 'Kaiyi Ji']" ]

id: 2402.15650
link: http://arxiv.org/pdf/2402.15650v2
updated: 2024-04-16T03:00:51Z
published: 2024-02-23T23:22:06Z
title: Multi-Constraint Safe RL with Objective Suppression for Safety-Critical Applications
Safe reinforcement learning tasks with multiple constraints are a challenging domain despite being very common in the real world. In safety-critical domains, properly handling the constraints becomes even more important. To address this challenge, we first describe the multi-constraint problem with a stronger Uniformly Constrained MDP (UCMDP) model; we then propose Objective Suppression, a novel method that adaptively suppresses the task reward maximizing objectives according to a safety critic, as a solution to the Lagrangian dual of a UCMDP. We benchmark Objective Suppression in two multi-constraint safety domains, including an autonomous driving domain where any incorrect behavior can lead to disastrous consequences. Empirically, we demonstrate that our proposed method, when combined with existing safe RL algorithms, can match the task reward achieved by our baselines with significantly fewer constraint violations.
[ "['Zihan Zhou' 'Jonathan Booher' 'Khashayar Rohanimanesh' 'Wei Liu'\n 'Aleksandr Petiushko' 'Animesh Garg']" ]

id: 2402.15655
link: http://arxiv.org/pdf/2402.15655v1
updated: 2024-02-24T00:09:27Z
published: 2024-02-24T00:09:27Z
title: Contact Complexity in Customer Service
Customers who reach out for customer service support may face a range of issues that vary in complexity. Routing high-complexity contacts to junior agents can lead to multiple transfers or repeated contacts, while directing low-complexity contacts to senior agents can strain their capacity to assist customers who need professional help. To tackle this, a machine learning model that accurately predicts the complexity of customer issues is highly desirable. However, defining the complexity of a contact is a difficult task as it is a highly abstract concept. While consensus-based data annotation by experienced agents is a possible solution, it is time-consuming and costly. To overcome these challenges, we have developed a novel machine learning approach to define contact complexity. Instead of relying on human annotation, we trained an AI expert model to mimic the behavior of agents and evaluate each contact's complexity based on how the AI expert responds. If the AI expert is uncertain or lacks the skills to comprehend the contact transcript, it is considered a high-complexity contact. Our method has proven to be reliable, scalable, and cost-effective based on the collected data.
[ "['Shu-Ting Pi' 'Michael Yang' 'Qun Liu']" ]

id: 2402.15656
link: http://arxiv.org/pdf/2402.15656v2
updated: 2024-03-15T06:47:48Z
published: 2024-02-24T00:10:51Z
title: Learning Semilinear Neural Operators: A Unified Recursive Framework For Prediction And Data Assimilation
Recent advances in the theory of Neural Operators (NOs) have enabled fast and accurate computation of the solutions to complex systems described by partial differential equations (PDEs). Despite their great success, current NO-based solutions face important challenges when dealing with spatio-temporal PDEs over long time scales. Specifically, the current theory of NOs does not present a systematic framework to perform data assimilation and efficiently correct the evolution of PDE solutions over time based on sparsely sampled noisy measurements. In this paper, we propose a learning-based state-space approach to compute the solution operators to infinite-dimensional semilinear PDEs. Exploiting the structure of semilinear PDEs and the theory of nonlinear observers in function spaces, we develop a flexible recursive method that allows for both prediction and data assimilation by combining prediction and correction operations. The proposed framework is capable of producing fast and accurate predictions over long time horizons, dealing with irregularly sampled noisy measurements to correct the solution, and benefits from the decoupling between the spatial and temporal dynamics of this class of PDEs. We show through experiments on the Kuramoto-Sivashinsky, Navier-Stokes and Korteweg-de Vries equations that the proposed model is robust to noise and can leverage arbitrary amounts of measurements to correct its prediction over a long time horizon with little computational overhead.
[ "['Ashutosh Singh' 'Ricardo Augusto Borsoi' 'Deniz Erdogmus'\n 'Tales Imbiriba']" ]

id: 2402.15665
link: http://arxiv.org/pdf/2402.15665v1
updated: 2024-02-24T00:40:40Z
published: 2024-02-24T00:40:40Z
title: Teacher-Student Learning on Complexity in Intelligent Routing
Customer service is often the most time-consuming aspect for e-commerce websites, with each contact typically taking 10-15 minutes. Effectively routing customers to appropriate agents without transfers is therefore crucial for e-commerce success. To this end, we have developed a machine learning framework that predicts the complexity of customer contacts and routes them to appropriate agents accordingly. The framework consists of two parts. First, we train a teacher model to score the complexity of a contact based on the post-contact transcripts. Then, we use the teacher model as a data annotator to provide labels to train a student model that predicts the complexity based on pre-contact data only. Our experiments show that such a framework is successful and can significantly improve customer experience. We also propose a useful metric called complexity AUC that evaluates the effectiveness of customer service at a statistical level.
[ "['Shu-Ting Pi' 'Michael Yang' 'Yuying Zhu' 'Qun Liu']" ]

id: 2402.15666
link: http://arxiv.org/abs/2402.15666v1
updated: 2024-02-24T00:41:16Z
published: 2024-02-24T00:41:16Z
title: Universal Model in Online Customer Service
Building machine learning models can be a time-consuming process that often takes several months to implement in typical business scenarios. To ensure consistent model performance and account for variations in data distribution, regular retraining is necessary. This paper introduces a solution for improving online customer service in e-commerce by presenting a universal model for predicting labels based on customer questions, without requiring training. Our novel approach involves using machine learning techniques to tag customer questions in transcripts and create a repository of questions and corresponding labels. When a customer requests assistance, an information retrieval model searches the repository for similar questions, and statistical analysis is used to predict the corresponding label. By eliminating the need for individual model training and maintenance, our approach reduces both the model development cycle and costs. The repository only requires periodic updating to maintain accuracy.
[ "['Shu-Ting Pi' 'Cheng-Ping Hsieh' 'Qun Liu' 'Yuying Zhu']" ]

id: 2402.15679
link: http://arxiv.org/pdf/2402.15679v1
updated: 2024-02-24T01:45:51Z
published: 2024-02-24T01:45:51Z
title: Scalable Density-based Clustering with Random Projections
We present sDBSCAN, a scalable density-based clustering algorithm in high dimensions with cosine distance. Utilizing the neighborhood-preserving property of random projections, sDBSCAN can quickly identify core points and their neighborhoods, the primary hurdle of density-based clustering. Theoretically, sDBSCAN outputs a clustering structure similar to DBSCAN under mild conditions with high probability. To further facilitate sDBSCAN, we present sOPTICS, a scalable OPTICS for interactive exploration of the intrinsic clustering structure. We also extend sDBSCAN and sOPTICS to L2, L1, $\chi^2$, and Jensen-Shannon distances via random kernel features. Empirically, sDBSCAN is significantly faster and provides higher accuracy than many other clustering algorithms on real-world million-point data sets. On these data sets, sDBSCAN and sOPTICS run in a few minutes, while the scikit-learn counterparts demand several hours or cannot run due to memory constraints.
[ "['Haochuan Xu' 'Ninh Pham']" ]

id: 2402.15680
link: http://arxiv.org/pdf/2402.15680v1
updated: 2024-02-24T01:47:56Z
published: 2024-02-24T01:47:56Z
title: Overcoming Pitfalls in Graph Contrastive Learning Evaluation: Toward Comprehensive Benchmarks
The rise of self-supervised learning, which operates without the need for labeled data, has garnered significant interest within the graph learning community. This enthusiasm has led to the development of numerous Graph Contrastive Learning (GCL) techniques, all aiming to create a versatile graph encoder that leverages the wealth of unlabeled data for various downstream tasks. However, the current evaluation standards for GCL approaches are flawed due to the need for extensive hyper-parameter tuning during pre-training and the reliance on a single downstream task for assessment. These flaws can skew the evaluation away from the intended goals, potentially leading to misleading conclusions. In our paper, we thoroughly examine these shortcomings and offer fresh perspectives on how GCL methods are affected by hyper-parameter choices and the choice of downstream tasks for their evaluation. Additionally, we introduce an enhanced evaluation framework designed to more accurately gauge the effectiveness, consistency, and overall capability of GCL methods.
[ "['Qian Ma' 'Hongliang Chi' 'Hengrui Zhang' 'Kay Liu' 'Zhiwei Zhang'\n 'Lu Cheng' 'Suhang Wang' 'Philip S. Yu' 'Yao Ma']" ]

id: 2402.15688
link: http://arxiv.org/pdf/2402.15688v1
updated: 2024-02-24T02:16:42Z
published: 2024-02-24T02:16:42Z
title: Anchor-free Clustering based on Anchor Graph Factorization
Anchor-based methods are a pivotal approach in handling clustering of large-scale data. However, these methods typically entail two distinct stages: selecting anchor points and constructing an anchor graph. This bifurcation, along with the initialization of anchor points, significantly influences the overall performance of the algorithm. To mitigate these issues, we introduce a novel method termed Anchor-free Clustering based on Anchor Graph Factorization (AFCAGF). AFCAGF innovates in learning the anchor graph, requiring only the computation of pairwise distances between samples. This process, achievable through straightforward optimization, circumvents the necessity for explicit selection of anchor points. More concretely, our approach enhances the Fuzzy k-means clustering algorithm (FKM), introducing a new manifold learning technique that obviates the need for initializing cluster centers. Additionally, we evolve the concept of the membership matrix between cluster centers and samples in FKM into an anchor graph encompassing multiple anchor points and samples. Employing Non-negative Matrix Factorization (NMF) on this anchor graph allows for the direct derivation of cluster labels, thereby eliminating the requirement for further post-processing steps. To solve the proposed model, we implement an alternating optimization algorithm that ensures convergence. Empirical evaluations on various real-world datasets underscore the superior efficacy of our algorithm compared to traditional approaches.
[ "['Shikun Mei' 'Fangfang Li' 'Quanxue Gao' 'Ming Yang']" ]

id: 2402.15691
link: http://arxiv.org/pdf/2402.15691v1
updated: 2024-02-24T02:29:10Z
published: 2024-02-24T02:29:10Z
title: Orthogonal Gradient Boosting for Simpler Additive Rule Ensembles
Gradient boosting of prediction rules is an efficient approach to learn potentially interpretable yet accurate probabilistic models. However, actual interpretability requires limiting the number and size of the generated rules, and existing boosting variants are not designed for this purpose. Though corrective boosting refits all rule weights in each iteration to minimise prediction risk, the included rule conditions tend to be sub-optimal, because commonly used objective functions fail to anticipate this refitting. Here, we address this issue by a new objective function that measures the angle between the risk gradient vector and the projection of the condition output vector onto the orthogonal complement of the already selected conditions. This approach correctly approximates the ideal update of adding the risk gradient itself to the model and favours the inclusion of more general and thus shorter rules. As we demonstrate using a wide range of prediction tasks, this significantly improves the comprehensibility/accuracy trade-off of the fitted ensemble. Additionally, we show how objective values for related rule conditions can be computed incrementally to avoid any substantial computational overhead of the new method.
[ "['Fan Yang' 'Pierre Le Bodic' 'Michael Kamp' 'Mario Boley']" ]

id: 2402.15700
link: http://arxiv.org/pdf/2402.15700v1
updated: 2024-02-24T03:25:28Z
published: 2024-02-24T03:25:28Z
title: CoRelation: Boosting Automatic ICD Coding Through Contextualized Code Relation Learning
Automatic International Classification of Diseases (ICD) coding plays a crucial role in the extraction of relevant information from clinical notes for proper recording and billing. One of the most important directions for boosting the performance of automatic ICD coding is modeling ICD code relations. However, current methods insufficiently model the intricate relationships among ICD codes and often overlook the importance of context in clinical notes. In this paper, we propose a novel approach, a contextualized and flexible framework, to enhance the learning of ICD code representations. Our approach, unlike existing methods, employs a dependent learning paradigm that considers the context of clinical notes in modeling all possible code relations. We evaluate our approach on six public ICD coding datasets and the experimental results demonstrate the effectiveness of our approach compared to state-of-the-art baselines.
[ "['Junyu Luo' 'Xiaochen Wang' 'Jiaqi Wang' 'Aofei Chang' 'Yaqing Wang'\n 'Fenglong Ma']" ]

id: 2402.15703
link: http://arxiv.org/pdf/2402.15703v1
updated: 2024-02-24T03:41:09Z
published: 2024-02-24T03:41:09Z
title: Is Offline Decision Making Possible with Only Few Samples? Reliable Decisions in Data-Starved Bandits via Trust Region Enhancement
What can an agent learn in a stochastic Multi-Armed Bandit (MAB) problem from a dataset that contains just a single sample for each arm? Surprisingly, in this work, we demonstrate that even in such a data-starved setting it may still be possible to find a policy competitive with the optimal one. This paves the way to reliable decision-making in settings where critical decisions must be made by relying only on a handful of samples. Our analysis reveals that \emph{stochastic policies can be substantially better} than deterministic ones for offline decision-making. Focusing on offline multi-armed bandits, we design an algorithm called Trust Region of Uncertainty for Stochastic policy enhancemenT (TRUST) which is quite different from the predominant value-based lower confidence bound approach. Its design is enabled by localization laws, critical radii, and relative pessimism. We prove that its sample complexity is comparable to that of LCB on minimax problems while being substantially lower on problems with very few samples. Finally, we consider an application to offline reinforcement learning in the special case where the logging policies are known.
[ "['Ruiqi Zhang' 'Yuexiang Zhai' 'Andrea Zanette']" ]

id: 2402.15710
link: http://arxiv.org/pdf/2402.15710v1
updated: 2024-02-24T04:13:40Z
published: 2024-02-24T04:13:40Z
title: A Statistical Analysis of Wasserstein Autoencoders for Intrinsically Low-dimensional Data
Variational Autoencoders (VAEs) have gained significant popularity among researchers as a powerful tool for understanding unknown distributions based on limited samples. This popularity stems partly from their impressive performance and partly from their ability to provide meaningful feature representations in the latent space. Wasserstein Autoencoders (WAEs), a variant of VAEs, aim to not only improve model efficiency but also interpretability. However, there has been limited focus on analyzing their statistical guarantees. The matter is further complicated by the fact that the data distributions to which WAEs are applied - such as natural images - are often presumed to possess an underlying low-dimensional structure within a high-dimensional feature space, which current theory does not adequately account for, rendering known bounds inefficient. To bridge the gap between the theory and practice of WAEs, in this paper, we show that WAEs can learn the data distributions when the network architectures are properly chosen. We show that the convergence rates of the expected excess risk in the number of samples for WAEs are independent of the high feature dimension, instead relying only on the intrinsic dimension of the data distribution.
[ "['Saptarshi Chakraborty' 'Peter L. Bartlett']" ]

id: 2402.15715
link: http://arxiv.org/pdf/2402.15715v1
updated: 2024-02-24T04:40:27Z
published: 2024-02-24T04:40:27Z
title: Operator Learning: Algorithms and Analysis
Operator learning refers to the application of ideas from machine learning to approximate (typically nonlinear) operators mapping between Banach spaces of functions. Such operators often arise from physical models expressed in terms of partial differential equations (PDEs). In this context, such approximate operators hold great potential as efficient surrogate models to complement traditional numerical methods in many-query tasks. Being data-driven, they also enable model discovery when a mathematical description in terms of a PDE is not available. This review focuses primarily on neural operators, built on the success of deep neural networks in the approximation of functions defined on finite dimensional Euclidean spaces. Empirically, neural operators have shown success in a variety of applications, but our theoretical understanding remains incomplete. This review article summarizes recent progress and the current state of our theoretical understanding of neural operators, focusing on an approximation theoretic point of view.
[ "['Nikola B. Kovachki' 'Samuel Lanthaler' 'Andrew M. Stuart']" ]

id: 2402.15718
link: http://arxiv.org/pdf/2402.15718v1
updated: 2024-02-24T04:57:59Z
published: 2024-02-24T04:57:59Z
title: A Duality Analysis of Kernel Ridge Regression in the Noiseless Regime
In this paper, we conduct a comprehensive analysis of generalization properties of Kernel Ridge Regression (KRR) in the noiseless regime, a scenario crucial to scientific computing, where data are often generated via computer simulations. We prove that KRR can attain the minimax optimal rate, which depends on both the eigenvalue decay of the associated kernel and the relative smoothness of target functions. Particularly, when the eigenvalue decays exponentially fast, KRR achieves the spectral accuracy, i.e., a convergence rate faster than any polynomial. Moreover, the numerical experiments well corroborate our theoretical findings. Our proof leverages a novel extension of the duality framework introduced by Chen et al. (2023), which could be useful in analyzing kernel-based methods beyond the scope of this work.
[ "['Jihao Long' 'Xiaojun Peng' 'Lei Wu']" ]

id: 2402.15730
link: http://arxiv.org/pdf/2402.15730v1
updated: 2024-02-24T05:48:39Z
published: 2024-02-24T05:48:39Z
title: Understanding Missingness in Time-series Electronic Health Records for Individualized Representation
With the widespread adoption of machine learning models for healthcare applications, there is increased interest in building applications for personalized medicine. Despite the plethora of proposed research for personalized medicine, very few studies focus on representing missingness and learning from the missingness patterns in time-series Electronic Health Records (EHR) data. The lack of focus on missingness representation in an individualized way limits the full utilization of machine learning applications towards true personalization. In this brief communication, we highlight new insights into patterns of missingness with real-world examples and implications of missingness in EHRs. The insights in this work aim to bridge the gap between theoretical assumptions and practical observations in real-world EHRs. We hope this work will open new doors for exploring directions for better representation in predictive modelling for true personalization.
[ "['Ghadeer O. Ghosheh' 'Jin Li' 'Tingting Zhu']" ]

id: 2402.15731
link: http://arxiv.org/pdf/2402.15731v2
updated: 2024-04-09T09:40:22Z
published: 2024-02-24T05:49:27Z
title: Clustering in Dynamic Environments: A Framework for Benchmark Dataset Generation With Heterogeneous Changes
Clustering in dynamic environments is of increasing importance, with broad applications ranging from real-time data analysis and online unsupervised learning to dynamic facility location problems. While meta-heuristics have shown promising effectiveness in static clustering tasks, their application for tracking optimal clustering solutions or robust clustering over time in dynamic environments remains largely underexplored. This is partly due to a lack of dynamic datasets with diverse, controllable, and realistic dynamic characteristics, hindering systematic performance evaluations of clustering algorithms in various dynamic scenarios. This deficiency leads to a gap in our understanding and capability to effectively design algorithms for clustering in dynamic environments. To bridge this gap, this paper introduces the Dynamic Dataset Generator (DDG). DDG features multiple dynamic Gaussian components integrated with a range of heterogeneous, local, and global changes. These changes vary in spatial and temporal severity, patterns, and domain of influence, providing a comprehensive tool for simulating a wide range of dynamic scenarios.
[ "['Danial Yazdani' 'Juergen Branke' 'Mohammad Sadegh Khorshidi'\n 'Mohammad Nabi Omidvar' 'Xiaodong Li' 'Amir H. Gandomi' 'Xin Yao']" ]

id: 2402.15733
link: http://arxiv.org/pdf/2402.15733v2
updated: 2024-04-01T06:59:21Z
published: 2024-02-24T06:05:15Z
title: ArEEG_Chars: Dataset for Envisioned Speech Recognition using EEG for Arabic Characters
Brain-Computer Interface (BCI) has been a hot research topic in the last few years that could help paralyzed people in their lives. Several studies have been conducted to classify electroencephalography (EEG) signals automatically into English characters and words. Arabic is one of the most widely used languages around the world. However, to the best of our knowledge, there is no EEG dataset for Arabic characters. In this paper, we have created an EEG dataset for Arabic characters and named it ArEEG_Chars. Moreover, several deep learning experiments were conducted on ArEEG_Chars. The best results were achieved using an LSTM, reaching an accuracy of 97%. The ArEEG_Chars dataset will be made public for researchers.
[ "['Hazem Darwish' 'Abdalrahman Al Malah' 'Khloud Al Jallad' 'Nada Ghneim']" ]

id: 2402.15734
link: http://arxiv.org/pdf/2402.15734v2
updated: 2024-06-13T08:28:49Z
published: 2024-02-24T06:27:33Z
title: Data-Efficient Operator Learning via Unsupervised Pretraining and In-Context Learning
Recent years have witnessed the promise of coupling machine learning methods and physical domain-specific insights for solving scientific problems based on partial differential equations (PDEs). However, being data-intensive, these methods still require a large amount of PDE data. This reintroduces the need for expensive numerical PDE solutions, partially undermining the original goal of avoiding these expensive simulations. In this work, seeking data efficiency, we design unsupervised pretraining for PDE operator learning. To reduce the need for training data with heavy simulation costs, we mine unlabeled PDE data without simulated solutions, and pretrain neural operators with physics-inspired reconstruction-based proxy tasks. To improve out-of-distribution performance, we further assist neural operators in flexibly leveraging in-context learning methods, without incurring extra training costs or designs. Extensive empirical evaluations on a diverse set of PDEs demonstrate that our method is highly data-efficient, more generalizable, and even outperforms conventional vision-pretrained models.
[ "['Wuyang Chen' 'Jialin Song' 'Pu Ren' 'Shashank Subramanian'\n 'Dmitriy Morozov' 'Michael W. Mahoney']" ]

id: 2402.15739
link: http://arxiv.org/pdf/2402.15739v2
updated: 2024-07-04T11:17:38Z
published: 2024-02-24T06:36:08Z
title: Low-Rank Bandits via Tight Two-to-Infinity Singular Subspace Recovery
We study contextual bandits with low-rank structure where, in each round, if the (context, arm) pair $(i,j)\in [m]\times [n]$ is selected, the learner observes a noisy sample of the $(i,j)$-th entry of an unknown low-rank reward matrix. Successive contexts are generated randomly in an i.i.d. manner and are revealed to the learner. For such bandits, we present efficient algorithms for policy evaluation, best policy identification and regret minimization. For policy evaluation and best policy identification, we show that our algorithms are nearly minimax optimal. For instance, the number of samples required to return an $\varepsilon$-optimal policy with probability at least $1-\delta$ typically scales as ${r(m+n)\over \varepsilon^2}\log(1/\delta)$. Our regret minimization algorithm enjoys minimax guarantees typically scaling as $r^{7/4}(m+n)^{3/4}\sqrt{T}$, which improves over existing algorithms. All the proposed algorithms consist of two phases: they first leverage spectral methods to estimate the left and right singular subspaces of the low-rank reward matrix. We show that these estimates enjoy tight error guarantees in the two-to-infinity norm. This in turn allows us to reformulate our problems as a misspecified linear bandit problem with dimension roughly $r(m+n)$ and misspecification controlled by the subspace recovery error, as well as to design the second phase of our algorithms efficiently.
[ "['Yassir Jedra' 'William Réveillard' 'Stefan Stojanovic'\n 'Alexandre Proutiere']" ]
null
null
2402.15751
null
null
http://arxiv.org/pdf/2402.15751v1
2024-02-24T07:22:04Z
2024-02-24T07:22:04Z
Sparse MeZO: Less Parameters for Better Performance in Zeroth-Order LLM Fine-Tuning
While fine-tuning large language models (LLMs) for specific tasks often yields impressive results, it comes at the cost of memory inefficiency due to back-propagation in gradient-based training. Memory-efficient Zeroth-order (MeZO) optimizers, recently proposed to address this issue, only require forward passes during training, making them more memory-friendly. However, the quality of gradient estimates in zeroth-order optimization often depends on the data dimensionality, potentially explaining why MeZO still exhibits significant performance drops compared to standard fine-tuning across various tasks. Inspired by the success of Parameter-Efficient Fine-Tuning (PEFT), this paper introduces Sparse MeZO, a novel memory-efficient zeroth-order optimization approach that applies ZO only to a carefully chosen subset of parameters. We propose a simple yet effective parameter selection scheme that yields significant performance gains with Sparse-MeZO. Additionally, we develop a memory-optimized implementation for sparse masking, ensuring the algorithm requires only inference-level memory consumption, allowing Sparse-MeZO to fine-tune LLaMA-30b on a single A100 GPU. Experimental results illustrate that Sparse-MeZO consistently improves both performance and convergence speed over MeZO without any overhead. For example, it achieves a 9% absolute accuracy improvement and 3.5x speedup over MeZO on the RTE task.
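The following minimal sketch illustrates the core idea of applying a zeroth-order (SPSA-style) gradient estimate to only a masked subset of parameters. The toy quadratic loss, the random mask, and all hyperparameters are illustrative stand-ins, not the paper's parameter selection scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(theta):
    # Toy quadratic standing in for an LLM fine-tuning loss.
    return float(np.sum((theta - 1.0) ** 2))

theta = np.zeros(1000)
# Hypothetical selection: a random 10% of coordinates (the paper's
# scheme chooses the subset more carefully).
mask = np.zeros_like(theta, dtype=bool)
mask[rng.choice(theta.size, size=theta.size // 10, replace=False)] = True

eps, lr = 1e-3, 0.05
for step in range(500):
    z = np.zeros_like(theta)
    z[mask] = rng.normal(size=mask.sum())      # perturb the masked subset only
    g = (loss(theta + eps * z) - loss(theta - eps * z)) / (2 * eps)
    theta[mask] -= lr * g * z[mask]            # sparse zeroth-order SGD update
print(f"loss on masked coords: {np.sum((theta[mask] - 1.0) ** 2):.4f}")
```

Only two forward passes are needed per step, which is what makes this family of methods memory-friendly relative to back-propagation.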
[ "['Yong Liu' 'Zirui Zhu' 'Chaoyu Gong' 'Minhao Cheng' 'Cho-Jui Hsieh'\n 'Yang You']" ]
null
null
2402.15757
null
null
http://arxiv.org/pdf/2402.15757v1
2024-02-24T08:07:48Z
2024-02-24T08:07:48Z
Batch Active Learning of Reward Functions from Human Preferences
Data generation and labeling are often expensive in robot learning. Preference-based learning is a concept that enables reliable labeling by querying users with preference questions. Active querying methods are commonly employed in preference-based learning to generate more informative data at the expense of parallelization and computation time. In this paper, we develop a set of novel algorithms, batch active preference-based learning methods, that enable efficient learning of reward functions using as few data samples as possible while still having short query generation times and also retaining parallelizability. We introduce a method based on determinantal point processes (DPP) for active batch generation and several heuristic-based alternatives. Finally, we present our experimental results for a variety of robotics tasks in simulation. Our results suggest that our batch active learning algorithm requires only a few queries that are computed in a short amount of time. We showcase one of our algorithms in a study to learn human users' preferences.
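A minimal sketch of DPP-based batch selection, using the standard greedy log-determinant surrogate on a quality-diversity kernel; the features, quality scores, and kernel are illustrative placeholders for whatever query representations the method would use.

```python
import numpy as np

def greedy_dpp_batch(K, batch_size):
    """Greedily grow a batch maximizing the log-determinant of the DPP
    kernel submatrix (a common MAP surrogate, not exact DPP sampling)."""
    selected = []
    for _ in range(batch_size):
        best, best_val = None, -np.inf
        for i in range(K.shape[0]):
            if i in selected:
                continue
            idx = selected + [i]
            val = np.linalg.slogdet(K[np.ix_(idx, idx)])[1]
            if val > best_val:
                best, best_val = i, val
        selected.append(best)
    return selected

rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 5))       # stand-ins for candidate-query features
quality = rng.random(100)               # stand-ins for informativeness scores
S = np.exp(-np.sum((feats[:, None] - feats[None]) ** 2, axis=-1))
K = quality[:, None] * S * quality[None, :]   # quality-diversity DPP kernel
print(greedy_dpp_batch(K + 1e-6 * np.eye(100), 5))
```

The determinant rewards batches that are both individually informative (large quality) and mutually diverse (nearly orthogonal in feature space), which is the property that makes DPPs attractive for batch query generation.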
[ "['Erdem Bıyık' 'Nima Anari' 'Dorsa Sadigh']" ]
null
null
2402.15776
null
null
http://arxiv.org/pdf/2402.15776v2
2024-03-18T11:51:18Z
2024-02-24T09:47:46Z
Truly No-Regret Learning in Constrained MDPs
Constrained Markov decision processes (CMDPs) are a common way to model safety constraints in reinforcement learning. State-of-the-art methods for efficiently solving CMDPs are based on primal-dual algorithms. For these algorithms, all currently known regret bounds allow for error cancellations -- one can compensate for a constraint violation in one round with a strict constraint satisfaction in another. This makes the online learning process unsafe since it only guarantees safety for the final (mixture) policy but not during learning. As Efroni et al. (2020) pointed out, it is an open question whether primal-dual algorithms can provably achieve sublinear regret if we do not allow error cancellations. In this paper, we give the first affirmative answer. We first generalize a result on last-iterate convergence of regularized primal-dual schemes to CMDPs with multiple constraints. Building upon this insight, we propose a model-based primal-dual algorithm to learn in an unknown CMDP. We prove that our algorithm achieves sublinear regret without error cancellations.
[ "['Adrian Müller' 'Pragnya Alatur' 'Volkan Cevher' 'Giorgia Ramponi'\n 'Niao He']" ]
null
null
2402.15781
null
null
http://arxiv.org/pdf/2402.15781v2
2024-04-08T17:45:28Z
2024-02-24T10:42:50Z
Analysis of Off-Policy Multi-Step TD-Learning with Linear Function Approximation
This paper analyzes multi-step TD-learning algorithms within the `deadly triad' scenario, characterized by linear function approximation, off-policy learning, and bootstrapping. In particular, we prove that n-step TD-learning algorithms converge to a solution as the sampling horizon n increases sufficiently. The paper is divided into two parts. In the first part, we comprehensively examine the fundamental properties of their model-based deterministic counterparts, including projected value iteration, gradient descent algorithms, and the control theoretic approach, which can be viewed as prototype deterministic algorithms whose analysis plays a pivotal role in understanding and developing their model-free reinforcement learning counterparts. In particular, we prove that these algorithms converge to meaningful solutions when n is sufficiently large. Based on these findings, two n-step TD-learning algorithms are proposed and analyzed, which can be seen as the model-free reinforcement learning counterparts of the gradient and control theoretic algorithms.
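For intuition, here is a minimal on-policy n-step TD sketch with linear function approximation on a toy chain (one-hot features are the simplest linear special case); the paper's analysis concerns the harder off-policy setting, which this sketch does not implement.

```python
import numpy as np

n_states, n, gamma, alpha = 10, 4, 0.9, 0.05
phi = np.eye(n_states)          # one-hot features: a special case of linear FA
w = np.zeros(n_states)          # value estimate is V(s) = w @ phi[s]

def step(s):
    # Toy chain: walk right, reward 1 when leaving the last state, wrap around.
    s2 = (s + 1) % n_states
    return s2, 1.0 if s == n_states - 1 else 0.0

s, buf = 0, []
for t in range(20000):
    s2, r = step(s)
    buf.append((s, r))
    if len(buf) == n:
        s0 = buf[0][0]
        # n-step return, bootstrapped from the current value estimate.
        G = sum(gamma**k * buf[k][1] for k in range(n)) + gamma**n * (w @ phi[s2])
        w += alpha * (G - w @ phi[s0]) * phi[s0]
        buf.pop(0)
    s = s2
print(np.round(w, 2))
```

Increasing n shifts the target from pure bootstrapping toward Monte Carlo returns, which is the knob the paper's convergence result turns.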
[ "['Donghwan Lee']" ]
null
null
2402.15808
null
null
http://arxiv.org/pdf/2402.15808v2
2024-02-27T20:08:16Z
2024-02-24T13:08:39Z
Optimal Zero-Shot Detector for Multi-Armed Attacks
This paper explores a scenario in which a malicious actor employs a multi-armed attack strategy to manipulate data samples, offering them various avenues to introduce noise into the dataset. Our central objective is to protect the data by detecting any alterations to the input. We approach this defensive strategy with utmost caution, operating in an environment where the defender possesses significantly less information compared to the attacker. Specifically, the defender is unable to utilize any data samples for training a defense model or verifying the integrity of the channel. Instead, the defender relies exclusively on a set of pre-existing detectors readily available "off the shelf". To tackle this challenge, we derive an innovative information-theoretic defense approach that optimally aggregates the decisions made by these detectors, eliminating the need for any training data. We further explore a practical use-case scenario for empirical evaluation, where the attacker possesses a pre-trained classifier and launches well-known adversarial attacks against it. Our experiments highlight the effectiveness of our proposed solution, even in scenarios that deviate from the optimal setup.
[ "['Federica Granese' 'Marco Romanelli' 'Pablo Piantanida']" ]
null
null
2402.15810
null
null
http://arxiv.org/abs/2402.15810v2
2024-06-20T04:15:12Z
2024-02-24T13:15:54Z
OAG-Bench: A Human-Curated Benchmark for Academic Graph Mining
With the rapid proliferation of scientific literature, versatile academic knowledge services increasingly rely on comprehensive academic graph mining. Despite the availability of public academic graphs, benchmarks, and datasets, these resources often fall short in multi-aspect and fine-grained annotations, are constrained to specific task types and domains, or lack underlying real academic graphs. In this paper, we present OAG-Bench, a comprehensive, multi-aspect, and fine-grained human-curated benchmark based on the Open Academic Graph (OAG). OAG-Bench covers 10 tasks, 20 datasets, 70+ baselines, and 120+ experimental results to date. We propose new data annotation strategies for certain tasks and offer a suite of data pre-processing codes, algorithm implementations, and standardized evaluation protocols to facilitate academic graph mining. Extensive experiments reveal that even advanced algorithms like large language models (LLMs) encounter difficulties in addressing key challenges in certain tasks, such as paper source tracing and scholar profiling. We also introduce the Open Academic Graph Challenge (OAG-Challenge) to encourage community input and sharing. We envisage that OAG-Bench can serve as a common ground for the community to evaluate and compare algorithms in academic graph mining, thereby accelerating algorithm development and advancement in this field. OAG-Bench is accessible at https://www.aminer.cn/data/.
[ "['Fanjin Zhang' 'Shijie Shi' 'Yifan Zhu' 'Bo Chen' 'Yukuo Cen' 'Jifan Yu'\n 'Yelin Chen' 'Lulu Wang' 'Qingfei Zhao' 'Yuqing Cheng' 'Tianyi Han'\n 'Yuwei An' 'Dan Zhang' 'Weng Lam Tam' 'Kun Cao' 'Yunhe Pang' 'Xinyu Guan'\n 'Huihui Yuan' 'Jian Song' 'Xiaoyan Li' 'Yuxiao Dong' 'Jie Tang']" ]
null
null
2402.15814
null
null
http://arxiv.org/pdf/2402.15814v2
2024-06-18T15:14:18Z
2024-02-24T13:42:06Z
On Efficiently Representing Regular Languages as RNNs
Recent work by Hewitt et al. (2020) provides an interpretation of the empirical success of recurrent neural networks (RNNs) as language models (LMs). It shows that RNNs can efficiently represent bounded hierarchical structures that are prevalent in human language. This suggests that RNNs' success might be linked to their ability to model hierarchy. However, a closer inspection of Hewitt et al.'s (2020) construction shows that it is not inherently limited to hierarchical structures. This poses a natural question: What other classes of LMs can RNNs efficiently represent? To this end, we generalize Hewitt et al.'s (2020) construction and show that RNNs can efficiently represent a larger class of LMs than previously claimed -- specifically, those that can be represented by a pushdown automaton with a bounded stack and a specific stack update function. Altogether, the efficiency of representing this diverse class of LMs with RNN LMs suggests novel interpretations of their inductive bias.
[ "['Anej Svete' 'Robin Shing Moon Chan' 'Ryan Cotterell']" ]
null
null
2402.15815
null
null
http://arxiv.org/pdf/2402.15815v1
2024-02-24T13:42:34Z
2024-02-24T13:42:34Z
A Generative Machine Learning Model for Material Microstructure 3D Reconstruction and Performance Evaluation
The reconstruction of 3D microstructures from 2D slices is considered to hold significant value in predicting the spatial structure and physical properties of materials. The dimensional extension from 2D to 3D is viewed as a highly challenging inverse problem from the current technological perspective. Recently, methods based on generative adversarial networks have garnered widespread attention. However, they are still hampered by numerous limitations, including oversimplified models, a requirement for a substantial number of training samples, and difficulties in achieving model convergence during training. In light of this, a novel generative model that integrates the multiscale properties of U-net with the generative capabilities of GANs has been proposed. Building on this, the innovative construction of a multi-scale channel aggregation module, a multi-scale hierarchical feature aggregation module and a convolutional block attention mechanism can better capture the properties of the material microstructure and extract the image information. The model's accuracy is further improved by combining the image regularization loss with the Wasserstein distance loss. In addition, this study utilizes the anisotropy index to accurately distinguish the nature of the image, which can clearly determine the isotropy and anisotropy of the image. It is also the first time that the generation quality of material samples from different domains is evaluated and the performance of the model itself is compared. The experimental results demonstrate that the present model not only shows a very high similarity between the generated 3D structures and real samples but is also highly consistent with real data in terms of statistical data analysis.
[ "['Yilin Zheng' 'Zhigong Song']" ]
null
null
2402.15819
null
null
http://arxiv.org/pdf/2402.15819v1
2024-02-24T14:10:04Z
2024-02-24T14:10:04Z
Debiased Model-based Interactive Recommendation
Existing model-based interactive recommendation systems are trained by querying a world model to capture the user preference, but learning the world model from historical logged data will easily suffer from bias issues such as popularity bias and sampling bias. This is why some debiased methods have been proposed recently. However, two essential drawbacks still remain: 1) ignoring the dynamics of the time-varying popularity results in a false reweighting of items. 2) taking the unknown samples as negative samples in negative sampling results in the sampling bias. To overcome these two drawbacks, we develop a model called \textbf{i}dentifiable \textbf{D}ebiased \textbf{M}odel-based \textbf{I}nteractive \textbf{R}ecommendation (\textbf{iDMIR} in short). In iDMIR, for the first drawback, we devise a debiased causal world model based on the causal mechanism of the time-varying recommendation generation process with identification guarantees; for the second drawback, we devise a debiased contrastive policy, which coincides with the debiased contrastive learning and avoids sampling bias. Moreover, we demonstrate that the proposed method not only outperforms several latest interactive recommendation algorithms but also enjoys diverse recommendation performance.
[ "['Zijian Li' 'Ruichu Cai' 'Haiqin Huang' 'Sili Zhang' 'Yuguang Yan'\n 'Zhifeng Hao' 'Zhenghua Dong']" ]
null
null
2402.15826
null
null
http://arxiv.org/pdf/2402.15826v1
2024-02-24T14:29:30Z
2024-02-24T14:29:30Z
Reward Design for Justifiable Sequential Decision-Making
Equipping agents with the capacity to justify the decisions they make using supporting evidence represents a cornerstone of accountable decision-making. Furthermore, ensuring that justifications are in line with human expectations and societal norms is vital, especially in high-stakes situations such as healthcare. In this work, we propose the use of a debate-based reward model for reinforcement learning agents, where the outcome of a zero-sum debate game quantifies the justifiability of a decision in a particular state. This reward model is then used to train a justifiable policy, whose decisions can be more easily corroborated with supporting evidence. In the debate game, two argumentative agents take turns providing supporting evidence for two competing decisions. Given the proposed evidence, a proxy of a human judge evaluates which decision is better justified. We demonstrate the potential of our approach in learning policies for prescribing and justifying treatment decisions of septic patients. We show that augmenting the reward with the feedback signal generated by the debate-based reward model yields policies highly favored by the judge when compared to the policy obtained solely from the environment rewards, while hardly sacrificing any performance. Moreover, in terms of the overall performance and justifiability of trained policies, the debate-based feedback is comparable to the feedback obtained from an ideal judge proxy that evaluates decisions using the full information encoded in the state. This suggests that the debate game outputs key information contained in states that is most relevant for evaluating decisions, which in turn substantiates the practicality of combining our approach with human-in-the-loop evaluations. Lastly, we showcase that agents trained via multi-agent debate learn to propose evidence that is resilient to refutations and closely aligns with human preferences.
[ "['Aleksa Sukovic' 'Goran Radanovic']" ]
null
null
2402.15833
null
null
http://arxiv.org/pdf/2402.15833v1
2024-02-24T15:00:58Z
2024-02-24T15:00:58Z
Prompt Perturbation Consistency Learning for Robust Language Models
Large language models (LLMs) have demonstrated impressive performance on a number of natural language processing tasks, such as question answering and text summarization. However, their performance on sequence labeling tasks such as intent classification and slot filling (IC-SF), which is a central component in personal assistant systems, lags significantly behind discriminative models. Furthermore, there is a lack of substantive research on the robustness of LLMs to various perturbations in the input prompts. The contributions of this paper are three-fold. First, we show that fine-tuning sufficiently large LLMs can produce IC-SF performance comparable to discriminative models. Next, we systematically analyze the performance deterioration of those fine-tuned models due to three distinct yet relevant types of input perturbations - oronyms, synonyms, and paraphrasing. Finally, we propose an efficient mitigation approach, Prompt Perturbation Consistency Learning (PPCL), which works by regularizing the divergence between losses from clean and perturbed samples. Our experiments demonstrate that PPCL can recover on average 59% and 69% of the performance drop for IC and SF tasks, respectively. Furthermore, PPCL beats the data augmentation approach while using ten times fewer augmented data samples.
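A minimal sketch of the consistency-regularization idea: a task loss on clean inputs plus a symmetric KL penalty between clean and perturbed predictions. The linear model, the Gaussian perturbation standing in for oronym/synonym/paraphrase noise, and the weight `lam` are all illustrative.

```python
import torch
import torch.nn.functional as F

model = torch.nn.Linear(16, 4)          # stand-in for a fine-tuned LLM head
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
lam = 1.0                                # consistency weight (illustrative)

clean = torch.randn(8, 16)               # clean prompt representations
perturbed = clean + 0.1 * torch.randn(8, 16)   # proxy for prompt perturbation
labels = torch.randint(0, 4, (8,))

logits_c, logits_p = model(clean), model(perturbed)
ce = F.cross_entropy(logits_c, labels)
# Symmetric KL divergence between clean and perturbed predictions.
kl = (F.kl_div(F.log_softmax(logits_p, -1), F.softmax(logits_c, -1),
               reduction="batchmean")
      + F.kl_div(F.log_softmax(logits_c, -1), F.softmax(logits_p, -1),
                 reduction="batchmean"))
loss = ce + lam * kl
loss.backward()
opt.step()
print(float(loss))
```

The regularizer pushes the model to give the same answer regardless of surface-level changes to the prompt, which is what recovers robustness without needing ten times more augmented data.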
[ "['Yao Qiang' 'Subhrangshu Nandi' 'Ninareh Mehrabi' 'Greg Ver Steeg'\n 'Anoop Kumar' 'Anna Rumshisky' 'Aram Galstyan']" ]
null
null
2402.15864
null
null
http://arxiv.org/pdf/2402.15864v1
2024-02-24T17:13:58Z
2024-02-24T17:13:58Z
Field-based Molecule Generation
This work introduces FMG, a field-based model for drug-like molecule generation. We show how the flexibility of this method provides crucial advantages over the prevalent, point-cloud based methods, and achieves competitive molecular stability generation. We tackle optical isomerism (enantiomers), a previously omitted molecular property that is crucial for drug safety and effectiveness, and thus account for all molecular geometry aspects. We demonstrate how previous methods are invariant to a group of transformations that includes enantiomer pairs, leading them invariant to the molecular R and S configurations, while our field-based generative model captures this property.
[ "['Alexandru Dumitrescu' 'Dani Korpela' 'Markus Heinonen' 'Yogesh Verma'\n 'Valerii Iakovlev' 'Vikas Garg' 'Harri Lähdesmäki']" ]
null
null
2402.15883
null
null
http://arxiv.org/pdf/2402.15883v2
2024-03-04T17:24:11Z
2024-02-24T19:06:41Z
Fusion Encoder Networks
In this paper we present fusion encoder networks (FENs): a class of algorithms for creating neural networks that map sequences to outputs. The resulting neural network has only logarithmic depth (alleviating the degradation of data as it propagates through the network) and can process sequences in linear time (or in logarithmic time with a linear number of processors). The crucial property of FENs is that they learn by training a quasi-linear number of constant-depth feed-forward neural networks in parallel. The fact that these networks have constant depth means that backpropagation works well. We note that currently the performance of FENs is only conjectured as we are yet to implement them.
[ "['Stephen Pasteris' 'Chris Hicks' 'Vasilios Mavroudis']" ]
null
null
2402.15892
null
null
http://arxiv.org/pdf/2402.15892v1
2024-02-24T19:59:15Z
2024-02-24T19:59:15Z
Statistical Games
This work contains the mathematical exploration of a few prototypical games in which central concepts from statistics and probability theory naturally emerge. The first two kinds of games are termed Fisher and Bayesian games, which are connected to Frequentist and Bayesian statistics, respectively. Later, a more general type of game is introduced, termed Statistical game, in which a further parameter, the players' relative risk aversion, can be set. In this work, we show that Fisher and Bayesian games can be viewed as limiting cases of Statistical games. Therefore, Statistical games can be viewed as a unified framework, incorporating both Frequentist and Bayesian statistics. Furthermore, a philosophical framework is (re-)presented -- often referred to as minimax regret criterion -- as a general approach to decision making. The main motivation for this work was to embed Bayesian statistics into a broader decision-making framework, where, based on collected data, actions with consequences have to be made, which can be translated to utilities (or rewards/losses) of the decision-maker. The work starts with the simplest possible toy model, related to hypothesis testing and statistical inference. This choice has two main benefits: (i) it allows us to determine (conjecture) the behaviour of the equilibrium strategies in various limiting cases; (ii) it lets us introduce Statistical games without requiring additional stochastic parameters. The work contains game theoretical methods related to two-player, non-cooperative games to determine and prove equilibrium strategies of Fisher, Bayesian and Statistical games. It also relies on analytical tools for derivations concerning various limiting cases.
[ "['Jozsef Konczer']" ]
null
null
2402.15893
null
null
http://arxiv.org/pdf/2402.15893v3
2024-03-24T19:34:34Z
2024-02-24T20:01:15Z
Concurrent Learning of Policy and Unknown Safety Constraints in Reinforcement Learning
Reinforcement learning (RL) has revolutionized decision-making across a wide range of domains over the past few decades. Yet, deploying RL policies in real-world scenarios presents the crucial challenge of ensuring safety. Traditional safe RL approaches have predominantly focused on incorporating predefined safety constraints into the policy learning process. However, this reliance on predefined safety constraints poses limitations in dynamic and unpredictable real-world settings where such constraints may not be available or sufficiently adaptable. Bridging this gap, we propose a novel approach that concurrently learns a safe RL control policy and identifies the unknown safety constraint parameters of a given environment. Initializing with a parametric signal temporal logic (pSTL) safety specification and a small initial labeled dataset, we frame the problem as a bilevel optimization task, intricately integrating constrained policy optimization, using a Lagrangian-variant of the twin delayed deep deterministic policy gradient (TD3) algorithm, with Bayesian optimization for optimizing parameters for the given pSTL safety specification. Through experimentation in comprehensive case studies, we validate the efficacy of this approach across varying forms of environmental constraints, consistently yielding safe RL policies with high returns. Furthermore, our findings indicate successful learning of STL safety constraint parameters, exhibiting a high degree of conformity with true environmental safety constraints. The performance of our model closely mirrors that of an ideal scenario that possesses complete prior knowledge of safety constraints, demonstrating its proficiency in accurately identifying environmental safety constraints and learning safe policies that adhere to those constraints.
[ "['Lunet Yifru' 'Ali Baheri']" ]
null
null
2402.15898
null
null
http://arxiv.org/pdf/2402.15898v3
2024-05-22T20:19:02Z
2024-02-13T09:22:45Z
Transductive Active Learning: Theory and Applications
We generalize active learning to address real-world settings with concrete prediction targets where sampling is restricted to an accessible region of the domain, while prediction targets may lie outside this region. We analyze a family of decision rules that sample adaptively to minimize uncertainty about prediction targets. We are the first to show, under general regularity assumptions, that such decision rules converge uniformly to the smallest possible uncertainty obtainable from the accessible data. We demonstrate their strong sample efficiency in two key applications: Active few-shot fine-tuning of large neural networks and safe Bayesian optimization, where they improve significantly upon the state-of-the-art.
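For intuition, here is a minimal sketch of one such decision rule under a Gaussian-process prior: greedily sample points from the accessible region that most reduce the total posterior variance at out-of-region prediction targets. The kernel, regions, and budget are illustrative; this is not the paper's general decision-rule family.

```python
import numpy as np

def rbf(X, Y, ls=0.5):
    d2 = np.sum((X[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2 * ls**2))

rng = np.random.default_rng(0)
pool = rng.uniform(0.0, 1.0, size=(50, 1))      # accessible sampling region
targets = rng.uniform(1.0, 1.5, size=(5, 1))    # prediction targets outside it
noise = 1e-2

chosen = []
for _ in range(10):
    best, best_var = None, np.inf
    for i in range(len(pool)):
        if i in chosen:
            continue
        idx = chosen + [i]
        X = pool[idx]
        K = rbf(X, X) + noise * np.eye(len(idx))
        k_t = rbf(targets, X)
        # Remaining GP posterior variance at the targets if x_i is added.
        post_var = np.trace(rbf(targets, targets) - k_t @ np.linalg.solve(K, k_t.T))
        if post_var < best_var:
            best, best_var = i, post_var
    chosen.append(best)
print(chosen, round(best_var, 4))
```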
[ "['Jonas Hübotter' 'Bhavya Sukhija' 'Lenart Treven' 'Yarden As'\n 'Andreas Krause']" ]
null
null
2402.15903
null
null
http://arxiv.org/pdf/2402.15903v2
2024-04-17T02:59:30Z
2024-02-24T20:50:29Z
ESFL: Efficient Split Federated Learning over Resource-Constrained Heterogeneous Wireless Devices
Federated learning (FL) allows multiple parties (distributed devices) to train a machine learning model without sharing raw data. How to effectively and efficiently utilize the resources on devices and the central server is a highly interesting yet challenging problem. In this paper, we propose an efficient split federated learning algorithm (ESFL) to take full advantage of the powerful computing capabilities at a central server under a split federated learning framework with heterogeneous end devices (EDs). By splitting the model into different submodels between the server and EDs, our approach jointly optimizes user-side workload and server-side computing resource allocation by considering users' heterogeneity. We formulate the whole optimization problem as a mixed-integer non-linear program, which is an NP-hard problem, and develop an iterative approach to obtain an approximate solution efficiently. Extensive simulations have been conducted to validate the significantly increased efficiency of our ESFL approach compared with standard federated learning, split learning, and splitfed learning.
[ "['Guangyu Zhu' 'Yiqin Deng' 'Xianhao Chen' 'Haixia Zhang' 'Yuguang Fang'\n 'Tan F. Wong']" ]
null
null
2402.15905
null
null
http://arxiv.org/pdf/2402.15905v1
2024-02-24T21:03:30Z
2024-02-24T21:03:30Z
Explainable Contrastive and Cost-Sensitive Learning for Cervical Cancer Classification
This paper proposes an efficient system for classifying cervical cancer cells using pre-trained convolutional neural networks (CNNs). We first fine-tune five pre-trained CNNs and minimize the overall cost of misclassification by prioritizing accuracy for certain classes that have higher associated costs or importance. To further enhance the performance of the models, supervised contrastive learning is included to make the models more adept at capturing important features and patterns. Extensive experiments are conducted to evaluate the proposed system on the SIPaKMeD dataset. The experimental results demonstrate the effectiveness of the developed system, achieving an accuracy of 97.29%. To make our system more trustworthy, we have employed several explainable AI techniques to interpret how the models reached a specific decision. The implementation of the system can be found at https://github.com/isha-67/CervicalCancerStudy.
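A minimal sketch of the cost-sensitive ingredient: weighting the cross-entropy loss by per-class misclassification costs during fine-tuning (the supervised contrastive component is omitted). The five classes, cost values, and linear head are illustrative.

```python
import torch
import torch.nn.functional as F

# Class weights encode misclassification costs (values are illustrative):
# e.g., penalize errors on higher-risk cell classes more heavily.
costs = torch.tensor([1.0, 1.0, 3.0, 3.0, 5.0])

model = torch.nn.Linear(128, 5)        # stand-in for a fine-tuned CNN head
feats = torch.randn(16, 128)           # synthetic features for one batch
labels = torch.randint(0, 5, (16,))
loss = F.cross_entropy(model(feats), labels, weight=costs)
loss.backward()
print(float(loss))
```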
[ "['Ashfiqun Mustari' 'Rushmia Ahmed' 'Afsara Tasnim' 'Jakia Sultana Juthi'\n 'G M Shahariar']" ]
null
null
2402.15919
null
null
http://arxiv.org/pdf/2402.15919v2
2024-03-04T22:42:28Z
2024-02-24T22:22:02Z
Learning to See Through Dazzle
Machine vision is susceptible to laser dazzle, where intense laser light can blind and distort its perception of the environment through oversaturation or permanent damage to sensor pixels. Here we employ a wavefront-coded phase mask to diffuse the energy of laser light and introduce a sandwich generative adversarial network (SGAN) to restore images from complex image degradations, such as varying laser-induced image saturation, mask-induced image blurring, unknown lighting conditions, and various noise corruptions. The SGAN architecture combines discriminative and generative methods by wrapping two GANs around a learnable image deconvolution module. In addition, we make use of Fourier feature representations to reduce the spectral bias of neural networks and improve their learning of high-frequency image details. End-to-end training includes the realistic physics-based synthesis of a large set of training data from publicly available images. We trained the SGAN to suppress the peak laser irradiance as high as $10^6$ times the sensor saturation threshold - the point at which camera sensors may experience damage without the mask. The trained model was evaluated on both a synthetic data set and data collected from the laboratory. The proposed image restoration model quantitatively and qualitatively outperforms state-of-the-art methods for a wide range of scene contents, laser powers, incident laser angles, ambient illumination strengths, and noise characteristics.
[ "['Xiaopeng Peng' 'Erin F. Fleet' 'Abbie T. Watnik'\n 'Grover A. Swartzlander']" ]
null
null
2402.15921
null
null
http://arxiv.org/pdf/2402.15921v2
2024-06-18T23:19:42Z
2024-02-24T22:32:34Z
Pretraining Strategy for Neural Potentials
We propose a mask pretraining method for Graph Neural Networks (GNNs) to improve their performance on fitting potential energy surfaces, particularly in water systems. GNNs are pretrained by recovering spatial information related to masked-out atoms from molecules, then transferred and finetuned on atomic forcefields. Through such pretraining, GNNs learn a meaningful prior about the structural and underlying physical information of molecular systems that is useful for downstream tasks. From comprehensive experiments and ablation studies, we show that the proposed method improves the accuracy and convergence speed compared to GNNs trained from scratch or using other pretraining techniques such as denoising. Moreover, our pretraining method is suitable for both energy-centric and force-centric GNNs. This approach showcases its potential to enhance the performance and data efficiency of GNNs in fitting molecular force fields.
[ "['Zehua Zhang' 'Zijie Li' 'Amir Barati Farimani']" ]
null
null
2402.15923
null
null
http://arxiv.org/pdf/2402.15923v1
2024-02-24T22:36:23Z
2024-02-24T22:36:23Z
Predicting Outcomes in Video Games with Long Short Term Memory Networks
Forecasting winners in E-sports with real-time analytics has the potential to further engage audiences watching major tournament events. However, making such real-time predictions is challenging due to unpredictable variables within the game involving diverse player strategies and decision-making. Our work attempts to enhance audience engagement within video game tournaments by introducing a real-time method of predicting wins. Our Long Short-Term Memory (LSTM) network-based approach enables efficient predictions of win-lose outcomes by only using the health indicator of each player as a time series. As a proof of concept, we evaluate our model's performance within a classic, two-player arcade game, Super Street Fighter II Turbo. We also benchmark our method against state-of-the-art methods for time series forecasting, i.e., Transformer models found in large language models (LLMs). Finally, we open-source our data set and code in hopes of furthering work in predictive analysis for arcade games.
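A minimal sketch of the approach as described: an LSTM that consumes only the two players' health values over time and outputs a win probability. The architecture sizes and synthetic inputs are illustrative.

```python
import torch
import torch.nn as nn

# Input: per-frame health of both players; output: P(player 1 wins).
class WinPredictor(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, time, 2)
        _, (h, _) = self.lstm(x)
        return torch.sigmoid(self.head(h[-1]))

model = WinPredictor()
health = torch.rand(4, 100, 2)            # synthetic health time series
print(model(health).squeeze(-1))          # win probabilities per match
```

Training would use binary cross-entropy against the recorded outcome; at inference time the same network can be re-run on each new frame to update the forecast in real time.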
[ "['Kittimate Chulajata' 'Sean Wu' 'Fabien Scalzo' 'Eun Sang Cha']" ]
null
null
2402.15926
null
null
http://arxiv.org/pdf/2402.15926v2
2024-06-10T03:32:05Z
2024-02-24T23:10:28Z
Large Stepsize Gradient Descent for Logistic Loss: Non-Monotonicity of the Loss Improves Optimization Efficiency
We consider gradient descent (GD) with a constant stepsize applied to logistic regression with linearly separable data, where the constant stepsize $\eta$ is so large that the loss initially oscillates. We show that GD exits this initial oscillatory phase rapidly -- in $\mathcal{O}(\eta)$ steps -- and subsequently achieves an $\tilde{\mathcal{O}}(1/(\eta t))$ convergence rate after $t$ additional steps. Our results imply that, given a budget of $T$ steps, GD can achieve an accelerated loss of $\tilde{\mathcal{O}}(1/T^2)$ with an aggressive stepsize $\eta := \Theta(T)$, without any use of momentum or variable stepsize schedulers. Our proof technique is versatile and also handles general classification loss functions (where exponential tails are needed for the $\tilde{\mathcal{O}}(1/T^2)$ acceleration), nonlinear predictors in the neural tangent kernel regime, and online stochastic gradient descent (SGD) with a large stepsize, under suitable separability conditions.
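A quick numerical sketch of the setting on synthetic separable data: constant-stepsize GD on the logistic loss, with the stepsize chosen deliberately large. The data, stepsize, and horizon are illustrative; how long the early loss stays non-monotone depends on both.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 20, 5
X = rng.normal(size=(n, d))
w_star = rng.normal(size=d)
y = np.sign(X @ w_star)            # linearly separable labels by construction

def loss_grad(w):
    m = y * (X @ w)
    loss = np.logaddexp(0.0, -m).mean()     # logistic loss, computed stably
    sig = np.exp(-np.logaddexp(0.0, m))     # sigmoid(-m), computed stably
    return loss, -(X * (y * sig)[:, None]).mean(axis=0)

w, eta = np.zeros(d), 100.0        # constant stepsize, deliberately large
for t in range(400):
    loss, g = loss_grad(w)
    if t < 10 or t % 100 == 99:
        print(t, round(loss, 4))   # typically non-monotone early, then decreasing
    w -= eta * g
```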
[ "['Jingfeng Wu' 'Peter L. Bartlett' 'Matus Telgarsky' 'Bin Yu']" ]
null
null
2402.15929
null
null
http://arxiv.org/pdf/2402.15929v1
2024-02-24T23:16:57Z
2024-02-24T23:16:57Z
QuaCer-C: Quantitative Certification of Knowledge Comprehension in LLMs
Large Language Models (LLMs) have demonstrated impressive performance on several benchmarks. However, traditional studies do not provide formal guarantees on the performance of LLMs. In this work, we propose a novel certification framework for LLMs, QuaCer-C, wherein we formally certify the knowledge-comprehension capabilities of popular LLMs. Our certificates are quantitative - they consist of high-confidence, tight bounds on the probability that the target LLM gives the correct answer on any relevant knowledge comprehension prompt. Our certificates for the Llama, Vicuna, and Mistral LLMs indicate that the knowledge comprehension capability improves with an increase in the number of parameters and that the Mistral model is less performant than the rest in this evaluation.
[ "['Isha Chaudhary' 'Vedaant V. Jain' 'Gagandeep Singh']" ]
null
null
2402.15932
null
null
http://arxiv.org/pdf/2402.15932v1
2024-02-24T23:25:35Z
2024-02-24T23:25:35Z
Scalable Volt-VAR Optimization using RLlib-IMPALA Framework: A Reinforcement Learning Approach
In the rapidly evolving domain of electrical power systems, Volt-VAR optimization (VVO) is increasingly critical, especially with the burgeoning integration of renewable energy sources. Traditional approaches to learning-based VVO in expansive and dynamically changing power systems are often hindered by computational complexities. To address this challenge, our research presents a novel framework that harnesses the potential of Deep Reinforcement Learning (DRL), specifically utilizing the Importance Weighted Actor-Learner Architecture (IMPALA) algorithm, executed on the RAY platform. This framework, built upon RLlib, an industry standard in reinforcement learning, capitalizes on the distributed computing capabilities and advanced hyperparameter tuning offered by RAY. This design significantly expedites the exploration and exploitation phases in the VVO solution space. Our empirical results demonstrate that our approach not only surpasses existing DRL methods in achieving superior reward outcomes but also manifests a remarkable tenfold reduction in computational requirements. The integration of our DRL agent with the RAY platform facilitates the creation of RLlib-IMPALA, a novel framework that efficiently uses RAY's resources to improve system adaptability and control. RLlib-IMPALA leverages RAY's toolkit to enhance analytical capabilities and speeds up training more than tenfold compared to other state-of-the-art DRL methods.
[ "['Alaa Selim' 'Yanzhu Ye' 'Junbo Zhao' 'Bo Yang']" ]
null
null
2402.15933
null
null
http://arxiv.org/pdf/2402.15933v1
2024-02-24T23:31:34Z
2024-02-24T23:31:34Z
Bridging the Gap between 2D and 3D Visual Question Answering: A Fusion Approach for 3D VQA
In 3D Visual Question Answering (3D VQA), the scarcity of fully annotated data and limited visual content diversity hamper the generalization to novel scenes and 3D concepts (e.g., only around 800 scenes are utilized in the ScanQA and SQA datasets). Current approaches resort to supplementing 3D reasoning with 2D information. However, these methods face challenges: either they use top-down 2D views that introduce overly complex and sometimes question-irrelevant visual clues, or they rely on globally aggregated scene/image-level representations from 2D VLMs, losing the fine-grained vision-language correlations. To overcome these limitations, our approach utilizes a question-conditional 2D view selection procedure, pinpointing semantically relevant 2D inputs for crucial visual clues. We then integrate this 2D knowledge into the 3D-VQA system via a two-branch Transformer structure. This structure, featuring a Twin-Transformer design, compactly combines the 2D and 3D modalities and captures fine-grained correlations between modalities, allowing them to mutually augment each other. Integrating the proposed mechanisms above, we present BridgeQA, which offers a fresh perspective on multi-modal transformer-based architectures for 3D-VQA. Experiments validate that BridgeQA achieves state-of-the-art performance on 3D-VQA datasets and significantly outperforms existing solutions. Code is available at $\href{https://github.com/matthewdm0816/BridgeQA}{\text{this URL}}$.
[ "['Wentao Mo' 'Yang Liu']" ]
null
null
2402.15938
null
null
http://arxiv.org/pdf/2402.15938v3
2024-05-31T17:49:03Z
2024-02-24T23:54:41Z
Generalization or Memorization: Data Contamination and Trustworthy Evaluation for Large Language Models
Recent statements about the impressive capabilities of large language models (LLMs) are usually supported by evaluating on open-access benchmarks. Considering the vast size and wide-ranging sources of LLMs' training data, it could explicitly or implicitly include test data, leading to LLMs being more susceptible to data contamination. However, due to the opacity of training data, the black-box access of models, and the rapid growth of synthetic training data, detecting and mitigating data contamination for LLMs faces significant challenges. In this paper, we propose CDD, which stands for Contamination Detection via output Distribution for LLMs. CDD necessitates only the sampled texts to detect data contamination, by identifying the peakedness of LLM's output distribution. To mitigate the impact of data contamination in evaluation, we also present TED: Trustworthy Evaluation via output Distribution, based on the correction of LLM's output distribution. To facilitate this study, we introduce two benchmarks, i.e., DetCon and ComiEval, for data contamination detection and contamination mitigation evaluation tasks. Extensive experimental results show that CDD achieves the average relative improvements of 21.8%-30.2% over other contamination detection approaches in terms of Accuracy, F1 Score, and AUC metrics, and can effectively detect implicit contamination. TED substantially mitigates performance improvements up to 66.9% attributed to data contamination across various contamination setups. In real-world applications, we reveal that ChatGPT exhibits a high potential to suffer from data contamination on HumanEval benchmark.
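For intuition only, here is a toy proxy for "peakedness of the output distribution": the empirical frequency of the most common sampled completion. The paper's CDD statistic is defined over the model's output distribution more carefully; the completions below are hypothetical.

```python
from collections import Counter

def peakedness(samples):
    """Fraction of samples matching the most common completion: a crude
    proxy for how peaked the LLM's output distribution is on one answer."""
    counts = Counter(samples)
    return counts.most_common(1)[0][1] / len(samples)

# Sampled completions for one benchmark prompt (hypothetical outputs).
contaminated = ["42", "42", "42", "42", "41"]       # near-deterministic
clean = ["40", "38", "42", "44", "41"]              # diffuse
print(peakedness(contaminated), peakedness(clean))  # 0.8 vs 0.2
```

The intuition is that a model that memorized a benchmark item reproduces one answer almost deterministically at nonzero temperature, whereas genuine generalization yields a more diffuse sample distribution.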
[ "['Yihong Dong' 'Xue Jiang' 'Huanyu Liu' 'Zhi Jin' 'Bin Gu' 'Mengfei Yang'\n 'Ge Li']" ]
null
null
2402.15939
null
null
http://arxiv.org/pdf/2402.15939v1
2024-02-24T23:56:15Z
2024-02-24T23:56:15Z
Deep Separable Spatiotemporal Learning for Fast Dynamic Cardiac MRI
Dynamic magnetic resonance imaging (MRI) plays an indispensable role in cardiac diagnosis. To enable fast imaging, the k-space data can be undersampled, but the image reconstruction poses a great challenge of high-dimensional processing. This challenge necessitates extensive training data in many deep learning reconstruction methods. This work proposes a novel and efficient approach, leveraging a dimension-reduced separable learning scheme that excels even with highly limited training data. We further integrate it with spatiotemporal priors to develop a Deep Separable Spatiotemporal Learning network (DeepSSL), which unrolls an iteration process of a reconstruction model with both temporal low-rankness and spatial sparsity. Intermediate outputs are visualized to provide insights into the network's behavior and enhance its interpretability. Extensive results on cardiac cine datasets show that the proposed DeepSSL is superior to state-of-the-art methods visually and quantitatively, while reducing the demand for training cases by up to 75%. Its preliminary adaptability to cardiac patients has been verified through a blind reader study with experienced radiologists and cardiologists. Additionally, DeepSSL benefits the downstream task of cardiac segmentation, achieving higher accuracy, and shows robustness in prospective real-time cardiac MRI.
[ "['Zi Wang' 'Min Xiao' 'Yirong Zhou' 'Chengyan Wang' 'Naiming Wu' 'Yi Li'\n 'Yiwen Gong' 'Shufu Chang' 'Yinyin Chen' 'Liuhong Zhu' 'Jianjun Zhou'\n 'Congbo Cai' 'He Wang' 'Di Guo' 'Guang Yang' 'Xiaobo Qu']" ]
null
null
2402.15941
null
null
http://arxiv.org/pdf/2402.15941v1
2024-02-25T00:15:30Z
2024-02-25T00:15:30Z
Implementing Recycling Methods for Linear Systems in Python with an Application to Multiple Objective Optimization
Sequences of linear systems arise in the predictor-corrector method when computing the Pareto front for multi-objective optimization. Rather than discarding information generated when solving one system, it may be advantageous to recycle information for subsequent systems. To accomplish this, we seek to reduce the overall cost of computation when solving linear systems using common recycling methods. In this work, we assessed the performance of the recycling minimum residual (RMINRES) method along with a map between coefficient matrices. For these methods to be fully integrated into the software used in Enouen et al. (2022), there must be a working version of each in both Python and PyTorch. Herein, we discuss the challenges we encountered and the solutions undertaken (some ongoing) when developing efficient Python implementations of these recycling strategies. The goal of this project was to implement RMINRES in Python and PyTorch and add it to the established Pareto front code to reduce computational cost. Additionally, we wanted to implement the sparse approximate maps code in Python and PyTorch, so that it can be parallelized in future work.
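A minimal sketch of the recycling idea on a sequence of slowly varying symmetric systems: warm-start MINRES from the previous solution. Full RMINRES additionally reuses an approximate invariant subspace across solves, which this sketch omits; the matrices are illustrative.

```python
import numpy as np
from scipy.sparse.linalg import minres

rng = np.random.default_rng(0)
n = 200
Q = rng.normal(size=(n, n))
A0 = Q @ Q.T + n * np.eye(n)        # symmetric positive definite base matrix
b = rng.normal(size=n)

x_prev = np.zeros(n)                # cold start for the first system
for k in range(5):
    A = A0 + 0.05 * k * np.eye(n)   # sequence of slowly varying systems
    x, info = minres(A, b, x0=x_prev)
    print(k, info, np.linalg.norm(A @ x - b))
    x_prev = x                      # recycle the solution as the next guess
```

Because consecutive systems along a predictor-corrector path differ only slightly, the previous solution is already close to the new one, so each warm-started solve needs fewer iterations.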
[ "['Ainara Garcia' 'Sihong Xie' 'Arielle Carr']" ]
null
null
2402.15951
null
null
http://arxiv.org/pdf/2402.15951v1
2024-02-25T01:56:47Z
2024-02-25T01:56:47Z
GreenLLaMA: A Framework for Detoxification with Explanations
Prior works on detoxification are scattered in the sense that they do not cover all aspects of detoxification needed in a real-world scenario. Notably, prior works restrict the task of developing detoxification models to only a seen subset of platforms, leaving the question of how the models would perform on unseen platforms unexplored. Additionally, these works do not address non-detoxifiability, a phenomenon whereby the toxic text cannot be detoxified without altering the meaning. We propose GreenLLaMA, the first comprehensive end-to-end detoxification framework, which attempts to alleviate the aforementioned limitations. We first introduce a cross-platform pseudo-parallel corpus applying multi-step data processing and generation strategies leveraging ChatGPT. We then train a suite of detoxification models with our cross-platform corpus. We show that our detoxification models outperform the SoTA model trained with a human-annotated parallel corpus. We further introduce explanations to promote transparency and trustworthiness. GreenLLaMA additionally offers a unique paraphrase detector dedicated especially to the detoxification task to tackle non-detoxifiable cases. Through experimental analysis, we demonstrate the effectiveness of our cross-platform corpus and the robustness of GreenLLaMA against adversarial toxicity.
[ "['Md Tawkat Islam Khondaker' 'Muhammad Abdul-Mageed'\n 'Laks V. S. Lakshmanan']" ]
null
null
2402.15957
null
null
http://arxiv.org/pdf/2402.15957v1
2024-02-25T02:36:03Z
2024-02-25T02:36:03Z
DynaMITE-RL: A Dynamic Model for Improved Temporal Meta-Reinforcement Learning
We introduce DynaMITE-RL, a meta-reinforcement learning (meta-RL) approach to approximate inference in environments where the latent state evolves at varying rates. We model episode sessions - parts of the episode where the latent state is fixed - and propose three key modifications to existing meta-RL methods: consistency of latent information within sessions, session masking, and prior latent conditioning. We demonstrate the importance of these modifications in various domains, ranging from discrete Gridworld environments to continuous-control and simulated robot assistive tasks, demonstrating that DynaMITE-RL significantly outperforms state-of-the-art baselines in sample efficiency and inference returns.
[ "['Anthony Liang' 'Guy Tennenholtz' 'Chih-wei Hsu' 'Yinlam Chow'\n 'Erdem Bıyık' 'Craig Boutilier']" ]
null
null
2402.15958
null
null
http://arxiv.org/pdf/2402.15958v2
2024-02-27T05:54:10Z
2024-02-25T02:36:14Z
On the dynamics of three-layer neural networks: initial condensation
Empirical and theoretical works show that the input weights of two-layer neural networks, when initialized with small values, converge towards isolated orientations. This phenomenon, referred to as condensation, indicates that the gradient descent methods tend to spontaneously reduce the complexity of neural networks during the training process. In this work, we elucidate the mechanisms behind the condensation phenomena occurring in the training of three-layer neural networks and distinguish it from the training of two-layer neural networks. Through rigorous theoretical analysis, we establish the blow-up property of effective dynamics and present a sufficient condition for the occurrence of condensation, findings that are substantiated by experimental results. Additionally, we explore the association between condensation and the low-rank bias observed in deep matrix factorization.
[ "['Zheng-An Chen' 'Tao Luo']" ]
null
null
2402.15962
null
null
http://arxiv.org/pdf/2402.15962v1
2024-02-25T02:54:14Z
2024-02-25T02:54:14Z
Hierarchical energy signatures using machine learning for operational visibility and diagnostics in automotive manufacturing
Manufacturing energy consumption data contains important process signatures required for operational visibility and diagnostics. These signatures may be of different temporal scales, ranging from monthly to sub-second resolutions. We introduce a hierarchical machine learning approach to identify automotive process signatures from paint shop electricity consumption data at varying temporal scales (weekly and daily). A Multi-Layer Perceptron (MLP), a Convolutional Neural Network (CNN), and Principal Component Analysis (PCA) combined with Logistic Regression (LR) are used for the analysis. We validate the utility of the developed algorithms with subject matter experts for (i) better operational visibility, and (ii) identifying energy saving opportunities.
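A minimal sketch of one branch of such a hierarchy: PCA features feeding a logistic-regression classifier over weekly consumption profiles. The synthetic data, label semantics, and component count are illustrative.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Hypothetical weekly electricity profiles (rows) with process labels.
X = rng.normal(size=(200, 168))      # 168 hourly readings per week
y = rng.integers(0, 2, size=200)     # e.g., production vs. non-production week

clf = make_pipeline(StandardScaler(), PCA(n_components=10),
                    LogisticRegression(max_iter=1000))
clf.fit(X, y)
print(clf.score(X, y))
```

The same template applies at each temporal scale (weekly, daily), with the MLP or CNN substituted for the PCA+LR stage where more capacity is needed.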
[ "['Ankur Verma' 'Seog-Chan Oh' 'Jorge Arinez' 'Soundar Kumara']" ]
null
null
2402.15968
null
null
http://arxiv.org/pdf/2402.15968v2
2024-02-27T17:55:44Z
2024-02-25T03:07:32Z
CoDream: Exchanging dreams instead of models for federated aggregation with heterogeneous models
Federated Learning (FL) enables collaborative optimization of machine learning models across decentralized data by aggregating model parameters. Our approach extends this concept by aggregating "knowledge" derived from models, instead of model parameters. We present a novel framework called CoDream, where clients collaboratively optimize randomly initialized data using federated optimization in the input data space, similar to how randomly initialized model parameters are optimized in FL. Our key insight is that jointly optimizing this data can effectively capture the properties of the global data distribution. Sharing knowledge in data space offers numerous benefits: (1) model-agnostic collaborative learning, i.e., different clients can have different model architectures; (2) communication that is independent of the model size, eliminating scalability concerns with model parameters; (3) compatibility with secure aggregation, thus preserving the privacy benefits of federated learning; (4) allowing adaptive optimization of the knowledge shared for personalized learning. We empirically validate CoDream on standard FL tasks, demonstrating competitive performance despite not sharing model parameters. Our code: https://mitmedialab.github.io/codream.github.io/
[ "['Abhishek Singh' 'Gauri Gupta' 'Ritvik Kapila' 'Yichuan Shi' 'Alex Dang'\n 'Sheshank Shankar' 'Mohammed Ehab' 'Ramesh Raskar']" ]
null
null
2402.15972
null
null
http://arxiv.org/pdf/2402.15972v1
2024-02-25T03:31:59Z
2024-02-25T03:31:59Z
Structural Knowledge-Driven Meta-Learning for Task Offloading in Vehicular Networks with Integrated Communications, Sensing and Computing
Task offloading is a potential solution to satisfy the strict requirements of computation-intensive and latency-sensitive vehicular applications due to the limited onboard computing resources. However, the overwhelming upload traffic may lead to unacceptable uploading time. To tackle this issue, for tasks taking environmental data as input, the data perceived by roadside units (RSU) equipped with several sensors can be directly exploited for computation, resulting in a novel task offloading paradigm with integrated communications, sensing and computing (I-CSC). With this paradigm, vehicles can select to upload their sensed data to RSUs or transmit computing instructions to RSUs during the offloading. By optimizing the computation mode and network resources, in this paper, we investigate an I-CSC-based task offloading problem to reduce the cost caused by resource consumption while guaranteeing the latency of each task. Although this non-convex problem can be handled by the alternating minimization (AM) algorithm that alternatively minimizes the divided four sub-problems, it leads to high computational complexity and local optimal solution. To tackle this challenge, we propose a creative structural knowledge-driven meta-learning (SKDML) method, involving both the model-based AM algorithm and neural networks. Specifically, borrowing the iterative structure of the AM algorithm, also referred to as structural knowledge, the proposed SKDML adopts long short-term memory (LSTM) network-based meta-learning to learn an adaptive optimizer for updating variables in each sub-problem, instead of the handcrafted counterpart in the AM algorithm.
[ "['Ruijin Sun' 'Yao Wen' 'Nan Cheng' 'Wei Wan' 'Rong Chai' 'Yilong Hui']" ]
null
null
2402.15978
null
null
http://arxiv.org/pdf/2402.15978v1
2024-02-25T03:48:13Z
2024-02-25T03:48:13Z
Shaving Weights with Occam's Razor: Bayesian Sparsification for Neural Networks Using the Marginal Likelihood
Neural network sparsification is a promising avenue to save computational time and memory costs, especially in an age where many successful AI models are becoming too large to naïvely deploy on consumer hardware. While much work has focused on different weight pruning criteria, the overall sparsifiability of the network, i.e., its capacity to be pruned without quality loss, has often been overlooked. We present Sparsifiability via the Marginal likelihood (SpaM), a pruning framework that highlights the effectiveness of using the Bayesian marginal likelihood in conjunction with sparsity-inducing priors for making neural networks more sparsifiable. Our approach implements an automatic Occam's razor that selects the most sparsifiable model that still explains the data well, both for structured and unstructured sparsification. In addition, we demonstrate that the pre-computed posterior Hessian approximation used in the Laplace approximation can be re-used to define a cheap pruning criterion, which outperforms many existing (more expensive) approaches. We demonstrate the effectiveness of our framework, especially at high sparsity levels, across a range of different neural network architectures and datasets.
[ "['Rayen Dhahri' 'Alexander Immer' 'Betrand Charpentier'\n 'Stephan Günnemann' 'Vincent Fortuin']" ]
null
null
2402.15984
null
null
http://arxiv.org/abs/2402.15984v2
2024-04-18T19:10:58Z
2024-02-25T04:30:04Z
A unified Fourier slice method to derive ridgelet transform for a variety of depth-2 neural networks
To investigate neural network parameters, it is easier to study the distribution of parameters than to study the parameters in each neuron. The ridgelet transform is a pseudo-inverse operator that maps a given function $f$ to the parameter distribution $\gamma$ so that a network $\mathtt{NN}[\gamma]$ reproduces $f$, i.e. $\mathtt{NN}[\gamma]=f$. For depth-2 fully-connected networks on a Euclidean space, the ridgelet transform has been discovered up to the closed-form expression, thus we could describe how the parameters are distributed. However, for a variety of modern neural network architectures, the closed-form expression has not been known. In this paper, we explain a systematic method using Fourier expressions to derive ridgelet transforms for a variety of modern networks such as networks on finite fields $\mathbb{F}_p$, group convolutional networks on abstract Hilbert space $\mathcal{H}$, fully-connected networks on noncompact symmetric spaces $G/K$, and pooling layers, or the $d$-plane ridgelet transform.
[ "['Sho Sonoda' 'Isao Ishikawa' 'Masahiro Ikeda']" ]
null
null
2402.15985
null
null
http://arxiv.org/pdf/2402.15985v1
2024-02-25T04:35:45Z
2024-02-25T04:35:45Z
Phonetic and Lexical Discovery of a Canine Language using HuBERT
This paper delves into a pioneering exploration of potential communication patterns within dog vocalizations, transcending traditional linguistic analysis, which heavily relies on human a priori knowledge and limited datasets to find sound units in dog vocalization. We present a self-supervised approach with HuBERT, enabling the accurate classification of phoneme labels and the identification of vocal patterns that suggest a rudimentary vocabulary within dog vocalizations. Our findings indicate a significant acoustic consistency in this identified canine vocabulary, covering the entirety of observed dog vocalization sequences. We further develop a web-based dog vocalization labeling system. This system can highlight phoneme n-grams, present in the vocabulary, in dog audio uploaded by users.
[ "['Xingyuan Li' 'Sinong Wang' 'Zeyu Xie' 'Mengyue Wu' 'Kenny Q. Zhu']" ]
null
null
2402.15988
null
null
http://arxiv.org/pdf/2402.15988v1
2024-02-25T05:00:20Z
2024-02-25T05:00:20Z
Towards Fair Graph Anomaly Detection: Problem, New Datasets, and Evaluation
The Fair Graph Anomaly Detection (FairGAD) problem aims to accurately detect anomalous nodes in an input graph while ensuring fairness and avoiding biased predictions against individuals from sensitive subgroups such as gender or political leanings. Fairness in graphs is particularly crucial in anomaly detection areas such as misinformation detection in search/ranking systems, where decision outcomes can significantly affect individuals. However, the current literature does not comprehensively discuss this problem, nor does it provide realistic datasets that encompass actual graph structures, anomaly labels, and sensitive attributes for research in FairGAD. To bridge this gap, we introduce a formal definition of the FairGAD problem and present two novel graph datasets constructed from the globally prominent social media platforms Reddit and Twitter. These datasets comprise 1.2 million and 400,000 edges associated with 9,000 and 47,000 nodes, respectively, and leverage political leanings as sensitive attributes and misinformation spreaders as anomaly labels. We demonstrate that our FairGAD datasets significantly differ from the synthetic datasets used currently by the research community. These new datasets offer significant values for FairGAD by providing realistic data that captures the intricacies of social networks. Using our datasets, we investigate the performance-fairness trade-off in eleven existing GAD and non-graph AD methods on five state-of-the-art fairness methods, which sheds light on their effectiveness and limitations in addressing the FairGAD problem.
[ "['Neng Kai Nigel Neo' 'Yeon-Chang Lee' 'Yiqiao Jin' 'Sang-Wook Kim'\n 'Srijan Kumar']" ]
null
null
2402.15992
null
null
http://arxiv.org/pdf/2402.15992v1
2024-02-25T05:16:43Z
2024-02-25T05:16:43Z
A Machine Learning Approach to Detect Customer Satisfaction From Multiple Tweet Parameters
As internet technologies have advanced, customer happiness has become one of the primary factors in company development. Online platforms have become prominent places for sharing reviews. Twitter is one of these platforms where customers frequently post their thoughts. Reviews of flights on these platforms have become a concern for the airline business. A positive review can help the company grow, while a negative one can quickly ruin its revenue and reputation. So it's vital for airline businesses to examine the feedback and experiences of their customers and enhance their services to remain competitive. But studying thousands of tweets and analyzing them to determine customer satisfaction is quite a difficult task. This tedious process can be made easier by using a machine learning approach to analyze tweets and determine client satisfaction levels. Some work has already been done to automate the procedure using machine learning and deep learning techniques. However, it is all purely concerned with assessing the text's sentiment. In addition to the text, the tweet also includes the time, location, username, airline name, and so on. This additional information can be crucial for improving the model's outcome. To provide a machine learning-based solution, this work broadens its perspective to include these attributes. And it has come as no surprise that the additional features beyond text sentiment analysis produce better outcomes in machine learning-based models.
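A minimal sketch of combining the tweet text with metadata features in one model; the tiny dataframe, column names, and choice of classifier are illustrative.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.preprocessing import OneHotEncoder
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical flight-review tweets with metadata beyond the text itself.
df = pd.DataFrame({
    "text": ["great flight!", "lost my luggage",
             "on time, friendly crew", "delayed again"],
    "airline": ["A", "B", "A", "B"],
    "hour": [9, 22, 14, 23],
    "satisfied": [1, 0, 1, 0],
})
features = ColumnTransformer([
    ("text", TfidfVectorizer(), "text"),                       # text sentiment signal
    ("meta", OneHotEncoder(handle_unknown="ignore"),
     ["airline", "hour"]),                                     # non-text attributes
])
clf = make_pipeline(features, LogisticRegression())
clf.fit(df[["text", "airline", "hour"]], df["satisfied"])
print(clf.predict(df[["text", "airline", "hour"]]))
```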
[ "['Md Mahmudul Hasan' 'Dr. Shaikh Anowarul Fattah']" ]
null
null
2402.15993
null
null
http://arxiv.org/pdf/2402.15993v3
2024-07-01T07:55:40Z
2024-02-25T05:22:45Z
Model Compression Method for S4 with Diagonal State Space Layers using Balanced Truncation
To implement deep learning models on edge devices, model compression methods are widely recognized as useful. However, it remains unclear which model compression methods are effective for Structured State Space Sequence (S4) models incorporating Diagonal State Space (DSS) layers, which are tailored for processing long-sequence data. In this paper, we propose a novel model compression method that applies balanced truncation, a prevalent model reduction technique in control theory, specifically to the DSS layers of a pre-trained S4 model. Moreover, we propose using the reduced model parameters obtained by balanced truncation as initial parameters of S4 models with DSS layers during the main training process. Numerical experiments demonstrate that models trained with balanced truncation surpass conventionally trained models with Skew-HiPPO initialization in accuracy, even with fewer parameters. Furthermore, we observe a positive correlation: higher accuracy in the original model consistently leads to increased accuracy in models trained using our compression method, suggesting that our approach effectively leverages the strengths of the original model.
[ "['Haruka Ezoe' 'Kazuhiro Sato']" ]
null
null
2402.15994
null
null
http://arxiv.org/pdf/2402.15994v1
2024-02-25T05:23:57Z
2024-02-25T05:23:57Z
Optimizing Portfolio Management and Risk Assessment in Digital Assets Using Deep Learning for Predictive Analysis
Portfolio management has been extensively studied in the field of artificial intelligence in recent years, but existing deep learning-based quantitative trading methods leave room for improvement. First, the prediction of stocks is handled in isolation: often a model trains only one trading expert, and the trading decision is based solely on that model's predictions. Second, the data source used by the model is relatively narrow, considering only the data of the stock itself and ignoring the impact of overall market risk on the stock. In this paper, the DQN algorithm is introduced into asset management portfolios in a novel and straightforward way, and its performance greatly exceeds the benchmark, which demonstrates the effectiveness of DRL algorithms in portfolio management. This also motivates us to account for the complexity of financial problems and to adapt the algorithms to the problems at hand. Finally, the strategy is implemented by selecting the asset and action with the largest Q-value. Since different assets are trained separately as environments, Q-value drift may arise among assets (different assets occupy different Q-value distribution areas), which can easily lead to incorrect asset selection. Adding constraints so that the Q-values of different assets share a common distribution could improve the results.
[ "['Qishuo Cheng' 'Le Yang' 'Jiajian Zheng' 'Miao Tian' 'Duan Xin']" ]
null
null
2402.15995
null
null
http://arxiv.org/pdf/2402.15995v1
2024-02-25T05:26:35Z
2024-02-25T05:26:35Z
Improved Hardness Results for Learning Intersections of Halfspaces
We show strong (and surprisingly simple) lower bounds for weakly learning intersections of halfspaces in the improper setting. Strikingly little is known about this problem. For instance, it is not even known if there is a polynomial-time algorithm for learning the intersection of only two halfspaces. On the other hand, lower bounds based on well-established assumptions (such as approximating worst-case lattice problems or variants of Feige's 3SAT hypothesis) are only known (or are implied by existing results) for the intersection of super-logarithmically many halfspaces [KS09,KS06,DSS16], with intersections of fewer halfspaces ruled out only under less standard assumptions [DV21] (such as the existence of local pseudo-random generators with large stretch). We significantly narrow this gap by showing that even learning $\omega(\log \log N)$ halfspaces in dimension $N$ takes super-polynomial time under standard assumptions on worst-case lattice problems (namely that SVP and SIVP are hard to approximate within polynomial factors). Further, we give unconditional hardness results in the statistical query framework. Specifically, we show that for any $k$ (even constant), learning $k$ halfspaces in dimension $N$ requires accuracy $N^{-\Omega(k)}$, or exponentially many queries -- in particular ruling out SQ algorithms with polynomial accuracy for $\omega(1)$ halfspaces. To the best of our knowledge this is the first unconditional hardness result for learning a super-constant number of halfspaces. Our lower bounds are obtained in a unified way via a novel connection we make between intersections of halfspaces and the so-called parallel pancakes distribution [DKS17,BLPR19,BRST21] that has been at the heart of many lower bound constructions in (robust) high-dimensional statistics in the past few years.
[ "['Stefan Tiegel']" ]
null
null
2402.15997
null
null
http://arxiv.org/abs/2402.15997v2
2024-02-29T22:04:24Z
2024-02-25T05:45:36Z
Cieran: Designing Sequential Colormaps via In-Situ Active Preference Learning
Quality colormaps can help communicate important data patterns. However, finding an aesthetically pleasing colormap that looks "just right" for a given scenario requires significant design and technical expertise. We introduce Cieran, a tool that allows any data analyst to rapidly find quality colormaps while designing charts within Jupyter Notebooks. Our system employs an active preference learning paradigm to rank expert-designed colormaps and create new ones from pairwise comparisons, allowing analysts who are novices in color design to tailor colormaps to their data context. We accomplish this by treating colormap design as a path planning problem through the CIELAB colorspace with a context-specific reward model. In an evaluation with twelve scientists, we found that Cieran effectively modeled user preferences to rank colormaps and leveraged this model to create new quality designs. Our work shows the potential of active preference learning for supporting efficient visualization design optimization.
[ "['Matt-Heun Hong' 'Zachary N. Sunberg' 'Danielle Albers Szafir']" ]
null
null
2402.16002
null
null
http://arxiv.org/abs/2402.16002v1
2024-02-25T06:19:04Z
2024-02-25T06:19:04Z
Post-Quantum Cryptography Neural Network
In recent years, quantum computers and Shor's quantum algorithm have posed a threat to current mainstream asymmetric cryptography methods (e.g. RSA and Elliptic Curve Cryptography (ECC)). It is therefore necessary to construct Post-Quantum Cryptography (PQC) methods that resist quantum computing attacks. To this end, this study proposes a PQC-based neural network that maps a code-based PQC method to a neural network structure and enhances the security of ciphertexts with non-linear activation functions, random perturbation of ciphertexts, and uniform distribution of ciphertexts. In practical experiments, this study uses cellular network signals as a case study to demonstrate that encryption and decryption can be performed by the proposed PQC-based neural network with uniformly distributed ciphertexts. In the future, the proposed PQC-based neural network could be applied to various applications.
[ "['Abel C. H. Chen']" ]
null
null
2402.16005
null
null
http://arxiv.org/pdf/2402.16005v1
2024-02-25T06:39:15Z
2024-02-25T06:39:15Z
Adversarial-Robust Transfer Learning for Medical Imaging via Domain Assimilation
In the field of Medical Imaging, extensive research has been dedicated to leveraging its potential in uncovering critical diagnostic features in patients. Artificial Intelligence (AI)-driven medical diagnosis relies on sophisticated machine learning and deep learning models to analyze, detect, and identify diseases from medical images. Despite the remarkable performance of these models, characterized by high accuracy, they grapple with trustworthiness issues. The introduction of a subtle perturbation to the original image empowers adversaries to manipulate the prediction output, redirecting it to other targeted or untargeted classes. Furthermore, the scarcity of publicly available medical images, constituting a bottleneck for reliable training, has led contemporary algorithms to depend on pretrained models grounded on a large set of natural images -- a practice referred to as transfer learning. However, a significant \emph{domain discrepancy} exists between natural and medical images, which causes AI models resulting from transfer learning to exhibit heightened \emph{vulnerability} to adversarial attacks. This paper proposes a \emph{domain assimilation} approach that introduces texture and color adaptation into transfer learning, followed by a texture preservation component to suppress undesired distortion. We systematically analyze the performance of transfer learning in the face of various adversarial attacks under different data modalities, with the overarching goal of fortifying the model's robustness and security in medical imaging tasks. The results demonstrate high effectiveness in reducing attack efficacy, contributing toward more trustworthy transfer learning in biomedical applications.
[ "['Xiaohui Chen' 'Tie Luo']" ]
null
null
2402.16008
null
null
http://arxiv.org/pdf/2402.16008v1
2024-02-25T06:53:35Z
2024-02-25T06:53:35Z
Unmasking Dementia Detection by Masking Input Gradients: A JSM Approach to Model Interpretability and Precision
The evolution of deep learning and artificial intelligence has significantly reshaped technological landscapes. However, their effective application in crucial sectors such as medicine demands more than just superior performance, but trustworthiness as well. While interpretability plays a pivotal role, existing explainable AI (XAI) approaches often do not reveal \emph{Clever Hans} behavior where a model makes (ungeneralizable) correct predictions using spurious correlations or biases in data. Likewise, current post-hoc XAI methods are susceptible to generating unjustified counterfactual examples. In this paper, we approach XAI with an innovative \emph{model debugging} methodology realized through Jacobian Saliency Map (JSM). To cast the problem into a concrete context, we employ Alzheimer's disease (AD) diagnosis as the use case, motivated by its significant impact on human lives and the formidable challenge in its early detection, stemming from the intricate nature of its progression. We introduce an interpretable, multimodal model for AD classification over its multi-stage progression, incorporating JSM as a modality-agnostic tool that provides insights into volumetric changes indicative of brain abnormalities. Our extensive evaluation including ablation study manifests the efficacy of using JSM for model debugging and interpretation, while significantly enhancing model accuracy as well.
[ "['Yasmine Mustafa' 'Tie Luo']" ]
null
null
2402.16012
null
null
http://arxiv.org/abs/2402.16012v1
2024-02-25T07:03:37Z
2024-02-25T07:03:37Z
Deep Contrastive Graph Learning with Clustering-Oriented Guidance
Graph Convolutional Network (GCN) has exhibited remarkable potential in improving graph-based clustering. To handle the general clustering scenario without a prior graph, these models estimate an initial graph beforehand to apply GCN. Throughout the literature, we have witnessed that 1) most models focus on the initial graph while neglecting the original features, so the discriminability of the learned representation may be corrupted by a low-quality initial graph; 2) the training procedure lacks effective clustering guidance, which may lead to the incorporation of clustering-irrelevant information into the learned graph. To tackle these problems, the Deep Contrastive Graph Learning (DCGL) model is proposed for general data clustering. Specifically, we establish a pseudo-siamese network that combines an auto-encoder with a GCN to emphasize both the graph structure and the original features. On this basis, feature-level contrastive learning is introduced to enhance the discriminative capacity, and the relationship between samples and centroids is employed as the clustering-oriented guidance. Afterward, a two-branch graph learning mechanism is designed to extract the local and global structural relationships, which are further embedded into a unified graph under the cluster-level contrastive guidance. Experimental results on several benchmark datasets demonstrate the superiority of DCGL against state-of-the-art algorithms.
[ "['Mulin Chen' 'Bocheng Wang' 'Xuelong Li']" ]
null
null
2402.16014
null
null
http://arxiv.org/pdf/2402.16014v1
2024-02-25T07:19:01Z
2024-02-25T07:19:01Z
Building Flexible Machine Learning Models for Scientific Computing at Scale
Foundation models have revolutionized knowledge acquisition across domains, and our study introduces OmniArch, a paradigm-shifting approach designed for building foundation models in multi-physics scientific computing. OmniArch's pre-training involves a versatile pipeline that processes multi-physics spatio-temporal data, casting forward problem learning into scalable auto-regressive tasks, while our novel Physics-Informed Reinforcement Learning (PIRL) technique during fine-tuning ensures alignment with physical laws. Pre-trained on the comprehensive PDEBench dataset, OmniArch not only sets new performance benchmarks for 1D, 2D and 3D PDEs but also demonstrates exceptional adaptability to new physics via few-shot and zero-shot learning approaches. The model's representations further extend to inverse problem-solving, highlighting the transformative potential of AI-enabled Scientific Computing (AI4SC) foundation models for engineering applications and physics discovery.
[ "['Tianyu Chen' 'Haoyi Zhou' 'Ying Li' 'Hao Wang' 'Chonghan Gao'\n 'Shanghang Zhang' 'Jianxin Li']" ]
null
null
2402.16017
null
null
http://arxiv.org/pdf/2402.16017v1
2024-02-25T07:28:28Z
2024-02-25T07:28:28Z
Spectrum Extraction and Clipping for Implicitly Linear Layers
We show the effectiveness of automatic differentiation in efficiently and correctly computing and controlling the spectrum of implicitly linear operators, a rich family of layer types including all standard convolutional and dense layers. We provide the first clipping method which is correct for general convolution layers, and illuminate the representational limitation that caused correctness issues in prior work. We study the effect of the batch normalization layers when concatenated with convolutional layers and show how our clipping method can be applied to their composition. By comparing the accuracy and performance of our algorithms to the state-of-the-art methods, using various experiments, we show they are more precise and efficient and lead to better generalization and adversarial robustness. We provide the code for using our methods at https://github.com/Ali-E/FastClip.
[ "['Ali Ebrahimpour Boroojeny' 'Matus Telgarsky' 'Hari Sundaram']" ]
null
null
2402.16020
null
null
http://arxiv.org/pdf/2402.16020v1
2024-02-25T07:41:08Z
2024-02-25T07:41:08Z
A Step-by-step Introduction to the Implementation of Automatic Differentiation
Automatic differentiation is a key component in deep learning. This topic is well studied, and excellent surveys such as Baydin et al. (2018) have been available to clearly describe the basic concepts. Further, sophisticated implementations of automatic differentiation are now an important part of popular deep learning frameworks. However, it is difficult, if not impossible, to directly teach students the implementation of existing systems due to their complexity. On the other hand, if the teaching stops at the basic concepts, students fail to get a sense of a real implementation. For example, we often mention the computational graph when teaching automatic differentiation, but students wonder how to implement and use it. In this document, we partially fill the gap by giving a step-by-step introduction to implementing a simple automatic differentiation system. We streamline the mathematical concepts and the implementation. Further, we give the motivation behind each implementation detail, so the whole setting becomes very natural.
[ "['Yu-Hsueh Fang' 'He-Zhe Lin' 'Jie-Jyun Liu' 'Chih-Jen Lin']" ]
null
null
2402.16024
null
null
http://arxiv.org/pdf/2402.16024v2
2024-05-19T02:28:56Z
2024-02-25T08:07:22Z
HiGPT: Heterogeneous Graph Language Model
Heterogeneous graph learning aims to capture complex relationships and diverse relational semantics among entities in a heterogeneous graph to obtain meaningful representations for nodes and edges. Recent advancements in heterogeneous graph neural networks (HGNNs) have achieved state-of-the-art performance by considering relation heterogeneity and using specialized message functions and aggregation rules. However, existing frameworks for heterogeneous graph learning have limitations in generalizing across diverse heterogeneous graph datasets. Most of these frameworks follow the "pre-train" and "fine-tune" paradigm on the same dataset, which restricts their capacity to adapt to new and unseen data. This raises the question: "Can we generalize heterogeneous graph models to be well-adapted to diverse downstream learning tasks with distribution shifts in both node token sets and relation type heterogeneity?" To tackle those challenges, we propose HiGPT, a general large graph model with a heterogeneous graph instruction-tuning paradigm. Our framework enables learning from arbitrary heterogeneous graphs without the need for any fine-tuning process from downstream datasets. To handle distribution shifts in heterogeneity, we introduce an in-context heterogeneous graph tokenizer that captures semantic relationships in different heterogeneous graphs, facilitating model adaptation. We incorporate a large corpus of heterogeneity-aware graph instructions into our HiGPT, enabling the model to effectively comprehend complex relation heterogeneity and distinguish between various types of graph tokens. Furthermore, we introduce the Mixture-of-Thought (MoT) instruction augmentation paradigm to mitigate data scarcity by generating diverse and informative instructions. Through comprehensive evaluations, our proposed framework demonstrates exceptional generalization performance.
[ "['Jiabin Tang' 'Yuhao Yang' 'Wei Wei' 'Lei Shi' 'Long Xia' 'Dawei Yin'\n 'Chao Huang']" ]
null
null
2402.16026
null
null
http://arxiv.org/pdf/2402.16026v1
2024-02-25T08:20:05Z
2024-02-25T08:20:05Z
Feature Selection Based on Orthogonal Constraints and Polygon Area
The goal of feature selection is to choose the optimal subset of features for a recognition task by evaluating the importance of each feature, thereby achieving effective dimensionality reduction. Currently proposed feature selection methods often overlook the discriminative dependencies between features and labels. To address this problem, this paper introduces a novel orthogonal regression model incorporating the area of a polygon. The model can intuitively capture the discriminative dependencies between features and labels. Additionally, this paper employs a hybrid non-monotone linear search method to efficiently tackle the non-convex optimization challenge posed by orthogonal constraints. Experimental results demonstrate that our approach not only effectively captures discriminative dependency information but also surpasses traditional methods in reducing feature dimensions and enhancing classification performance.
[ "['Zhenxing Zhang' 'Jun Ge' 'Zheng Wei' 'Chunjie Zhou' 'Yilei Wang']" ]
null
null
2402.16036
null
null
http://arxiv.org/pdf/2402.16036v1
2024-02-25T09:28:20Z
2024-02-25T09:28:20Z
Machine Learning-Based Vehicle Intention Trajectory Recognition and Prediction for Autonomous Driving
In recent years, the expansion of internet technology and advancements in automation have brought significant attention to autonomous driving technology. Major automobile manufacturers, including Volvo, Mercedes-Benz, and Tesla, have progressively introduced products ranging from assisted-driving vehicles to semi-autonomous vehicles. However, this period has also witnessed several traffic safety incidents involving self-driving vehicles. For instance, in March 2016, a Google self-driving car was involved in a minor collision with a bus. At the time of the accident, the autonomous vehicle was attempting to merge into the right lane but failed to dynamically respond to the real-time environmental information during the lane change. It incorrectly assumed that the approaching bus would slow down to avoid it, leading to a low-speed collision with the bus. This incident highlights the current technological shortcomings and safety concerns associated with autonomous lane-changing behavior, despite the rapid advancements in autonomous driving technology. Lane-changing is among the most common and hazardous behaviors in highway driving, significantly impacting traffic safety and flow. Therefore, lane-changing is crucial for traffic safety, and accurately predicting drivers' lane change intentions can markedly enhance driving safety. This paper introduces a deep learning-based prediction method for autonomous driving lane change behavior, aiming to facilitate safe lane changes and thereby improve road safety.
[ "['Hanyi Yu' 'Shuning Huo' 'Mengran Zhu' 'Yulu Gong' 'Yafei Xiang']" ]
null
null
2402.16038
null
null
http://arxiv.org/pdf/2402.16038v1
2024-02-25T09:32:17Z
2024-02-25T09:32:17Z
Deep Learning Approaches for Improving Question Answering Systems in Hepatocellular Carcinoma Research
In recent years, advancements in natural language processing (NLP) have been fueled by deep learning techniques, particularly through the utilization of powerful computing resources like GPUs and TPUs. Models such as BERT and GPT-3, trained on vast amounts of data, have revolutionized language understanding and generation. These pre-trained models serve as robust bases for various tasks including semantic understanding, intelligent writing, and reasoning, paving the way for a more generalized form of artificial intelligence. NLP, as a vital application of AI, aims to bridge the gap between humans and computers through natural language interaction. This paper delves into the current landscape and future prospects of large-scale model-based NLP, focusing on the question-answering systems within this domain. Practical cases and developments in artificial intelligence-driven question-answering systems are analyzed to foster further exploration and research in the realm of large-scale NLP.
[ "['Shuning Huo' 'Yafei Xiang' 'Hanyi Yu' 'Mengran Zhu' 'Yulu Gong']" ]
null
null
2402.16041
null
null
http://arxiv.org/pdf/2402.16041v2
2024-02-29T14:46:44Z
2024-02-25T09:44:56Z
Detecting Machine-Generated Texts by Multi-Population Aware Optimization for Maximum Mean Discrepancy
Large language models (LLMs) such as ChatGPT have exhibited remarkable performance in generating human-like texts. However, machine-generated texts (MGTs) may carry critical risks, such as plagiarism issues, misleading information, or hallucination issues. Therefore, it is very urgent and important to detect MGTs in many situations. Unfortunately, it is challenging to distinguish MGTs and human-written texts because the distributional discrepancy between them is often very subtle due to the remarkable performance of LLMs. In this paper, we seek to exploit \textit{maximum mean discrepancy} (MMD) to address this issue in the sense that MMD can well identify distributional discrepancies. However, directly training a detector with MMD using diverse MGTs will incur a significantly increased variance of MMD since MGTs may contain \textit{multiple text populations} due to various LLMs. This will severely impair MMD's ability to measure the difference between two samples. To tackle this, we propose a novel \textit{multi-population} aware optimization method for MMD called MMD-MP, which can \textit{avoid variance increases} and thus improve the stability to measure the distributional discrepancy. Relying on MMD-MP, we develop two methods for paragraph-based and sentence-based detection, respectively. Extensive experiments on various LLMs, e.g., GPT2 and ChatGPT, show superior detection performance of our MMD-MP. The source code is available at \url{https://github.com/ZSHsh98/MMD-MP}.
[ "['Shuhai Zhang' 'Yiliao Song' 'Jiahao Yang' 'Yuanqing Li' 'Bo Han'\n 'Mingkui Tan']" ]
null
null
2402.16048
null
null
http://arxiv.org/pdf/2402.16048v1
2024-02-25T10:13:04Z
2024-02-25T10:13:04Z
LLMs with Chain-of-Thought Are Non-Causal Reasoners
This paper explores the role of the Chain of Thought (CoT) in Large Language Models (LLMs) reasoning. Despite its potential to improve task performance, our analysis reveals a surprising frequency of correct answers following incorrect CoTs and vice versa. We employ causal analysis to assess the cause-effect relationship between CoTs/instructions and answers in LLMs, uncovering the Structural Causal Model (SCM) that LLMs approximate. By comparing the implied SCM with that of human reasoning, we highlight discrepancies between LLM and human reasoning processes. We further examine the factors influencing the causal structure of the implied SCM, revealing that in-context learning, supervised fine-tuning, and reinforcement learning from human feedback significantly impact the causal relations. We release the code and results at https://github.com/StevenZHB/CoT_Causal_Analysis.
[ "['Guangsheng Bao' 'Hongbo Zhang' 'Linyi Yang' 'Cunxiang Wang' 'Yue Zhang']" ]
null
null
2402.16059
null
null
http://arxiv.org/pdf/2402.16059v1
2024-02-25T11:08:19Z
2024-02-25T11:08:19Z
Gradient-enhanced deep Gaussian processes for multifidelity modelling
Multifidelity models integrate data from multiple sources to produce a single approximator for the underlying process. Dense low-fidelity samples are used to reduce interpolation error, while sparse high-fidelity samples are used to compensate for bias or noise in the low-fidelity samples. Deep Gaussian processes (GPs) are attractive for multifidelity modelling as they are non-parametric, robust to overfitting, perform well for small datasets, and, critically, can capture nonlinear and input-dependent relationships between data of different fidelities. Many datasets naturally contain gradient data, especially when they are generated by computational models that are compatible with automatic differentiation or have adjoint solutions. Principally, this work extends deep GPs to incorporate gradient data. We demonstrate this method on an analytical test problem and a realistic partial differential equation problem, where we predict the aerodynamic coefficients of a hypersonic flight vehicle over a range of flight conditions and geometries. In both examples, the gradient-enhanced deep GP outperforms a gradient-enhanced linear GP model and their non-gradient-enhanced counterparts.
[ "['Viv Bone' 'Chris van der Heide' 'Kieran Mackle' 'Ingo H. J. Jahn'\n 'Peter M. Dower' 'Chris Manzie']" ]
null
null
2402.16065
null
null
http://arxiv.org/pdf/2402.16065v1
2024-02-25T11:26:39Z
2024-02-25T11:26:39Z
Training a Bilingual Language Model by Mapping Tokens onto a Shared Character Space
We train a bilingual Arabic-Hebrew language model using a transliterated version of Arabic texts in Hebrew, to ensure both languages are represented in the same script. Given the morphological and structural similarities and the extensive number of cognates shared between Arabic and Hebrew, we assess the performance of a language model that employs a unified script for both languages on machine translation, which requires cross-lingual knowledge. The results are promising: our model outperforms a contrasting model which keeps the Arabic texts in the Arabic script, demonstrating the efficacy of the transliteration step. Despite being trained on a dataset approximately 60% smaller than those of other existing language models, our model appears to deliver comparable performance in machine translation across both translation directions.
[ "['Aviad Rom' 'Kfir Bar']" ]
null
null
2402.16073
null
null
http://arxiv.org/pdf/2402.16073v2
2024-03-06T10:03:06Z
2024-02-25T12:06:33Z
Pfeed: Generating near real-time personalized feeds using precomputed embedding similarities
In personalized recommender systems, embeddings are often used to encode customer actions and items, and retrieval is then performed in the embedding space using approximate nearest neighbor search. However, this approach can lead to two challenges: 1) user embeddings can restrict the diversity of interests captured and 2) the need to keep them up-to-date requires an expensive, real-time infrastructure. In this paper, we propose a method that overcomes these challenges in a practical, industrial setting. The method dynamically updates customer profiles and composes a feed every two minutes, employing precomputed embeddings and their respective similarities. We tested and deployed this method to personalise promotional items at Bol, one of the largest e-commerce platforms of the Netherlands and Belgium. The method enhanced customer engagement and experience, leading to a significant 4.9% uplift in conversions.
[ "['Binyam Gebre' 'Karoliina Ranta' 'Stef van den Elzen' 'Ernst Kuiper'\n 'Thijs Baars' 'Tom Heskes']" ]
null
null
2402.16075
null
null
http://arxiv.org/pdf/2402.16075v4
2024-07-11T03:41:42Z
2024-02-25T12:19:21Z
Don't Start from Scratch: Behavioral Refinement via Interpolant-based Policy Diffusion
Imitation learning empowers artificial agents to mimic behavior by learning from demonstrations. Recently, diffusion models, which have the ability to model high-dimensional and multimodal distributions, have shown impressive performance on imitation learning tasks. These models learn to shape a policy by diffusing actions (or states) from standard Gaussian noise. However, the target policy to be learned is often significantly different from Gaussian and this mismatch can result in poor performance when using a small number of diffusion steps (to improve inference speed) and under limited data. The key idea in this work is that initiating from a more informative source than Gaussian enables diffusion methods to mitigate the above limitations. We contribute theoretical results, a new method, and empirical findings that show the benefits of using an informative source policy. Our method, which we call BRIDGER, leverages the stochastic interpolants framework to bridge arbitrary policies, thus enabling a flexible approach towards imitation learning. It generalizes prior work in that standard Gaussians can still be applied, but other source policies can be used if available. In experiments on challenging simulation benchmarks and on real robots, BRIDGER outperforms state-of-the-art diffusion policies. We provide further analysis on design considerations when applying BRIDGER. Code for BRIDGER is available at https://github.com/clear-nus/bridger.
[ "['Kaiqi Chen' 'Eugene Lim' 'Kelvin Lin' 'Yiyang Chen' 'Harold Soh']" ]