categories | doi | id | year | venue | link | updated | published | title | abstract | authors |
string | string | string | float64 | string | string | string | string | string | string | list |
---|---|---|---|---|---|---|---|---|---|---|
null | null | 2404.12195 | null | null | http://arxiv.org/pdf/2404.12195v1 | 2024-04-18T13:57:18Z | 2024-04-18T13:57:18Z | OpenBezoar: Small, Cost-Effective and Open Models Trained on Mixes of
Instruction Data | Instruction fine-tuning pretrained LLMs for diverse downstream tasks has demonstrated remarkable success and has captured the interest of both academics and practitioners. To ensure such fine-tuned LLMs align with human preferences, techniques such as RLHF and DPO have emerged. At the same time, there is increasing interest in smaller parameter counts for models. In this work, using OpenLLaMA 3Bv2 as a base model, we describe the recipe used to fine-tune the OpenBezoar family of models. In this recipe: We first generate synthetic instruction fine-tuning data using an open and commercially non-restrictive instruction fine-tuned variant of the Falcon-40B model under three schemes based on: LaMini-LM, WizardLM/Evol-Instruct (with databricks-dolly-15k as a seed dataset) and Orca (with the Flan Collection as a seed dataset), then filter these generations using GPT-4 as a human proxy. We then perform cost-effective QLoRA-based supervised fine-tuning sequentially with each scheme. The resulting checkpoint is further fine-tuned with a subset of the HH-RLHF dataset to minimize distribution shift prior to using the DPO loss to obtain the final checkpoint. Evaluation is done with the LM Eval Harness tasks/metrics as well as on MT-Bench using the "LLM-as-a-judge" framework with Claude 2.1, with the finding that the final checkpoint, "OpenBezoar-HH-RLHF-DPO", demonstrates superior performance over many models at the 3B parameter scale, even outperforming the top model in one of the categories on the Huggingface Open LLM Leaderboard. We release "OpenBezoar-SFT", "OpenBezoar-HH-RLHF-SFT", "OpenBezoar-HH-RLHF-DPO" checkpoints, alongside our generated datasets on HuggingFace at https://huggingface.co/collections/SurgeGlobal/open-bezoar-6620a24923e12127e9e2b9cc and our codebase at https://bitbucket.org/paladinanalytics/workspace/projects/OP. | [
"['Chandeepa Dissanayake' 'Lahiru Lowe' 'Sachith Gunasekara'\n 'Yasiru Ratnayake']"
]
|
null | null | 2404.12215 | null | null | http://arxiv.org/pdf/2404.12215v2 | 2024-04-19T09:14:28Z | 2024-04-18T14:20:19Z | Quantifying Aleatoric and Epistemic Uncertainty with Proper Scoring
Rules | Uncertainty representation and quantification are paramount in machine learning and constitute an important prerequisite for safety-critical applications. In this paper, we propose novel measures for the quantification of aleatoric and epistemic uncertainty based on proper scoring rules, which are loss functions with the meaningful property that they incentivize the learner to predict ground-truth (conditional) probabilities. We assume two common representations of (epistemic) uncertainty, namely, in terms of a credal set, i.e. a set of probability distributions, or a second-order distribution, i.e., a distribution over probability distributions. Our framework establishes a natural bridge between these representations. We provide a formal justification of our approach and introduce new measures of epistemic and aleatoric uncertainty as concrete instantiations. | [
"['Paul Hofman' 'Yusuf Sale' 'Eyke Hüllermeier']"
]
|
null | null | 2404.12219 | null | null | http://arxiv.org/pdf/2404.12219v2 | 2024-04-19T11:15:07Z | 2024-04-18T14:30:46Z | A Quadrature Approach for General-Purpose Batch Bayesian Optimization
via Probabilistic Lifting | Parallelisation in Bayesian optimisation is a common strategy but faces several challenges: the need for flexibility in acquisition functions and kernel choices, flexibility in dealing with discrete and continuous variables simultaneously, model misspecification, and, lastly, fast massive parallelisation. To address these challenges, we introduce a versatile and modular framework for batch Bayesian optimisation via probabilistic lifting with kernel quadrature, called SOBER, which we present as a Python library based on GPyTorch/BoTorch. Our framework offers the following unique benefits: (1) Versatility in downstream tasks under a unified approach. (2) A gradient-free sampler, which does not require the gradient of acquisition functions, offering domain-agnostic sampling (e.g., discrete and mixed variables, non-Euclidean space). (3) Flexibility in domain prior distribution. (4) Adaptive batch size (autonomous determination of the optimal batch size). (5) Robustness against a misspecified reproducing kernel Hilbert space. (6) Natural stopping criterion. | [
"['Masaki Adachi' 'Satoshi Hayakawa' 'Martin Jørgensen' 'Saad Hamid'\n 'Harald Oberhauser' 'Michael A. Osborne']"
]
|
null | null | 2404.12228 | null | null | http://arxiv.org/pdf/2404.12228v1 | 2024-04-18T14:44:08Z | 2024-04-18T14:44:08Z | Relationship Discovery for Drug Recommendation | Medication recommendation systems are designed to deliver personalized drug suggestions that are closely aligned with individual patient needs. Previous studies have primarily concentrated on developing medication embeddings, achieving significant progress. Nonetheless, these approaches often fall short in accurately reflecting individual patient profiles, mainly due to challenges in distinguishing between various patient conditions and the inability to establish precise correlations between specific conditions and appropriate medications. In response to these issues, we introduce DisMed, a model that focuses on patient conditions to enhance personalization. DisMed employs causal inference to discern clear, quantifiable causal links. It then examines patient conditions in depth, recognizing and adapting to the evolving nuances of these conditions, and mapping them directly to corresponding medications. Additionally, DisMed leverages data from multiple patient visits to propose combinations of medications. Comprehensive testing on real-world datasets demonstrates that DisMed not only improves the customization of patient profiles but also surpasses leading models in both precision and safety. | [
"['Xiang Li' 'Shunpan Liang' 'Yu Lei' 'Chen Li' 'Yulei Hou' 'Tengfei Ma']"
]
|
null | null | 2404.12238 | null | null | http://arxiv.org/pdf/2404.12238v1 | 2024-04-18T14:57:17Z | 2024-04-18T14:57:17Z | Neural Networks with Causal Graph Constraints: A New Approach for
Treatment Effects Estimation | In recent years, there has been a growing interest in using machine learning techniques for the estimation of treatment effects. Most of the best-performing methods rely on representation learning strategies that encourage shared behavior among potential outcomes to increase the precision of treatment effect estimates. In this paper we discuss and classify these models in terms of their algorithmic inductive biases and present a new model, NN-CGC, that considers additional information from the causal graph. NN-CGC tackles bias resulting from spurious variable interactions by implementing novel constraints on models, and it can be integrated with other representation learning methods. We test the effectiveness of our method using three different base models on common benchmarks. Our results indicate that our model constraints lead to significant improvements, achieving new state-of-the-art results in treatment effects estimation. We also show that our method is robust to imperfect causal graphs and that using partial causal information is preferable to ignoring it. | [
"['Roger Pros' 'Jordi Vitrià']"
]
|
null | null | 2404.12251 | null | null | http://arxiv.org/pdf/2404.12251v1 | 2024-04-18T15:18:14Z | 2024-04-18T15:18:14Z | Dynamic Modality and View Selection for Multimodal Emotion Recognition
with Missing Modalities | The study of human emotions, traditionally a cornerstone in fields like psychology and neuroscience, has been profoundly impacted by the advent of artificial intelligence (AI). Multiple channels, such as speech (voice) and facial expressions (image), are crucial in understanding human emotions. However, AI's journey in multimodal emotion recognition (MER) is marked by substantial technical challenges. One significant hurdle is how AI models manage the absence of a particular modality - a frequent occurrence in real-world situations. This study's central focus is assessing the performance and resilience of two strategies when confronted with the lack of one modality: a novel multimodal dynamic modality and view selection and a cross-attention mechanism. Results on the RECOLA dataset show that dynamic selection-based methods are a promising approach for MER. In the missing modalities scenarios, all dynamic selection-based methods outperformed the baseline. The study concludes by emphasizing the intricate interplay between audio and video modalities in emotion prediction, showcasing the adaptability of dynamic selection methods in handling missing modalities. | [
"['Luciana Trinkaus Menon' 'Luiz Carlos Ribeiro Neduziak'\n 'Jean Paul Barddal' 'Alessandro Lameiras Koerich'\n 'Alceu de Souza Britto Jr']"
]
|
null | null | 2404.12253 | null | null | http://arxiv.org/pdf/2404.12253v1 | 2024-04-18T15:21:34Z | 2024-04-18T15:21:34Z | Toward Self-Improvement of LLMs via Imagination, Searching, and
Criticizing | Despite the impressive capabilities of Large Language Models (LLMs) on various tasks, they still struggle with scenarios that involve complex reasoning and planning. Recent work proposed advanced prompting techniques and the necessity of fine-tuning with high-quality data to augment LLMs' reasoning abilities. However, these approaches are inherently constrained by data availability and quality. In light of this, self-correction and self-learning emerge as viable solutions, employing strategies that allow LLMs to refine their outputs and learn from self-assessed rewards. Yet, the efficacy of LLMs in self-refining their responses, particularly on complex reasoning and planning tasks, remains dubious. In this paper, we introduce AlphaLLM for the self-improvement of LLMs, which integrates Monte Carlo Tree Search (MCTS) with LLMs to establish a self-improving loop, thereby enhancing the capabilities of LLMs without additional annotations. Drawing inspiration from the success of AlphaGo, AlphaLLM addresses the unique challenges of combining MCTS with LLMs for self-improvement, including data scarcity, the vast search spaces of language tasks, and the subjective nature of feedback in language tasks. AlphaLLM comprises a prompt synthesis component, an efficient MCTS approach tailored for language tasks, and a trio of critic models for precise feedback. Our experimental results in mathematical reasoning tasks demonstrate that AlphaLLM significantly enhances the performance of LLMs without additional annotations, showing the potential for self-improvement in LLMs. | [
"['Ye Tian' 'Baolin Peng' 'Linfeng Song' 'Lifeng Jin' 'Dian Yu' 'Haitao Mi'\n 'Dong Yu']"
]
|
null | null | 2404.12256 | null | null | http://arxiv.org/abs/2404.12256v1 | 2024-04-18T15:22:29Z | 2024-04-18T15:22:29Z | An Online Spatial-Temporal Graph Trajectory Planner for Autonomous
Vehicles | The autonomous driving industry is expected to grow by over 20 times in the coming decade and, thus, motivate researchers to delve into it. The primary focus of their research is to ensure safety, comfort, and efficiency. An autonomous vehicle has several modules responsible for one or more of the aforementioned items. Among these modules, the trajectory planner plays a pivotal role in the safety of the vehicle and the comfort of its passengers. The module is also responsible for respecting kinematic constraints and any applicable road constraints. In this paper, a novel online spatial-temporal graph trajectory planner is introduced to generate safe and comfortable trajectories. First, a spatial-temporal graph is constructed using the autonomous vehicle, its surrounding vehicles, and virtual nodes along the road with respect to the vehicle itself. Next, the graph is forwarded into a sequential network to obtain the desired states. To support the planner, a simple behavioral layer is also presented that determines kinematic constraints for the planner. Furthermore, a novel potential function is also proposed to train the network. Finally, the proposed planner is tested on three different complex driving tasks, and the performance is compared with two frequently used methods. The results show that the proposed planner generates safe and feasible trajectories while achieving similar or longer distances in the forward direction and comparable comfort ride. | [
"['Jilan Samiuddin' 'Benoit Boulet' 'Di Wu']"
]
|
null | null | 2404.12257 | null | null | http://arxiv.org/pdf/2404.12257v1 | 2024-04-18T15:23:37Z | 2024-04-18T15:23:37Z | Food Portion Estimation via 3D Object Scaling | Image-based methods to analyze food images have alleviated the user burden and biases associated with traditional methods. However, accurate portion estimation remains a major challenge due to the loss of 3D information in the 2D representation of foods captured by smartphone cameras or wearable devices. In this paper, we propose a new framework to estimate both food volume and energy from 2D images by leveraging the power of 3D food models and physical reference in the eating scene. Our method estimates the pose of the camera and the food object in the input image and recreates the eating occasion by rendering an image of a 3D model of the food with the estimated poses. We also introduce a new dataset, SimpleFood45, which contains 2D images of 45 food items and associated annotations including food volume, weight, and energy. Our method achieves an average error of 31.10 kCal (17.67%) on this dataset, outperforming existing portion estimation methods. | [
"['Gautham Vinod' 'Jiangpeng He' 'Zeman Shao' 'Fengqing Zhu']"
]
|
null | null | 2404.12260 | null | null | http://arxiv.org/pdf/2404.12260v1 | 2024-04-18T15:28:34Z | 2024-04-18T15:28:34Z | Alleviating Catastrophic Forgetting in Facial Expression Recognition
with Emotion-Centered Models | Facial expression recognition is a pivotal component in machine learning, facilitating various applications. However, convolutional neural networks (CNNs) are often plagued by catastrophic forgetting, impeding their adaptability. The proposed method, emotion-centered generative replay (ECgr), tackles this challenge by integrating synthetic images from generative adversarial networks. Moreover, ECgr incorporates a quality assurance algorithm to ensure the fidelity of generated images. This dual approach enables CNNs to retain past knowledge while learning new tasks, enhancing their performance in emotion recognition. The experimental results on four diverse facial expression datasets demonstrate that incorporating images generated by our pseudo-rehearsal method enhances training on the targeted dataset and the source dataset while making the CNN retain previously learned knowledge. | [
"['Israel A. Laurensi' 'Alceu de Souza Britto Jr.' 'Jean Paul Barddal'\n 'Alessandro Lameiras Koerich']"
]
|
null | null | 2404.12267 | null | null | http://arxiv.org/pdf/2404.12267v1 | 2024-04-18T15:38:14Z | 2024-04-18T15:38:14Z | Physics-integrated generative modeling using attentive planar
normalizing flow based variational autoencoder | Physics-integrated generative modeling is a class of hybrid or grey-box modeling in which we augment the data-driven model with the physics knowledge governing the data distribution. The use of physics knowledge allows the generative model to produce output in a controlled way, so that the output, by construction, complies with the physical laws. It imparts improved generalization ability to extrapolate beyond the training distribution as well as improved interpretability because the model is partly grounded in firm domain knowledge. In this work, we aim to improve the fidelity of reconstruction and robustness to noise in the physics-integrated generative model. To this end, we use a variational autoencoder as a generative model. To improve the reconstruction results of the decoder, we propose to learn the latent posterior distribution of both the physics as well as the trainable data-driven components using a planar normalizing flow. The normalizing flow based posterior distribution harnesses the inherent dynamical structure of the data distribution, hence the learned model gets closer to the true underlying data distribution. To improve the robustness of the generative model against noise injected in the model, we propose a modification in the encoder part of the normalizing flow based VAE. We designed the encoder to incorporate scaled dot product attention based contextual information in the noisy latent vector, which mitigates the adverse effect of noise in the latent vector and makes the model more robust. We empirically evaluated our models on the human locomotion dataset [33] and the results validate the efficacy of our proposed models in terms of improvement in reconstruction quality as well as robustness against noise injected in the model. | [
"['Sheikh Waqas Akhtar']"
]
|
null | null | 2404.12273 | null | null | http://arxiv.org/pdf/2404.12273v1 | 2024-04-18T15:46:26Z | 2024-04-18T15:46:26Z | FedEval-LLM: Federated Evaluation of Large Language Models on Downstream
Tasks with Collective Wisdom | Federated Learning (FL) has emerged as a promising solution for collaborative training of large language models (LLMs). However, the integration of LLMs into FL introduces new challenges, particularly concerning the evaluation of LLMs. Traditional evaluation methods that rely on labeled test sets and similarity-based metrics cover only a subset of the acceptable answers, thereby failing to accurately reflect the performance of LLMs on generative tasks. Meanwhile, although automatic evaluation methods that leverage advanced LLMs present potential, they face critical risks of data leakage due to the need to transmit data to external servers and suboptimal performance on downstream tasks due to the lack of domain knowledge. To address these issues, we propose a Federated Evaluation framework of Large Language Models, named FedEval-LLM, that provides reliable performance measurements of LLMs on downstream tasks without the reliance on labeled test sets and external tools, thus ensuring strong privacy-preserving capability. FedEval-LLM leverages a consortium of personalized LLMs from participants as referees to provide domain knowledge and collective evaluation capability, thus aligning to the respective downstream tasks and mitigating uncertainties and biases associated with a single referee. Experimental results demonstrate a significant improvement in the evaluation capability of personalized evaluation models on downstream tasks. When applied to FL, these evaluation models exhibit strong agreement with human preference and RougeL-score on meticulously curated test sets. FedEval-LLM effectively overcomes the limitations of traditional metrics and the reliance on external services, making it a promising framework for the evaluation of LLMs within collaborative training scenarios. | [
"['Yuanqin He' 'Yan Kang' 'Lixin Fan' 'Qiang Yang']"
]
|
null | null | 2404.12282 | null | null | http://arxiv.org/pdf/2404.12282v1 | 2024-04-18T15:58:31Z | 2024-04-18T15:58:31Z | Investigating Guiding Information for Adaptive Collocation Point
Sampling in PINNs | Physics-informed neural networks (PINNs) provide a means of obtaining approximate solutions of partial differential equations and systems through the minimisation of an objective function which includes the evaluation of a residual function at a set of collocation points within the domain. The quality of a PINNs solution depends upon numerous parameters, including the number and distribution of these collocation points. In this paper we consider a number of strategies for selecting these points and investigate their impact on the overall accuracy of the method. In particular, we suggest that no single approach is likely to be ``optimal'' but we show how a number of important metrics can have an impact in improving the quality of the results obtained when using a fixed number of residual evaluations. We illustrate these approaches through the use of two benchmark test problems: Burgers' equation and the Allen-Cahn equation. | [
"['Jose Florido' 'He Wang' 'Amirul Khan' 'Peter K. Jimack']"
]
|
null | null | 2404.12290 | null | null | http://arxiv.org/pdf/2404.12290v2 | 2024-05-27T00:47:25Z | 2024-04-18T16:11:16Z | Debiased Distribution Compression | Modern compression methods can summarize a target distribution $\mathbb{P}$ more succinctly than i.i.d. sampling but require access to a low-bias input sequence like a Markov chain converging quickly to $\mathbb{P}$. We introduce a new suite of compression methods suitable for compression with biased input sequences. Given $n$ points targeting the wrong distribution and quadratic time, Stein kernel thinning (SKT) returns $\sqrt{n}$ equal-weighted points with $\widetilde{O}(n^{-1/2})$ maximum mean discrepancy (MMD) to $\mathbb{P}$. For larger-scale compression tasks, low-rank SKT achieves the same feat in sub-quadratic time using an adaptive low-rank debiasing procedure that may be of independent interest. For downstream tasks that support simplex or constant-preserving weights, Stein recombination and Stein Cholesky achieve even greater parsimony, matching the guarantees of SKT with as few as $\text{poly-log}(n)$ weighted points. Underlying these advances are new guarantees for the quality of simplex-weighted coresets, the spectral decay of kernel matrices, and the covering numbers of Stein kernel Hilbert spaces. In our experiments, our techniques provide succinct and accurate posterior summaries while overcoming biases due to burn-in, approximate Markov chain Monte Carlo, and tempering. | [
"['Lingxiao Li' 'Raaz Dwivedi' 'Lester Mackey']"
]
|
null | null | 2404.12293 | null | null | http://arxiv.org/pdf/2404.12293v1 | 2024-04-18T16:13:58Z | 2024-04-18T16:13:58Z | Singular-limit analysis of gradient descent with noise injection | We study the limiting dynamics of a large class of noisy gradient descent systems in the overparameterized regime. In this regime the set of global minimizers of the loss is large, and when initialized in a neighbourhood of this zero-loss set, a noisy gradient descent algorithm slowly evolves along this set. In some cases this slow evolution has been related to better generalisation properties. We characterize this evolution for the broad class of noisy gradient descent systems in the limit of small step size. Our results show that the structure of the noise affects not just the form of the limiting process, but also the time scale at which the evolution takes place. We apply the theory to Dropout, label noise and classical SGD (minibatching) noise, and show that these evolve on two different time scales. Classical SGD even yields a trivial evolution on both time scales, implying that additional noise is required for regularization. The results are inspired by the training of neural networks, but the theorems apply to noisy gradient descent of any loss that has a non-trivial zero-loss set. | [
"['Anna Shalova' 'André Schlichting' 'Mark Peletier']"
]
|
null | null | 2404.12294 | null | null | http://arxiv.org/pdf/2404.12294v2 | 2024-06-07T09:16:23Z | 2024-04-18T16:16:02Z | $floZ$: Improved Bayesian evidence estimation from posterior samples
with normalizing flows | We introduce $floZ$, an improved method based on normalizing flows, for estimating the Bayesian evidence (and its numerical uncertainty) from a set of samples drawn from the unnormalized posterior distribution. We validate it on distributions whose evidence is known analytically, up to 15 parameter space dimensions, and compare with two state-of-the-art techniques for estimating the evidence: nested sampling (which computes the evidence as its main target) and a $k$-nearest-neighbors technique that produces evidence estimates from posterior samples. Provided representative samples from the target posterior are available, our method is more robust to posterior distributions with sharp features, especially in higher dimensions. For a simple multivariate Gaussian, we demonstrate its accuracy for up to 200 dimensions with $10^5$ posterior samples. $floZ$ has wide applicability, e.g., to estimate the evidence from variational inference, Markov Chain Monte Carlo samples, or any other method that delivers samples from the unnormalized posterior density, such as simulation-based inference. We apply $floZ$ to compute the Bayes factor for the presence of the first overtone in the ringdown signal of the gravitational wave data of GW150914, finding good agreement with nested sampling. | [
"['Rahul Srinivasan' 'Marco Crisostomi' 'Roberto Trotta' 'Enrico Barausse'\n 'Matteo Breschi']"
]
|
null | null | 2404.12299 | null | null | http://arxiv.org/pdf/2404.12299v1 | 2024-04-18T16:24:12Z | 2024-04-18T16:24:12Z | Simultaneous Interpretation Corpus Construction by Large Language Models
in Distant Language Pair | In Simultaneous Machine Translation (SiMT) systems, training with a simultaneous interpretation (SI) corpus is an effective method for achieving high-quality yet low-latency systems. However, it is very challenging to curate such a corpus due to limitations in the abilities of annotators, and hence, existing SI corpora are limited. Therefore, we propose a method to convert existing speech translation corpora into interpretation-style data, maintaining the original word order and preserving the entire source content using Large Language Models (LLM-SI-Corpus). We demonstrate that fine-tuning SiMT models in text-to-text and speech-to-text settings with the LLM-SI-Corpus reduces latencies while maintaining the same level of quality as the models trained with offline datasets. The LLM-SI-Corpus is available at \url{https://github.com/yusuke1997/LLM-SI-Corpus}. | [
"['Yusuke Sakai' 'Mana Makinae' 'Hidetaka Kamigaito' 'Taro Watanabe']"
]
|
null | null | 2404.12308 | null | null | http://arxiv.org/pdf/2404.12308v2 | 2024-06-27T01:22:30Z | 2024-04-18T16:35:38Z | ASID: Active Exploration for System Identification in Robotic
Manipulation | Model-free control strategies such as reinforcement learning have shown the ability to learn control strategies without requiring an accurate model or simulator of the world. While this is appealing due to the lack of modeling requirements, such methods can be sample inefficient, making them impractical in many real-world domains. On the other hand, model-based control techniques leveraging accurate simulators can circumvent these challenges and use a large amount of cheap simulation data to learn controllers that can effectively transfer to the real world. The challenge with such model-based techniques is the requirement for an extremely accurate simulation, requiring both the specification of appropriate simulation assets and physical parameters. This requires considerable human effort to design for every environment being considered. In this work, we propose a learning system that can leverage a small amount of real-world data to autonomously refine a simulation model and then plan an accurate control strategy that can be deployed in the real world. Our approach critically relies on utilizing an initial (possibly inaccurate) simulator to design effective exploration policies that, when deployed in the real world, collect high-quality data. We demonstrate the efficacy of this paradigm in identifying articulation, mass, and other physical parameters in several challenging robotic manipulation tasks, and illustrate that only a small amount of real-world data can allow for effective sim-to-real transfer. Project website at https://weirdlabuw.github.io/asid | [
"['Marius Memmel' 'Andrew Wagenmaker' 'Chuning Zhu' 'Patrick Yin'\n 'Dieter Fox' 'Abhishek Gupta']"
]
|
null | null | 2404.12309 | null | null | http://arxiv.org/pdf/2404.12309v1 | 2024-04-18T16:38:02Z | 2024-04-18T16:38:02Z | iRAG: An Incremental Retrieval Augmented Generation System for Videos | Retrieval augmented generation (RAG) systems combine the strengths of language generation and information retrieval to power many real-world applications like chatbots. Use of RAG for combined understanding of multimodal data such as text, images and videos is appealing but two critical limitations exist: one-time, upfront capture of all content in large multimodal data as text descriptions entails high processing times, and not all information in the rich multimodal data is typically in the text descriptions. Since the user queries are not known a priori, developing a system for multimodal to text conversion and interactive querying of multimodal data is challenging. To address these limitations, we propose iRAG, which augments RAG with a novel incremental workflow to enable interactive querying of a large corpus of multimodal data. Unlike traditional RAG, iRAG quickly indexes large repositories of multimodal data, and in the incremental workflow, it uses the index to opportunistically extract more details from select portions of the multimodal data to retrieve context relevant to an interactive user query. Such an incremental workflow avoids long multimodal to text conversion times, overcomes information loss issues by doing on-demand query-specific extraction of details in multimodal data, and ensures high quality of responses to interactive user queries that are often not known a priori. To the best of our knowledge, iRAG is the first system to augment RAG with an incremental workflow to support efficient interactive querying of large, real-world multimodal data. Experimental results on real-world long videos demonstrate 23x to 25x faster video to text ingestion, while ensuring that quality of responses to interactive user queries is comparable to responses from a traditional RAG where all video data is converted to text upfront before any querying. | [
"['Md Adnan Arefeen' 'Biplob Debnath' 'Md Yusuf Sarwar Uddin'\n 'Srimat Chakradhar']"
]
|
null | null | 2404.12312 | null | null | http://arxiv.org/pdf/2404.12312v2 | 2024-05-26T01:22:12Z | 2024-04-18T16:46:08Z | A Mean-Field Analysis of Neural Stochastic Gradient Descent-Ascent for
Functional Minimax Optimization | This paper studies minimax optimization problems defined over infinite-dimensional function classes of overparameterized two-layer neural networks. In particular, we consider the minimax optimization problem stemming from estimating linear functional equations defined by conditional expectations, where the objective functions are quadratic in the functional spaces. We address (i) the convergence of the stochastic gradient descent-ascent algorithm and (ii) the representation learning of the neural networks. We establish convergence under the mean-field regime by considering the continuous-time and infinite-width limit of the optimization dynamics. Under this regime, the stochastic gradient descent-ascent corresponds to a Wasserstein gradient flow over the space of probability measures defined over the space of neural network parameters. We prove that the Wasserstein gradient flow converges globally to a stationary point of the minimax objective at a $O(T^{-1} + \alpha^{-1})$ sublinear rate, and additionally finds the solution to the functional equation when the regularizer of the minimax objective is strongly convex. Here $T$ denotes the time and $\alpha$ is a scaling parameter of the neural networks. In terms of representation learning, our results show that the feature representation induced by the neural networks is allowed to deviate from the initial one by the magnitude of $O(\alpha^{-1})$, measured in terms of the Wasserstein distance. Finally, we apply our general results to concrete examples including policy evaluation, nonparametric instrumental variable regression, asset pricing, and adversarial Riesz representer estimation. | [
"['Yuchen Zhu' 'Yufeng Zhang' 'Zhaoran Wang' 'Zhuoran Yang' 'Xiaohong Chen']"
]
|
null | null | 2404.12314 | null | null | http://arxiv.org/pdf/2404.12314v2 | 2024-06-14T21:36:03Z | 2024-04-18T16:50:46Z | Guided Discrete Diffusion for Electronic Health Record Generation | Electronic health records (EHRs) are a pivotal data source that enables numerous applications in computational medicine, e.g., disease progression prediction, clinical trial design, and health economics and outcomes research. Despite wide usability, their sensitive nature raises privacy and confidentiality concerns, which limit potential use cases. To tackle these challenges, we explore the use of generative models to synthesize artificial, yet realistic EHRs. While diffusion-based methods have recently demonstrated state-of-the-art performance in generating other data modalities and overcome the training instability and mode collapse issues that plague previous GAN-based approaches, their applications in EHR generation remain underexplored. The discrete nature of tabular medical code data in EHRs poses challenges for high-quality data generation, especially for continuous diffusion models. To this end, we introduce a novel tabular EHR generation method, EHR-D3PM, which enables both unconditional and conditional generation using the discrete diffusion model. Our experiments demonstrate that EHR-D3PM significantly outperforms existing generative baselines on comprehensive fidelity and utility metrics while maintaining lower attribute and membership vulnerability risks. Furthermore, we show EHR-D3PM is effective as a data augmentation method and enhances performance on downstream tasks when combined with real data. | [
"['Jun Han' 'Zixiang Chen' 'Yongqian Li' 'Yiwen Kou' 'Eran Halperin'\n 'Robert E. Tillman' 'Quanquan Gu']"
]
|
null | null | 2404.12315 | null | null | http://arxiv.org/pdf/2404.12315v1 | 2024-04-18T16:51:12Z | 2024-04-18T16:51:12Z | Adjoint Sensitivities of Chaotic Flows without Adjoint Solvers: A
Data-Driven Approach | In one calculation, adjoint sensitivity analysis provides the gradient of a quantity of interest with respect to all of the system's parameters. Conventionally, adjoint solvers need to be implemented by differentiating computational models, which can be a cumbersome task and is code-specific. To propose an adjoint solver that is not code-specific, we develop a data-driven strategy. We demonstrate its application on the computation of gradients of long-time averages of chaotic flows. First, we deploy a parameter-aware echo state network (ESN) to accurately forecast and simulate the dynamics of a dynamical system for a range of system parameters. Second, we derive the adjoint of the parameter-aware ESN. Finally, we combine the parameter-aware ESN with its adjoint version to compute the sensitivities to the system parameters. We showcase the method on a prototypical chaotic system. Because adjoint sensitivities in chaotic regimes diverge for long integration times, we analyse the application of the ensemble adjoint method to the ESN. We find that the adjoint sensitivities obtained from the ESN match closely with those of the original system. This work opens possibilities for sensitivity analysis without code-specific adjoint solvers. | [
"['Defne E. Ozan' 'Luca Magri']"
]
|
null | null | 2404.12341 | null | null | http://arxiv.org/pdf/2404.12341v1 | 2024-04-18T17:10:18Z | 2024-04-18T17:10:18Z | Measuring Feature Dependency of Neural Networks by Collapsing Feature
Dimensions in the Data Manifold | This paper introduces a new technique to measure the feature dependency of neural network models. The motivation is to better understand a model by querying whether it is using information from human-understandable features, e.g., anatomical shape, volume, or image texture. Our method is based on the principle that if a model is dependent on a feature, then removal of that feature should significantly harm its performance. A targeted feature is "removed" by collapsing the dimension in the data distribution that corresponds to that feature. We perform this by moving data points along the feature dimension to a baseline feature value while staying on the data manifold, as estimated by a deep generative model. Then we observe how the model's performance changes on the modified test data set, with the target feature dimension removed. We test our method on deep neural network models trained on synthetic image data with known ground truth, an Alzheimer's disease prediction task using MRI and hippocampus segmentations from the OASIS-3 dataset, and a cell nuclei classification task using the Lizard dataset. | [
"['Yinzhu Jin' 'Matthew B. Dwyer' 'P. Thomas Fletcher']"
]
|
null | null | 2404.12355 | null | null | http://arxiv.org/pdf/2404.12355v2 | 2024-04-19T16:46:44Z | 2024-04-18T17:34:20Z | Towards a Foundation Model for Partial Differential Equations:
Multi-Operator Learning and Extrapolation | Foundation models, such as large language models, have demonstrated success in addressing various language and image processing tasks. In this work, we introduce a multi-modal foundation model for scientific problems, named PROSE-PDE. Our model, designed for bi-modality to bi-modality learning, is a multi-operator learning approach which can predict future states of spatiotemporal systems while concurrently learning the underlying governing equations of the physical system. Specifically, we focus on multi-operator learning by training distinct one-dimensional time-dependent nonlinear constant coefficient partial differential equations, with potential applications to many physical applications including physics, geology, and biology. More importantly, we provide three extrapolation studies to demonstrate that PROSE-PDE can generalize physical features through the robust training of multiple operators and that the proposed model can extrapolate to predict PDE solutions whose models or data were unseen during the training. Furthermore, we show through systematic numerical experiments that the utilization of the symbolic modality in our model effectively resolves the well-posedness problems with training multiple operators and thus enhances our model's predictive capabilities. | [
"['Jingmin Sun' 'Yuxuan Liu' 'Zecheng Zhang' 'Hayden Schaeffer']"
]
|
null | null | 2404.12356 | null | null | http://arxiv.org/pdf/2404.12356v1 | 2024-04-18T17:34:47Z | 2024-04-18T17:34:47Z | Improving the interpretability of GNN predictions through
conformal-based graph sparsification | Graph Neural Networks (GNNs) have achieved state-of-the-art performance in solving graph classification tasks. However, most GNN architectures aggregate information from all nodes and edges in a graph, regardless of their relevance to the task at hand, thus hindering the interpretability of their predictions. In contrast to prior work, in this paper we propose a GNN \emph{training} approach that jointly i) finds the most predictive subgraph by removing edges and/or nodes -- \emph{without making assumptions about the subgraph structure} -- while ii) optimizing the performance of the graph classification task. To that end, we rely on reinforcement learning to solve the resulting bi-level optimization with a reward function based on conformal predictions to account for the current in-training uncertainty of the classifier. Our empirical results on nine different graph classification datasets show that our method competes in performance with baselines while relying on significantly sparser subgraphs, leading to more interpretable GNN-based predictions. | [
"['Pablo Sanchez-Martin' 'Kinaan Aamir Khan' 'Isabel Valera']"
]
|
null | null | 2404.12358 | null | null | http://arxiv.org/pdf/2404.12358v1 | 2024-04-18T17:37:02Z | 2024-04-18T17:37:02Z | From $r$ to $Q^*$: Your Language Model is Secretly a Q-Function | Reinforcement Learning From Human Feedback (RLHF) has been critical to the success of the latest generation of generative AI models. In response to the complex nature of the classical RLHF pipeline, direct alignment algorithms such as Direct Preference Optimization (DPO) have emerged as an alternative approach. Although DPO solves the same objective as the standard RLHF setup, there is a mismatch between the two approaches. Standard RLHF deploys reinforcement learning in a specific token-level MDP, while DPO is derived as a bandit problem in which the whole response of the model is treated as a single arm. In this work we rectify this difference: first, we theoretically show that we can derive DPO in the token-level MDP as a general inverse Q-learning algorithm, which satisfies the Bellman equation. Using our theoretical results, we provide three concrete empirical insights. First, we show that because of its token level interpretation, DPO is able to perform some type of credit assignment. Next, we prove that under the token level formulation, classical search-based algorithms, such as MCTS, which have recently been applied to the language generation space, are equivalent to likelihood-based search on a DPO policy. Empirically we show that a simple beam search yields meaningful improvement over the base DPO policy. Finally, we show how the choice of reference policy causes implicit rewards to decline during training. We conclude by discussing applications of our work, including information elicitation in multi-turn dialogue, reasoning, agentic applications and end-to-end training of multi-model systems. | [
"['Rafael Rafailov' 'Joey Hejna' 'Ryan Park' 'Chelsea Finn']"
]
|
null | null | 2404.12362 | null | null | http://arxiv.org/pdf/2404.12362v1 | 2024-04-18T17:45:19Z | 2024-04-18T17:45:19Z | Transformer tricks: Removing weights for skipless transformers | He and Hofmann (arXiv:2311.01906) detailed a skipless transformer without the V and P (post-attention projection) linear layers, which reduces the total number of weights. However, this scheme is only applicable to MHA (multi-head attention), but not for MQA (multi-query attention) and GQA (grouped-query attention). The latter schemes are used by many popular LLMs such as Llama 2, Mistral, Mixtral, PaLM, and Gemma. Therefore, this micro-paper proposes mathematically equivalent versions that are suitable for MQA and GQA. For example, removing Q and P from a skipless version of Mistral-7B would remove 15% of its weights (and thus reduce its compute and memory complexity). See arXiv:2402.13388 and https://github.com/OpenMachine-ai/transformer-tricks for code and more transformer tricks. | [
"['Nils Graef']"
]
|
null | null | 2404.12365 | null | null | http://arxiv.org/pdf/2404.12365v1 | 2024-04-18T17:48:05Z | 2024-04-18T17:48:05Z | When LLMs are Unfit Use FastFit: Fast and Effective Text Classification
with Many Classes | We present FastFit, a method and Python package designed to provide fast and accurate few-shot classification, especially for scenarios with many semantically similar classes. FastFit utilizes a novel approach integrating batch contrastive learning and token-level similarity scores. Compared to existing few-shot learning packages, such as SetFit, Transformers, or few-shot prompting of large language models via API calls, FastFit significantly improves multiclass classification performance in speed and accuracy across FewMany, our newly curated English benchmark, and Multilingual datasets. FastFit demonstrates a 3-20x improvement in training speed, completing training in just a few seconds. The FastFit package is now available on GitHub and PyPi, presenting a user-friendly solution for NLP practitioners. | [
"['Asaf Yehudai' 'Elron Bendel']"
]
|
null | null | 2404.12366 | null | null | http://arxiv.org/pdf/2404.12366v1 | 2024-04-18T17:49:02Z | 2024-04-18T17:49:02Z | Accounting for AI and Users Shaping One Another: The Role of
Mathematical Models | As AI systems enter into a growing number of societal domains, these systems increasingly shape and are shaped by user preferences, opinions, and behaviors. However, the design of AI systems rarely accounts for how AI and users shape one another. In this position paper, we argue for the development of formal interaction models which mathematically specify how AI and users shape one another. Formal interaction models can be leveraged to (1) specify interactions for implementation, (2) monitor interactions through empirical analysis, (3) anticipate societal impacts via counterfactual analysis, and (4) control societal impacts via interventions. The design space of formal interaction models is vast, and model design requires careful consideration of factors such as style, granularity, mathematical complexity, and measurability. Using content recommender systems as a case study, we critically examine the nascent literature of formal interaction models with respect to these use-cases and design axes. More broadly, we call for the community to leverage formal interaction models when designing, evaluating, or auditing any AI system which interacts with users. | [
"['Sarah Dean' 'Evan Dong' 'Meena Jagadeesan' 'Liu Leqi']"
]
|
null | null | 2404.12367 | null | null | http://arxiv.org/pdf/2404.12367v1 | 2024-04-18T17:50:15Z | 2024-04-18T17:50:15Z | Information theory unifies atomistic machine learning, uncertainty
quantification, and materials thermodynamics | An accurate description of information is relevant for a range of problems in atomistic modeling, such as sampling methods, detecting rare events, analyzing datasets, or performing uncertainty quantification (UQ) in machine learning (ML)-driven simulations. Although individual methods have been proposed for each of these tasks, they lack a common theoretical background integrating their solutions. Here, we introduce an information theoretical framework that unifies predictions of phase transformations, kinetic events, dataset optimality, and model-free UQ from atomistic simulations, thus bridging materials modeling, ML, and statistical mechanics. We first demonstrate that, for a proposed representation, the information entropy of a distribution of atom-centered environments is a surrogate value for thermodynamic entropy. Using molecular dynamics (MD) simulations, we show that information entropy differences from trajectories can be used to build phase diagrams, identify rare events, and recover classical theories of nucleation. Building on these results, we use this general concept of entropy to quantify information in datasets for ML interatomic potentials (IPs), informing compression, explaining trends in testing errors, and evaluating the efficiency of active learning strategies. Finally, we propose a model-free UQ method for MLIPs using information entropy, showing it reliably detects extrapolation regimes, scales to millions of atoms, and goes beyond model errors. This method is made available as the package QUESTS: Quick Uncertainty and Entropy via STructural Similarity, providing a new unifying theory for data-driven atomistic modeling and combining efforts in ML, first-principles thermodynamics, and simulations. | [
"['Daniel Schwalbe-Koda' 'Sebastien Hamel' 'Babak Sadigh' 'Fei Zhou'\n 'Vincenzo Lordi']"
]
|
null | null | 2404.12368 | null | null | http://arxiv.org/pdf/2404.12368v2 | 2024-04-23T01:21:58Z | 2024-04-18T17:50:23Z | Gradient-Regularized Out-of-Distribution Detection | One of the challenges for neural networks in real-life applications is the overconfident errors these models make when the data is not from the original training distribution. Addressing this issue is known as Out-of-Distribution (OOD) detection. Many state-of-the-art OOD methods employ an auxiliary dataset as a surrogate for OOD data during training to achieve improved performance. However, these methods fail to fully exploit the local information embedded in the auxiliary dataset. In this work, we propose the idea of leveraging the information embedded in the gradient of the loss function during training to enable the network to not only learn a desired OOD score for each sample but also to exhibit similar behavior in a local neighborhood around each sample. We also develop a novel energy-based sampling method to allow the network to be exposed to more informative OOD samples during the training phase. This is especially important when the auxiliary dataset is large. We demonstrate the effectiveness of our method through extensive experiments on several OOD benchmarks, improving the existing state-of-the-art FPR95 by 4% on our ImageNet experiment. We further provide a theoretical analysis through the lens of certified robustness and Lipschitz analysis to showcase the theoretical foundation of our work. We will publicly release our code after the review process. | [
"['Sina Sharifi' 'Taha Entesari' 'Bardia Safaei' 'Vishal M. Patel'\n 'Mahyar Fazlyab']"
]
|
null | null | 2404.12369 | null | null | http://arxiv.org/pdf/2404.12369v1 | 2024-04-18T17:51:02Z | 2024-04-18T17:51:02Z | KDk: A Defense Mechanism Against Label Inference Attacks in Vertical
Federated Learning | Vertical Federated Learning (VFL) is a category of Federated Learning in which models are trained collaboratively among parties with vertically partitioned data. Typically, in a VFL scenario, the labels of the samples are kept private from all the parties except for the aggregating server, that is the label owner. Nevertheless, recent works discovered that by exploiting gradient information returned by the server to bottom models, with the knowledge of only a small set of auxiliary labels on a very limited subset of training data points, an adversary can infer the private labels. These attacks are known as label inference attacks in VFL. In our work, we propose a novel framework called KDk, that combines Knowledge Distillation and k-anonymity to provide a defense mechanism against potential label inference attacks in a VFL scenario. Through an exhaustive experimental campaign we demonstrate that by applying our approach, the performance of the analyzed label inference attacks decreases consistently, even by more than 60%, maintaining the accuracy of the whole VFL almost unaltered. | [
"['Marco Arazzi' 'Serena Nicolazzo' 'Antonino Nocera']"
]
|
null | null | 2404.12376 | null | null | http://arxiv.org/pdf/2404.12376v1 | 2024-04-18T17:57:53Z | 2024-04-18T17:57:53Z | Matching the Statistical Query Lower Bound for k-sparse Parity Problems
with Stochastic Gradient Descent | The $k$-parity problem is a classical problem in computational complexity and algorithmic theory, serving as a key benchmark for understanding computational classes. In this paper, we solve the $k$-parity problem with stochastic gradient descent (SGD) on two-layer fully-connected neural networks. We demonstrate that SGD can efficiently solve the $k$-sparse parity problem on a $d$-dimensional hypercube ($k \le O(\sqrt{d})$) with a sample complexity of $\tilde{O}(d^{k-1})$ using $2^{\Theta(k)}$ neurons, thus matching the established $\Omega(d^{k})$ lower bounds of Statistical Query (SQ) models. Our theoretical analysis begins by constructing a good neural network capable of correctly solving the $k$-parity problem. We then demonstrate how a trained neural network with SGD can effectively approximate this good network, solving the $k$-parity problem with small statistical errors. Our theoretical results and findings are supported by empirical evidence, showcasing the efficiency and efficacy of our approach. | [
"['Yiwen Kou' 'Zixiang Chen' 'Quanquan Gu' 'Sham M. Kakade']"
]
|
null | null | 2404.12378 | null | null | http://arxiv.org/pdf/2404.12378v1 | 2024-04-18T17:58:16Z | 2024-04-18T17:58:16Z | 6Img-to-3D: Few-Image Large-Scale Outdoor Driving Scene Reconstruction | Current 3D reconstruction techniques struggle to infer unbounded scenes from a few images faithfully. Specifically, existing methods have high computational demands, require detailed pose information, and cannot reconstruct occluded regions reliably. We introduce 6Img-to-3D, an efficient, scalable transformer-based encoder-renderer method for single-shot image to 3D reconstruction. Our method outputs a 3D-consistent parameterized triplane from only six outward-facing input images for large-scale, unbounded outdoor driving scenarios. We take a step towards resolving existing shortcomings by combining contracted custom cross- and self-attention mechanisms for triplane parameterization, differentiable volume rendering, scene contraction, and image feature projection. We showcase that six surround-view vehicle images from a single timestamp without global pose information are enough to reconstruct 360$^{\circ}$ scenes during inference time, taking 395 ms. Our method allows, for example, rendering third-person images and birds-eye views. Our code is available at https://github.com/continental/6Img-to-3D, and more examples can be found at our website here https://6Img-to-3D.GitHub.io/. | [
"['Théo Gieruc' 'Marius Kästingschäfer' 'Sebastian Bernhard'\n 'Mathieu Salzmann']"
]
|
null | null | 2404.12386 | null | null | http://arxiv.org/pdf/2404.12386v1 | 2024-04-18T17:59:46Z | 2024-04-18T17:59:46Z | SOHES: Self-supervised Open-world Hierarchical Entity Segmentation | Open-world entity segmentation, as an emerging computer vision task, aims at segmenting entities in images without being restricted by pre-defined classes, offering impressive generalization capabilities on unseen images and concepts. Despite its promise, existing entity segmentation methods like Segment Anything Model (SAM) rely heavily on costly expert annotators. This work presents Self-supervised Open-world Hierarchical Entity Segmentation (SOHES), a novel approach that eliminates the need for human annotations. SOHES operates in three phases: self-exploration, self-instruction, and self-correction. Given a pre-trained self-supervised representation, we produce abundant high-quality pseudo-labels through visual feature clustering. Then, we train a segmentation model on the pseudo-labels, and rectify the noises in pseudo-labels via a teacher-student mutual-learning procedure. Beyond segmenting entities, SOHES also captures their constituent parts, providing a hierarchical understanding of visual entities. Using raw images as the sole training data, our method achieves unprecedented performance in self-supervised open-world segmentation, marking a significant milestone towards high-quality open-world entity segmentation in the absence of human-annotated masks. Project page: https://SOHES.github.io. | [
"['Shengcao Cao' 'Jiuxiang Gu' 'Jason Kuen' 'Hao Tan' 'Ruiyi Zhang'\n 'Handong Zhao' 'Ani Nenkova' 'Liang-Yan Gui' 'Tong Sun' 'Yu-Xiong Wang']"
]
|
null | null | 2404.12391 | null | null | http://arxiv.org/pdf/2404.12391v1 | 2024-04-18T17:59:58Z | 2024-04-18T17:59:58Z | On the Content Bias in Fréchet Video Distance | Fréchet Video Distance (FVD), a prominent metric for evaluating video generation models, is known to conflict with human perception occasionally. In this paper, we aim to explore the extent of FVD's bias toward per-frame quality over temporal realism and identify its sources. We first quantify the FVD's sensitivity to the temporal axis by decoupling the frame and motion quality and find that the FVD increases only slightly with large temporal corruption. We then analyze the generated videos and show that via careful sampling from a large set of generated videos that do not contain motions, one can drastically decrease FVD without improving the temporal quality. Both studies suggest FVD's bias towards the quality of individual frames. We further observe that the bias can be attributed to the features extracted from a supervised video classifier trained on the content-biased dataset. We show that FVD with features extracted from the recent large-scale self-supervised video models is less biased toward image quality. Finally, we revisit a few real-world examples to validate our hypothesis. | [
"['Songwei Ge' 'Aniruddha Mahapatra' 'Gaurav Parmar' 'Jun-Yan Zhu'\n 'Jia-Bin Huang']"
]
|
null | null | 2404.12394 | null | null | http://arxiv.org/pdf/2404.12394v1 | 2024-03-19T21:46:52Z | 2024-03-19T21:46:52Z | A Big Data Analytics System for Predicting Suicidal Ideation in
Real-Time Based on Social Media Streaming Data | Online social media platforms have recently become integral to our society and daily routines. Every day, users worldwide spend a couple of hours on such platforms, expressing their sentiments and emotional state and contacting each other. Analyzing such huge amounts of data from these platforms can provide a clear insight into public sentiments and help detect their mental status. The early identification of these health condition risks may assist in preventing or reducing the number of suicidal ideation cases and potentially saving people's lives. Traditional techniques have become ineffective in processing such streams and large-scale datasets. Therefore, this paper proposes a new methodology based on a big data architecture to predict suicidal ideation from social media content. The proposed approach provides a practical analysis of social media data in two phases: batch processing and real-time streaming prediction. The batch dataset was collected from the Reddit forum and used for model building and training, while streaming big data was extracted using the Twitter streaming API and used for real-time prediction. After the raw data was preprocessed, the extracted features were fed to multiple Apache Spark ML classifiers: NB, LR, LinearSVC, DT, RF, and MLP. We conducted various experiments using various feature-extraction techniques with different testing scenarios. The experimental results of the batch processing phase showed that the features extracted using (Unigram + Bigram) + CV-IDF with the MLP classifier provided high performance for classifying suicidal ideation, with an accuracy of 93.47%, and this model was then applied in the real-time streaming prediction phase. | [
"['Mohamed A. Allayla' 'Serkan Ayvaz']"
]
|
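The batch-processing phase described in the preceding entry is a standard text-classification pipeline in Spark ML: tokenize, build count vectors, weight them with IDF, and fit a classifier. The sketch below is an illustrative assumption of what such a pipeline looks like (toy data, column names, and logistic regression chosen for brevity), not the authors' exact configuration.

```python
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import Tokenizer, CountVectorizer, IDF
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("ideation-demo").getOrCreate()

# Toy stand-in for the labelled batch dataset (1.0 = at-risk post).
train_df = spark.createDataFrame(
    [("i feel hopeful about tomorrow", 0.0),
     ("i cannot see a way out anymore", 1.0)],
    ["text", "label"],
)

pipeline = Pipeline(stages=[
    Tokenizer(inputCol="text", outputCol="tokens"),
    CountVectorizer(inputCol="tokens", outputCol="tf"),   # the "CV" part of CV-IDF
    IDF(inputCol="tf", outputCol="features"),             # the "IDF" part of CV-IDF
    LogisticRegression(featuresCol="features", labelCol="label"),
])

model = pipeline.fit(train_df)
model.transform(train_df).select("text", "prediction").show(truncate=False)
```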
null | null | 2404.12396 | null | null | http://arxiv.org/pdf/2404.12396v1 | 2024-04-13T15:44:12Z | 2024-04-13T15:44:12Z | Optimized Dynamic Mode Decomposition for Reconstruction and Forecasting
of Atmospheric Chemistry Data | We introduce the optimized dynamic mode decomposition algorithm for constructing an adaptive and computationally efficient reduced order model and forecasting tool for global atmospheric chemistry dynamics. By exploiting a low-dimensional set of global spatio-temporal modes, interpretable characterizations of the underlying spatial and temporal scales can be computed. Forecasting is also achieved with a linear model that uses a linear superposition of the dominant spatio-temporal features. The DMD method is demonstrated on three months of global chemistry dynamics data, showing its significant performance in computational speed and interpretability. We show that the presented decomposition method successfully extracts known major features of atmospheric chemistry, such as summertime surface pollution and biomass burning activities. Moreover, the DMD algorithm allows for rapid reconstruction of the underlying linear model, which can then easily accommodate non-stationary data and changes in the dynamics. | [
"['Meghana Velegar' 'Christoph Keller' 'J. Nathan Kutz']"
]
|
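Exact DMD itself is compact: from a pair of time-shifted snapshot matrices it builds a low-rank linear operator whose eigenpairs give the spatio-temporal modes used for reconstruction and forecasting. The NumPy sketch below shows the standard algorithm only; the optimized DMD variant used in the preceding entry and the atmospheric-chemistry data handling are beyond this illustration.

```python
import numpy as np

def exact_dmd(snapshots, rank):
    """Standard (exact) DMD of a snapshot matrix of shape (n_features, n_times)."""
    X, Xp = snapshots[:, :-1], snapshots[:, 1:]          # X -> X' one time step later
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    Ur, Sr, Vr = U[:, :rank], np.diag(s[:rank]), Vh[:rank].conj().T

    A_tilde = Ur.conj().T @ Xp @ Vr @ np.linalg.inv(Sr)  # reduced linear operator
    eigvals, W = np.linalg.eig(A_tilde)
    modes = Xp @ Vr @ np.linalg.inv(Sr) @ W              # exact DMD modes
    amplitudes = np.linalg.lstsq(modes, snapshots[:, 0], rcond=None)[0]
    return eigvals, modes, amplitudes

def forecast(eigvals, modes, amplitudes, n_steps):
    """Linear superposition of modes propagated n_steps into the future."""
    time_dynamics = amplitudes[:, None] * eigvals[:, None] ** np.arange(n_steps)
    return (modes @ time_dynamics).real

if __name__ == "__main__":
    t = np.linspace(0, 4 * np.pi, 100)
    x = np.linspace(-5, 5, 60)
    data = np.outer(np.cos(x), np.sin(t)) + 0.5 * np.outer(np.sin(2 * x), np.cos(2 * t))
    eigvals, modes, amps = exact_dmd(data, rank=4)
    print(forecast(eigvals, modes, amps, n_steps=10).shape)  # (60, 10)
```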
null | null | 2404.12398 | null | null | http://arxiv.org/pdf/2404.12398v1 | 2024-04-14T05:02:00Z | 2024-04-14T05:02:00Z | Incremental Self-training for Semi-supervised Learning | Semi-supervised learning provides a solution to reduce the dependency of machine learning on labeled data. As one of the efficient semi-supervised techniques, self-training (ST) has received increasing attention. Several advancements have emerged to address challenges associated with noisy pseudo-labels. Previous works on self-training acknowledge the importance of unlabeled data but have not delved into their efficient utilization, nor have they paid attention to the problem of high time consumption caused by iterative learning. This paper proposes Incremental Self-training (IST) for semi-supervised learning to fill these gaps. Unlike ST, which processes all data indiscriminately, IST processes data in batches and preferentially assigns pseudo-labels to unlabeled samples with high certainty. Then, it processes the data around the decision boundary after the model is stabilized, enhancing classifier performance. Our IST is simple yet effective and fits existing self-training-based semi-supervised learning methods. We verify the proposed IST on five datasets and two types of backbones, effectively improving the recognition accuracy and learning speed. Significantly, it outperforms state-of-the-art competitors on three challenging image classification tasks. | [
"['Jifeng Guo' 'Zhulin Liu' 'Tong Zhang' 'C. L. Philip Chen']"
]
|
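The certainty-first, batch-wise pseudo-labelling idea from the preceding entry can be illustrated in a few lines: score unlabeled samples with the current model, pseudo-label only those above a confidence threshold, and defer the rest (the boundary region) to later rounds. The scikit-learn sketch below is a generic illustration of this selection rule, not the authors' IST implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def select_confident_pseudo_labels(model, X_unlabeled, threshold=0.95):
    """Return indices and pseudo-labels for samples the model is certain about."""
    proba = model.predict_proba(X_unlabeled)
    confidence = proba.max(axis=1)
    keep = np.where(confidence >= threshold)[0]
    return keep, proba[keep].argmax(axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X_lab = rng.normal(size=(40, 5))
    y_lab = (X_lab[:, 0] > 0).astype(int)
    X_unl = rng.normal(size=(200, 5))

    model = LogisticRegression().fit(X_lab, y_lab)
    for _ in range(3):                                    # a few incremental rounds
        idx, pseudo = select_confident_pseudo_labels(model, X_unl)
        if len(idx) == 0:
            break
        X_lab = np.vstack([X_lab, X_unl[idx]])
        y_lab = np.concatenate([y_lab, pseudo])
        X_unl = np.delete(X_unl, idx, axis=0)             # remaining (boundary) data
        model = LogisticRegression().fit(X_lab, y_lab)
    print("labeled pool size:", len(y_lab))
```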
null | null | 2404.12399 | null | null | http://arxiv.org/pdf/2404.12399v1 | 2024-04-14T17:07:11Z | 2024-04-14T17:07:11Z | Model Failure or Data Corruption? Exploring Inconsistencies in Building
Energy Ratings with Self-Supervised Contrastive Learning | Building Energy Rating (BER) stands as a pivotal metric, enabling building owners, policymakers, and urban planners to understand the energy-saving potential through improving building energy efficiency. As such, enhancing buildings' BER levels is expected to directly contribute to the reduction of carbon emissions and promote climate improvement. Nonetheless, the BER assessment process is vulnerable to missing and inaccurate measurements. In this study, we introduce \texttt{CLEAR}, a data-driven approach designed to scrutinize the inconsistencies in BER assessments through self-supervised contrastive learning. We validated the effectiveness of \texttt{CLEAR} using a dataset representing Irish building stocks. Our experiments uncovered evidence of inconsistent BER assessments, highlighting measurement data corruption within this real-world dataset. | [
"['Qian Xiao' 'Dan Liu' 'Kevin Credit']"
]
|
null | null | 2404.12400 | null | null | http://arxiv.org/pdf/2404.12400v1 | 2024-04-15T05:36:27Z | 2024-04-15T05:36:27Z | Efflex: Efficient and Flexible Pipeline for Spatio-Temporal Trajectory
Graph Modeling and Representation Learning | In the landscape of spatio-temporal data analytics, effective trajectory representation learning is paramount. To bridge the gap of learning accurate representations with efficient and flexible mechanisms, we introduce Efflex, a comprehensive pipeline for transformative graph modeling and representation learning of large-volume spatio-temporal trajectories. Efflex pioneers the incorporation of a multi-scale k-nearest neighbors (KNN) algorithm with feature fusion for graph construction, marking a leap in dimensionality reduction techniques by preserving essential data features. Moreover, the groundbreaking graph construction mechanism and the high-performance lightweight GCN make embedding extraction up to 36 times faster. We further offer Efflex in two versions, Efflex-L for scenarios demanding high accuracy, and Efflex-B for environments requiring swift data processing. Comprehensive experimentation with the Porto and Geolife datasets validates our approach, positioning Efflex as the state-of-the-art in the domain. Such enhancements in speed and accuracy highlight the versatility of Efflex, underscoring its wide-ranging potential for deployment in time-sensitive and computationally constrained applications. | [
"['Ming Cheng' 'Ziyi Zhou' 'Bowen Zhang' 'Ziyu Wang' 'Jiaqi Gan'\n 'Ziang Ren' 'Weiqi Feng' 'Yi Lyu' 'Hefan Zhang' 'Xingjian Diao']"
]
|
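Multi-scale kNN graph construction, the first stage mentioned in the preceding entry, can be sketched with scikit-learn: build sparse kNN adjacency matrices at several neighbourhood sizes, combine them, and hand the result to a GCN. The combination rule and the values of k below are illustrative assumptions rather than the Efflex design.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

def multi_scale_knn_graph(X, ks=(5, 10, 20)):
    """Union of kNN graphs at several scales, returned as a sparse adjacency matrix."""
    adj = kneighbors_graph(X, n_neighbors=ks[0], mode="connectivity")
    for k in ks[1:]:
        adj = adj + kneighbors_graph(X, n_neighbors=k, mode="connectivity")
    adj.data[:] = 1.0                 # keep the adjacency binary after summation
    adj = adj.maximum(adj.T)          # symmetrize for an undirected GCN input
    return adj

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    trajectory_embeddings = rng.normal(size=(500, 32))   # e.g. fused trajectory features
    adj = multi_scale_knn_graph(trajectory_embeddings)
    print(adj.shape, adj.nnz)
```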
null | null | 2404.12401 | null | null | http://arxiv.org/pdf/2404.12401v1 | 2024-04-15T08:11:45Z | 2024-04-15T08:11:45Z | Items or Relations -- what do Artificial Neural Networks learn? | What has an Artificial Neural Network (ANN) learned after being successfully trained to solve a task - the set of training items or the relations between them? This question is difficult to answer for modern applied ANNs because of their enormous size and complexity. Therefore, here we consider a low-dimensional network and a simple task, i.e., the network has to reproduce a set of training items identically. We construct the family of solutions analytically and use standard learning algorithms to obtain numerical solutions. These numerical solutions differ depending on the optimization algorithm and the weight initialization and are shown to be particular members of the family of analytical solutions. In this simple setting, we observe that the general structure of the network weights represents the training set's symmetry group, i.e., the relations between training items. As a consequence, linear networks generalize, i.e., reproduce items that were not part of the training set but are consistent with the symmetry of the training set. In contrast, non-linear networks tend to learn individual training items and show associative memory. At the same time, their ability to generalize is limited. A higher degree of generalization is obtained for networks whose activation function contains a linear regime, such as tanh. Our results suggest ANN's ability to generalize - instead of learning items - could be improved by generating a sufficiently big set of elementary operations to represent relations and strongly depends on the applied non-linearity. | [
"['Renate Krause' 'Stefan Reimann']"
]
|
null | null | 2404.12402 | null | null | http://arxiv.org/pdf/2404.12402v2 | 2024-04-30T22:37:21Z | 2024-04-15T09:33:19Z | Sup3r: A Semi-Supervised Algorithm for increasing Sparsity, Stability,
and Separability in Hierarchy Of Time-Surfaces architectures | The Hierarchy Of Time-Surfaces (HOTS) algorithm, a neuromorphic approach for feature extraction from event data, presents promising capabilities but faces challenges in accuracy and compatibility with neuromorphic hardware. In this paper, we introduce Sup3r, a Semi-Supervised algorithm aimed at addressing these challenges. Sup3r enhances sparsity, stability, and separability in the HOTS networks. It enables end-to-end online training of HOTS networks replacing external classifiers, by leveraging semi-supervised learning. Sup3r learns class-informative patterns, mitigates confounding features, and reduces the number of processed events. Moreover, Sup3r facilitates continual and incremental learning, allowing adaptation to data distribution shifts and learning new tasks without forgetting. Preliminary results on N-MNIST demonstrate that Sup3r achieves comparable accuracy to similarly sized Artificial Neural Networks trained with back-propagation. This work showcases the potential of Sup3r to advance the capabilities of HOTS networks, offering a promising avenue for neuromorphic algorithms in real-world applications. | [
"['Marco Rasetto' 'Himanshu Akolkar' 'Ryad Benosman']"
]
|
null | null | 2404.12403 | null | null | http://arxiv.org/pdf/2404.12403v1 | 2024-04-15T15:32:58Z | 2024-04-15T15:32:58Z | Multi-Objective Hardware Aware Neural Architecture Search using Hardware
Cost Diversity | Hardware-aware Neural Architecture Search approaches (HW-NAS) automate the design of deep learning architectures, tailored specifically to a given target hardware platform. Yet, these techniques demand substantial computational resources, primarily due to the expensive process of assessing the performance of identified architectures. To alleviate this problem, a recent direction in the literature has employed representation similarity metric for efficiently evaluating architecture performance. Nonetheless, since it is inherently a single objective method, it requires multiple runs to identify the optimal architecture set satisfying the diverse hardware cost constraints, thereby increasing the search cost. Furthermore, simply converting the single objective into a multi-objective approach results in an under-explored architectural search space. In this study, we propose a Multi-Objective method to address the HW-NAS problem, called MO-HDNAS, to identify the trade-off set of architectures in a single run with low computational cost. This is achieved by optimizing three objectives: maximizing the representation similarity metric, minimizing hardware cost, and maximizing the hardware cost diversity. The third objective, i.e. hardware cost diversity, is used to facilitate a better exploration of the architecture search space. Experimental results demonstrate the effectiveness of our proposed method in efficiently addressing the HW-NAS problem across six edge devices for the image classification task. | [
"['Nilotpal Sinha' 'Peyman Rostami' 'Abd El Rahman Shabayek' 'Anis Kacem'\n 'Djamila Aouada']"
]
|
null | null | 2404.12404 | null | null | http://arxiv.org/pdf/2404.12404v2 | 2024-05-27T03:29:18Z | 2024-04-15T17:49:16Z | Exploring Prompting Methods for Mitigating Class Imbalance through
Synthetic Data Generation with Large Language Models | Large language models (LLMs) have demonstrated impressive in-context learning capabilities across various domains. Inspired by this, our study explores the effectiveness of LLMs in generating realistic tabular data to mitigate class imbalance. We investigate and identify key prompt design elements such as data format, class presentation, and variable mapping to optimize the generation performance. Our findings indicate that using CSV format, balancing classes, and employing unique variable mapping produces realistic and reliable data, significantly enhancing machine learning performance for minor classes in imbalanced datasets. Additionally, these approaches improve the stability and efficiency of LLM data generation. We validate our approach using six real-world datasets and a toy dataset, achieving state-of-the-art performance in classification tasks. The code is available at: https://github.com/seharanul17/synthetic-tabular-LLM | [
"['Jinhee Kim' 'Taesung Kim' 'Jaegul Choo']"
]
|
null | null | 2404.12406 | null | null | http://arxiv.org/pdf/2404.12406v1 | 2024-04-15T22:53:30Z | 2024-04-15T22:53:30Z | Lowering PyTorch's Memory Consumption for Selective Differentiation | Memory is a limiting resource for many deep learning tasks. Besides the neural network weights, one main memory consumer is the computation graph built up by automatic differentiation (AD) for backpropagation. We observe that PyTorch's current AD implementation neglects information about parameter differentiability when storing the computation graph. This information, however, is useful for reducing memory whenever gradients are requested only for a parameter subset, as is the case in many modern fine-tuning tasks. Specifically, inputs to layers that act linearly in their parameters (dense, convolution, or normalization layers) can be discarded whenever the parameters are marked as non-differentiable. We provide a drop-in, differentiability-agnostic implementation of such layers and demonstrate its ability to reduce memory without affecting run time. | [
"['Samarth Bhatia' 'Felix Dangel']"
]
|
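The observation in the preceding entry generalizes to a few lines of PyTorch: a linear layer only needs to store its input for the backward pass if its weight is differentiable. The custom layer below is a simplified illustration of that idea (2D inputs, no bias), not the authors' released drop-in implementation.

```python
import torch
from torch import nn

class _LinearSelectiveSave(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, weight, weight_needs_grad):
        ctx.weight_needs_grad = weight_needs_grad
        if weight_needs_grad:
            ctx.save_for_backward(x, weight)   # input needed for the weight gradient
        else:
            ctx.save_for_backward(weight)      # input can be discarded -> less memory
        return x.matmul(weight.t())

    @staticmethod
    def backward(ctx, grad_out):
        if ctx.weight_needs_grad:
            x, weight = ctx.saved_tensors
            return grad_out.matmul(weight), grad_out.t().matmul(x), None
        (weight,) = ctx.saved_tensors
        return grad_out.matmul(weight), None, None

class MemoryAwareLinear(nn.Module):
    """Linear layer that skips storing its input when its weight is frozen."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.02)

    def forward(self, x):
        return _LinearSelectiveSave.apply(x, self.weight, self.weight.requires_grad)

if __name__ == "__main__":
    layer = MemoryAwareLinear(128, 64)
    layer.weight.requires_grad_(False)            # fine-tuning: this layer is frozen
    x = torch.randn(32, 128, requires_grad=True)  # e.g. activations of trainable layers
    layer(x).sum().backward()
    print(x.grad.shape)                           # gradients still flow to earlier layers
```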
null | null | 2404.12407 | null | null | http://arxiv.org/pdf/2404.12407v1 | 2024-04-16T17:47:45Z | 2024-04-16T17:47:45Z | TV100: A TV Series Dataset that Pre-Trained CLIP Has Not Seen | The era of pre-trained models has ushered in a wealth of new insights for the machine learning community. Among the myriad of questions that arise, one of paramount importance is: 'Do pre-trained models possess comprehensive knowledge?' This paper seeks to address this crucial inquiry. In line with our objective, we have made publicly available a novel dataset comprised of images from TV series released post-2021. This dataset holds significant potential for use in various research areas, including the evaluation of incremental learning, novel class discovery, and long-tailed learning, among others. Project page: https://tv-100.github.io/ | [
"['Da-Wei Zhou' 'Zhi-Hong Qi' 'Han-Jia Ye' 'De-Chuan Zhan']"
]
|
null | null | 2404.12408 | null | null | http://arxiv.org/pdf/2404.12408v1 | 2024-04-16T20:48:50Z | 2024-04-16T20:48:50Z | Benchmarking changepoint detection algorithms on cardiac time series | The pattern of state changes in a biomedical time series can be related to health or disease. This work presents a principled approach for selecting a changepoint detection algorithm for a specific task, such as disease classification. Eight key algorithms were compared, and the performance of each algorithm was evaluated as a function of temporal tolerance, noise, and abnormal conduction (ectopy) on realistic artificial cardiovascular time series data. All algorithms were applied to real data (cardiac time series of 22 patients with REM-behavior disorder (RBD) and 15 healthy controls) using the parameters selected on artificial data. Finally, features were derived from the detected changepoints to classify RBD patients from healthy controls using a K-Nearest Neighbors approach. On artificial data, Modified Bayesian Changepoint Detection algorithm provided superior positive predictive value for state change identification while Recursive Mean Difference Maximization (RMDM) achieved the highest true positive rate. For the classification task, features derived from the RMDM algorithm provided the highest leave one out cross validated accuracy of 0.89 and true positive rate of 0.87. Automatically detected changepoints provide useful information about subject's physiological state which cannot be directly observed. However, the choice of change point detection algorithm depends on the nature of the underlying data and the downstream application, such as a classification task. This work represents the first time change point detection algorithms have been compared in a meaningful way and utilized in a classification task, which demonstrates the effect of changepoint algorithm choice on application performance. | [
"['Ayse Cakmak' 'Erik Reinertsen' 'Shamim Nemati' 'Gari D. Clifford']"
]
|
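As a point of reference for what the benchmarked algorithms in the preceding entry are estimating, a single mean-shift changepoint can be located by minimizing the summed within-segment variance; a self-contained NumPy sketch follows. The compared methods (e.g., RMDM or Modified Bayesian Changepoint Detection) handle multiple changepoints, noise, and ectopy far more robustly than this toy estimator.

```python
import numpy as np

def single_mean_changepoint(x):
    """Index that best splits x into two constant-mean segments
    (maximum-likelihood split under i.i.d. Gaussian noise)."""
    n = len(x)
    total_cost = n * x.var()
    best_t, best_gain = None, -np.inf
    for t in range(2, n - 1):
        cost = t * x[:t].var() + (n - t) * x[t:].var()
        if total_cost - cost > best_gain:
            best_t, best_gain = t, total_cost - cost
    return best_t

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy cardiac-style series with a state change at index 300.
    signal = np.concatenate([rng.normal(0.8, 0.05, 300), rng.normal(1.0, 0.05, 200)])
    print(single_mean_changepoint(signal))   # close to 300
```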
null | null | 2404.12415 | null | null | http://arxiv.org/pdf/2404.12415v1 | 2024-04-17T17:57:20Z | 2024-04-17T17:57:20Z | Soil Fertility Prediction Using Combined USB-microscope Based Soil
Image, Auxiliary Variables, and Portable X-Ray Fluorescence Spectrometry | This study explored the application of portable X-ray fluorescence (PXRF) spectrometry and soil image analysis to rapidly assess soil fertility, focusing on critical parameters such as available B, organic carbon (OC), available Mn, available S, and the sulfur availability index (SAI). Analyzing 1,133 soil samples from various agro-climatic zones in Eastern India, the research combined color and texture features from microscopic soil images, PXRF data, and auxiliary soil variables (AVs) using a Random Forest model. Results indicated that integrating image features (IFs) with auxiliary variables (AVs) significantly enhanced prediction accuracy for available B (R^2 = 0.80) and OC (R^2 = 0.88). A data fusion approach, incorporating IFs, AVs, and PXRF data, further improved predictions for available Mn and SAI with R^2 values of 0.72 and 0.70, respectively. The study demonstrated how these integrated technologies have the potential to provide quick and affordable options for soil testing, opening up access to more sophisticated prediction models and a better comprehension of the fertility and health of the soil. Future research should focus on the application of deep learning models on a larger dataset of soil images, developed using soils from a broader range of agro-climatic zones under field condition. | [
"['Shubhadip Dasgupta' 'Satwik Pate' 'Divya Rathore' 'L. G. Divyanth'\n 'Ayan Das' 'Anshuman Nayak' 'Subhadip Dey' 'Asim Biswas'\n 'David C. Weindorf' 'Bin Li' 'Sergio Henrique Godinho Silva'\n 'Bruno Teixeira Ribeiro' 'Sanjay Srivastava' 'Somsubhra Chakraborty']"
]
|
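The data-fusion step in the preceding entry amounts to concatenating the feature blocks (image features, auxiliary variables, PXRF data) before fitting a Random Forest. The scikit-learn sketch below uses synthetic arrays as stand-ins for the real measurements and is only meant to show the fusion-then-fit pattern.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 300                                        # toy number of soil samples
image_features = rng.normal(size=(n, 20))      # colour/texture features from soil images
auxiliary_vars = rng.normal(size=(n, 5))       # e.g. auxiliary soil variables
pxrf_spectra = rng.normal(size=(n, 15))        # elemental concentrations from PXRF
target = rng.normal(size=n)                    # e.g. organic carbon (OC)

# Data fusion: simple feature-level concatenation of the three blocks.
X = np.hstack([image_features, auxiliary_vars, pxrf_spectra])

model = RandomForestRegressor(n_estimators=300, random_state=0)
print("CV R^2:", cross_val_score(model, X, target, scoring="r2", cv=5).mean())
```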
null | null | 2404.12416 | null | null | http://arxiv.org/pdf/2404.12416v1 | 2024-04-18T00:05:57Z | 2024-04-18T00:05:57Z | Full Shot Predictions for the DIII-D Tokamak via Deep Recurrent Networks | Although tokamaks are one of the most promising devices for realizing nuclear fusion as an energy source, there are still key obstacles when it comes to understanding the dynamics of the plasma and controlling it. As such, it is crucial that high quality models are developed to assist in overcoming these obstacles. In this work, we take an entirely data driven approach to learn such a model. In particular, we use historical data from the DIII-D tokamak to train a deep recurrent network that is able to predict the full time evolution of plasma discharges (or "shots"). Following this, we investigate how different training and inference procedures affect the quality and calibration of the shot predictions. | [
"['Ian Char' 'Youngseog Chung' 'Joseph Abbate' 'Egemen Kolemen'\n 'Jeff Schneider']"
]
|
null | null | 2404.12418 | null | null | http://arxiv.org/pdf/2404.12418v1 | 2024-04-18T15:31:13Z | 2024-04-18T15:31:13Z | The graph alignment problem: fundamental limits and efficient algorithms | This thesis studies the graph alignment problem, the noisy version of the graph isomorphism problem, which aims to find a matching between the nodes of two graphs which preserves most of the edges. Focusing on the planted version where the graphs are random, we are interested in understanding the fundamental information-theoretical limits for this problem, as well as designing and analyzing algorithms that are able to recover the underlying alignment in the data. For these algorithms, we give some high probability guarantees on the regime in which they succeed or fail. | [
"['Luca Ganassali']"
]
|
null | null | 2404.12445 | null | null | http://arxiv.org/pdf/2404.12445v1 | 2024-04-18T18:11:06Z | 2024-04-18T18:11:06Z | Adaptive Catalyst Discovery Using Multicriteria Bayesian Optimization
with Representation Learning | High-performance catalysts are crucial for sustainable energy conversion and human health. However, the discovery of catalysts faces challenges due to the absence of efficient approaches to navigating vast and high-dimensional structure and composition spaces. In this study, we propose a high-throughput computational catalyst screening approach integrating density functional theory (DFT) and Bayesian Optimization (BO). Within the BO framework, we propose an uncertainty-aware atomistic machine learning model, UPNet, which enables automated representation learning directly from high-dimensional catalyst structures and achieves principled uncertainty quantification. Utilizing a constrained expected improvement acquisition function, our BO framework simultaneously considers multiple evaluation criteria. Using the proposed methods, we explore catalyst discovery for the CO2 reduction reaction. The results demonstrate that our approach achieves high prediction accuracy, facilitates interpretable feature extraction, and enables multicriteria design optimization, leading to significant reduction of computing power and time (10x reduction of required DFT calculations) in high-performance catalyst discovery. | [
"['Jie Chen' 'Pengfei Ou' 'Yuxin Chang' 'Hengrui Zhang' 'Xiao-Yan Li'\n 'Edward H. Sargent' 'Wei Chen']"
]
|
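The constrained expected improvement acquisition mentioned in the preceding entry weights the usual EI by the probability that a constraint is satisfied under the surrogate's Gaussian predictive distribution. The NumPy/SciPy sketch below shows only that acquisition arithmetic; the UPNet surrogate, the multicriteria setup, and the DFT evaluations are not reproduced here, and the toy numbers are assumptions.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, best_so_far, xi=0.01):
    """EI for maximization, given posterior mean/std at candidate points."""
    sigma = np.maximum(sigma, 1e-12)
    z = (mu - best_so_far - xi) / sigma
    return (mu - best_so_far - xi) * norm.cdf(z) + sigma * norm.pdf(z)

def constrained_expected_improvement(mu, sigma, best, mu_c, sigma_c, threshold):
    """EI weighted by the probability that the constraint stays below `threshold`."""
    prob_feasible = norm.cdf((threshold - mu_c) / np.maximum(sigma_c, 1e-12))
    return expected_improvement(mu, sigma, best) * prob_feasible

if __name__ == "__main__":
    mu = np.array([0.2, 0.5, 0.9])        # predicted objective (e.g. catalytic activity)
    sigma = np.array([0.3, 0.1, 0.4])     # predictive uncertainty
    mu_c = np.array([0.1, 0.8, 0.3])      # predicted constraint value
    sigma_c = np.array([0.2, 0.2, 0.2])
    scores = constrained_expected_improvement(mu, sigma, best=0.6, mu_c=mu_c,
                                              sigma_c=sigma_c, threshold=0.5)
    print("next candidate:", int(np.argmax(scores)))
```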
null | null | 2404.12450 | null | null | http://arxiv.org/pdf/2404.12450v1 | 2024-04-18T18:25:00Z | 2024-04-18T18:25:00Z | Enhancing AI Diagnostics: Autonomous Lesion Masking via Semi-Supervised
Deep Learning | This study presents an unsupervised domain adaptation method aimed at autonomously generating image masks outlining regions of interest (ROIs) for differentiating breast lesions in breast ultrasound (US) imaging. Our semi-supervised learning approach utilizes a primitive model trained on a small public breast US dataset with true annotations. This model is then iteratively refined for the domain adaptation task, generating pseudo-masks for our private, unannotated breast US dataset. The dataset, twice the size of the public one, exhibits considerable variability in image acquisition perspectives and demographic representation, posing a domain-shift challenge. Unlike typical domain adversarial training, we employ downstream classification outcomes as a benchmark to guide the updating of pseudo-masks in subsequent iterations. We found the classification precision to be highly correlated with the completeness of the generated ROIs, which promotes the explainability of the deep learning classification model. Preliminary findings demonstrate the efficacy and reliability of this approach in streamlining the ROI annotation process, thereby enhancing the classification and localization of breast lesions for more precise and interpretable diagnoses. | [
"['Ting-Ruen Wei' 'Michele Hell' 'Dang Bich Thuy Le' 'Aren Vierra'\n 'Ran Pang' 'Mahesh Patel' 'Young Kang' 'Yuling Yan']"
]
|
null | null | 2404.12457 | null | null | http://arxiv.org/pdf/2404.12457v2 | 2024-04-25T06:47:57Z | 2024-04-18T18:32:30Z | RAGCache: Efficient Knowledge Caching for Retrieval-Augmented Generation | Retrieval-Augmented Generation (RAG) has shown significant improvements in various natural language processing tasks by integrating the strengths of large language models (LLMs) and external knowledge databases. However, RAG introduces long sequence generation and leads to high computation and memory costs. We propose RAGCache, a novel multilevel dynamic caching system tailored for RAG. Our analysis benchmarks current RAG systems, pinpointing the performance bottleneck (i.e., long sequence due to knowledge injection) and optimization opportunities (i.e., caching knowledge's intermediate states). Based on these insights, we design RAGCache, which organizes the intermediate states of retrieved knowledge in a knowledge tree and caches them in the GPU and host memory hierarchy. RAGCache proposes a replacement policy that is aware of LLM inference characteristics and RAG retrieval patterns. It also dynamically overlaps the retrieval and inference steps to minimize the end-to-end latency. We implement RAGCache and evaluate it on vLLM, a state-of-the-art LLM inference system and Faiss, a state-of-the-art vector database. The experimental results show that RAGCache reduces the time to first token (TTFT) by up to 4x and improves the throughput by up to 2.1x compared to vLLM integrated with Faiss. | [
"['Chao Jin' 'Zili Zhang' 'Xuanlin Jiang' 'Fangyue Liu' 'Xin Liu'\n 'Xuanzhe Liu' 'Xin Jin']"
]
|
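The knowledge-tree idea in the preceding entry can be pictured as a prefix tree keyed by the ordered list of retrieved document IDs, so requests sharing a retrieval prefix can reuse cached intermediate (KV) states and only prefill the remainder. The sketch below is a schematic illustration of that lookup structure only; it does not interface with vLLM, Faiss, or any real KV cache, and all names are hypothetical.

```python
class PrefixNode:
    def __init__(self):
        self.children = {}     # doc_id -> PrefixNode
        self.kv_state = None   # placeholder for cached intermediate states

class KnowledgeTreeCache:
    """Cache keyed by ordered retrieved-document prefixes."""
    def __init__(self):
        self.root = PrefixNode()

    def longest_cached_prefix(self, doc_ids):
        node = self.root
        walked, best_prefix, best_state = [], [], None
        for d in doc_ids:
            if d not in node.children:
                break
            node = node.children[d]
            walked.append(d)
            if node.kv_state is not None:
                best_prefix, best_state = list(walked), node.kv_state
        return best_prefix, best_state

    def insert(self, doc_ids, kv_state):
        node = self.root
        for d in doc_ids:
            node = node.children.setdefault(d, PrefixNode())
        node.kv_state = kv_state

if __name__ == "__main__":
    cache = KnowledgeTreeCache()
    cache.insert(["doc7", "doc3"], kv_state="<states for doc7+doc3>")
    prefix, state = cache.longest_cached_prefix(["doc7", "doc3", "doc9"])
    print(prefix, state)   # ['doc7', 'doc3'] -> reuse, then prefill only doc9
```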
null | null | 2404.12467 | null | null | http://arxiv.org/pdf/2404.12467v1 | 2024-04-18T19:04:27Z | 2024-04-18T19:04:27Z | Towards Multi-modal Transformers in Federated Learning | Multi-modal transformers mark significant progress in different domains, but siloed high-quality data hinders their further improvement. To remedy this, federated learning (FL) has emerged as a promising privacy-preserving paradigm for training models without direct access to the raw data held by different clients. Despite its potential, a considerable research direction regarding the unpaired uni-modal clients and the transformer architecture in FL remains unexplored. To fill this gap, this paper explores a transfer multi-modal federated learning (MFL) scenario within the vision-language domain, where clients possess data of various modalities distributed across different datasets. We systematically evaluate the performance of existing methods when a transformer architecture is utilized and introduce a novel framework called Federated modality complementary and collaboration (FedCola) by addressing the in-modality and cross-modality gaps among clients. Through extensive experiments across various FL settings, FedCola demonstrates superior performance over previous approaches, offering new perspectives on future federated training of multi-modal transformers. | [
"['Guangyu Sun' 'Matias Mendieta' 'Aritra Dutta' 'Xin Li' 'Chen Chen']"
]
|
null | null | 2404.12474 | null | null | http://arxiv.org/pdf/2404.12474v1 | 2024-04-18T19:11:34Z | 2024-04-18T19:11:34Z | Learning a Stable, Safe, Distributed Feedback Controller for a
Heterogeneous Platoon of Vehicles | Platooning of autonomous vehicles has the potential to increase safety and fuel efficiency on highways. The goal of platooning is to have each vehicle drive at some speed (set by the leader) while maintaining a safe distance from its neighbors. Many prior works have analyzed various controllers for platooning, most commonly linear feedback and distributed model predictive controllers. In this work, we introduce an algorithm for learning a stable, safe, distributed controller for a heterogeneous platoon. Our algorithm relies on recent developments in learning neural network stability and safety certificates. We train a controller for autonomous platooning in simulation and evaluate its performance on hardware with a platoon of four F1Tenth vehicles. We then perform further analysis in simulation with a platoon of 100 vehicles. Experimental results demonstrate the practicality of the algorithm and the learned controller by comparing the performance of the neural network controller to linear feedback and distributed model predictive controllers. | [
"['Michael H. Shaham' 'Taskin Padir']"
]
|
null | null | 2404.12478 | null | null | http://arxiv.org/pdf/2404.12478v1 | 2024-04-18T19:21:28Z | 2024-04-18T19:21:28Z | A New Reliable & Parsimonious Learning Strategy Comprising Two Layers of
Gaussian Processes, to Address Inhomogeneous Empirical Correlation Structures | We present a new strategy for learning the functional relation between a pair of variables, while addressing inhomogeneities in the correlation structure of the available data, by modelling the sought function as a sample function of a non-stationary Gaussian Process (GP), that nests within itself multiple other GPs, each of which we prove can be stationary, thereby establishing sufficiency of two GP layers. In fact, a non-stationary kernel is envisaged, with each hyperparameter set as dependent on the sample function drawn from the outer non-stationary GP, such that a new sample function is drawn at every pair of input values at which the kernel is computed. However, such a model cannot be implemented, and we substitute this by recalling that the average effect of drawing different sample functions from a given GP is equivalent to that of drawing a sample function from each of a set of GPs that are rendered different, as updated during the equilibrium stage of the undertaken inference (via MCMC). The kernel is fully non-parametric, and it suffices to learn one hyperparameter per layer of GP, for each dimension of the input variable. We illustrate this new learning strategy on a real dataset. | [
"['Gargi Roy' 'Dalia Chakrabarty']"
]
|
null | null | 2404.12481 | null | null | http://arxiv.org/pdf/2404.12481v1 | 2024-04-18T19:33:55Z | 2024-04-18T19:33:55Z | Understanding Optimal Feature Transfer via a Fine-Grained Bias-Variance
Analysis | In the transfer learning paradigm models learn useful representations (or features) during a data-rich pretraining stage, and then use the pretrained representation to improve model performance on data-scarce downstream tasks. In this work, we explore transfer learning with the goal of optimizing downstream performance. We introduce a simple linear model that takes as input an arbitrary pretrained feature transform. We derive exact asymptotics of the downstream risk and its fine-grained bias-variance decomposition. Our finding suggests that using the ground-truth featurization can result in "double-divergence" of the asymptotic risk, indicating that it is not necessarily optimal for downstream performance. We then identify the optimal pretrained representation by minimizing the asymptotic downstream risk averaged over an ensemble of downstream tasks. Our analysis reveals the relative importance of learning the task-relevant features and structures in the data covariates and characterizes how each contributes to controlling the downstream risk from a bias-variance perspective. Moreover, we uncover a phase transition phenomenon where the optimal pretrained representation transitions from hard to soft selection of relevant features and discuss its connection to principal component regression. | [
"['Yufan Li' 'Subhabrata Sen' 'Ben Adlam']"
]
|
null | null | 2404.12484 | null | null | http://arxiv.org/pdf/2404.12484v3 | 2024-06-26T04:27:25Z | 2024-04-18T19:57:06Z | Neural Methods for Amortised Inference | Simulation-based methods for statistical inference have evolved dramatically over the past 50 years, keeping pace with technological advancements. The field is undergoing a new revolution as it embraces the representational capacity of neural networks, optimisation libraries and graphics processing units for learning complex mappings between data and inferential targets. The resulting tools are amortised, in the sense that they allow rapid inference through fast feedforward operations. In this article we review recent progress in the context of point estimation, approximate Bayesian inference, summary-statistic construction, and likelihood approximation. We also cover software, and include a simple illustration to showcase the wide array of tools available for amortised inference and the benefits they offer over Markov chain Monte Carlo methods. The article concludes with an overview of relevant topics and an outlook on future research directions. | [
"['Andrew Zammit-Mangion' 'Matthew Sainsbury-Dale' 'Raphaël Huser']"
]
|
null | null | 2404.12485 | null | null | http://arxiv.org/pdf/2404.12485v1 | 2024-04-18T19:58:11Z | 2024-04-18T19:58:11Z | Contract Scheduling with Distributional and Multiple Advice | Contract scheduling is a widely studied framework for designing real-time systems with interruptible capabilities. Previous work has showed that a prediction on the interruption time can help improve the performance of contract-based systems, however it has relied on a single prediction that is provided by a deterministic oracle. In this work, we introduce and study more general and realistic learning-augmented settings in which the prediction is in the form of a probability distribution, or it is given as a set of multiple possible interruption times. For both prediction settings, we design and analyze schedules which perform optimally if the prediction is accurate, while simultaneously guaranteeing the best worst-case performance if the prediction is adversarial. We also provide evidence that the resulting system is robust to prediction errors in the distributional setting. Last, we present an experimental evaluation that confirms the theoretical findings, and illustrates the performance improvements that can be attained in practice. | [
"['Spyros Angelopoulos' 'Marcin Bienkowski' 'Christoph Dürr'\n 'Bertrand Simon']"
]
|
null | null | 2404.12486 | null | null | http://arxiv.org/pdf/2404.12486v2 | 2024-04-22T19:20:33Z | 2024-04-18T20:00:25Z | Follow-Me AI: Energy-Efficient User Interaction with Smart Environments | This article introduces Follow-Me AI, a concept designed to enhance user interactions with smart environments, optimize energy use, and provide better control over data captured by these environments. Through AI agents that accompany users, Follow-Me AI negotiates data management based on user consent, aligns environmental controls, as well as the communication and compute resources available in the environment, with user preferences, and predicts user behavior to proactively adjust the smart environment. The manuscript illustrates this concept with a detailed example of Follow-Me AI in a smart campus setting, detailing the interactions with the building's management system for optimal comfort and efficiency. Finally, this article looks into the challenges and opportunities related to Follow-Me AI. | [
"['Alaa Saleh' 'Praveen Kumar Donta' 'Roberto Morabito'\n 'Naser Hossein Motlagh' 'Lauri Lovén']"
]
|
null | null | 2404.12488 | null | null | http://arxiv.org/pdf/2404.12488v1 | 2024-04-18T20:03:56Z | 2024-04-18T20:03:56Z | Global Counterfactual Directions | Despite increasing progress in development of methods for generating visual counterfactual explanations, especially with the recent rise of Denoising Diffusion Probabilistic Models, previous works consider them as an entirely local technique. In this work, we take the first step at globalizing them. Specifically, we discover that the latent space of Diffusion Autoencoders encodes the inference process of a given classifier in the form of global directions. We propose a novel proxy-based approach that discovers two types of these directions with the use of only single image in an entirely black-box manner. Precisely, g-directions allow for flipping the decision of a given classifier on an entire dataset of images, while h-directions further increase the diversity of explanations. We refer to them in general as Global Counterfactual Directions (GCDs). Moreover, we show that GCDs can be naturally combined with Latent Integrated Gradients resulting in a new black-box attribution method, while simultaneously enhancing the understanding of counterfactual explanations. We validate our approach on existing benchmarks and show that it generalizes to real-world use-cases. | [
"['Bartlomiej Sobieski' 'Przemysław Biecek']"
]
|
null | null | 2404.12498 | null | null | http://arxiv.org/pdf/2404.12498v1 | 2024-04-18T20:25:33Z | 2024-04-18T20:25:33Z | A Configurable Pythonic Data Center Model for Sustainable Cooling and ML
Integration | There have been growing discussions on estimating and subsequently reducing the operational carbon footprint of enterprise data centers. The design and intelligent control for data centers have an important impact on data center carbon footprint. In this paper, we showcase PyDCM, a Python library that enables extremely fast prototyping of data center design and applies reinforcement learning-enabled control with the purpose of evaluating key sustainability metrics including carbon footprint, energy consumption, and observing temperature hotspots. We demonstrate these capabilities of PyDCM and compare them to existing works in EnergyPlus for modeling data centers. PyDCM can also be used as a standalone Gymnasium environment for demonstrating sustainability-focused data center control. | [
"['Avisek Naug' 'Antonio Guillen' 'Ricardo Luna Gutierrez'\n 'Vineet Gundecha' 'Sahand Ghorbanpour' 'Sajad Mousavi'\n 'Ashwin Ramesh Babu' 'Soumyendu Sarkar']"
]
|
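Using a data-center model as a standalone Gymnasium environment, as described in the preceding entry, follows the usual reset/step contract. The toy environment below illustrates only that interface (a single cooling-setpoint action with a made-up thermal model and energy reward); it is an assumption for illustration and not PyDCM's actual API.

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces

class ToyDataCenterEnv(gym.Env):
    """Minimal stand-in: choose a cooling setpoint, pay an energy cost, keep IT cool."""

    def __init__(self):
        self.observation_space = spaces.Box(low=15.0, high=45.0, shape=(1,), dtype=np.float32)
        self.action_space = spaces.Box(low=18.0, high=27.0, shape=(1,), dtype=np.float32)
        self._temp = 30.0

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self._temp = 30.0
        return np.array([self._temp], dtype=np.float32), {}

    def step(self, action):
        setpoint = float(action[0])
        self._temp += 0.5 * (setpoint - self._temp)      # crude thermal response
        energy_cost = max(0.0, 27.0 - setpoint)          # colder setpoint = more energy
        overheat_penalty = max(0.0, self._temp - 32.0)   # temperature hotspot proxy
        reward = -(energy_cost + 10.0 * overheat_penalty)
        obs = np.array([self._temp], dtype=np.float32)
        return obs, reward, False, False, {}             # terminated, truncated flags

if __name__ == "__main__":
    env = ToyDataCenterEnv()
    obs, info = env.reset()
    for _ in range(5):
        obs, reward, terminated, truncated, info = env.step(env.action_space.sample())
        print(round(float(obs[0]), 2), round(reward, 2))
```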
null | null | 2404.12509 | null | null | http://arxiv.org/pdf/2404.12509v1 | 2024-04-18T21:09:34Z | 2024-04-18T21:09:34Z | Compositional Neural Textures | Texture plays a vital role in enhancing visual richness in both real photographs and computer-generated imagery. However, the process of editing textures often involves laborious and repetitive manual adjustments of textons, which are the small, recurring local patterns that define textures. In this work, we introduce a fully unsupervised approach for representing textures using a compositional neural model that captures individual textons. We represent each texton as a 2D Gaussian function whose spatial support approximates its shape, and an associated feature that encodes its detailed appearance. By modeling a texture as a discrete composition of Gaussian textons, the representation offers both expressiveness and ease of editing. Textures can be edited by modifying the compositional Gaussians within the latent space, and new textures can be efficiently synthesized by feeding the modified Gaussians through a generator network in a feed-forward manner. This approach enables a wide range of applications, including transferring appearance from an image texture to another image, diversifying textures, texture interpolation, revealing/modifying texture variations, edit propagation, texture animation, and direct texton manipulation. The proposed approach contributes to advancing texture analysis, modeling, and editing techniques, and opens up new possibilities for creating visually appealing images with controllable textures. | [
"['Peihan Tu' 'Li-Yi Wei' 'Matthias Zwicker']"
]
|
null | null | 2404.12511 | null | null | http://arxiv.org/pdf/2404.12511v1 | 2024-04-18T21:22:42Z | 2024-04-18T21:22:42Z | Generalizing Machine Learning Evaluation through the Integration of
Shannon Entropy and Rough Set Theory | This research paper delves into the innovative integration of Shannon entropy and rough set theory, presenting a novel approach to generalize the evaluation approach in machine learning. The conventional application of entropy, primarily focused on information uncertainty, is extended through its combination with rough set theory to offer a deeper insight into data's intrinsic structure and the interpretability of machine learning models. We introduce a comprehensive framework that synergizes the granularity of rough set theory with the uncertainty quantification of Shannon entropy, applied across a spectrum of machine learning algorithms. Our methodology is rigorously tested on various datasets, showcasing its capability to not only assess predictive performance but also to illuminate the underlying data complexity and model robustness. The results underscore the utility of this integrated approach in enhancing the evaluation landscape of machine learning, offering a multi-faceted perspective that balances accuracy with a profound understanding of data attributes and model dynamics. This paper contributes a groundbreaking perspective to machine learning evaluation, proposing a method that encapsulates a holistic view of model performance, thereby facilitating more informed decision-making in model selection and application. | [
"['Olga Cherednichenko' 'Dmytro Chernyshov' 'Dmytro Sytnikov'\n 'Polina Sytnikova']"
]
|
null | null | 2404.12512 | null | null | http://arxiv.org/pdf/2404.12512v1 | 2024-04-18T21:23:25Z | 2024-04-18T21:23:25Z | Proteus: Preserving Model Confidentiality during Graph Optimizations | Deep learning (DL) models have revolutionized numerous domains, yet optimizing them for computational efficiency remains a challenging endeavor. Development of new DL models typically involves two parties: the model developers and performance optimizers. The collaboration between the parties often necessitates the model developers exposing the model architecture and computational graph to the optimizers. However, this exposure is undesirable since the model architecture is an important intellectual property, and its innovations require significant investments and expertise. During the exchange, the model is also vulnerable to adversarial attacks via model stealing. This paper presents Proteus, a novel mechanism that enables model optimization by an independent party while preserving the confidentiality of the model architecture. Proteus obfuscates the protected model by partitioning its computational graph into subgraphs and concealing each subgraph within a large pool of generated realistic subgraphs that cannot be easily distinguished from the original. We evaluate Proteus on a range of DNNs, demonstrating its efficacy in preserving confidentiality without compromising performance optimization opportunities. Proteus effectively hides the model as one alternative among up to $10^{32}$ possible model architectures, and is resilient against attacks with a learning-based adversary. We also demonstrate that heuristic based and manual approaches are ineffective in identifying the protected model. To our knowledge, Proteus is the first work that tackles the challenge of model confidentiality during performance optimization. Proteus will be open-sourced for direct use and experimentation, with easy integration with compilers such as ONNXRuntime. | [
"['Yubo Gao' 'Maryam Haghifam' 'Christina Giannoula' 'Renbo Tu'\n 'Gennady Pekhimenko' 'Nandita Vijaykumar']"
]
|
null | null | 2404.12522 | null | null | http://arxiv.org/pdf/2404.12522v1 | 2024-04-18T21:52:14Z | 2024-04-18T21:52:14Z | Neural Active Learning Beyond Bandits | We study both stream-based and pool-based active learning with neural network approximations. A recent line of works proposed bandit-based approaches that transformed active learning into a bandit problem, achieving both theoretical and empirical success. However, the performance and computational costs of these methods may be susceptible to the number of classes, denoted as $K$, due to this transformation. Therefore, this paper seeks to answer the question: "How can we mitigate the adverse impacts of $K$ while retaining the advantages of principled exploration and provable performance guarantees in active learning?" To tackle this challenge, we propose two algorithms based on the newly designed exploitation and exploration neural networks for stream-based and pool-based active learning. Subsequently, we provide theoretical performance guarantees for both algorithms in a non-parametric setting, demonstrating a slower error-growth rate concerning $K$ for the proposed approaches. We use extensive experiments to evaluate the proposed algorithms, which consistently outperform state-of-the-art baselines. | [
"['Yikun Ban' 'Ishika Agarwal' 'Ziwei Wu' 'Yada Zhu' 'Kommy Weldemariam'\n 'Hanghang Tong' 'Jingrui He']"
]
|
null | null | 2404.12524 | null | null | http://arxiv.org/pdf/2404.12524v1 | 2024-04-18T21:55:23Z | 2024-04-18T21:55:23Z | DoughNet: A Visual Predictive Model for Topological Manipulation of
Deformable Objects | Manipulation of elastoplastic objects like dough often involves topological changes such as splitting and merging. The ability to accurately predict these topological changes that a specific action might incur is critical for planning interactions with elastoplastic objects. We present DoughNet, a Transformer-based architecture for handling these challenges, consisting of two components. First, a denoising autoencoder represents deformable objects of varying topology as sets of latent codes. Second, a visual predictive model performs autoregressive set prediction to determine long-horizon geometrical deformation and topological changes purely in latent space. Given a partial initial state and desired manipulation trajectories, it infers all resulting object geometries and topologies at each step. DoughNet thereby allows to plan robotic manipulation; selecting a suited tool, its pose and opening width to recreate robot- or human-made goals. Our experiments in simulated and real environments show that DoughNet is able to significantly outperform related approaches that consider deformation only as geometrical change. | [
"['Dominik Bauer' 'Zhenjia Xu' 'Shuran Song']"
]
|
null | null | 2404.12526 | null | null | http://arxiv.org/pdf/2404.12526v1 | 2024-04-18T22:01:56Z | 2024-04-18T22:01:56Z | Adaptive Memory Replay for Continual Learning | Foundation Models (FMs) have become the hallmark of modern AI; however, these models are trained on massive data, leading to financially expensive training. Updating FMs as new data becomes available is important, however, it can lead to 'catastrophic forgetting', where models underperform on tasks related to data sub-populations observed too long ago. This continual learning (CL) phenomenon has been extensively studied, but primarily in a setting where only a small amount of past data can be stored. We advocate for the paradigm where memory is abundant, allowing us to keep all previous data, but computational resources are limited. In this setting, traditional replay-based CL approaches are outperformed by a simple baseline which replays past data selected uniformly at random, indicating that this setting necessitates a new approach. We address this by introducing a framework of adaptive memory replay for continual learning, where sampling of past data is phrased as a multi-armed bandit problem. We utilize Boltzmann sampling to derive a method which dynamically selects past data for training conditioned on the current task, assuming full data access and emphasizing training efficiency. Through extensive evaluations on both vision and language pre-training tasks, we demonstrate the effectiveness of our approach, which maintains high performance while reducing forgetting by up to 10% at no training efficiency cost. | [
"['James Seale Smith' 'Lazar Valkov' 'Shaunak Halbe' 'Vyshnavi Gutta'\n 'Rogerio Feris' 'Zsolt Kira' 'Leonid Karlinsky']"
]
|
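Boltzmann (softmax) sampling over per-batch scores is the mechanism doing the work in the preceding entry: stored batches that look more useful for the current task are replayed more often, while a temperature keeps exploration alive, in the spirit of a multi-armed bandit. The NumPy sketch below shows only this sampling step; the particular score definition is an illustrative placeholder.

```python
import numpy as np

def boltzmann_sample(scores, num_samples, temperature=1.0, rng=None):
    """Sample batch indices with probability proportional to exp(score / T)."""
    rng = rng or np.random.default_rng()
    logits = np.asarray(scores, dtype=float) / temperature
    logits -= logits.max()                       # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return rng.choice(len(scores), size=num_samples, p=probs)

if __name__ == "__main__":
    # Toy "usefulness" score per stored batch of past data (e.g. loss under current task).
    scores = np.array([0.2, 1.5, 0.7, 2.3, 0.1])
    replay_ids = boltzmann_sample(scores, num_samples=8, temperature=0.5,
                                  rng=np.random.default_rng(0))
    print(replay_ids)                            # indices of past batches to replay
```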
null | null | 2404.12530 | null | null | http://arxiv.org/pdf/2404.12530v1 | 2024-04-18T22:23:24Z | 2024-04-18T22:23:24Z | TrajDeleter: Enabling Trajectory Forgetting in Offline Reinforcement
Learning Agents | Reinforcement learning (RL) trains an agent from experiences interacting with the environment. In scenarios where online interactions are impractical, offline RL, which trains the agent using pre-collected datasets, has become popular. While this new paradigm presents remarkable effectiveness across various real-world domains, like healthcare and energy management, there is a growing demand to enable agents to rapidly and completely eliminate the influence of specific trajectories from both the training dataset and the trained agents. To meet this problem, this paper advocates Trajdeleter, the first practical approach to trajectory unlearning for offline RL agents. The key idea of Trajdeleter is to guide the agent to demonstrate deteriorating performance when it encounters states associated with unlearning trajectories. Simultaneously, it ensures the agent maintains its original performance level when facing other remaining trajectories. Additionally, we introduce Trajauditor, a simple yet efficient method to evaluate whether Trajdeleter successfully eliminates the specific trajectories of influence from the offline RL agent. Extensive experiments conducted on six offline RL algorithms and three tasks demonstrate that Trajdeleter requires only about 1.5% of the time needed for retraining from scratch. It effectively unlearns an average of 94.8% of the targeted trajectories yet still performs well in actual environment interactions after unlearning. The replication package and agent parameters are available online. | [
"['Chen Gong' 'Kecen Li' 'Jin Yao' 'Tianhao Wang']"
]
|
null | null | 2404.12534 | null | null | http://arxiv.org/pdf/2404.12534v1 | 2024-04-18T22:54:08Z | 2024-04-18T22:54:08Z | Towards Large Language Models as Copilots for Theorem Proving in Lean | Theorem proving is an important challenge for large language models (LLMs), as formal proofs can be checked rigorously by proof assistants such as Lean, leaving no room for hallucination. Existing LLM-based provers try to prove theorems in a fully autonomous mode without human intervention. In this mode, they struggle with novel and challenging theorems, for which human insights may be critical. In this paper, we explore LLMs as copilots that assist humans in proving theorems. We introduce Lean Copilot, a framework for running LLM inference in Lean. It enables programmers to build various LLM-based proof automation tools that integrate seamlessly into the workflow of Lean users. Using Lean Copilot, we build tools for suggesting proof steps (tactic suggestion), completing intermediate proof goals (proof search), and selecting relevant premises (premise selection) using LLMs. Users can use our pretrained models or bring their own ones that run either locally (with or without GPUs) or on the cloud. Experimental results demonstrate the effectiveness of our method in assisting humans and automating theorem proving process compared to existing rule-based proof automation in Lean. We open source all codes under a permissive MIT license to facilitate further research. | [
"['Peiyang Song' 'Kaiyu Yang' 'Anima Anandkumar']"
]
|
null | null | 2404.12535 | null | null | http://arxiv.org/pdf/2404.12535v1 | 2024-04-18T22:56:57Z | 2024-04-18T22:56:57Z | HalluciBot: Is There No Such Thing as a Bad Question? | Hallucination continues to be one of the most critical challenges in the institutional adoption journey of Large Language Models (LLMs). In this context, an overwhelming number of studies have focused on analyzing the post-generation phase - refining outputs via feedback, analyzing logit output values, or deriving clues via the outputs' artifacts. We propose HalluciBot, a model that predicts the probability of hallucination $\textbf{before generation}$, for any query imposed to an LLM. In essence, HalluciBot does not invoke any generation during inference. To derive empirical evidence for HalluciBot, we employ a Multi-Agent Monte Carlo Simulation using a Query Perturbator to craft $n$ variations per query at train time. The construction of our Query Perturbator is motivated by our introduction of a new definition of hallucination - $\textit{truthful hallucination}$. Our training methodology generated 2,219,022 estimates for a training corpus of 369,837 queries, spanning 13 diverse datasets and 3 question-answering scenarios. HalluciBot predicts both binary and multi-class probabilities of hallucination, enabling a means to judge the query's quality with regards to its propensity to hallucinate. Therefore, HalluciBot paves the way to revise or cancel a query before generation and the ensuing computational waste. Moreover, it provides a lucid means to measure user accountability for hallucinatory queries. | [
"['William Watson' 'Nicole Cho']"
]
|
null | null | 2404.12538 | null | null | http://arxiv.org/pdf/2404.12538v2 | 2024-04-30T03:46:28Z | 2024-04-18T23:12:46Z | TrACT: A Training Dynamics Aware Contrastive Learning Framework for
Long-tail Trajectory Prediction | As a safety critical task, autonomous driving requires accurate predictions of road users' future trajectories for safe motion planning, particularly under challenging conditions. Yet, many recent deep learning methods suffer from a degraded performance on the challenging scenarios, mainly because these scenarios appear less frequently in the training data. To address such a long-tail issue, existing methods force challenging scenarios closer together in the feature space during training to trigger information sharing among them for more robust learning. These methods, however, primarily rely on the motion patterns to characterize scenarios, omitting more informative contextual information, such as interactions and scene layout. We argue that exploiting such information not only improves prediction accuracy but also scene compliance of the generated trajectories. In this paper, we propose to incorporate richer training dynamics information into a prototypical contrastive learning framework. More specifically, we propose a two-stage process. First, we generate rich contextual features using a baseline encoder-decoder framework. These features are split into clusters based on the model's output errors, using the training dynamics information, and a prototype is computed within each cluster. Second, we retrain the model using the prototypes in a contrastive learning framework. We conduct empirical evaluations of our approach using two large-scale naturalistic datasets and show that our method achieves state-of-the-art performance by improving accuracy and scene compliance on the long-tail samples. Furthermore, we perform experiments on a subset of the clusters to highlight the additional benefit of our approach in reducing training bias. | [
"['Junrui Zhang' 'Mozhgan Pourkeshavarz' 'Amir Rasouli']"
]
|
null | null | 2404.12544 | null | null | http://arxiv.org/pdf/2404.12544v1 | 2024-04-18T23:40:42Z | 2024-04-18T23:40:42Z | Beyond development: Challenges in deploying machine learning models for
structural engineering applications | Machine learning (ML)-based solutions are rapidly changing the landscape of many fields, including structural engineering. Despite their promising performance, these approaches are usually only demonstrated as proof-of-concept in structural engineering, and are rarely deployed for real-world applications. This paper aims to illustrate the challenges of developing ML models suitable for deployment through two illustrative examples. Among various pitfalls, the presented discussion focuses on model overfitting and underspecification, training data representativeness, variable omission bias, and cross-validation. The results highlight the importance of implementing rigorous model validation techniques through adaptive sampling, careful physics-informed feature selection, and considerations of both model complexity and generalizability. | [
"['Mohsen Zaker Esteghamati' 'Brennan Bean' 'Henry V. Burton' 'M. Z. Naser']"
]
|
null | null | 2404.12554 | null | null | http://arxiv.org/pdf/2404.12554v1 | 2024-04-19T00:17:35Z | 2024-04-19T00:17:35Z | Learning Stable and Passive Neural Differential Equations | In this paper, we introduce a novel class of neural differential equations that are intrinsically Lyapunov stable, exponentially stable, or passive. We take a recently proposed Polyak-Lojasiewicz network (PLNet) as a Lyapunov function and then parameterize the vector field as the descent directions of the Lyapunov function. The resulting models have the same structure as general Hamiltonian dynamics, where the Hamiltonian is lower- and upper-bounded by quadratic functions. Moreover, it is also positive definite w.r.t. either a known or learnable equilibrium. We illustrate the effectiveness of the proposed model on a damped double pendulum system. | [
"['Jing Cheng' 'Ruigang Wang' 'Ian R. Manchester']"
]
|
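Parameterizing a vector field as a descent direction of a learned positive-definite function is the key construction in the preceding entry. The PyTorch sketch below uses a simple quadratic-plus-MLP energy as a stand-in for PLNet and defines f(x) = -∇V(x), which makes V non-increasing along trajectories by construction; it is an illustration of the general idea, not the paper's model.

```python
import torch
from torch import nn

class DescentVectorField(nn.Module):
    """Vector field f(x) = -grad V(x) for a learned positive-definite V."""

    def __init__(self, dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, 1))

    def energy(self, x):
        # Quadratic term keeps V positive definite about x = 0; subtracting the MLP's
        # value at 0 ensures V(0) = 0 while the squared residual adds learned shape.
        residual = self.net(x) - self.net(torch.zeros_like(x))
        return 0.5 * (x * x).sum(dim=-1, keepdim=True) + residual.pow(2)

    def forward(self, x):
        if not x.requires_grad:
            x = x.detach().requires_grad_(True)
        V = self.energy(x).sum()
        (grad_V,) = torch.autograd.grad(V, x, create_graph=True)
        return -grad_V        # along trajectories, dV/dt = -||grad V||^2 <= 0

if __name__ == "__main__":
    f = DescentVectorField(dim=2)
    x = torch.tensor([[1.0, -2.0]])
    print(f(x))               # points "downhill" on V
```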
null | null | 2404.12569 | null | null | http://arxiv.org/pdf/2404.12569v1 | 2024-04-19T01:36:50Z | 2024-04-19T01:36:50Z | Multi-View Subgraph Neural Networks: Self-Supervised Learning with
Scarce Labeled Data | While graph neural networks (GNNs) have become the de-facto standard for graph-based node classification, they impose a strong assumption on the availability of sufficient labeled samples. This assumption restricts the classification performance of prevailing GNNs on many real-world applications suffering from low-data regimes. Specifically, features extracted from scarce labeled nodes could not provide sufficient supervision for the unlabeled samples, leading to severe over-fitting. In this work, we point out that leveraging subgraphs to capture long-range dependencies can augment the representation of a node with homophily properties, thus alleviating the low-data regime. However, prior works leveraging subgraphs fail to capture the long-range dependencies among nodes. To this end, we present a novel self-supervised learning framework, called multi-view subgraph neural networks (Muse), for handling long-range dependencies. In particular, we propose an information theory-based identification mechanism to identify two types of subgraphs from the views of input space and latent space, respectively. The former is to capture the local structure of the graph, while the latter captures the long-range dependencies among nodes. By fusing these two views of subgraphs, the learned representations can preserve the topological properties of the graph at large, including the local structure and long-range dependencies, thus maximizing their expressiveness for downstream node classification tasks. Experimental results show that Muse outperforms the alternative methods on node classification tasks with limited labeled data. | [
"['Zhenzhong Wang' 'Qingyuan Zeng' 'Wanyu Lin' 'Min Jiang' 'Kay Chen Tan']"
]
|
null | null | 2404.12575 | null | null | http://arxiv.org/pdf/2404.12575v1 | 2024-04-19T01:48:21Z | 2024-04-19T01:48:21Z | On the use of adversarial validation for quantifying dissimilarity in
geospatial machine learning prediction | Recent geospatial machine learning studies have shown that the results of model evaluation via cross-validation (CV) are strongly affected by the dissimilarity between the sample data and the prediction locations. In this paper, we propose a method to quantify such a dissimilarity in the interval 0 to 100%, from the perspective of the data feature space. The proposed method is based on adversarial validation, an approach that checks whether sample data and prediction locations can be separated with a binary classifier. To study the effectiveness and generality of our method, we tested it in a series of experiments based on both synthetic and real datasets with gradually increasing dissimilarities. Results show that the proposed method can successfully quantify dissimilarity across the entire range of values. In addition, we studied how dissimilarity affects CV evaluations by comparing the results of random CV and of two spatial CV methods, namely block and spatial+ CV. Our results showed that CV evaluations follow similar patterns in all datasets and predictions: when dissimilarity is low (usually lower than 30%), random CV provides the most accurate evaluation results. As dissimilarity increases, spatial CV methods, especially spatial+ CV, become increasingly accurate and even outperform random CV. When dissimilarity is high (>=90%), no CV method provides accurate evaluations. These results show the importance of considering feature space dissimilarity when working with geospatial machine learning predictions, and can help researchers and practitioners select more suitable CV methods for evaluating their predictions. | [
"['Yanwen Wang' 'Mahdi Khodadadzadeh' 'Raul Zurita-Milla']"
]
|
null | null | 2404.12580 | null | null | http://arxiv.org/pdf/2404.12580v1 | 2024-04-19T02:11:41Z | 2024-04-19T02:11:41Z | iTBLS: A Dataset of Interactive Conversations Over Tabular Information | This paper introduces Interactive Tables (iTBLS), a dataset of interactive conversations situated in tables from scientific articles. This dataset is designed to facilitate human-AI collaborative problem-solving through AI-powered multi-task tabular capabilities. In contrast to prior work that models interactions as factoid QA or procedure synthesis, iTBLS broadens the scope of interactions to include mathematical reasoning, natural language manipulation, and expansion of existing tables from natural language conversation by delineating interactions into one of three tasks: interpretation, modification, or generation. Additionally, the paper presents a suite of baseline approaches to iTBLS, utilizing zero-shot prompting and parameter-efficient fine-tuning for different computing situations. We also introduce a novel multi-step approach and show how it can be leveraged in conjunction with parameter-efficient fine-tuning to achieve the state-of-the-art on iTBLS; outperforming standard parameter-efficient fine-tuning by up to 15% on interpretation, 18% on modification, and 38% on generation. | [
"['Anirudh Sundar' 'Christopher Richardson' 'William Gay' 'Larry Heck']"
]
|
null | null | 2404.12586 | null | null | http://arxiv.org/pdf/2404.12586v1 | 2024-04-19T02:31:34Z | 2024-04-19T02:31:34Z | Risk Bounds for Mixture Density Estimation on Compact Domains via the
$h$-Lifted Kullback--Leibler Divergence | We consider the problem of estimating probability density functions based on sample data, using a finite mixture of densities from some component class. To this end, we introduce the $h$-lifted Kullback--Leibler (KL) divergence as a generalization of the standard KL divergence and a criterion for conducting risk minimization. Under a compact support assumption, we prove an $\mathcal{O}(1/\sqrt{n})$ bound on the expected estimation error when using the $h$-lifted KL divergence, which extends the results of Rakhlin et al. (2005, ESAIM: Probability and Statistics, Vol. 9) and Li and Barron (1999, Advances in Neural Information Processing Systems, Vol. 12) to permit the risk bounding of density functions that are not strictly positive. We develop a procedure for the computation of the corresponding maximum $h$-lifted likelihood estimators ($h$-MLLEs) using the Majorization-Maximization framework and provide experimental results in support of our theoretical bounds. | [
"['Mark Chiu Chong' 'Hien Duy Nguyen' 'TrungTin Nguyen']"
]
|
null | null | 2404.12588 | null | null | http://arxiv.org/pdf/2404.12588v1 | 2024-04-19T02:33:23Z | 2024-04-19T02:33:23Z | Cross-Modal Adapter: Parameter-Efficient Transfer Learning Approach for
Vision-Language Models | Adapter-based parameter-efficient transfer learning has achieved exciting results in vision-language models. Traditional adapter methods often require training or fine-tuning, facing challenges such as insufficient samples or resource limitations. While some methods overcome the need for training by leveraging image modality cache and retrieval, they overlook the text modality's importance and cross-modal cues for the efficient adaptation of parameters in visual-language models. This work introduces a cross-modal parameter-efficient approach named XMAdapter. XMAdapter establishes cache models for both text and image modalities. It then leverages retrieval through visual-language bimodal information to gather clues for inference. By dynamically adjusting the affinity ratio, it achieves cross-modal fusion, decoupling different modal similarities to assess their respective contributions. Additionally, it explores hard samples based on differences in cross-modal affinity and enhances model performance through adaptive adjustment of sample learning intensity. Extensive experimental results on benchmark datasets demonstrate that XMAdapter outperforms previous adapter-based methods significantly regarding accuracy, generalization, and efficiency. | [
"['Juncheng Yang' 'Zuchao Li' 'Shuai Xie' 'Weiping Zhu' 'Wei Yu'\n 'Shijun Li']"
]
|
null | null | 2404.12594 | null | null | http://arxiv.org/pdf/2404.12594v1 | 2024-04-19T02:52:56Z | 2024-04-19T02:52:56Z | Random Network Distillation Based Deep Reinforcement Learning for AGV
Path Planning | With the flourishing development of intelligent warehousing systems, Automated Guided Vehicle (AGV) technology has experienced rapid growth. Within intelligent warehousing environments, an AGV is required to safely and rapidly plan an optimal path in complex and dynamic environments. Most research has studied deep reinforcement learning to address this challenge. However, in environments with sparse extrinsic rewards, these algorithms often converge slowly, learn inefficiently or fail to reach the target. Random Network Distillation (RND), as an exploration enhancement, can effectively improve the performance of proximal policy optimization, especially by providing additional intrinsic rewards for the AGV agent in sparse-reward environments. Moreover, most of the current research continues to use 2D grid mazes as experimental environments. These environments have insufficient complexity and limited action sets. To address this limitation, we present simulation environments for AGV path planning with continuous actions and positions, so that they are closer to realistic physical scenarios. Based on our experiments and a comprehensive analysis of the proposed method, the results demonstrate that our proposed method enables the AGV to complete path planning tasks with continuous actions more rapidly in our environments. A video of part of our experiments can be found at https://youtu.be/lwrY9YesGmw. | [
"['Huilin Yin' 'Shengkai Su' 'Yinjia Lin' 'Pengju Zhen' 'Karin Festl'\n 'Daniel Watzenig']"
]
|
null | null | 2404.12596 | null | null | http://arxiv.org/abs/2404.12596v1 | 2024-04-19T02:59:09Z | 2024-04-19T02:59:09Z | Parameter Efficient Diverse Paraphrase Generation Using Sequence-Level
Knowledge Distillation | Over the past year, the field of Natural Language Generation (NLG) has experienced an exponential surge, largely due to the introduction of Large Language Models (LLMs). These models have exhibited highly effective performance across a range of tasks within the Natural Language Processing and Generation domains. However, their application in domain-specific tasks, such as paraphrasing, presents significant challenges. The extensive number of parameters makes them difficult to operate on commercial hardware, and they require substantial time for inference, leading to high costs in a production setting. In this study, we tackle these obstacles by employing LLMs to develop three distinct models for the paraphrasing field, applying a method referred to as sequence-level knowledge distillation. These distilled models are capable of maintaining the quality of paraphrases generated by the LLM. They demonstrate faster inference times and the ability to generate diverse paraphrases of comparable quality. A notable characteristic of these models is their ability to exhibit syntactic diversity while also preserving lexical diversity, features previously uncommon due to existing data quality issues in datasets and not typically observed in neural-based approaches. Human evaluation shows that our models incur only a 4% drop in performance compared to the LLM teacher model used in the distillation process, despite being 1000 times smaller. This research provides a significant contribution to the NLG field, offering a more efficient and cost-effective solution for paraphrasing tasks. | [
"['Lasal Jayawardena' 'Prasan Yapa']"
]
|
null | null | 2404.12597 | null | null | http://arxiv.org/pdf/2404.12597v1 | 2024-04-19T03:04:06Z | 2024-04-19T03:04:06Z | The phase diagram of kernel interpolation in large dimensions | The generalization ability of kernel interpolation in large dimensions (i.e., $n \asymp d^{\gamma}$ for some $\gamma>0$) might be one of the most interesting problems in the recent renaissance of kernel regression, since it may help us understand the 'benign overfitting phenomenon' reported in the neural networks literature. Focusing on the inner product kernel on the sphere, we fully characterized the exact order of both the variance and bias of large-dimensional kernel interpolation under various source conditions $s\geq 0$. Consequently, we obtained the $(s,\gamma)$-phase diagram of large-dimensional kernel interpolation, i.e., we determined the regions in $(s,\gamma)$-plane where the kernel interpolation is minimax optimal, sub-optimal and inconsistent. | [
"['Haobo Zhang' 'Weihao Lu' 'Qian Lin']"
]
|
null | null | 2404.12598 | null | null | http://arxiv.org/pdf/2404.12598v1 | 2024-04-19T03:05:41Z | 2024-04-19T03:05:41Z | Continuous-time Risk-sensitive Reinforcement Learning via Quadratic
Variation Penalty | This paper studies continuous-time risk-sensitive reinforcement learning (RL) under the entropy-regularized, exploratory diffusion process formulation with the exponential-form objective. The risk-sensitive objective arises either as the agent's risk attitude or as a distributionally robust approach against model uncertainty. Owing to the martingale perspective in Jia and Zhou (2023), the risk-sensitive RL problem is shown to be equivalent to ensuring the martingale property of a process involving both the value function and the q-function, augmented by an additional penalty term: the quadratic variation of the value process, capturing the variability of the value-to-go along the trajectory. This characterization allows for the straightforward adaptation of existing RL algorithms developed for non-risk-sensitive scenarios to incorporate risk sensitivity by adding the realized variance of the value process. Additionally, I highlight that the conventional policy gradient representation is inadequate for risk-sensitive problems due to the nonlinear nature of quadratic variation; however, q-learning offers a solution and extends to infinite-horizon settings. Finally, I prove the convergence of the proposed algorithm for Merton's investment problem and quantify the impact of the temperature parameter on the behavior of the learning procedure. I also conduct simulation experiments to demonstrate how risk-sensitive RL improves the finite-sample performance in the linear-quadratic control problem. | [
"['Yanwei Jia']"
]
|
null | null | 2404.12599 | null | null | http://arxiv.org/pdf/2404.12599v1 | 2024-04-19T03:06:50Z | 2024-04-19T03:06:50Z | QUTE: Quantifying Uncertainty in TinyML models with Early-exit-assisted
ensembles | Existing methods for uncertainty quantification incur massive memory and compute overhead, often requiring multiple models/inferences. Hence they are impractical on ultra-low-power KB-sized TinyML devices. To reduce overhead, prior works have proposed the use of early-exit networks as ensembles to quantify uncertainty in a single forward-pass. However, they still have a prohibitive cost for tinyML. To address these challenges, we propose QUTE, a novel resource-efficient early-exit-assisted ensemble architecture optimized for tinyML models. QUTE adds additional output blocks at the final exit of the base network and distills the knowledge of early-exits into these blocks to create a diverse and lightweight ensemble architecture. Our results show that QUTE outperforms popular prior works, and improves the quality of uncertainty estimates by 6% with 3.1x lower model size on average compared to the most relevant prior work. Furthermore, we demonstrate that QUTE is also effective in detecting co-variate shifted and out-of-distribution inputs, and shows competitive performance relative to G-ODIN, a state-of-the-art generalized OOD detector. | [
"['Nikhil P Ghanathe' 'Steve Wilton']"
]
|
null | null | 2404.12602 | null | null | http://arxiv.org/pdf/2404.12602v1 | 2024-04-19T03:12:17Z | 2024-04-19T03:12:17Z | A visualization method for data domain changes in CNN networks and the
optimization method for selecting thresholds in classification tasks | In recent years, Face Anti-Spoofing (FAS) has played a crucial role in preserving the security of face recognition technology. With the rise of counterfeit face generation techniques, the challenge posed by digitally edited faces to face anti-spoofing is escalating. Existing FAS technologies primarily focus on intercepting physically forged faces and lack a robust solution for cross-domain FAS challenges. Moreover, determining an appropriate threshold to achieve optimal deployment results remains an issue for intra-domain FAS. To address these issues, we propose a visualization method that intuitively reflects the training outcomes of models by visualizing the prediction results on datasets. Additionally, we demonstrate that employing data augmentation techniques, such as downsampling and Gaussian blur, can effectively enhance performance on cross-domain tasks. Building upon our data visualization approach, we also introduce a methodology for setting threshold values based on the distribution of the training dataset. Ultimately, our methods secured us second place in both the Unified Physical-Digital Face Attack Detection competition and the Snapshot Spectral Imaging Face Anti-spoofing contest. The training code is available at https://github.com/SeaRecluse/CVPRW2024. | [
"['Minzhe Huang' 'Changwei Nie' 'Weihong Zhong']"
]
|
null | null | 2404.12612 | null | null | http://arxiv.org/pdf/2404.12612v1 | 2024-04-19T03:51:46Z | 2024-04-19T03:51:46Z | SA-Attack: Speed-adaptive stealthy adversarial attack on trajectory
prediction | Trajectory prediction is critical for the safe planning and navigation of automated vehicles. The trajectory prediction models based on the neural networks are vulnerable to adversarial attacks. Previous attack methods have achieved high attack success rates but overlook the adaptability to realistic scenarios and the concealment of the deceits. To address this problem, we propose a speed-adaptive stealthy adversarial attack method named SA-Attack. This method searches the sensitive region of trajectory prediction models and generates the adversarial trajectories by using the vehicle-following method and incorporating information about forthcoming trajectories. Our method has the ability to adapt to different speed scenarios by reconstructing the trajectory from scratch. Fusing future trajectory trends and curvature constraints can guarantee the smoothness of adversarial trajectories, further ensuring the stealthiness of attacks. The empirical study on the datasets of nuScenes and Apolloscape demonstrates the attack performance of our proposed method. Finally, we also demonstrate the adaptability and stealthiness of SA-Attack for different speed scenarios. Our code is available at the repository: https://github.com/eclipse-bot/SA-Attack. | [
"['Huilin Yin' 'Jiaxiang Li' 'Pengju Zhen' 'Jun Yan']"
]
|
null | null | 2404.12613 | null | null | http://arxiv.org/pdf/2404.12613v1 | 2024-04-19T03:53:50Z | 2024-04-19T03:53:50Z | A Fourier Approach to the Parameter Estimation Problem for
One-dimensional Gaussian Mixture Models | The purpose of this paper is twofold. First, we propose a novel algorithm for estimating parameters in one-dimensional Gaussian mixture models (GMMs). The algorithm takes advantage of the Hankel structure inherent in the Fourier data obtained from independent and identically distributed (i.i.d) samples of the mixture. For GMMs with a unified variance, a singular value ratio functional using the Fourier data is introduced and used to resolve the variance and component number simultaneously. The consistency of the estimator is derived. Compared to classic algorithms such as the method of moments and the maximum likelihood method, the proposed algorithm does not require prior knowledge of the number of Gaussian components or good initial guesses. Numerical experiments demonstrate its superior performance in estimation accuracy and computational cost. Second, we reveal that there exists a fundamental limit to the problem of estimating the number of Gaussian components or model order in the mixture model if the number of i.i.d samples is finite. For the case of a single variance, we show that the model order can be successfully estimated only if the minimum separation distance between the component means exceeds a certain threshold value and can fail if below. We derive a lower bound for this threshold value, referred to as the computational resolution limit, in terms of the number of i.i.d samples, the variance, and the number of Gaussian components. Numerical experiments confirm this phase transition phenomenon in estimating the model order. Moreover, we demonstrate that our algorithm achieves better scores in likelihood, AIC, and BIC when compared to the EM algorithm. | [
"['Xinyu Liu' 'Hai Zhang']"
]
|
null | null | 2404.12618 | null | null | http://arxiv.org/pdf/2404.12618v1 | 2024-04-19T04:02:50Z | 2024-04-19T04:02:50Z | CORI: CJKV Benchmark with Romanization Integration -- A step towards
Cross-lingual Transfer Beyond Textual Scripts | Naively assuming English as a source language may hinder cross-lingual transfer for many languages by failing to consider the importance of language contact. Some languages are more well-connected than others, and target languages can benefit from transferring from closely related languages; for many languages, the set of closely related languages does not include English. In this work, we study the impact of source language for cross-lingual transfer, demonstrating the importance of selecting source languages that have high contact with the target language. We also construct a novel benchmark dataset for close contact Chinese-Japanese-Korean-Vietnamese (CJKV) languages to further encourage in-depth studies of language contact. To comprehensively capture contact between these languages, we propose to integrate Romanized transcription beyond textual scripts via Contrastive Learning objectives, leading to enhanced cross-lingual representations and effective zero-shot cross-lingual transfer. | [
"['Hoang H. Nguyen' 'Chenwei Zhang' 'Ye Liu' 'Natalie Parde'\n 'Eugene Rohrbaugh' 'Philip S. Yu']"
]
|
null | null | 2404.12623 | null | null | http://arxiv.org/pdf/2404.12623v1 | 2024-04-19T04:43:01Z | 2024-04-19T04:43:01Z | End-to-End Verifiable Decentralized Federated Learning | Verifiable decentralized federated learning (FL) systems combining blockchains and zero-knowledge proofs (ZKP) make the computational integrity of local learning and global aggregation verifiable across workers. However, they are not end-to-end: data can still be corrupted prior to the learning. In this paper, we propose a verifiable decentralized FL system for end-to-end integrity and authenticity of data and computation extending verifiability to the data source. Addressing an inherent conflict of confidentiality and transparency, we introduce a two-step proving and verification (2PV) method that we apply to central system procedures: a registration workflow that enables non-disclosing verification of device certificates and a learning workflow that extends existing blockchain and ZKP-based FL systems through non-disclosing data authenticity proofs. Our evaluation on a prototypical implementation demonstrates the technical feasibility with only marginal overheads to state-of-the-art solutions. | [
"['Chaehyeon Lee' 'Jonathan Heiss' 'Stefan Tai' 'James Won-Ki Hong']"
]
|
null | null | 2404.12634 | null | null | http://arxiv.org/pdf/2404.12634v1 | 2024-04-19T05:31:37Z | 2024-04-19T05:31:37Z | Transformer-Based Classification Outcome Prediction for Multimodal
Stroke Treatment | This study proposes a multi-modal fusion framework, Multitrans, based on the Transformer architecture and the self-attention mechanism. This architecture combines non-contrast computed tomography (NCCT) images and discharge diagnosis reports of patients undergoing stroke treatment, using a variety of Transformer-based methods to predict the functional outcomes of stroke treatment. The results show that the performance of single-modal text classification is significantly better than that of single-modal image classification, while the multi-modal combination outperforms any single modality. Although the Transformer model performs worse on imaging data alone, when combined with clinical meta-diagnostic information, the two modalities learn complementary information and contribute to accurately predicting stroke treatment effects. | [
"['Danqing Ma' 'Meng Wang' 'Ao Xiang' 'Zongqing Qi' 'Qin Yang']"
]
|
null | null | 2404.12635 | null | null | http://arxiv.org/pdf/2404.12635v1 | 2024-04-19T05:32:37Z | 2024-04-19T05:32:37Z | AED-PADA: Improving Generalizability of Adversarial Example Detection via
Principal Adversarial Domain Adaptation | Adversarial example detection, which can be conveniently applied in many scenarios, is important in the area of adversarial defense. Unfortunately, existing detection methods suffer from poor generalization performance, because their training process usually relies on the examples generated from a single known adversarial attack and there exists a large discrepancy between the training and unseen testing adversarial examples. To address this issue, we propose a novel method, named Adversarial Example Detection via Principal Adversarial Domain Adaptation (AED-PADA). Specifically, our approach identifies the Principal Adversarial Domains (PADs), i.e., a combination of features of the adversarial examples from different attacks, which provides broad coverage of the entire adversarial feature space. Then, we pioneer the use of multi-source domain adaptation in adversarial example detection, with PADs as the source domains. Experiments demonstrate the superior generalization ability of our proposed AED-PADA. Note that this superiority is particularly achieved in challenging scenarios characterized by employing the minimal magnitude constraint for the perturbations. | [
"['Heqi Peng' 'Yunhong Wang' 'Ruijie Yang' 'Beichen Li' 'Rui Wang'\n 'Yuanfang Guo']"
]
|
null | null | 2404.12639 | null | null | http://arxiv.org/pdf/2404.12639v2 | 2024-05-03T12:43:37Z | 2024-04-19T05:45:43Z | Single-Task Continual Offline Reinforcement Learning | In this paper, we study the continual learning problem of single-task offline reinforcement learning. In the past, continual reinforcement learning has usually dealt only with multitasking, that is, learning multiple related or unrelated tasks in a row; once a task was learned, it was not relearned, but only used in subsequent processes. However, offline reinforcement learning tasks require continually learning multiple different datasets for the same task. Existing algorithms try to achieve the best results on each offline dataset they learn, so the skills the network acquired from earlier high-quality datasets are overwritten after it learns subsequent poor datasets. On the other hand, if too much emphasis is placed on stability, the network will fail to learn from a subsequent better dataset after learning a poor offline dataset, and the problem of insufficient plasticity and non-learning will occur. How to design a strategy that can always preserve the best performance for each state in the data that has been learned is a new challenge and the focus of this study. Therefore, this study proposes a new algorithm, called Ensemble Offline Reinforcement Learning Based on Experience Replay, which introduces multiple value networks to learn the same dataset and judges whether a strategy has been learned by the dispersion of the value networks, to improve the performance of the network in single-task offline reinforcement learning. | [
"['Sibo Gai' 'Donglin Wang']"
]
|
null | null | 2404.12648 | null | null | http://arxiv.org/pdf/2404.12648v1 | 2024-04-19T06:24:22Z | 2024-04-19T06:24:22Z | Sample-efficient Learning of Infinite-horizon Average-reward MDPs with
General Function Approximation | We study infinite-horizon average-reward Markov decision processes (AMDPs) in the context of general function approximation. Specifically, we propose a novel algorithmic framework named Local-fitted Optimization with OPtimism (LOOP), which incorporates both model-based and value-based incarnations. In particular, LOOP features a novel construction of confidence sets and a low-switching policy updating scheme, which are tailored to the average-reward and function approximation setting. Moreover, for AMDPs, we propose a novel complexity measure -- average-reward generalized eluder coefficient (AGEC) -- which captures the challenge of exploration in AMDPs with general function approximation. Such a complexity measure encompasses almost all previously known tractable AMDP models, such as linear AMDPs and linear mixture AMDPs, and also includes newly identified cases such as kernel AMDPs and AMDPs with Bellman eluder dimensions. Using AGEC, we prove that LOOP achieves a sublinear $\tilde{\mathcal{O}}(\mathrm{poly}(d, \mathrm{sp}(V^*)) \sqrt{T\beta})$ regret, where $d$ and $\beta$ correspond to AGEC and log-covering number of the hypothesis class respectively, $\mathrm{sp}(V^*)$ is the span of the optimal state bias function, $T$ denotes the number of steps, and $\tilde{\mathcal{O}}(\cdot)$ omits logarithmic factors. When specialized to concrete AMDP models, our regret bounds are comparable to those established by the existing algorithms designed specifically for these special cases. To the best of our knowledge, this paper presents the first comprehensive theoretical framework capable of handling nearly all AMDPs. | [
"['Jianliang He' 'Han Zhong' 'Zhuoran Yang']"
]
|
null | null | 2404.12650 | null | null | http://arxiv.org/pdf/2404.12650v1 | 2024-04-19T06:32:21Z | 2024-04-19T06:32:21Z | F2FLDM: Latent Diffusion Models with Histopathology Pre-Trained
Embeddings for Unpaired Frozen Section to FFPE Translation | The Frozen Section (FS) technique is a rapid and efficient method, taking only 15-30 minutes to prepare slides for pathologists' evaluation during surgery, enabling immediate decisions on further surgical interventions. However, FS process often introduces artifacts and distortions like folds and ice-crystal effects. In contrast, these artifacts and distortions are absent in the higher-quality formalin-fixed paraffin-embedded (FFPE) slides, which require 2-3 days to prepare. While Generative Adversarial Network (GAN)-based methods have been used to translate FS to FFPE images (F2F), they may leave morphological inaccuracies with remaining FS artifacts or introduce new artifacts, reducing the quality of these translations for clinical assessments. In this study, we benchmark recent generative models, focusing on GANs and Latent Diffusion Models (LDMs), to overcome these limitations. We introduce a novel approach that combines LDMs with Histopathology Pre-Trained Embeddings to enhance restoration of FS images. Our framework leverages LDMs conditioned by both text and pre-trained embeddings to learn meaningful features of FS and FFPE histopathology images. Through diffusion and denoising techniques, our approach not only preserves essential diagnostic attributes like color staining and tissue morphology but also proposes an embedding translation mechanism to better predict the targeted FFPE representation of input FS images. As a result, this work achieves a significant improvement in classification performance, with the Area Under the Curve rising from 81.99% to 94.64%, accompanied by an advantageous CaseFD. This work establishes a new benchmark for FS to FFPE image translation quality, promising enhanced reliability and accuracy in histopathology FS image analysis. Our work is available at https://minhmanho.github.io/f2f_ldm/. | [
"['Man M. Ho' 'Shikha Dubey' 'Yosep Chong' 'Beatrice Knudsen'\n 'Tolga Tasdizen']"
]
|
null | null | 2404.12652 | null | null | http://arxiv.org/pdf/2404.12652v1 | 2024-04-19T06:41:32Z | 2024-04-19T06:41:32Z | Pre-trained Vision-Language Models Learn Discoverable Visual Concepts | Do vision-language models (VLMs) pre-trained to caption an image of a "durian" learn visual concepts such as "brown" (color) and "spiky" (texture) at the same time? We aim to answer this question as visual concepts learned "for free" would enable wide applications such as neuro-symbolic reasoning or human-interpretable object classification. We assume that the visual concepts, if captured by pre-trained VLMs, can be extracted by their vision-language interface with text-based concept prompts. We observe that recent works prompting VLMs with concepts often differ in their strategies to define and evaluate the visual concepts, leading to conflicting conclusions. We propose a new concept definition strategy based on two observations: First, certain concept prompts include shortcuts that recognize correct concepts for wrong reasons; Second, multimodal information (e.g. visual discriminativeness, and textual knowledge) should be leveraged when selecting the concepts. Our proposed concept discovery and learning (CDL) framework is thus designed to identify a diverse list of generic visual concepts (e.g. "spiky" as opposed to "spiky durian"), which are ranked and selected based on visual and language mutual information. We carefully design quantitative and human evaluations of the discovered concepts on six diverse visual recognition datasets, which confirm that pre-trained VLMs do learn visual concepts that provide accurate and thorough descriptions for the recognized objects. All code and models are publicly released. | [
"['Yuan Zang' 'Tian Yun' 'Hao Tan' 'Trung Bui' 'Chen Sun']"
]
|
null | null | 2404.12667 | null | null | http://arxiv.org/pdf/2404.12667v1 | 2024-04-19T07:07:36Z | 2024-04-19T07:07:36Z | Detecting Out-Of-Distribution Earth Observation Images with Diffusion
Models | Earth Observation imagery can capture rare and unusual events, such as disasters and major landscape changes, whose visual appearance contrasts with the usual observations. Deep models trained on common remote sensing data will output drastically different features for these out-of-distribution samples, compared to those closer to their training dataset. Detecting them could therefore help anticipate changes in the observations, either geographical or environmental. In this work, we show that the reconstruction error of diffusion models can effectively serve as unsupervised out-of-distribution detectors for remote sensing images, using them as a plausibility score. Moreover, we introduce ODEED, a novel reconstruction-based scorer using the probability-flow ODE of diffusion models. We validate it experimentally on SpaceNet 8 with various scenarios, such as classical OOD detection with geographical shift and near-OOD setups: pre/post-flood and non-flooded/flooded image recognition. We show that our ODEED scorer significantly outperforms other diffusion-based and discriminative baselines on the more challenging near-OOD scenarios of flood image detection, where OOD images are close to the distribution tail. We aim to pave the way towards better use of generative models for anomaly detection in remote sensing. | [
"['Georges Le Bellier' 'Nicolas Audebert']"
]
|
null | null | 2404.12674 | null | null | http://arxiv.org/pdf/2404.12674v2 | 2024-04-27T07:59:21Z | 2024-04-19T07:20:33Z | Towards Universal Performance Modeling for Machine Learning Training on
Multi-GPU Platforms | Characterizing and predicting the training performance of modern machine learning (ML) workloads on compute systems with compute and communication spread between CPUs, GPUs, and network devices is not only the key to optimization and planning but also a complex goal to achieve. The primary challenges include the complexity of synchronization and load balancing between CPUs and GPUs, the variance in input data distribution, and the use of different communication devices and topologies (e.g., NVLink, PCIe, network cards) that connect multiple compute devices, coupled with the desire for flexible training configurations. Built on top of our prior work for single-GPU platforms, we address these challenges and enable multi-GPU performance modeling by incorporating (1) data-distribution-aware performance models for embedding table lookup, and (2) data movement prediction of communication collectives, into our upgraded performance modeling pipeline equipped with inter- and intra-rank synchronization for ML workloads trained on multi-GPU platforms. Beyond accurately predicting the per-iteration training time of DLRM models with random configurations with a geomean error of 5.21% on two multi-GPU platforms, our prediction pipeline generalizes well to other types of ML workloads, such as Transformer-based NLP models with a geomean error of 3.00%. Moreover, even without actually running ML workloads like DLRMs on the hardware, it is capable of generating insights such as quickly selecting the fastest embedding table sharding configuration (with a success rate of 85%). | [
"['Zhongyi Lin' 'Ning Sun' 'Pallab Bhattacharya' 'Xizhou Feng' 'Louis Feng'\n 'John D. Owens']"
]
|
null | null | 2404.12693 | null | null | http://arxiv.org/pdf/2404.12693v1 | 2024-04-19T07:47:23Z | 2024-04-19T07:47:23Z | Improving Chinese Character Representation with Formation Tree | Learning effective representations for Chinese characters presents unique challenges, primarily due to the vast number of characters and their continuous growth, which requires models to handle an expanding category space. Additionally, the inherent sparsity of character usage complicates the generalization of learned representations. Prior research has explored radical-based sequences to overcome these issues, achieving progress in recognizing unseen characters. However, these approaches fail to fully exploit the inherent tree structure of such sequences. To address these limitations and leverage established data properties, we propose Formation Tree-CLIP (FT-CLIP). This model utilizes formation trees to represent characters and incorporates a dedicated tree encoder, significantly improving performance in both seen and unseen character recognition tasks. We further introduce masking for both character images and tree nodes, enabling efficient and effective training. This approach accelerates training significantly (by a factor of 2 or more) while enhancing accuracy. Extensive experiments show that processing characters through formation trees aligns better with their inherent properties than direct sequential methods, significantly enhancing the generality and usability of the representations. | [
"['Yang Hong' 'Yinfei Li' 'Xiaojun Qiao' 'Rui Li' 'Junsong Zhang']"
]
|
null | null | 2404.12699 | null | null | http://arxiv.org/pdf/2404.12699v1 | 2024-04-19T08:07:26Z | 2024-04-19T08:07:26Z | SOPHON: Non-Fine-Tunable Learning to Restrain Task Transferability For
Pre-trained Models | Instead of building deep learning models from scratch, developers are more and more relying on adapting pre-trained models to their customized tasks. However, powerful pre-trained models may be misused for unethical or illegal tasks, e.g., privacy inference and unsafe content generation. In this paper, we introduce a pioneering learning paradigm, non-fine-tunable learning, which prevents the pre-trained model from being fine-tuned to indecent tasks while preserving its performance on the original task. To fulfill this goal, we propose SOPHON, a protection framework that reinforces a given pre-trained model to be resistant to being fine-tuned in pre-defined restricted domains. Nonetheless, this is challenging due to a diversity of complicated fine-tuning strategies that may be adopted by adversaries. Inspired by model-agnostic meta-learning, we overcome this difficulty by designing sophisticated fine-tuning simulation and fine-tuning evaluation algorithms. In addition, we carefully design the optimization process to entrap the pre-trained model within a hard-to-escape local optimum regarding restricted domains. We have conducted extensive experiments on two deep learning modes (classification and generation), seven restricted domains, and six model architectures to verify the effectiveness of SOPHON. Experiment results verify that fine-tuning SOPHON-protected models incurs an overhead comparable to or even greater than training from scratch. Furthermore, we confirm the robustness of SOPHON to three fine-tuning methods, five optimizers, various learning rates and batch sizes. SOPHON may help boost further investigations into safe and responsible AI. | [
"['Jiangyi Deng' 'Shengyuan Pang' 'Yanjiao Chen' 'Liangming Xia'\n 'Yijie Bai' 'Haiqin Weng' 'Wenyuan Xu']"
]
|