Schema (field: type):
categories: string
doi: string
id: string
year: float64
venue: string
link: string
updated: string
published: string
title: string
abstract: string
authors: list

Note: categories, doi, year, and venue are null for every record in this extract, so they are omitted from the entries below. Each entry lists id, link, updated, published, title, abstract, and authors.

id: 2402.13861
link: http://arxiv.org/abs/2402.13861v1
updated: 2024-02-21T15:10:20Z
published: 2024-02-21T15:10:20Z
title: Improving Efficiency of Iso-Surface Extraction on Implicit Neural Representations Using Uncertainty Propagation
Implicit neural representations (INRs) are widely used for scientific data reduction and visualization by modeling the function that maps a spatial location to a data value. Without any prior knowledge about the spatial distribution of values, we are forced to sample densely from INRs to perform visualization tasks like iso-surface extraction, which can be very computationally expensive. Recently, range analysis has shown promising results in improving the efficiency of geometric queries, such as ray casting and hierarchical mesh extraction, on INRs for 3D geometries by using arithmetic rules to bound the output range of the network within a spatial region. However, the analysis bounds are often too conservative for complex scientific data. In this paper, we present an improved technique for range analysis by revisiting the arithmetic rules and analyzing the probability distribution of the network output within a spatial region. We model this distribution efficiently as a Gaussian distribution by applying the central limit theorem. Excluding low probability values, we are able to tighten the output bounds, resulting in a more accurate estimation of the value range, and hence more accurate identification of iso-surface cells and more efficient iso-surface extraction on INRs. Our approach demonstrates superior performance in terms of the iso-surface extraction time on four datasets compared to the original range analysis method and can also be generalized to other geometric query tasks.
[ "['Haoyu Li' 'Han-Wei Shen']" ]

id: 2402.13867
link: http://arxiv.org/pdf/2402.13867v1
updated: 2024-02-21T15:19:09Z
published: 2024-02-21T15:19:09Z
title: RFI-DRUnet: Restoring dynamic spectra corrupted by radio frequency interference -- Application to pulsar observations
Radio frequency interference (RFI) has been an enduring concern in radio astronomy, particularly for observations of pulsars, which require high timing precision and data sensitivity. In most of the literature, RFI mitigation has been formulated as a detection task that consists of localizing possible RFI in dynamic spectra. This strategy inevitably leads to a potential loss of information, since parts of the signal identified as possibly RFI-corrupted are generally not considered in the subsequent data processing pipeline. Conversely, this work proposes to tackle RFI mitigation as a joint detection and restoration task that allows parts of the dynamic spectrum affected by RFI to be not only identified but also recovered. The proposed supervised method relies on a deep convolutional network whose architecture inherits the performance reached by a recent yet popular image-denoising network. To train this network, a whole simulation framework is built to generate large data sets according to physics-inspired and statistical models of the pulsar signals and of the RFI. The relevance of the proposed approach is quantitatively assessed through extensive experiments. In particular, the results show that the restored dynamic spectra are sufficiently reliable to estimate pulsar times-of-arrival with an accuracy close to the one that would be obtained from RFI-free signals.
[ "['Xiao Zhang' 'Ismaël Cognard' 'Nicolas Dobigeon']" ]

id: 2402.13870
link: http://arxiv.org/pdf/2402.13870v1
updated: 2024-02-21T15:23:21Z
published: 2024-02-21T15:23:21Z
title: Generative Probabilistic Time Series Forecasting and Applications in Grid Operations
Generative probabilistic forecasting produces future time series samples according to the conditional probability distribution given past time series observations. Such techniques are essential in risk-based decision-making and planning under uncertainty with broad applications in grid operations, including electricity price forecasting, risk-based economic dispatch, and stochastic optimizations. Inspired by Wiener and Kallianpur's innovation representation, we propose a weak innovation autoencoder architecture and a learning algorithm to extract independent and identically distributed innovation sequences from nonparametric stationary time series. We show that the weak innovation sequence is Bayesian sufficient, which makes the proposed weak innovation autoencoder a canonical architecture for generative probabilistic forecasting. The proposed technique is applied to forecasting highly volatile real-time electricity prices, demonstrating superior performance across multiple forecasting measures over leading probabilistic and point forecasting techniques.
[ "['Xinyi Wang' 'Lang Tong' 'Qing Zhao']" ]

id: 2402.13871
link: http://arxiv.org/pdf/2402.13871v1
updated: 2024-02-21T15:23:21Z
published: 2024-02-21T15:23:21Z
title: An Explainable Transformer-based Model for Phishing Email Detection: A Large Language Model Approach
Phishing email is a serious cyber threat that tries to deceive users by sending false emails with the intention of stealing confidential information or causing financial harm. Attackers, often posing as trustworthy entities, exploit technological advancements and sophistication to make detection and prevention of phishing more challenging. Despite extensive academic research, phishing detection remains an ongoing and formidable challenge in the cybersecurity landscape. Large Language Models (LLMs) and Masked Language Models (MLMs) possess immense potential to offer innovative solutions to address long-standing challenges. In this research paper, we present an optimized, fine-tuned transformer-based DistilBERT model designed for the detection of phishing emails. In the detection process, we work with a phishing email dataset and apply preprocessing techniques to clean the data and address the class imbalance issue. Through our experiments, we find that our model achieves high accuracy. Finally, we demonstrate our fine-tuned model using Explainable-AI (XAI) techniques such as Local Interpretable Model-Agnostic Explanations (LIME) and Transformer Interpret to explain how our model makes predictions in the context of text classification for phishing emails.
[ "['Mohammad Amaz Uddin' 'Iqbal H. Sarker']" ]

id: 2402.13891
link: http://arxiv.org/pdf/2402.13891v2
updated: 2024-06-03T11:40:32Z
published: 2024-02-21T16:02:14Z
title: Overcoming Saturation in Density Ratio Estimation by Iterated Regularization
Estimating the ratio of two probability densities from finitely many samples is a central task in machine learning and statistics. In this work, we show that a large class of kernel methods for density ratio estimation suffers from error saturation, which prevents algorithms from achieving fast error convergence rates on highly regular learning problems. To resolve saturation, we introduce iterated regularization in density ratio estimation to achieve fast error rates. Our methods outperform their non-iteratively regularized counterparts on benchmarks for density ratio estimation as well as on large-scale evaluations for importance-weighted ensembling of deep unsupervised domain adaptation models.
[ "['Lukas Gruber' 'Markus Holzleitner' 'Johannes Lehner' 'Sepp Hochreiter'\n 'Werner Zellinger']" ]

id: 2402.13897
link: http://arxiv.org/pdf/2402.13897v2
updated: 2024-03-14T00:21:09Z
published: 2024-02-21T16:09:25Z
title: Science Checker Reloaded: A Bidirectional Paradigm for Transparency and Logical Reasoning
Information retrieval is a rapidly evolving field. However, it still faces significant limitations when handling the vast amounts of scientific and industrial information, such as semantic divergence and vocabulary gaps in sparse retrieval, low precision and lack of interpretability in semantic search, or hallucination and outdated information in generative models. In this paper, we introduce a two-block approach to tackle these hurdles for long documents. The first block enhances language understanding in sparse retrieval by query expansion to retrieve relevant documents. The second block deepens the result by providing comprehensive and informative answers to complex questions using only the information spread in the long document, enabling bidirectional engagement. At various stages of the pipeline, intermediate results are presented to users to facilitate understanding of the system's reasoning. We believe this bidirectional approach brings significant advancements in terms of transparency, logical thinking, and comprehensive understanding in the field of scientific information retrieval.
[ "['Loïc Rakotoson' 'Sylvain Massip' 'Fréjus A. A. Laleye']" ]

id: 2402.13901
link: http://arxiv.org/pdf/2402.13901v2
updated: 2024-05-30T21:18:01Z
published: 2024-02-21T16:11:47Z
title: Non-asymptotic Convergence of Discrete-time Diffusion Models: New Approach and Improved Rate
The denoising diffusion model has recently emerged as a powerful generative technique that converts noise into data. While there are many studies providing theoretical guarantees for diffusion processes based on discretized stochastic differential equation (D-SDE), many generative samplers in real applications directly employ a discrete-time (DT) diffusion process. However, there are very few studies analyzing these DT processes, e.g., convergence for DT diffusion processes has been obtained only for distributions with bounded support. In this paper, we establish the convergence guarantee for substantially larger classes of distributions under DT diffusion processes and further improve the convergence rate for distributions with bounded support. In particular, we first establish the convergence rates for both smooth and general (possibly non-smooth) distributions having a finite second moment. We then specialize our results to a number of interesting classes of distributions with explicit parameter dependencies, including distributions with Lipschitz scores, Gaussian mixture distributions, and any distributions with early-stopping. We further propose a novel accelerated sampler and show that it improves the convergence rates of the corresponding regular sampler by orders of magnitude with respect to all system parameters. Our study features a novel analytical technique that constructs a tilting factor representation of the convergence error and exploits Tweedie's formula for handling Taylor expansion power terms.
[ "['Yuchen Liang' 'Peizhong Ju' 'Yingbin Liang' 'Ness Shroff']" ]

id: 2402.13903
link: http://arxiv.org/pdf/2402.13903v2
updated: 2024-06-07T14:31:14Z
published: 2024-02-21T16:13:49Z
title: Dealing with unbounded gradients in stochastic saddle-point optimization
We study the performance of stochastic first-order methods for finding saddle points of convex-concave functions. A notorious challenge faced by such methods is that the gradients can grow arbitrarily large during optimization, which may result in instability and divergence. In this paper, we propose a simple and effective regularization technique that stabilizes the iterates and yields meaningful performance guarantees even if the domain and the gradient noise scales linearly with the size of the iterates (and is thus potentially unbounded). Besides providing a set of general results, we also apply our algorithm to a specific problem in reinforcement learning, where it leads to performance guarantees for finding near-optimal policies in an average-reward MDP without prior knowledge of the bias span.
[ "['Gergely Neu' 'Nneka Okolo']" ]

id: 2402.13911
link: http://arxiv.org/pdf/2402.13911v1
updated: 2024-02-21T16:26:59Z
published: 2024-02-21T16:26:59Z
title: Replication Study: Enhancing Hydrological Modeling with Physics-Guided Machine Learning
Current hydrological modeling methods combine data-driven Machine Learning (ML) algorithms and traditional physics-based models to address their respective limitations: incorrect parameter estimates from rigid physics-based models, and the neglect of physical process constraints by ML algorithms. Despite the accuracy of ML in outcome prediction, the integration of scientific knowledge is crucial for reliable predictions. This study introduces a Physics Informed Machine Learning (PIML) model, which merges the process understanding of conceptual hydrological models with the predictive efficiency of ML algorithms. Applied to the Anandapur sub-catchment, the PIML model demonstrates superior performance in forecasting monthly streamflow and actual evapotranspiration over both standalone conceptual models and ML algorithms, ensuring physical consistency of the outputs. This study replicates the methodologies of Bhasme, P., Vagadiya, J., & Bhatia, U. (2022) from their pivotal work on Physics Informed Machine Learning for hydrological processes, utilizing their shared code and datasets to further explore the predictive capabilities in hydrological modeling.
[ "['Mostafa Esmaeilzadeh' 'Melika Amirzadeh']" ]

id: 2402.13914
link: http://arxiv.org/pdf/2402.13914v2
updated: 2024-06-28T08:37:28Z
published: 2024-02-21T16:30:24Z
title: Position: Explain to Question not to Justify
Explainable Artificial Intelligence (XAI) is a young but very promising field of research. Unfortunately, the progress in this field is currently slowed down by divergent and incompatible goals. We separate various threads tangled within the area of XAI into two complementary cultures of human/value-oriented explanations (BLUE XAI) and model/validation-oriented explanations (RED XAI). This position paper argues that the area of RED XAI is currently under-explored, i.e., more methods for explainability are desperately needed to question models (e.g., extract knowledge from well-performing models as well as spotting and fixing bugs in faulty models), and the area of RED XAI hides great opportunities and potential for important research necessary to ensure the safety of AI systems. We conclude this paper by presenting promising challenges in this area.
[ "['Przemyslaw Biecek' 'Wojciech Samek']" ]

id: 2402.13916
link: http://arxiv.org/pdf/2402.13916v1
updated: 2024-02-21T16:31:45Z
published: 2024-02-21T16:31:45Z
title: Bias correction of wind power forecasts with SCADA data and continuous learning
Wind energy plays a critical role in the transition towards renewable energy sources. However, the uncertainty and variability of wind can impede its full potential and the necessary growth of wind power capacity. To mitigate these challenges, wind power forecasting methods are employed for applications in power management, energy trading, or maintenance scheduling. In this work, we present, evaluate, and compare four machine learning-based wind power forecasting models. Our models correct and improve 48-hour forecasts extracted from a numerical weather prediction (NWP) model. The models are evaluated on datasets from a wind park comprising 65 wind turbines. The best improvement in forecasting error and mean bias was achieved by a convolutional neural network, reducing the average NRMSE down to 22%, coupled with a significant reduction in mean bias, compared to a NRMSE of 35% from the strongly biased baseline model using uncorrected NWP forecasts. Our findings further indicate that changes to neural network architectures play a minor role in affecting the forecasting performance, and that future research should rather investigate changes in the model pipeline. Moreover, we introduce a continuous learning strategy, which is shown to achieve the highest forecasting performance improvements when new data is made available.
[ "['Stefan Jonas' 'Kevin Winter' 'Bernhard Brodbeck' 'Angela Meyer']" ]

id: 2402.13918
link: http://arxiv.org/pdf/2402.13918v3
updated: 2024-03-01T13:39:07Z
published: 2024-02-21T16:32:43Z
title: BenchCloudVision: A Benchmark Analysis of Deep Learning Approaches for Cloud Detection and Segmentation in Remote Sensing Imagery
Satellites equipped with optical sensors capture high-resolution imagery, providing valuable insights into various environmental phenomena. In recent years, there has been a surge of research focused on addressing challenges in remote sensing, ranging from water detection in diverse landscapes to the segmentation of mountainous terrains. Ongoing investigations aim to enhance the precision and efficiency of satellite imagery analysis. In particular, there is a growing emphasis on developing methodologies for the accurate detection of water bodies, snow, and clouds, which is important for environmental monitoring, resource management, and disaster response. Within this context, this paper focuses on cloud segmentation from remote sensing imagery. Accurate remote sensing data analysis can be challenging due to the presence of clouds in optical sensor-based applications. The quality of resulting products such as applications and research is directly impacted by cloud detection, which plays a key role in the remote sensing data processing pipeline. This paper examines seven cutting-edge semantic segmentation and detection algorithms applied to cloud identification, conducting a benchmark analysis to evaluate their architectural approaches and identify the best-performing ones. To increase the models' adaptability, critical elements including the type of imagery and the number of spectral bands used during training are analyzed. Additionally, this research aims to produce machine learning algorithms that can perform cloud segmentation using only a few spectral bands, including RGB and RGB-NIR combinations. The models' flexibility for a variety of applications and user scenarios is assessed by using imagery from Sentinel-2 and Landsat-8 as datasets. This benchmark can be reproduced using the material from this GitHub link: https://github.com/toelt-llc/cloud_segmentation_comparative.
[ "['Loddo Fabio' 'Dario Piga' 'Michelucci Umberto' 'El Ghazouali Safouane']" ]

id: 2402.13929
link: http://arxiv.org/pdf/2402.13929v3
updated: 2024-03-02T09:09:32Z
published: 2024-02-21T16:51:05Z
title: SDXL-Lightning: Progressive Adversarial Diffusion Distillation
We propose a diffusion distillation method that achieves new state-of-the-art in one-step/few-step 1024px text-to-image generation based on SDXL. Our method combines progressive and adversarial distillation to achieve a balance between quality and mode coverage. In this paper, we discuss the theoretical analysis, discriminator design, model formulation, and training techniques. We open-source our distilled SDXL-Lightning models both as LoRA and full UNet weights.
[ "['Shanchuan Lin' 'Anran Wang' 'Xiao Yang']" ]

id: 2402.13930
link: http://arxiv.org/abs/2402.13930v1
updated: 2024-02-21T16:52:26Z
published: 2024-02-21T16:52:26Z
title: Enhancing Reinforcement Learning Agents with Local Guides
This paper addresses the problem of integrating local guide policies into a Reinforcement Learning agent. For this, we show how to adapt existing algorithms to this setting before introducing a novel algorithm based on a noisy policy-switching procedure. This approach builds on a proper Approximate Policy Evaluation (APE) scheme to provide a perturbation that carefully leads the local guides towards better actions. We evaluated our method on a set of classical Reinforcement Learning problems, including safety-critical systems where the agent cannot enter some areas at the risk of triggering catastrophic consequences. In all the proposed environments, our agent proved to be efficient at leveraging those policies to improve the performance of any APE-based Reinforcement Learning algorithm, especially in its first learning stages.
[ "['Paul Daoudi' 'Bogdan Robu' 'Christophe Prieur' 'Ludovic Dos Santos'\n 'Merwan Barlier']" ]

id: 2402.13934
link: http://arxiv.org/pdf/2402.13934v1
updated: 2024-02-21T17:00:56Z
published: 2024-02-21T17:00:56Z
title: Do Efficient Transformers Really Save Computation?
As transformer-based language models are trained on increasingly large datasets and with vast numbers of parameters, finding more efficient alternatives to the standard Transformer has become very valuable. While many efficient Transformers and Transformer alternatives have been proposed, none provide theoretical guarantees that they are a suitable replacement for the standard Transformer. This makes it challenging to identify when to use a specific model and what directions to prioritize for further investigation. In this paper, we aim to understand the capabilities and limitations of efficient Transformers, specifically the Sparse Transformer and the Linear Transformer. We focus on their reasoning capability as exhibited by Chain-of-Thought (CoT) prompts and follow previous works to model them as Dynamic Programming (DP) problems. Our results show that while these models are expressive enough to solve general DP tasks, contrary to expectations, they require a model size that scales with the problem size. Nonetheless, we identify a class of DP problems for which these models can be more efficient than the standard Transformer. We confirm our theoretical results through experiments on representative DP tasks, adding to the understanding of efficient Transformers' practical strengths and weaknesses.
[ "['Kai Yang' 'Jan Ackermann' 'Zhenyu He' 'Guhao Feng' 'Bohang Zhang'\n 'Yunzhen Feng' 'Qiwei Ye' 'Di He' 'Liwei Wang']" ]

id: 2402.13937
link: http://arxiv.org/pdf/2402.13937v2
updated: 2024-05-21T17:10:59Z
published: 2024-02-21T17:05:27Z
title: Verifying message-passing neural networks via topology-based bounds tightening
Since graph neural networks (GNNs) are often vulnerable to attack, we need to know when we can trust them. We develop a computationally effective approach towards providing robust certificates for message-passing neural networks (MPNNs) using a Rectified Linear Unit (ReLU) activation function. Because our work builds on mixed-integer optimization, it encodes a wide variety of subproblems, for example it admits (i) both adding and removing edges, (ii) both global and local budgets, and (iii) both topological perturbations and feature modifications. Our key technology, topology-based bounds tightening, uses graph structure to tighten bounds. We also experiment with aggressive bounds tightening to dynamically change the optimization constraints by tightening variable bounds. To demonstrate the effectiveness of these strategies, we implement an extension to the open-source branch-and-cut solver SCIP. We test on both node and graph classification problems and consider topological attacks that both add and remove edges.
[ "['Christopher Hojny' 'Shiqiang Zhang' 'Juan S. Campos' 'Ruth Misener']" ]

id: 2402.13945
link: http://arxiv.org/pdf/2402.13945v1
updated: 2024-02-21T17:15:47Z
published: 2024-02-21T17:15:47Z
title: Probabilistic Neural Networks (PNNs) for Modeling Aleatoric Uncertainty in Scientific Machine Learning
This paper investigates the use of probabilistic neural networks (PNNs) to model aleatoric uncertainty, which refers to the inherent variability in the input-output relationships of a system, often characterized by unequal variance or heteroscedasticity. Unlike traditional neural networks that produce deterministic outputs, PNNs generate probability distributions for the target variable, allowing the determination of both predicted means and intervals in regression scenarios. Contributions of this paper include the development of a probabilistic distance metric to optimize PNN architecture, and the deployment of PNNs in controlled data sets as well as a practical material science case involving fiber-reinforced composites. The findings confirm that PNNs effectively model aleatoric uncertainty, proving to be more appropriate than the commonly employed Gaussian process regression for this purpose. Specifically, in a real-world scientific machine learning context, PNNs yield remarkably accurate output mean estimates with R-squared scores approaching 0.97, and their predicted intervals exhibit a high correlation coefficient of nearly 0.80, closely matching observed data intervals. Hence, this research contributes to the ongoing exploration of leveraging the sophisticated representational capacity of neural networks to delineate complex input-output relationships in scientific problems.
[ "['Farhad Pourkamali-Anaraki' 'Jamal F. Husseini' 'Scott E. Stapleton']" ]

id: 2402.13946
link: http://arxiv.org/pdf/2402.13946v2
updated: 2024-02-26T20:18:38Z
published: 2024-02-21T17:18:25Z
title: AttackGNN: Red-Teaming GNNs in Hardware Security Using Reinforcement Learning
Machine learning has shown great promise in addressing several critical hardware security problems. In particular, researchers have developed novel graph neural network (GNN)-based techniques for detecting intellectual property (IP) piracy, detecting hardware Trojans (HTs), and reverse engineering circuits, to name a few. These techniques have demonstrated outstanding accuracy and have received much attention in the community. However, since these techniques are used for security applications, it is imperative to evaluate them thoroughly and ensure they are robust and do not compromise the security of integrated circuits. In this work, we propose AttackGNN, the first red-team attack on GNN-based techniques in hardware security. To this end, we devise a novel reinforcement learning (RL) agent that generates adversarial examples, i.e., circuits, against the GNN-based techniques. We overcome three challenges related to effectiveness, scalability, and generality to devise a potent RL agent. We target five GNN-based techniques for four crucial classes of problems in hardware security: IP piracy, detecting/localizing HTs, reverse engineering, and hardware obfuscation. Through our approach, we craft circuits that fool all GNNs considered in this work. For instance, to evade IP piracy detection, we generate adversarial pirated circuits that fool the GNN-based defense into classifying our crafted circuits as not pirated. For attacking HT localization GNN, our attack generates HT-infested circuits that fool the defense on all tested circuits. We obtain a similar 100% success rate against GNNs for all classes of problems.
[ "['Vasudev Gohil' 'Satwik Patnaik' 'Dileep Kalathil'\n 'Jeyavijayan Rajendran']" ]

id: 2402.13957
link: http://arxiv.org/abs/2402.13957v2
updated: 2024-06-01T21:37:21Z
published: 2024-02-21T17:37:30Z
title: Advancing Audio Fingerprinting Accuracy Addressing Background Noise and Distortion Challenges
Audio fingerprinting, exemplified by pioneers like Shazam, has transformed digital audio recognition. However, existing systems struggle with accuracy in challenging conditions, limiting broad applicability. This research proposes an AI and ML integrated audio fingerprinting algorithm to enhance accuracy. Built on the Dejavu Project's foundations, the study emphasizes real-world scenario simulations with diverse background noises and distortions. Signal processing, central to Dejavu's model, includes the Fast Fourier Transform, spectrograms, and peak extraction. The "constellation" concept and fingerprint hashing enable unique song identification. Performance evaluation attests to 100% accuracy within a 5-second audio input, with a system showcasing predictable matching speed for efficiency. Storage analysis highlights the critical space-speed trade-off for practical implementation. This research advances audio fingerprinting's adaptability, addressing challenges in varied environments and applications.
[ "['Navin Kamuni' 'Sathishkumar Chintala' 'Naveen Kunchakuri'\n 'Jyothi Swaroop Arlagadda Narasimharaju' 'Venkat Kumar']" ]

id: 2402.13973
link: http://arxiv.org/pdf/2402.13973v1
updated: 2024-02-21T17:58:10Z
published: 2024-02-21T17:58:10Z
title: Linear-Time Graph Neural Networks for Scalable Recommendations
In an era of information explosion, recommender systems are vital tools to deliver personalized recommendations for users. The key of recommender systems is to forecast users' future behaviors based on previous user-item interactions. Due to their strong expressive power of capturing high-order connectivities in user-item interaction data, recent years have witnessed a rising interest in leveraging Graph Neural Networks (GNNs) to boost the prediction performance of recommender systems. Nonetheless, classic Matrix Factorization (MF) and Deep Neural Network (DNN) approaches still play an important role in real-world large-scale recommender systems due to their scalability advantages. Despite the existence of GNN-acceleration solutions, it remains an open question whether GNN-based recommender systems can scale as efficiently as classic MF and DNN methods. In this paper, we propose a Linear-Time Graph Neural Network (LTGNN) to scale up GNN-based recommender systems to achieve comparable scalability as classic MF approaches while maintaining GNNs' powerful expressiveness for superior prediction accuracy. Extensive experiments and ablation studies are presented to validate the effectiveness and scalability of the proposed algorithm. Our implementation based on PyTorch is available.
[ "['Jiahao Zhang' 'Rui Xue' 'Wenqi Fan' 'Xin Xu' 'Qing Li' 'Jian Pei'\n 'Xiaorui Liu']" ]

id: 2402.13979
link: http://arxiv.org/pdf/2402.13979v1
updated: 2024-02-21T18:09:04Z
published: 2024-02-21T18:09:04Z
title: The Importance of Architecture Choice in Deep Learning for Climate Applications
Machine Learning has become a pervasive tool in climate science applications. However, current models fail to address nonstationarity induced by anthropogenic alterations in greenhouse emissions and do not routinely quantify the uncertainty of proposed projections. In this paper, we model the Atlantic Meridional Overturning Circulation (AMOC), which is of major importance to climate in Europe and on the US East Coast because it transports warm water to these regions, and which has the potential for abrupt collapse. We generate arbitrarily extreme climate scenarios over arbitrary time scales, which we then predict using neural networks. Our analysis shows that the AMOC is predictable using neural networks under a diverse set of climate scenarios. Further experiments reveal that MLPs and Deep Ensembles can learn the physics of the AMOC instead of imitating its progression through autocorrelation. With quantified uncertainty, an intriguing pattern of "spikes" before critical points of collapse in the AMOC casts doubt on previous analyses that predicted an AMOC collapse within this century. Our results show that Bayesian Neural Networks perform poorly compared to denser architectures, and that care should be taken when applying neural networks to nonstationary scenarios such as climate projections. Further, our results highlight that big NN models might have difficulty modeling global Earth System dynamics accurately and being applied successfully in nonstationary climate scenarios, because the physics is challenging for neural networks to capture.
[ "['Simon Dräger' 'Maike Sonnewald']" ]

id: 2402.13984
link: http://arxiv.org/pdf/2402.13984v1
updated: 2024-02-21T18:12:07Z
published: 2024-02-21T18:12:07Z
title: Stability-Aware Training of Neural Network Interatomic Potentials with Differentiable Boltzmann Estimators
Neural network interatomic potentials (NNIPs) are an attractive alternative to ab-initio methods for molecular dynamics (MD) simulations. However, they can produce unstable simulations which sample unphysical states, limiting their usefulness for modeling phenomena occurring over longer timescales. To address these challenges, we present Stability-Aware Boltzmann Estimator (StABlE) Training, a multi-modal training procedure which combines conventional supervised training from quantum-mechanical energies and forces with reference system observables, to produce stable and accurate NNIPs. StABlE Training iteratively runs MD simulations to seek out unstable regions, and corrects the instabilities via supervision with a reference observable. The training procedure is enabled by the Boltzmann Estimator, which allows efficient computation of gradients required to train neural networks to system observables, and can detect both global and local instabilities. We demonstrate our methodology across organic molecules, tetrapeptides, and condensed phase systems, along with using three modern NNIP architectures. In all three cases, StABlE-trained models achieve significant improvements in simulation stability and recovery of structural and dynamic observables. In some cases, StABlE-trained models outperform conventional models trained on datasets 50 times larger. As a general framework applicable across NNIP architectures and systems, StABlE Training is a powerful tool for training stable and accurate NNIPs, particularly in the absence of large reference datasets.
[ "['Sanjeev Raja' 'Ishan Amin' 'Fabian Pedregosa' 'Aditi S. Krishnapriyan']" ]

id: 2402.13987
link: http://arxiv.org/pdf/2402.13987v1
updated: 2024-02-21T18:16:48Z
published: 2024-02-21T18:16:48Z
title: A Simple and Yet Fairly Effective Defense for Graph Neural Networks
Graph Neural Networks (GNNs) have emerged as the dominant approach for machine learning on graph-structured data. However, concerns have arisen regarding the vulnerability of GNNs to small adversarial perturbations. Existing defense methods against such perturbations suffer from high time complexity and can negatively impact the model's performance on clean graphs. To address these challenges, this paper introduces NoisyGNNs, a novel defense method that incorporates noise into the underlying model's architecture. We establish a theoretical connection between noise injection and the enhancement of GNN robustness, highlighting the effectiveness of our approach. We further conduct extensive empirical evaluations on the node classification task to validate our theoretical findings, focusing on two popular GNNs: the GCN and GIN. The results demonstrate that NoisyGNN achieves superior or comparable defense performance to existing methods while minimizing added time complexity. The NoisyGNN approach is model-agnostic, allowing it to be integrated with different GNN architectures. Successful combinations of our NoisyGNN approach with existing defense techniques demonstrate even further improved adversarial defense results. Our code is publicly available at: https://github.com/Sennadir/NoisyGNN.
[ "['Sofiane Ennadir' 'Yassine Abbahaddou' 'Johannes F. Lutzeyer'\n 'Michalis Vazirgiannis' 'Henrik Boström']" ]

id: 2402.13989
link: http://arxiv.org/pdf/2402.13989v2
updated: 2024-04-11T16:07:30Z
published: 2024-02-21T18:19:20Z
title: FedADMM-InSa: An Inexact and Self-Adaptive ADMM for Federated Learning
Federated learning (FL) is a promising framework for learning from distributed data while maintaining privacy. The development of efficient FL algorithms encounters various challenges, including heterogeneous data and systems, limited communication capacities, and constrained local computational resources. Recently developed FedADMM methods show great resilience to both data and system heterogeneity. However, they still suffer from performance deterioration if the hyperparameters are not carefully tuned. To address this issue, we propose an inexact and self-adaptive FedADMM algorithm, termed FedADMM-InSa. First, we design an inexactness criterion for the clients' local updates to eliminate the need for empirically setting the local training accuracy. This inexactness criterion can be assessed by each client independently based on its unique condition, thereby reducing the local computational cost and mitigating the undesirable straggle effect. The convergence of the resulting inexact ADMM is proved under the assumption of strongly convex loss functions. Additionally, we present a self-adaptive scheme that dynamically adjusts each client's penalty parameter, enhancing algorithm robustness by mitigating the need for empirical penalty parameter choices for each client. Extensive numerical experiments on both synthetic and real-world datasets are conducted. As validated by some numerical tests, our proposed algorithm can reduce the clients' local computational load significantly and also accelerate the learning process compared to the vanilla FedADMM.
[ "['Yongcun Song' 'Ziqi Wang' 'Enrique Zuazua']" ]

id: 2402.13999
link: http://arxiv.org/pdf/2402.13999v2
updated: 2024-06-10T10:16:19Z
published: 2024-02-21T18:35:27Z
title: Asymptotics of Learning with Deep Structured (Random) Features
For a large class of feature maps we provide a tight asymptotic characterisation of the test error associated with learning the readout layer, in the high-dimensional limit where the input dimension, hidden layer widths, and number of training samples are proportionally large. This characterization is formulated in terms of the population covariance of the features. Our work is partially motivated by the problem of learning with Gaussian rainbow neural networks, namely deep non-linear fully-connected networks with random but structured weights, whose row-wise covariances are further allowed to depend on the weights of previous layers. For such networks we also derive a closed-form formula for the feature covariance in terms of the weight matrices. We further find that in some cases our results can capture feature maps learned by deep, finite-width neural networks trained under gradient descent.
[ "['Dominik Schröder' 'Daniil Dmitriev' 'Hugo Cui' 'Bruno Loureiro']" ]

id: 2402.14009
link: http://arxiv.org/pdf/2402.14009v2
updated: 2024-05-27T16:12:14Z
published: 2024-02-21T18:50:12Z
title: Geometry-Informed Neural Networks
Geometry is a ubiquitous language of computer graphics, design, and engineering. However, the lack of large shape datasets limits the application of state-of-the-art supervised learning methods and motivates the exploration of alternative learning strategies. To this end, we introduce geometry-informed neural networks (GINNs) to train shape generative models \emph{without any data}. GINNs combine (i) learning under constraints, (ii) neural fields as a suitable representation, and (iii) generating diverse solutions to under-determined problems. We apply GINNs to several two and three-dimensional problems of increasing levels of complexity. Our results demonstrate the feasibility of training shape generative models in a data-free setting. This new paradigm opens several exciting research directions, expanding the application of generative models into domains where data is sparse.
[ "['Arturs Berzins' 'Andreas Radler' 'Sebastian Sanokowski'\n 'Sepp Hochreiter' 'Johannes Brandstetter']" ]

id: 2402.14012
link: http://arxiv.org/pdf/2402.14012v2
updated: 2024-07-12T15:44:38Z
published: 2024-02-21T18:51:42Z
title: Chasing Convex Functions with Long-term Constraints
We introduce and study a family of online metric problems with long-term constraints. In these problems, an online player makes decisions $\mathbf{x}_t$ in a metric space $(X,d)$ to simultaneously minimize their hitting cost $f_t(\mathbf{x}_t)$ and switching cost as determined by the metric. Over the time horizon $T$, the player must satisfy a long-term demand constraint $\sum_{t} c(\mathbf{x}_t) \geq 1$, where $c(\mathbf{x}_t)$ denotes the fraction of demand satisfied at time $t$. Such problems can find a wide array of applications to online resource allocation in sustainable energy/computing systems. We devise optimal competitive and learning-augmented algorithms for the case of bounded hitting cost gradients and weighted $\ell_1$ metrics, and further show that our proposed algorithms perform well in numerical experiments.
[ "['Adam Lechowicz' 'Nicolas Christianson' 'Bo Sun' 'Noman Bashir'\n 'Mohammad Hajiesmaili' 'Adam Wierman' 'Prashant Shenoy']" ]

id: 2402.14013
link: http://arxiv.org/pdf/2402.14013v1
updated: 2024-02-21T18:52:20Z
published: 2024-02-21T18:52:20Z
title: Misalignment, Learning, and Ranking: Harnessing Users Limited Attention
In digital health and EdTech, recommendation systems face a significant challenge: users often choose impulsively, in ways that conflict with the platform's long-term payoffs. This misalignment makes it difficult to effectively learn to rank items, as it may hinder exploration of items with greater long-term payoffs. Our paper tackles this issue by utilizing users' limited attention spans. We propose a model where a platform presents items with unknown payoffs to the platform in a ranked list to $T$ users over time. Each user selects an item by first considering a prefix window of these ranked items and then picking the highest preferred item in that window (and the platform observes its payoff for this item). We study the design of online bandit algorithms that obtain vanishing regret against hindsight optimal benchmarks. We first consider adversarial window sizes and stochastic iid payoffs. We design an active-elimination-based algorithm that achieves an optimal instance-dependent regret bound of $O(\log T)$, by showing matching regret upper and lower bounds. The key idea is using the combinatorial structure of the problem to either obtain a large payoff from each item or to explore by getting a sample from that item. This method systematically narrows down the item choices to enhance learning efficiency and payoff. Second, we consider adversarial payoffs and stochastic iid window sizes. We start from the full-information problem of finding the permutation that maximizes the expected payoff. By a novel combinatorial argument, we characterize the polytope of admissible item selection probabilities by a permutation and show it has a polynomial-size representation. Using this representation, we show how standard algorithms for adversarial online linear optimization in the space of admissible probabilities can be used to obtain a polynomial-time algorithm with $O(\sqrt{T})$ regret.
[ "['Arpit Agarwal' 'Rad Niazadeh' 'Prathamesh Patil']" ]

id: 2402.14015
link: http://arxiv.org/pdf/2402.14015v1
updated: 2024-02-21T18:54:37Z
published: 2024-02-21T18:54:37Z
title: Corrective Machine Unlearning
Machine Learning models increasingly face data integrity challenges due to the use of large-scale training datasets drawn from the internet. We study what model developers can do if they detect that some data was manipulated or incorrect. Such manipulated data can cause adverse effects like vulnerability to backdoored samples, systematic biases, and in general, reduced accuracy on certain input domains. Often, all manipulated training samples are not known, and only a small, representative subset of the affected data is flagged. We formalize "Corrective Machine Unlearning" as the problem of mitigating the impact of data affected by unknown manipulations on a trained model, possibly knowing only a subset of impacted samples. We demonstrate that the problem of corrective unlearning has significantly different requirements from traditional privacy-oriented unlearning. We find most existing unlearning methods, including the gold-standard retraining-from-scratch, require most of the manipulated data to be identified for effective corrective unlearning. However, one approach, SSD, achieves limited success in unlearning adverse effects with just a small portion of the manipulated samples, showing the tractability of this setting. We hope our work spurs research towards developing better methods for corrective unlearning and offers practitioners a new strategy to handle data integrity challenges arising from web-scale training.
[ "['Shashwat Goel' 'Ameya Prabhu' 'Philip Torr' 'Ponnurangam Kumaraguru'\n 'Amartya Sanyal']" ]

id: 2402.14017
link: http://arxiv.org/pdf/2402.14017v1
updated: 2024-02-21T18:56:03Z
published: 2024-02-21T18:56:03Z
title: D-Flow: Differentiating through Flows for Controlled Generation
Taming the generation outcome of state-of-the-art Diffusion and Flow-Matching (FM) models without having to re-train a task-specific model unlocks a powerful tool for solving inverse problems, conditional generation, and controlled generation in general. In this work we introduce D-Flow, a simple framework for controlling the generation process by differentiating through the flow, optimizing for the source (noise) point. We motivate this framework by our key observation stating that for Diffusion/FM models trained with Gaussian probability paths, differentiating through the generation process projects the gradient onto the data manifold, implicitly injecting the prior into the optimization process. We validate our framework on linear and non-linear controlled generation problems including image and audio inverse problems and conditional molecule generation, reaching state-of-the-art performance across all of them.
[ "['Heli Ben-Hamu' 'Omri Puny' 'Itai Gat' 'Brian Karrer' 'Uriel Singer'\n 'Yaron Lipman']" ]

id: 2402.14020
link: http://arxiv.org/pdf/2402.14020v1
updated: 2024-02-21T18:59:13Z
published: 2024-02-21T18:59:13Z
title: Coercing LLMs to do and reveal (almost) anything
It has recently been shown that adversarial attacks on large language models (LLMs) can "jailbreak" the model into making harmful statements. In this work, we argue that the spectrum of adversarial attacks on LLMs is much larger than merely jailbreaking. We provide a broad overview of possible attack surfaces and attack goals. Based on a series of concrete examples, we discuss, categorize and systematize attacks that coerce varied unintended behaviors, such as misdirection, model control, denial-of-service, or data extraction. We analyze these attacks in controlled experiments, and find that many of them stem from the practice of pre-training LLMs with coding capabilities, as well as the continued existence of strange "glitch" tokens in common LLM vocabularies that should be removed for security reasons.
[ "['Jonas Geiping' 'Alex Stein' 'Manli Shu' 'Khalid Saifullah' 'Yuxin Wen'\n 'Tom Goldstein']" ]

id: 2402.14022
link: http://arxiv.org/pdf/2402.14022v1
updated: 2024-02-01T13:46:36Z
published: 2024-02-01T13:46:36Z
title: Statistical validation of a deep learning algorithm for dental anomaly detection in intraoral radiographs using paired data
This article describes the clinical validation study setup, statistical analysis and results for a deep learning algorithm which detects dental anomalies in intraoral radiographic images, more specifically caries, apical lesions, root canal treatment defects, marginal defects at crown restorations, periodontal bone loss and calculus. The study compares the detection performance of dentists using the deep learning algorithm to the prior performance of these dentists evaluating the images without algorithmic assistance. Calculating the marginal profit and loss of performance from the annotated paired image data allows for a quantification of the hypothesized change in sensitivity and specificity. The statistical significance of these results is extensively proven using both McNemar's test and the binomial hypothesis test. The average sensitivity increases from $60.7\%$ to $85.9\%$, while the average specificity slightly decreases from $94.5\%$ to $92.7\%$. We prove that the increase of the area under the localization ROC curve (AUC) is significant (from $0.60$ to $0.86$ on average), while the average AUC is bounded by the $95\%$ confidence intervals $[0.54, 0.65]$ and $[0.82, 0.90]$. When using the deep learning algorithm for diagnostic guidance, the dentist can be $95\%$ confident that the average true population sensitivity is bounded by the range $79.6\%$ to $91.9\%$. The proposed paired data setup and statistical analysis can be used as a blueprint to thoroughly test the effect of a modality change, like a deep learning based detection and/or segmentation, on radiographic images.
[ "['Pieter Van Leemput' 'Johannes Keustermans' 'Wouter Mollemans']" ]

id: 2402.14027
link: http://arxiv.org/pdf/2402.14027v1
updated: 2024-02-17T22:26:56Z
published: 2024-02-17T22:26:56Z
title: Learning causation event conjunction sequences
This is an examination of some methods that learn causations in event sequences. A causation is defined as a conjunction of one or more cause events occurring in an arbitrary order, with possible intervening non-causal events, that lead to an effect. The methods include recurrent and non-recurrent artificial neural networks (ANNs), as well as a histogram-based algorithm. An attention recurrent ANN performed the best of the ANNs, while the histogram algorithm was significantly superior to all the ANNs.
[ "['Thomas E. Portegys']" ]

id: 2402.14029
link: http://arxiv.org/pdf/2402.14029v2
updated: 2024-06-03T13:12:18Z
published: 2024-02-20T03:14:45Z
title: Partial Search in a Frozen Network is Enough to Find a Strong Lottery Ticket
Randomly initialized dense networks contain subnetworks that achieve high accuracy without weight learning -- strong lottery tickets (SLTs). Recently, Gadhikar et al. (2023) demonstrated that SLTs can also be found within a randomly pruned source network, thus reducing the SLT search space. However, this limits the search to SLTs that are even sparser than the source, leading to worse accuracy due to unintentionally high sparsity. This paper proposes a method that reduces the SLT search space by an arbitrary ratio independent of the desired SLT sparsity. A random subset of the initial weights is excluded from the search space by freezing it -- i.e., by either permanently pruning them or locking them as a fixed part of the SLT. In addition to reducing search space, the proposed random freezing can also provide the benefit of reducing the model size for inference. Furthermore, experimental results show that the proposed method finds SLTs with better accuracy-to-model size trade-off than the SLTs obtained from dense or randomly pruned source networks. In particular, the SLTs found in Frozen ResNets on image classification using ImageNet significantly improve the accuracy-to-search space and accuracy-to-model size trade-offs over SLTs within dense (non-freezing) or sparse (non-locking) random networks.
[ "['Hikari Otsuka' 'Daiki Chijiwa' 'Ángel López García-Arias'\n 'Yasuyuki Okoshi' 'Kazushi Kawamura' 'Thiem Van Chu' 'Daichi Fujiki'\n 'Susumu Takeuchi' 'Masato Motomura']" ]

id: 2402.14031
link: http://arxiv.org/pdf/2402.14031v1
updated: 2024-02-20T11:34:19Z
published: 2024-02-20T11:34:19Z
title: Autoencoder with Ordered Variance for Nonlinear Model Identification
This paper presents a novel autoencoder with ordered variance (AEO) in which the loss function is modified with a variance regularization term to enforce order in the latent space. Further, the autoencoder is modified using ResNets, which results in a ResNet AEO (RAEO). The paper also illustrates the effectiveness of AEO and RAEO in extracting nonlinear relationships among input variables in an unsupervised setting.
[ "['Midhun T. Augustine' 'Parag Patil' 'Mani Bhushan' 'Sharad Bhartiya']" ]

id: 2402.14033
link: http://arxiv.org/abs/2402.14033v1
updated: 2024-02-21T03:04:34Z
published: 2024-02-21T03:04:34Z
title: VN Network: Embedding Newly Emerging Entities with Virtual Neighbors
Embedding entities and relations into continuous vector spaces has attracted a surge of interest in recent years. Most embedding methods assume that all test entities are available during training, which makes it time-consuming to retrain embeddings for newly emerging entities. To address this issue, recent works apply the graph neural network on the existing neighbors of the unseen entities. In this paper, we propose a novel framework, namely Virtual Neighbor (VN) network, to address three key challenges. Firstly, to reduce the neighbor sparsity problem, we introduce the concept of virtual neighbors inferred by rules, and we assign soft labels to these neighbors by solving a rule-constrained problem, rather than simply regarding them as unquestionably true. Secondly, many existing methods only use one-hop or two-hop neighbors for aggregation and ignore the distant information that may be helpful. Instead, we identify both logic and symmetric path rules to capture complex patterns. Finally, instead of a one-time injection of rules, we employ an iterative learning scheme between the embedding method and virtual neighbor prediction to capture the interactions between them. Experimental results on two knowledge graph completion tasks demonstrate that our VN network significantly outperforms state-of-the-art baselines. Furthermore, results on Subject/Object-R show that our proposed VN network is highly robust to the neighbor sparsity problem.
[ "['Yongquan He' 'Zihan Wang' 'Peng Zhang' 'Zhaopeng Tu' 'Zhaochun Ren']" ]

id: 2402.14035
link: http://arxiv.org/pdf/2402.14035v3
updated: 2024-05-15T12:42:04Z
published: 2024-02-21T04:33:26Z
title: Wisdom of Committee: Distilling from Foundation Model to Specialized Application Model
Recent advancements in foundation models have yielded impressive performance across a wide range of tasks. Meanwhile, for specific applications, practitioners have been developing specialized application models. To enjoy the benefits of both kinds of models, one natural path is to transfer the knowledge in foundation models into specialized application models, which are generally more efficient for serving. Techniques from knowledge distillation may be applied here, where the application model learns to mimic the foundation model. However, specialized application models and foundation models have substantial gaps in capacity, employing distinct architectures, using different input features from different modalities, and being optimized on different distributions. These differences in model characteristics lead to significant challenges for distillation methods. In this work, we propose creating a teaching committee comprising both foundation model teachers and complementary teachers. Complementary teachers possess model characteristics akin to the student's, aiming to bridge the gap between the foundation model and specialized application models for a smoother knowledge transfer. Further, to accommodate the dissimilarity among the teachers in the committee, we introduce DiverseDistill, which allows the student to understand the expertise of each teacher and extract task knowledge. Our evaluations demonstrate that adding complementary teachers enhances student performance. Finally, DiverseDistill consistently outperforms baseline distillation methods, regardless of the teacher choices, resulting in significantly improved student performance.
[ "['Zichang Liu' 'Qingyun Liu' 'Yuening Li' 'Liang Liu'\n 'Anshumali Shrivastava' 'Shuchao Bi' 'Lichan Hong' 'Ed H. Chi' 'Zhe Zhao']" ]

id: 2402.14039
link: http://arxiv.org/pdf/2402.14039v1
updated: 2024-02-21T06:39:04Z
published: 2024-02-21T06:39:04Z
title: Specialty detection in the context of telemedicine in a highly imbalanced multi-class distribution
The Covid-19 pandemic has led to an increase in the awareness of and demand for telemedicine services, resulting in a need for automating the process and relying on machine learning (ML) to reduce the operational load. This research proposes a specialty detection classifier based on a machine learning model to automate the process of detecting the correct specialty for each question and routing it to the correct doctor. The study focuses on handling multiclass and highly imbalanced datasets for Arabic medical questions, comparing some oversampling techniques, developing a Deep Neural Network (DNN) model for specialty detection, and exploring the hidden business areas that rely on specialty detection such as customizing and personalizing the consultation flow for different specialties. The proposed module is deployed in both synchronous and asynchronous medical consultations to provide more real-time classification, minimize the doctor effort in addressing the correct specialty, and give the system more flexibility in customizing the medical consultation flow. The evaluation and assessment are based on accuracy, precision, recall, and F1-score. The experimental results suggest that combining multiple techniques, such as SMOTE and reweighing with keyword identification, is necessary to achieve improved performance in detecting rare classes in imbalanced multiclass datasets. By using these techniques, specialty detection models can more accurately detect rare classes in real-world scenarios where imbalanced data is common.
[ "['Alaa Alomari' 'Hossam Faris' 'Pedro A. Castillo']" ]

id: 2402.14041
link: http://arxiv.org/pdf/2402.14041v6
updated: 2024-05-27T08:14:20Z
published: 2024-02-21T10:16:57Z
title: E2USD: Efficient-yet-effective Unsupervised State Detection for Multivariate Time Series
Cyber-physical system sensors emit multivariate time series (MTS) that monitor physical system processes. Such time series generally capture unknown numbers of states, each with a different duration, that correspond to specific conditions, e.g., "walking" or "running" in human-activity monitoring. Unsupervised identification of such states facilitates storage and processing in subsequent data analyses, as well as enhances result interpretability. Existing state-detection proposals face three challenges. First, they introduce substantial computational overhead, rendering them impractical in resource-constrained or streaming settings. Second, although state-of-the-art (SOTA) proposals employ contrastive learning for representation, insufficient attention to false negatives hampers model convergence and accuracy. Third, SOTA proposals predominantly emphasize only offline non-streaming deployment; we highlight an urgent need to optimize for online streaming scenarios. We propose E2Usd that enables efficient-yet-accurate unsupervised MTS state detection. E2Usd exploits a Fast Fourier Transform-based Time Series Compressor (fftCompress) and a Decomposed Dual-view Embedding Module (ddEM) that together encode input MTSs at low computational overhead. Additionally, we propose a False Negative Cancellation Contrastive Learning method (fnccLearning) to counteract the effects of false negatives and to achieve more cluster-friendly embedding spaces. To reduce computational overhead further in streaming settings, we introduce Adaptive Threshold Detection (adaTD). Comprehensive experiments with six baselines and six datasets offer evidence that E2Usd is capable of SOTA accuracy at significantly reduced computational overhead.
[ "['Zhichen Lai' 'Huan Li' 'Dalin Zhang' 'Yan Zhao' 'Weizhu Qian'\n 'Christian S. Jensen']" ]
null
null
2402.14042
null
null
http://arxiv.org/pdf/2402.14042v2
2024-03-01T11:46:26Z
2024-02-21T10:24:34Z
Protect and Extend -- Using GANs for Synthetic Data Generation of Time-Series Medical Records
Preservation of private user data is of paramount importance for high Quality of Experience (QoE) and acceptability, particularly with services treating sensitive data, such as IT-based health services. Whereas anonymization techniques were shown to be prone to data re-identification, synthetic data generation has gradually replaced anonymization since it is less time- and resource-consuming and more robust to data leakage. Generative Adversarial Networks (GANs) have been used for generating synthetic datasets, especially GAN frameworks adhering to differential privacy. This research compares state-of-the-art GAN-based models for synthetic data generation to generate time-series synthetic medical records of dementia patients which can be distributed without privacy concerns. Predictive modeling, autocorrelation, and distribution analysis are used to assess the Quality of Generating (QoG) of the generated data. The privacy preservation of the respective models is assessed by applying membership inference attacks to determine potential data leakage risks. Our experiments indicate the superiority of the privacy-preserving GAN (PPGAN) model over other models regarding privacy preservation while maintaining an acceptable level of QoG. The presented results can support better data protection for medical use cases in the future.
[ "['Navid Ashrafi' 'Vera Schmitt' 'Robert P. Spang' 'Sebastian Möller'\n 'Jan-Niklas Voigt-Antons']" ]
null
null
2402.14045
null
null
http://arxiv.org/pdf/2402.14045v3
2024-05-27T06:00:15Z
2024-02-21T13:06:48Z
A Systematic Review of Low-Rank and Local Low-Rank Matrix Approximation in Big Data Medical Imaging
The large volume and complexity of medical imaging datasets are bottlenecks for storage, transmission, and processing. To tackle these challenges, the application of low-rank matrix approximation (LRMA) and its derivative, local LRMA (LLRMA), has demonstrated potential. A detailed analysis of the literature identifies LRMA and LLRMA methods applied to various imaging modalities, and the challenges and limitations associated with existing LRMA and LLRMA methods are addressed. We note a significant shift towards a preference for LLRMA in the medical imaging field since 2015, demonstrating its potential and effectiveness in capturing complex structures in medical data compared to LRMA. Acknowledging the limitations of shallow similarity methods used with LLRMA, we suggest advanced semantic image segmentation as a similarity measure, explaining in detail how it can be used to identify similar patches and assessing its feasibility. We note that LRMA and LLRMA are mainly applied to unstructured medical data, and we propose extending their application to different medical data types, including structured and semi-structured. This paper also discusses how LRMA and LLRMA can be applied to regular data with missing entries, as well as the impact of inaccuracies in predicting missing values. We discuss the impact of patch size and propose the use of random search (RS) to determine the optimal patch size. To enhance feasibility, a hybrid approach using Bayesian optimization and RS is proposed, which could improve the application of LRMA and LLRMA in medical imaging.
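For readers unfamiliar with the core operation the review surveys, the following sketch computes a rank-r approximation of a matrix via truncated SVD (the Eckart-Young optimum); the matrix here is synthetic, not medical imaging data.

```python
# Generic low-rank matrix approximation via truncated SVD.
import numpy as np

def low_rank_approx(A: np.ndarray, r: int) -> np.ndarray:
    """Return the best rank-r approximation of A in the Frobenius norm
    (Eckart-Young), obtained by truncating its SVD."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

A = np.random.default_rng(0).standard_normal((128, 128))
A_r = low_rank_approx(A, r=10)
err = np.linalg.norm(A - A_r) / np.linalg.norm(A)
print(f"relative error at rank 10: {err:.3f}")
```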
[ "['Sisipho Hamlomo' 'Marcellin Atemkeng' 'Yusuf Brima' 'Chuneeta Nunhokee'\n 'Jeremy Baxter']" ]
null
null
2402.14047
null
null
http://arxiv.org/pdf/2402.14047v2
2024-07-15T08:49:49Z
2024-02-21T15:51:01Z
Simple and Effective Transfer Learning for Neuro-Symbolic Integration
Deep Learning (DL) techniques have achieved remarkable successes in recent years. However, their ability to generalize and execute reasoning tasks remains a challenge. A potential solution to this issue is Neuro-Symbolic Integration (NeSy), where neural approaches are combined with symbolic reasoning. Most of these methods exploit a neural network to map perceptions to symbols and a logical reasoner to predict the output of the downstream task. These methods exhibit superior generalization capacity compared to fully neural architectures. However, they suffer from several issues, including slow convergence, learning difficulties with complex perception tasks, and convergence to local minima. This paper proposes a simple yet effective method to ameliorate these problems. The key idea involves pretraining a neural model on the downstream task. Then, a NeSy model is trained on the same task via transfer learning, where the weights of the perceptual part are injected from the pretrained network. The key observation of our work is that the neural network fails to generalize only at the level of the symbolic part while being perfectly capable of learning the mapping from perceptions to symbols. We have tested our training strategy on various SOTA NeSy methods and datasets, demonstrating consistent improvements in the aforementioned problems.
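A hypothetical PyTorch sketch of the transfer-learning recipe described above: pretrain a perception network directly on the downstream task, then inject its weights into the perceptual part of a NeSy model. `PerceptionNet` and the surrounding training steps are illustrative stand-ins, not the paper's code.

```python
# Sketch: weight injection from a pretrained perception net into a NeSy model.
import torch.nn as nn

class PerceptionNet(nn.Module):
    """A toy perception module mapping inputs to symbol logits."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(nn.Flatten(), nn.Linear(784, 128),
                                      nn.ReLU(), nn.Linear(128, 10))
    def forward(self, x):
        return self.backbone(x)

perception = PerceptionNet()
# ... pretrain `perception` directly on the downstream task here ...

# Inject the pretrained weights into the NeSy model's perceptual part.
nesy_perception = PerceptionNet()
nesy_perception.load_state_dict(perception.state_dict())
# ... continue training the full NeSy model with its symbolic reasoner ...
```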
[ "['Alessandro Daniele' 'Tommaso Campari' 'Sagar Malhotra'\n 'Luciano Serafini']" ]
null
null
2402.14048
null
null
http://arxiv.org/pdf/2402.14048v1
2024-02-21T16:38:14Z
2024-02-21T16:38:14Z
PolyNet: Learning Diverse Solution Strategies for Neural Combinatorial Optimization
Reinforcement learning-based methods for constructing solutions to combinatorial optimization problems are rapidly approaching the performance of human-designed algorithms. To further narrow the gap, learning-based approaches must efficiently explore the solution space during the search process. Recent approaches artificially increase exploration by enforcing diverse solution generation through handcrafted rules; however, these rules can impair solution quality and are difficult to design for more complex problems. In this paper, we introduce PolyNet, an approach for improving exploration of the solution space by learning complementary solution strategies. In contrast to other works, PolyNet uses only a single decoder and a training schema that does not enforce diverse solution generation through handcrafted rules. We evaluate PolyNet on four combinatorial optimization problems and observe that the implicit diversity mechanism allows PolyNet to find better solutions than approaches that explicitly enforce diverse solution generation.
[ "['André Hottung' 'Mridul Mahajan' 'Kevin Tierney']" ]
null
null
2402.14049
null
null
http://arxiv.org/pdf/2402.14049v1
2024-02-21T18:25:04Z
2024-02-21T18:25:04Z
Generative Adversarial Models for Extreme Downscaling of Climate Datasets
Addressing the challenges of climate change requires accurate and high-resolution mapping of climate and weather variables. However, many existing climate datasets, such as the gridded outputs of the state-of-the-art numerical climate models (e.g., general circulation models), are only available at very coarse spatial resolutions due to the model complexity and extremely high computational demand. Deep-learning-based methods, particularly generative adversarial networks (GANs) and their variants, have proved effective for refining natural images, and have shown great promise in improving scientific datasets. In this paper, we describe a conditional GAN-based geospatial downscaling method for extreme downscaling of gridded climate datasets. Compared to most existing methods, the method can generate high-resolution accurate climate datasets from very low-resolution inputs. More importantly, the method explicitly considers the uncertainty inherent to the downscaling process that tends to be ignored in existing methods. Given an input, the method can produce a multitude of plausible high-resolution samples instead of one single deterministic result. These samples allow for an empirical exploration and inferences of model uncertainty and robustness. With a case study of gridded climate datasets (wind velocity and solar irradiance), we demonstrate the performance of the framework in downscaling tasks with very high scaling factors (up to $64\times$) and highlight the advantages of the framework with a comprehensive comparison with commonly used downscaling methods, including area-to-point (ATP) kriging, deep image prior (DIP), enhanced deep super-resolution network (EDSR), enhanced super-resolution generative adversarial networks (ESRGAN), and physics-informed resolution-enhancing GAN (PhIRE GAN).
[ "['Guiye Li' 'Guofeng Cao']" ]
null
null
2402.14073
null
null
http://arxiv.org/pdf/2402.14073v1
2024-02-21T19:01:03Z
2024-02-21T19:01:03Z
Improving Language Understanding from Screenshots
An emerging family of language models (LMs), capable of processing both text and images within a single visual view, has the promise to unlock complex tasks such as chart understanding and UI navigation. We refer to these models as screenshot language models. Despite their appeal, existing screenshot LMs substantially lag behind text-only models on language understanding tasks. To close this gap, we adopt a simplified setting where the model inputs are plain-text-rendered screenshots, and we focus on improving the text ability of screenshot LMs. We propose a novel Patch-and-Text Prediction (PTP) objective, which masks and recovers both image patches of screenshots and text within screenshots. We also conduct extensive ablation studies on masking rates and patch sizes, as well as designs for improving training stability. Our pre-trained model, while solely taking visual inputs, achieves comparable performance with BERT on 6 out of 8 GLUE tasks (within 2%) and improves up to 8% over prior work. Additionally, we extend PTP to train autoregressive screenshot LMs and demonstrate its effectiveness--our models can significantly reduce perplexity by utilizing the screenshot context. Together, we hope our findings can inspire future research on developing powerful screenshot LMs and extending their reach to broader applications.
[ "['Tianyu Gao' 'Zirui Wang' 'Adithya Bhaskar' 'Danqi Chen']" ]
null
null
2402.14080
null
null
http://arxiv.org/pdf/2402.14080v1
2024-02-21T19:09:53Z
2024-02-21T19:09:53Z
Efficient Normalized Conformal Prediction and Uncertainty Quantification for Anti-Cancer Drug Sensitivity Prediction with Deep Regression Forests
Deep learning models are being adopted and applied to various critical decision-making tasks, yet they are trained to provide point predictions without providing degrees of confidence. The trustworthiness of deep learning models can be increased if paired with uncertainty estimations. Conformal Prediction has emerged as a promising method to pair machine learning models with prediction intervals, allowing for a view of the model's uncertainty. However, popular uncertainty estimation methods for conformal prediction fail to provide heteroskedastic intervals that are equally accurate for all samples. In this paper, we propose a method to estimate the uncertainty of each sample by calculating the variance obtained from a Deep Regression Forest. We show that the deep regression forest variance improves the efficiency and coverage of normalized inductive conformal prediction on a drug response prediction task.
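The sketch below shows one plausible form of the normalized inductive conformal prediction step described above, where interval widths are scaled by a per-sample uncertainty estimate such as a forest's per-tree variance; the function name and interface are assumptions, not the paper's implementation.

```python
# Sketch of normalized inductive conformal prediction intervals.
import numpy as np

def normalized_icp(y_cal, pred_cal, sigma_cal, pred_test, sigma_test,
                   alpha=0.1):
    """Return (lower, upper) bounds of 1 - alpha prediction intervals,
    scaled per sample by the uncertainty estimates sigma_*."""
    scores = np.abs(y_cal - pred_cal) / sigma_cal   # normalized nonconformity
    n = len(scores)
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(scores, level)                  # conformal quantile
    return pred_test - q * sigma_test, pred_test + q * sigma_test

rng = np.random.default_rng(0)
y_cal, pred_cal = rng.normal(size=200), rng.normal(size=200)
sigma_cal = rng.uniform(0.5, 2.0, size=200)
lo, hi = normalized_icp(y_cal, pred_cal, sigma_cal,
                        pred_test=np.zeros(5), sigma_test=np.ones(5))
print(lo, hi)
```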
[ "['Daniel Nolte' 'Souparno Ghosh' 'Ranadip Pal']" ]
null
null
2402.14081
null
null
http://arxiv.org/pdf/2402.14081v2
2024-04-24T02:45:51Z
2024-02-21T19:10:08Z
Motion Code: Robust Time series Classification and Forecasting via Sparse Variational Multi-Stochastic Processes Learning
Despite being extensively studied, time series classification and forecasting on noisy data remain highly difficult. The main challenges lie in finding suitable mathematical concepts to describe time series and effectively separating noise from the true signals. Instead of treating time series as a static vector or a data sequence as often seen in previous methods, we introduce a novel framework that considers each time series, not necessarily of fixed length, as a sample realization of a continuous-time stochastic process. Such a mathematical model explicitly captures the data dependence across several timestamps and detects the hidden time-dependent signals from noise. However, since the underlying data is often composed of several distinct dynamics, modeling using a single stochastic process is not sufficient. To handle such settings, we first assign each dynamics a signature vector. We then propose the abstract concept of the most informative timestamps to infer a sparse approximation of the individual dynamics based on their assigned vectors. The final model, referred to as Motion Code, contains parameters that can fully capture different underlying dynamics in an integrated manner. This enables unmixed classification and the generation of sub-type-specific forecasts simultaneously. Extensive experiments on noisy time series data from sensors and devices demonstrate Motion Code's competitiveness against time series classification and forecasting benchmarks.
[ "['Chandrajit Bajaj' 'Minh Nguyen']" ]
null
null
2402.14086
null
null
http://arxiv.org/pdf/2402.14086v1
2024-02-21T19:20:06Z
2024-02-21T19:20:06Z
LexC-Gen: Generating Data for Extremely Low-Resource Languages with Large Language Models and Bilingual Lexicons
Data scarcity in low-resource languages can be addressed with word-to-word translations from labeled task data in high-resource languages using bilingual lexicons. However, bilingual lexicons often have limited lexical overlap with task data, which results in poor translation coverage and lexicon utilization. We propose lexicon-conditioned data generation (LexC-Gen), a method that generates low-resource-language classification task data at scale. Specifically, LexC-Gen first uses high-resource-language words from bilingual lexicons to generate lexicon-compatible task data, and then it translates them into low-resource languages with bilingual lexicons via word translation. Across 17 extremely low-resource languages, LexC-Gen-generated data is competitive with expert-translated gold data, and yields on average 5.6 and 8.9 points of improvement over existing lexicon-based word translation methods on sentiment analysis and topic classification tasks respectively. We show that conditioning on bilingual lexicons is the key component of LexC-Gen. LexC-Gen is also practical -- it only needs a single GPU to generate data at scale. It works well with open-access LLMs, and its cost is one-fifth of the cost of GPT-4-based multilingual data generation.
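A toy sketch of the final lexicon-based word-translation step: swapping high-resource-language words for low-resource-language equivalents. The lexicon and sentence below are made-up examples, not entries from an actual bilingual lexicon.

```python
# Toy word-to-word translation with a (made-up) bilingual lexicon.
lexicon = {"good": "baik", "movie": "filem", "very": "sangat"}

def word_translate(sentence: str) -> str:
    # Translate each word if it appears in the lexicon; keep it otherwise.
    return " ".join(lexicon.get(w, w) for w in sentence.lower().split())

print(word_translate("Very good movie"))   # -> "sangat baik filem"
```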
[ "['Zheng-Xin Yong' 'Cristina Menghini' 'Stephen H. Bach']" ]
null
null
2402.14095
null
null
http://arxiv.org/pdf/2402.14095v4
2024-05-03T15:25:09Z
2024-02-21T19:45:05Z
Zero-shot generalization across architectures for visual classification
Generalization to unseen data is a key desideratum for deep networks, but its relation to classification accuracy is unclear. Using a minimalist vision dataset and a measure of generalizability, we show that popular networks, from deep convolutional networks (CNNs) to transformers, vary in their power to extrapolate to unseen classes both across layers and across architectures. Accuracy is not a good predictor of generalizability, and generalization varies non-monotonically with layer depth.
[ "['Evan Gerritz' 'Luciano Dyballa' 'Steven W. Zucker']" ]
null
null
2402.14098
null
null
http://arxiv.org/pdf/2402.14098v1
2024-02-21T19:48:11Z
2024-02-21T19:48:11Z
Intriguing Properties of Modern GANs
Modern GANs achieve remarkable performance in terms of generating realistic and diverse samples. This has led many to believe that ``GANs capture the training data manifold''. In this work we show that this interpretation is wrong. We empirically show that the manifold learned by modern GANs does not fit the training distribution: specifically the manifold does not pass through the training examples and passes closer to out-of-distribution images than to in-distribution images. We also investigate the distribution over images implied by the prior over the latent codes and study whether modern GANs learn a density that approximates the training distribution. Surprisingly, we find that the learned density is very far from the data distribution and that GANs tend to assign higher density to out-of-distribution images. Finally, we demonstrate that the set of images used to train modern GANs are often not part of the typical set described by the GANs' distribution.
[ "['Roy Friedman' 'Yair Weiss']" ]
null
null
2402.14102
null
null
http://arxiv.org/pdf/2402.14102v2
2024-02-27T19:54:21Z
2024-02-21T19:54:25Z
Learning dynamic representations of the functional connectome in neurobiological networks
The static synaptic connectivity of neuronal circuits stands in direct contrast to the dynamics of their function. As in changing community interactions, different neurons can participate actively in various combinations to effect behaviors at different times. We introduce an unsupervised approach to learn the dynamic affinities between neurons in live, behaving animals, and to reveal which communities form among neurons at different times. The inference occurs in two major steps. First, pairwise non-linear affinities between neuronal traces from brain-wide calcium activity are organized by non-negative tensor factorization (NTF). Each factor specifies which groups of neurons are most likely interacting for an inferred interval in time, and for which animals. Finally, a generative model that allows for weighted community detection is applied to the functional motifs produced by NTF to reveal a dynamic functional connectome. Since time codes the different experimental variables (e.g., application of chemical stimuli), this provides an atlas of neural motifs active during separate stages of an experiment (e.g., stimulus application or spontaneous behaviors). Results from our analysis are experimentally validated, confirming that our method is able to robustly predict causal interactions between neurons to generate behavior. Code is available at https://github.com/dyballa/dynamic-connectomes.
[ "['Luciano Dyballa' 'Samuel Lang' 'Alexandra Haslund-Gourley'\n 'Eviatar Yemini' 'Steven W. Zucker']" ]
null
null
2402.14103
null
null
http://arxiv.org/pdf/2402.14103v2
2024-06-25T08:50:33Z
2024-02-21T19:55:01Z
Computational-Statistical Gaps for Improper Learning in Sparse Linear Regression
We study computational-statistical gaps for improper learning in sparse linear regression. More specifically, given $n$ samples from a $k$-sparse linear model in dimension $d$, we ask what is the minimum sample complexity to efficiently (in time polynomial in $d$, $k$, and $n$) find a potentially dense estimate for the regression vector that achieves non-trivial prediction error on the $n$ samples. Information-theoretically this can be achieved using $\Theta(k \log (d/k))$ samples. Yet, despite its prominence in the literature, there is no polynomial-time algorithm known to achieve the same guarantees using less than $\Theta(d)$ samples without additional restrictions on the model. Similarly, existing hardness results are either restricted to the proper setting, in which the estimate must be sparse as well, or only apply to specific algorithms. We give evidence that efficient algorithms for this task require at least (roughly) $\Omega(k^2)$ samples. In particular, we show that an improper learning algorithm for sparse linear regression can be used to solve sparse PCA problems (with a negative spike) in their Wishart form, in regimes in which efficient algorithms are widely believed to require at least $\Omega(k^2)$ samples. We complement our reduction with low-degree and statistical query lower bounds for the sparse PCA problems from which we reduce. Our hardness results apply to the (correlated) random design setting in which the covariates are drawn i.i.d. from a mean-zero Gaussian distribution with unknown covariance.
[ "['Rares-Darius Buhai' 'Jingqiu Ding' 'Stefan Tiegel']" ]
null
null
2402.14123
null
null
http://arxiv.org/pdf/2402.14123v1
2024-02-21T20:43:49Z
2024-02-21T20:43:49Z
DeiSAM: Segment Anything with Deictic Prompting
Large-scale, pre-trained neural networks have demonstrated strong capabilities in various tasks, including zero-shot image segmentation. To identify concrete objects in complex scenes, humans instinctively rely on deictic descriptions in natural language, i.e., referring to something depending on the context, such as "The object that is on the desk and behind the cup." However, deep learning approaches cannot reliably interpret such deictic representations due to their lack of reasoning capabilities in complex scenarios. To remedy this issue, we propose DeiSAM -- a combination of large pre-trained neural networks with differentiable logic reasoners -- for deictic promptable segmentation. Given a complex, textual segmentation description, DeiSAM leverages Large Language Models (LLMs) to generate first-order logic rules and performs differentiable forward reasoning on generated scene graphs. Subsequently, DeiSAM segments objects by matching them to the logically inferred image regions. As part of our evaluation, we propose the Deictic Visual Genome (DeiVG) dataset, containing paired visual input and complex, deictic textual prompts. Our empirical results demonstrate that DeiSAM is a substantial improvement over purely data-driven baselines for deictic promptable segmentation.
[ "['Hikaru Shindo' 'Manuel Brack' 'Gopika Sudhakaran' 'Devendra Singh Dhami'\n 'Patrick Schramowski' 'Kristian Kersting']" ]
null
null
2402.14131
null
null
http://arxiv.org/abs/2402.14131v1
2024-02-21T21:10:12Z
2024-02-21T21:10:12Z
Random forests for detecting weak signals and extracting physical information: a case study of magnetic navigation
It was recently demonstrated that two machine-learning architectures, reservoir computing and time-delayed feed-forward neural networks, can be exploited for detecting the Earth's anomaly magnetic field immersed in overwhelming complex signals for magnetic navigation in a GPS-denied environment. The accuracy of the detected anomaly field corresponds to a positioning accuracy in the range of 10 to 40 meters. To increase the accuracy and reduce the uncertainty of weak signal detection as well as to directly obtain the position information, we exploit the machine-learning model of random forests that combines the output of multiple decision trees to give optimal values of the physical quantities of interest. In particular, from time-series data gathered from the cockpit of a flying airplane during various maneuvering stages, where strong background complex signals are caused by other elements of the Earth's magnetic field and the fields produced by the electronic systems in the cockpit, we demonstrate that the random-forest algorithm performs remarkably well in detecting the weak anomaly field and in filtering the position of the aircraft. With the aid of the conventional inertial navigation system, the positioning error can be reduced to less than 10 meters. We also find that, contrary to the conventional wisdom, the classic Tolles-Lawson model for calibrating and removing the magnetic field generated by the body of the aircraft is not necessary and may even be detrimental for the success of the random-forest method.
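A schematic of the regression setup the paper describes, with a random forest mapping noisy features to a weak target signal; the synthetic features and targets below are placeholders for the cockpit magnetometer data.

```python
# Sketch: random-forest regression of a weak signal buried in noise.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.standard_normal((5000, 20))                 # sensor-derived features
y = X[:, :3].sum(axis=1) + 0.1 * rng.standard_normal(5000)  # weak target

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[:4000], y[:4000])                       # train on first 4000
print("R^2 on held-out data:", model.score(X[4000:], y[4000:]))
```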
[ "['Mohammadamin Moradi' 'Zheng-Meng Zhai' 'Aaron Nielsen' 'Ying-Cheng Lai']" ]
null
null
2402.14136
null
null
http://arxiv.org/pdf/2402.14136v1
2024-02-21T21:24:57Z
2024-02-21T21:24:57Z
GDTM: An Indoor Geospatial Tracking Dataset with Distributed Multimodal Sensors
Constantly locating moving objects, i.e., geospatial tracking, is essential for autonomous building infrastructure. Accurate and robust geospatial tracking often leverages multimodal sensor fusion algorithms, which require large datasets with time-aligned, synchronized data from various sensor types. However, such datasets are not readily available. Hence, we propose GDTM, a nine-hour dataset for multimodal object tracking with distributed multimodal sensors and reconfigurable sensor node placements. Our dataset enables the exploration of several research problems, such as optimizing architectures for processing multimodal data, and investigating models' robustness to adverse sensing conditions and sensor placement variances. A GitHub repository containing the code, sample data, and checkpoints of this work is available at https://github.com/nesl/GDTM.
[ "['Ho Lyun Jeong' 'Ziqi Wang' 'Colin Samplawski' 'Jason Wu' 'Shiwei Fang'\n 'Lance M. Kaplan' 'Deepak Ganesan' 'Benjamin Marlin' 'Mani Srivastava']" ]
null
null
2402.14139
null
null
http://arxiv.org/pdf/2402.14139v2
2024-03-04T17:47:32Z
2024-02-21T21:33:07Z
NeuroFlux: Memory-Efficient CNN Training Using Adaptive Local Learning
Efficient on-device Convolutional Neural Network (CNN) training in resource-constrained mobile and edge environments is an open challenge. Backpropagation is the standard approach adopted, but it is GPU memory intensive due to its strong inter-layer dependencies that demand intermediate activations across the entire CNN model to be retained in GPU memory. This necessitates smaller batch sizes to make training possible within the available GPU memory budget, but in turn, results in substantially high and impractical training time. We introduce NeuroFlux, a novel CNN training system tailored for memory-constrained scenarios. We exploit two novel opportunities: firstly, adaptive auxiliary networks that employ a variable number of filters to reduce GPU memory usage, and secondly, block-specific adaptive batch sizes, which not only cater to the GPU memory constraints but also accelerate the training process. NeuroFlux segments a CNN into blocks based on GPU memory usage and further attaches an auxiliary network to each layer in these blocks. This disrupts the typical layer dependencies under a new training paradigm - $\textit{`adaptive local learning'}$. Moreover, NeuroFlux adeptly caches intermediate activations, eliminating redundant forward passes over previously trained blocks, further accelerating the training process. The results are twofold when compared to Backpropagation: on various hardware platforms, NeuroFlux demonstrates training speed-ups of 2.3$\times$ to 6.1$\times$ under stringent GPU memory budgets, and NeuroFlux generates streamlined models that have 10.9$\times$ to 29.4$\times$ fewer parameters.
[ "['Dhananjay Saikumar' 'Blesson Varghese']" ]
null
null
2402.14145
null
null
http://arxiv.org/pdf/2402.14145v2
2024-06-04T01:30:07Z
2024-02-21T22:01:10Z
Multiply Robust Estimation for Local Distribution Shifts with Multiple Domains
Distribution shifts are ubiquitous in real-world machine learning applications, posing a challenge to the generalization of models trained on one data distribution to another. We focus on scenarios where data distributions vary across multiple segments of the entire population and only make local assumptions about the differences between training and test (deployment) distributions within each segment. We propose a two-stage multiply robust estimation method to improve model performance on each individual segment for tabular data analysis. The method involves fitting a linear combination of the base models, learned using clusters of training data from multiple segments, followed by a refinement step for each segment. Our method is designed to be implemented with commonly used off-the-shelf machine learning models. We establish theoretical guarantees on the generalization bound of the method on the test risk. With extensive experiments on synthetic and real datasets, we demonstrate that the proposed method substantially improves over existing alternatives in prediction accuracy and robustness on both regression and classification tasks. We also assess its effectiveness on a user city prediction dataset from Meta.
[ "['Steven Wilkins-Reeves' 'Xu Chen' 'Qi Ma' 'Christine Agarwal'\n 'Aude Hofleitner']" ]
null
null
2402.14148
null
null
http://arxiv.org/pdf/2402.14148v4
2024-06-14T14:27:40Z
2024-02-21T22:11:01Z
Neural Networks and Friction: Slide, Hold, Learn
In this study, it is demonstrated that Recurrent Neural Networks (RNNs), specifically those utilizing Gated Recurrent Unit (GRU) architecture, possess the capability to learn the complex dynamics of rate-and-state friction laws from synthetic data. The data employed for training the network is generated through the application of traditional rate-and-state friction equations coupled with the aging law for state evolution. A novel aspect of our approach is the formulation of a loss function that explicitly accounts for the direct effect by means of automatic differentiation. It is found that the RNN, with its GRU architecture, effectively learns to predict changes in the friction coefficient resulting from velocity jumps (with and without noise in the target data), thereby showcasing the potential of machine learning models in understanding and simulating the physics of frictional processes.
[ "['Joaquin Garcia-Suarez']" ]
null
null
2402.14151
null
null
http://arxiv.org/pdf/2402.14151v2
2024-04-03T20:11:02Z
2024-02-21T22:22:30Z
BIRCO: A Benchmark of Information Retrieval Tasks with Complex Objectives
We present the Benchmark of Information Retrieval (IR) tasks with Complex Objectives (BIRCO). BIRCO evaluates the ability of IR systems to retrieve documents given multi-faceted user objectives. The benchmark's complexity and compact size make it suitable for evaluating large language model (LLM)-based information retrieval systems. We present a modular framework for investigating factors that may influence LLM performance on retrieval tasks, and identify a simple baseline model which matches or outperforms existing approaches and more complex alternatives. No approach achieves satisfactory performance on all benchmark tasks, suggesting that stronger models and new retrieval protocols are necessary to address complex user needs.
[ "['Xiaoyue Wang' 'Jianyou Wang' 'Weili Cao' 'Kaicheng Wang'\n 'Ramamohan Paturi' 'Leon Bergen']" ]
null
null
2402.14160
null
null
http://arxiv.org/pdf/2402.14160v2
2024-03-05T06:55:26Z
2024-02-21T22:57:49Z
Recursive Speculative Decoding: Accelerating LLM Inference via Sampling Without Replacement
Speculative decoding is an inference-acceleration method for large language models (LLMs) where a small language model generates a draft-token sequence which is further verified by the target LLM in parallel. Recent works have advanced this method by establishing a draft-token tree, achieving superior performance over a single-sequence speculative decoding. However, those works independently generate tokens at each level of the tree, not leveraging the tree's full diversity. Besides, their empirical superiority has been shown for fixed sequence lengths, implicitly granting more computational resources to the LLM for the tree-based methods. None of the existing works has conducted empirical studies with fixed target computational budgets despite its importance to resource-bounded devices. We present Recursive Speculative Decoding (RSD), a novel tree-based method that samples draft tokens without replacement and maximizes the diversity of the tree. During RSD's drafting, the tree is built by either the Gumbel-Top-$k$ trick that draws tokens without replacement in parallel or Stochastic Beam Search that samples sequences without replacement while early-truncating unlikely draft sequences and reducing the computational cost of the LLM. We empirically evaluate RSD with Llama 2 and OPT models, showing that RSD outperforms the baseline methods, consistently for fixed draft sequence length and in most cases for fixed computational budgets at the LLM.
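A small sketch of the Gumbel-Top-$k$ trick RSD uses in its drafting stage: perturbing logits with Gumbel noise and taking the top $k$ yields $k$ samples without replacement in parallel. The vocabulary size and $k$ below are illustrative.

```python
# Gumbel-Top-k: k samples without replacement from a categorical distribution.
import torch

def gumbel_top_k(logits: torch.Tensor, k: int) -> torch.Tensor:
    """Draw k token indices without replacement from softmax(logits)."""
    gumbel = -torch.log(-torch.log(torch.rand_like(logits)))  # Gumbel(0,1)
    return torch.topk(logits + gumbel, k).indices

logits = torch.randn(32000)          # vocabulary-sized logits
draft_tokens = gumbel_top_k(logits, k=4)
print(draft_tokens)
```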
[ "['Wonseok Jeon' 'Mukul Gagrani' 'Raghavv Goel' 'Junyoung Park' 'Mingu Lee'\n 'Christopher Lott']" ]
null
null
2402.14167
null
null
http://arxiv.org/pdf/2402.14167v1
2024-02-21T23:08:54Z
2024-02-21T23:08:54Z
T-Stitch: Accelerating Sampling in Pre-Trained Diffusion Models with Trajectory Stitching
Sampling from diffusion probabilistic models (DPMs) is often expensive for high-quality image generation and typically requires many steps with a large model. In this paper, we introduce Trajectory Stitching (T-Stitch), a simple yet efficient technique to improve the sampling efficiency with little or no generation degradation. Instead of solely using a large DPM for the entire sampling trajectory, T-Stitch first leverages a smaller DPM in the initial steps as a cheap drop-in replacement of the larger DPM and switches to the larger DPM at a later stage. Our key insight is that different diffusion models learn similar encodings under the same training data distribution and smaller models are capable of generating good global structures in the early steps. Extensive experiments demonstrate that T-Stitch is training-free, generally applicable for different architectures, and complements most existing fast sampling techniques with flexible speed and quality trade-offs. On DiT-XL, for example, 40% of the early timesteps can be safely replaced with a 10x faster DiT-S without performance drop on class-conditional ImageNet generation. We further show that our method can also be used as a drop-in technique to not only accelerate the popular pretrained stable diffusion (SD) models but also improve the prompt alignment of stylized SD models from the public model zoo. Code is released at https://github.com/NVlabs/T-Stitch
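A hypothetical sketch of the stitching schedule: run a small denoiser for an early fraction of the timesteps, then hand off to the large one. The `denoise_step` interface and the switch fraction are assumptions for illustration, not the released code's API.

```python
# Sketch: stitch a small and a large denoiser along one sampling trajectory.
def t_stitch_sample(small_model, large_model, x_T, timesteps,
                    switch_frac=0.4):
    """Denoise x_T over `timesteps`, using the small model for the first
    `switch_frac` of the steps (high-noise regime) and the large one after."""
    x = x_T
    n_small = int(len(timesteps) * switch_frac)
    for i, t in enumerate(timesteps):             # high noise -> low noise
        model = small_model if i < n_small else large_model
        x = model.denoise_step(x, t)              # one reverse-diffusion step
    return x
```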
[ "['Zizheng Pan' 'Bohan Zhuang' 'De-An Huang' 'Weili Nie' 'Zhiding Yu'\n 'Chaowei Xiao' 'Jianfei Cai' 'Anima Anandkumar']" ]
null
null
2402.14169
null
null
http://arxiv.org/pdf/2402.14169v5
2024-06-25T17:03:22Z
2024-02-21T23:18:03Z
A Temporal Stochastic Bias Correction using a Machine Learning Attention model
Climate models are biased with respect to real-world observations. They usually need to be adjusted before being used in impact studies. The suite of statistical methods that enable such adjustments is called bias correction (BC). However, BC methods currently struggle to adjust temporal biases, because they mostly disregard the dependence between consecutive time points. As a result, climate statistics with long-range temporal properties, such as heatwave duration and frequency, cannot be corrected accurately. This makes it more difficult to produce reliable impact studies on such climate statistics. This paper offers a novel BC methodology to correct temporal biases. This is made possible by rethinking the philosophy behind BC. We introduce BC as a time-indexed regression task with stochastic outputs. Rethinking BC enables us to adapt state-of-the-art machine learning (ML) attention models and thereby learn different types of biases, including temporal asynchronicities. With a case study of heatwave duration statistics in Abuja, Nigeria, and Tokyo, Japan, we show more accurate results than current climate model outputs and alternative BC methods.
[ "['Omer Nivron' 'Damon J. Wischik' 'Mathieu Vrac' 'Emily Shuckburgh'\n 'Alex T. Archibald']" ]
null
null
2402.14180
null
null
http://arxiv.org/pdf/2402.14180v1
2024-02-21T23:45:57Z
2024-02-21T23:45:57Z
Linear Transformers are Versatile In-Context Learners
Recent research has demonstrated that transformers, particularly linear attention models, implicitly execute gradient-descent-like algorithms on data provided in-context during their forward inference step. However, their capability in handling more complex problems remains unexplored. In this paper, we prove that any linear transformer maintains an implicit linear model and can be interpreted as performing a variant of preconditioned gradient descent. We also investigate the use of linear transformers in a challenging scenario where the training data is corrupted with different levels of noise. Remarkably, we demonstrate that for this problem linear transformers discover an intricate and highly effective optimization algorithm, surpassing or matching in performance many reasonable baselines. We reverse-engineer this algorithm and show that it is a novel approach incorporating momentum and adaptive rescaling based on noise levels. Our findings show that even linear transformers possess the surprising ability to discover sophisticated optimization strategies.
[ "['Max Vladymyrov' 'Johannes von Oswald' 'Mark Sandler' 'Rong Ge']" ]
null
null
2402.14184
null
null
http://arxiv.org/pdf/2402.14184v1
2024-02-22T00:04:21Z
2024-02-22T00:04:21Z
Diversity-Aware Ensembling of Language Models Based on Topological Data Analysis
Ensembles are important tools for improving the performance of machine learning models. In natural language processing, ensembles boost the performance of a method thanks to the multiple large models available in open source. However, existing approaches mostly rely on simple averaging of predictions by ensembles with equal weights for each model, ignoring differences in the quality and conformity of models. We propose to estimate weights for ensembles of NLP models using not only knowledge of their individual performance but also their similarity to each other. By adopting distance measures based on Topological Data Analysis (TDA), we improve our ensemble. The quality improves for both text classification accuracy and relevant uncertainty estimation.
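A minimal sketch of the diversity-aware weighting idea, with plain Euclidean distance between model predictions standing in for the paper's TDA-based distance measures; shapes and data are illustrative.

```python
# Sketch: weight ensemble members by how different their predictions are.
import numpy as np

def diversity_weights(preds: np.ndarray) -> np.ndarray:
    """preds: (n_models, n_samples) probabilities for the positive class.
    Models far from the rest of the ensemble get larger weights."""
    dists = np.linalg.norm(preds[:, None, :] - preds[None, :, :], axis=-1)
    diversity = dists.mean(axis=1)        # mean distance to the other models
    return diversity / diversity.sum()

preds = np.random.default_rng(0).uniform(size=(5, 100))
w = diversity_weights(preds)
ensemble = w @ preds                      # weighted average prediction
print(w.round(3))
```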
[ "['Polina Proskura' 'Alexey Zaytsev']" ]
null
null
2402.14194
null
null
http://arxiv.org/pdf/2402.14194v2
2024-07-11T16:50:08Z
2024-02-22T00:38:43Z
BeTAIL: Behavior Transformer Adversarial Imitation Learning from Human Racing Gameplay
Imitation learning learns a policy from demonstrations without requiring hand-designed reward functions. In many robotic tasks, such as autonomous racing, imitated policies must model complex environment dynamics and human decision-making. Sequence modeling is highly effective in capturing intricate patterns of motion sequences but struggles to adapt to new environments or distribution shifts that are common in real-world robotics tasks. In contrast, Adversarial Imitation Learning (AIL) can mitigate this effect, but struggles with sample inefficiency and handling complex motion patterns. Thus, we propose BeTAIL: Behavior Transformer Adversarial Imitation Learning, which combines a Behavior Transformer (BeT) policy from human demonstrations with online AIL. BeTAIL adds an AIL residual policy to the BeT policy to model the sequential decision-making process of human experts and correct for out-of-distribution states or shifts in environment dynamics. We test BeTAIL on three challenges with expert-level demonstrations of real human gameplay in Gran Turismo Sport. Our proposed residual BeTAIL reduces environment interactions and improves racing performance and stability, even when the BeT is pretrained on different tracks than downstream learning. Videos and code available at: https://sites.google.com/berkeley.edu/BeTAIL/home.
[ "['Catherine Weaver' 'Chen Tang' 'Ce Hao' 'Kenta Kawamoto'\n 'Masayoshi Tomizuka' 'Wei Zhan']" ]
null
null
2402.14202
null
null
http://arxiv.org/pdf/2402.14202v3
2024-06-05T01:35:24Z
2024-02-22T01:07:48Z
Comparing Graph Transformers via Positional Encodings
The distinguishing power of graph transformers is closely tied to the choice of positional encoding: features used to augment the base transformer with information about the graph. There are two primary types of positional encoding: absolute positional encodings (APEs) and relative positional encodings (RPEs). APEs assign features to each node and are given as input to the transformer. RPEs instead assign a feature to each pair of nodes, e.g., graph distance, and are used to augment the attention block. A priori, it is unclear which method is better for maximizing the power of the resulting graph transformer. In this paper, we aim to understand the relationship between these different types of positional encodings. Interestingly, we show that graph transformers using APEs and RPEs are equivalent in terms of distinguishing power. In particular, we demonstrate how to interchange APEs and RPEs while maintaining their distinguishing power in terms of graph transformers. Based on our theoretical results, we provide a study on several APEs and RPEs (including the resistance distance and the recently introduced stable and expressive positional encoding (SPE)) and compare their distinguishing power in terms of transformers. We believe our work will help navigate the huge number of choices of positional encoding and will provide guidance on the future design of positional encodings for graph transformers.
[ "['Mitchell Black' 'Zhengchao Wan' 'Gal Mishne' 'Amir Nayyeri' 'Yusu Wang']" ]
null
null
2402.14205
null
null
http://arxiv.org/pdf/2402.14205v1
2024-02-22T01:18:55Z
2024-02-22T01:18:55Z
Compression Robust Synthetic Speech Detection Using Patched Spectrogram Transformer
Many deep learning synthetic speech generation tools are readily available. The use of synthetic speech has caused financial fraud, impersonation of people, and misinformation to spread. For this reason forensic methods that can detect synthetic speech have been proposed. Existing methods often overfit on one dataset and their performance reduces substantially in practical scenarios such as detecting synthetic speech shared on social platforms. In this paper we propose the Patched Spectrogram Synthetic Speech Detection Transformer (PS3DT), a synthetic speech detector that converts a time domain speech signal to a mel-spectrogram and processes it in patches using a transformer neural network. We evaluate the detection performance of PS3DT on the ASVspoof2019 dataset. Our experiments show that PS3DT performs well on the ASVspoof2019 dataset compared to other approaches using spectrograms for synthetic speech detection. We also investigate the generalization performance of PS3DT on the In-the-Wild dataset. PS3DT generalizes better than several existing methods in detecting synthetic speech from an out-of-distribution dataset. We also evaluate the robustness of PS3DT in detecting telephone quality synthetic speech and synthetic speech shared on social platforms (compressed speech). PS3DT is robust to compression and can detect telephone quality synthetic speech better than several existing methods.
[ "['Amit Kumar Singh Yadav' 'Ziyue Xiang' 'Kratika Bhagtani'\n 'Paolo Bestagini' 'Stefano Tubaro' 'Edward J. Delp']" ]
null
null
2402.14208
null
null
http://arxiv.org/pdf/2402.14208v3
2024-06-24T04:49:16Z
2024-02-22T01:20:51Z
LLM-Assisted Content Conditional Debiasing for Fair Text Embedding
Mitigating biases in machine learning models has become an increasing concern in Natural Language Processing (NLP), particularly in developing fair text embeddings, which are crucial yet challenging for real-world applications like search engines. In response, this paper proposes a novel method for learning fair text embeddings. First, we define a novel content-conditional equal distance (CCED) fairness for text embeddings, ensuring content-conditional independence between sensitive attributes and text embeddings. Building on CCED, we introduce a content-conditional debiasing (CCD) loss to ensure that embeddings of texts with different sensitive attributes but identical content maintain the same distance from the embedding of their corresponding neutral text. Additionally, we tackle the issue of insufficient training data by using Large Language Models (LLMs) with instructions to fairly augment texts into different sensitive groups. Our extensive evaluations show that our approach effectively enhances fairness while maintaining the utility of embeddings. Furthermore, our augmented dataset, combined with the CCED metric, serves as a new benchmark for evaluating fairness.
[ "['Wenlong Deng' 'Blair Chen' 'Beidi Zhao' 'Chiyu Zhang' 'Xiaoxiao Li'\n 'Christos Thrampoulidis']" ]
null
null
2402.14212
null
null
http://arxiv.org/pdf/2402.14212v1
2024-02-22T01:33:31Z
2024-02-22T01:33:31Z
Moonwalk: Inverse-Forward Differentiation
Backpropagation, while effective for gradient computation, falls short in addressing memory consumption, limiting scalability. This work explores forward-mode gradient computation as an alternative in invertible networks, showing its potential to reduce the memory footprint without substantial drawbacks. We introduce a novel technique based on a vector-inverse-Jacobian product that accelerates the computation of forward gradients while retaining the advantages of memory reduction and preserving the fidelity of true gradients. Our method, Moonwalk, has a time complexity linear in the depth of the network, unlike the quadratic time complexity of naïve forward mode, and empirically reduces computation time by several orders of magnitude without allocating more memory. We further accelerate Moonwalk by combining it with reverse-mode differentiation to achieve time complexity comparable with backpropagation while maintaining a much smaller memory footprint. Finally, we showcase the robustness of our method across several architecture choices. Moonwalk is the first forward-based method to compute true gradients in invertible networks in computation time comparable to backpropagation and using significantly less memory.
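As background for the forward-gradient computation Moonwalk builds on, the JAX snippet below evaluates a directional derivative with a Jacobian-vector product; this is a generic forward-mode illustration, not the paper's vector-inverse-Jacobian construction.

```python
# Forward-mode directional derivative via a Jacobian-vector product in JAX.
import jax
import jax.numpy as jnp

def f(w):
    # A toy scalar-valued function standing in for a network's loss.
    return jnp.sum(jnp.tanh(w) ** 2)

w = jnp.array([0.5, -1.0, 2.0])
v = jnp.array([1.0, 0.0, 0.0])        # direction to differentiate along
_, dfdv = jax.jvp(f, (w,), (v,))      # forward-mode: no activations stored
print(dfdv)
```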
[ "['Dmitrii Krylov' 'Armin Karamzade' 'Roy Fox']" ]
null
null
2402.14213
null
null
http://arxiv.org/pdf/2402.14213v2
2024-07-14T03:48:28Z
2024-02-22T01:42:12Z
Contrastive Learning of Shared Spatiotemporal EEG Representations Across Individuals for Naturalistic Neuroscience
Neural representations induced by naturalistic stimuli offer insights into how humans respond to stimuli in daily life. Understanding neural mechanisms underlying naturalistic stimuli processing hinges on the precise identification and extraction of the shared neural patterns that are consistently present across individuals. Targeting the Electroencephalogram (EEG) technique, known for its rich spatial and temporal information, this study presents a framework for Contrastive Learning of Shared SpatioTemporal EEG Representations across individuals (CL-SSTER). CL-SSTER utilizes contrastive learning to maximize the similarity of EEG representations across individuals for identical stimuli, contrasting with those for varied stimuli. The network employed spatial and temporal convolutions to simultaneously learn the spatial and temporal patterns inherent in EEG. The versatility of CL-SSTER was demonstrated on three EEG datasets, including a synthetic dataset, a natural speech comprehension EEG dataset, and an emotional video watching EEG dataset. CL-SSTER attained the highest inter-subject correlation (ISC) values compared to the state-of-the-art ISC methods. The latent representations generated by CL-SSTER exhibited reliable spatiotemporal EEG patterns, which can be explained by properties of the naturalistic stimuli. CL-SSTER serves as an interpretable and scalable framework for the identification of inter-subject shared neural representations in naturalistic neuroscience.
[ "['Xinke Shen' 'Lingyi Tao' 'Xuyang Chen' 'Sen Song' 'Quanying Liu'\n 'Dan Zhang']" ]
null
null
2402.14220
null
null
http://arxiv.org/pdf/2402.14220v2
2024-06-09T21:43:28Z
2024-02-22T01:53:56Z
Estimating Unknown Population Sizes Using the Hypergeometric Distribution
The multivariate hypergeometric distribution describes sampling without replacement from a discrete population of elements divided into multiple categories. Addressing a gap in the literature, we tackle the challenge of estimating discrete distributions when both the total population size and the sizes of its constituent categories are unknown. Here, we propose a novel solution using the hypergeometric likelihood to solve this estimation challenge, even in the presence of severe under-sampling. We develop our approach to account for a data generating process where the ground-truth is a mixture of distributions conditional on a continuous latent variable, such as with collaborative filtering, using the variational autoencoder framework. Empirical data simulation demonstrates that our method outperforms other likelihood functions used to model count data, both in terms of accuracy of population size estimate and in its ability to learn an informative latent space. We demonstrate our method's versatility through applications in NLP, by inferring and estimating the complexity of latent vocabularies in text excerpts, and in biology, by accurately recovering the true number of gene transcripts from sparse single-cell genomics data.
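A short illustration of the hypergeometric likelihood at the heart of the method: scoring a candidate set of category totals against counts observed without replacement. The numbers below are purely illustrative.

```python
# Log-likelihood of observed counts under the multivariate hypergeometric.
from scipy.stats import multivariate_hypergeom

category_sizes = [50, 30, 20]   # candidate (unknown) category totals
observed = [5, 3, 2]            # counts drawn without replacement

# n is the sample size; here it equals the total of the observed counts.
logp = multivariate_hypergeom.logpmf(x=observed, m=category_sizes,
                                     n=sum(observed))
print(logp)
```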
[ "['Liam Hodgson' 'Danilo Bzdok']" ]
null
null
2402.14227
null
null
http://arxiv.org/pdf/2402.14227v2
2024-04-03T15:33:47Z
2024-02-22T02:17:50Z
Quaternion recurrent neural network with real-time recurrent learning and maximum correntropy criterion
We develop a robust quaternion recurrent neural network (QRNN) for real-time processing of 3D and 4D data with outliers. This is achieved by combining the real-time recurrent learning (RTRL) algorithm and the maximum correntropy criterion (MCC) as a loss function. While both the mean square error and maximum correntropy criterion are viable cost functions, it is shown that the non-quadratic maximum correntropy loss function is less sensitive to outliers, making it suitable for applications with multidimensional noisy or uncertain data. Both algorithms are derived based on the novel generalised HR (GHR) calculus, which allows for the differentiation of real functions of quaternion variables and offers the product and chain rules, thus enabling elegant and compact derivations. Simulation results in the context of motion prediction of chest internal markers for lung cancer radiotherapy, which includes regular and irregular breathing sequences, support the analysis.
[ "['Pauline Bourigault' 'Dongpo Xu' 'Danilo P. Mandic']" ]
null
null
2402.14228
null
null
http://arxiv.org/pdf/2402.14228v2
2024-02-27T08:47:37Z
2024-02-22T02:20:08Z
COPR: Continual Human Preference Learning via Optimal Policy Regularization
Reinforcement Learning from Human Feedback (RLHF) is commonly utilized to improve the alignment of Large Language Models (LLMs) with human preferences. Given the evolving nature of human preferences, continual alignment becomes more crucial and practical in comparison to traditional static alignment. Nevertheless, making RLHF compatible with Continual Learning (CL) is challenging due to its complex process. Meanwhile, directly learning new human preferences may lead to Catastrophic Forgetting (CF) of historical preferences, resulting in unhelpful or harmful outputs. To overcome these challenges, we propose the Continual Optimal Policy Regularization (COPR) method, which draws inspiration from the optimal policy theory. COPR utilizes a sampling distribution as a demonstration and regularization constraints for CL. It adopts the Lagrangian Duality (LD) method to dynamically regularize the current policy based on the historically optimal policy, which prevents CF and avoids over-emphasizing unbalanced objectives. We also provide formal proof for the learnability of COPR. The experimental results show that COPR outperforms strong CL baselines on our proposed benchmark, in terms of reward-based, GPT-4 evaluations and human assessment. Furthermore, we validate the robustness of COPR under various CL settings, including different backbones, replay memory sizes, and learning orders.
[ "['Han Zhang' 'Lin Gui' 'Yu Lei' 'Yuanzhao Zhai' 'Yehong Zhang' 'Yulan He'\n 'Hui Wang' 'Yue Yu' 'Kam-Fai Wong' 'Bin Liang' 'Ruifeng Xu']" ]
null
null
2402.14229
null
null
http://arxiv.org/pdf/2402.14229v1
2024-02-22T02:20:24Z
2024-02-22T02:20:24Z
Sample-Efficient Linear Regression with Self-Selection Bias
We consider the problem of linear regression with self-selection bias in the unknown-index setting, as introduced in recent work by Cherapanamjeri, Daskalakis, Ilyas, and Zampetakis [STOC 2023]. In this model, one observes $m$ i.i.d. samples $(\mathbf{x}_{\ell},z_{\ell})_{\ell=1}^m$ where $z_{\ell}=\max_{i\in [k]}\{\mathbf{x}_{\ell}^T\mathbf{w}_i+\eta_{i,\ell}\}$, but the maximizing index $i_{\ell}$ is unobserved. Here, the $\mathbf{x}_{\ell}$ are assumed to be $\mathcal{N}(0,I_n)$ and the noise distribution $\boldsymbol{\eta}_{\ell}\sim \mathcal{D}$ is centered and independent of $\mathbf{x}_{\ell}$. We provide a novel and near optimally sample-efficient (in terms of $k$) algorithm to recover $\mathbf{w}_1,\ldots,\mathbf{w}_k\in \mathbb{R}^n$ up to additive $\ell_2$-error $\varepsilon$ with polynomial sample complexity $\tilde{O}(n)\cdot \mathsf{poly}(k,1/\varepsilon)$ and significantly improved time complexity $\mathsf{poly}(n,k,1/\varepsilon)+O(\log(k)/\varepsilon)^{O(k)}$. When $k=O(1)$, our algorithm runs in $\mathsf{poly}(n,1/\varepsilon)$ time, generalizing the polynomial guarantee of an explicit moment matching algorithm of Cherapanamjeri, et al. for $k=2$ and when it is known that $\mathcal{D}=\mathcal{N}(0,I_k)$. Our algorithm succeeds under significantly relaxed noise assumptions, and therefore also succeeds in the related setting of max-linear regression where the added noise is taken outside the maximum. For this problem, our algorithm is efficient in a much larger range of $k$ than the state-of-the-art due to Ghosh, Pananjady, Guntuboyina, and Ramchandran [IEEE Trans. Inf. Theory 2022] for not too small $\varepsilon$, and leads to improved algorithms for any $\varepsilon$ by providing a warm start for existing local convergence methods.
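The snippet below generates synthetic data from the unknown-index self-selection model in the abstract, purely to make the observation model concrete; it is not the paper's recovery algorithm.

```python
# Generate data from the unknown-index self-selection model:
# z = max_i (x^T w_i + eta_i), with the argmax index hidden.
import numpy as np

rng = np.random.default_rng(0)
n, k, m = 10, 3, 5000                    # dimension, models, samples
W = rng.standard_normal((k, n))          # the unknown w_1, ..., w_k
X = rng.standard_normal((m, n))          # x_l ~ N(0, I_n)
noise = rng.standard_normal((m, k))      # centered noise, independent of X
Z = (X @ W.T + noise).max(axis=1)        # observed z_l; index unobserved
print(X.shape, Z.shape)
```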
[ "['Jason Gaitonde' 'Elchanan Mossel']" ]
null
null
2402.14236
null
null
http://arxiv.org/pdf/2402.14236v1
2024-02-22T02:36:14Z
2024-02-22T02:36:14Z
Automated Design and Optimization of Distributed Filtering Circuits via Reinforcement Learning
Designing distributed filtering circuits (DFCs) is complex and time-consuming, with the circuit performance relying heavily on the expertise and experience of electronics engineers. However, manual design methods tend to have exceedingly low efficiency. This study proposes a novel end-to-end automated method for fabricating circuits to improve the design of DFCs. The proposed method harnesses reinforcement learning (RL) algorithms, eliminating the dependence on the design experience of engineers. Thus, it significantly reduces the subjectivity and constraints associated with circuit design. The experimental findings demonstrate clear improvements in both design efficiency and quality when comparing the proposed method with traditional engineer-driven methods. In particular, the proposed method achieves superior performance when designing complex or rapidly evolving DFCs. Furthermore, compared to existing circuit automation design techniques, the proposed method demonstrates superior design efficiency, highlighting the substantial potential of RL in circuit design automation.
[ "['Peng Gao' 'Tao Yu' 'Fei Wang' 'Ru-Yue Yuan']" ]
null
null
2402.14244
null
null
http://arxiv.org/pdf/2402.14244v1
2024-02-22T03:11:09Z
2024-02-22T03:11:09Z
MENTOR: Guiding Hierarchical Reinforcement Learning with Human Feedback and Dynamic Distance Constraint
Hierarchical reinforcement learning (HRL) provides a promising solution for complex tasks with sparse rewards for intelligent agents, which uses a hierarchical framework that divides tasks into subgoals and completes them sequentially. However, current methods struggle to find suitable subgoals for ensuring a stable learning process. Without additional guidance, it is impractical to rely solely on exploration or heuristic methods to determine subgoals in a large goal space. To address the issue, we propose a general hierarchical reinforcement learning framework incorporating human feedback and dynamic distance constraints (MENTOR). MENTOR acts as a "mentor", incorporating human feedback into high-level policy learning, to find better subgoals. As for the low-level policy, MENTOR designs a dual policy that decouples exploration and exploitation to stabilize training. Furthermore, although humans can simply break down tasks into subgoals to guide the right learning direction, subgoals that are too difficult or too easy can still hinder downstream learning efficiency. We propose the Dynamic Distance Constraint (DDC) mechanism to dynamically adjust the space of optional subgoals. Thus MENTOR can generate subgoals matching the low-level policy learning process from easy to hard. Extensive experiments demonstrate that MENTOR uses a small amount of human feedback to achieve significant improvement in complex tasks with sparse rewards.
[ "['Xinglin Zhou' 'Yifu Yuan' 'Shaofu Yang' 'Jianye Hao']" ]
null
null
2402.14245
null
null
http://arxiv.org/pdf/2402.14245v1
2024-02-22T03:14:03Z
2024-02-22T03:14:03Z
Enhancing Robotic Manipulation with AI Feedback from Multimodal Large Language Models
Recently, there has been considerable attention towards leveraging large language models (LLMs) to enhance decision-making processes. However, aligning the natural language text instructions generated by LLMs with the vectorized operations required for execution presents a significant challenge, often necessitating task-specific details. To circumvent the need for such task-specific granularity, inspired by preference-based policy learning approaches, we investigate the utilization of multimodal LLMs to provide automated preference feedback solely from image inputs to guide decision-making. In this study, we train a multimodal LLM, termed CriticGPT, capable of understanding trajectory videos in robot manipulation tasks, serving as a critic to offer analysis and preference feedback. Subsequently, we validate the effectiveness of preference labels generated by CriticGPT from a reward modeling perspective. Experimental evaluation of the algorithm's preference accuracy demonstrates its effective generalization ability to new tasks. Furthermore, performance on Meta-World tasks reveals that CriticGPT's reward model efficiently guides policy learning, surpassing rewards based on state-of-the-art pre-trained representation models.
[ "['Jinyi Liu' 'Yifu Yuan' 'Jianye Hao' 'Fei Ni' 'Lingzhi Fu' 'Yibin Chen'\n 'Yan Zheng']" ]
null
null
2402.14246
null
null
http://arxiv.org/pdf/2402.14246v1
2024-02-22T03:15:13Z
2024-02-22T03:15:13Z
Reconstruction-Based Anomaly Localization via Knowledge-Informed Self-Training
Anomaly localization, which involves localizing anomalous regions within images, is a significant industrial task. Reconstruction-based methods are widely adopted for anomaly localization because of their low complexity and high interpretability. Most existing reconstruction-based methods use only normal samples to construct the model. If anomalous samples are appropriately utilized in the anomaly localization process, the localization performance can be improved. However, usually only weakly labeled anomalous samples are available, which limits the improvement. In many cases, we can obtain some knowledge of anomalies summarized by domain experts. Taking advantage of such knowledge can help us better utilize the anomalous samples and thus further improve localization performance. In this paper, we propose a novel reconstruction-based method named knowledge-informed self-training (KIST), which integrates knowledge into the reconstruction model through self-training. Specifically, KIST utilizes weakly labeled anomalous samples in addition to normal ones and exploits knowledge to yield pixel-level pseudo-labels for the anomalous samples. Based on the pseudo-labels, KIST applies a novel loss that promotes the reconstruction of normal pixels while suppressing the reconstruction of anomalous pixels. We conduct experiments on different datasets and demonstrate the advantages of KIST over existing reconstruction-based methods.
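One plausible shape for such a loss, sketched under assumptions: normal pixels are penalized for reconstruction error, while pixels pseudo-labeled anomalous are penalized for being reconstructed too faithfully, via a hinge. The margin formulation is an illustrative stand-in, not the paper's exact objective.

```python
import torch

def kist_style_loss(x, x_hat, pseudo_mask, margin=0.5):
    """Illustrative knowledge-informed reconstruction loss.
    x, x_hat: image and its reconstruction
    pseudo_mask: float tensor, 1.0 on pixels pseudo-labeled anomalous, 0.0 on normal."""
    err = (x - x_hat) ** 2
    # Normal pixels: drive reconstruction error toward zero
    normal = (err * (1 - pseudo_mask)).sum() / (1 - pseudo_mask).sum().clamp(min=1)
    # Anomalous pixels: push reconstruction error above a margin (suppress reconstruction)
    anomalous = (torch.relu(margin - err) * pseudo_mask).sum() / pseudo_mask.sum().clamp(min=1)
    return normal + anomalous
```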
[ "['Cheng Qian' 'Xiaoxian Lao' 'Chunguang Li']" ]
null
null
2402.14254
null
null
http://arxiv.org/pdf/2402.14254v1
2024-02-22T03:41:05Z
2024-02-22T03:41:05Z
A hierarchical decomposition for explaining ML performance discrepancies
Machine learning (ML) algorithms can often differ in performance across domains. Understanding $\textit{why}$ their performance differs is crucial for determining what types of interventions (e.g., algorithmic or operational) are most effective at closing the performance gaps. Existing methods focus on $\textit{aggregate decompositions}$ of the total performance gap into the impact of a shift in the distribution of features $p(X)$ versus the impact of a shift in the conditional distribution of the outcome $p(Y|X)$; however, such coarse explanations offer only a few options for how one can close the performance gap. $\textit{Detailed variable-level decompositions}$ that quantify the importance of each variable to each term in the aggregate decomposition can provide a much deeper understanding and suggest much more targeted interventions. However, existing methods assume knowledge of the full causal graph or make strong parametric assumptions. We introduce a nonparametric hierarchical framework that provides both aggregate and detailed decompositions for explaining why the performance of an ML algorithm differs across domains, without requiring causal knowledge. We derive debiased, computationally efficient estimators and statistical inference procedures for asymptotically valid confidence intervals.
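For intuition, a naive plug-in version of the aggregate decomposition fits in a few lines: reweighting source losses by a covariate density ratio isolates the $p(X)$ term, and the remainder is attributed to $p(Y|X)$. This is only the uncorrected plug-in estimate; the paper's contribution is debiased estimators with valid inference, which this sketch does not attempt.

```python
import numpy as np

def aggregate_decomposition(loss_src, loss_tgt, w_x):
    """Plug-in aggregate decomposition of a cross-domain performance gap (illustrative).
    loss_src: per-sample losses on source-domain data
    loss_tgt: per-sample losses on target-domain data
    w_x: density-ratio weights p_tgt(X) / p_src(X) on the source samples,
         assumed supplied by some black-box estimator (e.g., a domain classifier)."""
    perf_src = loss_src.mean()
    perf_shifted = np.average(loss_src, weights=w_x)  # source losses under target covariates
    perf_tgt = loss_tgt.mean()
    covariate_term = perf_shifted - perf_src     # attributable to the shift in p(X)
    conditional_term = perf_tgt - perf_shifted   # attributable to the shift in p(Y|X)
    return covariate_term, conditional_term
```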
[ "['Jean Feng' 'Harvineet Singh' 'Fan Xia' 'Adarsh Subbaswamy'\n 'Alexej Gossmann']" ]
null
null
2402.14259
null
null
http://arxiv.org/pdf/2402.14259v1
2024-02-22T03:46:08Z
2024-02-22T03:46:08Z
Word-Sequence Entropy: Towards Uncertainty Estimation in Free-Form Medical Question Answering Applications and Beyond
Uncertainty estimation plays a pivotal role in ensuring the reliability of safety-critical human-AI interaction systems, particularly in the medical domain. However, a general method for quantifying the uncertainty of free-form answers has yet to be established in open-ended medical question-answering (QA) tasks, where irrelevant words and sequences with limited semantic information can be the primary source of uncertainty due to the presence of generative inequality. In this paper, we propose Word-Sequence Entropy (WSE), which calibrates the uncertainty proportion at both the word and sequence levels according to semantic relevance, with greater emphasis placed on keywords and more relevant sequences when performing uncertainty quantification. We compare WSE with 6 baseline methods on 5 free-form medical QA datasets, utilizing 7 off-the-shelf large language models (LLMs), and show that WSE exhibits superior performance in accurate uncertainty measurement under two standard criteria for correctness evaluation (e.g., WSE outperforms the existing state-of-the-art method by 3.23% AUROC on the MedQA dataset). Additionally, regarding the potential for real-world medical QA applications, we achieve a significant enhancement in LLM performance when employing sequences with lower uncertainty, identified by WSE, as final answers (e.g., a +6.36% accuracy improvement on the COVID-QA dataset), without requiring any additional task-specific fine-tuning or architectural modifications.
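A loose sketch of the calibration idea, under stated assumptions: per-token surprisal is down-weighted for semantically irrelevant words, sequence scores are further weighted by sequence relevance, and entropy is taken over the sampled answer set. The relevance weights, normalization, and aggregation here are illustrative choices, not the authors' estimator.

```python
import numpy as np

def word_sequence_entropy(seq_logprobs, word_relevance, seq_relevance):
    """Illustrative word-sequence entropy over a set of sampled answers.
    seq_logprobs: list of arrays, per-token log-probabilities of each answer
    word_relevance: matching arrays of word relevance weights in [0, 1]
    seq_relevance: one relevance weight in [0, 1] per sampled answer"""
    scores = np.array([np.sum(wr * lp) for lp, wr in zip(seq_logprobs, word_relevance)])
    scores = scores * np.asarray(seq_relevance)   # emphasize more relevant sequences
    p = np.exp(scores - scores.max())             # softmax over calibrated scores
    p /= p.sum()
    return float(-np.sum(p * np.log(p + 1e-12)))  # higher entropy = more uncertain
```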
[ "['Zhiyuan Wang' 'Jinhao Duan' 'Chenxi Yuan' 'Qingyu Chen' 'Tianlong Chen'\n 'Huaxiu Yao' 'Yue Zhang' 'Ren Wang' 'Kaidi Xu' 'Xiaoshuang Shi']" ]
null
null
2402.14264
null
null
http://arxiv.org/pdf/2402.14264v2
2024-03-02T02:00:58Z
2024-02-22T04:03:32Z
Structure-agnostic Optimality of Doubly Robust Learning for Treatment Effect Estimation
Average treatment effect estimation is the most central problem in causal inference, with applications in numerous disciplines. While many estimation strategies have been proposed in the literature, their statistical optimality remains an open area of investigation, especially in regimes where these methods do not achieve parametric rates. In this paper, we adopt the recently introduced structure-agnostic framework of statistical lower bounds, which imposes no structural properties on the nuisance functions other than access to black-box estimators that achieve some statistical estimation rate. This framework is particularly appealing when one is only willing to consider estimation strategies that use non-parametric regression and classification oracles as black-box sub-processes. Within this framework, we prove the statistical optimality of the celebrated and widely used doubly robust estimators for both the Average Treatment Effect (ATE) and the Average Treatment Effect on the Treated (ATT), as well as weighted variants of the former, which arise in policy evaluation.
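For reference, the standard doubly robust (AIPW) ATE estimator the paper analyzes can be written directly from black-box nuisance estimates; the clipping constant below is an implementation convenience, not part of the estimator's definition.

```python
import numpy as np

def aipw_ate(y, a, mu1, mu0, e, eps=1e-3):
    """Doubly robust / AIPW estimate of the average treatment effect.
    y: outcomes; a: binary treatment indicators
    mu1, mu0: black-box outcome-regression predictions E[Y|X, A=1], E[Y|X, A=0]
    e: black-box propensity scores P(A=1|X)"""
    e = np.clip(e, eps, 1 - eps)  # guard against extreme propensities
    psi = (mu1 - mu0
           + a * (y - mu1) / e
           - (1 - a) * (y - mu0) / (1 - e))
    return psi.mean(), psi.std(ddof=1) / np.sqrt(len(y))  # point estimate and standard error
```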
[ "['Jikai Jin' 'Vasilis Syrgkanis']" ]
null
null
2402.14270
null
null
http://arxiv.org/pdf/2402.14270v2
2024-03-01T15:21:16Z
2024-02-22T04:10:57Z
Take the Bull by the Horns: Hard Sample-Reweighted Continual Training Improves LLM Generalization
In the rapidly advancing arena of large language models (LLMs), a key challenge is to enhance their capabilities amid a looming shortage of high-quality training data. Our study starts from an empirical strategy for lightweight continual training of LLMs on their original pre-training datasets, with a specific focus on selective retention of samples that incur moderately high losses. These samples are deemed informative and beneficial for model refinement, in contrast to the highest-loss samples, which are discarded due to their correlation with data noise and complexity. We then formalize this strategy into a principled framework of Instance-Reweighted Distributionally Robust Optimization (IR-DRO). IR-DRO is designed to dynamically prioritize the training focus on informative samples through an instance-reweighting mechanism, streamlined by a closed-form solution for straightforward integration into established training protocols. Through rigorous experimentation with various models and datasets, our findings indicate that our sample-targeted methods significantly improve LLM performance across multiple benchmarks, in both continual pre-training and instruction-tuning scenarios. Our codes are available at https://github.com/VITA-Group/HardFocusTraining.
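As a hedged illustration of what such a closed form can look like: for KL-regularized instance-reweighted DRO, the optimal weights are a softmax over per-sample losses, and masking out the very highest losses first mimics the "moderately high loss" retention. The temperature, drop fraction, and masking rule below are assumptions, not the paper's exact solution.

```python
import torch

def irdro_style_weights(losses: torch.Tensor, tau: float = 1.0, drop_top_frac: float = 0.05):
    """Illustrative instance reweighting: softmax over losses (the closed form for
    KL-regularized DRO), after masking the top fraction of losses as likely noise."""
    losses = losses.detach()
    k = max(1, int(drop_top_frac * losses.numel()))
    cutoff = losses.topk(k).values.min()
    mask = (losses < cutoff).float()              # drop the highest-loss samples
    w = torch.softmax(losses / tau, dim=0) * mask
    return w / w.sum().clamp(min=1e-12)

# Usage: weighted_loss = (irdro_style_weights(per_sample_losses) * per_sample_losses).sum()
```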
[ "['Xuxi Chen' 'Zhendong Wang' 'Daouda Sow' 'Junjie Yang' 'Tianlong Chen'\n 'Yingbin Liang' 'Mingyuan Zhou' 'Zhangyang Wang']" ]
null
null
2402.14280
null
null
http://arxiv.org/pdf/2402.14280v1
2024-02-22T04:41:56Z
2024-02-22T04:41:56Z
Secure Navigation using Landmark-based Localization in a GPS-denied Environment
In modern battlefield scenarios, reliance on GPS for navigation can be a critical vulnerability. Adversaries often employ tactics to deny or deceive GPS signals, necessitating alternative methods for the localization and navigation of mobile troops. Range-free localization methods such as DV-HOP rely on radio-based anchors and their average hop distance, which suffers from poor accuracy and stability in dynamic and sparse network topologies. Vision-based approaches like SLAM and Visual Odometry use sensor-fusion techniques for map generation and pose estimation, but are more sophisticated and computationally expensive. This paper proposes a novel framework that integrates landmark-based localization (LanBLoc) with an Extended Kalman Filter (EKF) to predict the future state of moving entities on the battlefield. Our framework utilizes safe-trajectory information generated by the troop control center by considering identifiable landmarks and pre-defined hazard maps. It performs point-inclusion tests on the convex hull of the trajectory segments to ensure the safety and survivability of a moving entity and to determine the next forward-movement decision. We present a simulated battlefield scenario for two approaches (with and without the EKF) that guide a moving entity along an obstacle- and hazard-free path. Using the proposed method, we observed a 6.51% error in estimated trajectory length, with an Average Displacement Error (ADE) of 2.97 m and a Final Displacement Error (FDE) of 3.27 m. The results demonstrate that our approach not only ensures the safety of mobile units by keeping them within the secure trajectory but also enhances operational effectiveness by adapting to the evolving threat landscape.
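For concreteness, a generic EKF predict/update cycle of the kind such a framework would run is sketched below; the motion model, measurement model, and noise covariances are left as assumed callables rather than the paper's specific landmark formulation.

```python
import numpy as np

def ekf_step(x, P, z, f, F_jac, h, H_jac, Q, R):
    """One textbook Extended Kalman Filter cycle (illustrative, generic models).
    x, P: prior state estimate and covariance; z: new measurement
    f, h: nonlinear motion and measurement functions; F_jac, H_jac: their Jacobians
    Q, R: process and measurement noise covariances."""
    # Predict: propagate the state and covariance through the motion model
    x_pred = f(x)
    F = F_jac(x)
    P_pred = F @ P @ F.T + Q
    # Update: correct the prediction with the landmark-based measurement
    H = H_jac(x_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)          # Kalman gain
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```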
[ "['Ganesh Sapkota' 'Sanjay Madria']" ]
null
null
2402.14285
null
null
http://arxiv.org/pdf/2402.14285v3
2024-06-03T02:47:27Z
2024-02-22T04:55:58Z
Symbolic Music Generation with Non-Differentiable Rule Guided Diffusion
We study the problem of symbolic music generation (e.g., generating piano rolls), with a technical focus on non-differentiable rule guidance. Musical rules are often expressed in symbolic form over note characteristics, such as note density or chord progression, and many of them are non-differentiable, which poses a challenge when using them for guided diffusion. We propose SCG, a novel guidance method that requires only forward evaluation of rule functions and works with pre-trained diffusion models in a plug-and-play way, thus achieving training-free guidance for non-differentiable rules for the first time. Additionally, we introduce a latent diffusion architecture for symbolic music generation with high time resolution, which can be composed with SCG in a plug-and-play fashion. Compared to standard strong baselines in symbolic music generation, this framework demonstrates marked advancements in music quality and rule-based controllability, outperforming current state-of-the-art generators in a variety of settings. For detailed demonstrations, code, and model checkpoints, please visit our project website: https://scg-rule-guided-music.github.io/.
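One simple way to picture forward-evaluation-only guidance is a best-of-n selection at each reverse-diffusion step: sample several stochastic candidates from the pre-trained sampler and keep the one the (possibly non-differentiable) rule function scores best. This caricature conveys the plug-and-play idea but is not the paper's formulation; `denoise_step` and `rule_fn` are assumed interfaces.

```python
import torch

@torch.no_grad()
def rule_guided_step(x_t, denoise_step, rule_fn, n_candidates: int = 8):
    """Illustrative training-free guidance for one reverse-diffusion step.
    denoise_step: stochastic sampler x_t -> candidate x_{t-1} (pre-trained, assumed)
    rule_fn: black-box rule score, higher is better; no gradients needed."""
    candidates = [denoise_step(x_t) for _ in range(n_candidates)]
    scores = torch.tensor([float(rule_fn(c)) for c in candidates])
    return candidates[int(scores.argmax())]  # keep the best-scoring candidate
```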
[ "['Yujia Huang' 'Adishree Ghatare' 'Yuanzhe Liu' 'Ziniu Hu'\n 'Qinsheng Zhang' 'Chandramouli S Sastry' 'Siddharth Gururani'\n 'Sageev Oore' 'Yisong Yue']" ]
null
null
2402.14289
null
null
http://arxiv.org/pdf/2402.14289v1
2024-02-22T05:05:30Z
2024-02-22T05:05:30Z
TinyLLaVA: A Framework of Small-scale Large Multimodal Models
We present the TinyLLaVA framework, which provides a unified perspective on designing and analyzing small-scale Large Multimodal Models (LMMs). We empirically study the effects of different vision encoders, connection modules, language models, training data, and training recipes. Our extensive experiments show that, with better-quality data combined with better training recipes, smaller LMMs can consistently achieve performance on par with bigger LMMs. Under our framework, we train a family of small-scale LMMs. Our best model, TinyLLaVA-3.1B, achieves better overall performance than existing 7B models such as LLaVA-1.5 and Qwen-VL. We hope our findings can serve as baselines for future research in terms of data scaling, training setups, and model selection. Our model weights and code will be made public.
[ "['Baichuan Zhou' 'Ying Hu' 'Xi Weng' 'Junlong Jia' 'Jie Luo' 'Xien Liu'\n 'Ji Wu' 'Lei Huang']" ]
null
null
2402.14290
null
null
http://arxiv.org/pdf/2402.14290v1
2024-02-22T05:07:31Z
2024-02-22T05:07:31Z
CEV-LM: Controlled Edit Vector Language Model for Shaping Natural Language Generations
As large-scale language models become the standard for text generation, there is a greater need to tailor generations to be more or less concise, targeted, and informative, depending on the audience/application. Existing control approaches primarily adjust the semantic (e.g., emotion, topics), structural (e.g., syntax tree, parts of speech), and lexical (e.g., keyword/phrase inclusion) properties of text, but are insufficient for complex objectives such as pacing, which controls the complexity and readability of the text. In this paper, we introduce CEV-LM - a lightweight, semi-autoregressive language model that utilizes constrained edit vectors to control three complementary metrics (speed, volume, and circuitousness) that quantify the shape of text (e.g., pacing of content). We study an extensive set of state-of-the-art controllable text generation (CTG) models and find that CEV-LM provides significantly more targeted and precise control of these three metrics while preserving semantic content, using less training data, and containing fewer parameters.
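To make the three shape metrics concrete, here are simplified stand-in definitions computed over a sequence of sentence embeddings; they capture the spirit (step size, spread, and how indirect the path is) but are not the paper's exact formulations.

```python
import numpy as np

def text_shape_metrics(emb):
    """Rough proxies for speed, volume, and circuitousness of a text,
    given emb: an (n_sentences x d) array of sentence embeddings."""
    steps = np.linalg.norm(np.diff(emb, axis=0), axis=1)
    speed = steps.mean()                               # average jump between sentences
    volume = emb.var(axis=0).sum()                     # spread of the covered space
    direct = np.linalg.norm(emb[-1] - emb[0])          # straight-line start-to-end distance
    circuitousness = steps.sum() / max(direct, 1e-12)  # travelled path vs. direct route
    return speed, volume, circuitousness
```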
[ "['Samraj Moorjani' 'Adit Krishnan' 'Hari Sundaram']" ]
null
null
2402.14294
null
null
http://arxiv.org/pdf/2402.14294v1
2024-02-22T05:16:04Z
2024-02-22T05:16:04Z
High-arity PAC learning via exchangeability
We develop a theory of high-arity PAC learning, which is statistical learning in the presence of "structured correlation". In this theory, hypotheses are either graphs, hypergraphs or, more generally, structures in finite relational languages, and i.i.d. sampling is replaced by sampling an induced substructure, producing an exchangeable distribution. We prove a high-arity version of the fundamental theorem of statistical learning by characterizing high-arity (agnostic) PAC learnability in terms of finiteness of a purely combinatorial dimension and in terms of an appropriate version of uniform convergence.
[ "['Leonardo N. Coregliano' 'Maryanthe Malliaris']" ]
null
null
2402.14301
null
null
http://arxiv.org/pdf/2402.14301v2
2024-04-17T00:55:09Z
2024-02-22T05:41:24Z
GenSERP: Large Language Models for Whole Page Presentation
The advent of large language models (LLMs) brings an opportunity to minimize the effort in search engine result page (SERP) organization. In this paper, we propose GenSERP, a framework that leverages LLMs with vision in a few-shot setting to dynamically organize intermediate search results, including generated chat answers, website snippets, multimedia data, and knowledge panels, into a coherent SERP layout based on a user's query. Our approach has three main stages: (1) an information-gathering phase, where the LLM continuously orchestrates API tools to retrieve different types of items and proposes candidate layouts based on the retrieved items, until it is confident enough to generate the final result; (2) an answer-generation phase, where the LLM populates the layouts with the retrieved content and adaptively optimizes the ranking of items and the UX configurations of the SERP, assigning each item a location on the page along with its UX display details; and (3) a scoring phase, where an LLM with vision scores all the generated SERPs based on how likely each is to satisfy the user, and then sends the highest-scoring one to rendering. GenSERP features two generation paradigms: (1) coarse-to-fine, which allows it to approach an optimal layout in a more manageable way, and (2) beam search, which gives it a better chance of hitting the optimal solution than greedy decoding. Offline experimental results on real-world data demonstrate how LLMs can contextually organize heterogeneous search results on the fly and provide a promising user experience.
[ "['Zhenning Zhang' 'Yunan Zhang' 'Suyu Ge' 'Guangwei Weng' 'Mridu Narang'\n 'Xia Song' 'Saurabh Tiwary']" ]
null
null
2402.14305
null
null
http://arxiv.org/pdf/2402.14305v1
2024-02-22T05:48:54Z
2024-02-22T05:48:54Z
Towards Efficient Pareto-optimal Utility-Fairness between Groups in Repeated Rankings
In this paper, we tackle the problem of computing a sequence of rankings with the guarantee of a Pareto-optimal balance between (1) maximizing the utility of the consumers and (2) minimizing unfairness between producers of the items. Such a multi-objective optimization problem is typically solved using a combination of a scalarization method and linear programming on bi-stochastic matrices, representing the distribution of possible rankings of items. However, the above-mentioned approach relies on the Birkhoff-von Neumann (BvN) decomposition, whose computational complexity is $\mathcal{O}(n^5)$ with $n$ being the number of items, making it impractical for large-scale systems. To address this drawback, we introduce a novel approach to the above problem using the Expohedron - a permutahedron whose points represent all achievable exposures of items. On the Expohedron, we profile the Pareto curve that captures the trade-off between group fairness and user utility by identifying a finite number of Pareto-optimal solutions. We further propose an efficient method that relaxes our optimization problem onto the Expohedron's circumscribed $n$-sphere, which significantly improves the running time. Moreover, the approximate Pareto curve is asymptotically close to the real Pareto-optimal curve as the number of solutions increases. Our methods are applicable with different ranking merits that are non-decreasing functions of item relevance. The effectiveness of our methods is validated through experiments on both synthetic and real-world datasets.
[ "['Phuong Dinh Mai' 'Duc-Trong Le' 'Tuan-Anh Hoang' 'Dung D. Le']" ]
null
null
2402.14307
null
null
http://arxiv.org/pdf/2402.14307v1
2024-02-22T05:52:55Z
2024-02-22T05:52:55Z
An FPGA-Based Accelerator Enabling Efficient Support for CNNs with Arbitrary Kernel Sizes
Convolutional neural networks (CNNs) with large kernels, drawing inspiration from the key operations of vision transformers (ViTs), have demonstrated impressive performance in various vision-based applications. To address the issue of computational efficiency degradation in existing designs for supporting large-kernel convolutions, an FPGA-based inference accelerator is proposed for the efficient deployment of CNNs with arbitrary kernel sizes. Firstly, a Z-flow method is presented to optimize the computing data flow by maximizing data reuse opportunity. Besides, the proposed design, incorporating the kernel-segmentation (Kseg) scheme, enables extended support for large-kernel convolutions, significantly reducing the storage requirements for overlapped data. Moreover, based on the analysis of typical block structures in emerging CNNs, vertical-fused (VF) and horizontal-fused (HF) methods are developed to optimize CNN deployments from both computation and transmission perspectives. The proposed hardware accelerator, evaluated on Intel Arria 10 FPGA, achieves up to 3.91 times better DSP efficiency than prior art on the same network. Particularly, it demonstrates efficient support for large-kernel CNNs, achieving throughputs of 169.68 GOPS and 244.55 GOPS for RepLKNet-31 and PyConvResNet-50, respectively, both of which are implemented on hardware for the first time.
[ "['Miaoxin Wang' 'Xiao Wu' 'Jun Lin' 'Zhongfeng Wang']" ]
null
null
2402.14315
null
null
http://arxiv.org/pdf/2402.14315v2
2024-03-15T05:33:48Z
2024-02-22T06:17:11Z
Structure-Based Drug Design via 3D Molecular Generative Pre-training and Sampling
Structure-based drug design aims at generating high-affinity ligands with prior knowledge of 3D target structures. Existing methods either use a conditional generative model to learn the distribution of 3D ligands given target binding sites, or iteratively modify molecules to optimize a structure-based activity estimator. The former is highly constrained by data quantity and quality, which leaves optimization-based approaches more promising in practical scenarios. However, existing optimization-based approaches edit molecules in 2D space and use molecular docking to estimate activity from docking-predicted 3D target-ligand complexes. The misalignment between the action space and the objective hinders the performance of these models, especially for those that employ deep learning for acceleration. In this work, we propose MolEdit3D to combine 3D molecular generation with optimization frameworks. We develop a novel 3D graph editing model to generate molecules using fragments, and pre-train this model on abundant 3D ligands to learn target-independent properties. We then employ a target-guided self-learning strategy to improve target-related properties using self-sampled molecules. MolEdit3D achieves state-of-the-art performance on the majority of evaluation metrics and demonstrates a strong capability of capturing both target-dependent and target-independent properties.
[ "['Yuwei Yang' 'Siqi Ouyang' 'Xueyu Hu' 'Mingyue Zheng' 'Hao Zhou' 'Lei Li']" ]
null
null
2402.14332
null
null
http://arxiv.org/pdf/2402.14332v2
2024-02-26T02:11:10Z
2024-02-22T06:53:35Z
From Large to Small Datasets: Size Generalization for Clustering Algorithm Selection
In clustering algorithm selection, we are given a massive dataset and must efficiently select which clustering algorithm to use. We study this problem in a semi-supervised setting, with an unknown ground-truth clustering that we can only access through expensive oracle queries. Ideally, the clustering algorithm's output will be structurally close to the ground truth. We approach this problem by introducing a notion of size generalization for clustering algorithm accuracy. We identify conditions under which we can (1) subsample the massive clustering instance, (2) evaluate a set of candidate algorithms on the smaller instance, and (3) guarantee that the algorithm with the best accuracy on the small instance will have the best accuracy on the original big instance. We provide theoretical size generalization guarantees for three classic clustering algorithms: single-linkage, k-means++, and (a smoothed variant of) Gonzalez's k-centers heuristic. We validate our theoretical analysis with empirical results, observing that on real-world clustering instances, we can use a subsample of as little as 5% of the data to identify which algorithm is best on the full dataset.
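The workflow the paper analyzes is easy to express as a sketch: subsample the massive instance, score each candidate algorithm on the subsample against oracle-queried ground truth, and keep the winner. The 5% default mirrors the empirical finding; the candidate and accuracy callables are assumed interfaces, not the paper's code.

```python
import numpy as np

def select_clustering_algorithm(X, candidates, accuracy_fn, frac=0.05, seed=0):
    """Illustrative size-generalization algorithm selection.
    X: (n x d) data; candidates: dict name -> callable returning cluster labels
    accuracy_fn(labels, idx): structural accuracy vs. ground truth on subsample idx
    (obtained via expensive oracle queries, per the semi-supervised setting)."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=max(1, int(frac * len(X))), replace=False)
    X_sub = X[idx]
    scores = {name: accuracy_fn(algo(X_sub), idx) for name, algo in candidates.items()}
    best = max(scores, key=scores.get)  # under the paper's conditions, also best on full X
    return best, scores
```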
[ "['Vaggos Chatziafratis' 'Ishani Karmarkar' 'Ellen Vitercik']" ]
null
null
2402.14335
null
null
http://arxiv.org/pdf/2402.14335v1
2024-02-22T07:07:16Z
2024-02-22T07:07:16Z
HyperFast: Instant Classification for Tabular Data
Training deep learning models and performing hyperparameter tuning can be computationally demanding and time-consuming. Meanwhile, traditional machine learning methods like gradient-boosting algorithms remain the preferred choice for most tabular data applications, while neural-network alternatives require extensive hyperparameter tuning or work only on toy datasets under limited settings. In this paper, we introduce HyperFast, a meta-trained hypernetwork designed for instant classification of tabular data in a single forward pass. HyperFast generates a task-specific neural network tailored to an unseen dataset that can be directly used for classification inference, removing the need to train a model. We report extensive experiments with OpenML and genomic data, comparing HyperFast to competing tabular-data neural networks, traditional ML methods, AutoML systems, and boosting machines. HyperFast shows highly competitive results while being significantly faster. Additionally, our approach demonstrates robust adaptability across a variety of classification tasks with little to no fine-tuning, positioning HyperFast as a strong solution for numerous applications and rapid model deployment. HyperFast introduces a promising paradigm for fast classification, with the potential to substantially decrease the computational burden of deep learning. Our code, which offers a scikit-learn-like interface, along with the trained HyperFast model, can be found at https://github.com/AI-sandbox/HyperFast.
[ "['David Bonet' 'Daniel Mas Montserrat' 'Xavier Giró-i-Nieto'\n 'Alexander G. Ioannidis']" ]
null
null
2402.14337
null
null
http://arxiv.org/pdf/2402.14337v1
2024-02-22T07:12:34Z
2024-02-22T07:12:34Z
AURA: Natural Language Reasoning for Aleatoric Uncertainty in Rationales
Rationales behind answers not only explain model decisions but also help language models reason well on complex reasoning tasks. However, obtaining impeccable rationales is often impossible. Moreover, it is non-trivial to estimate the degree to which the rationales are faithful enough to encourage model performance. Thus, such reasoning tasks often compel models to output correct answers under undesirable rationales, leaving them sub-optimal compared to what they are fully capable of. In this work, we propose how to deal with imperfect rationales causing aleatoric uncertainty. We first identify ambiguous rationales via entropy scores over the given rationales, using the model's prior beliefs as a measure of informativeness. We then guide models to select one of two different reasoning models according to the ambiguity of the rationales. We empirically show that our proposed method remains robustly superior under adversarial rationale quality and in low-resource settings.
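A minimal sketch of the gating idea, with assumed names and an assumed thresholding rule: measure a rationale's ambiguity as the mean entropy of its token distributions and route ambiguous rationales to the more uncertainty-robust reasoner.

```python
import numpy as np

def choose_reasoner(token_probs, threshold, robust_reasoner, standard_reasoner):
    """Illustrative entropy-gated selection between two reasoning models.
    token_probs: (n_tokens x vocab) probability rows for the rationale tokens."""
    ent = -np.sum(token_probs * np.log(token_probs + 1e-12), axis=-1)  # per-token entropy
    ambiguous = ent.mean() > threshold  # high mean entropy = ambiguous rationale
    return robust_reasoner if ambiguous else standard_reasoner
```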
[ "['Hazel Kim']" ]
null
null
2402.14346
null
null
http://arxiv.org/pdf/2402.14346v1
2024-02-22T07:24:26Z
2024-02-22T07:24:26Z
Dependable Distributed Training of Compressed Machine Learning Models
The existing work on the distributed training of machine learning (ML) models has consistently overlooked the distribution of the achieved learning quality, focusing instead on its average value. This leads to poor dependability of the resulting ML models, whose performance may be much worse than expected. We fill this gap by proposing DepL, a framework for dependable learning orchestration, able to make high-quality, efficient decisions on (i) the data to leverage for learning, (ii) the models to use and when to switch among them, and (iii) the clusters of nodes, and the resources thereof, to exploit. For concreteness, we consider a full DNN and its compressed versions as the possible available models. Unlike previous studies, DepL guarantees that a target learning quality is reached with a target probability, while keeping the training cost at a minimum. We prove that DepL has a constant competitive ratio and polynomial complexity, and show that it outperforms the state-of-the-art by over 27% and closely matches the optimum.
[ "['Francesco Malandrino' 'Giuseppe Di Giacomo' 'Marco Levorato'\n 'Carla Fabiana Chiasserini']" ]
null
null
2402.14349
null
null
http://arxiv.org/pdf/2402.14349v2
2024-02-23T07:52:02Z
2024-02-22T07:39:41Z
Uncertainty-driven and Adversarial Calibration Learning for Epicardial Adipose Tissue Segmentation
Epicardial adipose tissue (EAT) is a type of visceral fat that can secrete large amounts of adipokines affecting the myocardium and coronary arteries. EAT volume and density can be used as independent risk markers, and volume measurement from noninvasive magnetic resonance images is the best method of assessing EAT. However, segmenting EAT is challenging due to the low contrast between EAT and pericardial effusion and the presence of motion artifacts. We propose a novel feature-latent-space multilevel supervision network (SPDNet) with uncertainty-driven and adversarial calibration learning to enhance segmentation for more accurate EAT volume estimation. The network first addresses the blurring of EAT edges caused by low-quality or out-of-distribution medical images in open medical environments by modeling the uncertainty as a Gaussian distribution in the feature latent space and using its Bayesian estimate as a regularization constraint to optimize SwinUNETR. Second, an adversarial training strategy is introduced to calibrate the segmentation feature map and account for the multi-scale feature differences between the uncertainty-guided predicted segmentation and the ground-truth segmentation; synthesizing the multi-scale adversarial loss directly improves the ability to discriminate similarity between tissues. Experiments on both a public cardiac MRI dataset (ACDC) and a real-world clinical-cohort EAT dataset show that the proposed network outperforms mainstream models, validating that uncertainty-driven and adversarial calibration learning can provide additional information for modeling multi-scale ambiguities.
[ "['Kai Zhao' 'Zhiming Liu' 'Jiaqi Liu' 'Jingbiao Zhou' 'Bihong Liao'\n 'Huifang Tang' 'Qiuyu Wang' 'Chunquan Li']" ]
null
null
2402.14361
null
null
http://arxiv.org/pdf/2402.14361v2
2024-04-12T18:27:34Z
2024-02-22T08:01:01Z
OpenTab: Advancing Large Language Models as Open-domain Table Reasoners
Large Language Models (LLMs) trained on large volumes of data excel at various natural language tasks, but they cannot handle tasks requiring knowledge they have not been trained on. One solution is to use a retriever that fetches relevant information to expand the LLM's knowledge scope. However, existing text-oriented retrieval-based LLMs are not ideal for structured table data due to diversified data modalities and large table sizes. In this work, we propose OpenTab, an open-domain table reasoning framework powered by LLMs. Overall, OpenTab leverages a table retriever to fetch relevant tables and then generates SQL programs to parse the retrieved tables efficiently. Utilizing the intermediate data derived from the SQL executions, it conducts grounded inference to produce accurate responses. Extensive experimental evaluation shows that OpenTab significantly outperforms baselines in both open- and closed-domain settings, achieving up to 21.5% higher accuracy. We further run ablation studies to validate the efficacy of our proposed system designs.
[ "['Kezhi Kong' 'Jiani Zhang' 'Zhengyuan Shen' 'Balasubramaniam Srinivasan'\n 'Chuan Lei' 'Christos Faloutsos' 'Huzefa Rangwala' 'George Karypis']" ]
null
null
2402.14367
null
null
http://arxiv.org/pdf/2402.14367v1
2024-02-22T08:11:22Z
2024-02-22T08:11:22Z
Representation Learning for Frequent Subgraph Mining
Identifying frequent subgraphs, also called network motifs, is crucial for analyzing and predicting the properties of real-world networks. However, finding large commonly occurring motifs remains challenging, not only due to the NP-hard subroutine of subgraph counting, but also due to the exponential growth in the number of possible subgraph patterns. Here we present Subgraph Pattern Miner (SPMiner), a novel neural approach for approximately finding frequent subgraphs in a large target graph. SPMiner combines graph neural networks, an order embedding space, and an efficient search strategy to identify the subgraph patterns that appear most frequently in the target graph. SPMiner first decomposes the target graph into many overlapping subgraphs and then encodes each subgraph into an order embedding space. SPMiner then uses a monotonic walk in the order embedding space to identify frequent motifs. Compared to existing approaches and possible neural alternatives, SPMiner is more accurate, faster, and more scalable. For 5- and 6-node motifs, we show that SPMiner can almost perfectly identify the most frequent motifs while being 100x faster than exact enumeration methods. In addition, SPMiner can reliably identify frequent 10-node motifs, well beyond the size limit of exact enumeration approaches. Finally, we show that SPMiner can find large motifs of up to 20 nodes with 10-100x higher frequency than those found by current approximate methods.
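The key property of an order embedding space is that the subgraph relation becomes coordinate-wise dominance, so a subgraph query reduces to a cheap vector test. A sketch of that predicate, with the margin treated as a tunable assumption:

```python
import torch

def is_subgraph_by_order_embedding(e_query: torch.Tensor, e_target: torch.Tensor,
                                   margin: float = 0.0):
    """Order-embedding subgraph predicate (sketch): the query is predicted to be a
    subgraph of the target iff its embedding is dominated coordinate-wise; the
    squared violation measures how badly dominance fails."""
    violation = torch.clamp(e_query - e_target, min=0).pow(2).sum()
    return bool(violation <= margin), float(violation)
```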
[ "['Rex Ying' 'Tianyu Fu' 'Andrew Wang' 'Jiaxuan You' 'Yu Wang'\n 'Jure Leskovec']" ]
null
null
2402.14384
null
null
http://arxiv.org/pdf/2402.14384v1
2024-02-22T08:54:57Z
2024-02-22T08:54:57Z
Generative Adversarial Network with Soft-Dynamic Time Warping and Parallel Reconstruction for Energy Time Series Anomaly Detection
In this paper, we employ a 1D deep convolutional generative adversarial network (DCGAN) for sequential anomaly detection in energy time-series data. Anomaly detection uses gradient descent to reconstruct energy sub-sequences, identifying the noise vector that most closely generates them through the generator network. Soft-DTW is used as a differentiable alternative for the reconstruction loss and is found to be superior to Euclidean distance. The anomaly score combines the reconstruction loss with the prior probability of the recovered latent vector. Our method accelerates detection by reconstructing multiple points in parallel and shows promise in identifying anomalous energy consumption in buildings, as evidenced by experiments on hourly energy time series from 15 buildings.
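A hedged sketch of the scoring loop, assuming a pre-trained 1D generator `G` mapping latents to windows of shape (batch x length): gradient descent on a batch of latent vectors reconstructs many windows in parallel, and the score adds the latent prior penalty to the reconstruction loss. Plain MSE stands in for Soft-DTW here to keep the sketch dependency-free; the paper's differentiable Soft-DTW loss would replace it.

```python
import torch

def anomaly_scores(G, x, latent_dim=100, steps=200, lr=1e-2, lam=0.1):
    """Reconstruction-based anomaly scores for a batch of sub-sequences x
    (shape: batch x length), via gradient descent on the noise vectors."""
    z = torch.randn(x.shape[0], latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        recon = ((G(z) - x) ** 2).mean(dim=1)      # per-window error (MSE stand-in for Soft-DTW)
        loss = recon.sum() + lam * (z ** 2).sum()  # Gaussian prior on z -> L2 penalty
        loss.backward()
        opt.step()
    with torch.no_grad():
        recon = ((G(z) - x) ** 2).mean(dim=1)
        prior = lam * (z ** 2).sum(dim=1)
    return recon + prior                           # one anomaly score per window
```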
[ "['Hardik Prabhu' 'Jayaraman Valadi' 'Pandarasamy Arjunan']" ]
null
null
2402.14385
null
null
http://arxiv.org/pdf/2402.14385v1
2024-02-22T08:55:21Z
2024-02-22T08:55:21Z
WindDragon: Enhancing wind power forecasting with Automated Deep Learning
Achieving net-zero carbon emissions by 2050 requires integrating increasing amounts of wind power into power grids. This energy source poses a challenge to system operators due to its variability and uncertainty, so accurate wind power forecasting is critical for grid operation and system balancing. This paper presents an innovative approach to short-term (1- to 6-hour horizon) wind power forecasting at a national level. The method leverages Automated Deep Learning combined with Numerical Weather Prediction wind-speed maps to accurately forecast wind power.
[ "['Julie Keisler' 'Etienne Le Naour']" ]