categories: string
doi: string
id: string
year: float64
venue: string
link: string
updated: string
published: string
title: string
abstract: string
authors: sequence
null
null
2406.12614
null
null
http://arxiv.org/pdf/2406.12614v1
2024-06-18T13:43:22Z
2024-06-18T13:43:22Z
EUvsDisinfo: a Dataset for Multilingual Detection of Pro-Kremlin Disinformation in News Articles
This work introduces EUvsDisinfo, a multilingual dataset of trustworthy and disinformation articles related to pro-Kremlin themes. It is sourced directly from the debunk articles written by experts leading the EUvsDisinfo project. Our dataset is the largest resource to date in terms of the overall number of articles and distinct languages, and it provides the broadest topical and temporal coverage. Using this dataset, we investigate the dissemination of pro-Kremlin disinformation across different languages, uncovering language-specific patterns targeting specific disinformation topics. We further analyse the evolution of topic distribution over an eight-year period, noting a significant surge in disinformation content before the full-scale invasion of Ukraine in 2022. Lastly, we demonstrate the dataset's applicability in training models to effectively distinguish between disinformation and trustworthy content in multilingual settings.
João A. Leite, Olesya Razuvayevskaya, Kalina Bontcheva, Carolina Scarton
null
null
2406.12615
null
null
http://arxiv.org/pdf/2406.12615v1
2024-06-18T13:43:58Z
2024-06-18T13:43:58Z
When Are Bias-Free ReLU Networks Like Linear Networks?
We investigate the expressivity and learning dynamics of bias-free ReLU networks. We first show that two-layer bias-free ReLU networks have limited expressivity: the only odd function a two-layer bias-free ReLU network can express is a linear one. We then show that, under symmetry conditions on the data, these networks have the same learning dynamics as linear networks. This allows us to give closed-form time-course solutions to certain two-layer bias-free ReLU networks, which has not been done for nonlinear networks outside the lazy learning regime. While deep bias-free ReLU networks are more expressive than their two-layer counterparts, they still share a number of similarities with deep linear networks. These similarities enable us to leverage insights from linear networks, leading to a novel understanding of bias-free ReLU networks. Overall, our results show that some properties established for bias-free ReLU networks arise due to equivalence to linear networks, and suggest that including bias or considering asymmetric data are avenues to engage with nonlinear behaviors.
Yedi Zhang, Andrew Saxe, Peter E. Latham
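
A quick numerical illustration of the two-layer expressivity claim above: since relu(a) - relu(-a) = a elementwise, the odd part of any bias-free two-layer ReLU network collapses to the fixed linear map (1/2)·W2·W1. A minimal numpy check with random weights (illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.standard_normal((64, 8))   # first layer, no bias
W2 = rng.standard_normal((1, 64))   # second layer, no bias

def f(x):
    """Two-layer bias-free ReLU network."""
    return W2 @ np.maximum(W1 @ x, 0.0)

x = rng.standard_normal(8)
odd_part = 0.5 * (f(x) - f(-x))
# relu(a) - relu(-a) = a, so the odd part is exactly the linear map 0.5*W2@W1:
print(np.allclose(odd_part, 0.5 * (W2 @ W1) @ x))  # True for every x
```
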
null
null
2406.12616
null
null
http://arxiv.org/pdf/2406.12616v1
2024-06-18T13:44:07Z
2024-06-18T13:44:07Z
Learning Diffusion at Lightspeed
Diffusion regulates a phenomenal number of natural processes and the dynamics of many successful generative models. Existing models to learn the diffusion terms from observational data rely on complex bilevel optimization problems and properly model only the drift of the system. We propose a new simple model, JKOnet*, which bypasses altogether the complexity of existing architectures while presenting significantly enhanced representational capacity: JKOnet* recovers the potential, interaction, and internal energy components of the underlying diffusion process. JKOnet* minimizes a simple quadratic loss, runs at lightspeed, and drastically outperforms other baselines in practice. Additionally, JKOnet* provides a closed-form optimal solution for linearly parametrized functionals. Our methodology is based on the interpretation of diffusion processes as energy-minimizing trajectories in the probability space via the so-called JKO scheme, which we study via its first-order optimality conditions, in light of few-weeks-old advancements in optimization in the probability space.
Antonio Terpin, Nicolas Lanzetti, Florian Dörfler
null
null
2406.12638
null
null
http://arxiv.org/pdf/2406.12638v1
2024-06-18T14:07:13Z
2024-06-18T14:07:13Z
Efficient and Long-Tailed Generalization for Pre-trained Vision-Language Model
Pre-trained vision-language models like CLIP have shown powerful zero-shot inference ability via image-text matching and prove to be strong few-shot learners in various downstream tasks. However, in real-world scenarios, adapting CLIP to downstream tasks may encounter the following challenges: 1) data may exhibit long-tailed distributions and might not have abundant samples for all the classes; 2) there might be emerging tasks with new classes that contain no samples at all. To overcome them, we propose a novel framework to achieve efficient and long-tailed generalization, which we term Candle. During the training process, we propose a compensating logit-adjusted loss to encourage large margins of prototypes and alleviate imbalance both within the base classes and between the base and new classes. For efficient adaptation, we treat the CLIP model as a black box and leverage the extracted features to obtain visual and textual prototypes for prediction. To make full use of multi-modal information, we also propose cross-modal attention to enrich the features from both modalities. For effective generalization, we introduce virtual prototypes for new classes to make up for their lack of training images. Candle achieves state-of-the-art performance in extensive experiments on 11 diverse datasets while substantially reducing the training time, demonstrating the superiority of our approach. The source code is available at https://github.com/shijxcs/Candle.
Jiang-Xin Shi, Chi Zhang, Tong Wei, Yu-Feng Li
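
The black-box adaptation step described above (frozen CLIP features, class prototypes, cosine-similarity prediction) can be pictured with a short sketch; the array names and shapes below are illustrative assumptions, not the authors' code:

```python
import numpy as np

def build_prototypes(features, labels, num_classes):
    """Average L2-normalised extracted features per class -> one prototype per class."""
    feats = features / np.linalg.norm(features, axis=1, keepdims=True)
    return np.stack([feats[labels == c].mean(axis=0) for c in range(num_classes)])

def predict(features, prototypes):
    """Cosine similarity to each prototype; the highest-scoring class wins."""
    feats = features / np.linalg.norm(features, axis=1, keepdims=True)
    protos = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    return (feats @ protos.T).argmax(axis=1)

# toy example: 100 extracted features of dimension 512 over 5 base classes
rng = np.random.default_rng(0)
features, labels = rng.standard_normal((100, 512)), rng.integers(0, 5, 100)
prototypes = build_prototypes(features, labels, num_classes=5)
print(predict(features[:3], prototypes))
```
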
null
null
2406.12640
null
null
http://arxiv.org/pdf/2406.12640v1
2024-06-18T14:07:38Z
2024-06-18T14:07:38Z
Research and Implementation of Data Enhancement Techniques for Graph Neural Networks
Data, algorithms, and computing power are the three foundational conditions for deep learning to be effective in an application domain, and data is the focus of deep learning algorithm development. In practical engineering applications, some data are collected under conditions in which more data cannot be obtained, or the cost of obtaining data is too high, resulting in small datasets (generally several hundred to several thousand samples) that are far smaller than large-scale datasets (tens of thousands of samples). Augmentation methods that generate new samples from the original dataset share a limitation: if the original data volume is insufficient, it may not reflect the full real environment (for example, lighting and silhouette information), and it is then difficult to generate the required data with simple transformations or neural generative models. This paper first analyses the key points of data enhancement techniques for graph neural networks, and introduces the compositional foundations of graph neural networks in depth, on the basis of which the data enhancement techniques for graph neural networks are optimised and analysed.
Jingzhao Gu, Haoyang Huang
null
null
2406.12649
null
null
http://arxiv.org/pdf/2406.12649v2
2024-06-19T02:21:09Z
2024-06-18T14:17:57Z
Probabilistic Conceptual Explainers: Trustworthy Conceptual Explanations for Vision Foundation Models
Vision transformers (ViTs) have emerged as a significant area of focus, particularly for their capacity to be jointly trained with large language models and to serve as robust vision foundation models. Yet, the development of trustworthy explanation methods for ViTs has lagged, particularly in the context of post-hoc interpretations of ViT predictions. Existing sub-image selection approaches, such as feature-attribution and conceptual models, fall short in this regard. This paper proposes five desiderata for explaining ViTs -- faithfulness, stability, sparsity, multi-level structure, and parsimony -- and demonstrates the inadequacy of current methods in meeting these criteria comprehensively. We introduce a variational Bayesian explanation framework, dubbed ProbAbilistic Concept Explainers (PACE), which models the distributions of patch embeddings to provide trustworthy post-hoc conceptual explanations. Our qualitative analysis reveals the distributions of patch-level concepts, elucidating the effectiveness of ViTs by modeling the joint distribution of patch embeddings and ViT's predictions. Moreover, these patch-level explanations bridge the gap between image-level and dataset-level explanations, thus completing the multi-level structure of PACE. Through extensive experiments on both synthetic and real-world datasets, we demonstrate that PACE surpasses state-of-the-art methods in terms of the defined desiderata.
Hengyi Wang, Shiwei Tan, Hao Wang
null
null
2406.12658
null
null
http://arxiv.org/pdf/2406.12658v1
2024-06-18T14:26:09Z
2024-06-18T14:26:09Z
Federated Learning with a Single Shared Image
Federated Learning (FL) enables multiple machines to collaboratively train a machine learning model without sharing private training data. Yet, especially for heterogeneous models, a key bottleneck remains the transfer of knowledge gained from each client model to the server. One popular method, FedDF, uses distillation to tackle this task with a common, shared dataset on which predictions are exchanged. However, in many contexts such a dataset might be difficult to acquire due to privacy concerns, and the clients might not allow for storage of a large shared dataset. To this end, in this paper, we introduce a new method that improves this knowledge distillation approach to rely only on a single image shared between clients and server. In particular, we propose a novel adaptive dataset pruning algorithm that selects the most informative crops generated from only a single image. With this, we show that federated learning with distillation under a limited shared-dataset budget works better by using a single image than multiple individual ones. Finally, we extend our approach to allow for training heterogeneous client architectures by incorporating a non-uniform distillation schedule and client-model mirroring on the server side.
Sunny Soni, Aaqib Saeed, Yuki M. Asano
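
A hedged sketch of the mechanism the abstract describes: crops generated from one shared image, an informativeness-based pruning rule (prediction entropy here, which is our stand-in rather than the paper's exact criterion), and a FedDF-style distillation step:

```python
import torch
import torch.nn.functional as F
from torchvision import transforms
from PIL import Image

def crops_from_single_image(image, n_crops=256, size=32):
    """Random crops generated from the one shared image (the distillation 'dataset')."""
    crop = transforms.Compose([transforms.RandomResizedCrop(size), transforms.ToTensor()])
    return torch.stack([crop(image) for _ in range(n_crops)])

def prune_crops(crops, model, keep=64):
    """Keep the crops whose predictions are most uncertain (illustrative criterion)."""
    with torch.no_grad():
        probs = F.softmax(model(crops), dim=1)
        entropy = -(probs * probs.clamp_min(1e-9).log()).sum(dim=1)
    return crops[entropy.topk(keep).indices]

def distill_step(server_model, client_logits, crops, optimizer, T=3.0):
    """One FedDF-style step: match the server to the averaged client predictions."""
    optimizer.zero_grad()
    loss = F.kl_div(F.log_softmax(server_model(crops) / T, dim=1),
                    F.softmax(client_logits / T, dim=1), reduction="batchmean")
    loss.backward()
    optimizer.step()
    return loss.item()

crops = crops_from_single_image(Image.new("RGB", (224, 224)))  # placeholder image
```
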
null
null
2406.12659
null
null
http://arxiv.org/pdf/2406.12659v1
2024-06-18T14:27:44Z
2024-06-18T14:27:44Z
A variational Bayes approach to debiased inference for low-dimensional parameters in high-dimensional linear regression
We propose a scalable variational Bayes method for statistical inference for a single or low-dimensional subset of the coordinates of a high-dimensional parameter in sparse linear regression. Our approach relies on assigning a mean-field approximation to the nuisance coordinates and carefully modelling the conditional distribution of the target given the nuisance. This requires only a preprocessing step and preserves the computational advantages of mean-field variational Bayes, while ensuring accurate and reliable inference for the target parameter, including for uncertainty quantification. We investigate the numerical performance of our algorithm, showing that it performs competitively with existing methods. We further establish accompanying theoretical guarantees for estimation and uncertainty quantification in the form of a Bernstein--von Mises theorem.
Ismaël Castillo, Alice L'Huillier, Kolyan Ray, Luke Travis
null
null
2406.12661
null
null
http://arxiv.org/pdf/2406.12661v1
2024-06-18T14:28:29Z
2024-06-18T14:28:29Z
SCORE: A 1D Reparameterization Technique to Break Bayesian Optimization's Curse of Dimensionality
Bayesian optimization (BO) has emerged as a powerful tool for navigating complex search spaces, showcasing practical applications in the fields of science and engineering. However, since it typically relies on a surrogate model to approximate the objective function, BO grapples with heightened computational costs that tend to escalate as the number of parameters and experiments grows. Several methods such as parallelization, surrogate model approximations, and memory pruning have been proposed to cut down computing time, but they all fall short of resolving the core issue behind BO's curse of dimensionality. In this paper, a 1D reparameterization trick is proposed to break this curse and sustain linear time complexity for BO in high-dimensional landscapes. This fast and scalable approach named SCORE can successfully find the global minimum of needle-in-a-haystack optimization functions and fit real-world data without the high-performance computing resources typically required by state-of-the-art techniques.
Joseph Chakar
null
null
2406.12667
null
null
http://arxiv.org/pdf/2406.12667v1
2024-06-18T14:40:20Z
2024-06-18T14:40:20Z
A Systematization of the Wagner Framework: Graph Theory Conjectures and Reinforcement Learning
In 2021, Adam Zsolt Wagner proposed an approach to disprove conjectures in graph theory using Reinforcement Learning (RL). Wagner's idea can be framed as follows: consider a conjecture, such as a certain quantity f(G) < 0 for every graph G; one can then play a single-player graph-building game, where at each turn the player decides whether to add an edge or not. The game ends when all edges have been considered, resulting in a certain graph G_T, and f(G_T) is the final score of the game; RL is then used to maximize this score. This brilliant idea is as simple as it is innovative, and it lends itself to systematic generalization. Several different single-player graph-building games can be employed, along with various RL algorithms. Moreover, RL maximizes the cumulative reward, allowing for step-by-step rewards instead of a single final score, provided the final cumulative reward represents the quantity of interest f(G_T). In this paper, we discuss these and various other choices that can be significant in Wagner's framework. As a contribution to this systematization, we present four distinct single-player graph-building games. Each game employs both a step-by-step reward system and a single final score. We also propose a principled approach to select the most suitable neural network architecture for any given conjecture, and introduce a new dataset of graphs labeled with their Laplacian spectra. Furthermore, we provide a counterexample for a conjecture regarding the sum of the matching number and the spectral radius, which is simpler than the example provided in Wagner's original paper. The games have been implemented as environments in the Gymnasium framework, and along with the dataset, are available as open-source supplementary materials.
Flora Angileri, Giulia Lombardi, Andrea Fois, Renato Faraone, Carlo Metta, Michele Salvi, Luigi Amedeo Bianchi, Marco Fantozzi, Silvia Giulia Galfrè, Daniele Pavesi, Maurizio Parton, Francesco Morandin
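
For concreteness, a minimal Gymnasium-style environment for the single-player graph-building game described above, in the single-final-score variant; the score function f is a placeholder to be replaced by the conjectured quantity:

```python
import itertools
import gymnasium as gym
import networkx as nx
import numpy as np

class GraphBuildingEnv(gym.Env):
    """Decide edge-by-edge whether to include each edge of the complete graph K_n."""

    def __init__(self, n=19, f=lambda g: -g.number_of_edges()):  # placeholder f
        self.n, self.f = n, f
        self.edges = list(itertools.combinations(range(n), 2))
        self.action_space = gym.spaces.Discrete(2)               # 0: skip, 1: add edge
        self.observation_space = gym.spaces.MultiBinary(len(self.edges))

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.t, self.state = 0, np.zeros(len(self.edges), dtype=np.int8)
        return self.state.copy(), {}

    def step(self, action):
        self.state[self.t] = action
        self.t += 1
        done = self.t == len(self.edges)
        reward = 0.0
        if done:  # single final score; a step-by-step variant would split this up
            g = nx.Graph([e for e, a in zip(self.edges, self.state) if a])
            g.add_nodes_from(range(self.n))
            reward = float(self.f(g))
        return self.state.copy(), reward, done, False, {}
```

A richer observation would also encode the index t of the edge currently under consideration.
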
null
null
2406.12670
null
null
http://arxiv.org/pdf/2406.12670v1
2024-06-18T14:43:18Z
2024-06-18T14:43:18Z
Stealth edits for provably fixing or attacking large language models
We reveal new methods and the theoretical foundations of techniques for editing large language models. We also show how the new theory can be used to assess the editability of models and to expose their susceptibility to previously unknown malicious attacks. Our theoretical approach shows that a single metric (a specific measure of the intrinsic dimensionality of the model's features) is fundamental to predicting the success of popular editing approaches, and reveals new bridges between disparate families of editing methods. We collectively refer to these approaches as stealth editing methods, because they aim to directly and inexpensively update a model's weights to correct its responses to known hallucinating prompts without otherwise affecting the model's behaviour and without requiring retraining. By carefully applying the insight gleaned from our theoretical investigation, we are able to introduce a new network block -- named a jet-pack block -- which is optimised for highly selective model editing, uses only standard network operations, and can be inserted into existing networks. The intrinsic dimensionality metric also determines the vulnerability of a language model to a stealth attack: a small change to a model's weights which changes its response to a single attacker-chosen prompt. Stealth attacks do not require access to or knowledge of the model's training data, therefore representing a potent yet previously unrecognised threat to redistributed foundation models. They are computationally simple enough to be implemented in malware in many cases. Extensive experimental results illustrate and support the method and its theoretical underpinnings. Demos and source code for editing language models are available at https://github.com/qinghua-zhou/stealth-edits.
Oliver J. Sutton, Qinghua Zhou, Wei Wang, Desmond J. Higham, Alexander N. Gorban, Alexander Bastounis, Ivan Y. Tyukin
null
null
2406.12678
null
null
http://arxiv.org/pdf/2406.12678v1
2024-06-18T14:50:42Z
2024-06-18T14:50:42Z
Contraction rates for conjugate gradient and Lanczos approximate posteriors in Gaussian process regression
Due to their flexibility and theoretical tractability, Gaussian process (GP) regression models have become a central topic in modern statistics and machine learning. While the true posterior in these models is given explicitly, numerical evaluations depend on the inversion of the augmented kernel matrix $K + \sigma^2 I$, which requires up to $O(n^3)$ operations. For large sample sizes $n$, which are typically given in modern applications, this is computationally infeasible and necessitates the use of an approximate version of the posterior. Although such methods are widely used in practice, they typically have very limited theoretical underpinning. In this context, we analyze a class of recently proposed approximation algorithms from the field of probabilistic numerics. They can be interpreted in terms of Lanczos approximate eigenvectors of the kernel matrix or a conjugate gradient approximation of the posterior mean, which are particularly advantageous in truly large-scale applications, as they are fundamentally only based on matrix-vector multiplications amenable to the GPU acceleration of modern software frameworks. We combine results from the numerical analysis literature with state-of-the-art concentration results for spectra of kernel matrices to obtain minimax contraction rates. Our theoretical findings are illustrated by numerical experiments.
Bernhard Stankewitz, Botond Szabo
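
The matrix-vector-multiplication structure mentioned above is easy to make concrete: the posterior mean $k(x_*)^\top (K + \sigma^2 I)^{-1} y$ can be approximated with conjugate gradients without ever factorising the kernel matrix. A small illustrative sketch (synthetic RBF-kernel data, not from the paper):

```python
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

def gp_posterior_mean_cg(K, y, sigma2, k_star, maxiter=100):
    """Approximate k(x*)^T (K + sigma^2 I)^{-1} y via conjugate gradients:
    only matrix-vector products with K are needed."""
    n = K.shape[0]
    A = LinearOperator((n, n), matvec=lambda v: K @ v + sigma2 * v)
    alpha, _ = cg(A, y, maxiter=maxiter)
    return k_star @ alpha

# toy example with an RBF kernel
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
K = np.exp(-0.5 * (X - X.T) ** 2)
print(gp_posterior_mean_cg(K, y, sigma2=0.01, k_star=K[:5]))
```
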
null
null
2406.12683
null
null
http://arxiv.org/pdf/2406.12683v1
2024-06-18T14:55:41Z
2024-06-18T14:55:41Z
Spatial Sequence Attention Network for Schizophrenia Classification from Structural Brain MR Images
Schizophrenia is a debilitating, chronic mental disorder that significantly impacts an individual's cognitive abilities, behavior, and social interactions. It is characterized by subtle morphological changes in the brain, particularly in the gray matter. These changes are often imperceptible through manual observation, demanding an automated approach to diagnosis. This study introduces a deep learning methodology for the classification of individuals with Schizophrenia. We achieve this by implementing a diversified attention mechanism known as Spatial Sequence Attention (SSA) which is designed to extract and emphasize significant feature representations from structural MRI (sMRI). Initially, we employ the transfer learning paradigm by leveraging pre-trained DenseNet to extract initial feature maps from the final convolutional block which contains morphological alterations associated with Schizophrenia. These features are further processed by the proposed SSA to capture and emphasize intricate spatial interactions and relationships across volumes within the brain. Our experimental studies conducted on a clinical dataset have revealed that the proposed attention mechanism outperforms the existing Squeeze & Excitation Network for Schizophrenia classification.
Nagur Shareef Shaik, Teja Krishna Cherukuri, Vince Calhoun, Dong Hye Ye
null
null
2406.12693
null
null
http://arxiv.org/pdf/2406.12693v1
2024-06-18T15:06:22Z
2024-06-18T15:06:22Z
XXLTraffic: Expanding and Extremely Long Traffic Dataset for Ultra-Dynamic Forecasting Challenges
Traffic forecasting is crucial for smart cities and intelligent transportation initiatives, where deep learning has made significant progress in modeling complex spatio-temporal patterns in recent years. However, current public datasets have limitations in reflecting the ultra-dynamic nature of real-world scenarios, characterized by continuously evolving infrastructures, varying temporal distributions, and temporal gaps due to sensor downtimes or changes in traffic patterns. These limitations inevitably restrict the practical applicability of existing traffic forecasting datasets. To bridge this gap, we present XXLTraffic, the largest available public traffic dataset, with the longest timespan and an increasing number of sensor nodes over the multiple years observed, curated to support research in ultra-dynamic forecasting. Our benchmark includes both typical time-series forecasting settings with hourly and daily aggregated data and novel configurations that introduce gaps and down-sample the training size to better simulate practical constraints. We anticipate the new XXLTraffic will provide a fresh perspective for the time-series and traffic forecasting communities. It would also offer a robust platform for developing and evaluating models designed to tackle ultra-dynamic and extremely long forecasting problems. Our dataset supplements existing spatio-temporal data resources and leads to new research directions in this domain.
Du Yin, Hao Xue, Arian Prabowo, Shuang Ao, Flora Salim
null
null
2406.12700
null
null
http://arxiv.org/pdf/2406.12700v1
2024-06-18T15:14:14Z
2024-06-18T15:14:14Z
SUPER: Selfie Undistortion and Head Pose Editing with Identity Preservation
Self-portraits captured from a short distance might look unnatural or even unattractive due to heavy distortions that malform facial features and due to ill-placed head poses. In this paper, we propose SUPER, a novel method for eliminating distortions and adjusting head pose in a close-up face crop. We perform 3D GAN inversion for a facial image by optimizing camera parameters and the face latent code, which gives a generated image. Besides, we estimate depth from the obtained latent code, create a depth-induced 3D mesh, and render it with updated camera parameters to obtain a warped portrait. Finally, we apply visibility-based blending so that visible regions are reprojected and occluded parts are restored with a generative model. Experiments on face undistortion benchmarks and on our self-collected Head Rotation dataset (HeRo) show that SUPER outperforms previous approaches both qualitatively and quantitatively, opening new possibilities for photorealistic selfie editing.
Polina Karpikova, Andrei Spiridonov, Anna Vorontsova, Anastasia Yaschenko, Ekaterina Radionova, Igor Medvedev, Alexander Limonov
null
null
2406.12709
null
null
http://arxiv.org/pdf/2406.12709v1
2024-06-18T15:23:10Z
2024-06-18T15:23:10Z
Enhancing Spatio-temporal Quantile Forecasting with Curriculum Learning: Lessons Learned
Training models on spatio-temporal (ST) data poses an open problem due to the complicated and diverse nature of the data itself, and it is challenging to ensure good performance for a model trained directly on the original ST data. While limiting the variety of training data can make training easier, it can also lead to a lack of knowledge and information for the model, resulting in a decrease in performance. To address this challenge, we present an innovative paradigm that incorporates three separate forms of curriculum learning targeting the spatial, temporal, and quantile perspectives. Furthermore, our framework incorporates a stacking fusion module to combine the diverse information from the three types of curriculum learning, resulting in a strong and thorough learning process. We demonstrate the effectiveness of this framework with extensive empirical evaluations, highlighting its better performance in addressing complex ST challenges. We provide thorough ablation studies to investigate the effectiveness of our curriculum and to explain how it contributes to the improvement of learning efficiency on ST data.
Du Yin, Jinliang Deng, Shuang Ao, Zechen Li, Hao Xue, Arian Prabowo, Renhe Jiang, Xuan Song, Flora Salim
null
null
2406.12723
null
null
http://arxiv.org/pdf/2406.12723v3
2024-06-25T02:00:48Z
2024-06-18T15:45:21Z
BIOSCAN-5M: A Multimodal Dataset for Insect Biodiversity
As part of an ongoing worldwide effort to comprehend and monitor insect biodiversity, this paper presents the BIOSCAN-5M Insect dataset to the machine learning community and establishes several benchmark tasks. BIOSCAN-5M is a comprehensive dataset containing multi-modal information for over 5 million insect specimens, and it significantly expands existing image-based biological datasets by including taxonomic labels, raw nucleotide barcode sequences, assigned barcode index numbers, and geographical information. We propose three benchmark experiments to demonstrate the impact of the multi-modal data types on the classification and clustering accuracy. First, we pretrain a masked language model on the DNA barcode sequences of the BIOSCAN-5M dataset, and demonstrate the impact of using this large reference library on species- and genus-level classification performance. Second, we propose a zero-shot transfer learning task applied to images and DNA barcodes to cluster feature embeddings obtained from self-supervised learning, to investigate whether meaningful clusters can be derived from these representation embeddings. Third, we benchmark multi-modality by performing contrastive learning on DNA barcodes, image data, and taxonomic information. This yields a general shared embedding space enabling taxonomic classification using multiple types of information and modalities. The code repository of the BIOSCAN-5M Insect dataset is available at https://github.com/zahrag/BIOSCAN-5M.
Zahra Gharaee, Scott C. Lowe, ZeMing Gong, Pablo Millan Arias, Nicholas Pellegrino, Austin T. Wang, Joakim Bruslund Haurum, Iuliia Zarubiieva, Lila Kari, Dirk Steinke, Graham W. Taylor, Paul Fieguth, Angel X. Chang
null
null
2406.12730
null
null
http://arxiv.org/pdf/2406.12730v1
2024-06-18T15:54:50Z
2024-06-18T15:54:50Z
Predicting the energetic proton flux with a machine learning regression algorithm
The need for real-time monitoring and alerting systems for Space Weather hazards has grown significantly in the last two decades. One of the most important challenges for space mission operations and planning is the prediction of solar proton events (SPEs). In this context, artificial intelligence and machine learning techniques have opened a new frontier, providing a new paradigm for statistical forecasting algorithms. The great majority of these models aim to predict the occurrence of an SPE, i.e., they are based on the classification approach. In this work we present a simple and efficient machine learning regression algorithm which is able to forecast the energetic proton flux up to 1 hour ahead by exploiting features derived from the electron flux only. This approach could be helpful to improve monitoring systems of the radiation risk in both deep space and near-Earth environments. The model is very relevant for mission operations and planning, especially when flare characteristics and source location are not available in real time, as at Mars' distance.
Mirko Stumpo, Monica Laurenza, Simone Benella, Maria Federica Marcucci
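
A hedged sketch of the regression setup as we read it: features derived from the electron flux alone (current value plus trailing averages, our choice of illustrative features) used to predict the proton flux one hour ahead. All data below are synthetic and the model choice is arbitrary:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def trailing_mean(x, w):
    """Mean of the past w samples at each time step (no future leakage)."""
    c = np.cumsum(np.insert(x, 0, 0.0))
    out = (c[w:] - c[:-w]) / w
    return np.concatenate([np.full(w - 1, out[0]), out])

rng = np.random.default_rng(0)
n, horizon = 5000, 60                         # 1-minute cadence, 1-hour horizon
electron = np.exp(0.01 * rng.standard_normal(n + horizon).cumsum())
proton = 0.1 * electron[horizon:] + 0.01 * rng.standard_normal(n)  # synthetic link

past = electron[:n]                           # features from the electron channel only
X = np.column_stack([past] + [trailing_mean(past, w) for w in (5, 15, 60)])

split = 4000                                  # time-ordered train/test split
model = GradientBoostingRegressor().fit(X[:split], proton[:split])
print("test R^2:", model.score(X[split:], proton[split:]))
```
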
null
null
2406.12732
null
null
http://arxiv.org/abs/2406.12732v1
2024-06-18T15:55:11Z
2024-06-18T15:55:11Z
Automatic generation of insights from workers' actions in industrial workflows with explainable Machine Learning
New technologies such as Machine Learning (ML) offer great potential for evaluating industry workflows and automatically generating key performance indicators (KPIs). However, despite established standards for measuring the efficiency of industrial machinery, there is no precise equivalent for workers' productivity, which would be highly desirable given the lack of a skilled workforce for the next generation of industry workflows. Therefore, an ML solution combining data from manufacturing processes and workers' performance for that goal is required. Additionally, in recent times intense effort has been devoted to explainable ML approaches that can automatically explain their decisions to a human operator, thus increasing their trustworthiness. We propose to apply explainable ML solutions to differentiate between expert and inexpert workers in industrial workflows, which we validate at a quality-assessment industrial workstation. Regarding the methodology used, input data are captured by a manufacturing machine and stored in a NoSQL database. Data are processed to engineer features used in automatic classification and to compute workers' KPIs to predict their level of expertise (with all classification metrics exceeding 90%). These KPIs, and the relevant features behind the decisions, are explained textually by natural language expansion on an explainability dashboard. These automatic explanations made it possible to transfer knowledge from expert workers to inexpert workers. This illustrates the value of research on self-explainable ML for automatically generating insights to improve productivity in industrial workflows.
Francisco de Arriba-Pérez, Silvia García-Méndez, Javier Otero-Mosquera, Francisco J. González-Castaño, Felipe Gil-Castiñeira
null
null
2406.12747
null
null
http://arxiv.org/pdf/2406.12747v1
2024-06-18T16:07:33Z
2024-06-18T16:07:33Z
TSI-Bench: Benchmarking Time Series Imputation
Effective imputation is a crucial preprocessing step for time series analysis. Despite the development of numerous deep learning algorithms for time series imputation, the community lacks standardized and comprehensive benchmark platforms to effectively evaluate imputation performance across different settings. Moreover, although many deep learning forecasting algorithms have demonstrated excellent performance, whether their modeling achievements can be transferred to time series imputation tasks remains unexplored. To bridge these gaps, we develop TSI-Bench, the first (to our knowledge) comprehensive benchmark suite for time series imputation utilizing deep learning techniques. The TSI-Bench pipeline standardizes experimental settings to enable fair evaluation of imputation algorithms and identification of meaningful insights into the influence of domain-appropriate missingness ratios and patterns on model performance. Furthermore, TSI-Bench innovatively provides a systematic paradigm to tailor time series forecasting algorithms for imputation purposes. Our extensive study across 34,804 experiments, 28 algorithms, and 8 datasets with diverse missingness scenarios demonstrates TSI-Bench's effectiveness in diverse downstream tasks and its potential to unlock future directions in time series imputation research and analysis. The source code and experiment logs are available at https://github.com/WenjieDu/AwesomeImputation.
Wenjie Du, Jun Wang, Linglong Qian, Yiyuan Yang, Fanxing Liu, Zepu Wang, Zina Ibrahim, Haoxin Liu, Zhiyuan Zhao, Yingjie Zhou, Wenjia Wang, Kaize Ding, Yuxuan Liang, B. Aditya Prakash, Qingsong Wen
null
null
2406.12752
null
null
http://arxiv.org/pdf/2406.12752v1
2024-06-18T16:20:12Z
2024-06-18T16:20:12Z
Extracting Training Data from Unconditional Diffusion Models
As diffusion probabilistic models (DPMs) are being employed as mainstream models for generative artificial intelligence (AI), the study of their memorization of the raw training data has attracted growing attention. Existing works in this direction aim to establish an understanding of whether or to what extent DPMs learn by memorization. Such an understanding is crucial for identifying potential risks of data leakage and copyright infringement in diffusion models and, more importantly, for more controllable generation and trustworthy application of Artificial Intelligence Generated Content (AIGC). While previous works have made important observations of when DPMs are prone to memorization, these findings are mostly empirical, and the developed data extraction methods only work for conditional diffusion models. In this work, we aim to establish a theoretical understanding of memorization in DPMs with 1) a memorization metric for theoretical analysis, 2) an analysis of conditional memorization with informative and random labels, and 3) two better evaluation metrics for measuring memorization. Based on the theoretical analysis, we further propose a novel data extraction method called \textbf{Surrogate condItional Data Extraction (SIDE)} that leverages a classifier trained on generated data as a surrogate condition to extract training data directly from unconditional diffusion models. Our empirical results demonstrate that SIDE can extract training data from diffusion models where previous methods fail, and it is on average over 50% more effective across different scales of the CelebA dataset.
Yunhao Chen, Xingjun Ma, Difan Zou, Yu-Gang Jiang
null
null
2406.12756
null
null
http://arxiv.org/pdf/2406.12756v1
2024-06-18T16:24:28Z
2024-06-18T16:24:28Z
GFM4MPM: Towards Geospatial Foundation Models for Mineral Prospectivity Mapping
Machine Learning (ML) for Mineral Prospectivity Mapping (MPM) remains a challenging problem as it requires the analysis of associations between large-scale multi-modal geospatial data and few historical mineral commodity observations (positive labels). Recent MPM works have explored Deep Learning (DL) as a modeling tool with more representation capacity. However, these overparameterized methods may be more prone to overfitting due to their reliance on scarce labeled data. While a large quantity of unlabeled geospatial data exists, no prior MPM works have considered using such information in a self-supervised manner. Our MPM approach uses a masked image modeling framework to pretrain a backbone neural network in a self-supervised manner using unlabeled geospatial data alone. After pretraining, the backbone network provides feature extraction for downstream MPM tasks. We evaluated our approach alongside existing methods to assess mineral prospectivity of Mississippi Valley Type (MVT) and Clastic-Dominated (CD) Lead-Zinc deposits in North America and Australia. Our results demonstrate that self-supervision promotes robustness in learned features, improving prospectivity predictions. Additionally, we leverage explainable artificial intelligence techniques to demonstrate that individual predictions can be interpreted from a geological perspective.
Angel Daruna, Vasily Zadorozhnyy, Georgina Lukoczki, Han-Pang Chiu
null
null
2406.12762
null
null
http://arxiv.org/abs/2406.12762v1
2024-06-18T16:29:07Z
2024-06-18T16:29:07Z
Unsupervised explainable activity prediction in competitive Nordic Walking from experimental data
Artificial Intelligence (AI) has found application in Human Activity Recognition (HAR) in competitive sports. To date, most Machine Learning (ML) approaches for HAR have relied on offline (batch) training, imposing higher computational and tagging burdens compared to online processing unsupervised approaches. Additionally, the decisions behind traditional ML predictors are opaque and require human interpretation. In this work, we apply an online processing unsupervised clustering approach based on low-cost wearable Inertial Measurement Units (IMUs). The outcomes generated by the system allow for the automatic expansion of the limited tagging available (e.g., by referees) within those clusters, producing pertinent information for the explainable classification stage. Specifically, our work focuses on achieving automatic explainability for predictions related to athletes' activities, distinguishing between correct, incorrect, and cheating practices in Nordic Walking. The proposed solution achieved performance metrics close to 100% on average.
Silvia García-Méndez, Francisco de Arriba-Pérez, Francisco J. González-Castaño, Javier Vales-Alonso
null
null
2406.12763
null
null
http://arxiv.org/pdf/2406.12763v2
2024-06-19T15:25:57Z
2024-06-18T16:30:51Z
Implicit Bias of Mirror Flow on Separable Data
We examine the continuous-time counterpart of mirror descent, namely mirror flow, on classification problems which are linearly separable. Such problems are minimised `at infinity' and have many possible solutions; we study which solution is preferred by the algorithm depending on the mirror potential. For exponentially tailed losses and under mild assumptions on the potential, we show that the iterates converge in direction towards a $\phi_\infty$-maximum margin classifier. The function $\phi_\infty$ is the \textit{horizon function} of the mirror potential and characterises its shape `at infinity'. When the potential is separable, a simple formula allows one to compute this function. We analyse several examples of potentials and provide numerical experiments highlighting our results.
Scott Pesme, Radu-Alexandru Dragomir, Nicolas Flammarion
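
For reference, the dynamics under study, and one standard way to formalise the limiting classifier (our reading of the abstract; see the paper for the precise statement):

```latex
\frac{\mathrm{d}}{\mathrm{d}t}\,\nabla\phi(w_t) = -\nabla L(w_t),
\qquad\text{equivalently}\qquad
\dot{w}_t = -\big(\nabla^2\phi(w_t)\big)^{-1}\nabla L(w_t),
```

with the iterates converging in direction to the $\phi_\infty$-maximum margin classifier, in analogy with the $\ell_2$ case:

```latex
w^{\star} \in \arg\min_{w}\ \phi_\infty(w)
\quad\text{subject to}\quad y_i\,\langle w, x_i\rangle \ge 1 \ \text{ for all } i.
```
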
null
null
2406.12764
null
null
http://arxiv.org/pdf/2406.12764v1
2024-06-18T16:31:02Z
2024-06-18T16:31:02Z
Quasi-Bayes meets Vines
Recently proposed quasi-Bayesian (QB) methods initiated a new era in Bayesian computation by directly constructing the Bayesian predictive distribution through recursion, removing the need for expensive computations involved in sampling the Bayesian posterior distribution. This has proved to be data-efficient for univariate predictions, but extensions to multiple dimensions rely on a conditional decomposition resulting from predefined assumptions on the kernel of the Dirichlet Process Mixture Model, which is the implicit nonparametric model used. Here, we propose a different way to extend quasi-Bayesian prediction to high dimensions through the use of Sklar's theorem, by decomposing the predictive distribution into one-dimensional predictive marginals and a high-dimensional copula. Thus, we use the efficient recursive QB construction for the one-dimensional marginals and model the dependence using highly expressive vine copulas. Further, we tune hyperparameters using robust divergences (e.g., the energy score) and show that our proposed Quasi-Bayesian Vine (QB-Vine) is a fully nonparametric density estimator with \emph{an analytical form} and a convergence rate independent of the dimension of the data in some situations. Our experiments illustrate that the QB-Vine is appropriate for high-dimensional distributions ($\sim$64), needs very few samples to train ($\sim$200), and outperforms state-of-the-art methods with analytical forms for density estimation and supervised tasks by a considerable margin.
David Huk, Yuanhe Zhang, Mark Steel, Ritabrata Dutta
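
The decomposition via Sklar's theorem referred to above, in density form: the joint predictive factorises into one-dimensional marginals and a copula density $c$ (modelled here by a vine),

```latex
p(x_1,\dots,x_d) \;=\; c\big(F_1(x_1),\dots,F_d(x_d)\big)\,\prod_{j=1}^{d} p_j(x_j),
```

where the $F_j$ and $p_j$ are the marginal CDFs and densities, each obtained with the recursive quasi-Bayesian construction.
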
null
null
2406.12771
null
null
http://arxiv.org/pdf/2406.12771v1
2024-06-18T16:41:21Z
2024-06-18T16:41:21Z
First-Order Methods for Linearly Constrained Bilevel Optimization
Algorithms for bilevel optimization often encounter Hessian computations, which are prohibitive in high dimensions. While recent works offer first-order methods for unconstrained bilevel problems, the constrained setting remains relatively underexplored. We present first-order linearly constrained optimization methods with finite-time hypergradient stationarity guarantees. For linear equality constraints, we attain $\epsilon$-stationarity in $\widetilde{O}(\epsilon^{-2})$ gradient oracle calls, which is nearly optimal. For linear inequality constraints, we attain $(\delta,\epsilon)$-Goldstein stationarity in $\widetilde{O}(d\,\delta^{-1}\epsilon^{-3})$ gradient oracle calls, where $d$ is the upper-level dimension. Finally, we obtain for the linear inequality setting dimension-free rates of $\widetilde{O}(\delta^{-1}\epsilon^{-4})$ oracle complexity under the additional assumption of oracle access to the optimal dual variable. Along the way, we develop new nonsmooth nonconvex optimization methods with inexact oracles. We verify these guarantees with preliminary numerical experiments.
Guy Kornowski, Swati Padmanabhan, Kai Wang, Zhe Zhang, Suvrit Sra
null
null
2406.12774
null
null
http://arxiv.org/pdf/2406.12774v1
2024-06-18T16:43:59Z
2024-06-18T16:43:59Z
Towards Exact Gradient-based Training on Analog In-memory Computing
Given the high economic and environmental costs of using large vision or language models, analog in-memory accelerators present a promising solution for energy-efficient AI. While inference on analog accelerators has been studied recently, the training perspective is underexplored. Recent studies have shown that the "workhorse" of digital AI training, the stochastic gradient descent (SGD) algorithm, converges inexactly when applied to model training on non-ideal devices. This paper puts forth a theoretical foundation for gradient-based training on analog devices. We begin by characterizing the non-convergence issue of SGD, which is caused by the asymmetric updates on analog devices. We then provide a lower bound on the asymptotic error to show that there is a fundamental performance limit of SGD-based analog training, rather than an artifact of our analysis. To address this issue, we study a heuristic analog algorithm called Tiki-Taka that has recently exhibited superior empirical performance compared to SGD, and rigorously show its ability to converge exactly to a critical point, hence eliminating the asymptotic error. Simulations verify the correctness of the analyses.
Zhaoxian Wu, Tayfun Gokmen, Malte J. Rasch, Tianyi Chen
null
null
2406.12785
null
null
http://arxiv.org/pdf/2406.12785v1
2024-06-18T16:54:43Z
2024-06-18T16:54:43Z
In-Context Learning of Energy Functions
In-context learning is a powerful capability of certain machine learning models that arguably underpins the success of today's frontier AI models. However, in-context learning is critically limited to settings where the in-context distribution of interest $p_{\theta}^{ICL}(x|\mathcal{D})$ can be straightforwardly expressed and/or parameterized by the model; for instance, language modeling relies on expressing the next-token distribution as a categorical distribution parameterized by the network's output logits. In this work, we present a more general form of in-context learning without such a limitation that we call \textit{in-context learning of energy functions}. The idea is to instead learn the unconstrained and arbitrary in-context energy function $E_{\theta}^{ICL}(x|\mathcal{D})$ corresponding to the in-context distribution $p_{\theta}^{ICL}(x|\mathcal{D})$. To do this, we use classic ideas from energy-based modeling. We provide preliminary evidence that our method empirically works on synthetic data. Interestingly, our work contributes (to the best of our knowledge) the first example of in-context learning where the input space and output space differ from one another, suggesting that in-context learning is a more general capability than previously realized.
Rylan Schaeffer, Mikail Khona, Sanmi Koyejo
null
null
2406.12795
null
null
http://arxiv.org/pdf/2406.12795v1
2024-06-18T17:00:13Z
2024-06-18T17:00:13Z
The Limits of Pure Exploration in POMDPs: When the Observation Entropy is Enough
The problem of pure exploration in Markov decision processes has been cast as maximizing the entropy over the state distribution induced by the agent's policy, an objective that has been extensively studied. However, little attention has been dedicated to state entropy maximization under partial observability, despite the latter being ubiquitous in applications, e.g., finance and robotics, in which the agent only receives noisy observations of the true state governing the system's dynamics. How can we address state entropy maximization in those domains? In this paper, we study the simple approach of maximizing the entropy over observations in place of true latent states. First, we provide lower and upper bounds to the approximation of the true state entropy that only depends on some properties of the observation function. Then, we show how knowledge of the latter can be exploited to compute a principled regularization of the observation entropy to improve performance. With this work, we provide both a flexible approach to bring advances in state entropy maximization to the POMDP setting and a theoretical characterization of its intrinsic limits.
Riccardo Zamboni, Duilio Cirino, Marcello Restelli, Mirco Mutti
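
The surrogate objective is simple to state in code: with a known observation function, the observation distribution induced by a policy is a linear image of its state distribution, and one maximises the entropy of the former. A toy illustration (all numbers synthetic):

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

rng = np.random.default_rng(0)
n_states, n_obs = 6, 4
O = rng.dirichlet(np.ones(n_obs), size=n_states)  # O[s, o] = P(observe o | state s)

d_state = rng.dirichlet(np.ones(n_states))        # state distribution under a policy
d_obs = d_state @ O                               # induced observation distribution

# the surrogate maximises entropy(d_obs); how far it can sit from entropy(d_state)
# depends on properties of O -- the gap the paper's bounds control
print(entropy(d_state), entropy(d_obs))
```
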
null
null
2406.12803
null
null
http://arxiv.org/abs/2406.12803v1
2024-06-18T17:15:00Z
2024-06-18T17:15:00Z
Scalable Rule Lists Learning with Sampling
Learning interpretable models has become a major focus of machine learning research, given the increasing prominence of machine learning in socially important decision-making. Among interpretable models, rule lists are among the best known and most easily interpretable. However, finding optimal rule lists is computationally challenging, and current approaches are impractical for large datasets. We present a novel and scalable approach to learning nearly optimal rule lists from large datasets. Our algorithm uses sampling to efficiently obtain an approximation of the optimal rule list with rigorous guarantees on the quality of the approximation. In particular, our algorithm is guaranteed to find a rule list with accuracy very close to that of the optimal rule list when a rule list with high accuracy exists. Our algorithm builds on the VC-dimension of rule lists, for which we prove novel upper and lower bounds. Our experimental evaluation on large datasets shows that our algorithm identifies nearly optimal rule lists with a speed-up of up to two orders of magnitude over state-of-the-art exact approaches. Moreover, our algorithm is as fast as, and sometimes faster than, recent heuristic approaches, while reporting higher-quality rule lists. In addition, the rules reported by our algorithm are more similar to the rules in the optimal rule list than the rules from heuristic approaches.
Leonardo Pellegrina, Fabio Vandin
null
null
2406.12807
null
null
http://arxiv.org/pdf/2406.12807v1
2024-06-18T17:22:55Z
2024-06-18T17:22:55Z
Probabilistic Temporal Prediction of Continuous Disease Trajectories and Treatment Effects Using Neural SDEs
Personalized medicine based on medical images, including predicting future individualized clinical disease progression and treatment response, would have an enormous impact on healthcare and drug development, particularly for diseases (e.g. multiple sclerosis (MS)) with long-term, complex, heterogeneous evolutions and no cure. In this work, we present the first stochastic causal temporal framework to model the continuous temporal evolution of disease progression via Neural Stochastic Differential Equations (NSDE). The proposed causal inference model takes as input the patient's high dimensional images (MRI) and tabular data, and predicts both factual and counterfactual progression trajectories on different treatments in latent space. The NSDE permits the estimation of high-confidence personalized trajectories and treatment effects. Extensive experiments were performed on a large, multi-centre, proprietary dataset of patient 3D MRI and clinical data acquired during several randomized clinical trials for MS treatments. Our results present the first successful uncertainty-based causal Deep Learning (DL) model to: (a) accurately predict future patient MS disability evolution (e.g. EDSS) and treatment effects leveraging baseline MRI, and (b) permit the discovery of subgroups of patients for which the model has high confidence in their response to treatment even in clinical trials which did not reach their clinical endpoints.
Joshua Durso-Finley, Berardino Barile, Jean-Pierre Falet, Douglas L. Arnold, Nick Pawlowski, Tal Arbel
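
A generic sketch of the modelling backbone named above, a latent neural SDE integrated with Euler-Maruyama; the dimensions and architectures are illustrative assumptions, with none of the paper's clinical specifics:

```python
import torch
import torch.nn as nn

class NeuralSDE(nn.Module):
    """Latent dynamics dz = f(z,t) dt + g(z,t) dW with learned drift f and diffusion g."""

    def __init__(self, dim=16, hidden=64):
        super().__init__()
        self.drift = nn.Sequential(nn.Linear(dim + 1, hidden), nn.Tanh(),
                                   nn.Linear(hidden, dim))
        self.diffusion = nn.Sequential(nn.Linear(dim + 1, hidden), nn.Tanh(),
                                       nn.Linear(hidden, dim), nn.Softplus())

    def forward(self, z0, t_grid):
        """Euler-Maruyama integration over t_grid; returns the sampled trajectory."""
        z, path = z0, [z0]
        for t0, t1 in zip(t_grid[:-1], t_grid[1:]):
            dt = t1 - t0
            zt = torch.cat([z, t0.expand(z.shape[0], 1)], dim=1)
            z = z + self.drift(zt) * dt + self.diffusion(zt) * torch.randn_like(z) * dt.sqrt()
            path.append(z)
        return torch.stack(path)

z0 = torch.zeros(8, 16)                 # batch of latent initial states
trajectories = NeuralSDE()(z0, torch.linspace(0.0, 1.0, 50))
print(trajectories.shape)               # torch.Size([50, 8, 16])
```
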
null
null
2406.12808
null
null
http://arxiv.org/pdf/2406.12808v3
2024-06-21T08:57:40Z
2024-06-18T17:23:50Z
Graph Neural Networks in Histopathology: Emerging Trends and Future Directions
Histopathological analysis of Whole Slide Images (WSIs) has seen a surge in the utilization of deep learning methods, particularly Convolutional Neural Networks (CNNs). However, CNNs often fall short in capturing the intricate spatial dependencies inherent in WSIs. Graph Neural Networks (GNNs) present a promising alternative, adept at directly modeling pairwise interactions and effectively discerning the topological tissue and cellular structures within WSIs. Recognizing the pressing need for deep learning techniques that harness the topological structure of WSIs, the application of GNNs in histopathology has experienced rapid growth. In this comprehensive review, we survey GNNs in histopathology, discuss their applications, and explore emerging trends that pave the way for future advancements in the field. We begin by elucidating the fundamentals of GNNs and their potential applications in histopathology. Leveraging quantitative literature analysis, we identify four emerging trends: Hierarchical GNNs, Adaptive Graph Structure Learning, Multimodal GNNs, and Higher-order GNNs. Through an in-depth exploration of these trends, we offer insights into the evolving landscape of GNNs in histopathological analysis. Based on our findings, we propose future directions to propel the field forward. Our analysis serves to guide researchers and practitioners towards innovative approaches and methodologies, fostering advancements in histopathological analysis through the lens of graph neural networks.
Siemen Brussee, Giorgio Buzzanca, Anne M. R. Schrader, Jesper Kers
null
null
2406.12814
null
null
http://arxiv.org/pdf/2406.12814v1
2024-06-18T17:32:48Z
2024-06-18T17:32:48Z
Adversarial Attacks on Multimodal Agents
Vision-enabled language models (VLMs) are now used to build autonomous multimodal agents capable of taking actions in real environments. In this paper, we show that multimodal agents raise new safety risks, even though attacking agents is more challenging than prior attacks due to limited access to and knowledge about the environment. Our attacks use adversarial text strings to guide gradient-based perturbation over one trigger image in the environment: (1) our captioner attack attacks white-box captioners if they are used to process images into captions as additional inputs to the VLM; (2) our CLIP attack attacks a set of CLIP models jointly, which can transfer to proprietary VLMs. To evaluate the attacks, we curated VisualWebArena-Adv, a set of adversarial tasks based on VisualWebArena, an environment for web-based multimodal agent tasks. Within an L-infinity norm of $16/256$ on a single image, the captioner attack can make a captioner-augmented GPT-4V agent execute the adversarial goals with a 75% success rate. When we remove the captioner or use GPT-4V to generate its own captions, the CLIP attack can achieve success rates of 21% and 43%, respectively. Experiments on agents based on other VLMs, such as Gemini-1.5, Claude-3, and GPT-4o, show interesting differences in their robustness. Further analysis reveals several key factors contributing to the attack's success, and we also discuss the implications for defenses. Project page: https://chenwu.io/attack-agent Code and data: https://github.com/ChenWu98/agent-attack
Chen Henry Wu, Jing Yu Koh, Ruslan Salakhutdinov, Daniel Fried, Aditi Raghunathan
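
A hedged sketch of the generic mechanism behind the CLIP attack: L-infinity-bounded PGD pushing an image embedding towards an adversarial text embedding. The linear "encoder" below is a runnable stand-in for a real CLIP image encoder (the paper attacks a set of CLIP models jointly):

```python
import torch
import torch.nn.functional as F

# stand-in for a CLIP image encoder; a real attack would ensemble several CLIP models
encoder = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 64 * 64, 512))

def clip_attack(image, target_text_emb, eps=16 / 256, alpha=1 / 255, steps=50):
    """L-infinity PGD maximising cosine similarity to the adversarial text embedding."""
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        emb = encoder((image + delta).clamp(0, 1))
        loss = -F.cosine_similarity(emb, target_text_emb).mean()
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()   # descend on the negative similarity
            delta.clamp_(-eps, eps)              # stay inside the L-infinity ball
            delta.grad.zero_()
    return (image + delta).clamp(0, 1).detach()

adv = clip_attack(torch.rand(1, 3, 64, 64), torch.randn(1, 512))
```
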
null
null
2406.12815
null
null
http://arxiv.org/pdf/2406.12815v1
2024-06-18T17:35:52Z
2024-06-18T17:35:52Z
Privacy Preserving Federated Learning in Medical Imaging with Uncertainty Estimation
Machine learning (ML) and Artificial Intelligence (AI) have fueled remarkable advancements, particularly in healthcare. Within medical imaging, ML models hold the promise of improving disease diagnoses, treatment planning, and post-treatment monitoring. Various computer vision tasks like image classification, object detection, and image segmentation are poised to become routine in clinical analysis. However, privacy concerns surrounding patient data hinder the assembly of large training datasets needed for developing and training accurate, robust, and generalizable models. Federated Learning (FL) emerges as a compelling solution, enabling organizations to collaborate on ML model training by sharing model training information (gradients) rather than data (e.g., medical images). FL's distributed learning framework facilitates inter-institutional collaboration while preserving patient privacy. However, FL, while robust in privacy preservation, faces several challenges. Sensitive information can still be gleaned from shared gradients that are passed on between organizations during model training. Additionally, in medical imaging, quantifying model confidence/uncertainty accurately is crucial due to the noise and artifacts present in the data. Uncertainty estimation in FL encounters unique hurdles due to data heterogeneity across organizations. This paper offers a comprehensive review of FL, privacy preservation, and uncertainty estimation, with a focus on medical imaging. Alongside a survey of current research, we identify gaps in the field and suggest future directions for FL research to enhance privacy and address noisy medical imaging data challenges.
Nikolas Koutsoubis, Yasin Yilmaz, Ravi P. Ramachandran, Matthew Schabath, Ghulam Rasool
null
null
2406.12816
null
null
http://arxiv.org/pdf/2406.12816v1
2024-06-18T17:36:09Z
2024-06-18T17:36:09Z
Neural Approximate Mirror Maps for Constrained Diffusion Models
Diffusion models excel at creating visually convincing images, but they often struggle to meet subtle constraints inherent in the training data. Such constraints could be physics-based (e.g., satisfying a PDE), geometric (e.g., respecting symmetry), or semantic (e.g., including a particular number of objects). When the training data all satisfy a certain constraint, enforcing this constraint on a diffusion model not only improves its distribution-matching accuracy but also makes it more reliable for generating valid synthetic data and solving constrained inverse problems. However, existing methods for constrained diffusion models are inflexible with different types of constraints. Recent work proposed to learn mirror diffusion models (MDMs) in an unconstrained space defined by a mirror map and to impose the constraint with an inverse mirror map, but analytical mirror maps are challenging to derive for complex constraints. We propose neural approximate mirror maps (NAMMs) for general constraints. Our approach only requires a differentiable distance function from the constraint set. We learn an approximate mirror map that pushes data into an unconstrained space and a corresponding approximate inverse that maps data back to the constraint set. A generative model, such as an MDM, can then be trained in the learned mirror space and its samples restored to the constraint set by the inverse map. We validate our approach on a variety of constraints, showing that compared to an unconstrained diffusion model, a NAMM-based MDM substantially improves constraint satisfaction. We also demonstrate how existing diffusion-based inverse-problem solvers can be easily applied in the learned mirror space to solve constrained inverse problems.
Berthy T. Feng, Ricardo Baptista, Katherine L. Bouman
null
null
2406.12832
null
null
http://arxiv.org/pdf/2406.12832v1
2024-06-18T17:52:59Z
2024-06-18T17:52:59Z
LaMDA: Large Model Fine-Tuning via Spectrally Decomposed Low-Dimensional Adaptation
Low-rank adaptation (LoRA) has become the default approach to fine-tune large language models (LLMs) due to its significant reduction in trainable parameters. However, trainable parameter demand for LoRA increases with increasing model embedding dimensions, leading to high compute costs. Additionally, its backward updates require storing high-dimensional intermediate activations and optimizer states, demanding high peak GPU memory. In this paper, we introduce large model fine-tuning via spectrally decomposed low-dimensional adaptation (LaMDA), a novel approach to fine-tuning large language models, which leverages low-dimensional adaptation to achieve significant reductions in trainable parameters and peak GPU memory footprint. LaMDA freezes a first projection matrix (PMA) in the adaptation path while introducing a low-dimensional trainable square matrix, resulting in substantial reductions in trainable parameters and peak GPU memory usage. LaMDA gradually freezes a second projection matrix (PMB) during the early fine-tuning stages, reducing the compute cost associated with weight updates to enhance parameter efficiency further. We also present an enhancement, LaMDA++, incorporating a "lite-weight" adaptive rank allocation for the LoRA path via normalized spectrum analysis of pre-trained model weights. We evaluate LaMDA/LaMDA++ across various tasks, including natural language understanding with the GLUE benchmark, text summarization, natural language generation, and complex reasoning on different LLMs. Results show that LaMDA matches or surpasses the performance of existing alternatives while requiring up to 17.7x fewer parameter updates and up to 1.32x lower peak GPU memory usage during fine-tuning. Code will be publicly available.
[ "['Seyedarmin Azizi' 'Souvik Kundu' 'Massoud Pedram']" ]
null
null
2406.12835
null
null
http://arxiv.org/pdf/2406.12835v1
2024-06-18T17:54:33Z
2024-06-18T17:54:33Z
Influence Maximization via Graph Neural Bandits
We consider a ubiquitous scenario in the study of Influence Maximization (IM), in which there is limited knowledge about the topology of the diffusion network. We set the IM problem in a multi-round diffusion campaign, aiming to maximize the number of distinct users that are influenced. Leveraging the capability of bandit algorithms to effectively balance the objectives of exploration and exploitation, as well as the expressivity of neural networks, our study explores the application of neural bandit algorithms to the IM problem. We propose the framework IM-GNB (Influence Maximization with Graph Neural Bandits), where we provide an estimate of the users' probabilities of being influenced by influencers (also known as diffusion seeds). This initial estimate forms the basis for constructing both an exploitation graph and an exploration one. Subsequently, IM-GNB handles the exploration-exploitation tradeoff, by selecting seed nodes in real-time using Graph Convolutional Networks (GCN), in which the pre-estimated graphs are employed to refine the influencers' estimated rewards in each contextual setting. Through extensive experiments on two large real-world datasets, we demonstrate the effectiveness of IM-GNB compared with other baseline methods, significantly improving the spread outcome of such diffusion campaigns, when the underlying network is unknown.
[ "['Yuting Feng' 'Vincent Y. F. Tan' 'Bogdan Cautis']" ]
null
null
2406.12837
null
null
http://arxiv.org/pdf/2406.12837v3
2024-07-08T04:55:34Z
2024-06-18T17:55:15Z
LayerMerge: Neural Network Depth Compression through Layer Pruning and Merging
Recent works show that reducing the number of layers in a convolutional neural network can enhance efficiency while maintaining the performance of the network. Existing depth compression methods remove redundant non-linear activation functions and merge the consecutive convolution layers into a single layer. However, these methods suffer from a critical drawback: the kernel size of the merged layers becomes larger, significantly undermining the latency reduction gained from reducing the depth of the network. We show that this problem can be addressed by jointly pruning convolution layers and activation functions. To this end, we propose LayerMerge, a novel depth compression method that selects which activation layers and convolution layers to remove, to achieve a desired inference speed-up while minimizing performance loss. Since the corresponding selection problem involves an exponential search space, we formulate a novel surrogate optimization problem and efficiently solve it via dynamic programming. Empirical results demonstrate that our method consistently outperforms existing depth compression and layer pruning methods on various network architectures, both on image classification and generation tasks. We release the code at https://github.com/snu-mllab/LayerMerge.
[ "['Jinuk Kim' 'Marwa El Halabi' 'Mingi Ji' 'Hyun Oh Song']" ]
null
null
2406.12839
null
null
http://arxiv.org/pdf/2406.12839v1
2024-06-18T17:56:10Z
2024-06-18T17:56:10Z
Evaluating the design space of diffusion-based generative models
Most existing theoretical investigations of the accuracy of diffusion models, albeit significant, assume the score function has been approximated to a certain accuracy, and then use this a priori bound to control the error of generation. This article instead provides a first quantitative understanding of the whole generation process, i.e., both training and sampling. More precisely, it conducts a non-asymptotic convergence analysis of denoising score matching under gradient descent. In addition, a refined sampling error analysis for variance exploding models is also provided. The combination of these two results yields a full error analysis, which elucidates (again, but this time theoretically) how to design the training and sampling processes for effective generation. For instance, our theory implies a preference toward noise distribution and loss weighting that qualitatively agree with the ones used in [Karras et al. 2022]. It also provides some perspectives on why the time and variance schedule used in [Karras et al. 2022] could be better tuned than the pioneering version in [Song et al. 2020].
[ "['Yuqing Wang' 'Ye He' 'Molei Tao']" ]
null
null
2406.12841
null
null
http://arxiv.org/pdf/2406.12841v1
2024-06-18T17:57:11Z
2024-06-18T17:57:11Z
Demystifying Higher-Order Graph Neural Networks
Higher-order graph neural networks (HOGNNs) are an important class of GNN models that harness polyadic relations between vertices beyond plain edges. They have been used to eliminate issues such as over-smoothing or over-squashing, to significantly enhance the accuracy of GNN predictions, to improve the expressiveness of GNN architectures, and for numerous other goals. A plethora of HOGNN models have been introduced, and they come with diverse neural architectures, and even with different notions of what the "higher-order" means. This richness makes it very challenging to appropriately analyze and compare HOGNN models, and to decide in what scenario to use specific ones. To alleviate this, we first design an in-depth taxonomy and a blueprint for HOGNNs. This facilitates designing models that maximize performance. Then, we use our taxonomy to analyze and compare the available HOGNN models. The outcomes of our analysis are synthesized in a set of insights that help to select the most beneficial GNN model in a given scenario, and a comprehensive list of challenges and opportunities for further research into more powerful HOGNNs.
[ "['Maciej Besta' 'Florian Scheidl' 'Lukas Gianinazzi' 'Shachar Klaiman'\n 'Jürgen Müller' 'Torsten Hoefler']" ]
null
null
2406.12843
null
null
http://arxiv.org/pdf/2406.12843v1
2024-06-18T17:57:49Z
2024-06-18T17:57:49Z
Can Go AIs be adversarially robust?
Prior work found that superhuman Go AIs like KataGo can be defeated by simple adversarial strategies. In this paper, we study if simple defenses can improve KataGo's worst-case performance. We test three natural defenses: adversarial training on hand-constructed positions, iterated adversarial training, and changing the network architecture. We find that some of these defenses are able to protect against previously discovered attacks. Unfortunately, we also find that none of these defenses are able to withstand adaptive attacks. In particular, we are able to train new adversaries that reliably defeat our defended agents by causing them to blunder in ways humans would not. Our results suggest that building robust AI systems is challenging even in narrow domains such as Go. For interactive examples of attacks and a link to our codebase, see https://goattack.far.ai.
[ "['Tom Tseng' 'Euan McLean' 'Kellin Pelrine' 'Tony T. Wang' 'Adam Gleave']" ]
null
null
2406.12844
null
null
http://arxiv.org/pdf/2406.12844v1
2024-06-18T17:58:09Z
2024-06-18T17:58:09Z
Synergizing Foundation Models and Federated Learning: A Survey
The recent development of Foundation Models (FMs), represented by large language models, vision transformers, and multimodal models, has been making a significant impact on both academia and industry. Compared with small-scale models, FMs have a much stronger demand for high-volume data during the pre-training phase. Although general FMs can be pre-trained on data collected from open sources such as the Internet, domain-specific FMs need proprietary data, posing a practical challenge regarding the amount of data available due to privacy concerns. Federated Learning (FL) is a collaborative learning paradigm that breaks the barrier of data availability from different participants. Therefore, it provides a promising solution to customize and adapt FMs to a wide range of domain-specific tasks using distributed datasets whilst preserving privacy. This survey paper discusses the potentials and challenges of synergizing FL and FMs and summarizes core techniques, future directions, and applications. A periodically updated paper collection on FM-FL is available at https://github.com/lishenghui/awesome-fm-fl.
[ "['Shenghui Li' 'Fanghua Ye' 'Meng Fang' 'Jiaxu Zhao' 'Yun-Hin Chan'\n 'Edith C. -H. Ngai' 'Thiemo Voigt']" ]
null
null
2406.12845
null
null
http://arxiv.org/pdf/2406.12845v1
2024-06-18T17:58:28Z
2024-06-18T17:58:28Z
Interpretable Preferences via Multi-Objective Reward Modeling and Mixture-of-Experts
Reinforcement learning from human feedback (RLHF) has emerged as the primary method for aligning large language models (LLMs) with human preferences. The RLHF process typically starts by training a reward model (RM) using human preference data. Conventional RMs are trained on pairwise responses to the same user request, with relative ratings indicating which response humans prefer. The trained RM serves as a proxy for human preferences. However, due to the black-box nature of RMs, their outputs lack interpretability, as humans cannot intuitively understand why an RM thinks a response is good or not. As RMs act as human preference proxies, we believe they should be human-interpretable to ensure that their internal decision processes are consistent with human preferences and to prevent reward hacking in LLM alignment. To build RMs with interpretable preferences, we propose a two-stage approach: i) train an Absolute-Rating Multi-Objective Reward Model (ArmoRM) with multi-dimensional absolute-rating data, each dimension corresponding to a human-interpretable objective (e.g., honesty, verbosity, safety); ii) employ a Mixture-of-Experts (MoE) strategy with a gating network that automatically selects the most suitable reward objectives based on the context. We efficiently trained an ArmoRM with Llama-3 8B and a gating network consisting of a shallow MLP on top of the ArmoRM. Our trained model, ArmoRM-Llama3-8B, obtains state-of-the-art performance on RewardBench, a benchmark evaluating RMs for language modeling. Notably, the performance of our model surpasses the LLM-as-a-judge method with GPT-4 judges by a margin, and approaches the performance of the much larger Nemotron-4 340B reward model.
[ "['Haoxiang Wang' 'Wei Xiong' 'Tengyang Xie' 'Han Zhao' 'Tong Zhang']" ]
null
null
2406.12876
null
null
http://arxiv.org/abs/2406.12876v1
2024-05-08T21:11:00Z
2024-05-08T21:11:00Z
Controlling Chaos Using Edge Computing Hardware
Machine learning provides a data-driven approach for creating a digital twin of a system - a digital model used to predict the system behavior. Having an accurate digital twin can drive many applications, such as controlling autonomous systems. Often the size, weight, and power consumption of the digital twin or related controller must be minimized, ideally realized on embedded computing hardware that can operate without a cloud-computing connection. Here, we show that a nonlinear controller based on next-generation reservoir computing can tackle a difficult control problem: controlling a chaotic system to an arbitrary time-dependent state. The model is accurate, yet it is small enough to be evaluated on a field-programmable gate array typically found in embedded devices. Furthermore, the model only requires 25.0 $\pm$ 7.0 nJ per evaluation, well below other algorithms, even without systematic power optimization. Our work represents the first step in deploying efficient machine learning algorithms to the computing "edge."
[ "['Robert M. Kent' 'Wendson A. S. Barbosa' 'Daniel J. Gauthier']" ]
null
null
2406.12896
null
null
http://arxiv.org/pdf/2406.12896v1
2024-06-07T10:14:30Z
2024-06-07T10:14:30Z
Leveraging Pedagogical Theories to Understand Student Learning Process with Graph-based Reasonable Knowledge Tracing
Knowledge tracing (KT) is a crucial task in intelligent education, focusing on predicting students' performance on given questions to trace their evolving knowledge. The advancement of deep learning in this field has led to deep-learning knowledge tracing (DLKT) models that prioritize high predictive accuracy. However, many existing DLKT methods overlook the fundamental goal of tracking students' dynamic knowledge mastery. These models either do not explicitly model the knowledge mastery tracing process or yield unreasonable results that educators find difficult to comprehend and apply in real teaching scenarios. In response, our research conducts a preliminary analysis of mainstream KT approaches to highlight and explain such unreasonableness. We introduce GRKT, a graph-based reasonable knowledge tracing method to address these issues. By leveraging graph neural networks, our approach delves into the mutual influences of knowledge concepts, offering a more accurate representation of how knowledge mastery evolves throughout the learning process. Additionally, we propose a fine-grained and psychological three-stage modeling process, comprising knowledge retrieval, memory strengthening, and knowledge learning/forgetting, to conduct a more reasonable knowledge tracing process. Comprehensive experiments demonstrate that GRKT outperforms eleven baselines across three datasets, not only enhancing predictive accuracy but also generating more reasonable knowledge tracing results. This makes our model a promising advancement for practical implementation in educational settings. The source code is available at https://github.com/JJCui96/GRKT.
[ "['Jiajun Cui' 'Hong Qian' 'Bo Jiang' 'Wei Zhang']" ]
null
null
2406.12897
null
null
http://arxiv.org/pdf/2406.12897v1
2024-06-07T19:23:22Z
2024-06-07T19:23:22Z
Advancing Histopathology-Based Breast Cancer Diagnosis: Insights into Multi-Modality and Explainability
It is imperative that breast cancer be detected precisely and in a timely manner to improve patient outcomes. Diagnostic methodologies have traditionally relied on unimodal approaches; however, medical data analytics is integrating diverse data sources beyond conventional imaging. The use of multi-modal techniques, integrating both image and non-image data, marks a transformative advancement in breast cancer diagnosis. The purpose of this review is to explore the burgeoning field of multimodal techniques, particularly the fusion of histopathology images with non-image data. Further, Explainable AI (XAI) will be used to elucidate the decision-making processes of complex algorithms, emphasizing the necessity of explainability in diagnostic processes. This review utilizes multi-modal data and emphasizes explainability to enhance diagnostic accuracy, clinician confidence, and patient engagement, ultimately fostering more personalized treatment strategies for breast cancer, while also identifying research gaps in multi-modality and explainability, guiding future studies, and contributing to the strategic direction of the field.
[ "['Faseela Abdullakutty' 'Younes Akbari' 'Somaya Al-Maadeed'\n 'Ahmed Bouridane' 'Rifat Hamoudi']" ]
null
null
2406.12900
null
null
http://arxiv.org/pdf/2406.12900v1
2024-06-09T12:08:56Z
2024-06-09T12:08:56Z
Factor Graph Optimization of Error-Correcting Codes for Belief Propagation Decoding
The design of optimal linear block codes capable of being efficiently decoded is of major concern, especially for short block lengths. As near capacity-approaching codes, Low-Density Parity-Check (LDPC) codes possess several advantages over other families of codes, the most notable being their efficient decoding via Belief Propagation. While many LDPC code design methods exist, the development of efficient sparse codes that meet the constraints of modern short code lengths and accommodate new channel models remains a challenge. In this work, we propose for the first time a data-driven approach for the design of sparse codes. We develop locally optimal codes with respect to Belief Propagation decoding via learning on the factor graph (also called the Tanner graph) under channel noise simulations. This is performed via a novel tensor representation of the Belief Propagation algorithm, optimized over finite fields via backpropagation coupled with an efficient line-search method. The proposed approach is shown to outperform existing popular codes in decoding performance by orders of magnitude and demonstrates the power of data-driven approaches for code design.
[ "['Yoni Choukroun' 'Lior Wolf']" ]
null
null
2406.12901
null
null
http://arxiv.org/pdf/2406.12901v1
2024-06-09T18:17:08Z
2024-06-09T18:17:08Z
Interpretable machine learning approach for electron antineutrino selection in a large liquid scintillator detector
Several neutrino detectors, KamLAND, Daya Bay, Double Chooz, RENO, and the forthcoming large-scale JUNO, rely on liquid scintillator to detect reactor antineutrino interactions. In this context, inverse beta decay represents the golden channel for antineutrino detection, providing a pair of correlated events, thus a strong experimental signature to distinguish the signal from a variety of backgrounds. However, given the low cross-section of antineutrino interactions, the development of a powerful event selection algorithm becomes imperative to achieve effective discrimination between signal and backgrounds. In this study, we introduce a machine learning (ML) model to achieve this goal: a fully connected neural network as a powerful signal-background discriminator for a large liquid scintillator detector. We demonstrate, using the JUNO detector as an example, that, despite the already high efficiency of a cut-based approach, the presented ML model can further improve the overall event selection efficiency. Moreover, it allows for the retention of signal events at the detector edges that would otherwise be rejected because of the overwhelming amount of background events in that region. We also present the first interpretable analysis of the ML approach for event selection in reactor neutrino experiments. This method provides insights into the decision-making process of the model and offers valuable information for improving and updating traditional event selection approaches.
[ "['A. Gavrikov' 'V. Cerrone' 'A. Serafini' 'R. Brugnera' 'A. Garfagnini'\n 'M. Grassi' 'B. Jelmini' 'L. Lastrucci' 'S. Aiello' 'G. Andronico'\n 'V. Antonelli' 'A. Barresi' 'D. Basilico' 'M. Beretta' 'A. Bergnoli'\n 'M. Borghesi' 'A. Brigatti' 'R. Bruno' 'A. Budano' 'B. Caccianiga'\n 'A. Cammi' 'R. Caruso' 'D. Chiesa' 'C. Clementi' 'S. Dusini' 'A. Fabbri'\n 'G. Felici' 'F. Ferraro' 'M. G. Giammarchi' 'N. Giugice'\n 'R. M. Guizzetti' 'N. Guardone' 'C. Landini' 'I. Lippi' 'S. Loffredo'\n 'L. Loi' 'P. Lombardi' 'C. Lombardo' 'F. Mantovani' 'S. M. Mari'\n 'A. Martini' 'L. Miramonti' 'M. Montuschi' 'M. Nastasi' 'D. Orestano'\n 'F. Ortica' 'A. Paoloni' 'E. Percalli' 'F. Petrucci' 'E. Previtali'\n 'G. Ranucci' 'A. C. Re' 'M. Redchuck' 'B. Ricci' 'A. Romani' 'P. Saggese'\n 'G. Sava' 'C. Sirignano' 'M. Sisti' 'L. Stanco' 'E. Stanescu Farilla'\n 'V. Strati' 'M. D. C. Torri' 'A. Triossi' 'C. Tuvé' 'C. Venettacci'\n 'G. Verde' 'L. Votano']" ]
null
null
2406.12902
null
null
http://arxiv.org/pdf/2406.12902v1
2024-06-10T06:43:25Z
2024-06-10T06:43:25Z
Can AI Beat Undergraduates in Entry-level Java Assignments? Benchmarking Large Language Models on JavaBench
Code generation benchmarks such as HumanEval are widely adopted to evaluate LLMs' capabilities. However, after consolidating the latest 24 benchmarks, we noticed three significant imbalances. First, imbalanced programming language. 95.8% of benchmarks involve Python, while only 5 benchmarks involve Java. Second, imbalanced code granularity. Function-/statement-level benchmarks account for over 83.3% of benchmarks. Only a mere handful extends to class-/project-levels, and all are limited to Python. Third, lacking advanced features. Existing benchmarks primarily assess basic coding skills, while overlooking advanced Object-Oriented Programming (OOP) features (i.e., encapsulation, inheritance, and polymorphism). To fill these gaps, we propose JavaBench, a project-level Java benchmark that exercises OOP features. It comprises four Java projects with 389 methods in 106 Java classes. The test coverage is up to 92%, and JavaBench is attested by 282 undergraduate students, reaching a 90.93/100 average score (i.e., pass rate against the test suite), ensuring the quality of documentation, code skeleton, and tests. To better evaluate LLM's capability against JavaBench, we introduce a systematic evaluation design covering three context settings and five synthesis strategies at two granularities using three hierarchical metrics. Our extensive experiment yields several interesting findings. First, we noticed that regarding project-level Java programming, LLMs are far behind undergraduate students (no project can be correctly completed by any studied LLMs, and at most 41.17% Pass@5 in a more relaxed evaluation). Second, using method signature as prompt context may strike an ideal balance for project-level code generation. JavaBench is publicly available at https://github.com/java-bench/JavaBench.
[ "['Jialun Cao' 'Zhiyong Chen' 'Jiarong Wu' 'Shing-chi Cheung' 'Chang Xu']" ]
null
null
2406.12904
null
null
http://arxiv.org/pdf/2406.12904v1
2024-06-11T10:00:06Z
2024-06-11T10:00:06Z
Meent: Differentiable Electromagnetic Simulator for Machine Learning
Electromagnetic (EM) simulation plays a crucial role in analyzing and designing devices with sub-wavelength scale structures such as solar cells, semiconductor devices, image sensors, future displays and integrated photonic devices. Specifically, optics problems such as estimating semiconductor device structures and designing nanophotonic devices provide intriguing research topics with far-reaching real world impact. Traditional algorithms for such tasks require iteratively refining parameters through simulations, which often yield sub-optimal results due to the high computational cost of both the algorithms and EM simulations. Machine learning (ML) emerged as a promising candidate to mitigate these challenges, and the optics research community has increasingly adopted ML algorithms to obtain results surpassing classical methods across various tasks. To foster a synergistic collaboration between the optics and ML communities, it is essential to have EM simulation software that is user-friendly for both research communities. To this end, we present Meent, an EM simulation software that employs rigorous coupled-wave analysis (RCWA). Developed in Python and equipped with automatic differentiation (AD) capabilities, Meent serves as a versatile platform for integrating ML into optics research and vice versa. To demonstrate its utility as a research platform, we present three applications of Meent: 1) generating a dataset for training a neural operator, 2) serving as an environment for the reinforcement learning of nanophotonic device optimization, and 3) providing a solution for inverse problems with gradient-based optimizers. These applications highlight Meent's potential to advance both EM simulation and ML methodologies. The code is available at https://github.com/kc-ml2/meent with the MIT license to promote the cross-pollination of ideas among academic researchers and industry practitioners.
[ "['Yongha Kim' 'Anthony W. Jung' 'Sanmun Kim' 'Kevin Octavian'\n 'Doyoung Heo' 'Chaejin Park' 'Jeongmin Shin' 'Sunghyun Nam'\n 'Chanhyung Park' 'Juho Park' 'Sangjun Han' 'Jinmyoung Lee' 'Seolho Kim'\n 'Min Seok Jang' 'Chan Y. Park']" ]
null
null
2406.12905
null
null
http://arxiv.org/pdf/2406.12905v1
2024-06-11T21:13:34Z
2024-06-11T21:13:34Z
PufferLib: Making Reinforcement Learning Libraries and Environments Play Nice
You have an environment, a model, and a reinforcement learning library that are designed to work together but don't. PufferLib makes them play nice. The library provides one-line environment wrappers that eliminate common compatibility problems and fast vectorization to accelerate training. With PufferLib, you can use familiar libraries like CleanRL and SB3 to scale from classic benchmarks like Atari and Procgen to complex simulators like NetHack and Neural MMO. We release pip packages and prebuilt images with dependencies for dozens of environments. All of our code is free and open-source software under the MIT license, complete with baselines, documentation, and support at pufferai.github.io.
[ "['Joseph Suarez']" ]
null
null
2406.12906
null
null
http://arxiv.org/pdf/2406.12906v1
2024-06-12T02:34:48Z
2024-06-12T02:34:48Z
Entropy-statistical approach to phase-locking detection of pulse oscillations: application for the analysis of biosignal synchronization
In this study, a new method for analyzing synchronization in oscillator systems is proposed, using the example of modeling the dynamics of a circuit of two resistively coupled pulse oscillators. The dynamic characteristic of synchronization is fuzzy entropy (FuzzyEn), calculated from a time series composed of the ratios of the number of pulse periods (subharmonic ratio, SHR) during phase-locking intervals. Low entropy values indicate strong synchronization, whereas high entropy values suggest weak synchronization between the two oscillators. This method effectively visualizes synchronized modes of the circuit using entropy maps of synchronization states. Additionally, a classification of synchronization states is proposed based on the dependencies of FuzzyEn on the length of embedding vectors of SHR time series. An extension of this method for analyzing non-relaxation (non-spike) type signals is illustrated using the example of phase-phase coupling rhythms of the local field potential of rat hippocampus. The entropy-statistical approach using rational fractions and pulse signal forms makes this method promising for analyzing biosignal synchronization and for implementing the algorithm on mobile digital platforms.
[ "['Petr Boriskov' 'Vadim Putrolaynen' 'Andrei Velichko' 'Kristina Peltonen']" ]
null
null
2406.12907
null
null
http://arxiv.org/pdf/2406.12907v1
2024-06-12T13:30:48Z
2024-06-12T13:30:48Z
Reconciling Kaplan and Chinchilla Scaling Laws
Kaplan et al. [2020] (`Kaplan') and Hoffmann et al. [2022] (`Chinchilla') studied the scaling behavior of transformers trained on next-token language prediction. These studies produced different estimates for how the number of parameters ($N$) and training tokens ($D$) should be set to achieve the lowest possible loss for a given compute budget ($C$). Kaplan: $N_\text{optimal} \propto C^{0.73}$, Chinchilla: $N_\text{optimal} \propto C^{0.50}$. This note finds that much of this discrepancy can be attributed to Kaplan counting non-embedding rather than total parameters, combined with their analysis being performed at small scale. Simulating the Chinchilla study under these conditions produces biased scaling coefficients close to Kaplan's. Hence, this note reaffirms Chinchilla's scaling coefficients, by explaining the cause of Kaplan's original overestimation.
[ "['Tim Pearce' 'Jinyeop Song']" ]
null
null
2406.12908
null
null
http://arxiv.org/pdf/2406.12908v1
2024-06-12T17:39:16Z
2024-06-12T17:39:16Z
Rating Multi-Modal Time-Series Forecasting Models (MM-TSFM) for Robustness Through a Causal Lens
AI systems are notorious for their fragility; minor input changes can potentially cause major output swings. When such systems are deployed in critical areas like finance, the consequences of their uncertain behavior could be severe. In this paper, we focus on multi-modal time-series forecasting, where imprecision due to noisy or incorrect data can lead to erroneous predictions, impacting stakeholders such as analysts, investors, and traders. Recently, it has been shown that beyond numeric data, graphical transformations can be used with advanced visual models to achieve better performance. In this context, we introduce a rating methodology to assess the robustness of Multi-Modal Time-Series Forecasting Models (MM-TSFM) through causal analysis, which helps us understand and quantify the isolated impact of various attributes on the forecasting accuracy of MM-TSFM. We apply our novel rating method on a variety of numeric and multi-modal forecasting models in a large experimental setup (six input settings of control and perturbations, ten data distributions, time series from six leading stocks in three industries over a year of data, and five time-series forecasters) to draw insights on robust forecasting models and the context of their strengths. Within the scope of our study, our main result is that multi-modal (numeric + visual) forecasting, which was found to be more accurate than numeric forecasting in previous studies, can also be more robust in diverse settings. Our work will help different stakeholders of time-series forecasting understand the models' behaviors along trust (robustness) and accuracy dimensions to select an appropriate model for forecasting using our rating method, leading to improved decision-making.
[ "['Kausik Lakkaraju' 'Rachneet Kaur' 'Zhen Zeng' 'Parisa Zehtabi'\n 'Sunandita Patra' 'Biplav Srivastava' 'Marco Valtorta']" ]
null
null
2406.12909
null
null
http://arxiv.org/pdf/2406.12909v2
2024-06-28T17:58:27Z
2024-06-12T21:21:42Z
Scalable Training of Graph Foundation Models for Atomistic Materials Modeling: A Case Study with HydraGNN
We present our work on developing and training scalable graph foundation models (GFM) using HydraGNN, a multi-headed graph convolutional neural network architecture. HydraGNN expands the boundaries of graph neural network (GNN) in both training scale and data diversity. It abstracts over message passing algorithms, allowing both reproduction of and comparison across algorithmic innovations that define convolution in GNNs. This work discusses a series of optimizations that have allowed scaling up the GFM training to tens of thousands of GPUs on datasets that consist of hundreds of millions of graphs. Our GFMs use multi-task learning (MTL) to simultaneously learn graph-level and node-level properties of atomistic structures, such as the total energy and atomic forces. Using over 150 million atomistic structures for training, we illustrate the performance of our approach along with the lessons learned on two United States Department of Energy (US-DOE) supercomputers, namely the Perlmutter petascale system at the National Energy Research Scientific Computing Center and the Frontier exascale system at Oak Ridge National Laboratory. The HydraGNN architecture enables the GFM to achieve near-linear strong scaling performance using more than 2,000 GPUs on Perlmutter and 16,000 GPUs on Frontier. Hyperparameter optimization (HPO) was performed on over 64,000 GPUs on Frontier to select GFM architectures with high accuracy. Early stopping was applied on each GFM architecture for energy awareness in performing such an extreme-scale task. The training of an ensemble of highest-ranked GFM architectures continued until convergence to establish uncertainty quantification (UQ) capabilities with ensemble learning. Our contribution opens the door for rapidly developing, training, and deploying GFMs using large-scale computational resources to enable AI-accelerated materials discovery and design.
[ "['Massimiliano Lupo Pasini' 'Jong Youl Choi' 'Kshitij Mehta' 'Pei Zhang'\n 'David Rogers' 'Jonghyun Bae' 'Khaled Z. Ibrahim' 'Ashwin M. Aji'\n 'Karl W. Schulz' 'Jorda Polo' 'Prasanna Balaprakash']" ]
null
null
2406.12910
null
null
http://arxiv.org/pdf/2406.12910v1
2024-06-13T01:06:03Z
2024-06-13T01:06:03Z
Human-level molecular optimization driven by mol-gene evolution
De novo molecule generation allows the search for more drug-like hits across a vast chemical space. However, lead optimization is still required, and the process of optimizing molecular structures faces the challenge of balancing structural novelty with pharmacological properties. This study introduces the Deep Genetic Molecular Modification Algorithm (DGMM), which brings structure modification to the level of medicinal chemists. A discrete variational autoencoder (D-VAE) is used in DGMM to encode molecules as quantization code, mol-gene, which incorporates deep learning into genetic algorithms for flexible structural optimization. The mol-gene allows for the discovery of pharmacologically similar but structurally distinct compounds, and reveals the trade-offs of structural optimization in drug discovery. We demonstrate the effectiveness of the DGMM in several applications.
[ "['Jiebin Fang' 'Churu Mao' 'Yuchen Zhu' 'Xiaoming Chen' 'Chang-Yu Hsieh'\n 'Zhongjun Ma']" ]
null
null
2406.12911
null
null
http://arxiv.org/pdf/2406.12911v1
2024-06-13T07:52:33Z
2024-06-13T07:52:33Z
The Promise of Analog Deep Learning: Recent Advances, Challenges and Opportunities
Much of the present-day Artificial Intelligence (AI) utilizes artificial neural networks, which are sophisticated computational models designed to recognize patterns and solve complex problems by learning from data. However, a major bottleneck occurs during a device's calculation of weighted sums for forward propagation and its optimization procedure for backpropagation, especially for deep neural networks, or networks with numerous layers. Exploration into different methods of implementing neural networks is necessary for further advancement of the area. While a great deal of research exists on AI hardware in both directions, analog and digital implementations, much of the existing survey work lacks discussion of the progress of analog deep learning. To this end, we attempt to evaluate and specify the advantages and disadvantages, along with the current progress with regard to deep learning, for analog implementations. In this paper, our focus lies on the comprehensive examination of eight distinct analog deep learning methodologies across multiple key parameters. These parameters include attained accuracy levels, application domains, algorithmic advancements, computational speed, and considerations of energy efficiency and power consumption. We also identify the neural network-based experiments implemented using these hardware devices and discuss comparative performance achieved by the different analog deep learning methods along with an analysis of their current limitations. Overall, we find that Analog Deep Learning has great potential for future consumer-level applications, but there is still a long road ahead in terms of scalability. Most of the current implementations are proofs of concept and are not yet practically deployable for large-scale models.
[ "['Aditya Datar' 'Pramit Saha']" ]
null
null
2406.12913
null
null
http://arxiv.org/pdf/2406.12913v1
2024-06-13T09:51:51Z
2024-06-13T09:51:51Z
T-JEPA: A Joint-Embedding Predictive Architecture for Trajectory Similarity Computation
Trajectory similarity computation is an essential technique for analyzing moving patterns of spatial data across various applications such as traffic management, wildlife tracking, and location-based services. Modern methods often apply deep learning techniques to approximate heuristic metrics but struggle to learn more robust and generalized representations from the vast amounts of unlabeled trajectory data. Recent approaches focus on self-supervised learning methods such as contrastive learning, which have made significant advancements in trajectory representation learning. However, contrastive learning-based methods heavily depend on manually pre-defined data augmentation schemes, limiting the diversity of generated trajectories and resulting in learning from such variations in 2D Euclidean space, which prevents capturing high-level semantic variations. To address these limitations, we propose T-JEPA, a self-supervised trajectory similarity computation method employing Joint-Embedding Predictive Architecture (JEPA) to enhance trajectory representation learning. T-JEPA samples and predicts trajectory information in representation space, enabling the model to infer the missing components of trajectories at high-level semantics without relying on domain knowledge or manual effort. Extensive experiments conducted on three urban trajectory datasets and two Foursquare datasets demonstrate the effectiveness of T-JEPA in trajectory similarity computation.
[ "['Lihuan Li' 'Hao Xue' 'Yang Song' 'Flora Salim']" ]
null
null
2406.12914
null
null
http://arxiv.org/pdf/2406.12914v1
2024-06-13T11:41:20Z
2024-06-13T11:41:20Z
The Significance of Latent Data Divergence in Predicting System Degradation
Condition-Based Maintenance is pivotal in enabling the early detection of potential failures in engineering systems, where precise prediction of the Remaining Useful Life is essential for effective maintenance and operation. However, a predominant focus in the field centers on predicting the Remaining Useful Life using unprocessed or minimally processed data, frequently neglecting the intricate dynamics inherent in the dataset. In this work we introduce a novel methodology grounded in the analysis of statistical similarity within latent data from system components. Leveraging a specifically designed architecture based on a Vector Quantized Variational Autoencoder, we create a sequence of discrete vectors which is used to estimate system-specific priors. We infer the similarity between systems by evaluating the divergence of these priors, offering a nuanced understanding of individual system behaviors. The efficacy of our approach is demonstrated through experiments on the NASA commercial modular aero-propulsion system simulation (C-MAPSS) dataset. Our validation not only underscores the potential of our method in advancing the study of latent statistical divergence but also demonstrates its superiority over existing techniques.
[ "['Miguel Fernandes' 'Catarina Silva' 'Alberto Cardoso'\n 'Bernardete Ribeiro']" ]
null
null
2406.12915
null
null
http://arxiv.org/pdf/2406.12915v1
2024-06-13T17:54:09Z
2024-06-13T17:54:09Z
GROD: Enhancing Generalization of Transformer with Out-of-Distribution Detection
Transformer networks excel in natural language processing (NLP) and computer vision (CV) tasks. However, they face challenges in generalizing to Out-of-Distribution (OOD) datasets, that is, data whose distribution differs from that seen during training. OOD detection aims to distinguish data that deviates from the expected distribution, while maintaining optimal performance on in-distribution (ID) data. This paper introduces a novel approach based on OOD detection, termed the Generate Rounded OOD Data (GROD) algorithm, which significantly bolsters the generalization performance of transformer networks across various tasks. GROD is motivated by our new OOD detection Probably Approximately Correct (PAC) Theory for transformers. The transformer has learnability in terms of OOD detection; that is, when the data is sufficient, outliers can be well represented. By penalizing the misclassification of OOD data within the loss function and generating synthetic outliers, GROD guarantees learnability and refines the decision boundaries between inliers and outliers. This strategy demonstrates robust adaptability and general applicability across different data types. Evaluated across diverse OOD detection tasks in NLP and CV, GROD achieves SOTA regardless of data format. On average, it reduces the SOTA FPR@95 from 21.97% to 0.12% and improves AUROC from 93.62% to 99.98% on image classification tasks, and it improves the SOTA FPR@95 by 12.89% and AUROC by 2.27% in detecting semantic text outliers. The code is available at https://anonymous.4open.science/r/GROD-OOD-Detection-with-transformers-B70F.
[ "['Yijin Zhou' 'Yuguang Wang']" ]
null
null
2406.12916
null
null
http://arxiv.org/pdf/2406.12916v1
2024-06-13T18:00:05Z
2024-06-13T18:00:05Z
Opening the Black Box: predicting the trainability of deep neural networks with reconstruction entropy
An important challenge in machine learning is to predict the initial conditions under which a given neural network will be trainable. We present a method for predicting the trainable regime in parameter space for deep feedforward neural networks, based on reconstructing the input from subsequent activation layers via a cascade of single-layer auxiliary networks. For both MNIST and CIFAR10, we show that a single epoch of training of the shallow cascade networks is sufficient to predict the trainability of the deep feedforward network, thereby providing a significant reduction in overall training time. We achieve this by computing the relative entropy between reconstructed images and the original inputs, and show that this probe of information loss is sensitive to the phase behaviour of the network. Our results provide a concrete link between the flow of information and the trainability of deep neural networks, further elucidating the role of criticality in these systems.
[ "['Yanick Thurn' 'Ro Jefferson' 'Johanna Erdmenger']" ]
null
null
2406.12918
null
null
http://arxiv.org/pdf/2406.12918v1
2024-06-14T04:06:17Z
2024-06-14T04:06:17Z
Brain-Inspired Spike Echo State Network Dynamics for Aero-Engine Intelligent Fault Prediction
Aero-engine fault prediction aims to accurately predict the development trend of the future state of aero-engines, so as to diagnose faults in advance. Traditional aero-engine parameter prediction methods mainly use the nonlinear mapping relationship of time series data but generally ignore the adequate spatiotemporal features contained in aero-engine data. To this end, we propose a brain-inspired spike echo state network (Spike-ESN) model for aero-engine intelligent fault prediction, which is used to effectively capture the evolution process of aero-engine time series data in the framework of spatiotemporal dynamics. In the proposed approach, we design a spike input layer based on Poisson distribution inspired by the spike neural encoding mechanism of biological neurons, which can extract the useful temporal characteristics in aero-engine sequence data. Then, the temporal characteristics are input into a spike reservoir through the current calculation method of spike accumulation in neurons, which projects the data into a high-dimensional sparse space. In addition, we use the ridge regression method to read out the internal state of the spike reservoir. Finally, the experimental results of aero-engine states prediction demonstrate the superiority and potential of the proposed method.
[ "['Mo-Ran Liu' 'Tao Sun' 'Xi-Ming Sun']" ]
null
null
2406.12919
null
null
http://arxiv.org/pdf/2406.12919v1
2024-06-14T05:43:42Z
2024-06-14T05:43:42Z
Understanding active learning of molecular docking and its applications
With the advancing capabilities of computational methodologies and resources, ultra-large-scale virtual screening via molecular docking has emerged as a prominent strategy for in silico hit discovery. Given the exhaustive nature of ultra-large-scale virtual screening, active learning methodologies have garnered attention as a means to mitigate computational cost through iterative small-scale docking and machine learning model training. While the efficacy of active learning methodologies has been empirically validated in extant literature, a critical question remains as to how surrogate models can predict docking scores without considering three-dimensional structural features, such as receptor conformation and binding poses. In this paper, we thus investigate, through benchmark studies encompassing six receptor targets, how active learning methodologies effectively predict docking scores using only 2D structures and under what circumstances they may work particularly well. Our findings suggest that surrogate models tend to memorize structural patterns prevalent in high docking-scored compounds obtained during acquisition steps. Despite this tendency, surrogate models demonstrate utility in virtual screening, as exemplified in the identification of actives from the DUD-E dataset and high docking-scored compounds from the EnamineReal library, a significantly larger set than the initial screening pool. Our comprehensive analysis underscores the reliability and potential applicability of active learning methodologies in virtual screening campaigns.
[ "['Jeonghyeon Kim' 'Juno Nam' 'Seongok Ryu']" ]
null
null
2406.12921
null
null
http://arxiv.org/pdf/2406.12921v2
2024-07-06T15:14:20Z
2024-06-14T08:09:39Z
WindowMixer: Intra-Window and Inter-Window Modeling for Time Series Forecasting
Time series forecasting (TSF) is crucial in fields like economic forecasting, weather prediction, traffic flow analysis, and public health surveillance. Real-world time series data often include noise, outliers, and missing values, making accurate forecasting challenging. Traditional methods model point-to-point relationships, which limits their ability to capture complex temporal patterns and increases their susceptibility to noise. To address these issues, we introduce the WindowMixer model, built on an all-MLP framework. WindowMixer leverages the continuous nature of time series by examining temporal variations from a window-based perspective. It decomposes time series into trend and seasonal components, handling them individually. For trends, a fully connected (FC) layer makes predictions. For seasonal components, time windows are projected to produce window tokens, processed by Intra-Window-Mixer and Inter-Window-Mixer modules. The Intra-Window-Mixer models relationships within each window, while the Inter-Window-Mixer models relationships between windows. This approach captures intricate patterns and long-range dependencies in the data. Experiments show WindowMixer consistently outperforms existing methods in both long-term and short-term forecasting tasks.
[ "['Quangao Liu' 'Ruiqi Li' 'Maowei Jiang' 'Wei Yang' 'Chen Liang'\n 'LongLong Pang' 'Zhuozhang Zou']" ]
null
null
2406.12923
null
null
http://arxiv.org/pdf/2406.12923v1
2024-06-14T12:57:17Z
2024-06-14T12:57:17Z
Interpretable Cascading Mixture-of-Experts for Urban Traffic Congestion Prediction
Rapid urbanization has significantly escalated traffic congestion, underscoring the need for advanced congestion prediction services to bolster intelligent transportation systems. As one of the world's largest ride-hailing platforms, DiDi places great emphasis on the accuracy of congestion prediction to enhance the effectiveness and reliability of their real-time services, such as travel time estimation and route planning. Despite numerous efforts have been made on congestion prediction, most of them fall short in handling heterogeneous and dynamic spatio-temporal dependencies (e.g., periodic and non-periodic congestions), particularly in the presence of noisy and incomplete traffic data. In this paper, we introduce a Congestion Prediction Mixture-of-Experts, CP-MoE, to address the above challenges. We first propose a sparsely-gated Mixture of Adaptive Graph Learners (MAGLs) with congestion-aware inductive biases to improve the model capacity for efficiently capturing complex spatio-temporal dependencies in varying traffic scenarios. Then, we devise two specialized experts to help identify stable trends and periodic patterns within the traffic data, respectively. By cascading these experts with MAGLs, CP-MoE delivers congestion predictions in a more robust and interpretable manner. Furthermore, an ordinal regression strategy is adopted to facilitate effective collaboration among diverse experts. Extensive experiments on real-world datasets demonstrate the superiority of our proposed method compared with state-of-the-art spatio-temporal prediction models. More importantly, CP-MoE has been deployed in DiDi to improve the accuracy and reliability of the travel time estimation system.
[ "['Wenzhao Jiang' 'Jindong Han' 'Hao Liu' 'Tao Tao' 'Naiqiang Tan'\n 'Hui Xiong']" ]
null
null
2406.12925
null
null
http://arxiv.org/pdf/2406.12925v1
2024-06-14T13:54:29Z
2024-06-14T13:54:29Z
GLiNER multi-task: Generalist Lightweight Model for Various Information Extraction Tasks
Information extraction tasks require accurate, efficient, and generalisable models. Classical supervised deep learning approaches can achieve the required performance, but they need large datasets and are limited in their ability to adapt to different tasks. On the other hand, large language models (LLMs) demonstrate good generalization, meaning that they can adapt to many different tasks based on user requests. However, LLMs are computationally expensive and tend to fail to generate structured outputs. In this article, we will introduce a new kind of GLiNER model that can be used for various information extraction tasks while being a small encoder model. Our model achieved SoTA performance on zero-shot NER benchmarks and leading performance on question-answering, summarization and relation extraction tasks. Additionally, in this article, we will cover experimental results on self-learning approaches for named entity recognition using GLiNER models.
[ "['Ihor Stepanov' 'Mykhailo Shtopko']" ]
null
null
2406.12928
null
null
http://arxiv.org/pdf/2406.12928v1
2024-06-15T12:02:14Z
2024-06-15T12:02:14Z
Evaluating the Generalization Ability of Quantized LLMs: Benchmark, Analysis, and Toolbox
Large language models (LLMs) have exhibited exciting progress in multiple scenarios, while the huge computational demands hinder their deployments in lots of real-world applications. As an effective means to reduce memory footprint and inference cost, quantization also faces challenges in performance degradation at low bit-widths. Understanding the impact of quantization on LLM capabilities, especially the generalization ability, is crucial. However, the community's main focus remains on the algorithms and models of quantization, with insufficient attention given to whether the quantized models can retain the strong generalization abilities of LLMs. In this work, we fill this gap by providing a comprehensive benchmark suite for this research topic, including an evaluation system, detailed analyses, and a general toolbox. Specifically, based on the dominant pipeline in LLM quantization, we primarily explore the impact of calibration data distribution on the generalization of quantized LLMs and conduct the benchmark using more than 40 datasets within two main scenarios. Based on this benchmark, we conduct extensive experiments with two well-known LLMs (English and Chinese) and four quantization algorithms to investigate this topic in-depth, yielding several counter-intuitive and valuable findings, e.g., models quantized using a calibration set with the same distribution as the test data are not necessarily optimal. Besides, to facilitate future research, we also release a modular-designed toolbox, which decouples the overall pipeline into several separate components, e.g., base LLM module, dataset module, quantizer module, etc. and allows subsequent researchers to easily assemble their methods through a simple configuration. Our benchmark suite is publicly available at https://github.com/TsingmaoAI/MI-optimize
[ "['Yijun Liu' 'Yuan Meng' 'Fang Wu' 'Shenhao Peng' 'Hang Yao' 'Chaoyu Guan'\n 'Chen Tang' 'Xinzhu Ma' 'Zhi Wang' 'Wenwu Zhu']" ]
null
null
2406.12929
null
null
http://arxiv.org/pdf/2406.12929v1
2024-06-15T13:22:47Z
2024-06-15T13:22:47Z
RMF: A Risk Measurement Framework for Machine Learning Models
Machine learning (ML) models are used in many safety- and security-critical applications nowadays. It is therefore important to measure the security of a system that uses ML as a component. This paper focuses on the field of ML, particularly the security of autonomous vehicles. For this purpose, a technical framework will be described, implemented, and evaluated in a case study. Based on ISO/IEC 27004:2016, risk indicators are utilized to measure and evaluate the extent of damage and the effort required by an attacker. It is not possible, however, to determine a single risk value that represents the attacker's effort. Therefore, four different values must be interpreted individually.
[ "['Jan Schröder' 'Jakub Breier']" ]
null
null
2406.12930
null
null
http://arxiv.org/pdf/2406.12930v1
2024-06-16T09:51:55Z
2024-06-16T09:51:55Z
Tender: Accelerating Large Language Models via Tensor Decomposition and Runtime Requantization
Large language models (LLMs) demonstrate outstanding performance in various tasks in machine learning and have thus become one of the most important workloads in today's computing landscape. However, deploying LLM inference poses challenges due to the high compute and memory requirements stemming from the enormous model size and the difficulty of running it in the integer pipelines. In this paper, we present Tender, an algorithm-hardware co-design solution that enables efficient deployment of LLM inference at low precision. Based on our analysis of outlier values in LLMs, we propose a decomposed quantization technique in which the scale factors of decomposed matrices are powers of two apart. The proposed scheme allows us to avoid explicit requantization (i.e., dequantization/quantization) when accumulating the partial sums from the decomposed matrices, with a minimal extension to the commodity tensor compute hardware. Our evaluation shows that Tender achieves higher accuracy and inference performance compared to the state-of-the-art methods while also being significantly less intrusive to the existing accelerators.
[ "['Jungi Lee' 'Wonbeom Lee' 'Jaewoong Sim']" ]
null
null
2406.12935
null
null
http://arxiv.org/pdf/2406.12935v1
2024-06-17T03:03:34Z
2024-06-17T03:03:34Z
ChatBug: A Common Vulnerability of Aligned LLMs Induced by Chat Templates
Large language models (LLMs) are expected to follow instructions from users and engage in conversations. Techniques to enhance LLMs' instruction-following capabilities typically fine-tune them using data structured according to a predefined chat template. Although chat templates are shown to be effective in optimizing LLM performance, their impact on safety alignment of LLMs has been less understood, which is crucial for deploying LLMs safely at scale. In this paper, we investigate how chat templates affect safety alignment of LLMs. We identify a common vulnerability, named ChatBug, that is introduced by chat templates. Our key insight to identify ChatBug is that the chat templates provide a rigid format that needs to be followed by LLMs, but not by users. Hence, a malicious user may not necessarily follow the chat template when prompting LLMs. Instead, malicious users could leverage their knowledge of the chat template and accordingly craft their prompts to bypass safety alignments of LLMs. We develop two attacks to exploit the ChatBug vulnerability. We demonstrate that a malicious user can exploit the ChatBug vulnerability of eight state-of-the-art (SOTA) LLMs and effectively elicit unintended responses from these models. Moreover, we show that ChatBug can be exploited by existing jailbreak attacks to enhance their attack success rates. We investigate potential countermeasures to ChatBug. Our results show that while adversarial training effectively mitigates the ChatBug vulnerability, the victim model incurs significant performance degradation. These results highlight the trade-off between safety alignment and helpfulness. Developing new methods for instruction tuning to balance this trade-off is an open and critical direction for future research.
[ "['Fengqing Jiang' 'Zhangchen Xu' 'Luyao Niu' 'Bill Yuchen Lin'\n 'Radha Poovendran']" ]
null
null
2406.12937
null
null
http://arxiv.org/pdf/2406.12937v1
2024-06-17T09:21:00Z
2024-06-17T09:21:00Z
Self-Train Before You Transcribe
When there is a mismatch between the training and test domains, current speech recognition systems show significant performance degradation. Self-training methods, such as noisy student teacher training, can help address this and enable the adaptation of models under such domain shifts. However, self-training typically requires a collection of unlabelled target domain data. For settings where this is not practical, we investigate the benefit of performing noisy student teacher training on recordings in the test set as a test-time adaptation approach. Similarly to the dynamic evaluation approach in language modelling, this enables the transfer of information across utterance boundaries and functions as a method of domain adaptation. A range of in-domain and out-of-domain datasets are used for experiments demonstrating large relative gains of up to 32.2%. Interestingly, our method showed larger gains than the typical self-training setup that utilises separate adaptation data.
[ "['Robert Flynn' 'Anton Ragni']" ]
null
null
2406.12945
null
null
http://arxiv.org/pdf/2406.12945v2
2024-07-12T07:16:33Z
2024-06-18T07:27:38Z
Under the Hood of Tabular Data Generation Models: the Strong Impact of Hyperparameter Tuning
We investigate the impact of dataset-specific hyperparameter, feature encoding, and architecture tuning on five recent model families for tabular data generation through an extensive benchmark on 16 datasets. This study addresses the practical need for a unified evaluation of models that fully considers hyperparameter optimization. Additionally, we propose a reduced search space for each model that allows for quick optimization, achieving nearly equivalent performance at a significantly lower cost. Our benchmark demonstrates that, for most models, large-scale dataset-specific tuning substantially improves performance compared to the original configurations. Furthermore, we confirm that diffusion-based models generally outperform other models on tabular data. However, this advantage is not significant when the entire tuning and training process is restricted to the same GPU budget for all models.
[ "['G. Charbel N. Kindji' 'Lina Maria Rojas-Barahona' 'Elisa Fromont'\n 'Tanguy Urvoy']" ]
null
null
2406.12946
null
null
http://arxiv.org/pdf/2406.12946v1
2024-06-18T08:27:00Z
2024-06-18T08:27:00Z
Instruction Data Generation and Unsupervised Adaptation for Speech Language Models
In this paper, we propose three methods for generating synthetic samples to train and evaluate multimodal large language models capable of processing both text and speech inputs. Addressing the scarcity of samples containing both modalities, synthetic data generation emerges as a crucial strategy to enhance the performance of such systems and facilitate the modeling of cross-modal relationships between the speech and text domains. Our process employs large language models to generate textual components and text-to-speech systems to generate speech components. The proposed methods offer a practical and effective means to expand the training dataset for these models. Experimental results show progress in achieving an integrated understanding of text and speech. We also highlight the potential of using unlabeled speech data to generate synthetic samples comparable in quality to those with available transcriptions, enabling the expansion of these models to more languages.
[ "['Vahid Noroozi' 'Zhehuai Chen' 'Somshubra Majumdar' 'Steve Huang'\n 'Jagadeesh Balam' 'Boris Ginsburg']" ]
null
null
2406.12948
null
null
http://arxiv.org/pdf/2406.12948v1
2024-06-18T12:07:59Z
2024-06-18T12:07:59Z
New Reservoir Computing Kernel Based on Chaotic Chua Circuit and Investigating Application to Post-Quantum Cryptography
The aim of this project was to develop a new Reservoir Computer implementation, based on a chaotic Chua circuit. In addition to suitable classification and regression benchmarks, the Reservoir Computer was applied to Post-Quantum Cryptography, with its suitability for this application investigated and assessed. The cryptographic algorithm utilised was the Learning with Errors problem, for both encryption and decryption. To achieve this, the Chua circuit was characterised, in simulation, and by physical circuit testing. The Reservoir Computer was designed and implemented using the results of the characterisation. As part of this development, noise was considered and mitigated. The benchmarks demonstrate that the Reservoir Computer matches results reported in the current literature with low error. However, the results with Learning with Errors suggest that a Chua-based Reservoir Computer is not sufficiently complex to tackle the high non-linearity in Post-Quantum Cryptography. Future work would involve researching the use of different combinations of multiple Chua Reservoir Computers in larger neural network architectures. Such architectures may produce the high-dimensional behaviour required to solve the Learning with Errors problem. This project is believed to be only the second instance of a Chua-based Reservoir Computer in academia, and it is the first to be applied to challenging real-world tasks such as Post-Quantum Cryptography. It is also original in its investigation of hitherto unexplored parameters and their impact on performance. It demonstrates a proof-of-concept for a mass-producible, inexpensive, low-power-consumption hardware neural network. It also enables the next stages of research to occur, paving the road for using Chua-based Reservoir Computers across various applications.
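For readers unfamiliar with the recipe, a reservoir computer built on Chua dynamics reduces to integrating the circuit equations driven by the input and fitting only a linear readout. The toy sketch below is our construction, not the project's physical implementation: the injection point, Euler integration, and the short-memory task are all illustrative choices:

```python
import numpy as np

def chua_step(state, u, dt=0.005, alpha=15.6, beta=28.0, m0=-1.143, m1=-0.714):
    """One Euler step of the dimensionless Chua equations, with the input u
    injected into the first state variable (the injection point is ours)."""
    x, y, z = state
    h = m1 * x + 0.5 * (m0 - m1) * (abs(x + 1) - abs(x - 1))  # Chua diode
    dx = alpha * (y - x - h) + u
    dy = x - y + z
    dz = -beta * y
    return np.array([x + dt * dx, y + dt * dy, z + dt * dz])

rng = np.random.default_rng(0)
u = rng.uniform(-0.5, 0.5, 3000)       # random input stream
target = np.roll(u, 2)                 # toy short-memory task

state, states = np.zeros(3), []
for ut in u:
    state = chua_step(state, ut)
    states.append(state)
X = np.array(states)[200:]             # drop the transient washout
y = target[200:]

# Linear readout fitted by ridge regression -- the standard reservoir recipe.
lam = 1e-6
W = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)
print("train MSE:", np.mean((X @ W - y) ** 2))
```

A practical reservoir would expand the three circuit states into a much richer feature set; the skeleton above only shows where the chaotic dynamics and the trained readout sit.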
[ "['Matthew John Cossins' 'Sendy Phang']" ]
null
null
2406.12950
null
null
http://arxiv.org/pdf/2406.12950v1
2024-06-18T12:54:47Z
2024-06-18T12:54:47Z
MolecularGPT: Open Large Language Model (LLM) for Few-Shot Molecular Property Prediction
Molecular property prediction (MPP) is a fundamental and crucial task in drug discovery. However, prior methods are limited by the requirement for a large number of labeled molecules and their restricted ability to generalize to unseen and new tasks, both of which are essential for real-world applications. To address these challenges, we present MolecularGPT for few-shot MPP. From a perspective on instruction tuning, we fine-tune large language models (LLMs) on curated molecular instructions spanning over 1000 property prediction tasks. This enables building a versatile and specialized LLM that can be adapted to novel MPP tasks without any fine-tuning through zero- and few-shot in-context learning (ICL). MolecularGPT exhibits competitive in-context reasoning capabilities across 10 downstream evaluation datasets, setting new benchmarks for few-shot molecular prediction tasks. More importantly, with just two-shot examples, MolecularGPT can outperform standard supervised graph neural network methods on 4 out of 7 datasets. It also surpasses state-of-the-art LLM baselines by up to a 16.6% increase in classification accuracy and a decrease of 199.17 in regression metrics (e.g., RMSE) in the zero-shot setting. This study demonstrates the potential of LLMs as effective few-shot molecular property predictors. The code is available at https://github.com/NYUSHCS/MolecularGPT.
[ "['Yuyan Liu' 'Sirui Ding' 'Sheng Zhou' 'Wenqi Fan' 'Qiaoyu Tan']" ]
null
null
2406.12952
null
null
http://arxiv.org/pdf/2406.12952v1
2024-06-18T14:54:37Z
2024-06-18T14:54:37Z
Code Agents are State of the Art Software Testers
Rigorous software testing is crucial for developing and maintaining high-quality code, making automated test generation a promising avenue for both improving software quality and boosting the effectiveness of code generation methods. However, while code generation with Large Language Models (LLMs) is an extraordinarily active research area, test generation remains relatively unexplored. We address this gap and investigate the capability of LLM-based Code Agents for formalizing user issues into test cases. To this end, we propose a novel benchmark based on popular GitHub repositories, containing real-world issues, ground-truth patches, and golden tests. We find that LLMs generally perform surprisingly well at generating relevant test cases, with Code Agents designed for code repair exceeding the performance of systems designed specifically for test generation. Further, as test generation is a similar but more structured task than code generation, it allows for a more fine-grained analysis using fail-to-pass rate and coverage metrics, providing a dual metric for analyzing systems designed for code repair. Finally, we find that generated tests are an effective filter for proposed code fixes, doubling the precision of SWE-Agent.
[ "['Niels Mündler' 'Mark Niklas Müller' 'Jingxuan He' 'Martin Vechev']" ]
null
null
2406.12953
null
null
http://arxiv.org/pdf/2406.12953v1
2024-06-18T14:57:31Z
2024-06-18T14:57:31Z
Pattern or Artifact? Interactively Exploring Embedding Quality with TRACE
This paper presents TRACE, a tool to analyze the quality of 2D embeddings generated through dimensionality reduction techniques. Dimensionality reduction methods often prioritize preserving either local neighborhoods or global distances, but insights from visual structures can be misleading if the objective has not been achieved uniformly. TRACE addresses this challenge by providing a scalable and extensible pipeline for computing both local and global quality measures. The interactive browser-based interface allows users to explore various embeddings while visually assessing the pointwise embedding quality. The interface also facilitates in-depth analysis by highlighting high-dimensional nearest neighbors for any group of points and displaying high-dimensional distances between points. TRACE enables analysts to make informed decisions regarding the most suitable dimensionality reduction method for their specific use case, by showing the degree and location where structure is preserved in the reduced space.
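One representative pointwise local quality measure of the kind such a pipeline computes is per-point neighborhood preservation. The sketch below is our illustration of that family of measures, not necessarily TRACE's exact formula:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def pointwise_neighborhood_preservation(X_high, X_low, k=10):
    """For each point, the fraction of its k nearest neighbors in the
    high-dimensional space that remain among its k nearest neighbors in
    the 2D embedding: a simple pointwise local quality score."""
    nn_high = NearestNeighbors(n_neighbors=k + 1).fit(X_high)
    nn_low = NearestNeighbors(n_neighbors=k + 1).fit(X_low)
    idx_high = nn_high.kneighbors(X_high, return_distance=False)[:, 1:]
    idx_low = nn_low.kneighbors(X_low, return_distance=False)[:, 1:]
    return np.array([len(set(a) & set(b)) / k
                     for a, b in zip(idx_high, idx_low)])

# Example: score a random 2D projection of Gaussian data.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 30))
P = rng.normal(size=(30, 2))
scores = pointwise_neighborhood_preservation(X, X @ P)
print(scores.mean(), scores.min())
```

Coloring each embedded point by such a score is what lets an analyst see where a projection preserves structure and where it does not.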
[ "['Edith Heiter' 'Liesbet Martens' 'Ruth Seurinck' 'Martin Guilliams'\n 'Tijl De Bie' 'Yvan Saeys' 'Jefrey Lijffijt']" ]
null
null
2406.12954
null
null
http://arxiv.org/pdf/2406.12954v1
2024-06-18T15:48:20Z
2024-06-18T15:48:20Z
Skin Cancer Images Classification using Transfer Learning Techniques
Skin cancer is one of the most common and deadliest types of cancer. Early diagnosis of skin cancer at a benign stage is critical to reducing cancer mortality. To detect skin cancer at an earlier stage an automated system is compulsory that can save the life of many patients. Many previous studies have addressed the problem of skin cancer diagnosis using various deep learning and transfer learning models. However, existing literature has limitations in its accuracy and time-consuming procedure. In this work, we applied five different pre-trained transfer learning approaches for binary classification of skin cancer detection at benign and malignant stages. To increase the accuracy of these models we fine-tune different layers and activation functions. We used a publicly available ISIC dataset to evaluate transfer learning approaches. For model stability, data augmentation techniques are applied to improve the randomness of the input dataset. These approaches are evaluated using different hyperparameters such as batch sizes, epochs, and optimizers. The experimental results show that the ResNet-50 model provides an accuracy of 0.935, F1-score of 0.86, and precision of 0.94.
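A minimal transfer-learning setup of the kind benchmarked here, sketched with torchvision; the ISIC data loading, augmentation, and the paper's specific layer and activation fine-tuning choices are omitted:

```python
import torch
import torchvision

# Pre-trained ResNet-50 with a new binary (benign/malignant) head.
model = torchvision.models.resnet50(weights="IMAGENET1K_V2")
for p in model.parameters():
    p.requires_grad = False                           # freeze the backbone
model.fc = torch.nn.Linear(model.fc.in_features, 2)  # new trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = torch.nn.CrossEntropyLoss()

def train_step(images, labels):
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# e.g. train_step(torch.randn(8, 3, 224, 224), torch.randint(0, 2, (8,)))
```

Unfreezing deeper blocks after the head converges is the usual next step when, as reported above, fine-tuning more layers improves accuracy.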
[ "['Md Sirajul Islam' 'Sanjeev Panta']" ]
null
null
2406.12983
null
null
http://arxiv.org/pdf/2406.12983v1
2024-06-18T18:02:35Z
2024-06-18T18:02:35Z
Reinforcement Learning for Corporate Bond Trading: A Sell Side Perspective
A corporate bond trader in a typical sell side institution such as a bank provides liquidity to the market participants by buying/selling securities and maintaining an inventory. Upon receiving a request for a buy/sell price quote (RFQ), the trader provides a quote by adding a spread over a \textit{prevalent market price}. For illiquid bonds, the market price is harder to observe, and traders often resort to available benchmark bond prices (such as MarketAxess, Bloomberg, etc.). In \cite{Bergault2023ModelingLI}, the concept of a \textit{Fair Transfer Price} for an illiquid corporate bond was introduced, which is derived from an infinite horizon stochastic optimal control problem (for maximizing the trader's expected P&L, regularized by the quadratic variation). In this paper, we consider the same optimization objective; however, we approach the estimation of an optimal bid-ask spread quoting strategy in a data-driven manner and show that it can be learned using Reinforcement Learning. Furthermore, we perform extensive outcome analysis to examine the reasonableness of the trained agent's behavior.
[ "['Samuel Atkins' 'Ali Fathi' 'Sammy Assefa']" ]
null
null
2406.12992
null
null
http://arxiv.org/pdf/2406.12992v1
2024-06-18T18:32:13Z
2024-06-18T18:32:13Z
Additive regularization schedule for neural architecture search
Neural network structures have a critical impact on the accuracy and stability of forecasting. Neural architecture search procedures help design an optimal neural network according to some loss function, which represents a set of quality criteria. This paper investigates the problem of neural network structure optimization. It proposes a way to construct a loss function that contains a set of additive elements. Each element is called a regularizer. It corresponds to some part of the neural network structure and represents a criterion to optimize. The optimization procedure changes the structure in iterations. To optimize various parts of the structure, the procedure changes the set of regularizers according to some schedule. The authors propose a way to construct the additive regularization schedule. Computational experiments comparing regularized models with non-regularized ones across a collection of datasets show that the proposed method finds an efficient neural network structure and delivers accurate networks of low complexity.
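The mechanism can be sketched with two toy regularizers and an alternating schedule; the criteria and schedule used by the authors are more elaborate, so treat this as a minimal illustration only:

```python
import torch

# Additive regularization schedule: each regularizer targets one part of the
# structure, and the schedule decides which are active at each iteration.
model = torch.nn.Sequential(torch.nn.Linear(10, 32), torch.nn.ReLU(),
                            torch.nn.Linear(32, 1))
regularizers = {
    "l1_first":  lambda m: m[0].weight.abs().sum(),     # sparsify layer 1
    "l2_second": lambda m: (m[2].weight ** 2).sum(),    # shrink layer 2
}

def schedule(step):                   # toy schedule: alternate regularizers
    return {"l1_first": 1e-4 * (step % 2 == 0),
            "l2_second": 1e-4 * (step % 2 == 1)}

opt = torch.optim.SGD(model.parameters(), lr=1e-2)
x, y = torch.randn(64, 10), torch.randn(64, 1)
for step in range(100):
    w = schedule(step)
    loss = torch.nn.functional.mse_loss(model(x), y) \
         + sum(w[name] * reg(model) for name, reg in regularizers.items())
    opt.zero_grad(); loss.backward(); opt.step()
```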
[ "['Mark Potanin' 'Kirill Vayser' 'Vadim Strijov']" ]
null
null
2406.13012
null
null
http://arxiv.org/pdf/2406.13012v1
2024-06-18T19:05:24Z
2024-06-18T19:05:24Z
Data Plagiarism Index: Characterizing the Privacy Risk of Data-Copying in Tabular Generative Models
The promise of tabular generative models is to produce realistic synthetic data that can be shared and safely used without dangerous leakage of information from the training set. In evaluating these models, a variety of methods have been proposed to measure the tendency to copy data from the training dataset when generating a sample. However, these methods suffer from not considering data-copying from a privacy threat perspective, not being motivated by recent results in the data-copying literature, or being difficult to make compatible with the high-dimensional, mixed-type nature of tabular data. This paper proposes a new similarity metric and Membership Inference Attack called Data Plagiarism Index (DPI) for tabular data. We show that DPI evaluates a new intuitive definition of data-copying and characterizes the corresponding privacy risk. We show that the data-copying identified by DPI poses both privacy and fairness threats to common, high performing architectures; underscoring the necessity for more sophisticated generative modeling techniques to mitigate this issue.
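A simple nearest-neighbor contrast conveys the intuition behind data-copying detection. Note this is a generic illustration in the spirit of such metrics, not the actual DPI definition or its membership inference attack:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def copying_scores(synthetic, train, holdout):
    """Per-sample copying signal: ratio of a synthetic point's distance to
    its nearest training record vs. its nearest holdout record. Values well
    below 1 suggest the generator is echoing training data."""
    d_train = NearestNeighbors(n_neighbors=1).fit(train) \
        .kneighbors(synthetic)[0][:, 0]
    d_hold = NearestNeighbors(n_neighbors=1).fit(holdout) \
        .kneighbors(synthetic)[0][:, 0]
    return d_train / np.maximum(d_hold, 1e-12)

rng = np.random.default_rng(1)
train = rng.normal(size=(1000, 8))
holdout = rng.normal(size=(1000, 8))
memorized = train[:50] + 0.01 * rng.normal(size=(50, 8))   # near-copies
fresh = rng.normal(size=(50, 8))
print(copying_scores(memorized, train, holdout).mean())    # << 1
print(copying_scores(fresh, train, holdout).mean())        # ~ 1
```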
[ "['Joshua Ward' 'Chi-Hua Wang' 'Guang Cheng']" ]
null
null
2406.13015
null
null
http://arxiv.org/pdf/2406.13015v1
2024-06-18T19:16:32Z
2024-06-18T19:16:32Z
Deriving Hematological Disease Classes Using Fuzzy Logic and Expert Knowledge: A Comprehensive Machine Learning Approach with CBC Parameters
In the intricate field of medical diagnostics, capturing the subtle manifestations of diseases remains a challenge. Traditional methods, often binary in nature, may not encapsulate the nuanced variances that exist in real-world clinical scenarios. This paper introduces a novel approach by leveraging Fuzzy Logic Rules to derive disease classes based on expert domain knowledge from a medical practitioner. By recognizing that diseases do not always fit into neat categories, and that expert knowledge can guide the fuzzification of these boundaries, our methodology offers a more sophisticated and nuanced diagnostic tool. Using a dataset procured from a prominent hospital, containing detailed patient blood count records, we harness Fuzzy Logic Rules, a computational technique celebrated for its ability to handle ambiguity. This approach, moving through stages of fuzzification, rule application, inference, and ultimately defuzzification, produces refined diagnostic predictions. When combined with the Random Forest classifier, the system adeptly predicts hematological conditions using Complete Blood Count (CBC) parameters. Preliminary results showcase high accuracy levels, underscoring the advantages of integrating fuzzy logic into the diagnostic process. When juxtaposed with traditional diagnostic techniques, it becomes evident that Fuzzy Logic, especially when guided by medical expertise, offers significant advancements in the realm of hematological diagnostics. This paper not only paves the path for enhanced patient care but also beckons a deeper dive into the potentialities of fuzzy logic in various medical diagnostic applications.
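As a flavor of the fuzzification stage, the sketch below maps a single CBC parameter into overlapping membership grades with triangular functions; the breakpoints are illustrative placeholders, not the practitioner-derived rules used in the paper, and nothing here is clinical guidance:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function rising from a to b, falling to c."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def fuzzify_hemoglobin(hb_g_dl):
    """Fuzzify a haemoglobin reading (g/dL) into overlapping classes;
    the breakpoints are illustrative only."""
    return {"low":    tri(hb_g_dl, 4.0, 8.0, 12.0),
            "normal": tri(hb_g_dl, 10.0, 14.0, 17.0),
            "high":   tri(hb_g_dl, 15.0, 19.0, 24.0)}

# Rule application (min as AND) and centroid defuzzification would follow;
# the graded memberships can also feed a downstream Random Forest.
print(fuzzify_hemoglobin(11.0))   # partly 'low', partly 'normal'
```

The graded output is exactly what lets a reading near a boundary contribute to two disease classes instead of being forced into one.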
[ "['Salem Ameen' 'Ravivarman Balachandran' 'Theodoros Theodoridis']" ]
null
null
2406.13023
null
null
http://arxiv.org/pdf/2406.13023v3
2024-06-28T19:08:35Z
2024-06-18T19:30:46Z
Stackelberg Games with $k$-Submodular Function under Distributional Risk-Receptiveness and Robustness
We study submodular optimization in adversarial context, applicable to machine learning problems such as feature selection using data susceptible to uncertainties and attacks. We focus on Stackelberg games between an attacker (or interdictor) and a defender where the attacker aims to minimize the defender's objective of maximizing a $k$-submodular function. We allow uncertainties arising from the success of attacks and inherent data noise, and address challenges due to incomplete knowledge of the probability distribution of random parameters. Specifically, we introduce Distributionally Risk-Averse $k$-Submodular Interdiction Problem (DRA $k$-SIP) and Distributionally Risk-Receptive $k$-Submodular Interdiction Problem (DRR $k$-SIP) along with finitely convergent exact algorithms for solving them. The DRA $k$-SIP solution allows risk-averse interdictor to develop robust strategies for real-world uncertainties. Conversely, DRR $k$-SIP solution suggests aggressive tactics for attackers, willing to embrace (distributional) risk to inflict maximum damage, identifying critical vulnerable components, which can be used for the defender's defensive strategies. The optimal values derived from both DRA $k$-SIP and DRR $k$-SIP offer a confidence interval-like range for the expected value of the defender's objective function, capturing distributional ambiguity. We conduct computational experiments using instances of feature selection and sensor placement problems, and Wisconsin breast cancer data and synthetic data, respectively.
[ "['Seonghun Park' 'Manish Bansal']" ]
null
null
2406.13025
null
null
http://arxiv.org/pdf/2406.13025v1
2024-06-18T19:37:44Z
2024-06-18T19:37:44Z
ABNet: Attention BarrierNet for Safe and Scalable Robot Learning
Safe learning is central to AI-enabled robots where a single failure may lead to catastrophic results. Barrier-based methods are among the dominant approaches to safe robot learning. However, these methods are not scalable, are hard to train, and tend to generate unstable signals under noisy inputs, which makes them challenging to deploy on robots. To address these challenges, we propose a novel Attention BarrierNet (ABNet) that scales incrementally to build larger foundational safe models. Each head of BarrierNet in the ABNet can learn safe robot control policies from different features and focus on a specific part of the observation. In this way, we do not need to construct a large model for complex tasks in one shot, which significantly facilitates the training of the model while ensuring its stable output. Most importantly, we can still formally prove the safety guarantees of the ABNet. We demonstrate the strength of ABNet in 2D robot obstacle avoidance, safe robot manipulation, and vision-based end-to-end autonomous driving, with results showing much better robustness and guarantees over existing models.
[ "['Wei Xiao' 'Tsun-Hsuan Wang' 'Daniela Rus']" ]
null
null
2406.13036
null
null
http://arxiv.org/pdf/2406.13036v2
2024-06-21T12:09:38Z
2024-06-18T20:02:44Z
Sharp detection of low-dimensional structure in probability measures via dimensional logarithmic Sobolev inequalities
Identifying low-dimensional structure in high-dimensional probability measures is an essential pre-processing step for efficient sampling. We introduce a method for identifying and approximating a target measure $\pi$ as a perturbation of a given reference measure $\mu$ along a few significant directions of $\mathbb{R}^{d}$. The reference measure can be a Gaussian or a nonlinear transformation of a Gaussian, as commonly arising in generative modeling. Our method extends prior work on minimizing majorizations of the Kullback--Leibler divergence to identify optimal approximations within this class of measures. Our main contribution unveils a connection between the \emph{dimensional} logarithmic Sobolev inequality (LSI) and approximations with this ansatz. Specifically, when the target and reference are both Gaussian, we show that minimizing the dimensional LSI is equivalent to minimizing the KL divergence restricted to this ansatz. For general non-Gaussian measures, the dimensional LSI produces majorants that uniformly improve on previous majorants for gradient-based dimension reduction. We further demonstrate the applicability of this analysis to the squared Hellinger distance, where analogous reasoning shows that the dimensional Poincaré inequality offers improved bounds.
[ "['Matthew T. C. Li' 'Tiangang Cui' 'Fengyi Li' 'Youssef Marzouk'\n 'Olivier Zahm']" ]
null
null
2406.13041
null
null
http://arxiv.org/pdf/2406.13041v1
2024-06-18T20:14:52Z
2024-06-18T20:14:52Z
Accelerated Stochastic Min-Max Optimization Based on Bias-corrected Momentum
Lower-bound analyses for nonconvex strongly-concave minimax optimization problems have shown that stochastic first-order algorithms require at least $\mathcal{O}(\varepsilon^{-4})$ oracle complexity to find an $\varepsilon$-stationary point. Some works indicate that this complexity can be improved to $\mathcal{O}(\varepsilon^{-3})$ when the loss gradient is Lipschitz continuous. The question of achieving enhanced convergence rates under distinct conditions remains unresolved. In this work, we address this question for optimization problems that are nonconvex in the minimization variable and strongly concave or Polyak-Łojasiewicz (PL) in the maximization variable. We introduce novel bias-corrected momentum algorithms utilizing efficient Hessian-vector products. We establish convergence conditions and demonstrate a lower iteration complexity of $\mathcal{O}(\varepsilon^{-3})$ for the proposed algorithms. The effectiveness of the method is validated through applications to robust logistic regression using real-world datasets.
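The bias-corrected momentum family the paper builds on can be sketched in a few lines. Below is a STORM-style estimator on a toy convex problem (our illustration; the paper's algorithms additionally use Hessian-vector products and handle the min-max structure):

```python
import numpy as np

def storm_like(grad, x0, steps=200, lr=0.05, a=0.3, noise=0.1, rng=None):
    """Bias-corrected momentum: d_t = g(x_t) + (1 - a)(d_{t-1} - g(x_{t-1})),
    with both gradients evaluated under the *same* noise sample, which is
    what removes the bias of plain momentum."""
    rng = rng or np.random.default_rng(0)
    x = x0.copy()
    d = grad(x) + noise * rng.normal(size=x.shape)
    for _ in range(steps):
        x_new = x - lr * d
        xi = noise * rng.normal(size=x.shape)       # shared noise sample
        d = grad(x_new) + xi + (1 - a) * (d - grad(x) - xi)
        x = x_new
    return x

# Minimize f(x) = ||x||^2 / 2 from noisy gradients (grad f = x).
x_star = storm_like(lambda x: x, np.ones(5))
print(np.linalg.norm(x_star))   # close to 0
```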
[ "['Haoyuan Cai' 'Sulaiman A. Alghunaim' 'Ali H. Sayed']" ]
null
null
2406.13057
null
null
http://arxiv.org/pdf/2406.13057v1
2024-06-18T21:04:23Z
2024-06-18T21:04:23Z
Informed along the road: roadway capacity driven graph convolution network for network-wide traffic prediction
While deep learning has shown success in predicting traffic states, most methods treat it as a general prediction task without considering transportation aspects. Recently, graph neural networks have proven effective for this task, but few incorporate external factors that impact roadway capacity and traffic flow. This study introduces the Roadway Capacity Driven Graph Convolution Network (RCDGCN) model, which incorporates static and dynamic roadway capacity attributes in spatio-temporal settings to predict network-wide traffic states. The model was evaluated on two real-world datasets with different transportation factors: the ICM-495 highway network and an urban network in Manhattan, New York City. Results show RCDGCN outperformed baseline methods in forecasting accuracy. Analyses, including ablation experiments, weight analysis, and case studies, investigated the effect of capacity-related factors. The study demonstrates the potential of using RCDGCN for transportation system management.
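At its core, incorporating capacity attributes means widening the node-feature matrix that a graph convolution consumes. A minimal sketch follows (our illustration; the actual RCDGCN architecture is spatio-temporal and considerably more involved):

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer with symmetric normalization."""
    A_hat = A + np.eye(A.shape[0])                  # add self-loops
    d = A_hat.sum(axis=1)
    A_norm = A_hat / np.sqrt(np.outer(d, d))
    return np.maximum(A_norm @ H @ W, 0.0)          # ReLU

# Node features = [traffic state, static capacity, dynamic capacity]: the
# kind of capacity-aware input the model consumes (sizes are ours).
rng = np.random.default_rng(0)
n = 20
A = (rng.random((n, n)) < 0.2).astype(float)
A = np.maximum(A, A.T)                              # undirected road graph
speed = rng.random((n, 1))
static_cap = rng.random((n, 1))                     # e.g. number of lanes
dynamic_cap = rng.random((n, 1))                    # e.g. incident-reduced
H = np.hstack([speed, static_cap, dynamic_cap])
print(gcn_layer(A, H, rng.normal(size=(3, 8))).shape)   # (20, 8)
```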
[ "['Zilin Bian' 'Jingqin Gao' 'Kaan Ozbay' 'Fan Zuo' 'Dachuan Zuo'\n 'Zhenning Li']" ]
null
null
2406.13060
null
null
http://arxiv.org/pdf/2406.13060v1
2024-06-18T21:09:56Z
2024-06-18T21:09:56Z
Scale-Translation Equivariant Network for Oceanic Internal Solitary Wave Localization
Internal solitary waves (ISWs) are gravity waves that are often observed in the interior ocean rather than the surface. They hold significant importance due to their capacity to carry substantial energy, thus influencing pollutant transport, oil platform operations, submarine navigation, etc. Researchers have studied ISWs through optical images, synthetic aperture radar (SAR) images, and altimeter data from remote sensing instruments. However, cloud cover in optical remote sensing images variably obscures ground information, leading to blurred or missing surface observations. As such, this paper aims at altimeter-based machine learning solutions to automatically locate ISWs. The challenges, however, lie in the following two aspects: 1) the altimeter data has low resolution, which requires a strong machine learner; 2) labeling data is extremely labor-intensive, leading to very limited data for training. In recent years, rapid progress in deep learning has demonstrated strong learning capacity given abundant data. Besides, more recent studies on efficient learning and self-supervised learning have laid solid foundations to tackle the aforementioned challenges. In this paper, we propose to inject prior knowledge to achieve a strong and efficient learner. Specifically, intrinsic patterns in altimetry data are efficiently captured using a scale-translation equivariant convolutional neural network (ST-ECNN). By considering inherent symmetries in neural network design, ST-ECNN achieves higher efficiency and better performance than baseline models. Furthermore, we also introduce prior knowledge from massive unsupervised data to enhance our solution, using the SimCLR framework for pre-training. Our final solution achieves overall better performance than baselines on our handcrafted altimetry dataset. Data and codes are available at https://github.com/ZhangWan-byte/Internal_Solitary_Wave_Localization .
[ "['Zhang Wan' 'Shuo Wang' 'Xudong Zhang']" ]
null
null
2406.13064
null
null
http://arxiv.org/pdf/2406.13064v1
2024-06-18T21:23:51Z
2024-06-18T21:23:51Z
Machine Learning and Optimization Techniques for Solving Inverse Kinematics in a 7-DOF Robotic Arm
As the pace of AI technology continues to accelerate, more tools have become available to researchers to solve longstanding problems. Hybrid approaches available today continue to push the computational limits of efficiency and precision. One such problem is the inverse kinematics of redundant systems. This paper examines the complexities of a 7-degree-of-freedom manipulator and explores 13 optimization techniques to solve it. Additionally, a novel approach is proposed to contribute to the field of algorithmic research. This was found to be over 200 times faster than the well-known traditional Particle Swarm Optimization technique. This new method may open a new field of search that combines the explorative capabilities of Machine Learning with the exploitative capabilities of numerical methods.
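As one example of the optimization techniques surveyed, a PSO-based inverse kinematics solver reduces to forward kinematics plus a swarm search over joint angles. The planar sketch below is our stand-in for the 7-DOF setting, with common default hyperparameters rather than the paper's:

```python
import numpy as np

def fk(thetas, lengths):
    """Planar forward kinematics: cumulative joint angles to end effector."""
    angles = np.cumsum(thetas)
    return np.array([np.sum(lengths * np.cos(angles)),
                     np.sum(lengths * np.sin(angles))])

def pso_ik(target, lengths, n=60, iters=300, rng=None):
    """Particle Swarm Optimization for inverse kinematics of a redundant
    planar arm (a 2D stand-in for a 7-DOF manipulator)."""
    rng = rng or np.random.default_rng(0)
    dim = len(lengths)
    pos = rng.uniform(-np.pi, np.pi, (n, dim))
    vel = np.zeros((n, dim))
    cost = lambda th: np.linalg.norm(fk(th, lengths) - target)
    pbest = pos.copy()
    pbest_c = np.array([cost(p) for p in pos])
    gbest = pbest[pbest_c.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n, dim))
        vel = 0.72 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = pos + vel
        c = np.array([cost(p) for p in pos])
        improved = c < pbest_c
        pbest[improved], pbest_c[improved] = pos[improved], c[improved]
        gbest = pbest[pbest_c.argmin()].copy()
    return gbest, pbest_c.min()

lengths = np.ones(7)                      # seven unit links
theta, err = pso_ik(np.array([3.0, 2.0]), lengths)
print("position error:", err)
```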
[ "['Enoch Adediran' 'Salem Ameen']" ]
null
null
2406.13066
null
null
http://arxiv.org/pdf/2406.13066v1
2024-06-18T21:27:13Z
2024-06-18T21:27:13Z
MaskPure: Improving Defense Against Text Adversaries with Stochastic Purification
The improvement of language model robustness, including successful defense against adversarial attacks, remains an open problem. In computer vision settings, the stochastic noising and de-noising process provided by diffusion models has proven useful for purifying input images, thus improving model robustness against adversarial attacks. Similarly, some initial work has explored the use of random noising and de-noising to mitigate adversarial attacks in an NLP setting, but improving the quality and efficiency of these methods is necessary for them to remain competitive. We extend upon methods of input text purification that are inspired by diffusion processes, which randomly mask and refill portions of the input text before classification. Our novel method, MaskPure, exceeds or matches robustness compared to other contemporary defenses, while also requiring no adversarial classifier training and without assuming knowledge of the attack type. In addition, we show that MaskPure is provably certifiably robust. To our knowledge, MaskPure is the first stochastic-purification method with demonstrated success against both character-level and word-level attacks, indicating the generalizable and promising nature of stochastic denoising defenses. In summary: the MaskPure algorithm bridges literature on the current strongest certifiable and empirical adversarial defense methods, showing that both theoretical and practical robustness can be obtained together. Code is available on GitHub at https://github.com/hubarruby/MaskPure.
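A single purification pass can be sketched with an off-the-shelf masked language model. This simplified version (whitespace tokenization, one mask per round, no voting over stochastic passes) is our illustration rather than the MaskPure implementation:

```python
import random
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

def mask_purify(text, n_rounds=3, rng=random.Random(0)):
    """Randomly mask one token per round and refill it with the masked LM,
    smoothing out adversarial character- or word-level perturbations before
    the text reaches a classifier."""
    for _ in range(n_rounds):
        words = text.split()
        i = rng.randrange(len(words))
        words[i] = fill.tokenizer.mask_token
        text = fill(" ".join(words))[0]["sequence"]  # top refill candidate
    return text

print(mask_purify("the movie was surprisingly good and well acted"))
```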
[ "['Harrison Gietz' 'Jugal Kalita']" ]
null
null
2406.13073
null
null
http://arxiv.org/pdf/2406.13073v1
2024-06-18T21:44:51Z
2024-06-18T21:44:51Z
NoiSec: Harnessing Noise for Security against Adversarial and Backdoor Attacks
The exponential adoption of machine learning (ML) is propelling the world into a future of intelligent automation and data-driven solutions. However, the proliferation of malicious data manipulation attacks against ML, namely adversarial and backdoor attacks, jeopardizes its reliability in safety-critical applications. The existing detection methods against such attacks are built upon assumptions, limiting them in diverse practical scenarios. Thus, motivated by the need for a more robust and unified defense mechanism, we investigate the shared traits of adversarial and backdoor attacks and propose NoiSec that leverages solely the noise, the foundational root cause of such attacks, to detect any malicious data alterations. NoiSec is a reconstruction-based detector that disentangles the noise from the test input, extracts the underlying features from the noise, and leverages them to recognize systematic malicious manipulation. Experimental evaluations conducted on the CIFAR10 dataset demonstrate the efficacy of NoiSec, achieving AUROC scores exceeding 0.954 and 0.852 under white-box and black-box adversarial attacks, respectively, and 0.992 against backdoor attacks. Notably, NoiSec maintains a high detection performance, keeping the false positive rate within only 1%. Comparative analyses against MagNet-based baselines reveal NoiSec's superior performance across various attack scenarios.
[ "['Md Hasan Shahriar' 'Ning Wang' 'Y. Thomas Hou' 'Wenjing Lou']" ]
null
null
2406.13074
null
null
http://arxiv.org/pdf/2406.13074v1
2024-06-18T21:47:28Z
2024-06-18T21:47:28Z
PIPPIN: Generating variable length full events from partons
This paper presents a novel approach for directly generating full events at detector-level from parton-level information, leveraging cutting-edge machine learning techniques. To address the challenge of multiplicity variations between parton and reconstructed object spaces, we employ transformers, score-based models and normalizing flows. Our method tackles the inherent complexities of the stochastic transition between these two spaces and achieves remarkably accurate results. The combination of innovative techniques and the achieved accuracy demonstrates the potential of our approach in advancing the field and opens avenues for further exploration. This research contributes to the ongoing efforts in high-energy physics and generative modelling, providing a promising direction for enhanced precision in fast detector simulation.
[ "['Guillaume Quétant' 'John Andrew Raine' 'Matthew Leigh'\n 'Debajyoti Sengupta' 'Tobias Golling']" ]
null
null
2406.13075
null
null
http://arxiv.org/pdf/2406.13075v1
2024-06-18T21:48:59Z
2024-06-18T21:48:59Z
Exact Community Recovery (under Side Information): Optimality of Spectral Algorithms
In this paper, we study the problem of exact community recovery in general, two-community block models considering both Bernoulli and Gaussian matrix models, capturing the Stochastic Block Model, submatrix localization, and $\mathbb{Z}_2$-synchronization as special cases. We also study the settings where $side$ $information$ about community assignment labels is available, modeled as passing the true labels through a noisy channel: either the binary erasure channel (where some community labels are known while others are erased) or the binary symmetric channel (where some labels are flipped). We provide a unified analysis of the effect of side information on the information-theoretic limits of exact recovery, generalizing prior works and extending to new settings. Additionally, we design a simple but optimal spectral algorithm that incorporates side information (when present) along with the eigenvectors of the matrix observation. Using the powerful tool of entrywise eigenvector analysis [Abbe, Fan, Wang, Zhong 2020], we show that our spectral algorithm can mimic the so-called $genie$-$aided$ $estimators$, where the $i^{\mathrm{th}}$ genie-aided estimator optimally computes the estimate of the $i^{\mathrm{th}}$ label, when all remaining labels are revealed by a genie. This perspective provides a unified understanding of the optimality of spectral algorithms for various exact recovery problems in a recent line of work.
[ "['Julia Gaudio' 'Nirmit Joshi']" ]
null
null
2406.13094
null
null
http://arxiv.org/pdf/2406.13094v1
2024-06-18T22:57:06Z
2024-06-18T22:57:06Z
Exploring and Benchmarking the Planning Capabilities of Large Language Models
We seek to elevate the planning capabilities of Large Language Models (LLMs) by investigating four main directions. First, we construct a comprehensive benchmark suite encompassing both classical planning domains and natural language scenarios. This suite includes algorithms to generate instances with varying levels of difficulty, allowing for rigorous and systematic evaluation of LLM performance. Second, we investigate the use of in-context learning (ICL) to enhance LLM planning, exploring the direct relationship between increased context length and improved planning performance. Third, we demonstrate the positive impact of fine-tuning LLMs on optimal planning paths, as well as the effectiveness of incorporating model-driven search procedures. Finally, we investigate the performance of the proposed methods in out-of-distribution scenarios, assessing the ability to generalize to novel and unseen planning challenges.
[ "['Bernd Bohnet' 'Azade Nova' 'Aaron T Parisi' 'Kevin Swersky'\n 'Katayoon Goshvadi' 'Hanjun Dai' 'Dale Schuurmans' 'Noah Fiedel'\n 'Hanie Sedghi']" ]
null
null
2406.13099
null
null
http://arxiv.org/pdf/2406.13099v1
2024-06-18T23:14:29Z
2024-06-18T23:14:29Z
Sampling 3D Gaussian Scenes in Seconds with Latent Diffusion Models
We present a latent diffusion model over 3D scenes that can be trained using only 2D image data. To achieve this, we first design an autoencoder that maps multi-view images to 3D Gaussian splats, and simultaneously builds a compressed latent representation of these splats. Then, we train a multi-view diffusion model over the latent space to learn an efficient generative model. This pipeline does not require object masks nor depths, and is suitable for complex scenes with arbitrary camera positions. We conduct careful experiments on two large-scale datasets of complex real-world scenes -- MVImgNet and RealEstate10K. We show that our approach enables generating 3D scenes in as little as 0.2 seconds, either from scratch, from a single input view, or from sparse input views. It produces diverse and high-quality results while running an order of magnitude faster than non-latent diffusion models and earlier NeRF-based generative models.
[ "['Paul Henderson' 'Melonie de Almeida' 'Daniela Ivanova'\n 'Titas Anciukevičius']" ]
null
null
2406.13101
null
null
http://arxiv.org/pdf/2406.13101v1
2024-06-18T23:25:14Z
2024-06-18T23:25:14Z
On instabilities in neural network-based physics simulators
When neural networks are trained from data to simulate the dynamics of physical systems, they encounter a persistent challenge: the long-time dynamics they produce are often unphysical or unstable. We analyze the origin of such instabilities when learning linear dynamical systems, focusing on the training dynamics. We make several analytical findings which empirical observations suggest extend to nonlinear dynamical systems. First, the rate of convergence of the training dynamics is uneven and depends on the distribution of energy in the data. As a special case, the dynamics in directions where the data have no energy cannot be learned. Second, in the unlearnable directions, the dynamics produced by the neural network depend on the weight initialization, and common weight initialization schemes can produce unstable dynamics. Third, injecting synthetic noise into the data during training adds damping to the training dynamics and can stabilize the learned simulator, though doing so undesirably biases the learned dynamics. For each contributor to instability, we suggest mitigative strategies. We also highlight important differences between learning discrete-time and continuous-time dynamics, and discuss extensions to nonlinear systems.
[ "['Daniel Floryan']" ]
null
null
2406.13103
null
null
http://arxiv.org/pdf/2406.13103v1
2024-06-18T23:27:46Z
2024-06-18T23:27:46Z
A Generic Method for Fine-grained Category Discovery in Natural Language Texts
Fine-grained category discovery using only coarse-grained supervision is a cost-effective yet challenging task. Previous training methods focus on aligning query samples with positive samples and distancing them from negatives. They often neglect intra-category and inter-category semantic similarities of fine-grained categories when navigating sample distributions in the embedding space. Furthermore, some evaluation techniques that rely on pre-collected test samples are inadequate for real-time applications. To address these shortcomings, we introduce a method that successfully detects fine-grained clusters of semantically similar texts guided by a novel objective function. The method uses semantic similarities in a logarithmic space to guide sample distributions in the Euclidean space and to form distinct clusters that represent fine-grained categories. We also propose a centroid inference mechanism to support real-time applications. The efficacy of the method is both theoretically justified and empirically confirmed on three benchmark tasks. The proposed objective function is integrated in multiple contrastive learning based neural models. Its results surpass existing state-of-the-art approaches in terms of Accuracy, Adjusted Rand Index and Normalized Mutual Information of the detected fine-grained categories. Code and data will be available at https://github.com/XX upon publication.
[ "['Chang Tian' 'Matthew B. Blaschko' 'Wenpeng Yin' 'Mingzhe Xing'\n 'Yinliang Yue' 'Marie-Francine Moens']" ]
null
null
2406.13112
null
null
http://arxiv.org/pdf/2406.13112v1
2024-06-18T23:54:21Z
2024-06-18T23:54:21Z
Nutmeg and SPICE: Models and Data for Biomolecular Machine Learning
We describe version 2 of the SPICE dataset, a collection of quantum chemistry calculations for training machine learning potentials. It expands on the original dataset by adding much more sampling of chemical space and more data on non-covalent interactions. We train a set of potential energy functions called Nutmeg on it. They use a novel mechanism to improve performance on charged and polar molecules, injecting precomputed partial charges into the model to provide a reference for the large scale charge distribution. Evaluation of the new models shows they do an excellent job of reproducing energy differences between conformations, even on highly charged molecules or ones that are significantly larger than the molecules in the training set. They also produce stable molecular dynamics trajectories, and are fast enough to be useful for routine simulation of small molecules.
[ "['Peter Eastman' 'Benjamin P. Pritchard' 'John D. Chodera'\n 'Thomas E. Markland']" ]
null
null
2406.13126
null
null
http://arxiv.org/pdf/2406.13126v1
2024-06-19T00:42:35Z
2024-06-19T00:42:35Z
Guided Context Gating: Learning to leverage salient lesions in retinal fundus images
Effectively representing medical images, especially retinal images, presents a considerable challenge due to variations in appearance, size, and contextual information of pathological signs called lesions. Precise discrimination of these lesions is crucial for diagnosing vision-threatening issues such as diabetic retinopathy. While visual attention-based neural networks have been introduced to learn spatial context and channel correlations from retinal images, they often fall short in capturing localized lesion context. Addressing this limitation, we propose a novel attention mechanism called Guided Context Gating, a unique approach that integrates Context Formulation, Channel Correlation, and Guided Gating to learn global context, spatial correlations, and localized lesion context. Our qualitative evaluation against existing attention mechanisms emphasizes the superiority of Guided Context Gating in terms of explainability. Notably, experiments on the Zenodo-DR-7 dataset reveal a substantial 2.63% accuracy boost over advanced attention mechanisms and an impressive 6.53% improvement over the state-of-the-art Vision Transformer for assessing the severity grade of retinopathy, even with imbalanced and limited training samples for each class.
[ "['Teja Krishna Cherukuri' 'Nagur Shareef Shaik' 'Dong Hye Ye']" ]
null
null
2406.13128
null
null
http://arxiv.org/pdf/2406.13128v1
2024-06-19T00:45:57Z
2024-06-19T00:45:57Z
A New Approach for Evaluating and Improving the Performance of Segmentation Algorithms on Hard-to-Detect Blood Vessels
Many studies regarding the vasculature of biological tissues involve the segmentation of the blood vessels in a sample followed by the creation of a graph structure to model the vasculature. The graph is then used to extract relevant vascular properties. Small segmentation errors can lead to largely distinct connectivity patterns and a high degree of variability of the extracted properties. Nevertheless, global metrics such as Dice, precision, and recall are commonly applied for measuring the performance of blood vessel segmentation algorithms. These metrics might conceal important information about the accuracy at specific regions of a sample. To tackle this issue, we propose a local vessel salience (LVS) index to quantify the expected difficulty in segmenting specific blood vessel segments. The LVS index is calculated for each vessel pixel by comparing the local intensity of the vessel with the image background around the pixel. The index is then used for defining a new accuracy metric called low-salience recall (LSRecall), which quantifies the performance of segmentation algorithms on blood vessel segments having low salience. The perspective provided by the LVS index is used to define a data augmentation procedure that can be used to improve the segmentation performance of convolutional neural networks. We show that segmentation algorithms having high Dice and recall values can display very low LSRecall values, which reveals systematic errors of these algorithms for vessels having low salience. The proposed data augmentation procedure is able to improve the LSRecall of some samples by as much as 25%. The developed methodology opens up new possibilities for comparing the performance of segmentation algorithms regarding hard-to-detect blood vessels as well as their capabilities for vascular topology preservation.
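The salience computation can be conveyed in a few lines: contrast each vessel pixel against the mean background intensity in a local disk. The sketch below follows that spirit; the paper's exact normalization and the LSRecall aggregation are not reproduced here:

```python
import numpy as np
from scipy import ndimage

def local_vessel_salience(image, vessel_mask, radius=5):
    """Per-pixel contrast between a vessel pixel's intensity and the mean
    background intensity within a disk around it; low values flag
    hard-to-detect vessel segments."""
    yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    disk = ((yy ** 2 + xx ** 2) <= radius ** 2).astype(float)
    bg = (~vessel_mask).astype(float)
    bg_sum = ndimage.convolve(image * bg, disk, mode="nearest")
    bg_cnt = ndimage.convolve(bg, disk, mode="nearest")
    bg_mean = bg_sum / np.maximum(bg_cnt, 1e-9)
    lvs = np.zeros_like(image, dtype=float)
    lvs[vessel_mask] = np.abs(image[vessel_mask] - bg_mean[vessel_mask])
    return lvs

# Toy demo: a faint horizontal "vessel" in a noisy image.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
mask = np.zeros((64, 64), dtype=bool)
mask[30:34, :] = True
img[mask] += 0.5
print(local_vessel_salience(img, mask).max())
```

Restricting recall to pixels whose salience falls below a threshold yields an LSRecall-style score that global metrics such as Dice would smooth over.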
[ "['João Pedro Parella' 'Matheus Viana da Silva' 'Cesar Henrique Comin']" ]