categories: string
doi: string
id: string
year: float64
venue: string
link: string
updated: string
published: string
title: string
abstract: string
authors: list
null
null
2404.10548
null
null
http://arxiv.org/pdf/2404.10548v1
2024-04-16T13:18:02Z
2024-04-16T13:18:02Z
Classification of Prostate Cancer in 3D Magnetic Resonance Imaging Data based on Convolutional Neural Networks
Prostate cancer is a commonly diagnosed cancerous disease among men worldwide. Even with modern technology such as multi-parametric magnetic resonance tomography and guided biopsies, the process for diagnosing prostate cancer remains time-consuming and requires highly trained professionals. In this paper, different convolutional neural networks (CNN) are evaluated on their ability to reliably classify whether an MRI sequence contains malignant lesions. Implementations of a ResNet, a ConvNet and a ConvNeXt for 3D image data are trained and evaluated. The models are trained using different data augmentation techniques, learning rates, and optimizers. The data is taken from a private dataset, provided by Cantonal Hospital Aarau. The best result was achieved by a ResNet3D, yielding an average precision score of 0.4583 and an AUC ROC score of 0.6214.
[ "['Malte Rippa' 'Ruben Schulze' 'Marian Himstedt' 'Felice Burn']" ]
null
null
2404.10550
null
null
http://arxiv.org/pdf/2404.10550v2
2024-05-07T14:00:29Z
2024-04-16T13:19:46Z
Analytical Approximation of the ELBO Gradient in the Context of the Clutter Problem
We propose an analytical solution for approximating the gradient of the Evidence Lower Bound (ELBO) in variational inference problems where the statistical model is a Bayesian network consisting of observations drawn from a mixture of a Gaussian distribution embedded in unrelated clutter, known as the clutter problem. The method employs the reparameterization trick to move the gradient operator inside the expectation and relies on the assumption that, because the likelihood factorizes over the observed data, the variational distribution is generally more compactly supported than the Gaussian distribution in the likelihood factors. This allows efficient local approximation of the individual likelihood factors, which leads to an analytical solution for the integral defining the gradient expectation. We integrate the proposed gradient approximation as the expectation step in an EM (Expectation Maximization) algorithm for maximizing ELBO and test against classical deterministic approaches in Bayesian inference, such as the Laplace approximation, Expectation Propagation and Mean-Field Variational Inference. The proposed method demonstrates good accuracy and rate of convergence together with linear computational complexity.
[ "['Roumen Nikolaev Popov']" ]
null
null
2404.10561
null
null
http://arxiv.org/pdf/2404.10561v1
2024-04-16T13:35:24Z
2024-04-16T13:35:24Z
HiGraphDTI: Hierarchical Graph Representation Learning for Drug-Target Interaction Prediction
The discovery of drug-target interactions (DTIs) plays a crucial role in pharmaceutical development. Deep learning models achieve more accurate results in DTI prediction due to their ability to extract robust and expressive features from drug and target chemical structures. However, existing deep learning methods typically generate drug features via aggregating molecular atom representations, ignoring the chemical properties carried by motifs, i.e., substructures of the molecular graph. The atom-drug double-level molecular representation learning cannot fully exploit structure information and fails to interpret the DTI mechanism from the motif perspective. In addition, sequential model-based target feature extraction either fuses limited contextual information or requires expensive computational resources. To tackle the above issues, we propose a hierarchical graph representation learning-based DTI prediction method (HiGraphDTI). Specifically, HiGraphDTI learns hierarchical drug representations from triple-level molecular graphs to thoroughly exploit chemical information embedded in atoms, motifs, and molecules. Then, an attentional feature fusion module incorporates information from different receptive fields to extract expressive target features. Last, the hierarchical attention mechanism identifies crucial molecular segments, which offers complementary views for interpreting interaction mechanisms. The experimental results not only demonstrate the superiority of HiGraphDTI over state-of-the-art methods, but also confirm the practical ability of our model in interaction interpretation and new DTI discovery.
[ "['Bin Liu' 'Siqi Wu' 'Jin Wang' 'Xin Deng' 'Ao Zhou']" ]
null
null
2404.10574
null
null
http://arxiv.org/pdf/2404.10574v1
2024-04-16T13:52:00Z
2024-04-16T13:52:00Z
Uncertainty-guided Open-Set Source-Free Unsupervised Domain Adaptation with Target-private Class Segregation
Standard Unsupervised Domain Adaptation (UDA) aims to transfer knowledge from a labeled source domain to an unlabeled target but usually requires simultaneous access to both source and target data. Moreover, UDA approaches commonly assume that source and target domains share the same labels space. Yet, these two assumptions are hardly satisfied in real-world scenarios. This paper considers the more challenging Source-Free Open-set Domain Adaptation (SF-OSDA) setting, where both assumptions are dropped. We propose a novel approach for SF-OSDA that exploits the granularity of target-private categories by segregating their samples into multiple unknown classes. Starting from an initial clustering-based assignment, our method progressively improves the segregation of target-private samples by refining their pseudo-labels with the guide of an uncertainty-based sample selection module. Additionally, we propose a novel contrastive loss, named NL-InfoNCELoss, that, integrating negative learning into self-supervised contrastive learning, enhances the model robustness to noisy pseudo-labels. Extensive experiments on benchmark datasets demonstrate the superiority of the proposed method over existing approaches, establishing new state-of-the-art performance. Notably, additional analyses show that our method is able to learn the underlying semantics of novel classes, opening the possibility to perform novel class discovery.
[ "['Mattia Litrico' 'Davide Talon' 'Sebastiano Battiato' 'Alessio Del Bue'\n 'Mario Valerio Giuffrida' 'Pietro Morerio']" ]
null
null
2404.10575
null
null
http://arxiv.org/pdf/2404.10575v1
2024-04-16T13:53:58Z
2024-04-16T13:53:58Z
EMC$^2$: Efficient MCMC Negative Sampling for Contrastive Learning with Global Convergence
A key challenge in contrastive learning is to generate negative samples from a large sample set to contrast with positive samples, for learning better encoding of the data. These negative samples often follow a softmax distribution that is dynamically updated during the training process. However, sampling from this distribution is non-trivial due to the high computational costs in computing the partition function. In this paper, we propose an Efficient Markov Chain Monte Carlo negative sampling method for Contrastive learning (EMC$^2$). We follow the global contrastive learning loss as introduced in SogCLR, and propose EMC$^2$ which utilizes an adaptive Metropolis-Hastings subroutine to generate hardness-aware negative samples in an online fashion during the optimization. We prove that EMC$^2$ finds an $\mathcal{O}(1/\sqrt{T})$-stationary point of the global contrastive loss in $T$ iterations. Compared to prior works, EMC$^2$ is the first algorithm that exhibits global convergence (to stationarity) regardless of the choice of batch size while exhibiting low computation and memory cost. Numerical experiments validate that EMC$^2$ is effective with small batch training and achieves comparable or better performance than baseline algorithms. We report the results for pre-training image encoders on STL-10 and ImageNet-100.
[ "['Chung-Yiu Yau' 'Hoi-To Wai' 'Parameswaran Raman' 'Soumajyoti Sarkar'\n 'Mingyi Hong']" ]
null
null
2404.10580
null
null
http://arxiv.org/pdf/2404.10580v1
2024-04-16T14:05:29Z
2024-04-16T14:05:29Z
Data-driven subgrouping of patient trajectories with chronic diseases: Evidence from low back pain
Clinical data informs the personalization of health care with a potential for more effective disease management. In practice, this is achieved by subgrouping, whereby clusters with similar patient characteristics are identified and then receive customized treatment plans with the goal of targeting subgroup-specific disease dynamics. In this paper, we propose a novel mixture hidden Markov model for subgrouping patient trajectories from chronic diseases. Our model is probabilistic and carefully designed to capture different trajectory phases of chronic diseases (i.e., "severe", "moderate", and "mild") through tailored latent states. We demonstrate our subgrouping framework based on a longitudinal study across 847 patients with non-specific low back pain. Here, our subgrouping framework identifies 8 subgroups. Further, we show that our subgrouping framework outperforms common baselines in terms of cluster validity indices. Finally, we discuss the applicability of the model to other chronic and long-lasting diseases.
[ "['Christof Naumzik' 'Alice Kongsted' 'Werner Vach' 'Stefan Feuerriegel']" ]
null
null
2404.10588
null
null
http://arxiv.org/pdf/2404.10588v2
2024-04-17T12:09:17Z
2024-04-16T14:13:44Z
Do Counterfactual Examples Complicate Adversarial Training?
We leverage diffusion models to study the robustness-performance tradeoff of robust classifiers. Our approach introduces a simple, pretrained diffusion method to generate low-norm counterfactual examples (CEs): semantically altered data which results in different true class membership. We report that the confidence and accuracy of robust models on their clean training data are associated with the proximity of the data to their CEs. Moreover, robust models perform very poorly when evaluated on the CEs directly, as they become increasingly invariant to the low-norm, semantic changes brought by CEs. The results indicate a significant overlap between non-robust and semantic features, countering the common assumption that non-robust features are not interpretable.
[ "['Eric Yeats' 'Cameron Darwin' 'Eduardo Ortega' 'Frank Liu' 'Hai Li']" ]
null
null
2404.10589
null
null
http://arxiv.org/pdf/2404.10589v1
2024-03-19T21:19:54Z
2024-03-19T21:19:54Z
Thermal Crosstalk Modelling and Compensation Methods for Programmable Photonic Integrated Circuits
Photonic integrated circuits play an important role in the field of optical computing, promising faster and more energy-efficient operations compared to their digital counterparts. This advantage stems from the inherent suitability of optical signals to carry out matrix multiplication. However, even deterministic phenomena such as thermal crosstalk make precise programming of photonic chips a challenging task. Here, we train and experimentally evaluate three models incorporating varying degrees of physics intuition to predict the effect of thermal crosstalk in different locations of an integrated programmable photonic mesh. We quantify the effect of thermal crosstalk by the resonance wavelength shift in the power spectrum of a microring resonator implemented in the chip, achieving modelling errors <0.5 pm. We experimentally validate the models through compensation of the crosstalk-induced wavelength shift. Finally, we evaluate the generalization capabilities of one of the models by employing it to predict and compensate for the effect of thermal crosstalk for parts of the chip it was not trained on, revealing root-mean-square-errors of <2.0 pm.
[ "['Isidora Teofilovic' 'Ali Cem' 'David Sanchez-Jacome'\n 'Daniel Perez-Lopez' 'Francesco Da Ros']" ]
null
null
2404.10600
null
null
http://arxiv.org/pdf/2404.10600v1
2024-04-16T14:26:55Z
2024-04-16T14:26:55Z
Intra-operative tumour margin evaluation in breast-conserving surgery with deep learning
A positive margin may result in an increased risk of local recurrence after breast-conserving surgery for any malignant tumour. Providing the surgeon with real-time intra-operative information on the presence of positive resection margins would help reduce the number of positive margins. This study aims to design an intra-operative tumour margin evaluation scheme using specimen mammography in breast-conserving surgery. A total of 30 cases were evaluated and compared with contours determined manually by experienced physicians and with pathology reports. The proposed method uses image thresholding to extract regions of interest and then applies a deep learning model, SegNet, to segment tumour tissue; the margin width of the surrounding normal tissue is evaluated as the result. The desired margin around the tumour was set to 10 mm. The smallest average difference from the manually sketched margin was 6.53 mm ± 5.84 mm. In all cases, the SegNet architecture was used to obtain the tissue specimen boundary and the tumour contour, respectively. The simulation results indicate that this technology is helpful in discriminating positive from negative margins in the intra-operative setting, and that the proposed scheme is a potential component of an intra-operative measurement system. The experimental results reveal that deep learning techniques can produce results consistent with pathology reports.
[ "['Wei-Chung Shia' 'Yu-Len Huang' 'Yi-Chun Chen' 'Hwa-Koon Wu'\n 'Dar-Ren Chen']" ]
null
null
2404.10606
null
null
http://arxiv.org/pdf/2404.10606v1
2024-03-14T14:14:04Z
2024-03-14T14:14:04Z
InfoCon: Concept Discovery with Generative and Discriminative Informativeness
We focus on the self-supervised discovery of manipulation concepts that can be adapted and reassembled to address various robotic tasks. We propose that the decision to conceptualize a physical procedure should not depend on how we name it (semantics) but rather on the significance of the informativeness in its representation regarding the low-level physical state and state changes. We model manipulation concepts (discrete symbols) as generative and discriminative goals and derive metrics that can autonomously link them to meaningful sub-trajectories from noisy, unlabeled demonstrations. Specifically, we employ a trainable codebook containing encodings (concepts) capable of synthesizing the end-state of a sub-trajectory given the current state (generative informativeness). Moreover, the encoding corresponding to a particular sub-trajectory should differentiate the state within and outside it and confidently predict the subsequent action based on the gradient of its discriminative score (discriminative informativeness). These metrics, which do not rely on human annotation, can be seamlessly integrated into a VQ-VAE framework, enabling the partitioning of demonstrations into semantically consistent sub-trajectories, fulfilling the purpose of discovering manipulation concepts and the corresponding sub-goal (key) states. We evaluate the effectiveness of the learned concepts by training policies that utilize them as guidance, demonstrating superior performance compared to other baselines. Additionally, our discovered manipulation concepts compare favorably to human-annotated ones while saving much manual effort.
[ "['Ruizhe Liu' 'Qian Luo' 'Yanchao Yang']" ]
null
null
2404.10617
null
null
http://arxiv.org/pdf/2404.10617v1
2024-03-16T01:40:46Z
2024-03-16T01:40:46Z
Optimizing Performance on Trinity Utilizing Machine Learning, Proxy Applications and Scheduling Priorities
The sheer number of nodes continues to increase in today's supercomputers; the first half of Trinity alone contains more than 9400 compute nodes. Since the speed of today's clusters is limited by their slowest nodes, it is more important than ever to identify slow nodes, improve their performance where possible, and ensure minimal usage of slower nodes during performance-critical runs. This is an ongoing maintenance task that occurs on a regular basis, and it is therefore important to minimize the impact on users by assessing and addressing slow-performing nodes and mitigating their consequences while minimizing downtime. These issues can be solved, in large part, through a systematic application of fast-running hardware assessment tests, the application of machine learning, and the use of performance data to increase the efficiency of large clusters. Proxy applications utilizing both MPI and OpenMP were developed to produce data as a substitute for long-runtime applications to evaluate node performance. Machine learning is applied to identify underperforming nodes, and policies are being discussed to both minimize the impact of underperforming nodes and increase the efficiency of the system. In this paper, I describe the process used to produce quickly performing proxy tests, consider various methods to isolate the outliers, and produce ordered lists for use in scheduling to accomplish this task.
[ "['Phil Romero']" ]
null
null
2404.10618
null
null
http://arxiv.org/pdf/2404.10618v1
2024-04-16T14:42:49Z
2024-04-16T14:42:49Z
Private Attribute Inference from Images with Vision-Language Models
As large language models (LLMs) become ubiquitous in our daily tasks and digital interactions, associated privacy risks are increasingly in focus. While LLM privacy research has primarily focused on the leakage of model training data, it has recently been shown that the increase in models' capabilities has enabled LLMs to make accurate privacy-infringing inferences from previously unseen texts. With the rise of multimodal vision-language models (VLMs), capable of understanding both images and text, a pertinent question is whether such results transfer to the previously unexplored domain of benign images posted online. To investigate the risks associated with the image reasoning capabilities of newly emerging VLMs, we compile an image dataset with human-annotated labels of the image owner's personal attributes. In order to understand the additional privacy risk posed by VLMs beyond traditional human attribute recognition, our dataset consists of images where the inferable private attributes do not stem from direct depictions of humans. On this dataset, we evaluate the inferential capabilities of 7 state-of-the-art VLMs, finding that they can infer various personal attributes at up to 77.6% accuracy. Concerningly, we observe that accuracy scales with the general capabilities of the models, implying that future models can be misused as stronger adversaries, establishing an imperative for the development of adequate defenses.
[ "['Batuhan Tömekçe' 'Mark Vero' 'Robin Staab' 'Martin Vechev']" ]
null
null
2404.10620
null
null
http://arxiv.org/pdf/2404.10620v1
2024-04-16T14:43:33Z
2024-04-16T14:43:33Z
PyTorchGeoNodes: Enabling Differentiable Shape Programs for 3D Shape Reconstruction
We propose PyTorchGeoNodes, a differentiable module for reconstructing 3D objects from images using interpretable shape programs. In comparison to traditional CAD model retrieval methods, the use of shape programs for 3D reconstruction allows for reasoning about the semantic properties of reconstructed objects, editing, low memory footprint, etc. However, the utilization of shape programs for 3D scene understanding has been largely neglected in past works. As our main contribution, we enable gradient-based optimization by introducing a module that translates shape programs designed in Blender, for example, into efficient PyTorch code. We also provide a method that relies on PyTorchGeoNodes and is inspired by Monte Carlo Tree Search (MCTS) to jointly optimize discrete and continuous parameters of shape programs and reconstruct 3D objects for input scenes. In our experiments, we apply our algorithm to reconstruct 3D objects in the ScanNet dataset and evaluate our results against CAD model retrieval-based reconstructions. Our experiments indicate that our reconstructions match well the input scenes while enabling semantic reasoning about reconstructed objects.
[ "['Sinisa Stekovic' 'Stefan Ainetter' \"Mattia D'Urso\"\n 'Friedrich Fraundorfer' 'Vincent Lepetit']" ]
null
null
2404.10630
null
null
http://arxiv.org/pdf/2404.10630v1
2024-04-16T15:02:46Z
2024-04-16T15:02:46Z
HLAT: High-quality Large Language Model Pre-trained on AWS Trainium
Getting large language models (LLMs) to perform well on downstream tasks requires pre-training over trillions of tokens. This typically demands a large number of powerful computational devices in addition to a stable distributed training framework to accelerate the training. The growing number of applications leveraging AI/ML has led to a scarcity of the expensive conventional accelerators (such as GPUs), creating a need for alternative specialized accelerators that are scalable and cost-efficient. AWS Trainium is the second-generation machine learning accelerator purpose-built for training large deep learning models. Its corresponding instance, Amazon EC2 trn1, is an alternative to GPU instances for LLM training. However, training LLMs with billions of parameters on trn1 is challenging due to its relatively nascent software ecosystem. In this paper, we showcase HLAT: a 7-billion-parameter decoder-only LLM pre-trained using trn1 instances over 1.8 trillion tokens. The performance of HLAT is benchmarked against popular open-source baseline models including LLaMA and OpenLLaMA, which have been trained on NVIDIA GPUs and Google TPUs, respectively. On various evaluation tasks, we show that HLAT achieves model quality on par with the baselines. We also share best practices for using the Neuron Distributed Training Library (NDTL), a customized distributed training library for AWS Trainium, to achieve efficient training. Our work demonstrates that AWS Trainium, powered by the NDTL, is able to successfully pre-train state-of-the-art LLM models with high performance and cost-effectiveness.
[ "['Haozheng Fan' 'Hao Zhou' 'Guangtai Huang' 'Parameswaran Raman'\n 'Xinwei Fu' 'Gaurav Gupta' 'Dhananjay Ram' 'Yida Wang' 'Jun Huan']" ]
null
null
2404.10631
null
null
http://arxiv.org/abs/2404.10631v1
2024-03-28T16:33:16Z
2024-03-28T16:33:16Z
Parallel Implementations Assessment of a Spatial-Spectral Classifier for Hyperspectral Clinical Applications
Hyperspectral (HS) imaging presents itself as a non-contact, non-ionizing and non-invasive technique, proven to be suitable for medical diagnosis. However, the volume of information contained in these images makes it difficult to provide the surgeon with information about tissue boundaries in real-time. To that end, High-Performance Computing (HPC) platforms become necessary. This paper presents a comparison of the performance provided by five different HPC platforms while processing a spatial-spectral approach to classify HS images, assessing their main benefits and drawbacks. To provide a complete study, two different medical applications, with two different requirements, have been analyzed. The first application consists of HS images taken from neurosurgical operations; the second one presents HS images taken from dermatological interventions. While the main constraint for neurosurgical applications is the processing time, in other environments, such as the dermatological one, other requirements can be considered. In that sense, energy efficiency is becoming a major challenge, since this kind of application is usually developed as a hand-held device, thus depending on the battery capacity. These requirements have been considered when choosing the target platforms: on the one hand, three of the most powerful Graphics Processing Units (GPUs) available on the market; and, on the other hand, a low-power GPU and a manycore architecture, both specifically designed for battery-dependent environments.
[ "['Raquel Lazcano' 'Daniel Madroñal' 'Giordana Florimbi' 'Jaime Sancho'\n 'Sergio Sanchez' 'Raquel Leon' 'Himar Fabelo' 'Samuel Ortega'\n 'Emanuele Torti' 'Ruben Salvador' 'Margarita Marrero-Martin'\n 'Francesco Leporati' 'Eduardo Juarez' 'Gustavo M Callico' 'Cesar Sanz']" ]
null
null
2404.10635
null
null
http://arxiv.org/pdf/2404.10635v2
2024-06-05T16:04:47Z
2024-03-26T15:36:47Z
Compressed Federated Reinforcement Learning with a Generative Model
Reinforcement learning has recently gained unprecedented popularity, yet it still grapples with sample inefficiency. Addressing this challenge, federated reinforcement learning (FedRL) has emerged, wherein agents collaboratively learn a single policy by aggregating local estimations. However, this aggregation step incurs significant communication costs. In this paper, we propose CompFedRL, a communication-efficient FedRL approach incorporating both \textit{periodic aggregation} and (direct/error-feedback) compression mechanisms. Specifically, we consider compressed federated $Q$-learning with a generative model setup, where a central server learns an optimal $Q$-function by periodically aggregating compressed $Q$-estimates from local agents. For the first time, we characterize the impact of these two mechanisms (which have remained elusive) by providing a finite-time analysis of our algorithm, demonstrating strong convergence behaviors when utilizing either direct or error-feedback compression. Our bounds indicate improved solution accuracy concerning the number of agents and other federated hyperparameters while simultaneously reducing communication costs. To corroborate our theory, we also conduct in-depth numerical experiments to verify our findings, considering Top-$K$ and Sparsified-$K$ sparsification operators.
[ "['Ali Beikmohammadi' 'Sarit Khirirat' 'Sindri Magnússon']" ]
null
null
2404.10636
null
null
http://arxiv.org/pdf/2404.10636v2
2024-04-17T16:27:37Z
2024-03-27T18:12:02Z
What are human values, and how do we align AI to them?
There is an emerging consensus that we need to align AI systems with human values (Gabriel, 2020; Ji et al., 2024), but it remains unclear how to apply this to language models in practice. We split the problem of "aligning to human values" into three parts: first, eliciting values from people; second, reconciling those values into an alignment target for training ML models; and third, actually training the model. In this paper, we focus on the first two parts, and ask the question: what are "good" ways to synthesize diverse human inputs about values into a target for aligning language models? To answer this question, we first define a set of 6 criteria that we believe must be satisfied for an alignment target to shape model behavior in accordance with human values. We then propose a process for eliciting and reconciling values called Moral Graph Elicitation (MGE), which uses a large language model to interview participants about their values in particular contexts; our approach is inspired by the philosophy of values advanced by Taylor (1977), Chang (2004), and others. We trial MGE with a representative sample of 500 Americans, on 3 intentionally divisive prompts (e.g. advice about abortion). Our results demonstrate that MGE is promising for improving model alignment across all 6 criteria. For example, almost all participants (89.1%) felt well represented by the process, and 89% thought the final moral graph was fair, even if their value wasn't voted as the wisest. Our process often results in "expert" values (e.g. values from women who have solicited abortion advice) rising to the top of the moral graph, without defining who is considered an expert in advance.
[ "['Oliver Klingefjord' 'Ryan Lowe' 'Joe Edelman']" ]
null
null
2404.10642
null
null
http://arxiv.org/pdf/2404.10642v2
2024-05-23T06:38:15Z
2024-04-16T15:16:22Z
Self-playing Adversarial Language Game Enhances LLM Reasoning
We explore the self-play training procedure of large language models (LLMs) in a two-player adversarial language game called Adversarial Taboo. In this game, an attacker and a defender communicate around a target word only visible to the attacker. The attacker aims to induce the defender to speak the target word unconsciously, while the defender tries to infer the target word from the attacker's utterances. To win the game, both players should have sufficient knowledge about the target word and high-level reasoning ability to infer and express in this information-reserved conversation. Hence, we are curious about whether LLMs' reasoning ability can be further enhanced by self-play in this adversarial language game (SPAG). With this goal, we select several open-source LLMs and let each act as the attacker and play with a copy of itself as the defender on an extensive range of target words. Through reinforcement learning on the game outcomes, we observe that the LLMs' performances uniformly improve on a broad range of reasoning benchmarks. Furthermore, iteratively adopting this self-play process can continuously promote LLMs' reasoning abilities. The code is at https://github.com/Linear95/SPAG.
[ "['Pengyu Cheng' 'Tianhao Hu' 'Han Xu' 'Zhisong Zhang' 'Yong Dai' 'Lei Han'\n 'Nan Du']" ]
null
null
2404.10645
null
null
http://arxiv.org/pdf/2404.10645v1
2024-04-16T15:18:40Z
2024-04-16T15:18:40Z
Continuous Control Reinforcement Learning: Distributed Distributional DrQ Algorithms
Distributed Distributional DrQ is a model-free, off-policy RL algorithm for continuous control tasks based on the state and observations of the agent. It is an actor-critic method with data augmentation and a distributional perspective on the critic value function, and it aims to learn to control the agent and master tasks in a high-dimensional continuous space. DrQ-v2 uses DDPG as the backbone and achieves strong performance in various continuous control tasks. Here, Distributed Distributional DrQ uses Distributed Distributional DDPG as the backbone instead; this modification aims to achieve better performance on hard continuous control tasks through the greater expressive ability of the distributional value function and distributed actor policies.
[ "['Zehao Zhou']" ]
null
null
2404.10662
null
null
http://arxiv.org/pdf/2404.10662v2
2024-04-18T04:49:02Z
2024-04-16T15:39:11Z
Continual Offline Reinforcement Learning via Diffusion-based Dual Generative Replay
We study continual offline reinforcement learning, a practical paradigm that facilitates forward transfer and mitigates catastrophic forgetting to tackle sequential offline tasks. We propose a dual generative replay framework that retains previous knowledge by concurrent replay of generated pseudo-data. First, we decouple the continual learning policy into a diffusion-based generative behavior model and a multi-head action evaluation model, allowing the policy to inherit distributional expressivity for encompassing a progressive range of diverse behaviors. Second, we train a task-conditioned diffusion model to mimic state distributions of past tasks. Generated states are paired with corresponding responses from the behavior generator to represent old tasks with high-fidelity replayed samples. Finally, by interleaving pseudo samples with real ones of the new task, we continually update the state and behavior generators to model progressively diverse behaviors, and regularize the multi-head critic via behavior cloning to mitigate forgetting. Experiments demonstrate that our method achieves better forward transfer with less forgetting, and closely approximates the results of using previous ground-truth data due to its high-fidelity replay of the sample space. Our code is available at \href{https://github.com/NJU-RL/CuGRO}{https://github.com/NJU-RL/CuGRO}.
[ "['Jinmei Liu' 'Wenbin Li' 'Xiangyu Yue' 'Shilin Zhang' 'Chunlin Chen'\n 'Zhi Wang']" ]
null
null
2404.10664
null
null
http://arxiv.org/pdf/2404.10664v2
2024-05-12T21:42:09Z
2024-04-16T15:40:18Z
Assessing The Impact of CNN Auto Encoder-Based Image Denoising on Image Classification Tasks
Images captured from the real world are often affected by different types of noise, which can significantly impact the performance of Computer Vision systems and the quality of visual data. This study presents a novel approach for defect detection in noisy images of casting products, specifically focusing on submersible pump impellers. The methodology involves utilizing deep learning models such as VGG16, InceptionV3, and other models in both the spatial and frequency domains to identify noise types and defect status. The research process begins with preprocessing images, followed by applying denoising techniques tailored to specific noise categories. The goal is to enhance the accuracy and robustness of defect detection by integrating noise detection and denoising into the classification pipeline. The study achieved remarkable results using VGG16 for noise type classification in the frequency domain, achieving an accuracy of over 99%. Removal of salt-and-pepper noise resulted in an average SSIM of 87.9, while Gaussian noise removal had an average SSIM of 64.0, and periodic noise removal yielded an average SSIM of 81.6. This comprehensive approach showcases the effectiveness of the deep autoencoder model and the median filter as denoising strategies in real-world industrial applications. Finally, our study reports significant improvements in binary classification accuracy for defect detection compared to previous methods. For the VGG16 classifier, accuracy increased from 94.6% to 97.0%, demonstrating the effectiveness of the proposed noise detection and denoising approach. Similarly, for the InceptionV3 classifier, accuracy improved from 84.7% to 90.0%, further validating the benefits of integrating noise analysis into the classification pipeline.
[ "['Mohsen Hami' 'Mahdi JameBozorg']" ]
null
null
2404.10678
null
null
http://arxiv.org/pdf/2404.10678v1
2024-04-16T15:53:41Z
2024-04-16T15:53:41Z
Automating REST API Postman Test Cases Using LLM
In the contemporary landscape of technological advancements, the automation of manual processes is crucial, compelling the demand for huge datasets to effectively train and test machines. This research paper is dedicated to the exploration and implementation of an automated approach to generate test cases specifically using Large Language Models. The methodology integrates the use of OpenAI to enhance the efficiency and effectiveness of test case generation for training and evaluating Large Language Models. This formalized approach with LLMs simplifies the testing process, making it more efficient and comprehensive. Leveraging natural language understanding, LLMs can intelligently formulate test cases that cover a broad range of REST API properties, ensuring comprehensive testing. The model developed during the research is trained using manually collected Postman test cases or instances for various REST APIs. LLMs enhance the creation of Postman test cases by automating the generation of varied and intricate test scenarios. Postman test cases offer streamlined automation, collaboration, and dynamic data handling, providing a user-friendly and efficient approach to API testing compared to traditional test cases. Thus, the model developed not only conforms to current technological standards but also holds the promise of evolving into an idea of substantial importance in future technological advancements.
[ "['S Deepika Sri' 'Mohammed Aadil S' 'Sanjjushri Varshini R'\n 'Raja CSP Raman' 'Gopinath Rajagopal' 'S Taranath Chan']" ]
null
null
2404.10684
null
null
http://arxiv.org/pdf/2404.10684v1
2024-04-16T16:04:11Z
2024-04-16T16:04:11Z
Driver Fatigue Prediction using Randomly Activated Neural Networks for Smart Ridesharing Platforms
Drivers in ridesharing platforms exhibit cognitive atrophy and fatigue as they accept ride offers along the day, which can have a significant impact on the overall efficiency of the ridesharing platform. In contrast to the current literature which focuses primarily on modeling and learning driver's preferences across different ride offers, this paper proposes a novel Dynamic Discounted Satisficing (DDS) heuristic to model and predict driver's sequential ride decisions during a given shift. Based on DDS heuristic, a novel stochastic neural network with random activations is proposed to model DDS heuristic and predict the final decision made by a given driver. The presence of random activations in the network necessitated the development of a novel training algorithm called Sampling-Based Back Propagation Through Time (SBPTT), where gradients are computed for independent instances of neural networks (obtained via sampling the distribution of activation threshold) and aggregated to update the network parameters. Using both simulation experiments as well as on real Chicago taxi dataset, this paper demonstrates the improved performance of the proposed approach, when compared to state-of-the-art methods.
[ "['Sree Pooja Akula' 'Mukund Telukunta' 'Venkata Sriram Siddhardh Nadendla']" ]
null
null
2404.10688
null
null
http://arxiv.org/pdf/2404.10688v1
2024-04-16T16:08:59Z
2024-04-16T16:08:59Z
Efficient Conditional Diffusion Model with Probability Flow Sampling for Image Super-resolution
Image super-resolution is a fundamentally ill-posed problem because multiple valid high-resolution images exist for one low-resolution image. Super-resolution methods based on diffusion probabilistic models can deal with the ill-posed nature by learning the distribution of high-resolution images conditioned on low-resolution images, avoiding the problem of blurry images in PSNR-oriented methods. However, existing diffusion-based super-resolution methods have high time consumption with the use of iterative sampling, while the quality and consistency of generated images are less than ideal due to problems like color shifting. In this paper, we propose Efficient Conditional Diffusion Model with Probability Flow Sampling (ECDP) for image super-resolution. To reduce the time consumption, we design a continuous-time conditional diffusion model for image super-resolution, which enables the use of probability flow sampling for efficient generation. Additionally, to improve the consistency of generated images, we propose a hybrid parametrization for the denoiser network, which interpolates between the data-predicting parametrization and the noise-predicting parametrization for different noise scales. Moreover, we design an image quality loss as a complement to the score matching loss of diffusion models, further improving the consistency and quality of super-resolution. Extensive experiments on DIV2K, ImageNet, and CelebA demonstrate that our method achieves higher super-resolution quality than existing diffusion-based image super-resolution methods while having lower time consumption. Our code is available at https://github.com/Yuan-Yutao/ECDP.
[ "['Yutao Yuan' 'Chun Yuan']" ]
null
null
2404.10689
null
null
http://arxiv.org/pdf/2404.10689v1
2024-04-16T16:09:38Z
2024-04-16T16:09:38Z
Network architecture search of X-ray based scientific applications
X-ray and electron diffraction-based microscopy use Bragg peak detection and ptychography to perform 3-D imaging at an atomic resolution. Typically, these techniques are implemented using computationally complex tasks such as a Pseudo-Voigt function or solving a complex inverse problem. Recently, the use of deep neural networks has improved the existing state-of-the-art approaches. However, the design and development of the neural network models depends on time- and labor-intensive tuning of the model by application experts. To that end, we propose a hyperparameter search (HPS) and neural architecture search (NAS) approach to automate the design and optimization of the neural network models for model size, energy consumption and throughput. We demonstrate the improved performance of the auto-tuned models when compared to the manually tuned BraggNN and PtychoNN benchmarks. We study and demonstrate the importance of exploring the search space of tunable hyperparameters in enhancing the performance of Bragg peak detection and ptychographic reconstruction. Our NAS and HPS of (1) BraggNN achieves a 31.03% improvement in Bragg peak detection accuracy with an 87.57% reduction in model size, and (2) PtychoNN achieves a 16.77% improvement in model accuracy and a 12.82% reduction in model size when compared to the baseline PtychoNN model. When inferred on the Orin-AGX edge platform, the optimized BraggNN and PtychoNN models demonstrate a 10.51% and 9.47% reduction in inference latency and a 44.18% and 15.34% reduction in energy consumption, respectively, when compared to their baselines.
[ "['Adarsha Balaji' 'Ramyad Hadidi' 'Gregory Kollmer' 'Mohammed E. Fouda'\n 'Prasanna Balaprakash']" ]
null
null
2404.10690
null
null
http://arxiv.org/pdf/2404.10690v1
2024-04-16T16:10:23Z
2024-04-16T16:10:23Z
MathWriting: A Dataset For Handwritten Mathematical Expression Recognition
We introduce MathWriting, the largest online handwritten mathematical expression dataset to date. It consists of 230k human-written samples and an additional 400k synthetic ones. MathWriting can also be used for offline HME recognition and is larger than all existing offline HME datasets like IM2LATEX-100K. We introduce a benchmark based on MathWriting data in order to advance research on both online and offline HME recognition.
[ "['Philippe Gervais' 'Asya Fadeeva' 'Andrii Maksai']" ]
null
null
2404.10700
null
null
http://arxiv.org/pdf/2404.10700v2
2024-07-15T14:09:28Z
2024-04-16T16:17:48Z
Rawformer: Unpaired Raw-to-Raw Translation for Learnable Camera ISPs
Modern smartphone camera quality heavily relies on the image signal processor (ISP) to enhance captured raw images, utilizing carefully designed modules to produce final output images encoded in a standard color space (e.g., sRGB). Neural-based end-to-end learnable ISPs offer promising advancements, potentially replacing traditional ISPs with their ability to adapt without requiring extensive tuning for each new camera model, as is often the case for nearly every module in traditional ISPs. However, the key challenge with the recent learning-based ISPs is the urge to collect large paired datasets for each distinct camera model due to the influence of intrinsic camera characteristics on the formation of input raw images. This paper tackles this challenge by introducing a novel method for unpaired learning of raw-to-raw translation across diverse cameras. Specifically, we propose Rawformer, an unsupervised Transformer-based encoder-decoder method for raw-to-raw translation. It accurately maps raw images captured by a certain camera to the target camera, facilitating the generalization of learnable ISPs to new unseen cameras. Our method demonstrates superior performance on real camera datasets, achieving higher accuracy compared to previous state-of-the-art techniques, and preserving a more robust correlation between the original and translated raw images. The codes and the pretrained models are available at https://github.com/gosha20777/rawformer.
[ "['Georgy Perevozchikov' 'Nancy Mehta' 'Mahmoud Afifi' 'Radu Timofte']" ]
null
null
2404.10715
null
null
http://arxiv.org/pdf/2404.10715v3
2024-05-23T18:17:43Z
2024-04-16T16:45:47Z
Dynamic Frequency-Based Fingerprinting Attacks against Modern Sandbox Environments
The cloud computing landscape has evolved significantly in recent years, embracing various sandboxes to meet the diverse demands of modern cloud applications. These sandboxes encompass container-based technologies like Docker and gVisor, microVM-based solutions like Firecracker, and security-centric sandboxes relying on Trusted Execution Environments (TEEs) such as Intel SGX and AMD SEV. However, the practice of placing multiple tenants on shared physical hardware raises security and privacy concerns, most notably side-channel attacks. In this paper, we investigate the possibility of fingerprinting containers through CPU frequency reporting sensors in Intel and AMD CPUs. One key enabler of our attack is that the current CPU frequency information can be accessed by user-space attackers. We demonstrate that Docker images exhibit a unique frequency signature, enabling the distinction of different containers with up to 84.5% accuracy even when multiple containers are running simultaneously in different cores. Additionally, we assess the effectiveness of our attack when performed against several sandboxes deployed in cloud environments, including Google's gVisor, AWS' Firecracker, and TEE-based platforms like Gramine (utilizing Intel SGX) and AMD SEV. Our empirical results show that these attacks can also be carried out successfully against all of these sandboxes in less than 40 seconds, with an accuracy of over 70% in all cases. Finally, we propose a noise injection-based countermeasure to mitigate the proposed attack on cloud environments.
[ "['Debopriya Roy Dipta' 'Thore Tiemann' 'Berk Gulmezoglu' 'Eduard Marin'\n 'Thomas Eisenbarth']" ]
null
null
2404.10726
null
null
http://arxiv.org/pdf/2404.10726v1
2024-04-16T16:59:50Z
2024-04-16T16:59:50Z
Automatic re-calibration of quantum devices by reinforcement learning
During their operation, due to shifts in environmental conditions, devices undergo various forms of detuning from their optimal settings. Typically, this is addressed through control loops, which monitor variables and the device performance, to maintain settings at their optimal values. Quantum devices are particularly challenging since their functionality relies on precisely tuning their parameters. At the same time, the detailed modeling of the environmental behavior is often computationally unaffordable, while a direct measure of the parameters defining the system state is costly and introduces extra noise in the mechanism. In this study, we investigate the application of reinforcement learning techniques to develop a model-free control loop for continuous recalibration of quantum device parameters. Furthermore, we explore the advantages of incorporating minimal environmental noise models. As an example, the application to numerical simulations of a Kennedy receiver-based long-distance quantum communication protocol is presented.
[ "['T. Crosta' 'L. Rebón' 'F. Vilariño' 'J. M. Matera' 'M. Bilkis']" ]
null
null
2404.10727
null
null
http://arxiv.org/pdf/2404.10727v2
2024-05-02T08:39:57Z
2024-04-16T17:01:27Z
How Deep Networks Learn Sparse and Hierarchical Data: the Sparse Random Hierarchy Model
Understanding what makes high-dimensional data learnable is a fundamental question in machine learning. On the one hand, it is believed that the success of deep learning lies in its ability to build a hierarchy of representations that become increasingly more abstract with depth, going from simple features like edges to more complex concepts. On the other hand, learning to be insensitive to invariances of the task, such as smooth transformations for image datasets, has been argued to be important for deep networks and it strongly correlates with their performance. In this work, we aim to explain this correlation and unify these two viewpoints. We show that by introducing sparsity to generative hierarchical models of data, the task acquires insensitivity to spatial transformations that are discrete versions of smooth transformations. In particular, we introduce the Sparse Random Hierarchy Model (SRHM), where we observe and rationalize that a hierarchical representation mirroring the hierarchical model is learnt precisely when such insensitivity is learnt, thereby explaining the strong correlation between the latter and performance. Moreover, we quantify how the sample complexity of CNNs learning the SRHM depends on both the sparsity and hierarchical structure of the task.
[ "['Umberto Tomasini' 'Matthieu Wyart']" ]
null
null
2404.10728
null
null
http://arxiv.org/pdf/2404.10728v1
2024-04-16T17:01:38Z
2024-04-16T17:01:38Z
Randomized Exploration in Cooperative Multi-Agent Reinforcement Learning
We present the first study on provably efficient randomized exploration in cooperative multi-agent reinforcement learning (MARL). We propose a unified algorithm framework for randomized exploration in parallel Markov Decision Processes (MDPs), and two Thompson Sampling (TS)-type algorithms, CoopTS-PHE and CoopTS-LMC, incorporating the perturbed-history exploration (PHE) strategy and the Langevin Monte Carlo exploration (LMC) strategy respectively, which are flexible in design and easy to implement in practice. For a special class of parallel MDPs where the transition is (approximately) linear, we theoretically prove that both CoopTS-PHE and CoopTS-LMC achieve a $\widetilde{\mathcal{O}}(d^{3/2}H^2\sqrt{MK})$ regret bound with communication complexity $\widetilde{\mathcal{O}}(dHM^2)$, where $d$ is the feature dimension, $H$ is the horizon length, $M$ is the number of agents, and $K$ is the number of episodes. This is the first theoretical result for randomized exploration in cooperative MARL. We evaluate our proposed method on multiple parallel RL environments, including a deep exploration problem (\textit{i.e.,} $N$-chain), a video game, and a real-world problem in energy systems. Our experimental results support that our framework can achieve better performance, even under conditions of misspecified transition models. Additionally, we establish a connection between our unified framework and the practical application of federated learning.
[ "['Hao-Lun Hsu' 'Weixin Wang' 'Miroslav Pajic' 'Pan Xu']" ]
null
null
2404.10730
null
null
http://arxiv.org/pdf/2404.10730v1
2024-04-16T17:02:52Z
2024-04-16T17:02:52Z
Insight Gained from Migrating a Machine Learning Model to Intelligence Processing Units
The discoveries in this paper show that Intelligence Processing Units (IPUs) offer a viable accelerator alternative to GPUs for machine learning (ML) applications within the fields of materials science and battery research. We investigate the process of migrating a model from GPU to IPU and explore several optimization techniques, including pipelining and gradient accumulation, aimed at enhancing the performance of IPU-based models. Furthermore, we have effectively migrated a specialized model to the IPU platform. This model is employed for predicting effective conductivity, a parameter crucial in ion transport processes, which govern the performance of multiple charge and discharge cycles of batteries. The model utilizes a Convolutional Neural Network (CNN) architecture to perform prediction tasks for effective conductivity. The performance of this model on the IPU is found to be comparable to its execution on GPUs. We also analyze the utilization and performance of Graphcore's Bow IPU. Through benchmark tests, we observe significantly improved performance with the Bow IPU when compared to its predecessor, the Colossus IPU.
[ "['Hieu Le' 'Zhenhua He' 'Mai Le' 'Dhruva K. Chakravorty' 'Lisa M. Perez'\n 'Akhil Chilumuru' 'Yan Yao' 'Jiefu Chen']" ]
null
null
2404.10745
null
null
http://arxiv.org/pdf/2404.10745v1
2024-04-16T17:23:19Z
2024-04-16T17:23:19Z
Settling Constant Regrets in Linear Markov Decision Processes
We study the constant regret guarantees in reinforcement learning (RL). Our objective is to design an algorithm that incurs only finite regret over infinite episodes with high probability. We introduce an algorithm, Cert-LSVI-UCB, for misspecified linear Markov decision processes (MDPs) where both the transition kernel and the reward function can be approximated by some linear function up to misspecification level $\zeta$. At the core of Cert-LSVI-UCB is an innovative certified estimator, which facilitates a fine-grained concentration analysis for multi-phase value-targeted regression, enabling us to establish an instance-dependent regret bound that is constant w.r.t. the number of episodes. Specifically, we demonstrate that for an MDP characterized by a minimal suboptimality gap $\Delta$, Cert-LSVI-UCB has a cumulative regret of $\tilde{\mathcal{O}}(d^3H^5/\Delta)$ with high probability, provided that the misspecification level $\zeta$ is below $\tilde{\mathcal{O}}(\Delta / (\sqrt{d}H^2))$. Remarkably, this regret bound remains constant relative to the number of episodes $K$. To the best of our knowledge, Cert-LSVI-UCB is the first algorithm to achieve a constant, instance-dependent, high-probability regret bound in RL with linear function approximation for infinite runs without relying on prior distribution assumptions. This not only highlights the robustness of Cert-LSVI-UCB to model misspecification but also introduces novel algorithmic designs and analytical techniques of independent interest.
[ "['Weitong Zhang' 'Zhiyuan Fan' 'Jiafan He' 'Quanquan Gu']" ]
null
null
2404.10746
null
null
http://arxiv.org/pdf/2404.10746v2
2024-04-29T17:21:51Z
2024-04-16T17:24:22Z
Interpolation and differentiation of alchemical degrees of freedom in machine learning interatomic potentials
Machine learning interatomic potentials (MLIPs) have become a workhorse of modern atomistic simulations, and recently published universal MLIPs, pre-trained on large datasets, have demonstrated remarkable accuracy and generalizability. However, the computational cost of MLIPs limits their applicability to chemically disordered systems requiring large simulation cells or to sample-intensive statistical methods. Here, we report the use of continuous and differentiable alchemical degrees of freedom in atomistic materials simulations, exploiting the fact that graph neural network MLIPs represent discrete elements as real-valued tensors. The proposed method introduces alchemical atoms with corresponding weights into the input graph, alongside modifications to the message-passing and readout mechanisms of MLIPs, and allows smooth interpolation between the compositional states of materials. The end-to-end differentiability of MLIPs enables efficient calculation of the gradient of energy with respect to the compositional weights. Leveraging these gradients, we propose methodologies for optimizing the composition of solid solutions towards target macroscopic properties and conducting alchemical free energy simulations to quantify the free energy of vacancy formation and composition changes. The approach offers an avenue for extending the capabilities of universal MLIPs in the modeling of compositional disorder and characterizing the phase stabilities of complex materials systems.
[ "['Juno Nam' 'Rafael Gómez-Bombarelli']" ]
null
null
2404.10757
null
null
http://arxiv.org/pdf/2404.10757v1
2024-04-16T17:35:25Z
2024-04-16T17:35:25Z
Deep Learning and LLM-based Methods Applied to Stellar Lightcurve Classification
Light curves serve as a valuable source of information on stellar formation and evolution. With the rapid advancement of machine learning techniques, they can be effectively processed to extract astronomical patterns and information. In this study, we present a comprehensive evaluation of deep-learning and large language model (LLM) based models for the automatic classification of variable star light curves, based on large datasets from the Kepler and K2 missions. Special emphasis is placed on Cepheids, RR Lyrae, and eclipsing binaries, examining the influence of observational cadence and phase distribution on classification precision. Employing AutoDL optimization, we achieve striking performance with the 1D-Convolution+BiLSTM architecture and the Swin Transformer, reaching accuracies of 94% and 99%, respectively, with the latter demonstrating a notable 83% accuracy in discerning the elusive Type II Cepheids, which comprise merely 0.02% of the total dataset. We unveil StarWhisper LightCurve (LC), an innovative series comprising three LLM-based models: an LLM, a multimodal large language model (MLLM), and a large audio language model (LALM). Each model is fine-tuned with strategic prompt engineering and customized training methods to explore the emergent abilities of these models for astronomical data. Remarkably, the StarWhisper LC series exhibits high accuracies around 90%, significantly reducing the need for explicit feature engineering, thereby paving the way for streamlined parallel data processing and the progression of multifaceted multimodal models in astronomical applications. The study furnishes two detailed catalogs illustrating the impacts of phase and sampling intervals on deep learning classification accuracy, showing that a substantial decrease of up to 14% in observation duration and 21% in sampling points can be realized without compromising accuracy by more than 10%.
[ "['Yu-Yang Li' 'Yu Bai' 'Cunshi Wang' 'Mengwei Qu' 'Ziteng Lu'\n 'Roberto Soria' 'Jifeng Liu']" ]
null
null
2404.10759
null
null
http://arxiv.org/pdf/2404.10759v2
2024-04-26T17:41:37Z
2024-04-16T17:36:21Z
Laplace-HDC: Understanding the geometry of binary hyperdimensional computing
This paper studies the geometry of binary hyperdimensional computing (HDC), a computational scheme in which data are encoded using high-dimensional binary vectors. We establish a result about the similarity structure induced by the HDC binding operator and show that the Laplace kernel naturally arises in this setting, motivating our new encoding method Laplace-HDC, which improves upon previous methods. We describe how our results indicate limitations of binary HDC in encoding spatial information from images and discuss potential solutions, including using Haar convolutional features and the definition of a translation-equivariant HDC encoding. Several numerical experiments highlighting the improved accuracy of Laplace-HDC in contrast to alternative methods are presented. We also numerically study other aspects of the proposed framework such as robustness and the underlying translation-equivariant encoding.
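For readers unfamiliar with binary HDC, here is a minimal sketch (our illustration, not the paper's code) of the binding operator whose similarity structure the paper analyzes, in bipolar {-1, +1} form where binding is the elementwise product, i.e. XOR:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000                                   # hyperdimension

def hv():                                    # random {-1, +1} hypervector
    return rng.choice([-1, 1], size=D)

def bind(a, b):                              # elementwise product = XOR in bipolar form
    return a * b

def sim(a, b):                               # cosine similarity
    return a @ b / D

key, val1, val2 = hv(), hv(), hv()
bound = bind(key, val1)
print(sim(bind(bound, key), val1))           # 1.0: binding with key again unbinds
print(sim(bound, val2))                      # ~0.0: quasi-orthogonal to other vectors
```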
[ "['Saeid Pourmand' 'Wyatt D. Whiting' 'Alireza Aghasi'\n 'Nicholas F. Marshall']" ]
null
null
2404.10761
null
null
http://arxiv.org/pdf/2404.10761v2
2024-04-17T14:38:04Z
2024-04-16T17:41:17Z
TorchSurv: A Lightweight Package for Deep Survival Analysis
TorchSurv is a Python package that serves as a companion tool to perform deep survival modeling within the PyTorch environment. Unlike existing libraries that impose specific parametric forms, TorchSurv enables the use of custom PyTorch-based deep survival models. With its lightweight design, minimal input requirements, full PyTorch backend, and freedom from restrictive survival model parameterizations, TorchSurv facilitates efficient deep survival model implementation and is particularly beneficial for high-dimensional and complex input data scenarios.
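To illustrate the kind of custom model TorchSurv is meant to accommodate, here is a self-contained Cox negative partial log-likelihood in plain PyTorch. This is our own minimal sketch (no handling of tied event times), not the package's API.

```python
import torch

def neg_partial_log_likelihood(log_hazard, time, event):
    """Cox negative partial log-likelihood for a batch.

    log_hazard: (n,) model outputs; time: (n,) follow-up times;
    event: (n,) 1.0 if the event occurred, 0.0 if censored.
    """
    order = torch.argsort(time, descending=True)      # risk set = sorted prefix
    lh, ev = log_hazard[order], event[order]
    log_risk = torch.logcumsumexp(lh, dim=0)          # log-sum-exp over risk set
    return -((lh - log_risk) * ev).sum() / ev.sum()

# Any PyTorch network can play the role of the (deep) hazard model.
net = torch.nn.Sequential(torch.nn.Linear(10, 32), torch.nn.ReLU(),
                          torch.nn.Linear(32, 1))
x, t = torch.randn(64, 10), torch.rand(64)
e = (torch.rand(64) < 0.7).float()
loss = neg_partial_log_likelihood(net(x).squeeze(-1), t, e)
loss.backward()
```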
[ "['Mélodie Monod' 'Peter Krusche' 'Qian Cao' 'Berkman Sahiner'\n 'Nicholas Petrick' 'David Ohlssen' 'Thibaud Coroller']" ]
null
null
2404.10764
null
null
http://arxiv.org/pdf/2404.10764v1
2024-04-16T17:47:27Z
2024-04-16T17:47:27Z
Confidential Federated Computations
Federated Learning and Analytics (FLA) have seen widespread adoption by technology platforms for processing sensitive on-device data. However, basic FLA systems have privacy limitations: they do not necessarily require anonymization mechanisms like differential privacy (DP), and provide limited protections against a potentially malicious service provider. Adding DP to a basic FLA system currently requires either adding excessive noise to each device's updates, or assuming an honest service provider that correctly implements the mechanism and only uses the privatized outputs. Secure multiparty computation (SMPC) -based oblivious aggregations can limit the service provider's access to individual user updates and improve DP tradeoffs, but the tradeoffs are still suboptimal, and they suffer from scalability challenges and susceptibility to Sybil attacks. This paper introduces a novel system architecture that leverages trusted execution environments (TEEs) and open-sourcing to both ensure confidentiality of server-side computations and provide externally verifiable privacy properties, bolstering the robustness and trustworthiness of private federated computations.
[ "['Hubert Eichner' 'Daniel Ramage' 'Kallista Bonawitz' 'Dzmitry Huba'\n 'Tiziano Santoro' 'Brett McLarnon' 'Timon Van Overveldt' 'Nova Fallen'\n 'Peter Kairouz' 'Albert Cheu' 'Katharine Daly' 'Adria Gascon'\n 'Marco Gruteser' 'Brendan McMahan']" ]
null
null
2404.10769
null
null
http://arxiv.org/pdf/2404.10769v1
2024-04-16T17:53:59Z
2024-04-16T17:53:59Z
Finite-dimensional approximations of push-forwards on locally analytic functionals and truncation of least-squares polynomials
This paper introduces a theoretical framework for investigating analytic maps from finite discrete data, elucidating mathematical machinery underlying the polynomial approximation with least-squares in multivariate situations. Our approach is to consider the push-forward on the space of locally analytic functionals, instead of directly handling the analytic map itself. We establish a methodology enabling appropriate finite-dimensional approximation of the push-forward from finite discrete data, through the theory of the Fourier--Borel transform and the Fock space. Moreover, we prove a rigorous convergence result with a convergence rate. As an application, we prove that it is not the least-squares polynomial, but the polynomial obtained by truncating its higher-degree terms, that approximates analytic functions and further allows for approximation beyond the support of the data distribution. One advantage of our theory is that it enables us to apply linear algebraic operations to the finite-dimensional approximation of the push-forward. Utilizing this, we prove the convergence of a method for approximating an analytic vector field from finite data of the flow map of an ordinary differential equation.
[ "['Isao Ishikawa']" ]
null
null
2404.10771
null
null
http://arxiv.org/pdf/2404.10771v2
2024-06-04T03:11:56Z
2024-04-16T17:55:31Z
TENG: Time-Evolving Natural Gradient for Solving PDEs With Deep Neural Nets Toward Machine Precision
Partial differential equations (PDEs) are instrumental for modeling dynamical systems in science and engineering. The advent of neural networks has initiated a significant shift in tackling these complexities though challenges in accuracy persist, especially for initial value problems. In this paper, we introduce the $textit{Time-Evolving Natural Gradient (TENG)}$, generalizing time-dependent variational principles and optimization-based time integration, leveraging natural gradient optimization to obtain high accuracy in neural-network-based PDE solutions. Our comprehensive development includes algorithms like TENG-Euler and its high-order variants, such as TENG-Heun, tailored for enhanced precision and efficiency. TENG's effectiveness is further validated through its performance, surpassing current leading methods and achieving $textit{machine precision}$ in step-by-step optimizations across a spectrum of PDEs, including the heat equation, Allen-Cahn equation, and Burgers' equation.
[ "['Zhuo Chen' 'Jacob McCarran' 'Esteban Vizcaino' 'Marin Soljačić' 'Di Luo']" ]
null
null
2404.10776
null
null
http://arxiv.org/pdf/2404.10776v1
2024-04-16T17:59:55Z
2024-04-16T17:59:55Z
Nearly Optimal Algorithms for Contextual Dueling Bandits from Adversarial Feedback
Learning from human feedback plays an important role in aligning generative models, such as large language models (LLM). However, the effectiveness of this approach can be influenced by adversaries, who may intentionally provide misleading preferences to manipulate the output in an undesirable or harmful direction. To tackle this challenge, we study a specific model within this problem domain--contextual dueling bandits with adversarial feedback, where the true preference label can be flipped by an adversary. We propose an algorithm namely robust contextual dueling bandit (algo), which is based on uncertainty-weighted maximum likelihood estimation. Our algorithm achieves an $tilde O(dsqrt{T}+dC)$ regret bound, where $T$ is the number of rounds, $d$ is the dimension of the context, and $ 0 le C le T$ is the total number of adversarial feedback. We also prove a lower bound to show that our regret bound is nearly optimal, both in scenarios with and without ($C=0$) adversarial feedback. Additionally, we conduct experiments to evaluate our proposed algorithm against various types of adversarial feedback. Experimental results demonstrate its superiority over the state-of-the-art dueling bandit algorithms in the presence of adversarial feedback.
[ "['Qiwei Di' 'Jiafan He' 'Quanquan Gu']" ]
null
null
2404.10779
null
null
http://arxiv.org/pdf/2404.10779v1
2024-03-23T13:25:01Z
2024-03-23T13:25:01Z
Fine Tuning LLM for Enterprise: Practical Guidelines and Recommendations
There is a compelling necessity from enterprises for fine-tuning LLMs (Large Language Models) to get them trained on proprietary domain knowledge. The challenge is to imbibe the LLMs with domain-specific knowledge using the most optimal resource and cost and in the best possible time. Many enterprises rely on RAG (Retrieval Augmented Generation), which does not need LLMs to be fine-tuned but is limited by the quality of vector databases and their retrieval capabilities rather than the intrinsic capabilities of the LLMs themselves. In our current work we focus on fine-tuning LLaMA, an open source LLM, using proprietary documents and code from an enterprise repository and use the fine-tuned models to evaluate the quality of responses. As part of this work, we aim to guide beginners on how to start with fine-tuning an LLM for documentation and code by making educated guesses on the size of GPU required and the options that are available for formatting the data. We also propose pre-processing recipes for both documentation and code to prepare datasets in different formats. The proposed methods of data preparation for document datasets are forming paragraph chunks, forming question and answer pairs, and forming keyword and paragraph chunk pairs. For the code dataset we propose forming summary and function pairs. Further, we qualitatively evaluate the results of the models for domain-specific queries. Finally, we also propose practical guidelines and recommendations for fine-tuning LLMs.
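A hedged sketch of two of the document-side preparation recipes mentioned above (paragraph chunks, and keyword plus paragraph-chunk pairs); the word budget and the naive keyword heuristic are our illustrative choices, not the paper's settings.

```python
def paragraph_chunks(text, max_words=200):
    """Greedily merge paragraphs into chunks of at most max_words words."""
    chunks, current = [], []
    for para in filter(None, (p.strip() for p in text.split("\n\n"))):
        if current and sum(len(p.split()) for p in current) + len(para.split()) > max_words:
            chunks.append("\n\n".join(current))
            current = []
        current.append(para)
    if current:
        chunks.append("\n\n".join(current))
    return chunks

def keyword_chunk_pairs(chunks, top_k=5):
    """Pair each chunk with crude keywords (longest distinct words)."""
    pairs = []
    for c in chunks:
        words = {w.strip(".,").lower() for w in c.split() if len(w) > 6}
        pairs.append({"keywords": sorted(words, key=len, reverse=True)[:top_k],
                      "text": c})
    return pairs
```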
[ "['Mathav Raj J' 'Kushala VM' 'Harikrishna Warrier' 'Yogesh Gupta']" ]
null
null
2404.10784
null
null
http://arxiv.org/pdf/2404.10784v1
2024-04-09T09:03:53Z
2024-04-09T09:03:53Z
Graph Vertex Embeddings: Distance, Regularization and Community Detection
Graph embeddings have emerged as a powerful tool for representing complex network structures in a low-dimensional space, enabling the use of efficient methods that employ the metric structure in the embedding space as a proxy for the topological structure of the data. In this paper, we explore several aspects that affect the quality of a vertex embedding of graph-structured data. To this effect, we first present a family of flexible distance functions that faithfully capture the topological distance between different vertices. Secondly, we analyze vertex embeddings as resulting from a fitted transformation of the distance matrix rather than as a direct result of optimization. Finally, we evaluate the effectiveness of our proposed embedding constructions by performing community detection on a host of benchmark datasets. The reported results are competitive with classical algorithms that operate on the entire graph while benefitting from a substantially reduced computational complexity due to the reduced dimensionality of the representations.
[ "['Radosław Nowak' 'Adam Małkowski' 'Daniel Cieślak' 'Piotr Sokół'\n 'Paweł Wawrzyński']" ]
null
null
2404.10786
null
null
http://arxiv.org/abs/2404.10786v1
2024-04-16T18:22:30Z
2024-04-16T18:22:30Z
Sustainability of Data Center Digital Twins with Reinforcement Learning
The rapid growth of machine learning (ML) has led to an increased demand for computational power, resulting in larger data centers (DCs) and higher energy consumption. To address this issue and reduce carbon emissions, intelligent design and control of DC components such as IT servers, cabinets, HVAC cooling, flexible load shifting, and battery energy storage are essential. However, the complexity of designing and controlling them in tandem presents a significant challenge. While some individual components like CFD-based design and Reinforcement Learning (RL) based HVAC control have been researched, there's a gap in the holistic design and optimization covering all elements simultaneously. To tackle this, we've developed DCRL-Green, a multi-agent RL environment that empowers the ML community to design data centers and research, develop, and refine RL controllers for carbon footprint reduction in DCs. It is a flexible, modular, scalable, and configurable platform that can handle large High Performance Computing (HPC) clusters. Furthermore, in its default setup, DCRL-Green provides a benchmark for evaluating single as well as multi-agent RL algorithms. It easily allows users to subclass the default implementations and design their own control approaches, encouraging community development for sustainable data centers. Open Source Link: https://github.com/HewlettPackard/dc-rl
[ "['Soumyendu Sarkar' 'Avisek Naug' 'Antonio Guillen' 'Ricardo Luna'\n 'Vineet Gundecha' 'Ashwin Ramesh Babu' 'Sajad Mousavi']" ]
null
null
2404.10789
null
null
http://arxiv.org/pdf/2404.10789v1
2024-04-12T21:22:21Z
2024-04-12T21:22:21Z
PASA: Attack Agnostic Unsupervised Adversarial Detection using Prediction & Attribution Sensitivity Analysis
Deep neural networks for classification are vulnerable to adversarial attacks, where small perturbations to input samples lead to incorrect predictions. This susceptibility, combined with the black-box nature of such networks, limits their adoption in critical applications like autonomous driving. Feature-attribution-based explanation methods provide relevance of input features for model predictions on input samples, thus explaining model decisions. However, we observe that both model predictions and feature attributions for input samples are sensitive to noise. We exploit this characteristic of model prediction and feature attribution to develop a practical method for detecting adversarial samples. Our method, PASA, requires the computation of two test statistics using model prediction and feature attribution and can reliably detect adversarial samples using thresholds learned from benign samples. We validate our lightweight approach by evaluating the performance of PASA on varying strengths of FGSM, PGD, BIM, and CW attacks on multiple image and non-image datasets. On average, we outperform state-of-the-art statistical unsupervised adversarial detectors on CIFAR-10 and ImageNet by 14% and 35% ROC-AUC scores, respectively. Moreover, our approach demonstrates competitive performance even when an adversary is aware of the defense mechanism.
[ "['Dipkamal Bhusal' 'Md Tanvirul Alam' 'Monish K. Veerabhadran'\n 'Michael Clifford' 'Sara Rampazzi' 'Nidhi Rastogi']" ]
null
null
2404.10790
null
null
http://arxiv.org/pdf/2404.10790v1
2024-04-13T01:31:25Z
2024-04-13T01:31:25Z
Multimodal Attack Detection for Action Recognition Models
Adversarial machine learning attacks on video action recognition models are a growing research area, and many effective attacks have been introduced in recent years. These attacks show that action recognition models can be breached in many ways. Hence, using these models in practice raises significant security concerns. However, there are very few works which focus on defending against or detecting attacks. In this work, we propose a novel universal detection method which is compatible with any action recognition model. In our extensive experiments, we show that our method consistently detects various attacks against different target models with high true positive rates while satisfying very low false positive rates. Tested against four state-of-the-art attacks targeting four action recognition models, the proposed detector achieves an average AUC of 0.911 over 16 test cases while the best performance achieved by the existing detectors is 0.645 average AUC. This 41.2% improvement is enabled by the robustness of the proposed detector to varying attack methods and target models. The lowest AUC achieved by our detector across the 16 test cases is 0.837 while the competing detector's performance drops as low as 0.211. We also show that the proposed detector is robust to varying attack strengths. In addition, we analyze our method's real-time performance with different hardware setups to demonstrate its potential as a practical defense mechanism.
[ "['Furkan Mumcu' 'Yasin Yilmaz']" ]
null
null
2404.10792
null
null
http://arxiv.org/abs/2404.10792v1
2024-04-13T17:24:18Z
2024-04-13T17:24:18Z
Reconfigurable Edge Hardware for Intelligent IDS: Systematic Approach
Intrusion detection systems (IDS) are crucial security measures nowadays to enforce network security. Their task is to detect anomalies in network communication and identify, if not thwart, possibly malicious behavior. Recently, machine learning has been deployed to construct intelligent IDS. This approach, however, is quite challenging particularly in distributed, highly dynamic, yet resource-constrained systems like Edge setups. In this paper, we tackle this issue from multiple angles by analyzing the concept of intelligent IDS (I-IDS) while addressing the specific requirements of Edge devices with a special focus on reconfigurability. Then, we introduce a systematic approach to constructing the I-IDS on reconfigurable Edge hardware. For this, we implemented our proposed IDS on state-of-the-art Field Programmable Gate Arrays (FPGAs) technology as (1) a purely FPGA-based dataflow processor (DFP) and (2) a co-designed approach featuring RISC-V soft-core as FPGA-based soft-core processor (SCP). We complete our paper with a comparison of the state of the art (SoA) in this domain. The results show that DFP and SCP are both suitable for Edge applications from hardware resource and energy efficiency perspectives. Our proposed DFP solution clearly outperforms the SoA and demonstrates that required high performance can be achieved without prohibitively high hardware costs. This makes our proposed DFP suitable for Edge-based high-speed applications like modern communication technology.
[ "['Wadid Foudhaili' 'Anouar Nechi' 'Celine Thermann' 'Mohammad Al Johmani'\n 'Rainer Buchty' 'Mladen Berekovic' 'Saleh Mulhem']" ]
null
null
2404.10796
null
null
http://arxiv.org/abs/2404.10796v1
2024-04-15T06:56:28Z
2024-04-15T06:56:28Z
Black-box Adversarial Transferability: An Empirical Study in Cybersecurity Perspective
The rapid advancement of artificial intelligence within the realm of cybersecurity raises significant security concerns. The vulnerability of deep learning models to adversarial attacks is one of the major issues. In adversarial machine learning, malicious users try to fool the deep learning model by inserting adversarial perturbation inputs into the model during its training or testing phase. Subsequently, it reduces the model confidence score and results in incorrect classifications. The key novel contribution of this research is to empirically test the black-box adversarial transferability phenomenon in cyber attack detection systems. It indicates that the adversarial perturbation input generated through the surrogate model has a similar impact on the target model in producing the incorrect classification. To empirically validate this phenomenon, surrogate and target models are used. The adversarial perturbation inputs are generated based on the surrogate model, for which the attacker has complete information. Based on these adversarial perturbation inputs, both surrogate and target models are evaluated during the inference phase. We have done extensive experimentation over the CICDDoS-2019 dataset, and the results are reported in terms of various performance metrics like accuracy, precision, recall, and F1-score. The findings indicate that any deep learning model is highly susceptible to adversarial attacks, even if the attacker does not have access to the internal details of the target model. The results also indicate that white-box adversarial attacks have a severe impact compared to black-box adversarial attacks. There is a need to investigate and explore adversarial defence techniques to increase the robustness of deep learning models against adversarial attacks.
[ "['Khushnaseeb Roshan' 'Aasim Zafar']" ]
null
null
2404.10800
null
null
http://arxiv.org/pdf/2404.10800v3
2024-04-24T12:43:30Z
2024-04-16T00:02:12Z
Integrating Graph Neural Networks with Scattering Transform for Anomaly Detection
In this paper, we present two novel methods for Network Intrusion Detection Systems (NIDS) using Graph Neural Networks (GNNs). The first approach, Scattering Transform with E-GraphSAGE (STEG), utilizes the scattering transform to conduct multi-resolution analysis of edge feature vectors. This provides a detailed representation that is essential for identifying subtle anomalies in network traffic. The second approach improves node representation by initializing it with Node2Vec, diverging from the standard practice of using uniform values, thereby capturing a more accurate and holistic picture of the network. Our methods have shown significant improvements in performance compared to existing state-of-the-art methods on benchmark NIDS datasets.
[ "['Abdeljalil Zoubir' 'Badr Missaoui']" ]
null
null
2404.10824
null
null
http://arxiv.org/pdf/2404.10824v2
2024-04-22T20:31:04Z
2024-04-16T18:02:15Z
Decoupled Weight Decay for Any $p$ Norm
With the success of deep neural networks (NNs) in a variety of domains, the computational and storage requirements for training and deploying large NNs have become a bottleneck for further improvements. Sparsification has consequently emerged as a leading approach to tackle these issues. In this work, we consider a simple yet effective approach to sparsification, based on the Bridge, or $L_p$ regularization during training. We introduce a novel weight decay scheme, which generalizes the standard $L_2$ weight decay to any $p$ norm. We show that this scheme is compatible with adaptive optimizers, and avoids the gradient divergence associated with $0<p<1$ norms. We empirically demonstrate that it leads to highly sparse networks, while maintaining generalization performance comparable to standard $L_2$ regularization.
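A minimal sketch of what a decoupled $L_p$ decay step could look like, applied separately from the gradient step in the AdamW spirit; the clamp at zero is our stabilizer for $0<p<1$, not necessarily the paper's divergence-avoiding scheme.

```python
import torch

@torch.no_grad()
def decoupled_lp_decay(params, lr=1e-3, weight_decay=1e-2, p=1.0, eps=1e-8):
    for w in params:
        # d/dw |w|^p = p * |w|^(p-1) * sign(w); the clamp keeps p < 1 finite at w = 0
        decay = p * w.abs().clamp_min(eps).pow(p - 1) * w.sign()
        w.add_(decay, alpha=-lr * weight_decay)

model = torch.nn.Linear(8, 8)
# ...after the usual optimizer.step() on the task loss:
decoupled_lp_decay(model.parameters(), p=0.5)   # sparsifying Bridge-style penalty
```

Note that $p=2$ recovers standard decoupled $L_2$ decay (up to a constant factor), while $p<1$ drives small weights toward exact zero, which is where the sparsification comes from.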
[ "['Nadav Joseph Outmezguine' 'Noam Levi']" ]
null
null
2404.10830
null
null
http://arxiv.org/pdf/2404.10830v2
2024-05-02T17:10:44Z
2024-04-16T18:08:29Z
Fewer Truncations Improve Language Modeling
In large language model training, input documents are typically concatenated together and then split into sequences of equal length to avoid padding tokens. Despite its efficiency, the concatenation approach compromises data integrity -- it inevitably breaks many documents into incomplete pieces, leading to excessive truncations that hinder the model from learning to compose logically coherent and factually consistent content that is grounded on the complete context. To address the issue, we propose Best-fit Packing, a scalable and efficient method that packs documents into training sequences through length-aware combinatorial optimization. Our method completely eliminates unnecessary truncations while retaining the same training efficiency as concatenation. Empirical results from both text and code pre-training show that our method achieves superior performance (e.g., relatively +4.7% on reading comprehension; +16.8% in context following; and +9.2% on program synthesis), and reduces closed-domain hallucination effectively by up to 58.3%.
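For intuition, here is a small sketch of best-fit packing over tokenized document lengths; this is our illustration of the combinatorial step only, and it assumes every document fits in one sequence of length L (per the method's premise, only documents longer than L would ever need splitting).

```python
def best_fit_pack(doc_lengths, L):
    """Assign each document to the fullest training sequence it still fits in."""
    bins = []                                   # each bin: [remaining capacity, [doc ids]]
    for i, n in sorted(enumerate(doc_lengths), key=lambda x: -x[1]):
        best = min((b for b in bins if b[0] >= n), key=lambda b: b[0], default=None)
        if best is None:
            best = [L, []]                      # open a new training sequence
            bins.append(best)
        best[0] -= n
        best[1].append(i)
    return bins

print(best_fit_pack([900, 700, 400, 300, 100], L=1024))
# -> three sequences holding [900, 100], [700, 300], and [400]: no truncation needed
```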
[ "['Hantian Ding' 'Zijian Wang' 'Giovanni Paolini' 'Varun Kumar'\n 'Anoop Deoras' 'Dan Roth' 'Stefano Soatto']" ]
null
null
2404.10842
null
null
http://arxiv.org/pdf/2404.10842v1
2024-04-16T18:40:28Z
2024-04-16T18:40:28Z
Unsupervised Speaker Diarization in Distributed IoT Networks Using Federated Learning
This paper presents a computationally efficient and distributed speaker diarization framework for networked IoT-style audio devices. The work proposes a Federated Learning model which can identify the participants in a conversation without the requirement of a large audio database for training. An unsupervised online update mechanism is proposed for the Federated Learning model which depends on cosine similarity of speaker embeddings. Moreover, the proposed diarization system solves the problem of speaker change detection via unsupervised segmentation techniques using Hotelling's t-squared statistic and the Bayesian Information Criterion. In this new approach, speaker change detection is biased around detected quasi-silences, which reduces the severity of the trade-off between the missed detection and false detection rates. Additionally, the computational overhead due to frame-by-frame identification of speakers is reduced via unsupervised clustering of speech segments. The results demonstrate the effectiveness of the proposed training method in the presence of non-IID speech data. It also shows a considerable improvement in the reduction of false and missed detection at the segmentation stage, while reducing the computational overhead. Improved accuracy and reduced computational cost makes the mechanism suitable for real-time speaker diarization across a distributed IoT audio network.
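As an illustration of the segmentation statistic mentioned above, here is a simplified two-sample Hotelling's t-squared between adjacent feature windows (our own sketch, with pooled covariance; the feature dimension mimics MFCCs):

```python
import numpy as np

def hotelling_t2(X, Y):
    """X, Y: (n, d) and (m, d) feature windows (e.g., MFCC frames)."""
    n, m = len(X), len(Y)
    diff = X.mean(axis=0) - Y.mean(axis=0)
    Sp = ((n - 1) * np.cov(X, rowvar=False) +
          (m - 1) * np.cov(Y, rowvar=False)) / (n + m - 2)   # pooled covariance
    return (n * m) / (n + m) * diff @ np.linalg.solve(Sp, diff)

rng = np.random.default_rng(1)
same = hotelling_t2(rng.normal(size=(50, 13)), rng.normal(size=(50, 13)))
shift = hotelling_t2(rng.normal(size=(50, 13)), rng.normal(1.0, size=(50, 13)))
print(same < shift)   # True: the statistic spikes where the speaker changes
```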
[ "['Amit Kumar Bhuyan' 'Hrishikesh Dutta' 'Subir Biswas']" ]
null
null
2404.10843
null
null
http://arxiv.org/pdf/2404.10843v1
2024-04-16T18:43:27Z
2024-04-16T18:43:27Z
Geometric Neural Operators (GNPs) for Data-Driven Deep Learning of Non-Euclidean Operators
We introduce Geometric Neural Operators (GNPs) for accounting for geometric contributions in data-driven deep learning of operators. We show how GNPs can be used (i) to estimate geometric properties, such as the metric and curvatures, (ii) to approximate Partial Differential Equations (PDEs) on manifolds, (iii) to learn solution maps for Laplace-Beltrami (LB) operators, and (iv) to solve Bayesian inverse problems for identifying manifold shapes. The methods allow for handling geometries of general shape, including point-cloud representations. The developed GNPs provide approaches for incorporating the roles of geometry in data-driven learning of operators.
[ "['Blaine Quackenbush' 'Paul J. Atzberger']" ]
null
null
2404.10845
null
null
http://arxiv.org/pdf/2404.10845v1
2024-04-16T18:47:07Z
2024-04-16T18:47:07Z
Top-k Multi-Armed Bandit Learning for Content Dissemination in Swarms of Micro-UAVs
In communication-deprived disaster scenarios, this paper introduces a Micro-Unmanned Aerial Vehicle (UAV)- enhanced content management system. In the absence of cellular infrastructure, this system deploys a hybrid network of stationary and mobile UAVs to offer vital content access to isolated communities. Static anchor UAVs equipped with both vertical and lateral links cater to local users, while agile micro-ferrying UAVs, equipped with lateral links and greater mobility, reach users in various communities. The primary goal is to devise an adaptive content dissemination system that dynamically learns caching policies to maximize content accessibility. The paper proposes a decentralized Top-k Multi-Armed Bandit (Top-k MAB) learning approach for UAV caching decisions, accommodating geotemporal disparities in content popularity and diverse content demands. The proposed mechanism involves a Selective Caching Algorithm that algorithmically reduces redundant copies of the contents by leveraging the shared information between the UAVs. It is demonstrated that Top-k MAB learning, along with selective caching algorithm, can improve system performance while making the learning process adaptive. The paper does functional verification and performance evaluation of the proposed caching framework under a wide range of network size, swarm of micro-ferrying UAVs, and heterogeneous popularity distributions.
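A minimal sketch of how a Top-k bandit caching decision could look, using UCB scores to pick the k contents each UAV caches per round; this is our reading of the Top-k MAB idea, not the paper's Selective Caching Algorithm.

```python
import numpy as np

class TopKUCB:
    def __init__(self, n_contents, k):
        self.k = k
        self.counts = np.zeros(n_contents)
        self.means = np.zeros(n_contents)
        self.t = 0

    def select(self):                        # choose k contents to cache this round
        self.t += 1
        ucb = self.means + np.sqrt(2 * np.log(self.t) / np.maximum(self.counts, 1))
        ucb[self.counts == 0] = np.inf       # force exploration of unseen contents
        return np.argsort(ucb)[-self.k:]

    def update(self, arms, rewards):         # observed demand for the cached items
        for a, r in zip(arms, rewards):
            self.counts[a] += 1
            self.means[a] += (r - self.means[a]) / self.counts[a]
```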
[ "['Amit Kumar Bhuyan' 'Hrishikesh Dutta' 'Subir Biswas']" ]
null
null
2404.10848
null
null
http://arxiv.org/pdf/2404.10848v1
2024-04-16T18:50:57Z
2024-04-16T18:50:57Z
A LayoutLMv3-Based Model for Enhanced Relation Extraction in Visually-Rich Documents
Document Understanding is an evolving field in Natural Language Processing (NLP). In particular, visual and spatial features are essential in addition to the raw text itself and hence, several multimodal models were developed in the field of Visual Document Understanding (VDU). However, while research is mainly focused on Key Information Extraction (KIE), Relation Extraction (RE) between identified entities is still under-studied. For instance, RE is crucial to regroup entities or obtain a comprehensive hierarchy of data in a document. In this paper, we present a model that, initialized from LayoutLMv3, can match or outperform the current state-of-the-art results in RE applied to Visually-Rich Documents (VRD) on FUNSD and CORD datasets, without any specific pre-training and with fewer parameters. We also report an extensive ablation study performed on FUNSD, highlighting the great impact of certain features and modelization choices on the performances.
[ "['Wiam Adnan' 'Joel Tang' 'Yassine Bel Khayat Zouggari'\n 'Seif Edinne Laatiri' 'Laurent Lam' 'Fabien Caspani']" ]
null
null
2404.10851
null
null
http://arxiv.org/pdf/2404.10851v2
2024-04-18T23:38:49Z
2024-04-16T18:54:57Z
Sample Complexity of the Linear Quadratic Regulator: A Reinforcement Learning Lens
We provide the first known algorithm that provably achieves $\varepsilon$-optimality within $\widetilde{\mathcal{O}}(1/\varepsilon)$ function evaluations for the discounted discrete-time LQR problem with unknown parameters, without relying on two-point gradient estimates. These estimates are known to be unrealistic in many settings, as they depend on using the exact same initialization, which is to be selected randomly, for two different policies. Our results substantially improve upon the existing literature outside the realm of two-point gradient estimates, which either leads to $\widetilde{\mathcal{O}}(1/\varepsilon^2)$ rates or heavily relies on stability assumptions.
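For contrast with the two-point estimates discussed above, here is a sketch of a single-point zeroth-order gradient estimate, which needs only one function evaluation per sample and no shared initialization (toy cost function; this is not the paper's algorithm):

```python
import numpy as np

def one_point_grad(f, theta, r, rng):
    u = rng.normal(size=theta.shape)
    u /= np.linalg.norm(u)                   # uniform direction on the unit sphere
    d = theta.size
    return (d / r) * f(theta + r * u) * u    # unbiased for the r-smoothed objective

f = lambda th: -np.sum(th ** 2)              # toy stand-in for the LQR return
theta = np.ones(4)
est = np.mean([one_point_grad(f, theta, r=0.5, rng=np.random.default_rng(s))
               for s in range(5000)], axis=0)
print(est)    # ~ grad f(theta) = -2*theta, recovered from single evaluations
```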
[ "['Amirreza Neshaei Moghaddam' 'Alex Olshevsky' 'Bahman Gharesifard']" ]
null
null
2404.10859
null
null
http://arxiv.org/pdf/2404.10859v1
2024-04-16T19:17:23Z
2024-04-16T19:17:23Z
Forcing Diffuse Distributions out of Language Models
Despite being trained specifically to follow user instructions, today's language models perform poorly when instructed to produce random outputs. For example, when prompted to pick a number uniformly between one and ten, Llama-2-13B-chat disproportionately favors the number five, and when tasked with picking a first name at random, Mistral-7B-Instruct chooses Avery 40 times more often than we would expect based on the U.S. population. When these language models are used for real-world tasks where diversity of outputs is crucial, such as language model assisted dataset construction, their inability to produce diffuse distributions over valid choices is a major hurdle. In this work, we propose a fine-tuning method that encourages language models to output distributions that are diffuse over valid outcomes. The methods we introduce generalize across a variety of tasks and distributions and make large language models practical for synthetic dataset generation with little human intervention.
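A small sketch of the kind of diffuseness measurement implied above: compare the empirical distribution of a model's "pick a number from one to ten" answers against the uniform distribution via KL divergence (our illustration; the sample counts are made up).

```python
from collections import Counter
import math

def kl_from_uniform(samples, support):
    """KL(empirical || uniform); 0 iff the samples are empirically uniform."""
    counts = Counter(samples)
    n, k = len(samples), len(support)
    kl = 0.0
    for v in support:
        p = counts.get(v, 0) / n
        if p > 0:
            kl += p * math.log(p / (1 / k))
    return kl

answers = ["5"] * 40 + ["7"] * 30 + [str(i) for i in range(1, 11)] * 3
print(kl_from_uniform(answers, [str(i) for i in range(1, 11)]))  # > 0: peaked
```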
[ "['Yiming Zhang' 'Avi Schwarzschild' 'Nicholas Carlini' 'Zico Kolter'\n 'Daphne Ippolito']" ]
null
null
2404.10876
null
null
http://arxiv.org/abs/2404.10876v2
2024-05-01T09:48:00Z
2024-04-16T19:52:57Z
Course Recommender Systems Need to Consider the Job Market
Current course recommender systems primarily leverage learner-course interactions, course content, learner preferences, and supplementary course details like instructor, institution, ratings, and reviews, to make their recommendations. However, these systems often overlook a critical aspect: the evolving skill demand of the job market. This paper focuses on the perspective of academic researchers, working in collaboration with the industry, aiming to develop a course recommender system that incorporates job market skill demands. In light of the job market's rapid changes and the current state of research in course recommender systems, we outline essential properties for course recommender systems to address these demands effectively, including being explainable, sequential, unsupervised, and aligned with both the job market and users' goals. Our discussion extends to the challenges and research questions this objective entails, including unsupervised skill extraction from job listings, course descriptions, and resumes, as well as predicting recommendations that align with learner objectives and the job market and designing metrics to evaluate this alignment. Furthermore, we introduce an initial system that addresses some existing limitations of course recommender systems using Large Language Models (LLMs) for skill extraction and Reinforcement Learning (RL) for alignment with the job market. We provide empirical results using open-source data to demonstrate its effectiveness.
[ "['Jibril Frej' 'Anna Dai' 'Syrielle Montariol' 'Antoine Bosselut'\n 'Tanja Käser']" ]
null
null
2404.10881
null
null
http://arxiv.org/pdf/2404.10881v1
2024-04-16T20:01:10Z
2024-04-16T20:01:10Z
Differentially Private Optimization with Sparse Gradients
Motivated by applications of large embedding models, we study differentially private (DP) optimization problems under sparsity of individual gradients. We start with new near-optimal bounds for the classic mean estimation problem but with sparse data, improving upon existing algorithms particularly for the high-dimensional regime. Building on this, we obtain pure- and approximate-DP algorithms with almost optimal rates for stochastic convex optimization with sparse gradients; the former represents the first nearly dimension-independent rates for this problem. Finally, we study the approximation of stationary points for the empirical loss in approximate-DP optimization and obtain rates that depend on sparsity instead of dimension, modulo polylogarithmic factors.
[ "['Badih Ghazi' 'Cristóbal Guzmán' 'Pritish Kamath' 'Ravi Kumar'\n 'Pasin Manurangsi']" ]
null
null
2404.10883
null
null
http://arxiv.org/pdf/2404.10883v1
2024-04-16T20:04:29Z
2024-04-16T20:04:29Z
Automated Discovery of Functional Actual Causes in Complex Environments
Reinforcement learning (RL) algorithms often struggle to learn policies that generalize to novel situations due to issues such as causal confusion, overfitting to irrelevant factors, and failure to isolate control of state factors. These issues stem from a common source: a failure to accurately identify and exploit state-specific causal relationships in the environment. While some prior works in RL aim to identify these relationships explicitly, they rely on informal domain-specific heuristics such as spatial and temporal proximity. Actual causality offers a principled and general framework for determining the causes of particular events. However, existing definitions of actual cause often attribute causality to a large number of events, even if many of them rarely influence the outcome. Prior work on actual causality proposes normality as a solution to this problem, but its existing implementations are challenging to scale to complex and continuous-valued RL environments. This paper introduces functional actual cause (FAC), a framework that uses context-specific independencies in the environment to restrict the set of actual causes. We additionally introduce Joint Optimization for Actual Cause Inference (JACI), an algorithm that learns from observational data to infer functional actual causes. We demonstrate empirically that FAC agrees with known results on a suite of examples from the actual causality literature, and JACI identifies actual causes with significantly higher accuracy than existing heuristic methods in a set of complex, continuous-valued environments.
[ "['Caleb Chuck' 'Sankaran Vaidyanathan' 'Stephen Giguere' 'Amy Zhang'\n 'David Jensen' 'Scott Niekum']" ]
null
null
2404.10906
null
null
http://arxiv.org/pdf/2404.10906v1
2024-04-16T20:53:17Z
2024-04-16T20:53:17Z
Towards a Research Community in Interpretable Reinforcement Learning: the InterpPol Workshop
Embracing the pursuit of intrinsically explainable reinforcement learning raises crucial questions: what distinguishes explainability from interpretability? Should explainable and interpretable agents be developed outside of domains where transparency is imperative? What advantages do interpretable policies offer over neural networks? How can we rigorously define and measure interpretability in policies, without user studies? What reinforcement learning paradigms are the most suited to develop interpretable agents? Can Markov Decision Processes integrate interpretable state representations? In addition to motivating an Interpretable RL community centered around the aforementioned questions, we propose the first venue dedicated to Interpretable RL: the InterpPol Workshop.
[ "['Hector Kohler' 'Quentin Delfosse' 'Paul Festor' 'Philippe Preux']" ]
null
null
2404.10921
null
null
http://arxiv.org/pdf/2404.10921v2
2024-04-29T22:14:32Z
2024-04-16T21:45:10Z
Tao: Re-Thinking DL-based Microarchitecture Simulation
Microarchitecture simulators are indispensable tools for microarchitecture designers to validate, estimate, and optimize new hardware that meets specific design requirements. While the quest for a fast, accurate and detailed microarchitecture simulation has been ongoing for decades, existing simulators excel and fall short at different aspects: (i) Although execution-driven simulation is accurate and detailed, it is extremely slow and requires expert-level experience to design. (ii) Trace-driven simulation reuses the execution traces in pursuit of fast simulation but faces accuracy concerns and fails to achieve significant speedup. (iii) Emerging deep learning (DL)-based simulations are remarkably fast and have acceptable accuracy but fail to provide adequate low-level microarchitectural performance metrics crucial for microarchitectural bottleneck analysis. Additionally, they introduce substantial overheads from trace regeneration and model re-training when simulating a new microarchitecture. Re-thinking the advantages and limitations of the aforementioned simulation paradigms, this paper introduces TAO that redesigns the DL-based simulation with three primary contributions: First, we propose a new training dataset design such that the subsequent simulation only needs functional trace as inputs, which can be rapidly generated and reused across microarchitectures. Second, we redesign the input features and the DL model using self-attention to support predicting various performance metrics. Third, we propose techniques to train a microarchitecture agnostic embedding layer that enables fast transfer learning between different microarchitectural configurations and reduces the re-training overhead of conventional DL-based simulators. Our extensive evaluation shows TAO can reduce the overall training and simulation time by 18.06x over the state-of-the-art DL-based endeavors.
[ "['Santosh Pandey' 'Amir Yazdanbakhsh' 'Hang Liu']" ]
null
null
2404.10933
null
null
http://arxiv.org/pdf/2404.10933v1
2024-04-16T22:11:35Z
2024-04-16T22:11:35Z
LLMem: Estimating GPU Memory Usage for Fine-Tuning Pre-Trained LLMs
Fine-tuning pre-trained large language models (LLMs) with limited hardware presents challenges due to GPU memory constraints. Various distributed fine-tuning methods have been proposed to alleviate memory constraints on GPU. However, determining the most effective method for achieving rapid fine-tuning while preventing GPU out-of-memory issues in a given environment remains unclear. To address this challenge, we introduce LLMem, a solution that estimates the GPU memory consumption when applying distributed fine-tuning methods across multiple GPUs and identifies the optimal method. We conduct GPU memory usage estimation prior to fine-tuning, leveraging the fundamental structure of transformer-based decoder models and the memory usage distribution of each method. Experimental results show that LLMem accurately estimates peak GPU memory usage on a single GPU, with error rates of up to 1.6%. Additionally, it shows an average error rate of 3.0% when applying distributed fine-tuning methods to LLMs with more than a billion parameters on multi-GPU setups.
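As a back-of-the-envelope companion (far coarser than LLMem's estimator), the main per-GPU memory terms for fine-tuning a decoder-only model can be sketched as follows; the dtypes, the two optimizer states, and the flat activation figure are our assumptions, not the paper's model.

```python
def rough_finetune_memory_gb(n_params, bytes_per_param=2, optimizer_states=2,
                             grad_bytes=2, activation_gb=8.0):
    """Crude sum of weights + gradients + optimizer states + activations."""
    weights = n_params * bytes_per_param
    grads = n_params * grad_bytes
    # e.g. Adam keeps two states per parameter (here assumed in the same dtype;
    # in practice they are often fp32, which roughly doubles this term)
    opt = n_params * bytes_per_param * optimizer_states
    return (weights + grads + opt) / 1e9 + activation_gb

# 7B parameters in fp16: 14 + 14 + 28 GB plus activations, i.e. about 64 GB,
# which is exactly why distributed fine-tuning methods are needed.
print(rough_finetune_memory_gb(7e9))
```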
[ "['Taeho Kim' 'Yanming Wang' 'Vatshank Chaturvedi' 'Lokesh Gupta'\n 'Seyeon Kim' 'Yongin Kwon' 'Sangtae Ha']" ]
null
null
2404.10934
null
null
http://arxiv.org/pdf/2404.10934v1
2024-04-16T22:12:36Z
2024-04-16T22:12:36Z
Shears: Unstructured Sparsity with Neural Low-rank Adapter Search
Recently, several approaches successfully demonstrated that weight-sharing Neural Architecture Search (NAS) can effectively explore a search space of elastic low-rank adapters (LoRA), allowing the parameter-efficient fine-tuning (PEFT) and compression of large language models. In this paper, we introduce a novel approach called Shears, demonstrating how the integration of cost-effective sparsity and a proposed Neural Low-rank adapter Search (NLS) algorithm can further improve the efficiency of PEFT approaches. Results demonstrate the benefits of Shears compared to other methods, reaching high sparsity levels while improving accuracy or incurring only a small drop, and utilizing a single GPU for just a couple of hours.
[ "['J. Pablo Muñoz' 'Jinjie Yuan' 'Nilesh Jain']" ]
null
null
2404.10935
null
null
http://arxiv.org/pdf/2404.10935v1
2024-04-16T22:15:52Z
2024-04-16T22:15:52Z
Molecular relaxation by reverse diffusion with time step prediction
Molecular relaxation, finding the equilibrium state of a non-equilibrium structure, is an essential component of computational chemistry to understand reactivity. Classical force field methods often rely on insufficient local energy minimization, while neural network force field models require large labeled datasets encompassing both equilibrium and non-equilibrium structures. As a remedy, we propose MoreRed, molecular relaxation by reverse diffusion, a conceptually novel and purely statistical approach where non-equilibrium structures are treated as noisy instances of their corresponding equilibrium states. To enable the denoising of arbitrarily noisy inputs via a generative diffusion model, we further introduce a novel diffusion time step predictor. Notably, MoreRed learns a simpler pseudo potential energy surface instead of the complex physical potential energy surface. It is trained on a significantly smaller, and thus computationally cheaper, dataset consisting of solely unlabeled equilibrium structures, avoiding the computation of non-equilibrium structures altogether. We compare MoreRed to classical force fields, equivariant neural network force fields trained on a large dataset of equilibrium and non-equilibrium data, as well as a semi-empirical tight-binding model. To assess this quantitatively, we evaluate the root-mean-square deviation between the found equilibrium structures and the reference equilibrium structures as well as their DFT energies.
[ "['Khaled Kahouli' 'Stefaan Simon Pierre Hessmann' 'Klaus-Robert Müller'\n 'Shinichi Nakajima' 'Stefan Gugler' 'Niklas Wolf Andreas Gebauer']" ]
null
null
2404.10936
null
null
http://arxiv.org/pdf/2404.10936v1
2024-04-16T22:27:23Z
2024-04-16T22:27:23Z
Beam Training in mmWave Vehicular Systems: Machine Learning for Decoupling Beam Selection
Codebook-based beam selection is one approach for configuring millimeter wave communication links. The overhead required to reconfigure the transmit and receive beam pair, though, increases in highly dynamic vehicular communication systems. Location information coupled with machine learning (ML) beam recommendation is one way to reduce the overhead of beam pair selection. In this paper, we develop ML-based location-aided approaches to decouple the beam selection between the user equipment (UE) and the base station (BS). We quantify the performance gaps due to decoupling beam selection and also disaggregating the UE's location information from the BS. Our simulation results show that decoupling beam selection with available location information at the BS performs comparable to joint beam pair selection at the BS. Moreover, decoupled beam selection without location closely approaches the performance of beam pair selection at the BS when sufficient beam pairs are swept.
[ "['Ibrahim Kilinc' 'Ryan M. Dreifuerst' 'Junghoon Kim' 'Robert W. Heath Jr']" ]
null
null
2404.10942
null
null
http://arxiv.org/pdf/2404.10942v2
2024-04-28T08:49:45Z
2024-04-16T22:47:59Z
What Hides behind Unfairness? Exploring Dynamics Fairness in Reinforcement Learning
In sequential decision-making problems involving sensitive attributes like race and gender, reinforcement learning (RL) agents must carefully consider long-term fairness while maximizing returns. Recent works have proposed many different types of fairness notions, but how unfairness arises in RL problems remains unclear. In this paper, we address this gap in the literature by investigating the sources of inequality through a causal lens. We first analyse the causal relationships governing the data generation process and decompose the effect of sensitive attributes on long-term well-being into distinct components. We then introduce a novel notion called dynamics fairness, which explicitly captures the inequality stemming from environmental dynamics, distinguishing it from those induced by decision-making or inherited from the past. This notion requires evaluating the expected changes in the next state and the reward induced by changing the value of the sensitive attribute while holding everything else constant. To quantitatively evaluate this counterfactual concept, we derive identification formulas that allow us to obtain reliable estimations from data. Extensive experiments demonstrate the effectiveness of the proposed techniques in explaining, detecting, and reducing inequality in reinforcement learning. We publicly release code at https://github.com/familyld/InsightFair.
[ "['Zhihong Deng' 'Jing Jiang' 'Guodong Long' 'Chengqi Zhang']" ]
null
null
2404.10949
null
null
http://arxiv.org/pdf/2404.10949v1
2024-04-16T23:17:04Z
2024-04-16T23:17:04Z
Human-Algorithm Collaborative Bayesian Optimization for Engineering Systems
Bayesian optimization has been successfully applied throughout Chemical Engineering for the optimization of functions that are expensive to evaluate, or where gradients are not easily obtainable. However, domain experts often possess valuable physical insights that are overlooked in fully automated decision-making approaches, necessitating the inclusion of human input. In this article we reintroduce the human into the data-driven decision-making loop by outlining an approach for collaborative Bayesian optimization. Our methodology exploits the hypothesis that humans are more efficient at making discrete choices rather than continuous ones and enables experts to influence critical early decisions. We apply high-throughput (batch) Bayesian optimization alongside discrete decision theory to enable domain experts to influence the selection of experiments. At every iteration we apply a multi-objective approach that results in a set of alternate solutions that have both high utility and are reasonably distinct. The expert then selects the desired solution for evaluation from this set, allowing for the inclusion of expert knowledge and improving accountability, whilst maintaining the advantages of Bayesian optimization. We demonstrate our approach across a number of applied and numerical case studies including bioprocess optimization and reactor geometry design, demonstrating that even in the case of an uninformed practitioner our algorithm recovers the regret of standard Bayesian optimization. Through the inclusion of continuous expert opinion, our approach enables faster convergence, and improved accountability for Bayesian optimization in engineering systems.
[ "['Tom Savage' 'Ehecatl Antonio del Rio Chanona']" ]
null
null
2404.10957
null
null
http://arxiv.org/pdf/2404.10957v2
2024-04-21T18:08:28Z
2024-04-16T23:47:23Z
Personalized Federated Learning via Stacking
Traditional Federated Learning (FL) methods typically train a single global model collaboratively without exchanging raw data. In contrast, Personalized Federated Learning (PFL) techniques aim to create multiple models that are better tailored to individual clients' data. We present a novel personalization approach based on stacked generalization where clients directly send each other privacy-preserving models to be used as base models to train a meta-model on private data. Our approach is flexible, accommodating various privacy-preserving techniques and model types, and can be applied in horizontal, hybrid, and vertically partitioned federations. Additionally, it offers a natural mechanism for assessing each client's contribution to the federation. Through comprehensive evaluations across diverse simulated data heterogeneity scenarios, we showcase the effectiveness of our method.
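A hedged sketch of the stacking step described above: a client treats peer models as frozen base learners and fits a meta-model on its own private data from their predictions (binary classification and scikit-learn-style models assumed; names are illustrative).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_meta_model(peer_models, X_private, y_private):
    """peer_models: fitted classifiers received from other clients,
    each exposing predict_proba; only their outputs touch private data."""
    Z = np.column_stack([m.predict_proba(X_private)[:, 1] for m in peer_models])
    return LogisticRegression().fit(Z, y_private)

def personalized_predict(meta, peer_models, X):
    Z = np.column_stack([m.predict_proba(X)[:, 1] for m in peer_models])
    return meta.predict(Z)
```

One appeal of this design is that the meta-model's coefficients give a direct, per-client view of how much each peer's model contributes, matching the contribution-assessment mechanism mentioned above.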
[ "['Emilio Cantu-Cervini']" ]
null
null
2404.10976
null
null
http://arxiv.org/pdf/2404.10976v3
2024-05-11T23:50:21Z
2024-04-17T01:17:10Z
Group-Aware Coordination Graph for Multi-Agent Reinforcement Learning
Cooperative Multi-Agent Reinforcement Learning (MARL) necessitates seamless collaboration among agents, often represented by an underlying relation graph. Existing methods for learning this graph primarily focus on agent-pair relations, neglecting higher-order relationships. While several approaches attempt to extend cooperation modelling to encompass behaviour similarities within groups, they commonly fall short in concurrently learning the latent graph, thereby constraining the information exchange among partially observed agents. To overcome these limitations, we present a novel approach to infer the Group-Aware Coordination Graph (GACG), which is designed to capture both the cooperation between agent pairs based on current observations and group-level dependencies from behaviour patterns observed across trajectories. This graph is further used in graph convolution for information exchange between agents during decision-making. To further ensure behavioural consistency among agents within the same group, we introduce a group distance loss, which promotes group cohesion and encourages specialization between groups. Our evaluations, conducted on StarCraft II micromanagement tasks, demonstrate GACG's superior performance. An ablation study further provides experimental evidence of the effectiveness of each component of our method.
[ "['Wei Duan' 'Jie Lu' 'Junyu Xuan']" ]
null
null
2404.10978
null
null
http://arxiv.org/pdf/2404.10978v1
2024-04-17T01:23:49Z
2024-04-17T01:23:49Z
Leveraging 3D LiDAR Sensors to Enable Enhanced Urban Safety and Public Health: Pedestrian Monitoring and Abnormal Activity Detection
The integration of Light Detection and Ranging (LiDAR) and Internet of Things (IoT) technologies offers transformative opportunities for public health informatics in urban safety and pedestrian well-being. This paper proposes a novel framework utilizing these technologies for enhanced 3D object detection and activity classification in urban traffic scenarios. By employing elevated LiDAR, we obtain detailed 3D point cloud data, enabling precise pedestrian activity monitoring. To overcome urban data scarcity, we create a specialized dataset through simulated traffic environments in Blender, facilitating targeted model training. Our dual-model approach employs a modified Point Voxel-Region-based Convolutional Neural Network (PV-RCNN) for robust 3D detection and PointNet for classifying pedestrian activities, significantly benefiting urban traffic management and public health by offering insights into pedestrian behavior and promoting safer urban environments.
[ "['Nawfal Guefrachi' 'Jian Shi' 'Hakim Ghazzai' 'Ahmad Alsharoa']" ]
null
null
2404.10980
null
null
http://arxiv.org/pdf/2404.10980v1
2024-04-17T01:26:15Z
2024-04-17T01:26:15Z
Hyper Evidential Deep Learning to Quantify Composite Classification Uncertainty
Deep neural networks (DNNs) have been shown to perform well on exclusive, multi-class classification tasks. However, when different classes have similar visual features, it becomes challenging for human annotators to differentiate them. This scenario necessitates the use of composite class labels. In this paper, we propose a novel framework called Hyper-Evidential Neural Network (HENN) that explicitly models predictive uncertainty due to composite class labels in training data in the context of the belief theory called Subjective Logic (SL). By placing a grouped Dirichlet distribution on the class probabilities, we treat predictions of a neural network as parameters of hyper-subjective opinions and learn the network that collects both single and composite evidence leading to these hyper-opinions by a deterministic DNN from data. We introduce a new uncertainty type called vagueness originally designed for hyper-opinions in SL to quantify composite classification uncertainty for DNNs. Our results demonstrate that HENN outperforms its state-of-the-art counterparts based on four image datasets. The code and datasets are available at: https://github.com/Hugo101/HyperEvidentialNN.
[ "['Changbin Li' 'Kangshuo Li' 'Yuzhe Ou' 'Lance M. Kaplan' 'Audun Jøsang'\n 'Jin-Hee Cho' 'Dong Hyun Jeong' 'Feng Chen']" ]
null
null
2404.10984
null
null
http://arxiv.org/pdf/2404.10984v1
2024-04-17T01:31:00Z
2024-04-17T01:31:00Z
Graph Continual Learning with Debiased Lossless Memory Replay
Real-life graph data often expands continually, rendering the learning of graph neural networks (GNNs) on static graph data impractical. Graph continual learning (GCL) tackles this problem by continually adapting GNNs to the expanded graph of the current task while maintaining the performance over the graph of previous tasks. Memory replay-based methods, which aim to replay data of previous tasks when learning new tasks, have been explored as one principled approach to mitigate the forgetting of the knowledge learned from the previous tasks. In this paper we extend this methodology with a novel framework, called Debiased Lossless Memory replay (DeLoMe). Unlike existing methods that sample nodes/edges of previous graphs to construct the memory, DeLoMe learns small lossless synthetic node representations as the memory. The learned memory can not only preserve the graph data privacy but also capture the holistic graph information, for which the sampling-based methods are not viable. Further, prior methods suffer from bias toward the current task due to the data imbalance between the classes in the memory data and the current data. A debiased GCL loss function is devised in DeLoMe to effectively alleviate this bias. Extensive experiments on four graph datasets show the effectiveness of DeLoMe under both class- and task-incremental learning settings.
[ "['Chaoxi Niu' 'Guansong Pang' 'Ling Chen']" ]
null
null
2404.10989
null
null
http://arxiv.org/pdf/2404.10989v1
2024-04-17T01:53:03Z
2024-04-17T01:53:03Z
FairSSD: Understanding Bias in Synthetic Speech Detectors
Methods that can generate synthetic speech which is perceptually indistinguishable from speech recorded by a human speaker, are easily available. Several incidents report misuse of synthetic speech generated from these methods to commit fraud. To counter such misuse, many methods have been proposed to detect synthetic speech. Some of these detectors are more interpretable, can generalize to detect synthetic speech in the wild and are robust to noise. However, limited work has been done on understanding bias in these detectors. In this work, we examine bias in existing synthetic speech detectors to determine if they will unfairly target a particular gender, age and accent group. We also inspect whether these detectors will have a higher misclassification rate for bona fide speech from speech-impaired speakers w.r.t fluent speakers. Extensive experiments on 6 existing synthetic speech detectors using more than 0.9 million speech signals demonstrate that most detectors are gender, age and accent biased, and future work is needed to ensure fairness. To support future research, we release our evaluation dataset, models used in our study and source code at https://gitlab.com/viper-purdue/fairssd.
[ "['Amit Kumar Singh Yadav' 'Kratika Bhagtani' 'Davide Salvi'\n 'Paolo Bestagini' 'Edward J. Delp']" ]
null
null
2404.10991
null
null
http://arxiv.org/abs/2404.10991v1
2024-04-17T02:04:10Z
2024-04-17T02:04:10Z
Function Approximation for Reinforcement Learning Controller for Energy from Spread Waves
The industrial multi-generator Wave Energy Converters (WEC) must handle multiple simultaneous waves coming from different directions called spread waves. These complex devices in challenging circumstances need controllers with multiple objectives of energy capture efficiency, reduction of structural stress to limit maintenance, and proactive protection against high waves. The Multi-Agent Reinforcement Learning (MARL) controller trained with the Proximal Policy Optimization (PPO) algorithm can handle these complexities. In this paper, we explore different function approximations for the policy and critic networks in modeling the sequential nature of the system dynamics and find that they are key to better performance. We investigated the performance of a fully connected neural network (FCN), LSTM, and Transformer model variants with varying depths and gated residual connections. Our results show that the transformer model of moderate depth with gated residual connections around the multi-head attention, multi-layer perceptron, and the transformer block (STrXL) proposed in this paper is optimal and boosts energy efficiency by an average of 22.1% for these complex spread waves over the existing spring damper (SD) controller. Furthermore, unlike the default SD controller, the transformer controller almost eliminated the mechanical stress from the rotational yaw motion for angled waves. Demo: https://tinyurl.com/yueda3jh
[ "['Soumyendu Sarkar' 'Vineet Gundecha' 'Sahand Ghorbanpour'\n 'Alexander Shmakov' 'Ashwin Ramesh Babu' 'Avisek Naug'\n 'Alexandre Pichard' 'Mathieu Cocho']" ]
null
null
2404.10995
null
null
http://arxiv.org/pdf/2404.10995v1
2024-04-17T02:17:05Z
2024-04-17T02:17:05Z
Clipped SGD Algorithms for Privacy Preserving Performative Prediction: Bias Amplification and Remedies
Clipped stochastic gradient descent (SGD) algorithms are among the most popular algorithms for privacy-preserving optimization that reduces the leakage of users' identity in model training. This paper studies the convergence properties of these algorithms in a performative prediction setting, where the data distribution may shift due to the deployed prediction model; such shifts arise, for example, from strategic users during the training of a bank's loan policy. Our contributions are two-fold. First, we show that the straightforward implementation of a projected clipped SGD (PCSGD) algorithm may converge to a biased solution compared to the performative stable solution. We quantify lower and upper bounds for the magnitude of the bias and demonstrate a bias amplification phenomenon, where the bias grows with the sensitivity of the data distribution. Second, we suggest two remedies for the bias amplification effect. The first utilizes an optimal step size design for PCSGD that takes the privacy guarantee into account. The second uses the recently proposed DiceSGD algorithm [Zhang et al., 2024]. We show that the latter can successfully remove the bias and converge to the performative stable solution. Numerical experiments verify our analysis.
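As a rough illustration of the mechanism the paper analyzes, the sketch below implements one projected clipped SGD iteration in the usual DP-SGD style; the l2-ball constraint set, the clipping threshold `clip_c`, and the noise scale `sigma` are illustrative assumptions rather than the paper's exact setting.

```python
import numpy as np

def pcsgd_step(theta, per_sample_grads, clip_c, sigma, lr, radius):
    """One projected clipped SGD (PCSGD) iteration: clip each per-sample
    gradient to norm clip_c, average, add Gaussian noise calibrated to the
    clipping level, take a gradient step, then project the iterate back
    onto the l2 ball of the given radius (the constraint set)."""
    clipped = [g * min(1.0, clip_c / (np.linalg.norm(g) + 1e-12))
               for g in per_sample_grads]
    noise = sigma * clip_c * np.random.randn(*theta.shape)
    g_hat = (np.sum(clipped, axis=0) + noise) / len(per_sample_grads)
    theta = theta - lr * g_hat
    norm = np.linalg.norm(theta)
    if norm > radius:                      # Euclidean projection onto the ball
        theta = theta * (radius / norm)
    return theta
```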
[ "['Qiang Li' 'Michal Yemini' 'Hoi-To Wai']" ]
null
null
2404.10997
null
null
http://arxiv.org/pdf/2404.10997v1
2024-04-17T02:17:23Z
2024-04-17T02:17:23Z
Online Algorithms with Limited Data Retention
We introduce a model of online algorithms subject to strict constraints on data retention. An online learning algorithm encounters a stream of data points, one per round, generated by some stationary process. Crucially, each data point can request that it be removed from memory $m$ rounds after it arrives. To model the impact of removal, we do not allow the algorithm to store any information or calculations between rounds other than a subset of the data points (subject to the retention constraints). At the conclusion of the stream, the algorithm answers a statistical query about the full dataset. We ask: what level of performance can be guaranteed as a function of $m$? We illustrate this framework for multidimensional mean estimation and linear regression problems. We show it is possible to obtain an exponential improvement over a baseline algorithm that retains all data as long as possible. Specifically, we show that $m = \textsc{Poly}(d, \log(1/\epsilon))$ retention suffices to achieve mean squared error $\epsilon$ after observing $O(1/\epsilon)$ $d$-dimensional data points. This matches the error bound of the optimal, yet infeasible, algorithm that retains all data forever. We also show a nearly matching lower bound on the retention required to guarantee error $\epsilon$. One implication of our results is that data retention laws are insufficient to guarantee the right to be forgotten even in a non-adversarial world in which firms merely strive to (approximately) optimize the performance of their algorithms. Our approach makes use of recent developments in the multidimensional random subset sum problem to simulate the progression of stochastic gradient descent under a model of adversarial noise, which may be of independent interest.
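To make the retention model concrete, here is a minimal sketch of the baseline the paper improves upon: every point is kept for as long as its retention limit $m$ allows, so only the most recent $m$ points survive to answer the final query (here, the mean). The paper's algorithms choose which points to retain more cleverly; this is only the naive reference.

```python
import numpy as np

def retained_mean(stream, m):
    """Baseline estimator in the retention model: each point may be stored
    for at most m rounds, so only the last m points survive to the end of
    the stream, and the statistical query (the mean, here) is answered
    from them alone."""
    window = []
    for x in stream:
        window.append(np.asarray(x))
        if len(window) > m:        # the oldest point hits its retention limit
            window.pop(0)
    return np.mean(window, axis=0)
```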
[ "['Nicole Immorlica' 'Brendan Lucier' 'Markus Mobius' 'James Siderius']" ]
null
null
2404.11013
null
null
http://arxiv.org/pdf/2404.11013v2
2024-05-19T23:01:53Z
2024-04-17T02:44:25Z
Control Theoretic Approach to Fine-Tuning and Transfer Learning
Given a training set in the form of a paired $(\mathcal{X},\mathcal{Y})$, we say that the control system $\dot x = f(x,u)$ has learned the paired set via the control $u^*$ if the system steers each point of $\mathcal{X}$ to its corresponding target in $\mathcal{Y}$. If the training set is expanded, most existing methods for finding a new control $u^*$ require starting from scratch, resulting in a quadratic increase in complexity with the number of points. To overcome this limitation, we introduce the concept of \textit{tuning without forgetting}. We develop \textit{an iterative algorithm} to tune the control $u^*$ when the training set expands, whereby points already in the paired set are still matched, and new training samples are learned. At each update of our method, the control $u^*$ is projected onto the kernel of the end-point mapping generated by the controlled dynamics at the learned samples. This ensures that the end-points of previously learned samples stay constant while additional samples are learned iteratively.
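A minimal sketch of the projection step described above, assuming the end-point mapping's gradients at the already-learned samples have been stacked into a Jacobian-like matrix `J` (how those gradients are computed is paper-specific and omitted here): projecting the control update onto the null space of `J` leaves the learned end-points unchanged to first order.

```python
import numpy as np

def project_to_kernel(update, J):
    """Project a control update onto the null space of J, where each row
    of J is the gradient of the end-point map at one already-learned
    sample; the projected update leaves those end-points unchanged to
    first order. update: (p,), J: (k, p)."""
    P = np.eye(J.shape[1]) - np.linalg.pinv(J) @ J   # null-space projector
    return P @ update
```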
[ "['Erkan Bayram' 'Shenyu Liu' 'Mohamed-Ali Belabbas' 'Tamer Başar']" ]
null
null
2404.11015
null
null
http://arxiv.org/pdf/2404.11015v2
2024-04-20T14:26:07Z
2024-04-17T02:46:59Z
FedFa: A Fully Asynchronous Training Paradigm for Federated Learning
Federated learning has been identified as an efficient decentralized training paradigm for scaling machine learning model training to a large number of devices while guaranteeing the data privacy of the trainers. FedAvg has become a foundational parameter update strategy for federated learning, promising to eliminate the effect of heterogeneous data across clients and guarantee convergence. However, the synchronization barrier for parameter updates in each communication round of training incurs significant waiting time, slowing down the training procedure. Therefore, recent state-of-the-art solutions propose semi-asynchronous approaches to mitigate the waiting-time cost with guaranteed convergence. Nevertheless, emerging semi-asynchronous approaches are unable to eliminate the waiting time completely. We propose a fully asynchronous training paradigm, called FedFa, which guarantees model convergence and completely eliminates waiting time for federated learning by using a few buffered results on the server for parameter updating. Further, we provide a theoretical proof of the convergence rate for our proposed FedFa. Extensive experimental results indicate that our approach effectively improves the training performance of federated learning, achieving up to 6x and 4x speedups compared to state-of-the-art synchronous and semi-asynchronous strategies, respectively, while retaining high accuracy in both IID and Non-IID scenarios.
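The buffered, fully asynchronous server update can be sketched as follows; the buffer size, the plain averaging rule, and the server step size are illustrative assumptions standing in for FedFa's exact update, which the paper derives with a convergence guarantee.

```python
import numpy as np
from collections import deque

class BufferedAsyncServer:
    """Server side of a buffered fully-asynchronous scheme: an update is
    applied as soon as a small buffer fills, so no client ever waits for
    stragglers."""
    def __init__(self, weights, buffer_size=4, lr=1.0):
        self.w = np.asarray(weights, dtype=float)
        self.buf = deque()
        self.k, self.lr = buffer_size, lr

    def receive(self, client_delta):
        """Called whenever any client finishes local training."""
        self.buf.append(np.asarray(client_delta, dtype=float))
        if len(self.buf) >= self.k:
            self.w = self.w + self.lr * np.mean(self.buf, axis=0)
            self.buf.clear()
        return self.w              # the client resumes immediately
```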
[ "['Haotian Xu' 'Zhaorui Zhang' 'Sheng Di' 'Benben Liu'\n 'Khalid Ayed Alharthi' 'Jiannong Cao']" ]
null
null
2404.11018
null
null
http://arxiv.org/pdf/2404.11018v2
2024-05-22T17:06:10Z
2024-04-17T02:49:26Z
Many-Shot In-Context Learning
Large language models (LLMs) excel at few-shot in-context learning (ICL) -- learning from a few examples provided in context at inference, without any weight updates. Newly expanded context windows allow us to investigate ICL with hundreds or thousands of examples -- the many-shot regime. Going from few-shot to many-shot, we observe significant performance gains across a wide variety of generative and discriminative tasks. While promising, many-shot ICL can be bottlenecked by the available amount of human-generated examples. To mitigate this limitation, we explore two new settings: Reinforced and Unsupervised ICL. Reinforced ICL uses model-generated chain-of-thought rationales in place of human examples. Unsupervised ICL removes rationales from the prompt altogether, and prompts the model only with domain-specific questions. We find that both Reinforced and Unsupervised ICL can be quite effective in the many-shot regime, particularly on complex reasoning tasks. Finally, we demonstrate that, unlike few-shot learning, many-shot learning is effective at overriding pretraining biases, can learn high-dimensional functions with numerical inputs, and performs comparably to fine-tuning. Our analysis also reveals the limitations of next-token prediction loss as an indicator of downstream ICL performance.
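A toy sketch of the Reinforced ICL recipe described above: model-generated rationales are filtered by whether their final answer matches the reference, and the survivors are packed into a many-shot prompt. The Q/A prompt format and the crude string-matching correctness check are assumptions for illustration, standing in for a real verifier.

```python
def reinforced_icl_prompt(problems, rationales, answers, question):
    """Assemble a many-shot prompt from model-generated rationales, keeping
    only those whose final answer matches the reference."""
    shots = [f"Q: {p}\nA: {r}"
             for p, r, a in zip(problems, rationales, answers)
             if r.strip().endswith(str(a))]     # crude correctness filter
    return "\n\n".join(shots + [f"Q: {question}\nA:"])
```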
[ "['Rishabh Agarwal' 'Avi Singh' 'Lei M. Zhang' 'Bernd Bohnet' 'Luis Rosias'\n 'Stephanie Chan' 'Biao Zhang' 'Ankesh Anand' 'Zaheer Abbas' 'Azade Nova'\n 'John D. Co-Reyes' 'Eric Chu' 'Feryal Behbahani' 'Aleksandra Faust'\n 'Hugo Larochelle']" ]
null
null
2404.11019
null
null
http://arxiv.org/pdf/2404.11019v1
2024-04-17T02:52:11Z
2024-04-17T02:52:11Z
You do not have to train Graph Neural Networks at all on text-attributed graphs
Graph structured data, specifically text-attributed graphs (TAG), effectively represent relationships among varied entities. Such graphs are essential for semi-supervised node classification tasks. Graph Neural Networks (GNNs) have emerged as a powerful tool for handling this graph-structured data. Although gradient descent is commonly utilized for training GNNs for node classification, this study ventures into alternative methods, eliminating the iterative optimization processes. We introduce TrainlessGNN, a linear GNN model capitalizing on the observation that text encodings from the same class often cluster together in a linear subspace. This model constructs a weight matrix to represent each class's node attribute subspace, offering an efficient approach to semi-supervised node classification on TAG. Extensive experiments reveal that our trainless models can either match or even surpass their conventionally trained counterparts, demonstrating the possibility of refraining from gradient descent in certain configurations.
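In the spirit of the observation above, here is a minimal sketch of a training-free linear GNN: propagate features with the standard normalized adjacency, then set each class's weight vector to the mean of its labeled nodes' propagated features. The exact weight-matrix construction in TrainlessGNN may differ; this class-prototype variant only illustrates the idea.

```python
import numpy as np

def trainless_gnn(X, y, adj, train_idx, num_classes, hops=2):
    """Training-free linear GNN sketch: propagate node features with the
    symmetrically normalized adjacency (with self-loops), then take each
    class's weight vector as the mean propagated feature of its labeled
    nodes. Returns propagated features H and the weight matrix W."""
    A = adj + np.eye(adj.shape[0])
    d = A.sum(axis=1)
    A = A / np.sqrt(np.outer(d, d))           # D^{-1/2} (A + I) D^{-1/2}
    H = X.copy()
    for _ in range(hops):
        H = A @ H
    W = np.stack([H[train_idx][y[train_idx] == c].mean(axis=0)
                  for c in range(num_classes)])
    return H, W                                # predict: (H @ W.T).argmax(1)
```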
[ "['Kaiwen Dong' 'Zhichun Guo' 'Nitesh V. Chawla']" ]
null
null
2404.11023
null
null
http://arxiv.org/pdf/2404.11023v1
2024-04-17T02:57:42Z
2024-04-17T02:57:42Z
Advancing Social Intelligence in AI Agents: Technical Challenges and Open Questions
Building socially-intelligent AI agents (Social-AI) is a multidisciplinary, multimodal research goal that involves creating agents that can sense, perceive, reason about, learn from, and respond to affect, behavior, and cognition of other agents (human or artificial). Progress towards Social-AI has accelerated in the past decade across several computing communities, including natural language processing, machine learning, robotics, human-machine interaction, computer vision, and speech. Natural language processing, in particular, has been prominent in Social-AI research, as language plays a key role in constructing the social world. In this position paper, we identify a set of underlying technical challenges and open questions for researchers across computing communities to advance Social-AI. We anchor our discussion in the context of social intelligence concepts and prior progress in Social-AI research.
[ "['Leena Mathur' 'Paul Pu Liang' 'Louis-Philippe Morency']" ]
null
null
2404.11032
null
null
http://arxiv.org/pdf/2404.11032v1
2024-04-17T03:20:42Z
2024-04-17T03:20:42Z
CORE: Data Augmentation for Link Prediction via Information Bottleneck
Link prediction (LP) is a fundamental task in graph representation learning, with numerous applications in diverse domains. However, the generalizability of LP models is often compromised due to the presence of noisy or spurious information in graphs and the inherent incompleteness of graph data. To address these challenges, we draw inspiration from the Information Bottleneck principle and propose a novel data augmentation method, COmplete and REduce (CORE) to learn compact and predictive augmentations for LP models. In particular, CORE aims to recover missing edges in graphs while simultaneously removing noise from the graph structures, thereby enhancing the model's robustness and performance. Extensive experiments on multiple benchmark datasets demonstrate the applicability and superiority of CORE over state-of-the-art methods, showcasing its potential as a leading approach for robust LP in graph representation learning.
[ "['Kaiwen Dong' 'Zhichun Guo' 'Nitesh V. Chawla']" ]
null
null
2404.11036
null
null
http://arxiv.org/pdf/2404.11036v1
2024-04-17T03:25:54Z
2024-04-17T03:25:54Z
Cross-Platform Hate Speech Detection with Weakly Supervised Causal Disentanglement
Content moderation faces a challenging task, as social media's ability to spread hate speech contrasts with its role in promoting global connectivity. With rapidly evolving slang and hate speech, the adaptability of conventional deep learning to the fluid landscape of online dialogue remains limited. In response, causality-inspired disentanglement has shown promise by segregating platform-specific peculiarities from universal hate indicators. However, its dependency on available ground-truth target labels for discerning these nuances faces practical hurdles with the incessant evolution of platforms and the mutable nature of hate speech. Using confidence-based reweighting and contrastive regularization, this study presents HATE WATCH, a novel framework of weakly supervised causal disentanglement that circumvents the need for explicit target labeling and effectively disentangles input features into invariant representations of hate. Empirical validation across four platforms, two with target labels and two without, positions HATE WATCH as a novel method in cross-platform hate speech detection with superior performance. HATE WATCH advances scalable content moderation techniques towards developing safer online communities.
[ "['Paras Sheth' 'Tharindu Kumarage' 'Raha Moraffah' 'Aman Chadha'\n 'Huan Liu']" ]
null
null
2404.11041
null
null
http://arxiv.org/pdf/2404.11041v2
2024-06-18T02:03:35Z
2024-04-17T03:34:27Z
On the Empirical Complexity of Reasoning and Planning in LLMs
Chain-of-thought (CoT), tree-of-thought (ToT), and related techniques work surprisingly well in practice for some complex reasoning tasks with Large Language Models (LLMs), but why? This work seeks the underlying reasons by conducting experimental case studies and linking the performance benefits to well-established sample and computational complexity principles in machine learning. We experimented with 6 reasoning tasks, ranging from grade school math, air travel planning, ..., to Blocksworld. The results suggest that (i) both CoT and ToT benefit significantly from task decomposition, which breaks a complex reasoning task into a sequence of steps with low sample complexity and explicitly outlines the reasoning structure, and (ii) for computationally hard reasoning tasks, the more sophisticated tree structure of ToT outperforms the linear structure of CoT. These findings provide useful guidelines for the use of LLMs in solving reasoning tasks in practice.
[ "['Liwei Kang' 'Zirui Zhao' 'David Hsu' 'Wee Sun Lee']" ]
null
null
2404.11046
null
null
http://arxiv.org/pdf/2404.11046v1
2024-04-17T03:42:48Z
2024-04-17T03:42:48Z
Lightweight Unsupervised Federated Learning with Pretrained Vision Language Model
Federated learning aims to tackle the "isolated data island" problem, where it trains a collective model from physically isolated clients while safeguarding the privacy of users' data. However, supervised federated learning necessitates that each client labels their data for training, which can be both time-consuming and resource-intensive, and may even be impractical for edge devices. Moreover, the training and transmission of deep models present challenges to the computation and communication capabilities of the clients. To address these two inherent challenges in supervised federated learning, we propose a novel lightweight unsupervised federated learning approach that leverages unlabeled data on each client to perform lightweight model training and communication by harnessing pretrained vision-language models, such as CLIP. By capitalizing on the zero-shot prediction capability and the well-trained image encoder of the pre-trained CLIP model, we have carefully crafted an efficient and resilient self-training approach. This method refines the initial zero-shot predicted pseudo-labels of unlabeled instances through the sole training of a linear classifier on top of the fixed image encoder. Additionally, to address data heterogeneity within each client, we propose a class-balanced text feature sampling strategy for generating synthetic instances in the feature space to support local training. Experiments are conducted on multiple benchmark datasets. The experimental results demonstrate that our proposed method greatly enhances model performance in comparison to CLIP's zero-shot predictions and even outperforms supervised federated learning benchmark methods given limited computational and communication overhead.
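A minimal sketch of the self-training loop on a single client, assuming image and per-class text features have already been extracted by a frozen CLIP-style encoder pair and L2-normalized; the confidence filter and round count are illustrative assumptions, and only the linear head is ever trained.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train_linear_head(img_feats, txt_feats, rounds=3, keep=0.8):
    """Refine zero-shot pseudo-labels with a linear head on top of frozen
    features: img_feats (n, d) from the frozen image encoder, txt_feats
    (num_classes, d) from the frozen text encoder."""
    pseudo = (img_feats @ txt_feats.T).argmax(axis=1)  # zero-shot labels
    clf = LogisticRegression(max_iter=1000)
    for _ in range(rounds):
        clf.fit(img_feats, pseudo)
        proba = clf.predict_proba(img_feats)
        conf = proba.max(axis=1)
        confident = conf >= np.quantile(conf, 1.0 - keep)
        # Refresh only the most confident pseudo-labels each round.
        pseudo[confident] = clf.classes_[proba.argmax(axis=1)[confident]]
    return clf
```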
[ "['Hao Yan' 'Yuhong Guo']" ]
null
null
2404.11049
null
null
http://arxiv.org/pdf/2404.11049v2
2024-05-23T01:13:41Z
2024-04-17T03:44:58Z
Stepwise Alignment for Constrained Language Model Policy Optimization
Safety and trustworthiness are indispensable requirements for real-world applications of AI systems using large language models (LLMs). This paper formulates human value alignment as an optimization problem of the language model policy to maximize reward under a safety constraint, and then proposes an algorithm, Stepwise Alignment for Constrained Policy Optimization (SACPO). One key idea behind SACPO, supported by theory, is that the optimal policy incorporating reward and safety can be directly obtained from a reward-aligned policy. Building on this key idea, SACPO aligns LLMs step-wise with each metric while leveraging simple yet powerful alignment algorithms such as direct preference optimization (DPO). SACPO offers several advantages, including simplicity, stability, computational efficiency, and flexibility of algorithms and datasets. Under mild assumptions, our theoretical analysis provides the upper bounds on optimality and safety constraint violation. Our experimental results show that SACPO can fine-tune Alpaca-7B better than the state-of-the-art method in terms of both helpfulness and harmlessness.
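The alignment primitive SACPO applies step-wise is, per the abstract, an off-the-shelf algorithm such as DPO; for concreteness, here is the standard DPO objective that would be run first for the reward (helpfulness) metric and then again for the safety metric. Batching and how the log-probabilities are gathered are left out.

```python
import torch
import torch.nn.functional as F

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Standard DPO objective: logp_* are total log-probabilities of the
    chosen (w) and rejected (l) responses under the policy and the frozen
    reference model. SACPO would run this once per alignment metric."""
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    return -F.logsigmoid(margin).mean()
```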
[ "['Akifumi Wachi' 'Thien Q. Tran' 'Rei Sato' 'Takumi Tanabe'\n 'Youhei Akimoto']" ]
null
null
2404.11052
null
null
http://arxiv.org/pdf/2404.11052v2
2024-04-18T01:59:27Z
2024-04-17T03:51:55Z
Supervised Contrastive Vision Transformer for Breast Histopathological Image Classification
Invasive ductal carcinoma (IDC) is the most prevalent form of breast cancer. Breast tissue histopathological examination is critical in diagnosing and classifying breast cancer. Although existing methods have shown promising results, there is still room for improvement in the classification accuracy and generalization of IDC using histopathology images. We present a novel approach, Supervised Contrastive Vision Transformer (SupCon-ViT), for improving the classification of invasive ductal carcinoma in terms of accuracy and generalization by leveraging the inherent strengths and advantages of both transfer learning, i.e., pre-trained vision transformer, and supervised contrastive learning. Our results on a benchmark breast cancer dataset demonstrate that SupCon-ViT achieves state-of-the-art performance in IDC classification, with an F1-score of 0.8188, precision of 0.7692, and specificity of 0.8971, outperforming existing methods. In addition, the proposed model demonstrates resilience in scenarios with minimal labeled data, making it highly efficient in real-world clinical settings where labelled data is limited. Our findings suggest that supervised contrastive learning in conjunction with pre-trained vision transformers appears to be a viable strategy for an accurate classification of IDC, thus paving the way for a more efficient and reliable diagnosis of breast cancer through histopathological image analysis.
[ "['Mohammad Shiri' 'Monalika Padma Reddy' 'Jiangwen Sun']" ]
null
null
2404.11056
null
null
http://arxiv.org/pdf/2404.11056v1
2024-04-17T04:08:38Z
2024-04-17T04:08:38Z
LMEraser: Large Model Unlearning through Adaptive Prompt Tuning
To address the growing demand for privacy protection in machine learning, we propose a novel and efficient machine unlearning approach for \textbf{L}arge \textbf{M}odels, called \textbf{LM}Eraser. Existing unlearning research suffers from entangled training data and complex model architectures, incurring extremely high computational costs for large models. LMEraser takes a divide-and-conquer strategy with a prompt tuning architecture to isolate data influence. The training dataset is partitioned into public and private datasets. Public data are used to train the backbone of the model. Private data are adaptively clustered based on their diversity, and each cluster is used to optimize a prompt separately. This adaptive prompt tuning mechanism reduces unlearning costs and maintains model performance. Experiments demonstrate that LMEraser achieves a $100$-fold reduction in unlearning costs without compromising accuracy compared to prior work. Our code is available at: \url{https://github.com/lmeraser/lmeraser}.
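The isolation step can be sketched as follows, assuming private examples are represented by backbone features; the cluster count and the use of k-means are illustrative assumptions standing in for LMEraser's adaptive clustering. Each returned index set would then be used to tune its own prompt.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_private_data(private_feats, n_clusters=8):
    """Isolate data influence LMEraser-style: cluster private examples so
    each cluster optimizes its own prompt; unlearning a point then only
    requires re-tuning its cluster's prompt, leaving the public-data
    backbone untouched."""
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(private_feats)
    return [np.where(km.labels_ == c)[0] for c in range(n_clusters)]
```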
[ "['Jie Xu' 'Zihan Wu' 'Cong Wang' 'Xiaohua Jia']" ]
null
null
2404.11068
null
null
http://arxiv.org/pdf/2404.11068v1
2024-04-17T04:55:33Z
2024-04-17T04:55:33Z
ScaleFold: Reducing AlphaFold Initial Training Time to 10 Hours
AlphaFold2 has been hailed as a breakthrough in protein folding. It can rapidly predict protein structures with lab-grade accuracy. However, its implementation does not include the necessary training code. OpenFold is the first trainable public reimplementation of AlphaFold. The AlphaFold training procedure is prohibitively time-consuming and gets diminishing benefits from scaling to more compute resources. In this work, we conducted a comprehensive analysis of the AlphaFold training procedure based on OpenFold and identified that inefficient communications and overhead-dominated computations were the key factors preventing AlphaFold training from scaling effectively. We introduced ScaleFold, a systematic training method that incorporates optimizations specifically for these factors. ScaleFold successfully scaled AlphaFold training to 2080 NVIDIA H100 GPUs with high resource utilization. In the MLPerf HPC v3.0 benchmark, ScaleFold finished the OpenFold benchmark in 7.51 minutes, showing over a $6\times$ speedup over the baseline. For training the AlphaFold model from scratch, ScaleFold completed the pretraining in 10 hours, a significant improvement over the seven days required by the original AlphaFold pretraining baseline.
[ "['Feiwen Zhu' 'Arkadiusz Nowaczynski' 'Rundong Li' 'Jie Xin' 'Yifei Song'\n 'Michal Marcinkiewicz' 'Sukru Burc Eryilmaz' 'Jun Yang'\n 'Michael Andersch']" ]
null
null
2404.11075
null
null
http://arxiv.org/pdf/2404.11075v1
2024-04-17T05:16:12Z
2024-04-17T05:16:12Z
EEG_GLT-Net: Optimising EEG Graphs for Real-time Motor Imagery Signals Classification
Brain-Computer Interfaces connect the brain to external control devices, necessitating the accurate translation of brain signals, such as those from electroencephalography (EEG), into executable commands. Graph Convolutional Networks (GCNs) have been increasingly applied for classifying EEG Motor Imagery (MI) signals, primarily because they incorporate the spatial relationships among EEG channels, resulting in improved accuracy over traditional convolutional methods. Recent advances by GCNs-Net in real-time EEG MI signal classification utilised the Pearson Correlation Coefficient (PCC) for constructing adjacency matrices, yielding significant results on the PhysioNet dataset. Our paper introduces the EEG Graph Lottery Ticket (EEG_GLT) algorithm, an innovative technique for constructing adjacency matrices for EEG channels. It does not require pre-existing knowledge of inter-channel relationships, and it can be tailored to suit both individual subjects and GCN model architectures. Our findings demonstrated that the PCC method outperformed the Geodesic approach by 9.65% in mean accuracy, while our EEG_GLT matrix consistently exceeded the performance of the PCC method by a mean accuracy of 13.39%. Also, we found that the construction of the adjacency matrix significantly influenced accuracy, to a greater extent than GCN model configurations: a basic GCN configuration utilising our EEG_GLT matrix exceeded the performance of even the most complex GCN setup with a PCC matrix in average accuracy. Our EEG_GLT method also reduced MACs by up to 97% compared to the PCC method, while maintaining or enhancing accuracy. In conclusion, the EEG_GLT algorithm marks a breakthrough in the development of optimal adjacency matrices, effectively boosting both computational accuracy and efficiency, making it well-suited for real-time classification of EEG MI signals that demand intensive computational resources.
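For reference, the PCC baseline adjacency that EEG_GLT is compared against can be built in a few lines; taking absolute values and thresholding are common conventions assumed here, not necessarily the exact choices of GCNs-Net.

```python
import numpy as np

def pcc_adjacency(eeg, threshold=0.0):
    """Adjacency matrix from Pearson correlation coefficients between EEG
    channels. eeg: array of shape (channels, time_samples)."""
    A = np.abs(np.corrcoef(eeg))   # |PCC| for every channel pair
    np.fill_diagonal(A, 0.0)       # drop self-loops
    A[A < threshold] = 0.0         # optional sparsification
    return A
```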
[ "['Htoo Wai Aung' 'Jiao Jiao Li' 'Yang An' 'Steven W. Su']" ]
null
null
2404.11093
null
null
http://arxiv.org/pdf/2404.11093v1
2024-04-17T06:17:08Z
2024-04-17T06:17:08Z
Neural Network Approach for Non-Markovian Dissipative Dynamics of Many-Body Open Quantum Systems
Simulating the dynamics of open quantum systems coupled to non-Markovian environments remains an outstanding challenge due to exponentially scaling computational costs. We present an artificial intelligence strategy to overcome this obstacle by integrating the neural quantum states approach into the dissipaton-embedded quantum master equation in second quantization (DQME-SQ). Our approach utilizes restricted Boltzmann machines (RBMs) to compactly represent the reduced density tensor, explicitly encoding the combined effects of system-environment correlations and non-Markovian memory. Applied to model systems exhibiting prominent effects of system-environment correlation and non-Markovian memory, our approach achieves comparable accuracy to conventional hierarchical equations of motion, while requiring significantly fewer dynamical variables. The novel RBM-based DQME-SQ approach paves the way for investigating non-Markovian open quantum dynamics in previously intractable regimes, with implications spanning various frontiers of modern science.
[ "['Long Cao' 'Liwei Ge' 'Daochi Zhang' 'Xiang Li' 'Yao Wang' 'Rui-Xue Xu'\n 'YiJing Yan' 'Xiao Zheng']" ]
null
null
2404.11100
null
null
http://arxiv.org/pdf/2404.11100v2
2024-07-09T12:09:32Z
2024-04-17T06:36:17Z
Synthesizing Realistic Data for Table Recognition
To overcome the limitations and challenges of current automatic table data annotation methods and random table data synthesis approaches, we propose a novel method for synthesizing annotation data specifically designed for table recognition. This method utilizes the structure and content of existing complex tables, facilitating the efficient creation of tables that closely replicate the authentic styles found in the target domain. By leveraging the actual structure and content of tables from Chinese financial announcements, we have developed the first extensive table annotation dataset in this domain. We used this dataset to train several recent deep learning-based end-to-end table recognition models. Additionally, we have established the inaugural benchmark for real-world complex tables in the Chinese financial announcement domain, using it to assess the performance of models trained on our synthetic data, thereby effectively validating our method's practicality and effectiveness. Furthermore, we applied our synthesis method to augment the FinTabNet dataset, extracted from English financial announcements, by increasing the proportion of tables with multiple spanning cells to introduce greater complexity. Our experiments show that models trained on this augmented dataset achieve comprehensive improvements in performance, especially in the recognition of tables with multiple spanning cells.
[ "['Qiyu Hou' 'Jun Wang' 'Meixuan Qiao' 'Lujun Tian']" ]
null
null
2404.11114
null
null
http://arxiv.org/pdf/2404.11114v1
2024-04-17T07:00:20Z
2024-04-17T07:00:20Z
Reuse out-of-year data to enhance land cover mapping via feature disentanglement and contrastive learning
Timely, up-to-date land use/land cover (LULC) maps play a pivotal role in supporting agricultural territory management and environmental monitoring, and in facilitating well-informed and sustainable decision-making. Typically, when creating a land cover (LC) map, precise ground truth data is collected through time-consuming and expensive field campaigns. This data is then utilized in conjunction with satellite image time series (SITS) through advanced machine learning algorithms to get the final map. Unfortunately, each time this process is repeated (e.g., annually over a region to estimate agricultural production or potential biodiversity loss), new ground truth data must be collected, leading to the complete disregard of previously gathered reference data despite the substantial financial and time investment they have required. How to extract value from historical data, from the same or similar study sites, to enhance the current LULC mapping process constitutes a significant challenge; addressing it would allow the financial and human-resource efforts invested in previous data campaigns to pay off again. Aiming to tackle this important challenge, we propose a deep learning framework based on recent advances in domain adaptation and generalization to combine remote sensing and reference data coming from two different domains (e.g., historical and freshly collected data) to improve the current LC mapping process. Our approach, namely REFeD (data Reuse with Effective Feature Disentanglement for land cover mapping), leverages a disentanglement strategy, based on contrastive learning, where invariant and per-domain specific features are derived to recover the intrinsic information related to the downstream LC mapping task and alleviate possible distribution shifts between domains. Additionally, REFeD is equipped with an effective supervision scheme where feature disentanglement is further enforced via multiple levels of supervision at different granularities. The experimental assessment over two study areas covering extremely diverse and contrasted landscapes, namely Koumbia (in the West Africa region, in Burkina Faso) and Centre Val de Loire (in central Europe, France), underlines the quality of our framework, and the obtained findings demonstrate that out-of-year information coming from the same (or a similar) study site, at different periods of time, can constitute a valuable additional source of information to enhance the LC mapping process.
[ "['Cassio F. Dantas' 'Raffaele Gaetano' 'Claudia Paris' 'Dino Ienco']" ]
null
null
2404.11116
null
null
http://arxiv.org/pdf/2404.11116v1
2024-04-17T07:01:29Z
2024-04-17T07:01:29Z
Music Enhancement with Deep Filters: A Technical Report for The ICASSP 2024 Cadenza Challenge
In this challenge, we disentangle the deep filters from the original DeepfilterNet and incorporate them into our Spec-UNet-based network to further improve a hybrid Demucs (hdemucs) based remixing pipeline. The motivation behind the use of the deep filter component lies at its potential in better handling temporal fine structures. We demonstrate an incremental improvement in both the Signal-to-Distortion Ratio (SDR) and the Hearing Aid Audio Quality Index (HAAQI) metrics when comparing the performance of hdemucs against different versions of our model.
[ "['Keren Shao' 'Ke Chen' 'Shlomo Dubnov']" ]
null
null
2404.11117
null
null
http://arxiv.org/pdf/2404.11117v1
2024-04-17T07:01:41Z
2024-04-17T07:01:41Z
Variational quantization for state space models
Forecasting tasks using large datasets gathering thousands of heterogeneous time series is a crucial statistical problem in numerous sectors. The main challenge is to model a rich variety of time series, leverage any available external signals and provide sharp predictions with statistical guarantees. In this work, we propose a new forecasting model that combines discrete state space hidden Markov models with recent neural network architectures and training procedures inspired by vector quantized variational autoencoders. We introduce a variational discrete posterior distribution of the latent states given the observations and a two-stage training procedure to alternatively train the parameters of the latent states and of the emission distributions. By learning a collection of emission laws and temporarily activating them depending on the hidden process dynamics, the proposed method makes it possible to explore large datasets and leverage available external signals. We assess the performance of the proposed method using several datasets and show that it outperforms other state-of-the-art solutions.
[ "['Etienne David' 'Jean Bellot' 'Sylvain Le Corff']" ]
null
null
2404.11130
null
null
http://arxiv.org/pdf/2404.11130v2
2024-04-29T07:19:39Z
2024-04-17T07:21:17Z
Learning epidemic trajectories through Kernel Operator Learning: from modelling to optimal control
From the moment infectious pathogens start spreading in a susceptible population, mathematical models can provide policy makers with reliable forecasts and scenario analyses, which can be concretely implemented or solely consulted. In these complex epidemiological scenarios, machine learning architectures can play an important role, since they directly reconstruct data-driven models, circumventing the specific modelling choices and parameter calibration typical of classical compartmental models. In this work, we discuss the efficacy of Kernel Operator Learning (KOL) to reconstruct population dynamics during epidemic outbreaks, where the transmission rate is ruled by an input strategy. In particular, we introduce two surrogate models, named KOL-m and KOL-$\partial$, which reconstruct the evolution of the epidemics in two different ways. Moreover, we evaluate the generalization performances of the two approaches with different kernels, including the Neural Tangent Kernels, and compare them with a classical neural network model learning method. Employing synthetic but semi-realistic data, we show how the two introduced approaches are suitable for realizing fast and robust forecasts and scenario analyses, and how these approaches are competitive for determining optimal intervention strategies with respect to specific performance measures.
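Kernel operator learning of the kind described can be sketched as kernel ridge regression from input strategies to trajectories; the RBF kernel and regularization constant below are illustrative assumptions, and the surrogate corresponds loosely to the KOL-m flavour (mapping strategies directly to trajectories) rather than the paper's exact construction.

```python
import numpy as np

def rbf(A, B, gamma=1.0):
    """RBF kernel between the row vectors of A and B (an assumed choice)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_kernel_operator(U, Y, lam=1e-6):
    """Kernel ridge regression from input strategies U (one per row) to
    epidemic trajectories Y (one per row): the learned operator acts as
    k(., U) (K + lam I)^{-1} Y on a new strategy."""
    K = rbf(U, U)
    alpha = np.linalg.solve(K + lam * np.eye(len(U)), Y)
    return lambda u_new: rbf(np.atleast_2d(u_new), U) @ alpha
```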
[ "['Giovanni Ziarelli' 'Nicola Parolini' 'Marco Verani']" ]
null
null
2404.11148
null
null
http://arxiv.org/pdf/2404.11148v1
2024-04-17T07:59:33Z
2024-04-17T07:59:33Z
Explainable Machine Learning System for Predicting Chronic Kidney Disease in High-Risk Cardiovascular Patients
As the global population ages, the incidence of Chronic Kidney Disease (CKD) is rising. CKD often remains asymptomatic until advanced stages, which significantly burdens both the healthcare system and patient quality of life. This research developed an explainable machine learning system for predicting CKD in patients with cardiovascular risks, utilizing medical history and laboratory data. The Random Forest model achieved the highest sensitivity of 88.2%. The study introduces a comprehensive explainability framework that extends beyond traditional feature importance methods, incorporating global and local interpretations, bias inspection, biomedical relevance, and safety assessments. Key predictive features identified in global interpretation were the use of diabetic and ACEI/ARB medications, and initial eGFR values. Local interpretation provided model insights through counterfactual explanations, which aligned with other system parts. After conducting a bias inspection, it was found that the initial eGFR values and CKD predictions exhibited some bias, but no significant gender bias was identified. The model's logic, extracted by scoped rules, was confirmed to align with existing medical literature. The safety assessment tested potentially dangerous cases and confirmed that the model behaved safely. This system enhances the explainability, reliability, and accountability of the model, promoting its potential integration into healthcare settings and compliance with upcoming regulatory standards, and showing promise for broader applications in healthcare machine learning.
[ "['Nantika Nguycharoen']" ]
null
null
2404.11161
null
null
http://arxiv.org/pdf/2404.11161v1
2024-04-17T08:21:02Z
2024-04-17T08:21:02Z
Pre-processing matters: A segment search method for WSI classification
Pre-processing for whole slide images (WSIs) can affect classification performance in both the training and inference stages. Our study analyzes the impact of pre-processing parameters on inference and training across single- and multiple-domain datasets. However, searching for an optimal parameter set is time-consuming. To overcome this, we propose a novel Similarity-based Simulated Annealing approach for fast parameter tuning to enhance inference performance on single-domain data. Our method demonstrates significant performance improvements, raising accuracy from 0.512 to 0.847 on single-domain data. We further extend our insight to training performance on multi-domain data by employing Bayesian optimization to search for optimal pre-processing parameters, resulting in a high AUC of 0.967. We highlight that better pre-processing for WSIs can contribute to further accuracy improvements in histopathology.
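Generic simulated annealing over pre-processing parameters looks like the sketch below; the similarity-based neighbour proposal that gives the paper's method its name is abstracted into the user-supplied `neighbors` function, and the acceptance rule and geometric cooling schedule are standard assumptions.

```python
import math
import random

def simulated_annealing(init, neighbors, score, t0=1.0, decay=0.95, steps=50):
    """Anneal over pre-processing parameter sets: `neighbors(p)` proposes a
    similar parameter set and `score(p)` is, e.g., validation accuracy."""
    cur, cur_s = init, score(init)
    best, best_s = cur, cur_s
    t = t0
    for _ in range(steps):
        cand = neighbors(cur)
        s = score(cand)
        # Accept improvements always, worsenings with Boltzmann probability.
        if s > cur_s or random.random() < math.exp((s - cur_s) / t):
            cur, cur_s = cand, s
            if s > best_s:
                best, best_s = cand, s
        t *= decay
    return best, best_s
```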
[ "['Jun Wang' 'Yufei Cui' 'Yu Mao' 'Nan Guan' 'Chun Jason Xue']" ]
null
null
2404.11163
null
null
http://arxiv.org/pdf/2404.11163v2
2024-04-18T05:50:53Z
2024-04-17T08:26:34Z
LongVQ: Long Sequence Modeling with Vector Quantization on Structured Memory
Transformer models have been successful in various sequence processing tasks, but the self-attention mechanism's computational cost limits its practicality for long sequences. Although there are existing attention variants that improve computational efficiency, they have a limited ability to abstract global information effectively based on their hand-crafted mixing strategies. On the other hand, state-space models (SSMs) are tailored for long sequences but cannot capture complicated local information. Therefore, the combination of the two as a unified token mixer is a trend in recent long-sequence models. However, linearized attention degrades performance significantly, even when equipped with SSMs. To address the issue, we propose a new method called LongVQ. LongVQ uses the vector quantization (VQ) technique to compress the global abstraction into a length-fixed codebook, enabling linear-time computation of the attention matrix. This technique effectively maintains dynamic global and local patterns, which helps to mitigate the lack of long-range dependency modeling. Our experiments on the Long Range Arena benchmark, autoregressive language modeling, and image and speech classification demonstrate the effectiveness of LongVQ. Our model achieves significant improvements over other sequence models, including variants of Transformers, Convolutions, and recent State Space Models.
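The core computational trick can be sketched in a few lines: queries attend to a length-fixed codebook of m entries instead of all n tokens, so the attention matrix costs O(nm) rather than O(n^2). How LongVQ learns and updates the codebook, and how it is combined with the SSM components, is omitted; shapes are illustrative.

```python
import torch
import torch.nn.functional as F

def codebook_attention(q, cb_k, cb_v):
    """Attention against a length-fixed codebook: q is (n, d), cb_k and
    cb_v are (m, d) with m << n, so the score matrix is (n, m) and the
    whole operation is linear in sequence length n."""
    scores = q @ cb_k.T / q.shape[-1] ** 0.5
    return F.softmax(scores, dim=-1) @ cb_v
```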
[ "['Zicheng Liu' 'Li Wang' 'Siyuan Li' 'Zedong Wang' 'Haitao Lin'\n 'Stan Z. Li']" ]