categories: string
doi: string
id: string
year: float64
venue: string
link: string
updated: string
published: string
title: string
abstract: string
authors: list
null
null
2403.07608
null
null
http://arxiv.org/pdf/2403.07608v1
2024-03-12T12:47:32Z
2024-03-12T12:47:32Z
Couler: Unified Machine Learning Workflow Optimization in Cloud
Machine Learning (ML) has become ubiquitous, fueling data-driven applications across various organizations. Contrary to the traditional perception of ML in research, ML workflows can be complex, resource-intensive, and time-consuming. Expanding an ML workflow to encompass a wider range of data infrastructure and data types may lead to larger workloads and increased deployment costs. Currently, numerous workflow engines are available (with over ten being widely recognized). This variety poses a challenge for end-users in terms of mastering different engine APIs. While efforts have primarily focused on optimizing ML Operations (MLOps) for a specific workflow engine, current methods largely overlook workflow optimization across different engines. In this work, we design and implement Couler, a system for unified ML workflow optimization in the cloud. Our main insight lies in the ability to generate an ML workflow from natural language (NL) descriptions. We integrate Large Language Models (LLMs) into workflow generation and provide a unified programming interface for various workflow engines. This approach alleviates the need to understand various workflow engines' APIs. Moreover, Couler enhances workflow computation efficiency by introducing automated caching at multiple stages, enabling large-workflow auto-parallelization and automatic hyperparameter tuning. These enhancements minimize redundant computational costs and improve fault tolerance during deep learning workflow training. Couler is extensively deployed in real-world production scenarios at Ant Group, handling approximately 22k workflows daily, and has successfully improved CPU/memory utilization by more than 15% and the workflow completion rate by around 17%.
[ "['Xiaoda Wang' 'Yuan Tang' 'Tengda Guo' 'Bo Sang' 'Jingji Wu' 'Jian Sha'\n 'Ke Zhang' 'Jiang Qian' 'Mingjie Tang']" ]
null
null
2403.07611
null
null
http://arxiv.org/pdf/2403.07611v1
2024-03-12T12:49:47Z
2024-03-12T12:49:47Z
Efficient Knowledge Deletion from Trained Models through Layer-wise Partial Machine Unlearning
Machine unlearning has garnered significant attention due to its ability to selectively erase knowledge obtained from specific training data samples in an already trained machine learning model. This capability enables data holders to adhere strictly to data protection regulations. However, existing unlearning techniques face practical constraints, often causing performance degradation, demanding brief fine-tuning after unlearning, and requiring significant storage. In response, this paper introduces a novel class of machine unlearning algorithms. The first method is partial amnesiac unlearning, an integration of layer-wise pruning with amnesiac unlearning. In this method, updates made to the model during training are pruned and stored, and subsequently used to forget specific data from the trained model. The second method assimilates layer-wise partial updates into label-flipping and optimization-based unlearning to mitigate the adverse effects of data deletion on model efficacy. Through a detailed experimental evaluation, we showcase the effectiveness of the proposed unlearning methods. Experimental results highlight that partial amnesiac unlearning not only preserves model efficacy but also eliminates the necessity for brief post-unlearning fine-tuning, unlike conventional amnesiac unlearning. Moreover, employing layer-wise partial updates in label-flipping and optimization-based unlearning techniques demonstrates superiority in preserving model efficacy compared to their naive counterparts.
[ "['Vinay Chakravarthi Gogineni' 'Esmaeil S. Nadimi']" ]
null
null
2403.07627
null
null
http://arxiv.org/abs/2403.07627v1
2024-03-12T13:09:15Z
2024-03-12T13:09:15Z
generAItor: Tree-in-the-Loop Text Generation for Language Model Explainability and Adaptation
Large language models (LLMs) are widely deployed in various downstream tasks, e.g., auto-completion, aided writing, or chat-based text generation. However, the considered output candidates of the underlying search algorithm are under-explored and under-explained. We tackle this shortcoming by proposing a tree-in-the-loop approach, where a visual representation of the beam search tree is the central component for analyzing, explaining, and adapting the generated outputs. To support these tasks, we present generAItor, a visual analytics technique, augmenting the central beam search tree with various task-specific widgets, providing targeted visualizations and interaction possibilities. Our approach allows interactions on multiple levels and offers an iterative pipeline that encompasses generating, exploring, and comparing output candidates, as well as fine-tuning the model based on adapted data. Our case study shows that our tool generates new insights in gender bias analysis beyond state-of-the-art template-based methods. Additionally, we demonstrate the applicability of our approach in a qualitative user study. Finally, we quantitatively evaluate the adaptability of the model to few samples, as occurring in text-generation use cases.
[ "['Thilo Spinner' 'Rebecca Kehlbeck' 'Rita Sevastjanova' 'Tobias Stähle'\n 'Daniel A. Keim' 'Oliver Deussen' 'Mennatallah El-Assady']" ]
null
null
2403.07632
null
null
http://arxiv.org/pdf/2403.07632v2
2024-05-10T15:19:22Z
2024-03-12T13:12:24Z
CardioGenAI: A Machine Learning-Based Framework for Re-Engineering Drugs for Reduced hERG Liability
The link between in vitro hERG ion channel inhibition and subsequent in vivo QT interval prolongation, a critical risk factor for the development of arrhythmias such as Torsade de Pointes, is so well established that in vitro hERG activity alone is often sufficient to end the development of an otherwise promising drug candidate. It is therefore of tremendous interest to develop advanced methods for identifying hERG-active compounds in the early stages of drug development, as well as for proposing redesigned compounds with reduced hERG liability and preserved on-target potency. In this work, we present CardioGenAI, a machine learning-based framework for re-engineering both developmental and commercially available drugs for reduced hERG activity while preserving their pharmacological activity. The framework incorporates novel state-of-the-art discriminative models for predicting hERG channel activity, as well as activity against the voltage-gated NaV1.5 and CaV1.2 channels due to their potential implications in modulating the arrhythmogenic potential induced by hERG channel blockade. We applied the complete framework to pimozide, an FDA-approved antipsychotic agent that demonstrates high affinity to the hERG channel, and generated 100 refined candidates. Remarkably, among the candidates is fluspirilene, a compound which is of the same class of drugs (diphenylmethanes) as pimozide and therefore has similar pharmacological activity, yet exhibits over 700-fold weaker binding to hERG. We envision that this method can effectively be applied to developmental compounds exhibiting hERG liabilities to provide a means of rescuing drug development programs that have stalled due to hERG-related safety concerns. Additionally, the discriminative models can also serve independently as effective components of a virtual screening pipeline. We have made all of our software open-source.
[ "['Gregory W. Kyro' 'Matthew T. Martin' 'Eric D. Watt' 'Victor S. Batista']" ]
null
null
2403.07648
null
null
http://arxiv.org/pdf/2403.07648v2
2024-04-03T20:07:33Z
2024-03-12T13:31:14Z
Characterization of Large Language Model Development in the Datacenter
Large Language Models (LLMs) have presented impressive performance across several transformative tasks. However, it is non-trivial to efficiently utilize large-scale cluster resources to develop LLMs, often riddled with numerous challenges such as frequent hardware failures, intricate parallelization strategies, and imbalanced resource utilization. In this paper, we present an in-depth characterization study of a six-month LLM development workload trace collected from our GPU datacenter Acme. Specifically, we investigate discrepancies between LLMs and prior task-specific Deep Learning (DL) workloads, explore resource utilization patterns, and identify the impact of various job failures. Our analysis summarizes hurdles we encountered and uncovers potential opportunities to optimize systems tailored for LLMs. Furthermore, we introduce our system efforts: (1) fault-tolerant pretraining, which enhances fault tolerance through LLM-involved failure diagnosis and automatic recovery. (2) decoupled scheduling for evaluation, which achieves timely performance feedback via trial decomposition and scheduling optimization.
[ "['Qinghao Hu' 'Zhisheng Ye' 'Zerui Wang' 'Guoteng Wang' 'Meng Zhang'\n 'Qiaoling Chen' 'Peng Sun' 'Dahua Lin' 'Xiaolin Wang' 'Yingwei Luo'\n 'Yonggang Wen' 'Tianwei Zhang']" ]
null
null
2403.07652
null
null
http://arxiv.org/pdf/2403.07652v1
2024-03-12T13:41:15Z
2024-03-12T13:41:15Z
Harder Tasks Need More Experts: Dynamic Routing in MoE Models
In this paper, we introduce a novel dynamic expert selection framework for Mixture of Experts (MoE) models, aiming to enhance computational efficiency and model performance by adjusting the number of activated experts based on input difficulty. Unlike traditional MoE approaches that rely on fixed Top-K routing, which activates a predetermined number of experts regardless of the input's complexity, our method dynamically selects experts based on the confidence level in expert selection for each input. This allows for a more efficient utilization of computational resources, activating more experts for complex tasks requiring advanced reasoning and fewer for simpler tasks. Through extensive evaluations, our dynamic routing method demonstrates substantial improvements over conventional Top-2 routing across various benchmarks, achieving an average improvement of 0.7% with less than 90% activated parameters. Further analysis shows our model dispatches more experts to tasks requiring complex reasoning skills, like BBH, confirming its ability to dynamically allocate computational resources in alignment with the input's complexity. Our findings also highlight a variation in the number of experts needed across different layers of the transformer model, offering insights into the potential for designing heterogeneous MoE frameworks. The code and models are available at https://github.com/ZhenweiAn/Dynamic_MoE.
[ "['Quzhe Huang' 'Zhenwei An' 'Nan Zhuang' 'Mingxu Tao' 'Chen Zhang'\n 'Yang Jin' 'Kun Xu' 'Kun Xu' 'Liwei Chen' 'Songfang Huang' 'Yansong Feng']" ]
null
null
2403.07657
null
null
http://arxiv.org/pdf/2403.07657v1
2024-03-12T13:47:50Z
2024-03-12T13:47:50Z
Scalable Spatiotemporal Prediction with Bayesian Neural Fields
Spatiotemporal datasets, which consist of spatially-referenced time series, are ubiquitous in many scientific and business-intelligence applications, such as air pollution monitoring, disease tracking, and cloud-demand forecasting. As modern datasets continue to increase in size and complexity, there is a growing need for new statistical methods that are flexible enough to capture complex spatiotemporal dynamics and scalable enough to handle large prediction problems. This work presents the Bayesian Neural Field (BayesNF), a domain-general statistical model for inferring rich probability distributions over a spatiotemporal domain, which can be used for data-analysis tasks including forecasting, interpolation, and variography. BayesNF integrates a novel deep neural network architecture for high-capacity function estimation with hierarchical Bayesian inference for robust uncertainty quantification. By defining the prior through a sequence of smooth differentiable transforms, posterior inference is conducted on large-scale data using variationally learned surrogates trained via stochastic gradient descent. We evaluate BayesNF against prominent statistical and machine-learning baselines, showing considerable improvements on diverse prediction problems from climate and public health datasets that contain tens to hundreds of thousands of measurements. The paper is accompanied by an open-source software package (https://github.com/google/bayesnf) that is easy to use and compatible with modern GPU and TPU accelerators on the JAX machine learning platform.
[ "['Feras Saad' 'Jacob Burnim' 'Colin Carroll' 'Brian Patton' 'Urs Köster'\n 'Rif A. Saurous' 'Matthew Hoffman']" ]
null
null
2403.07669
null
null
http://arxiv.org/pdf/2403.07669v1
2024-03-12T14:00:50Z
2024-03-12T14:00:50Z
Machine Learning for Soccer Match Result Prediction
Machine learning has become a common approach to predicting the outcomes of soccer matches, and the body of literature in this domain has grown substantially in the past decade and a half. This chapter discusses available datasets, the types of models and features, and ways of evaluating model performance in this application domain. The aim of this chapter is to give a broad overview of the current state and potential future developments in machine learning for soccer match results prediction, as a resource for those interested in conducting future studies in the area. Our main findings are that while gradient-boosted tree models such as CatBoost, applied to soccer-specific ratings such as pi-ratings, are currently the best-performing models on datasets containing only goals as the match features, there needs to be a more thorough comparison of the performance of deep learning models and Random Forest on a range of datasets with different types of features. Furthermore, new rating systems using both player- and team-level information and incorporating additional information from, e.g., spatiotemporal tracking and event data, could be investigated further. Finally, the interpretability of match result prediction models needs to be enhanced for them to be more useful for team management.
[ "['Rory Bunker' 'Calvin Yeung' 'Keisuke Fujii']" ]
null
null
2403.07688
null
null
http://arxiv.org/pdf/2403.07688v1
2024-03-12T14:28:06Z
2024-03-12T14:28:06Z
Maxwell's Demon at Work: Efficient Pruning by Leveraging Saturation of Neurons
When training deep neural networks, the phenomenon of $\textit{dying neurons}$ (units that become inactive or saturated, outputting zero during training) has traditionally been viewed as undesirable, linked with optimization challenges, and contributing to plasticity loss in continual learning scenarios. In this paper, we reassess this phenomenon, focusing on sparsity and pruning. By systematically exploring the impact of various hyperparameter configurations on dying neurons, we unveil their potential to facilitate simple yet effective structured pruning algorithms. We introduce $\textit{Demon Pruning}$ (DemP), a method that controls the proliferation of dead neurons, dynamically leading to network sparsity. Achieved through a combination of noise injection on active units and a one-cycled schedule regularization strategy, DemP stands out for its simplicity and broad applicability. Experiments on CIFAR10 and ImageNet datasets demonstrate that DemP surpasses existing structured pruning techniques, showcasing superior accuracy-sparsity tradeoffs and training speedups. These findings suggest a novel perspective on dying neurons as a valuable resource for efficient model compression and optimization.
[ "['Simon Dufort-Labbé' \"Pierluca D'Oro\" 'Evgenii Nikishin' 'Razvan Pascanu'\n 'Pierre-Luc Bacon' 'Aristide Baratin']" ]
null
null
2403.07704
null
null
http://arxiv.org/abs/2403.07704v1
2024-03-12T14:49:19Z
2024-03-12T14:49:19Z
Symmetric Q-learning: Reducing Skewness of Bellman Error in Online Reinforcement Learning
In deep reinforcement learning, estimating the value function to evaluate the quality of states and actions is essential. The value function is often trained using the least squares method, which implicitly assumes a Gaussian error distribution. However, a recent study suggested that the error distribution for training the value function is often skewed because of the properties of the Bellman operator, violating the implicit assumption of a normal error distribution in the least squares method. To address this, we propose a method called Symmetric Q-learning, in which synthetic noise generated from a zero-mean distribution is added to the target values to generate a Gaussian error distribution. We evaluated the proposed method on continuous control benchmark tasks in MuJoCo. It improved the sample efficiency of a state-of-the-art reinforcement learning method by reducing the skewness of the error distribution.
[ "['Motoki Omura' 'Takayuki Osa' 'Yusuke Mukuta' 'Tatsuya Harada']" ]
null
null
2403.07706
null
null
http://arxiv.org/pdf/2403.07706v2
2024-03-15T15:46:31Z
2024-03-12T14:51:23Z
Fast and Simple Explainability for Point Cloud Networks
We propose a fast and simple explainable AI (XAI) method for point cloud data. It computes pointwise importance with respect to a trained network's downstream task. This allows better understanding of the network properties, which is imperative for safety-critical applications. In addition to debugging and visualization, our low computational complexity facilitates online feedback to the network at inference. This can be used to reduce uncertainty and to increase robustness. In this work, we introduce \emph{Feature Based Interpretability} (FBI), where we compute the features' norm, per point, before the bottleneck. We analyze the use of gradients and post- and pre-bottleneck strategies, showing pre-bottleneck is preferred, in terms of smoothness and ranking. We obtain at least a three-orders-of-magnitude speedup compared to current XAI methods, making the approach scalable to big point clouds or large-scale architectures. Our approach achieves SOTA results, in terms of classification explainability. We demonstrate how the proposed measure is helpful in analyzing and characterizing various aspects of 3D learning, such as rotation invariance, robustness to out-of-distribution (OOD) outliers or domain shift, and dataset bias.
[ "['Meir Yossef Levi' 'Guy Gilboa']" ]
null
null
2403.07718
null
null
http://arxiv.org/pdf/2403.07718v3
2024-06-14T19:22:26Z
2024-03-12T14:58:45Z
WorkArena: How Capable Are Web Agents at Solving Common Knowledge Work Tasks?
We study the use of large language model-based agents for interacting with software via web browsers. Unlike prior work, we focus on measuring the agents' ability to perform tasks that span the typical daily work of knowledge workers utilizing enterprise software systems. To this end, we propose WorkArena, a remote-hosted benchmark of 33 tasks based on the widely-used ServiceNow platform. We also introduce BrowserGym, an environment for the design and evaluation of such agents, offering a rich set of actions as well as multimodal observations. Our empirical evaluation reveals that while current agents show promise on WorkArena, there remains a considerable gap towards achieving full task automation. Notably, our analysis uncovers a significant performance disparity between open and closed-source LLMs, highlighting a critical area for future exploration and development in the field.
[ "['Alexandre Drouin' 'Maxime Gasse' 'Massimo Caccia' 'Issam H. Laradji'\n 'Manuel Del Verme' 'Tom Marty' 'Léo Boisvert' 'Megh Thakkar'\n 'Quentin Cappart' 'David Vazquez' 'Nicolas Chapados' 'Alexandre Lacoste']" ]
null
null
2403.07723
null
null
http://arxiv.org/pdf/2403.07723v3
2024-06-06T01:52:22Z
2024-03-12T15:01:17Z
On the Last-Iterate Convergence of Shuffling Gradient Methods
Shuffling gradient methods are widely used in modern machine learning tasks and include three popular implementations: Random Reshuffle (RR), Shuffle Once (SO), and Incremental Gradient (IG). Compared to the empirical success, the theoretical guarantee of shuffling gradient methods was not well-understood for a long time. Until recently, the convergence rates had just been established for the average iterate for convex functions and the last iterate for strongly convex problems (using squared distance as the metric). However, when using the function value gap as the convergence criterion, existing theories cannot interpret the good performance of the last iterate in different settings (e.g., constrained optimization). To bridge this gap between practice and theory, we prove the first last-iterate convergence rates for shuffling gradient methods with respect to the objective value even without strong convexity. Our new results either (nearly) match the existing last-iterate lower bounds or are as fast as the previous best upper bounds for the average iterate.
[ "['Zijian Liu' 'Zhengyuan Zhou']" ]
null
null
2403.07724
null
null
http://arxiv.org/pdf/2403.07724v1
2024-03-12T15:01:27Z
2024-03-12T15:01:27Z
Balancing Fairness and Accuracy in Data-Restricted Binary Classification
Applications that deal with sensitive information may have restrictions placed on the data available to a machine learning (ML) classifier. For example, in some applications, a classifier may not have direct access to sensitive attributes, affecting its ability to produce accurate and fair decisions. This paper proposes a framework that models the trade-off between accuracy and fairness under four practical scenarios that dictate the type of data available for analysis. Prior works examine this trade-off by analyzing the outputs of a scoring function that has been trained to implicitly learn the underlying distribution of the feature vector, class label, and sensitive attribute of a dataset. In contrast, our framework directly analyzes the behavior of the optimal Bayesian classifier on this underlying distribution by constructing a discrete approximation of it from the dataset itself. This approach enables us to formulate multiple convex optimization problems, which allow us to answer the question: How is the accuracy of a Bayesian classifier affected in different data-restricting scenarios when constrained to be fair? Analysis is performed on a set of fairness definitions that include group and individual fairness. Experiments on three datasets demonstrate the utility of the proposed framework as a tool for quantifying the trade-offs among different fairness notions and their distributional dependencies.
[ "['Zachary McBride Lazri' 'Danial Dervovic' 'Antigoni Polychroniadou'\n 'Ivan Brugere' 'Dana Dachman-Soled' 'Min Wu']" ]
null
null
2403.07728
null
null
http://arxiv.org/pdf/2403.07728v2
2024-03-28T14:20:13Z
2024-03-12T15:07:20Z
CAP: A General Algorithm for Online Selective Conformal Prediction with FCR Control
We study the problem of post-selection predictive inference in an online fashion. To avoid devoting resources to unimportant units, a preliminary selection of the current individual before reporting its prediction interval is common and meaningful in online predictive tasks. Since the online selection causes a temporal multiplicity in the selected prediction intervals, it is important to control the real-time false coverage-statement rate (FCR) which measures the overall miscoverage level. We develop a general framework named CAP (Calibration after Adaptive Pick) that performs an adaptive pick rule on historical data to construct a calibration set if the current individual is selected and then outputs a conformal prediction interval for the unobserved label. We provide tractable procedures for constructing the calibration set for popular online selection rules. We prove that CAP can achieve an exact selection-conditional coverage guarantee in the finite-sample and distribution-free regimes. To account for the distribution shift in online data, we also embed CAP into some recent dynamic conformal prediction algorithms and show that the proposed method can deliver long-run FCR control. Numerical results on both synthetic and real data corroborate that CAP can effectively control FCR around the target level and yields narrower prediction intervals than existing baselines across various settings.
[ "['Yajie Bao' 'Yuyang Huo' 'Haojie Ren' 'Changliang Zou']" ]
null
null
2403.07735
null
null
http://arxiv.org/pdf/2403.07735v1
2024-03-12T15:13:21Z
2024-03-12T15:13:21Z
The Minimax Rate of HSIC Estimation for Translation-Invariant Kernels
Kernel techniques are among the most influential approaches in data science and statistics. Under mild conditions, the reproducing kernel Hilbert space associated to a kernel is capable of encoding the independence of $M \ge 2$ random variables. Probably the most widespread independence measure relying on kernels is the so-called Hilbert-Schmidt independence criterion (HSIC; also referred to as distance covariance in the statistics literature). Despite various existing HSIC estimators designed since its introduction close to two decades ago, the fundamental question of the rate at which HSIC can be estimated is still open. In this work, we prove that the minimax optimal rate of HSIC estimation on $\mathbb{R}^d$ for Borel measures containing the Gaussians with continuous bounded translation-invariant characteristic kernels is $\mathcal{O}\!\left(n^{-1/2}\right)$. Specifically, our result implies the optimality in the minimax sense of many of the most frequently used estimators (including the U-statistic, the V-statistic, and the Nyström-based one) on $\mathbb{R}^d$.
[ "['Florian Kalinke' 'Zoltan Szabo']" ]
null
null
2403.07743
null
null
http://arxiv.org/pdf/2403.07743v3
2024-05-23T09:10:49Z
2024-03-12T15:22:05Z
Equipping Computational Pathology Systems with Artifact Processing Pipelines: A Showcase for Computation and Performance Trade-offs
Histopathology is a gold standard for cancer diagnosis under microscopic examination. However, histological tissue processing procedures result in artifacts, which are ultimately transferred to the digitized version of glass slides, known as whole slide images (WSIs). Artifacts are diagnostically irrelevant areas and may result in incorrect deep learning (DL) predictions. Therefore, detecting and excluding artifacts in the computational pathology (CPATH) system is essential for reliable automated diagnosis. In this paper, we propose a mixture of experts (MoE) scheme for detecting five notable artifacts, including damaged tissue, blur, folded tissue, air bubbles, and histologically irrelevant blood, from WSIs. First, we train independent binary DL models as experts to capture particular artifact morphology. Then, we ensemble their predictions using a fusion mechanism. We apply probabilistic thresholding over the final probability distribution to improve the sensitivity of the MoE. We developed DL pipelines using two MoEs and two multiclass models of state-of-the-art deep convolutional neural networks (DCNNs) and vision transformers (ViTs). DCNNs-based MoE and ViTs-based MoE schemes outperformed simpler multiclass models and were tested on datasets from different hospitals and cancer types, where the MoE using DCNNs yielded the best results. The proposed MoE yields 86.15% F1 and 97.93% sensitivity scores on unseen data, requiring less computational cost for inference than the MoE using ViTs. This best performance of MoEs comes with relatively higher computational trade-offs than multiclass models. The proposed artifact detection pipeline will not only ensure reliable CPATH predictions but may also provide quality control.
[ "['Neel Kanwal' 'Farbod Khoraminia' 'Umay Kiraz' 'Andres Mosquera-Zamudio'\n 'Carlos Monteagudo' 'Emiel A. M. Janssen' 'Tahlita C. M. Zuiverloon'\n 'Chunmig Rong' 'Kjersti Engan']" ]
null
null
2403.07745
null
null
http://arxiv.org/pdf/2403.07745v1
2024-03-12T15:28:21Z
2024-03-12T15:28:21Z
Probabilistic Easy Variational Causal Effect
Let $X$ and $Z$ be random vectors, and $Y=g(X,Z)$. In this paper, on the one hand, for the case that $X$ and $Z$ are continuous, by using the ideas from the total variation and the flux of $g$, we develop a point of view in causal inference capable of dealing with a broad domain of causal problems. Indeed, we focus on a function, called Probabilistic Easy Variational Causal Effect (PEACE), which can measure the direct causal effect of $X$ on $Y$ with respect to continuously and interventionally changing the values of $X$ while keeping the value of $Z$ constant. PEACE is a function of $d \ge 0$, which is a degree managing the strengths of probability density values $f(x|z)$. On the other hand, we generalize the above idea for the discrete case and show its compatibility with the continuous case. Further, we investigate some properties of PEACE using measure-theoretical concepts. Furthermore, we provide some identifiability criteria and several examples showing the generic capability of PEACE. We note that PEACE can deal with the causal problems for which micro-level or just macro-level changes in the value of the input variables are important. Finally, PEACE is stable under small changes in $\partial g_{\mathrm{in}}/\partial x$ and the joint distribution of $X$ and $Z$, where $g_{\mathrm{in}}$ is obtained from $g$ by removing all functional relationships defining $X$ and $Z$.
[ "['Usef Faghihi' 'Amir Saki']" ]
null
null
2403.07767
null
null
http://arxiv.org/pdf/2403.07767v1
2024-03-12T15:54:32Z
2024-03-12T15:54:32Z
Beyond the Labels: Unveiling Text-Dependency in Paralinguistic Speech Recognition Datasets
Paralinguistic traits like cognitive load and emotion are increasingly recognized as pivotal areas in speech recognition research, often examined through specialized datasets like CLSE and IEMOCAP. However, the integrity of these datasets is seldom scrutinized for text-dependency. This paper critically evaluates the prevalent assumption that machine learning models trained on such datasets genuinely learn to identify paralinguistic traits, rather than merely capturing lexical features. By examining the lexical overlap in these datasets and testing the performance of machine learning models, we expose significant text-dependency in trait-labeling. Our results suggest that some machine learning models, especially large pre-trained models like HuBERT, might inadvertently focus on lexical characteristics rather than the intended paralinguistic features. The study serves as a call to action for the research community to reevaluate the reliability of existing datasets and methodologies, ensuring that machine learning models genuinely learn what they are designed to recognize.
[ "['Jan Pešán' 'Santosh Kesiraju' 'Lukáš Burget' \"Jan ''Honza'' Černocký\"]" ]
null
null
2403.07780
null
null
http://arxiv.org/pdf/2403.07780v1
2024-03-12T16:08:47Z
2024-03-12T16:08:47Z
FairRR: Pre-Processing for Group Fairness through Randomized Response
The increasing usage of machine learning models in consequential decision-making processes has spurred research into the fairness of these systems. While significant work has been done to study group fairness in the in-processing and post-processing setting, there has been little that theoretically connects these results to the pre-processing domain. This paper proposes that achieving group fairness in downstream models can be formulated as finding the optimal design matrix in which to modify a response variable in a Randomized Response framework. We show that measures of group fairness can be directly controlled for with optimal model utility, proposing a pre-processing algorithm called FairRR that yields excellent downstream model utility and fairness.
[ "['Xianli Zeng' 'Joshua Ward' 'Guang Cheng']" ]
null
null
2403.07788
null
null
http://arxiv.org/pdf/2403.07788v2
2024-07-04T04:35:04Z
2024-03-12T16:23:49Z
DexCap: Scalable and Portable Mocap Data Collection System for Dexterous Manipulation
Imitation learning from human hand motion data presents a promising avenue for imbuing robots with human-like dexterity in real-world manipulation tasks. Despite this potential, substantial challenges persist, particularly with the portability of existing hand motion capture (mocap) systems and the complexity of translating mocap data into effective robotic policies. To tackle these issues, we introduce DexCap, a portable hand motion capture system, alongside DexIL, a novel imitation algorithm for training dexterous robot skills directly from human hand mocap data. DexCap offers precise, occlusion-resistant tracking of wrist and finger motions based on SLAM and electromagnetic field sensing, together with 3D observations of the environment. Utilizing this rich dataset, DexIL employs inverse kinematics and point cloud-based imitation learning to seamlessly replicate human actions with robot hands. Beyond direct learning from human motion, DexCap also offers an optional human-in-the-loop correction mechanism during policy rollouts to refine and further improve task performance. Through extensive evaluation across six challenging dexterous manipulation tasks, our approach not only demonstrates superior performance but also showcases the system's capability to effectively learn from in-the-wild mocap data, paving the way for future data collection methods in the pursuit of human-level robot dexterity. More details can be found at https://dex-cap.github.io
[ "['Chen Wang' 'Haochen Shi' 'Weizhuo Wang' 'Ruohan Zhang' 'Li Fei-Fei'\n 'C. Karen Liu']" ]
null
null
2403.07797
null
null
http://arxiv.org/pdf/2403.07797v1
2024-03-12T16:34:07Z
2024-03-12T16:34:07Z
Joint Selection: Adaptively Incorporating Public Information for Private Synthetic Data
Mechanisms for generating differentially private synthetic data based on marginals and graphical models have been successful in a wide range of settings. However, one limitation of these methods is their inability to incorporate public data. Initializing a data generating model by pre-training on public data has been shown to improve the quality of synthetic data, but this technique is not applicable when the model structure is not determined a priori. We develop the mechanism jam-pgm, which expands the adaptive measurements framework to jointly select between measuring public data and private data. This technique allows for public data to be included in a graphical-model-based mechanism. We show that jam-pgm is able to outperform both publicly assisted and non-publicly assisted synthetic data generation mechanisms even when the public data distribution is biased.
[ "['Miguel Fuentes' 'Brett Mullins' 'Ryan McKenna' 'Gerome Miklau'\n 'Daniel Sheldon']" ]
null
null
2403.07802
null
null
http://arxiv.org/pdf/2403.07802v1
2024-03-12T16:41:31Z
2024-03-12T16:41:31Z
Boosting keyword spotting through on-device learnable user speech characteristics
Keyword spotting systems for always-on TinyML-constrained applications require on-site tuning to boost the accuracy of offline trained classifiers when deployed in unseen inference conditions. Adapting to the speech peculiarities of target users requires many in-domain samples, often unavailable in real-world scenarios. Furthermore, current on-device learning techniques rely on computationally intensive and memory-hungry backbone update schemes, unfit for always-on, battery-powered devices. In this work, we propose a novel on-device learning architecture, composed of a pretrained backbone and a user-aware embedding learning the user's speech characteristics. The so-generated features are fused and used to classify the input utterance. For domain shifts generated by unseen speakers, we measure error rate reductions of up to 19%, from 30.1% to 24.3%, on the 35-class problem of the Google Speech Commands dataset, through the inexpensive update of the user projections. We moreover demonstrate the few-shot learning capabilities of our proposed architecture in sample- and class-scarce learning conditions. With 23.7k parameters and 1 MFLOP per epoch required for on-device training, our system is feasible for TinyML applications aimed at battery-powered microcontrollers.
[ "['Cristian Cioflan' 'Lukas Cavigelli' 'Luca Benini']" ]
null
null
2403.07809
null
null
http://arxiv.org/pdf/2403.07809v1
2024-03-12T16:46:54Z
2024-03-12T16:46:54Z
pyvene: A Library for Understanding and Improving PyTorch Models via Interventions
Interventions on model-internal states are fundamental operations in many areas of AI, including model editing, steering, robustness, and interpretability. To facilitate such research, we introduce \textbf{pyvene}, an open-source Python library that supports customizable interventions on a range of different PyTorch modules. \textbf{pyvene} supports complex intervention schemes with an intuitive configuration format, and its interventions can be static or include trainable parameters. We show how \textbf{pyvene} provides a unified and extensible framework for performing interventions on neural models and sharing the intervened-upon models with others. We illustrate the power of the library via interpretability analyses using causal abstraction and knowledge localization. We publish our library through the Python Package Index (PyPI) and provide code, documentation, and tutorials at https://github.com/stanfordnlp/pyvene.
[ "['Zhengxuan Wu' 'Atticus Geiger' 'Aryaman Arora' 'Jing Huang' 'Zheng Wang'\n 'Noah D. Goodman' 'Christopher D. Manning' 'Christopher Potts']" ]
null
null
2403.07815
null
null
http://arxiv.org/pdf/2403.07815v2
2024-05-02T16:55:49Z
2024-03-12T16:53:54Z
Chronos: Learning the Language of Time Series
We introduce Chronos, a simple yet effective framework for pretrained probabilistic time series models. Chronos tokenizes time series values using scaling and quantization into a fixed vocabulary and trains existing transformer-based language model architectures on these tokenized time series via the cross-entropy loss. We pretrained Chronos models based on the T5 family (ranging from 20M to 710M parameters) on a large collection of publicly available datasets, complemented by a synthetic dataset that we generated via Gaussian processes to improve generalization. In a comprehensive benchmark consisting of 42 datasets, and comprising both classical local models and deep learning methods, we show that Chronos models: (a) significantly outperform other methods on datasets that were part of the training corpus; and (b) have comparable and occasionally superior zero-shot performance on new datasets, relative to methods that were trained specifically on them. Our results demonstrate that Chronos models can leverage time series data from diverse domains to improve zero-shot accuracy on unseen forecasting tasks, positioning pretrained models as a viable tool to greatly simplify forecasting pipelines.
[ "['Abdul Fatir Ansari' 'Lorenzo Stella' 'Caner Turkmen' 'Xiyuan Zhang'\n 'Pedro Mercado' 'Huibin Shen' 'Oleksandr Shchur'\n 'Syama Sundar Rangapuram' 'Sebastian Pineda Arango' 'Shubham Kapoor'\n 'Jasper Zschiegner' 'Danielle C. Maddix' 'Hao Wang' 'Michael W. Mahoney'\n 'Kari Torkkola' 'Andrew Gordon Wilson' 'Michael Bohlke-Schneider'\n 'Yuyang Wang']" ]
null
null
2403.07818
null
null
http://arxiv.org/pdf/2403.07818v1
2024-03-12T16:57:56Z
2024-03-12T16:57:56Z
Label Dropout: Improved Deep Learning Echocardiography Segmentation Using Multiple Datasets With Domain Shift and Partial Labelling
Echocardiography (echo) is the first imaging modality used when assessing cardiac function. The measurement of functional biomarkers from echo relies upon the segmentation of cardiac structures and deep learning models have been proposed to automate the segmentation process. However, in order to translate these tools to widespread clinical use it is important that the segmentation models are robust to a wide variety of images (e.g. acquired from different scanners, by operators with different levels of expertise etc.). To achieve this level of robustness it is necessary that the models are trained with multiple diverse datasets. A significant challenge faced when training with multiple diverse datasets is the variation in label presence, i.e. the combined data are often partially-labelled. Adaptations of the cross entropy loss function have been proposed to deal with partially labelled data. In this paper we show that training naively with such a loss function and multiple diverse datasets can lead to a form of shortcut learning, where the model associates label presence with domain characteristics, leading to a drop in performance. To address this problem, we propose a novel label dropout scheme to break the link between domain characteristics and the presence or absence of labels. We demonstrate that label dropout improves echo segmentation Dice score by 62% and 25% on two cardiac structures when training using multiple diverse partially labelled datasets.
[ "['Iman Islam' 'Esther Puyol-Antón' 'Bram Ruijsink' 'Andrew J. Reader'\n 'Andrew P. King']" ]
null
null
2403.07822
null
null
http://arxiv.org/pdf/2403.07822v1
2024-03-12T17:03:07Z
2024-03-12T17:03:07Z
Fusing Climate Data Products using a Spatially Varying Autoencoder
Autoencoders are powerful machine learning models used to compress information from multiple data sources. However, autoencoders, like all artificial neural networks, are often unidentifiable and uninterpretable. This research focuses on creating an identifiable and interpretable autoencoder that can be used to meld and combine climate data products. The proposed autoencoder utilizes a Bayesian statistical framework, allowing for probabilistic interpretations while also varying spatially to capture useful spatial patterns across the various data products. Constraints are placed on the autoencoder as it learns patterns in the data, creating an interpretable consensus that includes the important features from each input. We demonstrate the utility of the autoencoder by combining information from multiple precipitation products in High Mountain Asia.
[ "['Jacob A. Johnson' 'Matthew J. Heaton' 'William F. Christensen'\n 'Lynsie R. Warr' 'Summer B. Rupper']" ]
null
null
2403.07842
null
null
http://arxiv.org/pdf/2403.07842v1
2024-03-12T17:27:49Z
2024-03-12T17:27:49Z
Quantifying and Mitigating Privacy Risks for Tabular Generative Models
Synthetic data from generative models has emerged as a privacy-preserving data-sharing solution. Such a synthetic dataset should resemble the original data without revealing identifiable private information. The backbone technology of tabular synthesizers is rooted in image generative models, ranging from Generative Adversarial Networks (GANs) to recent diffusion models. Recent prior work sheds light on the utility-privacy tradeoff on tabular data, revealing and quantifying privacy risks on synthetic data. We first conduct an exhaustive empirical analysis, highlighting the utility-privacy tradeoff of five state-of-the-art tabular synthesizers, against eight privacy attacks, with a special focus on membership inference attacks. Motivated by the observation of high data quality but also high privacy risk in tabular diffusion, we propose DP-TLDM, Differentially Private Tabular Latent Diffusion Model, which is composed of an autoencoder network to encode the tabular data and a latent diffusion model to synthesize the latent tables. Following the emerging f-DP framework, we apply DP-SGD to train the autoencoder in combination with batch clipping and use the separation value as the privacy metric to better capture the privacy gain from DP algorithms. Our empirical evaluation demonstrates that DP-TLDM is capable of achieving a meaningful theoretical privacy guarantee while also significantly enhancing the utility of synthetic data. Specifically, compared to other DP-protected tabular generative models, DP-TLDM improves the synthetic quality by an average of 35% in data resemblance, 15% in the utility for downstream tasks, and 50% in data discriminability, all while preserving a comparable level of privacy risk.
[ "['Chaoyi Zhu' 'Jiayi Tang' 'Hans Brouwer' 'Juan F. Pérez'\n 'Marten van Dijk' 'Lydia Y. Chen']" ]
null
null
2403.07843
null
null
http://arxiv.org/abs/2403.07843v1
2024-03-12T17:32:52Z
2024-03-12T17:32:52Z
A Machine learning and Empirical Bayesian Approach for Predictive Buying in B2B E-commerce
In the context of developing nations like India, traditional business-to-business (B2B) commerce heavily relies on the establishment of robust relationships, trust, and credit arrangements between buyers and sellers. Consequently, ecommerce enterprises frequently employ telecallers to cultivate buyer relationships, streamline order placement procedures, and promote special promotions. Established in 2016 with a vision to revolutionize trade in India through technology, Udaan is the country's largest business-to-business ecommerce platform, operating across diverse product categories including lifestyle, electronics, and home. The accurate anticipation of buyer order placement behavior emerges as a pivotal factor for attaining sustainable growth, heightening competitiveness, and optimizing the efficiency of these telecallers. To address this challenge, we have employed an ensemble approach comprising XGBoost and a modified version of the Poisson-Gamma model to predict customer order patterns with precision. This paper provides an in-depth exploration of the strategic fusion of machine learning and an empirical Bayesian approach, bolstered by the judicious selection of pertinent features. This innovative approach has yielded a remarkable threefold increase in customer order rates, showcasing its potential for transformative impact in the ecommerce industry.
[ "['Tuhin Subhra De' 'Pranjal Singh' 'Alok Patel']" ]
null
null
2403.07849
null
null
http://arxiv.org/pdf/2403.07849v1
2024-03-12T17:41:27Z
2024-03-12T17:41:27Z
Iterative Graph Neural Network Enhancement via Frequent Subgraph Mining of Explanations
We formulate an XAI-based model improvement approach for Graph Neural Networks (GNNs) for node classification, called Explanation Enhanced Graph Learning (EEGL). The goal is to improve predictive performance of GNN using explanations. EEGL is an iterative self-improving algorithm, which starts with a learned "vanilla" GNN, and repeatedly uses frequent subgraph mining to find relevant patterns in explanation subgraphs. These patterns are then filtered further to obtain application-dependent features corresponding to the presence of certain subgraphs in the node neighborhoods. Giving an application-dependent algorithm for such a subgraph-based extension of the Weisfeiler-Leman (1-WL) algorithm has previously been posed as an open problem. We present experimental evidence, with synthetic and real-world data, which show that EEGL outperforms related approaches in predictive performance and that it has a node-distinguishing power beyond that of vanilla GNNs. We also analyze EEGL's training dynamics.
[ "['Harish G. Naik' 'Jan Polster' 'Raj Shekhar' 'Tamás Horváth'\n 'György Turán']" ]
null
null
2403.07851
null
null
http://arxiv.org/pdf/2403.07851v1
2024-03-12T17:43:20Z
2024-03-12T17:43:20Z
12 mJ per Class On-Device Online Few-Shot Class-Incremental Learning
Few-Shot Class-Incremental Learning (FSCIL) enables machine learning systems to expand their inference capabilities to new classes using only a few labeled examples, without forgetting the previously learned classes. Classical backpropagation-based learning and its variants are often unsuitable for battery-powered, memory-constrained systems at the extreme edge. In this work, we introduce Online Few-Shot Class-Incremental Learning (O-FSCIL), based on a lightweight model consisting of a pretrained and metalearned feature extractor and an expandable explicit memory storing the class prototypes. The architecture is pretrained with a novel feature orthogonality regularization and metalearned with a multi-margin loss. For learning a new class, our approach extends the explicit memory with novel class prototypes, while the remaining architecture is kept frozen. This allows learning previously unseen classes based on only a few examples with one single pass (hence online). O-FSCIL obtains an average accuracy of 68.62% on the FSCIL CIFAR100 benchmark, achieving state-of-the-art results. Tailored for ultra-low-power platforms, we implement O-FSCIL on the 60 mW GAP9 microcontroller, demonstrating online learning capabilities within just 12 mJ per new class.
[ "['Yoga Esa Wibowo' 'Cristian Cioflan' 'Thorir Mar Ingolfsson'\n 'Michael Hersche' 'Leo Zhao' 'Abbas Rahimi' 'Luca Benini']" ]
null
null
2403.07854
null
null
http://arxiv.org/pdf/2403.07854v1
2024-03-12T17:44:45Z
2024-03-12T17:44:45Z
Distilling the Knowledge in Data Pruning
With the increasing size of datasets used for training neural networks, data pruning becomes an attractive field of research. However, most current data pruning algorithms are limited in their ability to preserve accuracy compared to models trained on the full data, especially in high pruning regimes. In this paper we explore the application of data pruning while incorporating knowledge distillation (KD) when training on a pruned subset. That is, rather than relying solely on ground-truth labels, we also use the soft predictions from a teacher network pre-trained on the complete data. By integrating KD into training, we demonstrate significant improvement across datasets, pruning methods, and on all pruning fractions. We first establish a theoretical motivation for employing self-distillation to improve training on pruned data. Then, we empirically make a compelling and highly practical observation: using KD, simple random pruning is comparable or superior to sophisticated pruning methods across all pruning regimes. On ImageNet for example, we achieve superior accuracy despite training on a random subset of only 50% of the data. Additionally, we demonstrate a crucial connection between the pruning factor and the optimal knowledge distillation weight. This helps mitigate the impact of samples with noisy labels and low-quality images retained by typical pruning algorithms. Finally, we make an intriguing observation: when using lower pruning fractions, larger teachers lead to accuracy degradation, while surprisingly, employing teachers with a smaller capacity than the student's may improve results. Our code will be made available.
[ "['Emanuel Ben-Baruch' 'Adam Botach' 'Igor Kviatkovsky' 'Manoj Aggarwal'\n 'Gérard Medioni']" ]
null
null
2403.07856
null
null
http://arxiv.org/pdf/2403.07856v1
2024-03-12T17:46:38Z
2024-03-12T17:46:38Z
Quantum Support Vector Machine for Prostate Cancer Detection: A Performance Analysis
This study addresses the urgent need for improved prostate cancer detection methods by harnessing the power of advanced technological solutions. We introduce the application of Quantum Support Vector Machine (QSVM) to this critical healthcare challenge, showcasing an enhancement in diagnostic performance over the classical Support Vector Machine (SVM) approach. Our study not only outlines the remarkable improvements in diagnostic performance made by QSVM over the classic SVM technique, but it delves into the advancements brought about by the quantum feature map architecture, which has been carefully identified and evaluated, ensuring it aligns seamlessly with the unique characteristics of our prostate cancer dataset. This architecture succeeded in creating a distinct feature space, enabling the detection of complex, non-linear patterns in the data. The findings reveal not only a comparable accuracy with classical SVM ($92\%$) but also a $7.14\%$ increase in sensitivity and a notably high F1-Score ($93.33\%$). This study's integration of quantum computing into medical diagnostics marks a pivotal step forward in cancer detection, offering promising implications for the future of healthcare technology.
[ "['Walid El Maouaki' 'Taoufik Said' 'Mohamed Bennai']" ]
null
null
2403.07857
null
null
http://arxiv.org/pdf/2403.07857v1
2024-03-12T17:48:08Z
2024-03-12T17:48:08Z
Fairness Feedback Loops: Training on Synthetic Data Amplifies Bias
Model-induced distribution shifts (MIDS) occur as previous model outputs pollute new model training sets over generations of models. This is known as model collapse in the case of generative models, and performative prediction or unfairness feedback loops for supervised models. When a model induces a distribution shift, it also encodes its mistakes, biases, and unfairnesses into the ground truth of its data ecosystem. We introduce a framework that allows us to track multiple MIDS over many generations, finding that they can lead to loss in performance, fairness, and minoritized group representation, even in initially unbiased datasets. Despite these negative consequences, we identify how models might be used for positive, intentional, interventions in their data ecosystems, providing redress for historical discrimination through a framework called algorithmic reparation (AR). We simulate AR interventions by curating representative training batches for stochastic gradient descent to demonstrate how AR can improve upon the unfairnesses of models and data ecosystems subject to other MIDS. Our work takes an important step towards identifying, mitigating, and taking accountability for the unfair feedback loops enabled by the idea that ML systems are inherently neutral and objective.
[ "['Sierra Wyllie' 'Ilia Shumailov' 'Nicolas Papernot']" ]
null
null
2403.07865
null
null
http://arxiv.org/pdf/2403.07865v4
2024-06-09T15:04:34Z
2024-03-12T17:55:38Z
CodeAttack: Revealing Safety Generalization Challenges of Large Language Models via Code Completion
The rapid advancement of Large Language Models (LLMs) has brought about remarkable generative capabilities but also raised concerns about their potential misuse. While strategies like supervised fine-tuning and reinforcement learning from human feedback have enhanced their safety, these methods primarily focus on natural languages, which may not generalize to other domains. This paper introduces CodeAttack, a framework that transforms natural language inputs into code inputs, presenting a novel environment for testing the safety generalization of LLMs. Our comprehensive studies on state-of-the-art LLMs including GPT-4, Claude-2, and Llama-2 series reveal a new and universal safety vulnerability of these models against code input: CodeAttack bypasses the safety guardrails of all models more than 80% of the time. We find that a larger distribution gap between CodeAttack and natural language leads to weaker safety generalization, such as encoding natural language input with data structures. Furthermore, we give our hypotheses about the success of CodeAttack: the misaligned bias acquired by LLMs during code training, prioritizing code completion over avoiding the potential safety risk. Finally, we analyze potential mitigation measures. These findings highlight new safety risks in the code domain and the need for more robust safety alignment algorithms to match the code capabilities of LLMs.
[ "['Qibing Ren' 'Chang Gao' 'Jing Shao' 'Junchi Yan' 'Xin Tan' 'Wai Lam'\n 'Lizhuang Ma']" ]
null
null
2403.07869
null
null
http://arxiv.org/pdf/2403.07869v2
2024-03-21T19:57:46Z
2024-03-12T17:58:01Z
TeleMoMa: A Modular and Versatile Teleoperation System for Mobile Manipulation
A critical bottleneck limiting imitation learning in robotics is the lack of data. This problem is more severe in mobile manipulation, where collecting demonstrations is harder than in stationary manipulation due to the lack of available and easy-to-use teleoperation interfaces. In this work, we demonstrate TeleMoMa, a general and modular interface for whole-body teleoperation of mobile manipulators. TeleMoMa unifies multiple human interfaces including RGB and depth cameras, virtual reality controllers, keyboard, joysticks, etc., and any combination thereof. In its most accessible version, TeleMoMa works using vision alone (e.g., an RGB-D camera), lowering the entry bar for humans to provide mobile manipulation demonstrations. We demonstrate the versatility of TeleMoMa by teleoperating several existing mobile manipulators - PAL Tiago++, Toyota HSR, and Fetch - in simulation and the real world. We demonstrate the quality of the demonstrations collected with TeleMoMa by training imitation learning policies for mobile manipulation tasks involving synchronized whole-body motion. Finally, we also show that TeleMoMa's teleoperation channel enables teleoperation on site, looking at the robot, or remotely, sending commands and observations through a computer network, and we perform user studies to evaluate how easy it is for novice users to learn to collect demonstrations with different combinations of human interfaces enabled by our system. We hope TeleMoMa becomes a helpful tool for the community enabling researchers to collect whole-body mobile manipulation demonstrations. For more information and video results, see https://robin-lab.cs.utexas.edu/telemoma-web.
[ "['Shivin Dass' 'Wensi Ai' 'Yuqian Jiang' 'Samik Singh' 'Jiaheng Hu'\n 'Ruohan Zhang' 'Peter Stone' 'Ben Abbatematteo' 'Roberto Martín-Martín']" ]
null
null
2403.07890
null
null
http://arxiv.org/pdf/2403.07890v2
2024-04-23T05:40:37Z
2024-02-02T20:40:27Z
$\widetilde{O}(T^{-1})$ Convergence to (Coarse) Correlated Equilibria in Full-Information General-Sum Markov Games
No-regret learning has a long history of being closely connected to game theory. Recent works have devised uncoupled no-regret learning dynamics that, when adopted by all the players in normal-form games, converge to various equilibrium solutions at a near-optimal rate of $\widetilde{O}(T^{-1})$, a significant improvement over the $O(1/\sqrt{T})$ rate of classic no-regret learners. However, analogous convergence results are scarce in Markov games, a more generic setting that lays the foundation for multi-agent reinforcement learning. In this work, we close this gap by showing that the optimistic-follow-the-regularized-leader (OFTRL) algorithm, together with appropriate value update procedures, can find $\widetilde{O}(T^{-1})$-approximate (coarse) correlated equilibria in full-information general-sum Markov games within $T$ iterations. Numerical results are also included to corroborate our theoretical findings.
[ "['Weichao Mao' 'Haoran Qiu' 'Chen Wang' 'Hubertus Franke'\n 'Zbigniew Kalbarczyk' 'Tamer Başar']" ]
null
null
2403.07891
null
null
http://arxiv.org/abs/2403.07891v1
2024-02-03T16:05:27Z
2024-02-03T16:05:27Z
Digital Video Manipulation Detection Technique Based on Compression Algorithms
Digital images and videos play a very important role in everyday life. Nowadays, people have access to affordable mobile devices equipped with advanced integrated cameras and powerful image processing applications. Technological development facilitates not only the generation of multimedia content, but also its intentional modification, whether for recreational or malicious purposes. This is where forensic techniques to detect manipulation of images and videos become essential. This paper proposes a forensic technique based on analysing the compression algorithms used by H.264 coding. Recompression is detected using information from macroblocks, a characteristic of the H.264/MPEG-4 standard, and motion vectors. A Support Vector Machine is used to create a model that accurately detects whether a video has been recompressed.
[ "['Edgar Gonzalez Fernandez' 'Ana Lucila Sandoval Orozco'\n 'Luis Javier Garcia Villalba']" ]
null
null
2403.07892
null
null
http://arxiv.org/pdf/2403.07892v1
2024-02-03T20:36:48Z
2024-02-03T20:36:48Z
Change Point Detection with Copula Entropy based Two-Sample Test
Change point detection is a typical task that aims to find changes in time series and can be tackled with a two-sample test. Copula Entropy is a mathematical concept for measuring statistical independence, and a two-sample test based on it was introduced recently. In this paper we propose a nonparametric multivariate method for multiple change point detection with the copula entropy-based two-sample test. Single change point detection is first formulated as a group of two-sample tests applied at every point of the time series, with the change point taken to be the point with the maximum test statistic. Multiple change point detection is then achieved by combining the single change point detection method with a binary segmentation strategy. We verified the effectiveness of our method and compared it with other similar methods on simulated univariate and multivariate data and on the Nile data.
[ "['Jian Ma']" ]
null
null
2403.07901
null
null
http://arxiv.org/pdf/2403.07901v1
2024-02-26T02:19:01Z
2024-02-26T02:19:01Z
MIP: CLIP-based Image Reconstruction from PEFT Gradients
Contrastive Language-Image Pre-training (CLIP), as an effective pre-trained multimodal neural network, has been widely used in distributed machine learning tasks, especially Federated Learning (FL). Typically, CLIP-based FL adopts Parameter-Efficient Fine-Tuning (PEFT) for model training, which fine-tunes only adapter parameters or soft prompts rather than the full parameters. Although PEFT differs from the traditional training mode, in this paper we theoretically show that the gradients of adapters or soft prompts can still be used to perform image reconstruction attacks. Based on our theoretical analysis, we propose Multm-In-Parvo (MIP), a proprietary reconstruction attack method targeting CLIP-based distributed machine learning architectures. Specifically, MIP can reconstruct CLIP training images from the gradients of soft prompts or an adapter. In addition, MIP includes a label prediction strategy to accelerate convergence and an inverse gradient estimation mechanism to avoid the vanishing gradient problem on the text encoder. Experimental results show that MIP can effectively reconstruct training images from the gradients of soft prompts or adapters of CLIP models.
[ "['Peiheng Zhou' 'Ming Hu' 'Xiaofei Xie' 'Yihao Huang' 'Kangjie Chen'\n 'Mingsong Chen']" ]
null
null
2403.07902
null
null
http://arxiv.org/pdf/2403.07902v1
2024-02-26T05:21:21Z
2024-02-26T05:21:21Z
DecompDiff: Diffusion Models with Decomposed Priors for Structure-Based Drug Design
Designing 3D ligands within a target binding site is a fundamental task in drug discovery. Existing structure-based drug design methods treat all ligand atoms equally, which ignores the different roles of atoms in the ligand and can be less efficient for exploring the large drug-like molecule space. In this paper, inspired by the convention in pharmaceutical practice, we decompose the ligand molecule into two parts, namely arms and scaffold, and propose a new diffusion model, DecompDiff, with decomposed priors over arms and scaffold. In order to facilitate the decomposed generation and improve the properties of the generated molecules, we incorporate both bond diffusion in the model and additional validity guidance in the sampling phase. Extensive experiments on CrossDocked2020 show that our approach achieves state-of-the-art performance in generating high-affinity molecules while maintaining proper molecular properties and conformational stability, with up to -8.39 Avg. Vina Dock score and 24.5 Success Rate. The code is provided at https://github.com/bytedance/DecompDiff.
[ "['Jiaqi Guan' 'Xiangxin Zhou' 'Yuwei Yang' 'Yu Bao' 'Jian Peng'\n 'Jianzhu Ma' 'Qiang Liu' 'Liang Wang' 'Quanquan Gu']" ]
null
null
2403.07903
null
null
http://arxiv.org/pdf/2403.07903v1
2024-02-26T11:04:04Z
2024-02-26T11:04:04Z
Multiple Access in the Era of Distributed Computing and Edge Intelligence
This paper focuses on the latest research and innovations in fundamental next-generation multiple access (NGMA) techniques and the coexistence with other key technologies for the sixth generation (6G) of wireless networks. In more detail, we first examine multi-access edge computing (MEC), which is critical to meeting the growing demand for data processing and computational capacity at the edge of the network, as well as network slicing. We then explore over-the-air (OTA) computing, which is considered to be an approach that provides fast and efficient computation of various functions. We also explore semantic communications, identified as an effective way to improve communication systems by focusing on the exchange of meaningful information, thus minimizing unnecessary data and increasing efficiency. The interrelationship between machine learning (ML) and multiple access technologies is also reviewed, with an emphasis on federated learning, federated distillation, split learning, reinforcement learning, and the development of ML-based multiple access protocols. Finally, the concept of digital twinning and its role in network management is discussed, highlighting how virtual replication of physical networks can lead to improvements in network efficiency and reliability.
[ "['Nikos G. Evgenidis' 'Nikos A. Mitsiou' 'Vasiliki I. Koutsioumpa'\n 'Sotiris A. Tegos' 'Panagiotis D. Diamantoulakis'\n 'George K. Karagiannidis']" ]
null
null
2403.07904
null
null
http://arxiv.org/pdf/2403.07904v2
2024-05-17T23:17:16Z
2024-02-26T11:32:42Z
Addressing the Regulatory Gap: Moving Towards an EU AI Audit Ecosystem Beyond the AIA by Including Civil Society
The European legislature has proposed the Digital Services Act (DSA) and Artificial Intelligence Act (AIA) to regulate platforms and Artificial Intelligence (AI) products. We review to what extent third-party audits are part of both laws and to what extent access to models and data is provided. By considering the value of third-party audits and third-party data access in an audit ecosystem, we identify a regulatory gap in that the Artificial Intelligence Act does not provide access to data for researchers and civil society. Our contributions to the literature include: (1) Defining an AI audit ecosystem that incorporates compliance and oversight. (2) Highlighting a regulatory gap within the DSA and AIA regulatory framework, preventing the establishment of an AI audit ecosystem. (3) Emphasizing that third-party audits by research and civil society must be part of that ecosystem and demand that the AIA include data and model access for certain AI products. We call for the DSA to provide NGOs and investigative journalists with data access to platforms by delegated acts and for adaptations and amendments of the AIA to provide third-party audits and data and model access at least for high-risk systems to close the regulatory gap. Regulations modeled after European Union AI regulations should enable data access and third-party audits, fostering an AI audit ecosystem that promotes compliance and oversight mechanisms.
[ "['David Hartmann' 'José Renato Laranjeira de Pereira'\n 'Chiara Streitbörger' 'Bettina Berendt']" ]
null
null
2403.07905
null
null
http://arxiv.org/pdf/2403.07905v1
2024-02-26T13:12:44Z
2024-02-26T13:12:44Z
Enhancing Kubernetes Automated Scheduling with Deep Learning and Reinforcement Techniques for Large-Scale Cloud Computing Optimization
With the continuous expansion of the scale of cloud computing applications, artificial intelligence technologies such as deep learning and reinforcement learning have gradually become key tools for automated task scheduling in large-scale cloud computing systems. To address the complexity and real-time requirements of task scheduling in large-scale cloud computing systems, this paper proposes an automated task scheduling scheme based on deep learning and reinforcement learning. First, deep learning is used to monitor and predict the parameters of the cloud computing system in real time to obtain system state information. Then, combined with a reinforcement learning algorithm, the task scheduling strategy is dynamically adjusted according to the real-time system state and task characteristics to achieve optimal utilization of system resources and maximize task execution efficiency. Experiments verify the effectiveness and performance advantages of the proposed scheme and demonstrate the potential of deep learning and reinforcement learning for automated task scheduling in large-scale cloud computing systems.
[ "['Zheng Xu' 'Yulu Gong' 'Yanlin Zhou' 'Qiaozhi Bao' 'Wenpin Qian']" ]
null
null
2403.07916
null
null
http://arxiv.org/pdf/2403.07916v1
2024-02-27T14:08:31Z
2024-02-27T14:08:31Z
Advancing Investment Frontiers: Industry-grade Deep Reinforcement Learning for Portfolio Optimization
This research paper delves into the application of Deep Reinforcement Learning (DRL) in asset-class agnostic portfolio optimization, integrating industry-grade methodologies with quantitative finance. At the heart of this integration is our robust framework that not only merges advanced DRL algorithms with modern computational techniques but also emphasizes stringent statistical analysis, software engineering and regulatory compliance. To the best of our knowledge, this is the first study integrating financial Reinforcement Learning with sim-to-real methodologies from robotics and mathematical physics, thus enriching our frameworks and arguments with this unique perspective. Our research culminates with the introduction of AlphaOptimizerNet, a proprietary Reinforcement Learning agent (and corresponding library). Developed from a synthesis of state-of-the-art (SOTA) literature and our unique interdisciplinary methodology, AlphaOptimizerNet demonstrates encouraging risk-return optimization across various asset classes with realistic constraints. These preliminary results underscore the practical efficacy of our frameworks. As the finance sector increasingly gravitates towards advanced algorithmic solutions, our study bridges theoretical advancements with real-world applicability, offering a template for ensuring safety and robust standards in this technologically driven future.
[ "['Philip Ndikum' 'Serge Ndikum']" ]
null
null
2403.07917
null
null
http://arxiv.org/pdf/2403.07917v2
2024-03-14T13:00:37Z
2024-02-27T15:59:15Z
A Neural-Evolutionary Algorithm for Autonomous Transit Network Design
Planning a public transit network is a challenging optimization problem, but essential in order to realize the benefits of autonomous buses. We propose a novel algorithm for planning networks of routes for autonomous buses. We first train a graph neural net model as a policy for constructing route networks, and then use the policy as one of several mutation operators in an evolutionary algorithm. We evaluate this algorithm on a standard set of benchmarks for transit network design, and find that it outperforms the learned policy alone by up to 20% and a plain evolutionary algorithm approach by up to 53% on realistic benchmark instances.
[ "['Andrew Holliday' 'Gregory Dudek']" ]
null
null
2403.07918
null
null
http://arxiv.org/pdf/2403.07918v1
2024-02-27T16:49:53Z
2024-02-27T16:49:53Z
On the Societal Impact of Open Foundation Models
Foundation models are powerful technologies: how they are released publicly directly shapes their societal impact. In this position paper, we focus on open foundation models, defined here as those with broadly available model weights (e.g. Llama 2, Stable Diffusion XL). We identify five distinctive properties (e.g. greater customizability, poor monitoring) of open foundation models that lead to both their benefits and risks. Open foundation models present significant benefits, with some caveats, that span innovation, competition, the distribution of decision-making power, and transparency. To understand their risks of misuse, we design a risk assessment framework for analyzing their marginal risk. Across several misuse vectors (e.g. cyberattacks, bioweapons), we find that current research is insufficient to effectively characterize the marginal risk of open foundation models relative to pre-existing technologies. The framework helps explain why the marginal risk is low in some cases, clarifies disagreements about misuse risks by revealing that past work has focused on different subsets of the framework with different assumptions, and articulates a way forward for more constructive debate. Overall, our work helps support a more grounded assessment of the societal impact of open foundation models by outlining what research is needed to empirically validate their theoretical benefits and risks.
[ "['Sayash Kapoor' 'Rishi Bommasani' 'Kevin Klyman' 'Shayne Longpre'\n 'Ashwin Ramaswami' 'Peter Cihon' 'Aspen Hopkins' 'Kevin Bankston'\n 'Stella Biderman' 'Miranda Bogen' 'Rumman Chowdhury' 'Alex Engler'\n 'Peter Henderson' 'Yacine Jernite' 'Seth Lazar' 'Stefano Maffulli'\n 'Alondra Nelson' 'Joelle Pineau' 'Aviya Skowron' 'Dawn Song'\n 'Victor Storchan' 'Daniel Zhang' 'Daniel E. Ho' 'Percy Liang'\n 'Arvind Narayanan']" ]
null
null
2403.07920
null
null
http://arxiv.org/pdf/2403.07920v1
2024-02-28T01:29:55Z
2024-02-28T01:29:55Z
ProtLLM: An Interleaved Protein-Language LLM with Protein-as-Word Pre-Training
We propose ProtLLM, a versatile cross-modal large language model (LLM) for both protein-centric and protein-language tasks. ProtLLM features a unique dynamic protein mounting mechanism, enabling it to handle complex inputs where the natural language text is interspersed with an arbitrary number of proteins. Besides, we propose the protein-as-word language modeling approach to train ProtLLM. By developing a specialized protein vocabulary, we equip the model with the capability to predict not just natural language but also proteins from a vast pool of candidates. Additionally, we construct a large-scale interleaved protein-text dataset, named InterPT, for pre-training. This dataset comprehensively encompasses both (1) structured data sources like protein annotations and (2) unstructured data sources like biological research papers, thereby endowing ProtLLM with crucial knowledge for understanding proteins. We evaluate ProtLLM on classic supervised protein-centric tasks and explore its novel protein-language applications. Experimental results demonstrate that ProtLLM not only achieves superior performance against protein-specialized baselines on protein-centric tasks but also induces zero-shot and in-context learning capabilities on protein-language tasks.
[ "['Le Zhuo' 'Zewen Chi' 'Minghao Xu' 'Heyan Huang' 'Heqi Zheng'\n 'Conghui He' 'Xian-Ling Mao' 'Wentao Zhang']" ]
null
null
2403.07921
null
null
http://arxiv.org/pdf/2403.07921v1
2024-02-28T03:20:27Z
2024-02-28T03:20:27Z
Merino: Entropy-driven Design for Generative Language Models on IoT Devices
Generative Large Language Models (LLMs) stand as a revolutionary advancement in the modern era of artificial intelligence (AI). However, directly deploying LLMs on resource-constrained hardware, such as Internet-of-Things (IoT) devices, is difficult due to their high computational cost. In this paper, we propose a novel information-entropy framework for designing mobile-friendly generative language models. Our key design paradigm is to maximize the entropy of transformer decoders within the given computational budgets. The whole design procedure involves solving a mathematical programming (MP) problem, which can be done on the CPU within minutes, making it nearly zero-cost. We evaluate our designed models, termed MeRino, across nine NLP downstream tasks, showing their competitive performance against state-of-the-art autoregressive transformer models in the mobile setting. Notably, MeRino achieves similar or better zero-shot performance compared to the 350M-parameter OPT while being 4.9x faster on NVIDIA Jetson Nano with a 5.5x reduction in model size. Code will be made available soon.
[ "['Youpeng Zhao' 'Ming Lin' 'Huadong Tang' 'Qiang Wu' 'Jun Wang']" ]
null
null
2403.07923
null
null
http://arxiv.org/pdf/2403.07923v1
2024-02-28T12:01:06Z
2024-02-28T12:01:06Z
The Fusion of Deep Reinforcement Learning and Edge Computing for Real-time Monitoring and Control Optimization in IoT Environments
In response to the demand for real-time performance and control quality in industrial Internet of Things (IoT) environments, this paper proposes an optimization control system based on deep reinforcement learning and edge computing. The system leverages cloud-edge collaboration, deploys lightweight policy networks at the edge, predicts system states, and outputs controls at a high frequency, enabling monitoring and optimization of industrial objectives. Additionally, a dynamic resource allocation mechanism is designed to ensure rational scheduling of edge computing resources, achieving global optimization. Results demonstrate that this approach reduces cloud-edge communication latency, accelerates response to abnormal situations, reduces system failure rates, extends average equipment operating time, and saves costs for manual maintenance and replacement. This ensures real-time and stable control.
[ "['Jingyu Xu' 'Weixiang Wan' 'Linying Pan' 'Wenjian Sun' 'Yuxiang Liu']" ]
null
null
2403.07925
null
null
http://arxiv.org/abs/2403.07925v2
2024-03-15T00:21:25Z
2024-02-29T17:11:08Z
Physics-informed generative model for drug-like molecule conformers
We present a diffusion-based, generative model for conformer generation. Our model is focused on the reproduction of bonded structure and is constructed from the associated terms traditionally found in classical force fields to ensure a physically relevant representation. Techniques in deep learning are used to infer atom typing and geometric parameters from a training set. Conformer sampling is achieved by taking advantage of recent advancements in diffusion-based generation. By training on large, synthetic data sets of diverse, drug-like molecules optimized with the semiempirical GFN2-xTB method, high accuracy is achieved for bonded parameters, exceeding that of conventional, knowledge-based methods. Results are also compared to experimental structures from the Protein Databank (PDB) and Cambridge Structural Database (CSD).
[ "['David C. Williams' 'Neil Inala']" ]
null
null
2403.07926
null
null
http://arxiv.org/pdf/2403.07926v1
2024-02-29T18:30:13Z
2024-02-29T18:30:13Z
Value Prediction for Spatiotemporal Gait Data Using Deep Learning
Human gait has been commonly used for the diagnosis and evaluation of medical conditions and for monitoring the progress during treatment and rehabilitation. The use of wearable sensors that capture pressure or motion has yielded techniques that analyze the gait data to aid recovery, identify activity performed, or identify individuals. Deep learning, usually employing classification, has been successfully utilized in a variety of applications such as computer vision, biomedical imaging analysis, and natural language processing. We expand the application of deep learning to value prediction of time-series of spatiotemporal gait data. Moreover, we explore several deep learning architectures (Recurrent Neural Networks (RNN) and RNN combined with Convolutional Neural Networks (CNN)) to make short- and long-distance predictions using two different experimental setups. Our results show that short-distance prediction has an RMSE as low as 0.060675, and long-distance prediction RMSE as low as 0.106365. Additionally, the results show that the proposed deep learning models are capable of predicting the entire trial when trained and validated using the trials from the same participant. The proposed, customized models, used with value prediction open possibilities for additional applications, such as fall prediction, in-home progress monitoring, aiding of exoskeleton movement, and authentication.
[ "['Ryan Cavanagh' 'Jelena Trajkovic' 'Wenlu Zhang' 'I-Hung Khoo'\n 'Vennila Krishnan']" ]
null
null
2403.07927
null
null
http://arxiv.org/pdf/2403.07927v1
2024-02-29T19:40:32Z
2024-02-29T19:40:32Z
Intelligent Monitoring Framework for Cloud Services: A Data-Driven Approach
Cloud service owners need to continuously monitor their services to ensure high availability and reliability. Gaps in monitoring can lead to delays in incident detection and significant negative customer impact. The current process of monitor creation is ad hoc and reactive in nature: developers create monitors using their tribal knowledge and, primarily, a trial-and-error process. As a result, monitors often have incomplete coverage, which leads to production issues, or redundancy, which results in noise and wasted effort. In this work, we address this issue by proposing an intelligent monitoring framework that recommends monitors for cloud services based on their service properties. We start by mining the attributes of 30,000+ monitors from 791 production services at Microsoft and derive a structured ontology for monitors. We focus on two crucial dimensions: what to monitor (resources) and which metrics to monitor. We conduct an extensive empirical study and derive key insights on the major classes of monitors employed by cloud services at Microsoft, their associated dimensions, and the interrelationship between service properties and this ontology. Using these insights, we propose a deep learning based framework that recommends monitors based on the service properties. Finally, we conduct a user study with engineers from Microsoft which demonstrates the usefulness of the proposed framework. The proposed framework, along with the ontology-driven projections, succeeded in creating production-quality recommendations for the majority of resource classes. This was also validated by the users from the study, who rated the framework's usefulness as 4.27 out of 5.
[ "['Pooja Srinivas' 'Fiza Husain' 'Anjaly Parayil' 'Ayush Choure'\n 'Chetan Bansal' 'Saravan Rajmohan']" ]
null
null
2403.07929
null
null
http://arxiv.org/pdf/2403.07929v1
2024-03-01T22:56:19Z
2024-03-01T22:56:19Z
Sketching the Heat Kernel: Using Gaussian Processes to Embed Data
This paper introduces a novel, non-deterministic method for embedding data in low-dimensional Euclidean space based on computing realizations of a Gaussian process depending on the geometry of the data. This type of embedding first appeared in (Adler et al., 2018) as a theoretical model for a generic manifold in high dimensions. In particular, we take the covariance function of the Gaussian process to be the heat kernel, and computing the embedding amounts to sketching a matrix representing the heat kernel. The Karhunen-Loève expansion reveals that the straight-line distances in the embedding approximate the diffusion distance in a probabilistic sense, avoiding the need for sharp cutoffs and maintaining some of the smaller-scale structure. Our method demonstrates further advantage in its robustness to outliers. We justify the approach with both theory and experiments.
[ "['Anna C. Gilbert' \"Kevin O'Neill\"]" ]
null
null
2403.07933
null
null
http://arxiv.org/pdf/2403.07933v1
2024-03-04T12:48:25Z
2024-03-04T12:48:25Z
Corruption-Robust Offline Two-Player Zero-Sum Markov Games
We study data corruption robustness in offline two-player zero-sum Markov games. Given a dataset of realized trajectories of two players, an adversary is allowed to modify an $\epsilon$-fraction of it. The learner's goal is to identify an approximate Nash Equilibrium policy pair from the corrupted data. We consider this problem in linear Markov games under different degrees of data coverage and corruption. We start by providing an information-theoretic lower bound on the suboptimality gap of any learner. Next, we propose robust versions of the Pessimistic Minimax Value Iteration algorithm, both under coverage on the corrupted data and under coverage only on the clean data, and show that they achieve (near)-optimal suboptimality gap bounds with respect to $\epsilon$. We note that we are the first to provide such a characterization of the problem of learning approximate Nash Equilibrium policies in offline two-player zero-sum Markov games under data corruption.
[ "['Andi Nika' 'Debmalya Mandal' 'Adish Singla' 'Goran Radanović']" ]
null
null
2403.07937
null
null
http://arxiv.org/pdf/2403.07937v1
2024-03-08T08:10:29Z
2024-03-08T08:10:29Z
Speech Robust Bench: A Robustness Benchmark For Speech Recognition
As Automatic Speech Recognition (ASR) models become ever more pervasive, it is important to ensure that they make reliable predictions under corruptions present in the physical and digital world. We propose Speech Robust Bench (SRB), a comprehensive benchmark for evaluating the robustness of ASR models to diverse corruptions. SRB is composed of 69 input perturbations which are intended to simulate various corruptions that ASR models may encounter in the physical and digital world. We use SRB to evaluate the robustness of several state-of-the-art ASR models and observe that model size and certain modeling choices, such as discrete representations and self-training, appear to be conducive to robustness. We extend this analysis to measure the robustness of ASR models on data from various demographic subgroups, namely English and Spanish speakers, and males and females, and observe noticeable disparities in the models' robustness across subgroups. We believe that SRB will facilitate future research towards robust ASR models by making it easier to conduct comprehensive and comparable robustness evaluations.
[ "['Muhammad A. Shah' 'David Solans Noguero' 'Mikko A. Heikkila'\n 'Nicolas Kourtellis']" ]
null
null
2403.07938
null
null
http://arxiv.org/pdf/2403.07938v1
2024-03-08T22:27:38Z
2024-03-08T22:27:38Z
Text-to-Audio Generation Synchronized with Videos
In recent times, the focus on text-to-audio (TTA) generation has intensified, as researchers strive to synthesize audio from textual descriptions. However, most existing methods, though leveraging latent diffusion models to learn the correlation between audio and text embeddings, fall short when it comes to maintaining a seamless synchronization between the produced audio and its video. This often results in discernible audio-visual mismatches. To bridge this gap, we introduce a groundbreaking benchmark for Text-to-Audio generation that aligns with Videos, named T2AV-Bench. This benchmark distinguishes itself with three novel metrics dedicated to evaluating visual alignment and temporal consistency. To complement this, we also present a simple yet effective video-aligned TTA generation model, namely T2AV. Moving beyond traditional methods, T2AV refines the latent diffusion approach by integrating visual-aligned text embeddings as its conditional foundation. It employs a temporal multi-head attention transformer to extract and understand temporal nuances from video data, a feat amplified by our Audio-Visual ControlNet that adeptly merges temporal visual representations with text embeddings. Further enhancing this integration, we weave in a contrastive learning objective, designed to ensure that the visual-aligned text embeddings resonate closely with the audio features. Extensive evaluations on the AudioCaps and T2AV-Bench demonstrate that our T2AV sets a new standard for video-aligned TTA generation in ensuring visual alignment and temporal consistency.
[ "['Shentong Mo' 'Jing Shi' 'Yapeng Tian']" ]
null
null
2403.07940
null
null
http://arxiv.org/pdf/2403.07940v1
2024-03-09T04:49:40Z
2024-03-09T04:49:40Z
Hair and scalp disease detection using deep learning
In recent years, there has been a notable advancement in the integration of healthcare and technology, particularly evident in the field of medical image analysis. This paper introduces a pioneering approach in dermatology, presenting a robust method for the detection of hair and scalp diseases using state-of-the-art deep learning techniques. Our methodology relies on Convolutional Neural Networks (CNNs), well-known for their efficacy in image recognition, to meticulously analyze images for various dermatological conditions affecting the hair and scalp. Our proposed system represents a significant advancement in dermatological diagnostics, offering a non-invasive and highly efficient means of early detection and diagnosis. By leveraging the capabilities of CNNs, our model holds the potential to revolutionize dermatology, providing accessible and timely healthcare solutions. Furthermore, the seamless integration of our trained model into a web-based platform developed with the Django framework ensures broad accessibility and usability, democratizing advanced medical diagnostics. The integration of machine learning algorithms into web applications marks a pivotal moment in healthcare delivery, promising empowerment for both healthcare providers and patients. Through the synergy between technology and healthcare, our paper outlines the meticulous methodology, technical intricacies, and promising future prospects of our system. With a steadfast commitment to advancing healthcare frontiers, our goal is to significantly contribute to leveraging technology for improved healthcare outcomes globally. This endeavor underscores the profound impact of technological innovation in shaping the future of healthcare delivery and patient care, highlighting the transformative potential of our approach.
[ "['Kavita Sultanpure' 'Bhairavi Shirsath' 'Bhakti Bhande' 'Harshada Sawai'\n 'Srushti Gawade' 'Suraj Samgir']" ]
null
null
2403.07943
null
null
http://arxiv.org/pdf/2403.07943v1
2024-03-10T15:50:04Z
2024-03-10T15:50:04Z
Revisiting Edge Perturbation for Graph Neural Network in Graph Data Augmentation and Attack
Edge perturbation is a basic method to modify graph structures. It can be categorized into two veins based on their effects on the performance of graph neural networks (GNNs), i.e., graph data augmentation and attack. Surprisingly, both veins of edge perturbation methods employ the same operations, yet yield opposite effects on GNNs' accuracy. A distinct boundary between these methods in using edge perturbation has never been clearly defined. Consequently, inappropriate perturbations may lead to undesirable outcomes, necessitating precise adjustments to achieve desired effects. Therefore, questions of ``why edge perturbation has a two-faced effect?'' and ``what makes edge perturbation flexible and effective?'' still remain unanswered. In this paper, we will answer these questions by proposing a unified formulation and establishing a clear boundary between two categories of edge perturbation methods. Specifically, we conduct experiments to elucidate the differences and similarities between these methods and theoretically unify the workflow of these methods by casting it to one optimization problem. Then, we devise Edge Priority Detector (EPD) to generate a novel priority metric, bridging these methods up in the workflow. Experiments show that EPD can make augmentation or attack flexibly and achieve comparable or superior performance to other counterparts with less time overhead.
[ "['Xin Liu' 'Yuxiang Zhang' 'Meng Wu' 'Mingyu Yan' 'Kun He' 'Wei Yan'\n 'Shirui Pan' 'Xiaochun Ye' 'Dongrui Fan']" ]
null
null
2403.07945
null
null
http://arxiv.org/pdf/2403.07945v2
2024-04-19T19:05:52Z
2024-03-11T03:44:18Z
A Mathematical Framework for the Problem of Security for Cognition in Neurotechnology
The rapid advancement in neurotechnology in recent years has created an emerging critical intersection between neurotechnology and security. Implantable devices, non-invasive monitoring, and non-invasive therapies all carry with them the prospect of violating the privacy and autonomy of individuals' cognition. A growing number of scientists and physicians have made calls to address this issue, but applied efforts have been relatively limited. A major barrier hampering scientific and engineering efforts to address Cognitive Security is the lack of a clear means of describing and analyzing relevant problems. In this paper we develop Cognitive Security, a mathematical framework which enables such description and analysis by drawing on methods and results from multiple fields. We demonstrate certain statistical properties which have significant implications for Cognitive Security, and then present descriptions of the algorithmic problems faced by attackers attempting to violate privacy and autonomy, and defenders attempting to obstruct such attempts.
[ "['Bryce Allen Bagley']" ]
null
null
2403.07947
null
null
http://arxiv.org/pdf/2403.07947v1
2024-03-11T15:11:28Z
2024-03-11T15:11:28Z
The evaluation of a code-switched Sepedi-English automatic speech recognition system
Speech technology is a field that encompasses various techniques and tools used to enable machines to interact with speech, such as automatic speech recognition (ASR), spoken dialog systems, and others, allowing a device to capture spoken words from a human speaker through a microphone. End-to-end approaches such as Connectionist Temporal Classification (CTC) and attention-based methods are the most commonly used for the development of ASR systems. However, these techniques have mostly been applied to high-resourced languages with large amounts of speech data for training and evaluation, leaving low-resource languages relatively underdeveloped. While the CTC method has been successfully used for other languages, its effectiveness for the Sepedi language remains uncertain. In this study, we present the evaluation of a Sepedi-English code-switched automatic speech recognition system. This end-to-end system was developed using the Sepedi Prompted Code Switching corpus and the CTC approach. The performance of the system was evaluated using both the NCHLT Sepedi test corpus and the Sepedi Prompted Code Switching corpus. The model produced its lowest WER of 41.9%; however, it faced challenges in recognizing Sepedi-only text.
[ "['Amanda Phaladi' 'Thipe Modipa']" ]
null
null
2403.07951
null
null
http://arxiv.org/pdf/2403.07951v1
2024-03-12T02:28:29Z
2024-03-12T02:28:29Z
SAMDA: Leveraging SAM on Few-Shot Domain Adaptation for Electronic Microscopy Segmentation
It has been shown that traditional deep learning methods for electron microscopy segmentation usually suffer from low transferability when samples and annotations are limited, while large-scale vision foundation models are more robust when transferring between different domains but face sub-optimal improvement under fine-tuning. In this work, we present a new few-shot domain adaptation framework, SAMDA, which combines the Segment Anything Model (SAM) with nnUNet in the embedding space to achieve high transferability and accuracy. Specifically, we choose the Unet-based network as the "expert" component to learn segmentation features efficiently and design a SAM-based adaptation module as the "generic" component for domain transfer. By amalgamating the "generic" and "expert" components, we mitigate the modality imbalance in the complex pre-training knowledge inherent to large-scale vision foundation models and the challenge of transferability inherent to traditional neural networks. The effectiveness of our model is evaluated on two electron microscopy image datasets with different modalities for mitochondria segmentation, which improves the dice coefficient on the target domain by 6.7%. Also, the SAM-based adaptor performs significantly better with only a single annotated image than the 10-shot domain adaptation on nnUNet. We further verify our model on four MRI datasets from different sources to prove its generalization ability.
[ "['Yiran Wang' 'Li Xiao']" ]
null
null
2403.07953
null
null
http://arxiv.org/pdf/2403.07953v2
2024-03-31T23:47:47Z
2024-03-12T06:25:47Z
Abstracting Sparse DNN Acceleration via Structured Sparse Tensor Decomposition
Exploiting sparsity in deep neural networks (DNNs) has been a promising area to meet the growing computation need of modern DNNs. However, in practice, sparse DNN acceleration still faces a key challenge. To minimize the overhead of sparse acceleration, hardware designers have proposed structured sparse hardware support recently, which provides limited flexibility and requires extra model fine-tuning. Moreover, any sparse model fine-tuned for certain structured sparse hardware cannot be accelerated by other structured hardware. To bridge the gap between sparse DNN models and hardware, this paper proposes tensor approximation via structured decomposition (TASD), which leverages the distributive property in linear algebra to turn any sparse tensor into a series of structured sparse tensors. Next, we develop a software framework, TASDER, to accelerate DNNs by searching layer-wise, high-quality structured decomposition for both weight and activation tensors so that they can be accelerated by any systems with structured sparse hardware support. Evaluation results show that, by exploiting prior structured sparse hardware baselines, our method can accelerate off-the-shelf dense and sparse DNNs without fine-tuning and improves energy-delay-product by up to 83% and 74% on average.
[ "['Geonhwa Jeong' 'Po-An Tsai' 'Abhimanyu R. Bambhaniya'\n 'Stephen W. Keckler' 'Tushar Krishna']" ]
null
null
2403.07954
null
null
http://arxiv.org/pdf/2403.07954v2
2024-05-21T02:47:03Z
2024-03-12T06:26:17Z
Optimizing Polynomial Graph Filters: A Novel Adaptive Krylov Subspace Approach
Graph Neural Networks (GNNs), known as spectral graph filters, find a wide range of applications in web networks. To bypass eigendecomposition, polynomial graph filters are proposed to approximate graph filters by leveraging various polynomial bases for filter training. However, no existing studies have explored the diverse polynomial graph filters from a unified perspective for optimization. In this paper, we first unify polynomial graph filters, as well as the optimal filters of identical degrees into the Krylov subspace of the same order, thus providing equivalent expressive power theoretically. Next, we investigate the asymptotic convergence property of polynomials from the unified Krylov subspace perspective, revealing their limited adaptability in graphs with varying heterophily degrees. Inspired by those facts, we design a novel adaptive Krylov subspace approach to optimize polynomial bases with provable controllability over the graph spectrum so as to adapt various heterophily graphs. Subsequently, we propose AdaptKry, an optimized polynomial graph filter utilizing bases from the adaptive Krylov subspaces. Meanwhile, in light of the diverse spectral properties of complex graphs, we extend AdaptKry by leveraging multiple adaptive Krylov bases without incurring extra training costs. As a consequence, extended AdaptKry is able to capture the intricate characteristics of graphs and provide insights into their inherent complexity. We conduct extensive experiments across a series of real-world datasets. The experimental results demonstrate the superior filtering capability of AdaptKry, as well as the optimized efficacy of the adaptive Krylov basis.
[ "['Keke Huang' 'Wencai Cao' 'Hoang Ta' 'Xiaokui Xiao' 'Pietro Liò']" ]
null
null
2403.07955
null
null
http://arxiv.org/pdf/2403.07955v1
2024-03-12T07:24:17Z
2024-03-12T07:24:17Z
Towards Faithful Explanations: Boosting Rationalization with Shortcuts Discovery
The remarkable success of neural networks has spurred interest in selective rationalization, which explains prediction results by identifying a small subset of the inputs sufficient to support them. Since existing methods still suffer from adopting shortcuts in the data to compose rationales and from the limited availability of large-scale human-annotated rationales, in this paper we propose a Shortcuts-fused Selective Rationalization (SSR) method, which boosts rationalization by discovering and exploiting potential shortcuts. Specifically, SSR first designs a shortcuts discovery approach to detect several potential shortcuts. Then, by introducing the identified shortcuts, we propose two strategies to mitigate the problem of utilizing shortcuts to compose rationales. Finally, we develop two data augmentation methods to close the gap in the number of annotated rationales. Extensive experimental results on real-world datasets clearly validate the effectiveness of our proposed method.
[ "['Linan Yue' 'Qi Liu' 'Yichao Du' 'Li Wang' 'Weibo Gao' 'Yanqing An']" ]
null
null
2403.07956
null
null
http://arxiv.org/pdf/2403.07956v1
2024-03-12T08:07:06Z
2024-03-12T08:07:06Z
DeepCDCL: An CDCL-based Neural Network Verification Framework
Neural networks in safety-critical applications face increasing safety and security concerns due to their susceptibility to small perturbations. In this paper, we propose DeepCDCL, a novel neural network verification framework based on the Conflict-Driven Clause Learning (CDCL) algorithm. We introduce an asynchronous clause learning and management structure, reducing redundant time consumption compared to the direct application of the CDCL framework. Furthermore, we provide a detailed evaluation of the performance of our approach on the ACAS Xu and MNIST datasets, showing that a significant speed-up is achieved in most cases.
[ "['Zongxin Liu' 'Pengfei Yang' 'Lijun Zhang' 'Xiaowei Huang']" ]
null
null
2403.07957
null
null
http://arxiv.org/pdf/2403.07957v1
2024-03-12T08:27:53Z
2024-03-12T08:27:53Z
Efficient Post-Training Augmentation for Adaptive Inference in Heterogeneous and Distributed IoT Environments
Early Exit Neural Networks (EENNs) present a solution to enhance the efficiency of neural network deployments. However, creating EENNs is challenging and requires specialized domain knowledge, due to the large amount of additional design choices. To address this issue, we propose an automated augmentation flow that focuses on converting an existing model into an EENN. It performs all required design decisions for the deployment to heterogeneous or distributed hardware targets: Our framework constructs the EENN architecture, maps its subgraphs to the hardware targets, and configures its decision mechanism. To the best of our knowledge, it is the first framework that is able to perform all of these steps. We evaluated our approach on a collection of Internet-of-Things and standard image classification use cases. For a speech command detection task, our solution was able to reduce the mean operations per inference by 59.67%. For an ECG classification task, it was able to terminate all samples early, reducing the mean inference energy by 74.9% and computations by 78.3%. On CIFAR-10, our solution was able to achieve up to a 58.75% reduction in computations. The search on a ResNet-152 base model for CIFAR-10 took less than nine hours on a laptop CPU. Our proposed approach enables the creation of EENN optimized for IoT environments and can reduce the inference cost of Deep Learning applications on embedded and fog platforms, while also significantly reducing the search cost - making it more accessible for scientists and engineers in industry and research. The low search cost improves the accessibility of EENNs, with the potential to improve the efficiency of neural networks in a wide range of practical applications.
[ "['Max Sponner' 'Lorenzo Servadei' 'Bernd Waschneck' 'Robert Wille'\n 'Akash Kumar']" ]
null
null
2403.07958
null
null
http://arxiv.org/pdf/2403.07958v1
2024-03-12T08:28:27Z
2024-03-12T08:28:27Z
Temporal Decisions: Leveraging Temporal Correlation for Efficient Decisions in Early Exit Neural Networks
Deep Learning is becoming increasingly relevant in Embedded and Internet-of-things applications. However, deploying models on embedded devices poses a challenge due to their resource limitations. This can impact the model's inference accuracy and latency. One potential solution are Early Exit Neural Networks, which adjust model depth dynamically through additional classifiers attached between their hidden layers. However, the real-time termination decision mechanism is critical for the system's efficiency, latency, and sustained accuracy. This paper introduces Difference Detection and Temporal Patience as decision mechanisms for Early Exit Neural Networks. They leverage the temporal correlation present in sensor data streams to efficiently terminate the inference. We evaluate their effectiveness in health monitoring, image classification, and wake-word detection tasks. Our novel contributions were able to reduce the computational footprint compared to established decision mechanisms significantly while maintaining higher accuracy scores. We achieved a reduction of mean operations per inference by up to 80% while maintaining accuracy levels within 5% of the original model. These findings highlight the importance of considering temporal correlation in sensor data to improve the termination decision.
[ "['Max Sponner' 'Lorenzo Servadei' 'Bernd Waschneck' 'Robert Wille'\n 'Akash Kumar']" ]
null
null
2403.07960
null
null
http://arxiv.org/pdf/2403.07960v1
2024-03-12T09:37:20Z
2024-03-12T09:37:20Z
Unsupervised self-organising map of prostate cell Raman spectra shows disease-state subclustering
Prostate cancer is a disease which poses an interesting clinical question: should it be treated? A small subset of prostate cancers are aggressive and require removal and treatment to prevent metastatic spread. However, conventional diagnostics remain challenged to risk-stratify such patients, hence, new methods of approach to biomolecularly subclassify the disease are needed. Here we use an unsupervised, self-organising map approach to analyse live-cell Raman spectroscopy data obtained from prostate cell-lines; our aim is to test the feasibility of this method to differentiate, at the single-cell-level, cancer from normal using high-dimensional datasets with minimal preprocessing. The results demonstrate not only successful separation of normal prostate and cancer cells, but also a new subclustering of the prostate cancer cell-line into two groups. Initial analysis of the spectra from each of the cancer subclusters demonstrates a differential expression of lipids, which, against the normal control, may be linked to disease-related changes in cellular signalling.
[ "['Daniel West' 'Susan Stepney' 'Y. Hancock']" ]
null
null
2403.07965
null
null
http://arxiv.org/abs/2403.07965v2
2024-07-08T09:21:00Z
2024-03-12T11:56:38Z
Conditional computation in neural networks: principles and research trends
This article summarizes principles and ideas from the emerging area of applying \textit{conditional computation} methods to the design of neural networks. In particular, we focus on neural networks that can dynamically activate or de-activate parts of their computational graph conditionally on their input. Examples include the dynamic selection of, e.g., input tokens, layers (or sets of layers), and sub-modules inside each layer (e.g., channels in a convolutional filter). We first provide a general formalism to describe these techniques in a uniform way. Then, we introduce three notable implementations of these principles: mixture-of-experts (MoEs) networks, token selection mechanisms, and early-exit neural networks. The paper aims to provide a tutorial-like introduction to this growing field. To this end, we analyze the benefits of these modular designs in terms of efficiency, explainability, and transfer learning, with a focus on emerging applicative areas ranging from automated scientific discovery to semantic communication.
[ "['Simone Scardapane' 'Alessandro Baiocchi' 'Alessio Devoto'\n 'Valerio Marsocci' 'Pasquale Minervini' 'Jary Pomponi']" ]
null
null
2403.07966
null
null
http://arxiv.org/pdf/2403.07966v1
2024-03-12T12:59:00Z
2024-03-12T12:59:00Z
Applying ranking techniques for estimating influence of Earth variables on temperature forecast error
This paper describes how to analyze the influence of Earth system variables on the errors in temperature forecasts. The initial framework for obtaining the data is based on previous research work, which resulted in a very interesting discovery. However, the aforementioned study only considered individual correlations of the variables with respect to the error. This work re-uses the main ideas but introduces three main novelties: (1) applying a data science approach to a few representative locations; (2) taking advantage of the rankings created by Spearman correlation but enriching them with other metrics, looking for a more robust ranking of the variables; (3) evaluating the methodology by learning random forest models for regression under the distinct experimental variations. The main contribution is the framework that shows how to convert correlations into rankings and combine them into an aggregate ranking. We have carried out experiments on five chosen locations to analyze the behavior of this ranking-based methodology. The results show that the specific performance depends on the location and season, which is expected, and that this selection technique works properly with Random Forest models but can also improve simpler regression models such as Bayesian Ridge. This work also contributes an extensive analysis of the results. We can conclude that this selection based on the top-k ranked variables seems promising for this real problem, and it could also be applied in other domains.
[ "['M. Julia Flores' 'Melissa Ruiz-Vásquez' 'Ana Bastos' 'René Orth']" ]
null
null
2403.07967
null
null
http://arxiv.org/pdf/2403.07967v1
2024-03-12T13:31:13Z
2024-03-12T13:31:13Z
Feasibility of machine learning-based rice yield prediction in India at the district level using climate reanalysis data
Yield forecasting, the science of predicting agricultural productivity before the crop harvest occurs, helps a wide range of stakeholders make better decisions around agricultural planning. This study aims to investigate whether machine learning-based yield prediction models can capably predict Kharif season rice yields at the district level in India several months before the rice harvest takes place. The methodology involved training 19 machine learning models such as CatBoost, LightGBM, Orthogonal Matching Pursuit, and Extremely Randomized Trees on 20 years of climate, satellite, and rice yield data across 247 Indian rice-producing districts. In addition to model-building, a dynamic dashboard was built to understand how the reliability of rice yield predictions varies across districts. The results of the proof-of-concept machine learning pipeline demonstrated that rice yields can be predicted with a reasonable degree of accuracy, with out-of-sample R2, MAE, and MAPE performance of up to 0.82, 0.29, and 0.16 respectively. These results outperformed test set performance reported in related literature on rice yield modeling in other contexts and countries. In addition, SHAP value analysis was conducted to infer both the importance and directional impact of the climate and remote sensing variables included in the model. Important features driving rice yields included temperature, soil water volume, and leaf area index. In particular, higher temperatures in August correlate with increased rice yields, particularly when the leaf area index in August is also high. Building on the results, a proof-of-concept dashboard was developed to allow users to easily explore which districts may experience a rise or fall in yield relative to the previous year.
[ "['Djavan De Clercq' 'Adam Mahdi']" ]
null
null
2403.07968
null
null
http://arxiv.org/pdf/2403.07968v2
2024-06-09T11:51:03Z
2024-03-12T13:59:23Z
Do Deep Neural Network Solutions Form a Star Domain?
It has recently been conjectured that neural network solution sets reachable via stochastic gradient descent (SGD) are convex, considering permutation invariances (Entezari et al., 2022). This means that a linear path can connect two independent solutions with low loss, given the weights of one of the models are appropriately permuted. However, current methods to test this theory often require very wide networks to succeed. In this work, we conjecture that more generally, the SGD solution set is a "star domain" that contains a "star model" that is linearly connected to all the other solutions via paths with low loss values, modulo permutations. We propose the Starlight algorithm that finds a star model of a given learning task. We validate our claim by showing that this star model is linearly connected with other independently found solutions. As an additional benefit of our study, we demonstrate better uncertainty estimates on the Bayesian Model Averaging over the obtained star domain. Further, we demonstrate star models as potential substitutes for model ensembles. Our code is available at https://github.com/aktsonthalia/starlight.
[ "['Ankit Sonthalia' 'Alexander Rubinstein' 'Ehsan Abbasnejad'\n 'Seong Joon Oh']" ]
null
null
2403.07969
null
null
http://arxiv.org/pdf/2403.07969v2
2024-03-14T02:47:41Z
2024-03-12T14:56:34Z
KnowCoder: Coding Structured Knowledge into LLMs for Universal Information Extraction
In this paper, we propose KnowCoder, a Large Language Model (LLM) to conduct Universal Information Extraction (UIE) via code generation. KnowCoder aims to develop a kind of unified schema representation that LLMs can easily understand and an effective learning framework that encourages LLMs to follow schemas and extract structured knowledge accurately. To achieve these, KnowCoder introduces a code-style schema representation method to uniformly transform different schemas into Python classes, with which complex schema information, such as constraints among tasks in UIE, can be captured in an LLM-friendly manner. We further construct a code-style schema library covering over $\textbf{30,000}$ types of knowledge, which is the largest one for UIE, to the best of our knowledge. To ease the learning process of LLMs, KnowCoder contains a two-phase learning framework that enhances its schema understanding ability via code pretraining and its schema following ability via instruction tuning. After code pretraining on around $1.5$B automatically constructed data, KnowCoder already attains remarkable generalization ability and achieves relative improvements by $\textbf{49.8\%}$ F1, compared to LLaMA2, under the few-shot setting. After instruction tuning, KnowCoder further exhibits strong generalization ability on unseen schemas and achieves up to $\textbf{12.5\%}$ and $\textbf{21.9\%}$, compared to sota baselines, under the zero-shot setting and the low resource setting, respectively. Additionally, based on our unified schema representations, various human-annotated datasets can simultaneously be utilized to refine KnowCoder, which achieves significant improvements up to $\textbf{7.5\%}$ under the supervised setting.
[ "['Zixuan Li' 'Yutao Zeng' 'Yuxin Zuo' 'Weicheng Ren' 'Wenxuan Liu'\n 'Miao Su' 'Yucan Guo' 'Yantao Liu' 'Xiang Li' 'Zhilei Hu' 'Long Bai'\n 'Wei Li' 'Yidan Liu' 'Pan Yang' 'Xiaolong Jin' 'Jiafeng Guo'\n 'Xueqi Cheng']" ]
null
null
2403.07974
null
null
http://arxiv.org/pdf/2403.07974v2
2024-06-06T17:41:21Z
2024-03-12T17:58:04Z
LiveCodeBench: Holistic and Contamination Free Evaluation of Large Language Models for Code
Large Language Models (LLMs) applied to code-related applications have emerged as a prominent field, attracting significant interest from both academia and industry. However, as new and improved LLMs are developed, existing evaluation benchmarks (e.g., HumanEval, MBPP) are no longer sufficient for assessing their capabilities. In this work, we propose LiveCodeBench, a comprehensive and contamination-free evaluation of LLMs for code, which continuously collects new problems over time from contests across three competition platforms, namely LeetCode, AtCoder, and CodeForces. Notably, our benchmark also focuses on a broader range of code-related capabilities, such as self-repair, code execution, and test output prediction, beyond just code generation. Currently, LiveCodeBench hosts four hundred high-quality coding problems that were published between May 2023 and May 2024. We have evaluated 18 base LLMs and 34 instruction-tuned LLMs on LiveCodeBench. We present empirical findings on contamination, holistic performance comparisons, potential overfitting in existing benchmarks as well as individual model comparisons. We will release all prompts and model completions for further community analysis, along with a general toolkit for adding new scenarios and models.
[ "['Naman Jain' 'King Han' 'Alex Gu' 'Wen-Ding Li' 'Fanjia Yan'\n 'Tianjun Zhang' 'Sida Wang' 'Armando Solar-Lezama' 'Koushik Sen'\n 'Ion Stoica']" ]
null
null
2403.07979
null
null
http://arxiv.org/pdf/2403.07979v1
2024-03-12T18:00:02Z
2024-03-12T18:00:02Z
Do Agents Dream of Electric Sheep?: Improving Generalization in Reinforcement Learning through Generative Learning
The Overfitted Brain hypothesis suggests dreams happen to allow generalization in the human brain. Here, we ask if the same is true for reinforcement learning agents as well. Given limited experience in a real environment, we use imagination-based reinforcement learning to train a policy on dream-like episodes, where non-imaginative, predicted trajectories are modified through generative augmentations. Experiments on four ProcGen environments show that, compared to classic imagination and offline training on collected experience, our method can reach a higher level of generalization when dealing with sparsely rewarded environments.
[ "['Giorgio Franceschelli' 'Mirco Musolesi']" ]
null
null
2403.07995
null
null
http://arxiv.org/pdf/2403.07995v1
2024-03-12T18:03:08Z
2024-03-12T18:03:08Z
Motifs, Phrases, and Beyond: The Modelling of Structure in Symbolic Music Generation
Modelling musical structure is vital yet challenging for artificial intelligence systems that generate symbolic music compositions. This literature review dissects the evolution of techniques for incorporating coherent structure, from symbolic approaches to foundational and transformative deep learning methods that harness the power of computation and data across a wide variety of training paradigms. In the later stages, we review an emerging technique which we refer to as "sub-task decomposition" that involves decomposing music generation into separate high-level structural planning and content creation stages. Such systems incorporate some form of musical knowledge or neuro-symbolic methods by extracting melodic skeletons or structural templates to guide the generation. Progress is evident in capturing motifs and repetitions across all three eras reviewed, yet modelling the nuanced development of themes across extended compositions in the style of human composers remains difficult. We outline several key future directions to realize the synergistic benefits of combining approaches from all eras examined.
[ "['Keshav Bhandari' 'Simon Colton']" ]
null
null
2403.08011
null
null
http://arxiv.org/pdf/2403.08011v1
2024-03-12T18:21:20Z
2024-03-12T18:21:20Z
Gujarati-English Code-Switching Speech Recognition using ensemble prediction of spoken language
An important and difficult task in code-switched speech recognition is to recognize the language, as many words in the two languages can sound similar, especially in some accents. We focus on improving the performance of end-to-end Automatic Speech Recognition models by conditioning transformer layers on the language ID of words and characters in the output, in a per-layer supervised manner. To this end, we propose two methods of introducing language-specific parameters and explainability in the multi-head attention mechanism, and implement a Temporal Loss that helps maintain continuity in input alignment. Despite being unable to reduce WER significantly, our method shows promise in predicting the correct language from spoken data alone. We introduce regularization in the language prediction by dropping the LID in the sequence, which helps align long repeated output sequences.
[ "['Yash Sharma' 'Basil Abraham' 'Preethi Jyothi']" ]
null
null
2403.08013
null
null
http://arxiv.org/pdf/2403.08013v1
2024-03-12T18:25:10Z
2024-03-12T18:25:10Z
Supervised Time Series Classification for Anomaly Detection in Subsea Engineering
Time series classification is of significant importance in monitoring structural systems. In this work, we investigate the use of supervised machine learning classification algorithms on simulated data based on a physical system with two states: Intact and Broken. We provide a comprehensive discussion of the preprocessing of temporal data, using measures of statistical dispersion and dimension reduction techniques. We present an intuitive baseline method and discuss its efficiency. We conclude with a comparison of the various methods based on different performance metrics, showing the advantage of using machine learning techniques as a tool in decision making.
[ "['Ergys Çokaj' 'Halvor Snersrud Gustad' 'Andrea Leone' 'Per Thomas Moe'\n 'Lasse Moldestad']" ]
null
null
2403.08016
null
null
http://arxiv.org/pdf/2403.08016v1
2024-03-12T18:28:13Z
2024-03-12T18:28:13Z
Aedes aegypti Egg Counting with Neural Networks for Object Detection
Aedes aegypti is still one of the main concerns when it comes to disease vectors. Among the many ways to deal with it, there are important protocols that make use of egg numbers in ovitraps to calculate indices, such as the LIRAa and the Breteau Index, which can provide information on predictable outbursts and epidemics. Also, many research lines require egg numbers, especially when mass production of mosquitoes is needed. Egg counting is a laborious and error-prone task that can be automated via computer vision-based techniques, especially deep learning-based counting with object detection. In this work, we propose a new dataset comprising field and laboratory eggs, along with test results of three neural networks applied to the task: Faster R-CNN, Side-Aware Boundary Localization and FoveaBox.
[ "['Micheli Nayara de Oliveira Vicente' 'Gabriel Toshio Hirokawa Higa'\n 'João Vitor de Andrade Porto' 'Higor Henrique' 'Picoli Nucci'\n 'Asser Botelho Santana' 'Karla Rejane de Andrade Porto'\n 'Antonia Railda Roel' 'Hemerson Pistori']" ]
null
null
2403.08024
null
null
http://arxiv.org/pdf/2403.08024v1
2024-03-12T18:46:56Z
2024-03-12T18:46:56Z
xMLP: Revolutionizing Private Inference with Exclusive Square Activation
Private Inference (PI) enables deep neural networks (DNNs) to work on private data without leaking sensitive information by exploiting cryptographic primitives such as multi-party computation (MPC) and homomorphic encryption (HE). However, the use of non-linear activations such as ReLU in DNNs can lead to impractically high PI latency in existing PI systems, as ReLU requires the use of costly MPC computations, such as Garbled Circuits. Since square activations can be processed by Beaver's triples hundreds of times faster compared to ReLU, they are more friendly to PI tasks, but using them leads to a notable drop in model accuracy. This paper starts by exploring the reason for such an accuracy drop after using square activations, and concludes that this is due to an "information compounding" effect. Leveraging this insight, we propose xMLP, a novel DNN architecture that uses square activations exclusively while maintaining parity in both accuracy and efficiency with ReLU-based DNNs. Our experiments on CIFAR-100 and ImageNet show that xMLP models consistently achieve better performance than ResNet models with fewer activation layers and parameters while maintaining consistent performance with their ReLU-based variants. Remarkably, when compared to state-of-the-art PI Models, xMLP demonstrates superior performance, achieving a 0.58% increase in accuracy with 7x faster PI speed. Moreover, it delivers a significant accuracy improvement of 4.96% while maintaining the same PI latency. When offloading PI to the GPU, xMLP is up to 700x faster than the previous state-of-the-art PI model with comparable accuracy.
[ "['Jiajie Li' 'Jinjun Xiong']" ]
null
null
2403.08027
null
null
http://arxiv.org/pdf/2403.08027v1
2024-03-12T18:55:23Z
2024-03-12T18:55:23Z
McCatch: Scalable Microcluster Detection in Dimensional and Nondimensional Datasets
How could we have an outlier detector that works even with nondimensional data, and ranks together both singleton microclusters ('one-off' outliers) and nonsingleton microclusters by their anomaly scores? How to obtain scores that are principled in one scalable and 'hands-off' manner? Microclusters of outliers indicate coalition or repetition in fraud activities, etc.; their identification is thus highly desirable. This paper presents McCatch: a new algorithm that detects microclusters by leveraging our proposed 'Oracle' plot (1NN Distance versus Group 1NN Distance). We study 31 real and synthetic datasets with up to 1M data elements to show that McCatch is the only method that answers both of the questions above; and, it outperforms 11 other methods, especially when the data has nonsingleton microclusters or is nondimensional. We also showcase McCatch's ability to detect meaningful microclusters in graphs, fingerprints, logs of network connections, text data, and satellite imagery. For example, it found a 30-elements microcluster of confirmed 'Denial of Service' attacks in the network logs, taking only ~3 minutes for 222K data elements on a stock desktop.
[ "['Braulio V. Sánchez Vinces' 'Robson L. F. Cordeiro' 'Christos Faloutsos']" ]
null
null
2403.08040
null
null
http://arxiv.org/pdf/2403.08040v2
2024-07-09T09:33:45Z
2024-03-12T19:23:13Z
MicroT: Low-Energy and Adaptive Models for MCUs
We propose MicroT, a low-energy, multi-task adaptive model framework for resource-constrained MCUs. We divide the original model into a feature extractor and a classifier. The feature extractor is obtained through self-supervised knowledge distillation and further optimized into part and full models through model splitting and joint training. These models are then deployed on MCUs, with classifiers added and trained on local tasks, ultimately performing stage-decision for joint inference. In this process, the part model initially processes the sample, and if the confidence score falls below the set threshold, the full model will resume and continue the inference. We evaluate MicroT on two models, three datasets, and two MCU boards. Our experimental evaluation shows that MicroT effectively improves model performance and reduces energy consumption when dealing with multiple local tasks. Compared to the unoptimized feature extractor, MicroT can improve accuracy by up to 9.87%. On MCUs, compared to the standard full model inference, MicroT can save up to about 29.13% in energy consumption. MicroT also allows users to adaptively adjust the stage-decision ratio as needed, better balancing model performance and energy consumption. Under the standard stage-decision ratio configuration, MicroT can increase accuracy by 5.91% and save about 14.47% of energy consumption.
[ "['Yushan Huang' 'Ranya Aloufi' 'Xavier Cadet' 'Yuchen Zhao'\n 'Payam Barnaghi' 'Hamed Haddadi']" ]
null
null
2403.08042
null
null
http://arxiv.org/pdf/2403.08042v1
2024-03-12T19:34:50Z
2024-03-12T19:34:50Z
CT evaluation of 2D and 3D holistic deep learning methods for the volumetric segmentation of airway lesions
This research embarked on a comparative exploration of the holistic segmentation capabilities of Convolutional Neural Networks (CNNs) in both 2D and 3D formats, focusing on cystic fibrosis (CF) lesions. The study utilized data from two CF reference centers, covering five major CF structural changes. Initially, it compared the 2D and 3D models, highlighting the 3D model's superior capability in capturing complex features like mucus plugs and consolidations. To improve the 2D model's performance, a loss adapted to fine structures segmentation was implemented and evaluated, significantly enhancing its accuracy, though not surpassing the 3D model's performance. The models underwent further validation through external evaluation against pulmonary function tests (PFTs), confirming the robustness of the findings. Moreover, this study went beyond comparing metrics; it also included comprehensive assessments of the models' interpretability and reliability, providing valuable insights for their clinical application.
[ "['Amel Imene Hadj Bouzid' 'Baudouin Denis de Senneville' 'Fabien Baldacci'\n 'Pascal Desbarats' 'Patrick Berger' 'Ilyes Benlala' 'Gaël Dournes']" ]
null
null
2403.08049
null
null
http://arxiv.org/pdf/2403.08049v1
2024-03-12T19:46:59Z
2024-03-12T19:46:59Z
TutoAI: A Cross-domain Framework for AI-assisted Mixed-media Tutorial Creation on Physical Tasks
Mixed-media tutorials, which integrate videos, images, text, and diagrams to teach procedural skills, offer more browsable alternatives than timeline-based videos. However, manually creating such tutorials is tedious, and existing automated solutions are often restricted to a particular domain. While AI models hold promise, it is unclear how to effectively harness their powers, given the multi-modal data involved and the vast landscape of models. We present TutoAI, a cross-domain framework for AI-assisted mixed-media tutorial creation on physical tasks. First, we distill common tutorial components by surveying existing work; then, we present an approach to identify, assemble, and evaluate AI models for component extraction; finally, we propose guidelines for designing user interfaces (UI) that support tutorial creation based on AI-generated components. We show that TutoAI has achieved higher or similar quality compared to a baseline model in preliminary user studies.
[ "['Yuexi Chen' 'Vlad I. Morariu' 'Anh Truong' 'Zhicheng Liu']" ]
null
null
2403.08055
null
null
http://arxiv.org/pdf/2403.08055v1
2024-03-12T20:02:39Z
2024-03-12T20:02:39Z
DrivAerNet: A Parametric Car Dataset for Data-Driven Aerodynamic Design and Graph-Based Drag Prediction
This study introduces DrivAerNet, a large-scale high-fidelity CFD dataset of 3D industry-standard car shapes, and RegDGCNN, a dynamic graph convolutional neural network model, both aimed at aerodynamic car design through machine learning. DrivAerNet, with its 4000 detailed 3D car meshes of 0.5 million surface mesh faces each and comprehensive aerodynamic performance data comprising full 3D pressure, velocity fields, and wall-shear stresses, addresses the critical need for extensive datasets to train deep learning models in engineering applications. It is 60% larger than the previously available largest public dataset of cars, and is the only open-source dataset that also models wheels and underbody. RegDGCNN leverages this large-scale dataset to provide high-precision drag estimates directly from 3D meshes, bypassing traditional limitations such as the need for 2D image rendering or Signed Distance Fields (SDF). By enabling fast drag estimation in seconds, RegDGCNN facilitates rapid aerodynamic assessments, offering a substantial leap towards integrating data-driven methods in automotive design. Together, DrivAerNet and RegDGCNN promise to accelerate the car design process and contribute to the development of more efficient vehicles. To lay the groundwork for future innovations in the field, the dataset and code used in our study are publicly accessible at \url{https://github.com/Mohamedelrefaie/DrivAerNet}
[ "['Mohamed Elrefaie' 'Angela Dai' 'Faez Ahmed']" ]
null
null
2403.08058
null
null
http://arxiv.org/pdf/2403.08058v2
2024-04-27T22:45:39Z
2024-03-12T20:10:04Z
CHAI: Clustered Head Attention for Efficient LLM Inference
Large Language Models (LLMs) with hundreds of billions of parameters have transformed the field of machine learning. However, serving these models at inference time is both compute and memory intensive, where a single request can require multiple GPUs and tens of Gigabytes of memory. Multi-Head Attention is one of the key components of LLMs, which can account for over 50% of an LLM's memory and compute requirements. We observe that there is a high degree of redundancy across heads in terms of which tokens they attend to. Based on this insight, we propose Clustered Head Attention (CHAI). CHAI combines heads with a high amount of correlation for self-attention at runtime, thus reducing both memory and compute. In our experiments, we show that CHAI is able to reduce the memory requirements for storing K,V cache by up to 21.4% and inference time latency by up to 1.73x without any fine-tuning required. CHAI achieves this with a maximum 3.2% deviation in accuracy across 3 different models (i.e. OPT-66B, LLAMA-7B, LLAMA-33B) and 5 different evaluation datasets.
[ "['Saurabh Agarwal' 'Bilge Acun' 'Basil Hosmer' 'Mostafa Elhoushi'\n 'Yejin Lee' 'Shivaram Venkataraman' 'Dimitris Papailiopoulos'\n 'Carole-Jean Wu']" ]
null
null
2403.08059
null
null
http://arxiv.org/pdf/2403.08059v2
2024-03-28T00:59:37Z
2024-03-12T20:11:38Z
FluoroSAM: A Language-aligned Foundation Model for X-ray Image Segmentation
Automated X-ray image segmentation would accelerate research and development in diagnostic and interventional precision medicine. Prior efforts have contributed task-specific models capable of solving specific image analysis problems, but the utility of these models is restricted to their particular task domain, and expanding to broader use requires additional data, labels, and retraining efforts. Recently, foundation models (FMs) -- machine learning models trained on large amounts of highly variable data thus enabling broad applicability -- have emerged as promising tools for automated image analysis. Existing FMs for medical image analysis focus on scenarios and modalities where objects are clearly defined by visually apparent boundaries, such as surgical tool segmentation in endoscopy. X-ray imaging, by contrast, does not generally offer such clearly delineated boundaries or structure priors. During X-ray image formation, complex 3D structures are projected in transmission onto the imaging plane, resulting in overlapping features of varying opacity and shape. To pave the way toward an FM for comprehensive and automated analysis of arbitrary medical X-ray images, we develop FluoroSAM, a language-aligned variant of the Segment-Anything Model, trained from scratch on 1.6M synthetic X-ray images. FluoroSAM is trained on data including masks for 128 organ types and 464 non-anatomical objects, such as tools and implants. In real X-ray images of cadaveric specimens, FluoroSAM is able to segment bony anatomical structures based on text-only prompting with 0.51 and 0.79 DICE with point-based refinement, outperforming competing SAM variants for all structures. FluoroSAM is also capable of zero-shot generalization to segmenting classes beyond the training set thanks to its language alignment, which we demonstrate for full lung segmentation on real chest X-rays.
[ "['Benjamin D. Killeen' 'Liam J. Wang' 'Han Zhang' 'Mehran Armand'\n 'Russell H. Taylor' 'Dave Dreizin' 'Greg Osgood' 'Mathias Unberath']" ]
null
null
2403.08081
null
null
http://arxiv.org/pdf/2403.08081v1
2024-03-12T21:15:38Z
2024-03-12T21:15:38Z
Mechanics of Next Token Prediction with Self-Attention
Transformer-based language models are trained on large datasets to predict the next token given an input sequence. Despite this simple training objective, they have led to revolutionary advances in natural language processing. Underlying this success is the self-attention mechanism. In this work, we ask: $\textit{What does a single self-attention layer learn from next-token prediction?}$ We show that training self-attention with gradient descent learns an automaton which generates the next token in two distinct steps: $\textbf{(1) Hard retrieval:}$ Given an input sequence, self-attention precisely selects the $\textit{high-priority input tokens}$ associated with the last input token. $\textbf{(2) Soft composition:}$ It then creates a convex combination of the high-priority tokens from which the next token can be sampled. Under suitable conditions, we rigorously characterize these mechanics through a directed graph over tokens extracted from the training data. We prove that gradient descent implicitly discovers the strongly-connected components (SCC) of this graph and self-attention learns to retrieve the tokens that belong to the highest-priority SCC available in the context window. Our theory relies on decomposing the model weights into a directional component and a finite component that correspond to hard retrieval and soft composition steps respectively. This also formalizes a related implicit bias formula conjectured in [Tarzanagh et al. 2023]. We hope that these findings shed light on how self-attention processes sequential data and pave the path toward demystifying more complex architectures.
[ "['Yingcong Li' 'Yixiao Huang' 'M. Emrullah Ildiz' 'Ankit Singh Rawat'\n 'Samet Oymak']" ]
null
null
2403.08100
null
null
http://arxiv.org/pdf/2403.08100v1
2024-03-12T22:21:48Z
2024-03-12T22:21:48Z
Efficient Language Model Architectures for Differentially Private Federated Learning
Cross-device federated learning (FL) is a technique that trains a model on data distributed across typically millions of edge devices without data leaving the devices. SGD is the standard client optimizer for on-device training in cross-device FL, favored for its memory and computational efficiency. However, in centralized training of neural language models, adaptive optimizers are preferred as they offer improved stability and performance. In light of this, we ask if language models can be modified such that they can be efficiently trained with SGD client optimizers and answer this affirmatively. We propose a scale-invariant Coupled Input Forget Gate (SI CIFG) recurrent network by modifying the sigmoid and tanh activations in the recurrent cell and show that this new model converges faster and achieves better utility than the standard CIFG recurrent model in cross-device FL in large scale experiments. We further show that the proposed scale invariant modification also helps in federated learning of larger transformer models. Finally, we demonstrate the scale invariant modification is also compatible with other non-adaptive algorithms. Particularly, our results suggest an improved privacy utility trade-off in federated learning with differential privacy.
[ "['Jae Hun Ro' 'Srinadh Bhojanapalli' 'Zheng Xu' 'Yanxiang Zhang'\n 'Ananda Theertha Suresh']" ]
null
null
2403.08118
null
null
http://arxiv.org/pdf/2403.08118v1
2024-03-12T22:57:53Z
2024-03-12T22:57:53Z
Characterising harmful data sources when constructing multi-fidelity surrogate models
Surrogate modelling techniques have seen growing attention in recent years when applied to both modelling and optimisation of industrial design problems. These techniques are highly relevant when assessing the performance of a particular design carries a high cost, as the overall cost can be mitigated via the construction of a model to be queried in lieu of the available high-cost source. The construction of these models can sometimes employ other sources of information which are both cheaper and less accurate. The existence of these sources however poses the question of which sources should be used when constructing a model. Recent studies have attempted to characterise harmful data sources to guide practitioners in choosing when to ignore a certain source. These studies have done so in a synthetic setting, characterising sources using a large amount of data that is not available in practice. Some of these studies have also been shown to potentially suffer from bias in the benchmarks used in the analysis. In this study, we present a characterisation of harmful low-fidelity sources using only the limited data available to train a surrogate model. We employ recently developed benchmark filtering techniques to conduct a bias-free assessment, providing objectively varied benchmark suites of different sizes for future research. Analysing one of these benchmark suites with the technique known as Instance Space Analysis, we provide an intuitive visualisation of when a low-fidelity source should be used and use this analysis to provide guidelines that can be used in an applied industrial setting.
[ "['Nicolau Andrés-Thió' 'Mario Andrés Muñoz' 'Kate Smith-Miles']" ]
null
null
2403.08121
null
null
http://arxiv.org/pdf/2403.08121v1
2024-03-12T23:17:32Z
2024-03-12T23:17:32Z
Early Directional Convergence in Deep Homogeneous Neural Networks for Small Initializations
This paper studies the gradient flow dynamics that arise when training deep homogeneous neural networks, starting with small initializations. The present work considers neural networks that are assumed to have locally Lipschitz gradients and an order of homogeneity strictly greater than two. This paper demonstrates that for sufficiently small initializations, during the early stages of training, the weights of the neural network remain small in norm and approximately converge in direction along the Karush-Kuhn-Tucker (KKT) points of the neural correlation function introduced in [1]. Additionally, for square loss and under a separability assumption on the weights of neural networks, a similar directional convergence of gradient flow dynamics is shown near certain saddle points of the loss function.
[ "['Akshay Kumar' 'Jarvis Haupt']" ]
null
null
2403.08124
null
null
http://arxiv.org/pdf/2403.08124v1
2024-03-12T23:21:09Z
2024-03-12T23:21:09Z
Towards Independence Criterion in Machine Unlearning of Features and Labels
This work delves into the complexities of machine unlearning in the face of distributional shifts, particularly focusing on the challenges posed by non-uniform feature and label removal. With the advent of regulations like the GDPR emphasizing data privacy and the right to be forgotten, machine learning models face the daunting task of unlearning sensitive information without compromising their integrity or performance. Our research introduces a novel approach that leverages influence functions and principles of distributional independence to address these challenges. By proposing a comprehensive framework for machine unlearning, we aim to ensure privacy protection while maintaining model performance and adaptability across varying distributions. Our method not only facilitates efficient data removal but also dynamically adjusts the model to preserve its generalization capabilities. Through extensive experimentation, we demonstrate the efficacy of our approach in scenarios characterized by significant distributional shifts, making substantial contributions to the field of machine unlearning. This research paves the way for developing more resilient and adaptable unlearning techniques, ensuring models remain robust and accurate in the dynamic landscape of data privacy and machine learning.
[ "['Ling Han' 'Nanqing Luo' 'Hao Huang' 'Jing Chen' 'Mary-Anne Hartley']" ]
null
null
2403.08131
null
null
http://arxiv.org/pdf/2403.08131v1
2024-03-12T23:39:43Z
2024-03-12T23:39:43Z
Cost-Effective Methodology for Complex Tuning Searches in HPC: Navigating Interdependencies and Dimensionality
Tuning searches are pivotal in High-Performance Computing (HPC), addressing complex optimization challenges in computational applications. The complexity arises not only from finely tuning parameters within routines but also potential interdependencies among them, rendering traditional optimization methods inefficient. Instead of scrutinizing interdependencies among parameters and routines, practitioners often face the dilemma of conducting independent tuning searches for each routine, thereby overlooking interdependence, or pursuing a more resource-intensive joint search for all routines. This decision is driven by the consideration that some interdependence analysis and high-dimensional decomposition techniques in literature may be prohibitively expensive in HPC tuning searches. Our methodology adapts and refines these methods to ensure computational feasibility while maximizing performance gains in real-world scenarios. Our methodology leverages a cost-effective interdependence analysis to decide whether to merge several tuning searches into a joint search or conduct orthogonal searches. Tested on synthetic functions with varying levels of parameter interdependence, our methodology efficiently explores the search space. In comparison to Bayesian-optimization-based full independent or fully joint searches, our methodology suggested an optimized breakdown of independent and merged searches that led to final configurations up to 8% more accurate, reducing the search time by up to 95%. When applied to GPU-offloaded Real-Time Time-Dependent Density Functional Theory (RT-TDDFT), an application in computational materials science that challenges modern HPC autotuners, our methodology achieved an effective tuning search. Its adaptability and efficiency extend beyond RT-TDDFT, making it valuable for related applications in HPC.
[ "['Adrian Perez Dieguez' 'Min Choi' 'Mahmut Okyay' 'Mauro Del Ben'\n 'Bryan M. Wong' 'Khaled Z. Ibrahim']" ]
null
null
2403.08147
null
null
http://arxiv.org/pdf/2403.08147v3
2024-06-03T02:43:24Z
2024-03-13T00:19:06Z
Representing Molecules as Random Walks Over Interpretable Grammars
Recent research in molecular discovery has primarily been devoted to small, drug-like molecules, leaving many similarly important applications in material design without adequate technology. These applications often rely on more complex molecular structures with fewer examples that are carefully designed using known substructures. We propose a data-efficient and interpretable model for representing and reasoning over such molecules in terms of graph grammars that explicitly describe the hierarchical design space featuring motifs to be the design basis. We present a novel representation in the form of random walks over the design space, which facilitates both molecule generation and property prediction. We demonstrate clear advantages over existing methods in terms of performance, efficiency, and synthesizability of predicted molecules, and we provide detailed insights into the method's chemical interpretability.
[ "['Michael Sun' 'Minghao Guo' 'Weize Yuan' 'Veronika Thost'\n 'Crystal Elaine Owens' 'Aristotle Franklin Grosz' 'Sharvaa Selvan'\n 'Katelyn Zhou' 'Hassan Mohiuddin' 'Benjamin J Pedretti' 'Zachary P Smith'\n 'Jie Chen' 'Wojciech Matusik']" ]
null
null
2403.08151
null
null
http://arxiv.org/pdf/2403.08151v1
2024-03-13T00:27:19Z
2024-03-13T00:27:19Z
Measuring the Energy Consumption and Efficiency of Deep Neural Networks: An Empirical Analysis and Design Recommendations
Addressing the so-called ``Red-AI'' trend of rising energy consumption by large-scale neural networks, this study investigates the actual energy consumption, as measured by node-level watt-meters, of training various fully connected neural network architectures. We introduce the BUTTER-E dataset, an augmentation to the BUTTER Empirical Deep Learning dataset, containing energy consumption and performance data from 63,527 individual experimental runs spanning 30,582 distinct configurations: 13 datasets, 20 sizes (number of trainable parameters), 8 network ``shapes'', and 14 depths on both CPU and GPU hardware collected using node-level watt-meters. This dataset reveals the complex relationship between dataset size, network structure, and energy use, and highlights the impact of cache effects. We propose a straightforward and effective energy model that accounts for network size, computing, and memory hierarchy. Our analysis also uncovers a surprising, hardware-mediated non-linear relationship between energy efficiency and network design, challenging the assumption that reducing the number of parameters or FLOPs is the best way to achieve greater energy efficiency. Highlighting the need for cache-considerate algorithm development, we suggest a combined approach to energy efficient network, algorithm, and hardware design. This work contributes to the fields of sustainable computing and Green AI, offering practical guidance for creating more energy-efficient neural networks and promoting sustainable AI.
[ "['Charles Edison Tripp' 'Jordan Perr-Sauer' 'Jamil Gafur' 'Amabarish Nag'\n 'Avi Purkayastha' 'Sagi Zisman' 'Erik A. Bensen']" ]
null
null
2403.08154
null
null
http://arxiv.org/pdf/2403.08154v1
2024-03-13T00:32:30Z
2024-03-13T00:32:30Z
The Effect of Different Optimization Strategies to Physics-Constrained Deep Learning for Soil Moisture Estimation
Soil moisture is a key hydrological parameter that has significant importance to human society and the environment. Accurate modeling and monitoring of soil moisture in crop fields, especially in the root zone (top 100 cm of soil), is essential for improving agricultural production and crop yield with the help of precision irrigation and farming tools. Realizing the full sensor data potential depends greatly on advanced analytical and predictive domain-aware models. In this work, we propose a physics-constrained deep learning (P-DL) framework to integrate physics-based principles on water transport and water sensing signals for effective reconstruction of the soil moisture dynamics. We adopt three different optimizers, namely Adam, RMSprop, and GD, to minimize the loss function of P-DL during the training process. In the illustrative case study, we demonstrate that the Adam optimizer empirically converges better than the other optimization methods in both mini-batch and full-batch training.
[ "['Jianxin Xie' 'Bing Yao' 'Zheyu Jiang']" ]
null
null
2403.08160
null
null
http://arxiv.org/pdf/2403.08160v1
2024-03-13T00:59:25Z
2024-03-13T00:59:25Z
Asymptotics of Random Feature Regression Beyond the Linear Scaling Regime
Recent advances in machine learning have been achieved by using overparametrized models trained until near interpolation of the training data. It was shown, e.g., through the double descent phenomenon, that the number of parameters is a poor proxy for the model complexity and generalization capabilities. This leaves open the question of understanding the impact of parametrization on the performance of these models. How does model complexity and generalization depend on the number of parameters $p$? How should we choose $p$ relative to the sample size $n$ to achieve optimal test error? In this paper, we investigate the example of random feature ridge regression (RFRR). This model can be seen either as a finite-rank approximation to kernel ridge regression (KRR), or as a simplified model for neural networks trained in the so-called lazy regime. We consider covariates uniformly distributed on the $d$-dimensional sphere and compute sharp asymptotics for the RFRR test error in the high-dimensional polynomial scaling, where $p, n, d \to \infty$ while $p/d^{\kappa_1}$ and $n/d^{\kappa_2}$ stay constant, for all $\kappa_1, \kappa_2 \in \mathbb{R}_{>0}$. These asymptotics precisely characterize the impact of the number of random features and regularization parameter on the test performance. In particular, RFRR exhibits an intuitive trade-off between approximation and generalization power. For $n = o(p)$, the sample size $n$ is the bottleneck and RFRR achieves the same performance as KRR (which is equivalent to taking $p = \infty$). On the other hand, if $p = o(n)$, the number of random features $p$ is the limiting factor and RFRR test error matches the approximation error of the random feature model class (akin to taking $n = \infty$). Finally, a double descent appears at $n = p$, a phenomenon that was previously only characterized in the linear scaling $\kappa_1 = \kappa_2 = 1$.
[ "['Hong Hu' 'Yue M. Lu' 'Theodor Misiakiewicz']" ]
null
null
2403.08162
null
null
http://arxiv.org/pdf/2403.08162v1
2024-03-13T01:18:55Z
2024-03-13T01:18:55Z
Iterative Learning for Joint Image Denoising and Motion Artifact Correction of 3D Brain MRI
Image noise and motion artifacts greatly affect the quality of brain MRI and negatively influence downstream medical image analysis. Previous studies often focus on 2D methods that process each volumetric MR image slice-by-slice, thus losing important 3D anatomical information. Additionally, these studies generally treat image denoising and artifact correction as two standalone tasks, without considering their potential relationship, especially on low-quality images where severe noise and motion artifacts occur simultaneously. To address these issues, we propose a Joint image Denoising and motion Artifact Correction (JDAC) framework via iterative learning to handle noisy MRIs with motion artifacts, consisting of an adaptive denoising model and an anti-artifact model. In the adaptive denoising model, we first design a novel noise level estimation strategy, and then adaptively reduce the noise through a U-Net backbone with feature normalization conditioning on the estimated noise variance. The anti-artifact model employs another U-Net for eliminating motion artifacts, incorporating a novel gradient-based loss function designed to maintain the integrity of brain anatomy during the motion correction process. These two models are iteratively employed for joint image denoising and artifact correction through an iterative learning framework. An early stopping strategy depending on noise level estimation is applied to accelerate the iteration process. The denoising model is trained with 9,544 T1-weighted MRIs with manually added Gaussian noise as supervision. The anti-artifact model is trained on 552 T1-weighted MRIs with motion artifacts and paired motion-free images. Experimental results on a public dataset and a clinical study suggest the effectiveness of JDAC in both tasks of denoising and motion artifact correction, compared with several state-of-the-art methods.
[ "['Lintao Zhang' 'Mengqi Wu' 'Lihong Wang' 'David C. Steffens'\n 'Guy G. Potter' 'Mingxia Liu']" ]
null
null
2403.08164
null
null
http://arxiv.org/pdf/2403.08164v2
2024-03-17T10:06:12Z
2024-03-13T01:27:57Z
EM-TTS: Efficiently Trained Low-Resource Mongolian Lightweight Text-to-Speech
Recently, deep learning-based Text-to-Speech (TTS) systems have achieved high-quality speech synthesis results. Recurrent neural networks have become a standard modeling technique for sequential data in TTS systems and are widely used. However, training a TTS model that includes RNN components requires powerful GPU performance and takes a long time. In contrast, CNN-based sequence synthesis techniques can significantly reduce the parameters and training time of a TTS model while guaranteeing a certain level of performance due to their high parallelism, alleviating the economic costs of training. In this paper, we propose a lightweight TTS system based on deep convolutional neural networks, which is a two-stage, end-to-end TTS model trained without any recurrent units. Our model consists of two stages: Text2Spectrum and SSRN. The former encodes phonemes into a coarse mel spectrogram and the latter synthesizes the complete spectrum from the coarse mel spectrogram. Meanwhile, we improve the robustness of our model through a series of data augmentations, such as noise suppression, time warping, frequency masking and time masking, to address the low-resource Mongolian setting. Experiments show that our model can reduce the training time and parameter count while ensuring the quality and naturalness of the synthesized speech compared to mainstream TTS models. Our method uses the NCMMSC2022-MTTSC Challenge dataset for validation, which significantly reduces training time while maintaining a certain level of accuracy.
[ "['Ziqi Liang' 'Haoxiang Shi' 'Jiawei Wang' 'Keda Lu']" ]