categories | doi | id | year | venue | link | updated | published | title | abstract | authors
string | string | string | float64 | string | string | string | string | string | string | list
---|---|---|---|---|---|---|---|---|---|---|
null | null |
2402.09478
| null | null |
http://arxiv.org/pdf/2402.09478v2
|
2024-06-27T04:45:00Z
|
2024-02-13T05:06:34Z
|
Data Reconstruction Attacks and Defenses: A Systematic Evaluation
|
Reconstruction attacks and defenses are essential to understanding the data leakage problem in machine learning. However, prior work has centered around empirical observations of gradient inversion attacks, lacks theoretical justification, and cannot disentangle the usefulness of defending methods from the computational limitations of attacking methods. In this work, we propose to view the problem as an inverse problem, enabling us to theoretically, quantitatively, and systematically evaluate the data reconstruction problem. On various defense methods, we derive the algorithmic upper bound and the matching (in feature dimension and model width) information-theoretic lower bound on the reconstruction error for two-layer neural networks. To complement the theoretical results and investigate the utility-privacy trade-off, we define a natural evaluation metric of the defense methods with similar utility loss among the strongest attacks. We further propose a strong reconstruction attack that helps update some previous understanding of the strength of defense methods under our proposed evaluation metric.
|
[
"['Sheng Liu' 'Zihan Wang' 'Yuxiao Chen' 'Qi Lei']"
] |
null | null |
2402.09483
| null | null |
http://arxiv.org/pdf/2402.09483v1
|
2024-02-13T23:40:50Z
|
2024-02-13T23:40:50Z
|
Oracle-Efficient Differentially Private Learning with Public Data
|
Due to statistical lower bounds on the learnability of many function classes under privacy constraints, there has been recent interest in leveraging public data to improve the performance of private learning algorithms. In this model, algorithms must always guarantee differential privacy with respect to the private samples while also ensuring learning guarantees when the private data distribution is sufficiently close to that of the public data. Previous work has demonstrated that when sufficient public, unlabelled data is available, private learning can be made statistically tractable, but the resulting algorithms have all been computationally inefficient. In this work, we present the first computationally efficient algorithms that provably leverage public data to learn privately whenever a function class is learnable non-privately, where our notion of computational efficiency is with respect to the number of calls to an optimization oracle for the function class. In addition to this general result, we provide specialized algorithms with improved sample complexities in the special cases when the function class is convex or when the task is binary classification.
|
[
"['Adam Block' 'Mark Bun' 'Rathin Desai' 'Abhishek Shetty' 'Steven Wu']"
] |
null | null |
2402.09486
| null | null |
http://arxiv.org/pdf/2402.09486v1
|
2024-02-14T08:09:46Z
|
2024-02-14T08:09:46Z
|
UMOEA/D: A Multiobjective Evolutionary Algorithm for Uniform Pareto
Objectives based on Decomposition
|
Multiobjective optimization (MOO) is prevalent in numerous applications, in which a Pareto front (PF) is constructed to display optima under various preferences. Previous methods commonly utilize the set of Pareto objectives (particles on the PF) to represent the entire PF. However, the empirical distribution of the Pareto objectives on the PF is rarely studied, which implicitly impedes the generation of diverse and representative Pareto objectives in previous methods. To bridge the gap, we suggest in this paper constructing \emph{uniformly distributed} Pareto objectives on the PF, so as to alleviate the limited diversity found in previous MOO approaches. We are the first to formally define the concept of ``uniformity'' for an MOO problem. We optimize the maximal minimal distances on the Pareto front using a neural network, resulting in both asymptotically and non-asymptotically uniform Pareto objectives. Our proposed method is validated through experiments on real-world and synthetic problems, which demonstrate its efficacy in generating high-quality uniform Pareto objectives and encouraging performance exceeding existing state-of-the-art methods. The detailed model implementation and the code are scheduled to be open-sourced upon publication.
|
[
"['Xiaoyuan Zhang' 'Xi Lin' 'Yichi Zhang' 'Yifan Chen' 'Qingfu Zhang']"
] |
null | null |
2402.09488
| null | null |
http://arxiv.org/pdf/2402.09488v1
|
2024-02-14T09:07:00Z
|
2024-02-14T09:07:00Z
|
Intelligent Agricultural Greenhouse Control System Based on Internet of
Things and Machine Learning
|
This study endeavors to conceptualize and execute a sophisticated agricultural greenhouse control system grounded in the amalgamation of the Internet of Things (IoT) and machine learning. Through meticulous monitoring of intrinsic environmental parameters within the greenhouse and the integration of machine learning algorithms, the conditions within the greenhouse are aptly modulated. The envisaged outcome is an enhancement in crop growth efficiency and yield, accompanied by a reduction in resource wastage. In the backdrop of escalating global population figures and the escalating exigencies of climate change, agriculture confronts unprecedented challenges. Conventional agricultural paradigms have proven inadequate in addressing the imperatives of food safety and production efficiency. Against this backdrop, greenhouse agriculture emerges as a viable solution, proffering a controlled milieu for crop cultivation to augment yields, refine quality, and diminish reliance on natural resources [b1]. Nevertheless, greenhouse agriculture contends with a gamut of challenges. Traditional greenhouse management strategies, often grounded in experiential knowledge and predefined rules, lack targeted personalized regulation, thereby resulting in resource inefficiencies. The exigencies of real-time monitoring and precise control of the greenhouse's internal environment gain paramount importance with the burgeoning scale of agriculture. To redress this challenge, the study introduces IoT technology and machine learning algorithms into greenhouse agriculture, aspiring to institute an intelligent agricultural greenhouse control system conducive to augmenting the efficiency and sustainability of agricultural production.
|
[
"['Cangqing Wang']"
] |
null | null |
2402.09492
| null | null |
http://arxiv.org/pdf/2402.09492v2
|
2024-02-16T14:01:44Z
|
2024-02-14T11:27:31Z
|
PMGDA: A Preference-based Multiple Gradient Descent Algorithm
|
It is desirable in many multi-objective machine learning applications, such as multi-task learning with conflicting objectives and multi-objective reinforcement learning, to find a Pareto solution that matches a given preference of a decision maker. These problems are often large-scale with available gradient information but cannot be handled very well by existing algorithms. To tackle this critical issue, this paper proposes a novel predict-and-correct framework for locating a Pareto solution that fits the preference of a decision maker. In the proposed framework, a constraint function is introduced in the search process to align the solution with a user-specified preference, which can be optimized simultaneously with multiple objective functions. Experimental results show that our proposed method can efficiently find a particular Pareto solution under the demand of a decision maker for standard multiobjective benchmarks, multi-task learning, and multi-objective reinforcement learning problems with thousands of decision variables. Code is available at: https://github.com/xzhang2523/pmgda. Our code is currently provided in the pgmda.rar attached file and will be open-sourced after publication.
|
[
"['Xiaoyuan Zhang' 'Xi Lin' 'Qingfu Zhang']"
] |
null | null |
2402.09495
| null | null |
http://arxiv.org/pdf/2402.09495v2
|
2024-02-19T11:58:13Z
|
2024-02-14T13:20:09Z
|
On the Potential of Network-Based Features for Fraud Detection
|
Online transaction fraud presents substantial challenges to businesses and consumers, risking significant financial losses. Conventional rule-based systems struggle to keep pace with evolving fraud tactics, leading to high false positive rates and missed detections. Machine learning techniques offer a promising solution by leveraging historical data to identify fraudulent patterns. This article explores using the personalised PageRank (PPR) algorithm to capture the social dynamics of fraud by analysing relationships between financial accounts. The primary objective is to compare the performance of traditional features with the addition of PPR in fraud detection models. Results indicate that integrating PPR enhances the model's predictive power, surpassing the baseline model. Additionally, the PPR feature provides unique and valuable information, evidenced by its high feature importance score. Feature stability analysis confirms consistent feature distributions across training and test datasets.
|
[
"['Catayoun Azarm' 'Erman Acar' 'Mickey van Zeelt']"
] |
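The personalised PageRank feature described above is simple to prototype. A minimal sketch follows, assuming a toy transaction graph and a seed set of known-fraud accounts (both illustrative, not the authors' pipeline), using networkx's `pagerank` with a personalization vector:

```python
# Minimal sketch: personalised PageRank (PPR) as a fraud feature.
# The graph, seed accounts, and feature join are illustrative
# assumptions, not the paper's exact pipeline.
import networkx as nx

# Accounts as nodes; an edge per observed transaction between them.
G = nx.Graph()
G.add_edges_from([("acct_a", "acct_b"), ("acct_b", "acct_c"),
                  ("acct_c", "fraud_1"), ("fraud_1", "acct_d")])

# Personalisation mass concentrated on known-fraudulent accounts,
# so each score measures proximity to confirmed fraud.
known_fraud = {"fraud_1"}
personalization = {n: (1.0 if n in known_fraud else 0.0) for n in G}

ppr = nx.pagerank(G, alpha=0.85, personalization=personalization)

# Each account's PPR score can now be appended to its tabular features.
for account, score in sorted(ppr.items(), key=lambda kv: -kv[1]):
    print(f"{account}: {score:.3f}")
```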
null | null |
2402.09497
| null | null |
http://arxiv.org/pdf/2402.09497v2
|
2024-07-12T15:45:57Z
|
2024-02-14T15:47:46Z
|
Instruction Tuning for Secure Code Generation
|
Modern language models (LMs) have gained widespread acceptance in everyday and professional contexts, particularly in programming. An essential procedure enabling this adoption is instruction tuning, which substantially enhances LMs' practical utility by training them to follow user instructions and human preferences. However, existing instruction tuning schemes overlook a crucial aspect: the security of generated code. As a result, even state-of-the-art instruction-tuned LMs frequently produce unsafe code, posing significant security risks. In this work, we introduce SafeCoder to address this gap. SafeCoder performs security-centric fine-tuning using a diverse and high-quality dataset that we collected using an automated pipeline. We integrate the security fine-tuning with standard instruction tuning to facilitate a joint optimization of both security and utility. Despite its simplicity, we show that SafeCoder is effective across a variety of popular LMs and datasets. It is able to drastically improve security (by about 30%), while preserving utility.
|
[
"['Jingxuan He' 'Mark Vero' 'Gabriela Krasnopolska' 'Martin Vechev']"
] |
null | null |
2402.09524
| null | null |
http://arxiv.org/pdf/2402.09524v1
|
2024-02-14T19:01:51Z
|
2024-02-14T19:01:51Z
|
Guided Quantum Compression for Higgs Identification
|
Quantum machine learning provides a fundamentally novel and promising approach to analyzing data. However, many data sets are too complex for currently available quantum computers. Consequently, quantum machine learning applications conventionally resort to dimensionality reduction algorithms, e.g., auto-encoders, before passing data through the quantum models. We show that using a classical auto-encoder as an independent preprocessing step can significantly decrease the classification performance of a quantum machine learning algorithm. To ameliorate this issue, we design an architecture that unifies the preprocessing and quantum classification algorithms into a single trainable model: the guided quantum compression model. The utility of this model is demonstrated by using it to identify the Higgs boson in proton-proton collisions at the LHC, where the conventional approach proves ineffective. Conversely, the guided quantum compression model excels at solving this classification problem, achieving good accuracy. Additionally, the model developed herein shows better performance compared to the classical benchmark when using only low-level kinematic features.
|
[
"['Vasilis Belis' 'Patrick Odagiu' 'Michele Grossi' 'Florentin Reiter'\n 'Günther Dissertori' 'Sofia Vallecorsa']"
] |
null | null |
2402.09529
| null | null |
http://arxiv.org/pdf/2402.09529v1
|
2024-02-14T19:09:23Z
|
2024-02-14T19:09:23Z
|
The Manifold Density Function: An Intrinsic Method for the Validation of
Manifold Learning
|
We introduce the manifold density function, which is an intrinsic method to validate manifold learning techniques. Our approach adapts and extends Ripley's $K$-function, and categorizes in an unsupervised setting the extent to which an output of a manifold learning algorithm captures the structure of a latent manifold. Our manifold density function generalizes to broad classes of Riemannian manifolds. In particular, we extend the manifold density function to general two-manifolds using the Gauss-Bonnet theorem, and demonstrate that the manifold density function for hypersurfaces is well approximated using the first Laplacian eigenvalue. We prove desirable convergence and robustness properties.
|
[
"['Benjamin Holmgren' 'Eli Quist' 'Jordan Schupbach' 'Brittany Terese Fasy'\n 'Bastian Rieck']"
] |
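For context on the abstract above: the classical Ripley's $K$-function that the manifold density function adapts is commonly estimated as (a standard textbook form, not the paper's manifold generalization):

```latex
\hat{K}(t) = \frac{|M|}{n(n-1)} \sum_{i=1}^{n} \sum_{j \ne i}
             \mathbf{1}\{ d(x_i, x_j) \le t \},
```

where $|M|$ is the volume of the observation window, $n$ the number of sampled points, and $d$ the underlying metric; deviations of $\hat{K}$ from its expectation under uniform sampling indicate clustering or regularity.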
null | null |
2402.09540
| null | null |
http://arxiv.org/pdf/2402.09540v1
|
2024-02-14T19:31:45Z
|
2024-02-14T19:31:45Z
|
Why Does Differential Privacy with Large Epsilon Defend Against
Practical Membership Inference Attacks?
|
For small privacy parameter $\epsilon$, $\epsilon$-differential privacy (DP) provides a strong worst-case guarantee that no membership inference attack (MIA) can succeed at determining whether a person's data was used to train a machine learning model. The guarantee of DP is worst-case because: a) it holds even if the attacker already knows the records of all but one person in the data set; and b) it holds uniformly over all data sets. In practical applications, such a worst-case guarantee may be overkill: practical attackers may lack exact knowledge of (nearly all of) the private data, and our data set might be easier to defend, in some sense, than the worst-case data set. Such considerations have motivated the industrial deployment of DP models with large privacy parameter (e.g. $\epsilon \geq 7$), and it has been observed empirically that DP with large $\epsilon$ can successfully defend against state-of-the-art MIAs. Existing DP theory cannot explain these empirical findings: e.g., the theoretical privacy guarantees of $\epsilon \geq 7$ are essentially vacuous. In this paper, we aim to close this gap between theory and practice and understand why a large DP parameter can prevent practical MIAs. To tackle this problem, we propose a new privacy notion called practical membership privacy (PMP). PMP models a practical attacker's uncertainty about the contents of the private data. The PMP parameter has a natural interpretation in terms of the success rate of a practical MIA on a given data set. We quantitatively analyze the PMP parameter of two fundamental DP mechanisms: the exponential mechanism and the Gaussian mechanism. Our analysis reveals that a large DP parameter often translates into a much smaller PMP parameter, which guarantees strong privacy against practical MIAs. Using our findings, we offer principled guidance for practitioners in choosing the DP parameter.
|
[
"['Andrew Lowy' 'Zhuohang Li' 'Jing Liu' 'Toshiaki Koike-Akino'\n 'Kieran Parsons' 'Ye Wang']"
] |
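As a companion to the abstract above, here is a minimal sketch of the Gaussian mechanism it analyzes, using the standard $(\epsilon, \delta)$-DP calibration $\sigma = \Delta_2 \sqrt{2 \ln(1.25/\delta)} / \epsilon$ (valid for $\epsilon \le 1$); the query and its sensitivity are illustrative assumptions:

```python
# Minimal sketch of the (epsilon, delta)-DP Gaussian mechanism.
# The query (a bounded mean) and its L2-sensitivity are illustrative.
import numpy as np

def gaussian_mechanism(value, l2_sensitivity, epsilon, delta, rng=None):
    """Release `value` with Gaussian noise; standard calibration,
    valid for epsilon <= 1."""
    rng = rng or np.random.default_rng()
    sigma = l2_sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return value + rng.normal(0.0, sigma, size=np.shape(value))

# Example: privately release the mean of n values in [0, 1];
# changing one record moves the mean by at most 1/n.
data = np.random.default_rng(0).random(1000)
private_mean = gaussian_mechanism(data.mean(), l2_sensitivity=1 / len(data),
                                  epsilon=0.5, delta=1e-5)
print(private_mean)
```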
null | null |
2402.09542
| null | null |
http://arxiv.org/pdf/2402.09542v2
|
2024-06-07T09:51:12Z
|
2024-02-14T19:34:28Z
|
Layerwise Proximal Replay: A Proximal Point Method for Online Continual
Learning
|
In online continual learning, a neural network incrementally learns from a non-i.i.d. data stream. Nearly all online continual learning methods employ experience replay to simultaneously prevent catastrophic forgetting and underfitting on past data. Our work demonstrates a limitation of this approach: neural networks trained with experience replay tend to have unstable optimization trajectories, impeding their overall accuracy. Surprisingly, these instabilities persist even when the replay buffer stores all previous training examples, suggesting that this issue is orthogonal to catastrophic forgetting. We minimize these instabilities through a simple modification of the optimization geometry. Our solution, Layerwise Proximal Replay (LPR), balances learning from new and replay data while only allowing for gradual changes in the hidden activation of past data. We demonstrate that LPR consistently improves replay-based online continual learning methods across multiple problem settings, regardless of the amount of available replay memory.
|
[
"['Jason Yoo' 'Yunpeng Liu' 'Frank Wood' 'Geoff Pleiss']"
] |
null | null |
2402.09550
| null | null |
http://arxiv.org/pdf/2402.09550v1
|
2024-02-14T20:01:41Z
|
2024-02-14T20:01:41Z
|
Dataset Clustering for Improved Offline Policy Learning
|
Offline policy learning aims to discover decision-making policies from previously-collected datasets without additional online interactions with the environment. As the training dataset is fixed, its quality becomes a crucial determining factor in the performance of the learned policy. This paper studies a dataset characteristic that we refer to as multi-behavior, indicating that the dataset is collected using multiple policies that exhibit distinct behaviors. In contrast, a uni-behavior dataset would be collected solely using one policy. We observed that policies learned from a uni-behavior dataset typically outperform those learned from multi-behavior datasets, despite the uni-behavior dataset having fewer examples and less diversity. Therefore, we propose a behavior-aware deep clustering approach that partitions multi-behavior datasets into several uni-behavior subsets, thereby benefiting downstream policy learning. Our approach is flexible and effective; it can adaptively estimate the number of clusters while demonstrating high clustering accuracy, achieving an average Adjusted Rand Index of 0.987 across various continuous control task datasets. Finally, we present improved policy learning examples using dataset clustering and discuss several potential scenarios where our approach might benefit the offline policy learning community.
|
[
"['Qiang Wang' 'Yixin Deng' 'Francisco Roldan Sanchez' 'Keru Wang'\n 'Kevin McGuinness' \"Noel O'Connor\" 'Stephen J. Redmond']"
] |
null | null |
2402.09553
| null | null |
http://arxiv.org/abs/2402.09553v1
|
2024-02-14T20:10:30Z
|
2024-02-14T20:10:30Z
|
Statistical and Machine Learning Models for Predicting Fire and Other
Emergency Events
|
Emergency events in a city cause considerable economic loss to individuals, their families, and the community. Accurate and timely prediction of events can help the emergency fire and rescue services in preparing for and mitigating the consequences of emergency events. In this paper, we present a systematic development of predictive models for various types of emergency events in the City of Edmonton, Canada. We present methods for (i) data collection and dataset development; (ii) descriptive analysis of each event type and its characteristics at different spatiotemporal levels; (iii) feature analysis and selection based on correlation coefficient analysis and feature importance analysis; and (iv) development of prediction models for the likelihood of occurrence of each event type at different temporal and spatial resolutions. We analyze the association of event types with socioeconomic and demographic data at the neighborhood level, identify a set of predictors for each event type, and develop predictive models with negative binomial regression. We conduct evaluations at neighborhood and fire station service area levels. Our results show that the models perform well for most of the event types with acceptable prediction errors for weekly and monthly periods. The evaluation shows that the prediction accuracy is consistent at the level of the fire station, so the predictions can be used in management by fire rescue service departments for planning resource allocation for these time periods. We also examine the impact of the COVID-19 pandemic on the occurrence of events and on the accuracy of event predictor models. Our findings show that COVID-19 had a significant impact on the performance of the event prediction models.
|
[
"['Dilli Prasad Sharma' 'Nasim Beigi-Mohammadi' 'Hongxiang Geng'\n 'Dawn Dixon' 'Rob Madro' 'Phil Emmenegger' 'Carlos Tobar' 'Jeff Li'\n 'Alberto Leon-Garcia']"
] |
null | null |
2402.09558
| null | null |
http://arxiv.org/pdf/2402.09558v1
|
2024-02-14T20:19:24Z
|
2024-02-14T20:19:24Z
|
Bidirectional Generative Pre-training for Improving Time Series
Representation Learning
|
Learning time-series representations for discriminative tasks has been a long-standing challenge. Current pre-training methods are limited to either unidirectional next-token prediction or randomly masked token prediction. We propose a novel architecture called Bidirectional Timely Generative Pre-trained Transformer (BiTimelyGPT), which pre-trains on time-series data by both next-token and previous-token prediction in alternating transformer layers. This pre-training task preserves the original distribution and shape of the time-series data. Additionally, the full-rank forward and backward attention matrices exhibit more expressive representation capabilities. Using biosignal data, BiTimelyGPT demonstrates superior performance in predicting neurological functionality, disease diagnosis, and physiological signs. By visualizing the attention heatmap, we observe that the pre-trained BiTimelyGPT can identify discriminative segments from time-series sequences, even more so after fine-tuning on the task.
|
[
"['Ziyang Song' 'Qincheng Lu' 'He Zhu' 'Yue Li']"
] |
null | null |
2402.09560
| null | null |
http://arxiv.org/pdf/2402.09560v1
|
2024-02-14T20:21:43Z
|
2024-02-14T20:21:43Z
|
Distribution-Free Rates in Neyman-Pearson Classification
|
We consider the problem of Neyman-Pearson classification which models unbalanced classification settings where error w.r.t. a distribution $\mu_1$ is to be minimized subject to low error w.r.t. a different distribution $\mu_0$. Given a fixed VC class $\mathcal{H}$ of classifiers to be minimized over, we provide a full characterization of possible distribution-free rates, i.e., minimax rates over the space of all pairs $(\mu_0, \mu_1)$. The rates involve a dichotomy between hard and easy classes $\mathcal{H}$ as characterized by a simple geometric condition, a three-points-separation condition, loosely related to VC dimension.
|
[
"['Mohammadreza M. Kalan' 'Samory Kpotufe']"
] |
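The Neyman-Pearson objective in the abstract above can be written out as follows (a standard formulation, with $\alpha$ the tolerated error under $\mu_0$):

```latex
\min_{h \in \mathcal{H}} \; \mu_1\bigl( h(X) = 0 \bigr)
\quad \text{subject to} \quad
\mu_0\bigl( h(X) = 1 \bigr) \le \alpha ,
```

i.e., minimize the miss rate under $\mu_1$ while keeping the false-alarm rate under $\mu_0$ at most $\alpha$; the paper characterizes the minimax rates attainable for this problem uniformly over all pairs $(\mu_0, \mu_1)$.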
null | null |
2402.09573
| null | null |
http://arxiv.org/pdf/2402.09573v2
|
2024-06-13T21:22:02Z
|
2024-02-14T20:48:58Z
|
Changes by Butterflies: Farsighted Forecasting with Group Reservoir
Transformer
|
In chaos theory, a minor divergence between two initial conditions exhibits exponential amplification over time, leading to far-apart outcomes, known as the butterfly effect. Thus, the distant future is full of uncertainty and hard to forecast. We introduce the Group Reservoir Transformer to predict long-term events more accurately and robustly by overcoming two challenges in chaotic systems: (1) the extensive historical sequences and (2) the sensitivity to initial conditions. A reservoir is attached to a Transformer to efficiently handle arbitrarily long historical lengths, with an extension to a group of reservoirs to reduce sensitivity to initialization variations. Our architecture consistently outperforms state-of-the-art models in multivariate time series, including TimeLLM, GPT2TS, PatchTST, DLinear, TimeNet, and the baseline Transformer, with an error reduction of up to 59% in various fields such as ETTh, ETTm, and air quality, demonstrating that an ensemble of butterfly learning can improve the adequacy and certainty of event prediction, despite the travel time to the unknown future.
|
[
"['Md Kowsher' 'Abdul Rafae Khan' 'Jia Xu']"
] |
null | null |
2402.09580
| null | null |
http://arxiv.org/pdf/2402.09580v1
|
2024-02-14T21:03:08Z
|
2024-02-14T21:03:08Z
|
Complexity Reduction in Machine Learning-Based Wireless Positioning:
Minimum Description Features
|
A recent line of research has been investigating deep learning approaches to wireless positioning (WP). Although these WP algorithms have demonstrated high accuracy and robust performance against diverse channel conditions, they also have a major drawback: they require processing high-dimensional features, which can be prohibitive for mobile applications. In this work, we design a positioning neural network (P-NN) that substantially reduces the complexity of deep learning-based WP through carefully crafted minimum description features. Our feature selection is based on maximum power measurements and their temporal locations to convey the information needed to conduct WP. We also develop a novel methodology for adaptively selecting the size of the feature space, which balances the expected amount of useful information against classification capability, quantified using information-theoretic measures on the signal bin selection. Numerical results show that P-NN achieves a significant advantage in the performance-complexity tradeoff over deep learning baselines that leverage the full power delay profile (PDP).
|
[
"['Myeung Suk Oh' 'Anindya Bijoy Das' 'Taejoon Kim' 'David J. Love'\n 'Christopher G. Brinton']"
] |
null | null |
2402.09586
| null | null |
http://arxiv.org/pdf/2402.09586v1
|
2024-02-14T21:29:28Z
|
2024-02-14T21:29:28Z
|
WERank: Towards Rank Degradation Prevention for Self-Supervised Learning
Using Weight Regularization
|
A common phenomenon confining the representation quality in Self-Supervised Learning (SSL) is dimensional collapse (also known as rank degeneration), where the learned representations are mapped to a low-dimensional subspace of the representation space. State-of-the-art SSL methods have been shown to suffer from dimensional collapse and fall short of maintaining full rank. Recent approaches to prevent this problem have proposed using contrastive losses, regularization techniques, or architectural tricks. We propose WERank, a new regularizer on the weight parameters of the network to prevent rank degeneration at different layers of the network. We provide empirical evidence and mathematical justification to demonstrate the effectiveness of the proposed regularization method in preventing dimensional collapse. We verify the impact of WERank on graph SSL, where dimensional collapse is more pronounced due to the lack of proper data augmentation. We empirically demonstrate that WERank is effective in helping BYOL achieve higher rank during SSL pre-training and consequently higher downstream accuracy during evaluation probing. Ablation studies and experimental analysis shed light on the underlying factors behind the performance gains of the proposed approach.
|
[
"['Ali Saheb Pasand' 'Reza Moravej' 'Mahdi Biparva' 'Ali Ghodsi']"
] |
null | null |
2402.09589
| null | null |
http://arxiv.org/pdf/2402.09589v1
|
2024-02-14T21:33:18Z
|
2024-02-14T21:33:18Z
|
MLTCP: Congestion Control for DNN Training
|
We present MLTCP, a technique to augment today's congestion control algorithms to accelerate DNN training jobs in shared GPU clusters. MLTCP enables the communication phases of jobs that compete for network bandwidth to interleave with each other, thereby utilizing the network efficiently. At the heart of MLTCP lies a very simple principle based on a key conceptual insight: DNN training flows should scale their congestion window size based on the number of bytes sent at each training iteration. We show that integrating this principle into today's congestion control protocols is straightforward: by adding 30-60 lines of code to Reno, CUBIC, or DCQCN, MLTCP stabilizes flows of different jobs into an interleaved state within a few training iterations, regardless of the number of competing flows or the start time of each flow. Our experiments with popular DNN training jobs demonstrate that enabling MLTCP accelerates the average and 99th percentile training iteration time by up to 2x and 4x, respectively.
|
[
"['Sudarsanan Rajasekaran' 'Sanjoli Narang' 'Anton A. Zabreyko'\n 'Manya Ghobadi']"
] |
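The core MLTCP principle above, scaling a flow's congestion-window growth by the bytes its job sent in the last training iteration, fits in a few lines. The sketch below is a toy model of that bias on Reno-style additive increase; the constants and the update hook are illustrative assumptions, not the paper's kernel patch:

```python
# Toy model of the MLTCP principle: weight Reno-style additive increase
# by the bytes a DNN-training flow sent in its last iteration, so
# heavier flows grow faster and competing jobs settle into interleaved
# communication phases. Constants and hook points are illustrative.

def mltcp_additive_increase(cwnd, bytes_last_iter, base=1.0, scale=1e-8):
    """Per-RTT congestion-window growth, biased by training volume."""
    weight = 1.0 + scale * bytes_last_iter
    return cwnd + base * weight

cwnd = 10.0
for rtt in range(5):
    cwnd = mltcp_additive_increase(cwnd, bytes_last_iter=50_000_000)
    print(f"after RTT {rtt + 1}: cwnd = {cwnd:.2f}")
```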
null | null |
2402.09591
| null | null |
http://arxiv.org/pdf/2402.09591v2
|
2024-06-11T01:50:34Z
|
2024-02-14T21:34:44Z
|
Reconstructing the Geometry of Random Geometric Graphs
|
Random geometric graphs are random graph models defined on metric spaces. Such a model is defined by first sampling points from a metric space and then connecting each pair of sampled points with a probability that depends on their distance, independently among pairs. In this work, we show how to efficiently reconstruct the geometry of the underlying space from the sampled graph under the manifold assumption, i.e., assuming that the underlying space is a low-dimensional manifold and that the connection probability is a strictly decreasing function of the Euclidean distance between the points in a given embedding of the manifold in $\mathbb{R}^N$. Our work complements a large body of work on manifold learning, where the goal is to recover a manifold from points sampled in the manifold along with their (approximate) distances.
|
[
"['Han Huang' 'Pakawut Jiradilok' 'Elchanan Mossel']"
] |
null | null |
2402.09596
| null | null |
http://arxiv.org/pdf/2402.09596v1
|
2024-02-14T22:00:57Z
|
2024-02-14T22:00:57Z
|
Pulmonologists-Level lung cancer detection based on standard blood test
results and smoking status using an explainable machine learning approach
|
Lung cancer (LC) remains the primary cause of cancer-related mortality, largely due to late-stage diagnoses. Effective strategies for early detection are therefore of paramount importance. In recent years, machine learning (ML) has demonstrated considerable potential in healthcare by facilitating the detection of various diseases. In this retrospective development and validation study, we developed an ML model based on dynamic ensemble selection (DES) for LC detection. The model leverages standard blood sample analysis and smoking history data from a large population at risk in Denmark. The study includes all patients examined on suspicion of LC in the Region of Southern Denmark from 2009 to 2018. We validated and compared the predictions of the DES model with diagnoses provided by five pulmonologists. Among the 38,944 patients, 9,940 had complete data, of which 2,505 (25%) had LC. The DES model achieved an area under the ROC curve of 0.77 $\pm$ 0.01, sensitivity of 76.2% $\pm$ 2.4%, specificity of 63.8% $\pm$ 2.3%, positive predictive value of 41.6% $\pm$ 1.2%, and F$_1$-score of 53.8% $\pm$ 1.1%. The DES model outperformed all five pulmonologists, achieving a sensitivity 9% higher than their average. The model identified smoking status, age, total calcium levels, neutrophil count, and lactate dehydrogenase as the most important factors for the detection of LC. The results highlight the successful application of the ML approach in detecting LC, surpassing pulmonologists' performance. Incorporating clinical and laboratory data in future risk assessment models can improve decision-making and facilitate timely referrals.
|
[
"['Ricco Noel Hansen Flyckt' 'Louise Sjodsholm'\n 'Margrethe Høstgaard Bang Henriksen' 'Claus Lohman Brasen' 'Ali Ebrahimi'\n 'Ole Hilberg' 'Torben Frøstrup Hansen' 'Uffe Kock Wiil'\n 'Lars Henrik Jensen' 'Abdolrahman Peimankar']"
] |
null | null |
2402.09598
| null | null |
http://arxiv.org/pdf/2402.09598v1
|
2024-02-14T22:10:42Z
|
2024-02-14T22:10:42Z
|
MCMC-driven learning
|
This paper is intended to appear as a chapter for the Handbook of Markov Chain Monte Carlo. The goal of this chapter is to unify various problems at the intersection of Markov chain Monte Carlo (MCMC) and machine learning -- which includes black-box variational inference, adaptive MCMC, normalizing flow construction and transport-assisted MCMC, surrogate-likelihood MCMC, coreset construction for MCMC with big data, Markov chain gradient descent, Markovian score climbing, and more -- within one common framework. By doing so, the theory and methods developed for each may be translated and generalized.
|
[
"['Alexandre Bouchard-Côté' 'Trevor Campbell' 'Geoff Pleiss'\n 'Nikola Surjanovic']"
] |
null | null |
2402.09600
| null | null |
http://arxiv.org/pdf/2402.09600v1
|
2024-02-14T22:15:37Z
|
2024-02-14T22:15:37Z
|
Low-Rank Graph Contrastive Learning for Node Classification
|
Graph Neural Networks (GNNs) have been widely used to learn node representations, with outstanding performance on various tasks such as node classification. However, recent studies have revealed that noise, which inevitably exists in real-world graph data, can considerably degrade the performance of GNNs. In this work, we propose a novel and robust GNN encoder, Low-Rank Graph Contrastive Learning (LR-GCL). Our method performs transductive node classification in two steps. First, the LR-GCL encoder is trained by prototypical contrastive learning with low-rank regularization. Next, using the features produced by LR-GCL, a linear transductive classification algorithm is used to classify the unlabeled nodes in the graph. LR-GCL is inspired by the low-frequency property of the graph data and its labels, and it is also theoretically motivated by our sharp generalization bound for transductive learning. To the best of our knowledge, our theoretical result is among the first to demonstrate the advantage of low-rank learning in graph contrastive learning, supported by strong empirical performance. Extensive experiments on public benchmarks demonstrate the superior performance of LR-GCL and the robustness of the learned node representations. The code of LR-GCL is available at https://anonymous.4open.science/r/Low-Rank_Graph_Contrastive_Learning-64A6/.
|
[
"['Yancheng Wang' 'Yingzhen Yang']"
] |
null | null |
2402.09603
| null | null |
http://arxiv.org/pdf/2402.09603v1
|
2024-02-14T22:23:35Z
|
2024-02-14T22:23:35Z
|
Scalable Graph Self-Supervised Learning
|
In regularization-based Self-Supervised Learning (SSL) methods for graphs, computational complexity increases with the number of nodes in the graph and the embedding dimension. To improve the scalability of non-contrastive graph SSL, we propose a novel approach to reduce the cost of computing the covariance matrix for pre-training loss functions with volume-maximization terms. Our work focuses on reducing the cost associated with the loss computation via graph node or dimension sampling. We provide theoretical insight into why dimension sampling results in accurate loss computations and support it with a mathematical derivation of the novel approach. We develop our experimental setup on node-level graph prediction tasks, where SSL pre-training has been shown to be difficult due to the large size of real-world graphs. Our experiments demonstrate that the cost associated with the loss computation can be reduced via node or dimension sampling without lowering downstream performance; indeed, sampling mostly results in improved downstream performance. Ablation studies and experimental analysis are provided to untangle the role of the different factors in the experimental setup.
|
[
"['Ali Saheb Pasand' 'Reza Moravej' 'Mahdi Biparva' 'Raika Karimi'\n 'Ali Ghodsi']"
] |
null | null |
2402.09608
| null | null |
http://arxiv.org/pdf/2402.09608v1
|
2024-02-14T22:32:00Z
|
2024-02-14T22:32:00Z
|
Exact, Fast and Expressive Poisson Point Processes via Squared Neural
Families
|
We introduce squared neural Poisson point processes (SNEPPPs) by parameterising the intensity function by the squared norm of a two-layer neural network. When the hidden layer is fixed and the second layer has a single neuron, our approach resembles previous uses of squared Gaussian process or kernel methods, but allowing the hidden layer to be learnt provides additional flexibility. In many cases of interest, the integrated intensity function admits a closed form and can be computed in quadratic time in the number of hidden neurons. We enumerate far more such cases than have previously been discussed. Our approach is more memory- and time-efficient than naive implementations of squared or exponentiated kernel methods or Gaussian processes. Maximum likelihood and maximum a posteriori estimates in a reparameterisation of the final layer of the intensity function can be obtained by solving a (strongly) convex optimisation problem using projected gradient descent. We demonstrate SNEPPPs on real and synthetic benchmarks, and provide a software implementation: https://github.com/RussellTsuchida/snefy.
|
[
"['Russell Tsuchida' 'Cheng Soon Ong' 'Dino Sejdinovic']"
] |
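One way to write out the parameterisation described above (our notation, and a sketch of why the integrated intensity costs quadratic time in the number of hidden neurons $m$):

```latex
\lambda(x) = \bigl\| V \, \sigma(Wx + b) \bigr\|_2^2 ,
\qquad
\int_T \lambda(x)\,dx
  = \operatorname{tr}\!\Bigl( V^{\top} V
      \int_T \sigma(Wx + b)\,\sigma(Wx + b)^{\top}\,dx \Bigr),
```

so the integrated intensity reduces to an $m \times m$ matrix of pairwise integrals of hidden-unit activations, which admits a closed form for the activation/measure pairs the paper enumerates.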
null | null |
2402.09611
| null | null |
http://arxiv.org/pdf/2402.09611v1
|
2024-02-14T22:57:03Z
|
2024-02-14T22:57:03Z
|
Towards Privacy-Aware Sign Language Translation at Scale
|
A major impediment to the advancement of sign language translation (SLT) is data scarcity. Much of the sign language data currently available on the web cannot be used for training supervised models due to the lack of aligned captions. Furthermore, scaling SLT using large-scale web-scraped datasets bears privacy risks due to the presence of biometric information, which the responsible development of SLT technologies should account for. In this work, we propose a two-stage framework for privacy-aware SLT at scale that addresses both of these issues. We introduce SSVP-SLT, which leverages self-supervised video pretraining on anonymized and unannotated videos, followed by supervised SLT finetuning on a curated parallel dataset. SSVP-SLT achieves state-of-the-art finetuned and zero-shot gloss-free SLT performance on the How2Sign dataset, outperforming the strongest respective baselines by over 3 BLEU-4. Based on controlled experiments, we further discuss the advantages and limitations of self-supervised pretraining and anonymization via facial obfuscation for SLT.
|
[
"['Phillip Rust' 'Bowen Shi' 'Skyler Wang' 'Necati Cihan Camgöz'\n 'Jean Maillard']"
] |
null | null |
2402.09615
| null | null |
http://arxiv.org/pdf/2402.09615v4
|
2024-06-03T22:38:04Z
|
2024-02-14T23:09:15Z
|
API Pack: A Massive Multi-Programming Language Dataset for API Call
Generation
|
We introduce API Pack, a massive multi-programming language dataset containing more than 1 million instruction-API call pairs to improve the API call generation capabilities of large language models. By fine-tuning CodeLlama-13B on 20,000 Python instances from API Pack, we enable it to outperform GPT-3.5 and GPT-4 in generating unseen API calls. Fine-tuning on API Pack also facilitates cross-programming language generalization by leveraging a large amount of data in one language and small amounts of data from other languages. Scaling the training data to 1 million instances further improves the model's ability to generalize to new APIs not used in training. To facilitate further research, we open-source the API Pack dataset, trained model, and associated source code at https://github.com/zguo0525/API-Pack.
|
[
"['Zhen Guo' 'Adriana Meza Soria' 'Wei Sun' 'Yikang Shen' 'Rameswar Panda']"
] |
null | null |
2402.09623
| null | null |
http://arxiv.org/pdf/2402.09623v2
|
2024-05-15T04:38:46Z
|
2024-02-14T23:57:19Z
|
Conformalized Adaptive Forecasting of Heterogeneous Trajectories
|
This paper presents a new conformal method for generating simultaneous forecasting bands guaranteed to cover the entire path of a new random trajectory with sufficiently high probability. Prompted by the need for dependable uncertainty estimates in motion planning applications where the behavior of diverse objects may be more or less unpredictable, we blend different techniques from online conformal prediction of single and multiple time series, as well as ideas for addressing heteroscedasticity in regression. This solution is both principled, providing precise finite-sample guarantees, and effective, often leading to more informative predictions than prior methods.
|
[
"['Yanfei Zhou' 'Lars Lindemann' 'Matteo Sesia']"
] |
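The basic construction the paper above builds on can be sketched quickly: a split-conformal band calibrated on path-wise maximum residuals covers an entire trajectory simultaneously. The sketch below is that baseline under i.i.d. trajectories, not the paper's adaptive, heteroscedasticity-aware method:

```python
# Split-conformal simultaneous band: calibrate on the max-over-time
# absolute residual so the band covers the whole path w.p. ~1 - alpha.
# Baseline construction only; the paper's method adapts band widths.
import numpy as np

def simultaneous_band(pred_cal, y_cal, pred_test, alpha=0.1):
    # pred_cal, y_cal: (n_cal, horizon) forecasts and true trajectories.
    scores = np.abs(y_cal - pred_cal).max(axis=1)    # path-wise residual
    n = len(scores)
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(scores, level, method="higher")  # conformal quantile
    return pred_test - q, pred_test + q

rng = np.random.default_rng(0)
pred_cal = rng.normal(size=(200, 10))
y_cal = pred_cal + rng.normal(size=(200, 10))
lo, hi = simultaneous_band(pred_cal, y_cal, pred_test=np.zeros(10))
print(hi - lo)  # constant width: the motivation for adaptive bands
```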
null | null |
2402.09629
| null | null |
http://arxiv.org/pdf/2402.09629v1
|
2024-02-15T00:14:41Z
|
2024-02-15T00:14:41Z
|
Smart Information Exchange for Unsupervised Federated Learning via
Reinforcement Learning
|
One of the main challenges of decentralized machine learning paradigms such as Federated Learning (FL) is the presence of local non-i.i.d. datasets. Device-to-device (D2D) transfers between distributed devices have been shown to be an effective tool for dealing with this problem and are robust to stragglers. In an unsupervised case, however, it is not obvious how data exchanges should take place due to the absence of labels. In this paper, we propose an approach to create an optimal graph for data transfer using Reinforcement Learning. The goal is to form links that will provide the most benefit considering the environment's constraints and improve convergence speed in an unsupervised FL environment. Numerical analysis shows the advantages of the proposed method over different available FL schemes and benchmark datasets in terms of convergence speed and straggler resilience.
|
[
"['Seohyun Lee' 'Anindya Bijoy Das' 'Satyavrat Wagle'\n 'Christopher G. Brinton']"
] |
null | null |
2402.09631
| null | null |
http://arxiv.org/pdf/2402.09631v6
|
2024-07-05T08:14:29Z
|
2024-02-15T00:20:30Z
|
Representation Surgery: Theory and Practice of Affine Steering
|
Language models often exhibit undesirable behavior, e.g., generating toxic or gender-biased text. In the case of neural language models, an encoding of the undesirable behavior is often present in the model's representations. Thus, one natural (and common) approach to prevent the model from exhibiting undesirable behavior is to steer the model's representations in a manner that reduces the probability of it generating undesirable text. This paper investigates the formal and empirical properties of steering functions, i.e., transformations of the neural language model's representations that alter its behavior. First, we derive two optimal, in the least-squares sense, affine steering functions under different constraints. Our theory provides justification for existing approaches and offers a novel, improved steering approach. Second, we offer a series of experiments that demonstrate the empirical effectiveness of the methods in mitigating bias and reducing toxic generation.
|
[
"['Shashwat Singh' 'Shauli Ravfogel' 'Jonathan Herzig' 'Roee Aharoni'\n 'Ryan Cotterell' 'Ponnurangam Kumaraguru']"
] |
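The generic object the abstract above formalises, an affine steering function $h \mapsto Wh + b$, can be fit by ordinary least squares given paired representations of undesired and desired text. The pairing and the unconstrained fit below are our assumptions for illustration; the paper derives optimal maps under specific constraints:

```python
# Fit an affine steering function h -> h @ W + b by least squares,
# mapping "source" (undesired) representations onto "target" (desired)
# ones. Paired data and the unconstrained fit are illustrative.
import numpy as np

def fit_affine_steer(H_src, H_tgt):
    # Solve min_A || [H_src, 1] A - H_tgt ||_F via lstsq.
    X = np.hstack([H_src, np.ones((len(H_src), 1))])
    A, *_ = np.linalg.lstsq(X, H_tgt, rcond=None)
    return A[:-1], A[-1]          # W: (d, d), b: (d,)

def steer(h, W, b):
    return h @ W + b

rng = np.random.default_rng(0)
H_src = rng.normal(size=(64, 16))
H_tgt = rng.normal(size=(64, 16))
W, b = fit_affine_steer(H_src, H_tgt)
print(steer(H_src, W, b).shape)   # (64, 16)
```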
null | null |
2402.09638
| null | null |
http://arxiv.org/pdf/2402.09638v1
|
2024-02-15T00:52:34Z
|
2024-02-15T00:52:34Z
|
Multi-Fidelity Methods for Optimization: A Survey
|
Real-world black-box optimization often involves time-consuming or costly experiments and simulations. Multi-fidelity optimization (MFO) stands out as a cost-effective strategy that balances high-fidelity accuracy with computational efficiency through a hierarchical fidelity approach. This survey presents a systematic exploration of MFO, underpinned by a novel text mining framework based on a pre-trained language model. We delve deep into the foundational principles and methodologies of MFO, focusing on three core components -- multi-fidelity surrogate models, fidelity management strategies, and optimization techniques. Additionally, this survey highlights the diverse applications of MFO across several key domains, including machine learning, engineering design optimization, and scientific discovery, showcasing the adaptability and effectiveness of MFO in tackling complex computational challenges. Furthermore, we also envision several emerging challenges and prospects in the MFO landscape, spanning scalability, the composition of lower fidelities, and the integration of human-in-the-loop approaches at the algorithmic level. We also address critical issues related to benchmarking and the advancement of open science within the MFO community. Overall, this survey aims to catalyze further research and foster collaborations in MFO, setting the stage for future innovations and breakthroughs in the field.
|
[
"['Ke Li' 'Fan Li']"
] |
null | null |
2402.09650
| null | null |
http://arxiv.org/pdf/2402.09650v1
|
2024-02-15T01:25:19Z
|
2024-02-15T01:25:19Z
|
Foul prediction with estimated poses from soccer broadcast video
|
Recent advances in computer vision have made significant progress in tracking and pose estimation of sports players. However, there have been fewer studies on behavior prediction with pose estimation in sports; in particular, predicting soccer fouls is challenging because each player occupies only a small image region and because it is difficult to make use of information such as the ball and player poses. In our research, we introduce an innovative deep learning approach for anticipating soccer fouls. This method integrates video data, bounding box positions, image details, and pose information by curating a novel soccer foul dataset. Our model utilizes a combination of convolutional and recurrent neural networks (CNNs and RNNs) to effectively merge information from these four modalities. The experimental results show that our full model outperformed the ablated models, and that all of the RNN modules, bounding box position and image, and estimated pose were useful for foul prediction. Our findings have important implications for a deeper understanding of foul play in soccer and provide a valuable reference for future research and practice in this area.
|
[
"['Jiale Fang' 'Calvin Yeung' 'Keisuke Fujii']"
] |
null | null |
2402.09651
| null | null |
http://arxiv.org/pdf/2402.09651v2
|
2024-05-14T04:44:29Z
|
2024-02-15T01:28:18Z
|
Practitioners' Challenges and Perceptions of CI Build Failure
Predictions at Atlassian
|
Continuous Integration (CI) build failures could significantly impact the software development process and teams, such as delaying the release of new features and reducing developers' productivity. In this work, we report on an empirical study that investigates CI build failures throughout product development at Atlassian. Our quantitative analysis found that the repository dimension is the key factor influencing CI build failures. In addition, our qualitative survey revealed that Atlassian developers perceive CI build failures as challenging issues in practice. Furthermore, we found that the CI build prediction can not only provide proactive insight into CI build failures but also facilitate the team's decision-making. Our study sheds light on the challenges and expectations involved in integrating CI build prediction tools into the Bitbucket environment, providing valuable insights for enhancing CI processes.
|
[
"['Yang Hong' 'Chakkrit Tantithamthavorn' 'Jirat Pasuksmit'\n 'Patanamon Thongtanunam' 'Arik Friedman' 'Xing Zhao' 'Anton Krasikov']"
] |
null | null |
2402.09657
| null | null |
http://arxiv.org/pdf/2402.09657v1
|
2024-02-15T01:50:46Z
|
2024-02-15T01:50:46Z
|
Digital versus Analog Transmissions for Federated Learning over Wireless
Networks
|
In this paper, we quantitatively compare two effective communication schemes, digital and analog transmission, for wireless federated learning (FL) over resource-constrained networks, highlighting their essential differences as well as their respective application scenarios. We first examine both digital and analog transmission methods, together with a unified and fair comparison scheme under practical constraints. A universal convergence analysis under various imperfections is established for FL performance evaluation in wireless networks. These analytical results reveal that the fundamental difference between the two paradigms lies in whether communication and computation are jointly designed or not. The digital schemes decouple the communication design from specific FL tasks, making it difficult to support simultaneous uplink transmission from massive numbers of devices with limited bandwidth. In contrast, analog communication allows over-the-air computation (AirComp), thus achieving efficient spectrum utilization. However, computation-oriented analog transmission reduces power efficiency, and its performance is sensitive to computational errors. Finally, numerical simulations are conducted to verify these theoretical observations.
|
[
"['Jiacheng Yao' 'Wei Xu' 'Zhaohui Yang' 'Xiaohu You' 'Mehdi Bennis'\n 'H. Vincent Poor']"
] |
null | null |
2402.09660
| null | null |
http://arxiv.org/pdf/2402.09660v2
|
2024-02-20T23:43:20Z
|
2024-02-15T02:06:06Z
|
User Modeling and User Profiling: A Comprehensive Survey
|
The integration of artificial intelligence (AI) into daily life, particularly through information retrieval and recommender systems, has necessitated advanced user modeling and profiling techniques to deliver personalized experiences. These techniques aim to construct accurate user representations based on the rich amounts of data generated through interactions with these systems. This paper presents a comprehensive survey of the current state, evolution, and future directions of user modeling and profiling research. We provide a historical overview, tracing the development from early stereotype models to the latest deep learning techniques, and propose a novel taxonomy that encompasses all active topics in this research area, including recent trends. Our survey highlights the paradigm shifts towards more sophisticated user profiling methods, emphasizing implicit data collection, multi-behavior modeling, and the integration of graph data structures. We also address the critical need for privacy-preserving techniques and the push towards explainability and fairness in user modeling approaches. By examining the definitions of core terminology, we aim to clarify ambiguities and foster a clearer understanding of the field by proposing two novel encyclopedic definitions of the main terms. Furthermore, we explore the application of user modeling in various domains, such as fake news detection, cybersecurity, and personalized education. This survey serves as a comprehensive resource for researchers and practitioners, offering insights into the evolution of user modeling and profiling and guiding the development of more personalized, ethical, and effective AI systems.
|
[
"['Erasmo Purificato' 'Ludovico Boratto' 'Ernesto William De Luca']"
] |
null | null |
2402.09668
| null | null |
http://arxiv.org/pdf/2402.09668v1
|
2024-02-15T02:27:57Z
|
2024-02-15T02:27:57Z
|
How to Train Data-Efficient LLMs
|
The training of large language models (LLMs) is expensive. In this paper, we study data-efficient approaches for pre-training LLMs, i.e., techniques that aim to optimize the Pareto frontier of model quality and training resource/data consumption. We seek to understand the tradeoffs associated with data selection routines based on (i) expensive-to-compute data-quality estimates, and (ii) maximization of coverage and diversity-based measures in the feature space. Our first technique, Ask-LLM, leverages the zero-shot reasoning capabilities of instruction-tuned LLMs to directly assess the quality of a training example. To target coverage, we propose Density sampling, which models the data distribution to select a diverse sample. In our comparison of 19 samplers, involving hundreds of evaluation tasks and pre-training runs, we find that Ask-LLM and Density are the best methods in their respective categories. Coverage sampling can recover the performance of the full data, while models trained on Ask-LLM data consistently outperform full-data training -- even when we reject 90% of the original dataset, while converging up to 70% faster.
|
[
"['Noveen Sachdeva' 'Benjamin Coleman' 'Wang-Cheng Kang' 'Jianmo Ni'\n 'Lichan Hong' 'Ed H. Chi' 'James Caverlee' 'Julian McAuley'\n 'Derek Zhiyuan Cheng']"
] |
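A coverage-oriented sampler in the spirit of the Density method above can be sketched with a kernel density estimate: score each example's embedding and keep examples with probability inversely proportional to local density, so sparse regions stay represented. The embedding choice and the inverse-density rule are our assumptions, not the paper's exact sampler:

```python
# Sketch of density-driven data selection: keep examples from sparse
# regions of embedding space to maximise coverage. Embedding and
# inverse-density weighting are illustrative assumptions.
import numpy as np
from sklearn.neighbors import KernelDensity

def density_sample(embeddings, k, bandwidth=0.5, rng=None):
    rng = rng or np.random.default_rng()
    kde = KernelDensity(bandwidth=bandwidth).fit(embeddings)
    density = np.exp(kde.score_samples(embeddings))   # p(x_i)
    weights = 1.0 / np.clip(density, 1e-12, None)
    weights /= weights.sum()
    return rng.choice(len(embeddings), size=k, replace=False, p=weights)

emb = np.random.default_rng(0).normal(size=(1000, 8))
keep_idx = density_sample(emb, k=100)
print(keep_idx[:10])
```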
null | null |
2402.09671
| null | null |
http://arxiv.org/pdf/2402.09671v1
|
2024-02-15T02:38:23Z
|
2024-02-15T02:38:23Z
|
Exploiting Alpha Transparency In Language And Vision-Based AI Systems
|
This investigation reveals a novel exploit derived from PNG image file formats, specifically their alpha transparency layer, and its potential to fool multiple AI vision systems. Our method uses this alpha layer as a clandestine channel invisible to human observers but fully actionable by AI image processors. The scope tested for the vulnerability spans representative vision systems from Apple, Microsoft, Google, Salesforce, Nvidia, and Facebook, highlighting the attack's potential breadth. This vulnerability challenges the security protocols of existing and fielded vision systems, from medical imaging to autonomous driving technologies. Our experiments demonstrate that the affected systems, which rely on convolutional neural networks or the latest multimodal language models, cannot quickly mitigate these vulnerabilities through simple patches or updates. Instead, they require retraining and architectural changes, indicating a persistent hole in multimodal technologies without some future adversarial hardening against such vision-language exploits.
|
[
"['David Noever' 'Forrest McKee']"
] |
null | null |
2402.09674
| null | null |
http://arxiv.org/pdf/2402.09674v1
|
2024-02-15T02:54:49Z
|
2024-02-15T02:54:49Z
|
PAL: Proxy-Guided Black-Box Attack on Large Language Models
|
Large Language Models (LLMs) have surged in popularity in recent months, but they have demonstrated concerning capabilities to generate harmful content when manipulated. While techniques like safety fine-tuning aim to minimize harmful use, recent works have shown that LLMs remain vulnerable to attacks that elicit toxic responses. In this work, we introduce the Proxy-Guided Attack on LLMs (PAL), the first optimization-based attack on LLMs in a black-box query-only setting. In particular, it relies on a surrogate model to guide the optimization and a sophisticated loss designed for real-world LLM APIs. Our attack achieves 84% attack success rate (ASR) on GPT-3.5-Turbo and 48% on Llama-2-7B, compared to 4% for the current state of the art. We also propose GCG++, an improvement to the GCG attack that reaches 94% ASR on white-box Llama-2-7B, and the Random-Search Attack on LLMs (RAL), a strong but simple baseline for query-based attacks. We believe the techniques proposed in this work will enable more comprehensive safety testing of LLMs and, in the long term, the development of better security guardrails. The code can be found at https://github.com/chawins/pal.
|
[
"['Chawin Sitawarin' 'Norman Mu' 'David Wagner' 'Alexandre Araujo']"
] |
null | null |
2402.09676
| null | null |
http://arxiv.org/pdf/2402.09676v1
|
2024-02-15T03:05:45Z
|
2024-02-15T03:05:45Z
|
HyperMagNet: A Magnetic Laplacian based Hypergraph Neural Network
|
In data science, hypergraphs are natural models for data exhibiting multi-way relations, whereas graphs only capture pairwise ones. Nonetheless, many proposed hypergraph neural networks effectively reduce hypergraphs to undirected graphs via symmetrized matrix representations, potentially losing important information. We propose an alternative approach to hypergraph neural networks in which the hypergraph is represented as a non-reversible Markov chain. We use this Markov chain to construct a complex Hermitian Laplacian matrix -- the magnetic Laplacian -- which serves as the input to our proposed hypergraph neural network, HyperMagNet. We study HyperMagNet for the task of node classification and demonstrate its effectiveness over graph-reduction-based hypergraph neural networks.
|
[
"['Tatyana Benko' 'Martin Buck' 'Ilya Amburg' 'Stephen J. Young'\n 'Sinan G. Aksoy']"
] |
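For a plain directed graph, the magnetic Laplacian mentioned above encodes edge direction as a complex phase so the matrix stays Hermitian. The sketch below uses a standard charge-parameter construction on a toy digraph; the paper instead builds its Laplacian from a non-reversible Markov chain on the hypergraph:

```python
# Magnetic Laplacian of a directed graph: symmetrised weights with a
# direction-encoding complex phase. Charge q and the plain-graph
# setting are illustrative; the paper works with hypergraph chains.
import numpy as np

def magnetic_laplacian(A, q=0.25):
    A_sym = (A + A.T) / 2.0                 # symmetrised weights
    theta = 2.0 * np.pi * q * (A - A.T)     # direction -> phase
    H = A_sym * np.exp(1j * theta)          # Hermitian "adjacency"
    D = np.diag(A_sym.sum(axis=1))
    return D - H

A = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]], dtype=float)      # directed 3-cycle
L = magnetic_laplacian(A)
print(np.allclose(L, L.conj().T))           # True: Hermitian by design
```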
null | null |
2402.09687
| null | null |
http://arxiv.org/pdf/2402.09687v1
|
2024-02-15T03:45:44Z
|
2024-02-15T03:45:44Z
|
Robust Learning-Augmented Dictionaries
|
We present the first learning-augmented data structure for implementing dictionaries with optimal consistency and robustness. Our data structure, named RobustSL, is a skip list augmented by predictions of access frequencies of elements in a data sequence. With proper predictions, RobustSL has optimal consistency (achieves static optimality). At the same time, it maintains a logarithmic running time for each operation, ensuring optimal robustness, even if predictions are generated adversarially. Therefore, RobustSL has all the advantages of the recent learning-augmented data structures of Lin, Luo, and Woodruff (ICML 2022) and Cao et al. (arXiv 2023), while providing robustness guarantees that are absent in the previous work. Numerical experiments show that RobustSL outperforms alternative data structures using both synthetic and real datasets.
|
[
"['Ali Zeynali' 'Shahin Kamali' 'Mohammad Hajiesmaili']"
] |
null | null |
2402.09695
| null | null |
http://arxiv.org/pdf/2402.09695v1
|
2024-02-15T04:08:49Z
|
2024-02-15T04:08:49Z
|
Reward Poisoning Attack Against Offline Reinforcement Learning
|
We study the problem of reward poisoning attacks against general offline reinforcement learning with deep neural networks for function approximation. We consider a black-box threat model where the attacker is completely oblivious to the learning algorithm and its budget is limited by constraining both the amount of corruption at each data point and the total perturbation. We propose an attack strategy called the 'policy contrast attack'. The high-level idea is to make some low-performing policies appear as high-performing while making high-performing policies appear as low-performing. To the best of our knowledge, we propose the first black-box reward poisoning attack in the general offline RL setting. We provide theoretical insights on the attack design and empirically show that our attack is efficient against current state-of-the-art offline RL algorithms in different kinds of learning datasets.
|
[
"['Yinglun Xu' 'Rohan Gumaste' 'Gagandeep Singh']"
] |
null | null |
2402.09698
| null | null |
http://arxiv.org/pdf/2402.09698v2
|
2024-05-28T07:03:24Z
|
2024-02-15T04:16:59Z
|
Combining Evidence Across Filtrations Using Adjusters
|
In anytime-valid sequential inference, it is known that any admissible procedure must be based on e-processes, which are composite generalizations of test martingales that quantify the accumulated evidence against a composite null hypothesis at any arbitrary stopping time. This paper studies methods for combining e-processes constructed using different information sets (filtrations) for the same null. Although e-processes constructed in the same filtration can be combined effortlessly (e.g., by averaging), e-processes constructed in different filtrations cannot, because their validity in a coarser filtration does not translate to validity in a finer filtration. This issue arises in exchangeability tests, independence tests, and tests for comparing forecasts with lags. We first establish that a class of functions called adjusters allows us to lift e-processes from a coarser filtration into any finer filtration. We then introduce a characterization theorem for adjusters, formalizing a sense in which using adjusters is necessary. There are two major implications. First, if we have a powerful e-process in a coarsened filtration, then we readily have a powerful e-process in the original filtration. Second, when we coarsen the filtration to construct an e-process, there is an asymptotically logarithmic cost of recovering anytime-validity in the original filtration.
|
[
"['Yo Joong Choe' 'Aaditya Ramdas']"
] |
null | null |
2402.09702
| null | null |
http://arxiv.org/pdf/2402.09702v3
|
2024-03-09T01:01:27Z
|
2024-02-15T04:36:52Z
|
Sparse and Faithful Explanations Without Sparse Models
|
Even if a model is not globally sparse, it is possible for decisions made from that model to be accurately and faithfully described by a small number of features. For instance, an application for a large loan might be denied to someone because they have no credit history, which overwhelms any evidence towards their creditworthiness. In this work, we introduce the Sparse Explanation Value (SEV), a new way of measuring sparsity in machine learning models. In the loan denial example above, the SEV is 1 because only one factor is needed to explain why the loan was denied. SEV is a measure of decision sparsity rather than overall model sparsity, and we are able to show that many machine learning models -- even if they are not sparse -- actually have low decision sparsity, as measured by SEV. SEV is defined using movements over a hypercube, allowing SEV to be defined consistently over various model classes, with movement restrictions reflecting real-world constraints. We propose algorithms that reduce SEV without sacrificing accuracy, providing sparse and completely faithful explanations, even without globally sparse models.
|
[
"['Yiyang Sun' 'Zhi Chen' 'Vittorio Orlandi' 'Tong Wang' 'Cynthia Rudin']"
] |
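As a rough illustration of decision sparsity, the brute-force sketch below computes one SEV-style quantity: the fewest features of an instance that must be moved to a reference point before the model's decision flips. The paper defines SEV via movements over a hypercube and distinguishes several variants, so treat this as an assumption-laden approximation rather than the authors' algorithm.

```python
from itertools import combinations

def sev_brute_force(predict, x, reference):
    """Smallest number of features moved to the reference that flips the
    decision -- exponential-time sketch of one SEV-style variant."""
    base = predict(tuple(x))
    d = len(x)
    for size in range(1, d + 1):
        for idx in combinations(range(d), size):
            probe = list(x)
            for i in idx:
                probe[i] = reference[i]      # move feature i to the reference
            if predict(tuple(probe)) != base:
                return size                  # decision explained by `size` features
    return d  # degenerate case: the decision never flips
```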
null | null |
2402.09710
| null | null |
http://arxiv.org/pdf/2402.09710v1
|
2024-02-15T05:06:53Z
|
2024-02-15T05:06:53Z
|
Preserving Data Privacy for ML-driven Applications in Open Radio Access
Networks
|
Deep learning offers a promising solution to improve spectrum access techniques by utilizing data-driven approaches to manage and share limited spectrum resources for emerging applications. For several of these applications, the sensitive wireless data (such as spectrograms) are stored in a shared database or multistakeholder cloud environment and are therefore prone to privacy leaks. This paper aims to address such privacy concerns by examining the representative case study of shared database scenarios in 5G Open Radio Access Network (O-RAN) networks where we have a shared database within the near-real-time (near-RT) RAN intelligent controller. We focus on securing the data that can be used by machine learning (ML) models for spectrum sharing and interference mitigation applications without compromising the model and network performances. The underlying idea is to leverage a (i) Shuffling-based learnable encryption technique to encrypt the data, following which, (ii) employ a custom Vision Transformer (ViT) as the trained ML model that is capable of performing accurate inferences on such encrypted data. The paper offers a thorough analysis and comparisons with analogous convolutional neural networks (CNN) as well as deeper architectures (such as ResNet-50) as baselines. Our experiments showcase that the proposed approach significantly outperforms the baseline CNN, with improvements of 24.5% in accuracy and 23.9% in F1-score when operating on encrypted data. Although the deeper ResNet-50 architecture is slightly more accurate (by 4.4%), the proposed approach reduces the parameter count by 99.32% and thus improves prediction time by nearly 60%.
|
[
"['Pranshav Gajjar' 'Azuka Chiejina' 'Vijay K. Shah']"
] |
null | null |
2402.09711
| null | null |
http://arxiv.org/pdf/2402.09711v1
|
2024-02-15T05:07:39Z
|
2024-02-15T05:07:39Z
|
Node Duplication Improves Cold-start Link Prediction
|
Graph Neural Networks (GNNs) are prominent in graph machine learning and have shown state-of-the-art performance in Link Prediction (LP) tasks. Nonetheless, recent studies show that GNNs struggle to produce good results on low-degree nodes despite their overall strong performance. In practical applications of LP, like recommendation systems, improving performance on low-degree nodes is critical, as it amounts to tackling the cold-start problem of improving the experiences of users with few observed interactions. In this paper, we investigate improving GNNs' LP performance on low-degree nodes while preserving their performance on high-degree nodes and propose a simple yet surprisingly effective augmentation technique called NodeDup. Specifically, NodeDup duplicates low-degree nodes and creates links between nodes and their own duplicates before following the standard supervised LP training scheme. By leveraging a ''multi-view'' perspective for low-degree nodes, NodeDup shows significant LP performance improvements on low-degree nodes without compromising any performance on high-degree nodes. Additionally, as a plug-and-play augmentation module, NodeDup can be easily applied to existing GNNs with very light computational cost. Extensive experiments show that NodeDup achieves 38.49%, 13.34%, and 6.76% improvements on isolated, low-degree, and warm nodes, respectively, on average across all datasets compared to GNNs and state-of-the-art cold-start methods.
|
[
"['Zhichun Guo' 'Tong Zhao' 'Yozen Liu' 'Kaiwen Dong' 'William Shiao'\n 'Neil Shah' 'Nitesh V. Chawla']"
] |
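The augmentation itself is simple enough to sketch on an edge list. Below, each low-degree node gets a duplicate connected to it by a new edge; in a real GNN pipeline the duplicate would also copy the node's feature vector, and the degree threshold is an assumption.

```python
def node_dup(num_nodes, edges, degree_threshold=2):
    """Duplicate low-degree nodes and link each to its duplicate (NodeDup sketch)."""
    degree = [0] * num_nodes
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    new_edges = list(edges)
    next_id = num_nodes                    # fresh ids for the duplicated nodes
    duplicates = {}                        # original node -> its duplicate
    for v in range(num_nodes):
        if degree[v] <= degree_threshold:
            duplicates[v] = next_id
            new_edges.append((v, next_id))  # edge between node and its duplicate
            next_id += 1
    return next_id, new_edges, duplicates
```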
null | null |
2402.09715
| null | null |
http://arxiv.org/pdf/2402.09715v1
|
2024-02-15T05:19:53Z
|
2024-02-15T05:19:53Z
|
DPBalance: Efficient and Fair Privacy Budget Scheduling for Federated
Learning as a Service
|
Federated learning (FL) has emerged as a prevalent distributed machine learning scheme that enables collaborative model training without aggregating raw data. Cloud service providers further embrace Federated Learning as a Service (FLaaS), allowing data analysts to execute their FL training pipelines over differentially-protected data. Due to the intrinsic properties of differential privacy, the enforced privacy level on data blocks can be viewed as a privacy budget that requires careful scheduling to cater to diverse training pipelines. Existing privacy budget scheduling studies prioritize either efficiency or fairness individually. In this paper, we propose DPBalance, a novel privacy budget scheduling mechanism that jointly optimizes both efficiency and fairness. We first develop a comprehensive utility function incorporating data analyst-level dominant shares and FL-specific performance metrics. A sequential allocation mechanism is then designed using the Lagrange multiplier method and effective greedy heuristics. We theoretically prove that DPBalance satisfies Pareto Efficiency, Sharing Incentive, Envy-Freeness, and Weak Strategy Proofness. We also theoretically prove the existence of a fairness-efficiency tradeoff in privacy budgeting. Extensive experiments demonstrate that DPBalance outperforms state-of-the-art solutions, achieving an average efficiency improvement of $1.44\times \sim 3.49\times$, and an average fairness improvement of $1.37\times \sim 24.32\times$.
|
[
"['Yu Liu' 'Zibo Wang' 'Yifei Zhu' 'Chen Chen']"
] |
null | null |
2402.09721
| null | null |
http://arxiv.org/pdf/2402.09721v3
|
2024-06-11T05:52:20Z
|
2024-02-15T05:30:47Z
|
Generalized Principal-Agent Problem with a Learning Agent
|
Generalized principal-agent problems, including Stackelberg games, contract design, and Bayesian persuasion, are a class of economic problems where an agent best responds to a principal's committed strategy. We study repeated generalized principal-agent problems under the assumption that the principal does not have commitment power and the agent uses algorithms to learn to respond to the principal. We reduce this problem to a one-shot generalized principal-agent problem with an approximately-best-responding agent. Using this reduction, we show that: (1) if the agent uses contextual no-regret learning algorithms, then the principal can guarantee a utility that is at least the principal's optimal utility in the classic non-learning model minus the square root of the agent's regret; (2) if the agent uses contextual no-swap-regret learning algorithms, then the principal cannot obtain any utility more than the optimal utility in the non-learning model plus the agent's swap regret. But (3) if the agent uses mean-based learning algorithms (which can be no-regret but not no-swap-regret), then the principal can do significantly better than the non-learning model. These general results not only refine previous results in Stackelberg games and contract design with learning agents but also lead to new results for Bayesian persuasion with a learning agent.
|
[
"['Tao Lin' 'Yiling Chen']"
] |
null | null |
2402.09723
| null | null |
http://arxiv.org/pdf/2402.09723v3
|
2024-05-30T19:40:21Z
|
2024-02-15T05:31:13Z
|
Efficient Prompt Optimization Through the Lens of Best Arm
Identification
|
The remarkable instruction-following capability of large language models (LLMs) has sparked a growing interest in automatically finding good prompts, i.e., prompt optimization. Most existing works follow the scheme of selecting from a pre-generated pool of candidate prompts. However, these designs mainly focus on the generation strategy, while limited attention has been paid to the selection method. Especially, the cost incurred during the selection (e.g., accessing LLM and evaluating the responses) is rarely explicitly considered. To overcome this limitation, this work provides a principled framework, TRIPLE, to efficiently perform prompt selection under an explicit budget constraint. TRIPLE is built on a novel connection established between prompt optimization and fixed-budget best arm identification (BAI-FB) in multi-armed bandits (MAB); thus, it is capable of leveraging the rich toolbox from BAI-FB systematically and also incorporating unique characteristics of prompt optimization. Extensive experiments on multiple well-adopted tasks using various LLMs demonstrate the remarkable performance improvement of TRIPLE over baselines while satisfying the limited budget constraints. As an extension, variants of TRIPLE are proposed to efficiently select examples for few-shot prompts, also achieving superior empirical performance.
|
[
"['Chengshuai Shi' 'Kun Yang' 'Zihan Chen' 'Jundong Li' 'Jing Yang'\n 'Cong Shen']"
] |
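Fixed-budget best arm identification has simple canonical algorithms; the sketch below uses sequential halving, one plausible BAI-FB instantiation, to select a prompt under a hard cap on LLM calls. `eval_fn` is a hypothetical one-call evaluator, and TRIPLE's actual variants may differ from this.

```python
import math

def sequential_halving(prompts, eval_fn, budget):
    """Pick a prompt with a fixed budget of eval_fn calls (BAI-FB sketch)."""
    pool = list(prompts)
    rounds = max(1, math.ceil(math.log2(len(pool))))
    per_round = budget // rounds
    for _ in range(rounds):
        if len(pool) <= 1:
            break
        pulls = max(1, per_round // len(pool))    # calls per surviving prompt
        means = {p: sum(eval_fn(p) for _ in range(pulls)) / pulls for p in pool}
        pool.sort(key=means.get, reverse=True)
        pool = pool[: max(1, len(pool) // 2)]     # halve the candidate pool
    return pool[0]
```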
null | null |
2402.09730
| null | null |
http://arxiv.org/pdf/2402.09730v1
|
2024-02-15T05:59:21Z
|
2024-02-15T05:59:21Z
|
DOF: Accelerating High-order Differential Operators with Forward
Propagation
|
Solving partial differential equations (PDEs) efficiently is essential for analyzing complex physical systems. Recent advancements in leveraging deep learning for solving PDEs have shown significant promise. However, machine learning methods, such as Physics-Informed Neural Networks (PINN), face challenges in handling high-order derivatives of neural network-parameterized functions. Inspired by Forward Laplacian, a recent method of accelerating Laplacian computation, we propose an efficient computational framework, Differential Operator with Forward-propagation (DOF), for calculating general second-order differential operators without losing any precision. We provide rigorous proof of the advantages of our method over existing methods, demonstrating a twofold improvement in efficiency and reduced memory consumption on any architecture. Empirical results illustrate that our method surpasses traditional automatic differentiation (AutoDiff) techniques, achieving a 2x improvement on the MLP structure and a nearly 20x improvement on the MLP with Jacobian sparsity.
|
[
"['Ruichen Li' 'Chuwei Wang' 'Haotian Ye' 'Di He' 'Liwei Wang']"
] |
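The Forward Laplacian idea that DOF generalizes can be shown in a few lines: propagate the tuple (value, Jacobian, Laplacian) through each layer in a single forward pass instead of nesting reverse-mode passes. The tanh MLP below is a minimal numpy sketch under that reading, not the paper's implementation.

```python
import numpy as np

def forward_laplacian_mlp(x, weights, biases):
    """Propagate (value, Jacobian, Laplacian) w.r.t. x through a tanh MLP
    in one forward pass (sketch of the Forward-Laplacian scheme)."""
    f = np.asarray(x, dtype=float)   # layer values
    J = np.eye(f.size)               # Jacobian rows: d f_k / d x
    lap = np.zeros(f.size)           # Laplacian of each output coordinate
    for W, b in zip(weights, biases):
        g = W @ f + b                # pre-activation
        Jg = W @ J                   # chain rule for the Jacobian
        lap_g = W @ lap              # linear maps commute with the Laplacian
        s = np.tanh(g)
        ds = 1.0 - s**2              # sigma'
        dds = -2.0 * s * ds          # sigma''
        f = s
        lap = dds * np.sum(Jg**2, axis=1) + ds * lap_g
        J = ds[:, None] * Jg
    return f, J, lap
```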
null | null |
2402.09735
| null | null |
http://arxiv.org/pdf/2402.09735v1
|
2024-02-15T06:22:50Z
|
2024-02-15T06:22:50Z
|
DFORM: Diffeomorphic vector field alignment for assessing dynamics
across learned models
|
Dynamical system models such as Recurrent Neural Networks (RNNs) have become increasingly popular as hypothesis-generating tools in scientific research. Evaluating the dynamics in such networks is key to understanding their learned generative mechanisms. However, comparison of learned dynamics across models is challenging due to their inherent nonlinearity and because a priori there is no enforced equivalence of their coordinate systems. Here, we propose the DFORM (Diffeomorphic vector field alignment for comparing dynamics across learned models) framework. DFORM learns a nonlinear coordinate transformation which provides a continuous, maximally one-to-one mapping between the trajectories of learned models, thus approximating a diffeomorphism between them. The mismatch between DFORM-transformed vector fields defines the orbital similarity between two models, thus providing a generalization of the concepts of smooth orbital and topological equivalence. As an example, we apply DFORM to models trained on a canonical neuroscience task, showing that learned dynamics may be functionally similar, despite overt differences in attractor landscapes.
|
[
"['Ruiqi Chen' 'Giacomo Vedovati' 'Todd Braver' 'ShiNung Ching']"
] |
null | null |
2402.09739
| null | null |
http://arxiv.org/pdf/2402.09739v2
|
2024-06-13T18:55:23Z
|
2024-02-15T06:36:07Z
|
QuRating: Selecting High-Quality Data for Training Language Models
|
Selecting high-quality pre-training data is important for creating capable language models, but existing methods rely on simple heuristics. We introduce QuRating, a method for selecting pre-training data that can capture human intuitions about data quality. In this paper, we investigate four qualities - writing style, required expertise, facts & trivia, and educational value - and find that LLMs are able to discern these qualities, especially when making pairwise judgments of texts. We train a QuRater model to learn scalar ratings from pairwise judgments, and use it to annotate a 260B training corpus with quality ratings for each of the four criteria. In our experiments, we select 30B tokens according to the different quality ratings and train 1.3B-parameter language models on the selected data. We find that it is important to balance quality and diversity. When we sample using quality ratings as logits over documents, our models obtain lower perplexity and stronger in-context learning performance than baselines. Our best model is based on educational value and performs similarly to a model trained with uniform sampling for 50% more steps. Beyond data selection, we use the quality ratings to construct a training curriculum which improves performance without changing the training dataset. We extensively analyze the quality ratings and discuss their characteristics, biases, and wider implications.
|
[
"['Alexander Wettig' 'Aatmik Gupta' 'Saumya Malik' 'Danqi Chen']"
] |
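One concrete step from the abstract, sampling documents "using quality ratings as logits", reduces to softmax-weighted sampling without replacement. A hedged numpy sketch follows; the temperature and the exact sampling scheme are assumptions.

```python
import numpy as np

def sample_by_quality(ratings, n_docs, temperature=1.0, seed=0):
    """Sample document indices with quality ratings used as logits."""
    rng = np.random.default_rng(seed)
    logits = np.asarray(ratings, dtype=float) / temperature
    probs = np.exp(logits - logits.max())    # numerically stable softmax
    probs /= probs.sum()
    return rng.choice(len(probs), size=n_docs, replace=False, p=probs)
```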
null | null |
2402.09747
| null | null |
http://arxiv.org/pdf/2402.09747v1
|
2024-02-15T06:58:25Z
|
2024-02-15T06:58:25Z
|
Less is more: Ensemble Learning for Retinal Disease Recognition Under
Limited Resources
|
Retinal optical coherence tomography (OCT) images provide crucial insights into the health of the posterior ocular segment. Therefore, the advancement of automated image analysis methods is imperative to equip clinicians and researchers with quantitative data, thereby facilitating informed decision-making. The application of deep learning (DL)-based approaches has gained extensive traction for executing these analysis tasks, demonstrating remarkable performance compared to labor-intensive manual analyses. However, the acquisition of retinal OCT images often presents challenges stemming from privacy concerns and the resource-intensive labeling procedures, which contradicts the prevailing notion that DL models necessitate substantial data volumes for achieving superior performance. Moreover, limitations in available computational resources constrain the progress of high-performance medical artificial intelligence, particularly in less developed regions and countries. This paper introduces a novel ensemble learning mechanism designed for recognizing retinal diseases under limited resources (e.g., data, computation). The mechanism leverages insights from multiple pre-trained models, facilitating the transfer and adaptation of their knowledge to retinal OCT images. This approach establishes a robust model even when confronted with limited labeled data, eliminating the need for an extensive array of parameters, as required in learning from scratch. Comprehensive experimentation on real-world datasets demonstrates that the proposed approach can achieve superior performance in recognizing retinal OCT images, even when dealing with exceedingly restricted labeled datasets. Furthermore, this method obviates the necessity of learning extensive-scale parameters, making it well-suited for deployment in low-resource scenarios.
|
[
"['Jiahao Wang' 'Hong Peng' 'Shengchao Chen' 'Sufen Ren']"
] |
null | null |
2402.09748
| null | null |
http://arxiv.org/pdf/2402.09748v1
|
2024-02-15T06:58:30Z
|
2024-02-15T06:58:30Z
|
Model Compression and Efficient Inference for Large Language Models: A
Survey
|
Transformer-based large language models have achieved tremendous success. However, the significant memory and computational costs incurred during the inference process make it challenging to deploy large models on resource-constrained devices. In this paper, we investigate compression and efficient inference methods for large language models from an algorithmic perspective. Regarding taxonomy, similar to smaller models, compression and acceleration algorithms for large language models can still be categorized into quantization, pruning, distillation, compact architecture design, and dynamic networks. However, large language models have two prominent characteristics compared to smaller models: (1) Most compression algorithms require fine-tuning or even retraining the model after compression. The most notable aspect of large models is the very high cost associated with model fine-tuning or training. Therefore, many algorithms for large models, such as quantization and pruning, start to explore tuning-free algorithms. (2) Large models emphasize versatility and generalization rather than performance on a single task. Hence, many algorithms, such as knowledge distillation, focus on how to preserve their versatility and generalization after compression. Since these two characteristics were not very pronounced in early large models, we further distinguish large language models into medium models and ``real'' large models. Additionally, we also provide an introduction to some mature frameworks for efficient inference of large models, which can support basic compression or acceleration algorithms, greatly facilitating model deployment for users.
|
[
"['Wenxiao Wang' 'Wei Chen' 'Yicong Luo' 'Yongliu Long' 'Zhengkai Lin'\n 'Liye Zhang' 'Binbin Lin' 'Deng Cai' 'Xiaofei He']"
] |
null | null |
2402.09754
| null | null |
http://arxiv.org/pdf/2402.09754v1
|
2024-02-15T07:08:11Z
|
2024-02-15T07:08:11Z
|
Robust SVD Made Easy: A fast and reliable algorithm for large-scale data
analysis
|
The singular value decomposition (SVD) is a crucial tool in machine learning and statistical data analysis. However, it is highly susceptible to outliers in the data matrix. Existing robust SVD algorithms often sacrifice speed for robustness or fail in the presence of only a few outliers. This study introduces an efficient algorithm, called Spherically Normalized SVD, for robust SVD approximation that is highly insensitive to outliers, computationally scalable, and provides accurate approximations of singular vectors. The proposed algorithm achieves remarkable speed by utilizing only two applications of a standard reduced-rank SVD algorithm to appropriately scaled data, significantly outperforming competing algorithms in computation times. To assess the robustness of the approximated singular vectors and their subspaces against data contamination, we introduce new notions of breakdown points for matrix-valued input, including row-wise, column-wise, and block-wise breakdown points. Theoretical and empirical analyses demonstrate that our algorithm exhibits higher breakdown points compared to standard SVD and its modifications. We empirically validate the effectiveness of our approach in applications such as robust low-rank approximation and robust principal component analysis of high-dimensional microarray datasets. Overall, our study presents a highly efficient and robust solution for SVD approximation that overcomes the limitations of existing algorithms in the presence of outliers.
|
[
"['Sangil Han' 'Kyoowon Kim' 'Sungkyu Jung']"
] |
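The abstract says the algorithm applies a standard reduced-rank SVD twice to "appropriately scaled data". One plausible reading, sketched below, spherically normalizes rows to recover right singular vectors and columns to recover left ones, then rescales; the paper's exact scaling may well differ, so treat this purely as an illustration of the idea.

```python
import numpy as np

def spherically_normalized_svd(X, rank):
    """Outlier-insensitive SVD sketch via spherical (unit-norm) scaling."""
    def unit_rows(M):
        norms = np.linalg.norm(M, axis=1, keepdims=True)
        norms[norms == 0] = 1.0
        return M / norms                        # put each row on the unit sphere

    _, _, Vt = np.linalg.svd(unit_rows(X), full_matrices=False)     # right vectors
    U, _, _ = np.linalg.svd(unit_rows(X.T).T, full_matrices=False)  # left vectors
    U, Vt = U[:, :rank], Vt[:rank]
    s = np.abs(np.diag(U.T @ X @ Vt.T))         # rescale singular values on raw data
    return U, s, Vt
```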
null | null |
2402.09761
| null | null |
http://arxiv.org/pdf/2402.09761v1
|
2024-02-15T07:23:34Z
|
2024-02-15T07:23:34Z
|
A Framework For Gait-Based User Demography Estimation Using Inertial
Sensors
|
Human gait has been shown to provide crucial motion cues for various applications. Recognizing patterns in human gait has been widely adopted in various application areas such as security, virtual reality gaming, medical rehabilitation, and ailment identification. Furthermore, wearable inertial sensors have been widely used not only for recording gait but also for predicting users' demography. Machine learning techniques such as deep learning, combined with inertial sensor signals, have shown promising results in recognizing patterns in human gait and estimating users' demography. However, the black-box nature of such deep learning models hinders researchers from uncovering the reasons behind the model's predictions. Therefore, we propose leveraging deep learning and Layer-Wise Relevance Propagation (LRP) to identify the important variables that play a vital role in identifying users' demography, such as age and gender. To assess the efficacy of this approach, we train a deep neural network model on a large sensor-based gait dataset consisting of 745 subjects to identify users' age and gender. Using LRP, we identify the variables relevant for characterizing the gait patterns. Thus, we enable interpretation of non-linear ML models which are experts in identifying the users' demography based on inertial signals. We believe this approach can not only provide clinicians information about the gait parameters relevant to age and gender but can also be expanded to analyze and diagnose gait disorders.
|
[
"['Chinmay Prakash Swami']"
] |
null | null |
2402.09766
| null | null |
http://arxiv.org/pdf/2402.09766v1
|
2024-02-15T07:35:52Z
|
2024-02-15T07:35:52Z
|
From Variability to Stability: Advancing RecSys Benchmarking Practices
|
In the rapidly evolving domain of Recommender Systems (RecSys), new algorithms frequently claim state-of-the-art performance based on evaluations over a limited set of arbitrarily selected datasets. However, this approach may fail to holistically reflect their effectiveness due to the significant impact of dataset characteristics on algorithm performance. Addressing this deficiency, this paper introduces a novel benchmarking methodology to facilitate a fair and robust comparison of RecSys algorithms, thereby advancing evaluation practices. By utilizing a diverse set of $30$ open datasets, including two introduced in this work, and evaluating $11$ collaborative filtering algorithms across $9$ metrics, we critically examine the influence of dataset characteristics on algorithm performance. We further investigate the feasibility of aggregating outcomes from multiple datasets into a unified ranking. Through rigorous experimental analysis, we validate the reliability of our methodology under the variability of datasets, offering a benchmarking strategy that balances quality and computational demands. This methodology enables a fair yet effective means of evaluating RecSys algorithms, providing valuable guidance for future research endeavors.
|
[
"['Valeriy Shevchenko' 'Nikita Belousov' 'Alexey Vasilev'\n 'Vladimir Zholobov' 'Artyom Sosedka' 'Natalia Semenova'\n 'Anna Volodkevich' 'Andrey Savchenko' 'Alexey Zaytsev']"
] |
null | null |
2402.09780
| null | null |
http://arxiv.org/pdf/2402.09780v1
|
2024-02-15T08:09:17Z
|
2024-02-15T08:09:17Z
|
TinyCL: An Efficient Hardware Architecture for Continual Learning on
Autonomous Systems
|
The Continual Learning (CL) paradigm consists of continuously evolving the parameters of the Deep Neural Network (DNN) model to progressively learn to perform new tasks without reducing the performance on previous tasks, i.e., avoiding the so-called catastrophic forgetting. However, the DNN parameter update in CL-based autonomous systems is extremely resource-hungry. The existing DNN accelerators cannot be directly employed in CL because they only support the execution of the forward propagation. Only a few prior architectures execute the backpropagation and weight update, but they lack the control and management for CL. Towards this, we design a hardware architecture, TinyCL, to perform CL on resource-constrained autonomous systems. It consists of a processing unit that executes both forward and backward propagation, and a control unit that manages memory-based CL workload. To minimize the memory accesses, the sliding window of the convolutional layer moves in a snake-like fashion. Moreover, the Multiply-and-Accumulate units can be reconfigured at runtime to execute different operations. To our knowledge, our proposed TinyCL represents the first hardware accelerator that executes CL on autonomous systems. We synthesize the complete TinyCL architecture in a 65 nm CMOS technology node with the conventional ASIC design flow. It executes 1 epoch of training on a Conv + ReLU + Dense model on the CIFAR10 dataset in 1.76 s, while 1 training epoch of the same model using an Nvidia Tesla P100 GPU takes 103 s, thus achieving a 58x speedup, consuming 86 mW in a 4.74 mm^2 die.
|
[
"['Eugenio Ressa' 'Alberto Marchisio' 'Maurizio Martina' 'Guido Masera'\n 'Muhammad Shafique']"
] |
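The snake-like sliding window mentioned above is easy to picture in code: the traversal reverses direction on every row, so consecutive window positions stay adjacent and can reuse cached activations. A purely illustrative Python sketch (the actual design is RTL, not software):

```python
def snake_positions(rows, cols):
    """Yield sliding-window positions in a snake-like (boustrophedon) order."""
    for r in range(rows):
        order = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        for c in order:
            yield r, c  # consecutive positions differ by one column (or row)
```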
null | null |
2402.09782
| null | null |
http://arxiv.org/pdf/2402.09782v3
|
2024-03-20T08:50:46Z
|
2024-02-15T08:21:50Z
|
MC-DBN: A Deep Belief Network-Based Model for Modality Completion
|
Recent advancements in multi-modal artificial intelligence (AI) have revolutionized the fields of stock market forecasting and heart rate monitoring. Utilizing diverse data sources can substantially improve prediction accuracy. Nonetheless, additional data may not always align with the original dataset. Interpolation methods are commonly utilized for handling missing values in modal data, though they may exhibit limitations in the context of sparse information. Addressing this challenge, we propose a Modality Completion Deep Belief Network-Based Model (MC-DBN). This approach utilizes implicit features of complete data to compensate for gaps between itself and additional incomplete data. It ensures that the enhanced multi-modal data closely aligns with the dynamic nature of the real world to enhance the effectiveness of the model. We conduct evaluations of the MC-DBN model in two datasets from the stock market forecasting and heart rate monitoring domains. Comprehensive experiments showcase the model's capacity to bridge the semantic divide present in multi-modal data, subsequently enhancing its performance. The source code is available at: https://github.com/logan-0623/DBN-generate
|
[
"['Zihong Luo' 'Zheng Tao' 'Yuxuan Huang' 'Kexin He' 'Chengzhi Liu']"
] |
null | null |
2402.09786
| null | null |
http://arxiv.org/pdf/2402.09786v3
|
2024-03-12T13:36:23Z
|
2024-02-15T08:34:21Z
|
Examining Pathological Bias in a Generative Adversarial Network
Discriminator: A Case Study on a StyleGAN3 Model
|
Generative adversarial networks (GANs) generate photorealistic faces that are often indistinguishable by humans from real faces. While biases in machine learning models are often assumed to be due to biases in training data, we find pathological internal color and luminance biases in the discriminator of a pre-trained StyleGAN3-r model that are not explicable by the training data. We also find that the discriminator systematically stratifies scores by both image- and face-level qualities and that this disproportionately affects images across gender, race, and other categories. We examine axes common in research on stereotyping in social psychology.
|
[
"['Alvin Grissom II' 'Ryan F. Lei' 'Matt Gusdorff'\n 'Jeova Farias Sales Rocha Neto' 'Bailey Lin' 'Ryan Trotter']"
] |
null | null |
2402.09796
| null | null |
http://arxiv.org/pdf/2402.09796v1
|
2024-02-15T08:51:49Z
|
2024-02-15T08:51:49Z
|
Closed-form Filtering for Non-linear Systems
|
Sequential Bayesian Filtering aims to estimate the current state distribution of a Hidden Markov Model, given the past observations. The problem is well-known to be intractable for most application domains, except in notable cases such as the tabular setting or for linear dynamical systems with Gaussian noise. In this work, we propose a new class of filters based on Gaussian PSD Models, which offer several advantages in terms of density approximation and computational efficiency. We show that filtering can be efficiently performed in closed form when transitions and observations are Gaussian PSD Models. When the transition and observations are approximated by Gaussian PSD Models, we show that our proposed estimator enjoys strong theoretical guarantees, with estimation error that depends on the quality of the approximation and is adaptive to the regularity of the transition probabilities. In particular, we identify regimes in which our proposed filter attains a TV $\epsilon$-error with memory and computational complexity of $O(\epsilon^{-1})$ and $O(\epsilon^{-3/2})$ respectively, including the offline learning step, in contrast to the $O(\epsilon^{-2})$ complexity of sampling methods such as particle filtering.
|
[
"['Théophile Cantelobre' 'Carlo Ciliberto' 'Benjamin Guedj'\n 'Alessandro Rudi']"
] |
null | null |
2402.09802
| null | null |
http://arxiv.org/pdf/2402.09802v3
|
2024-05-21T05:09:21Z
|
2024-02-15T08:58:58Z
|
Criterion Collapse and Loss Distribution Control
|
In this work, we consider the notion of "criterion collapse," in which optimization of one metric implies optimality in another, with a particular focus on conditions for collapse into error probability minimizers under a wide variety of learning criteria, ranging from DRO and OCE risks (CVaR, tilted ERM) to non-monotonic criteria underlying recent ascent-descent algorithms explored in the literature (Flooding, SoftAD). We show how collapse in the context of losses with a Bernoulli distribution goes far beyond existing results for CVaR and DRO, then expand our scope to include surrogate losses, showing conditions where monotonic criteria such as tilted ERM cannot avoid collapse, whereas non-monotonic alternatives can.
|
[
"['Matthew J. Holland']"
] |
null | null |
2402.09807
| null | null |
http://arxiv.org/pdf/2402.09807v1
|
2024-02-15T09:13:59Z
|
2024-02-15T09:13:59Z
|
Two trust region type algorithms for solving nonconvex-strongly concave
minimax problems
|
In this paper, we propose a Minimax Trust Region (MINIMAX-TR) algorithm and a Minimax Trust Region Algorithm with Contractions and Expansions (MINIMAX-TRACE) for solving nonconvex-strongly concave minimax problems. Both algorithms can find an $(\epsilon, \sqrt{\epsilon})$-second-order stationary point (SSP) within $\mathcal{O}(\epsilon^{-1.5})$ iterations, which matches the best known iteration complexity.
|
[
"['Tongliang Yao' 'Zi Xu']"
] |
null | null |
2402.09820
| null | null |
http://arxiv.org/pdf/2402.09820v2
|
2024-02-18T11:29:45Z
|
2024-02-15T09:35:57Z
|
Utilizing Deep Learning for Enhancing Network Resilience in Finance
|
In the age of the Internet, people's lives are increasingly dependent on today's network technology. Maintaining network integrity and protecting the legitimate interests of users is at the heart of network construction. Threat detection is an important part of a complete and effective defense system. How to effectively detect unknown threats is one of the concerns of network protection. Currently, network threat detection is usually based on rules and traditional machine learning methods that encode hand-crafted rules or extract common spatiotemporal features; these approaches do not scale to large data volumes, and the emergence of unknown risks degrades the detection accuracy of the original models. With this in mind, this paper uses deep learning for advanced threat detection to improve protective measures in the financial industry. Many network researchers have shifted their focus to anomaly-based intrusion detection techniques. The detection technology mainly uses statistical machine learning methods: collecting normal program and network behavior data, extracting multidimensional features, and training decision models on this basis (commonly naive Bayes, decision trees, support vector machines, random forests, etc.).
|
[
"['Yulu Gong' 'Mengran Zhu' 'Shuning Huo' 'Yafei Xiang' 'Hanyi Yu']"
] |
null | null |
2402.09821
| null | null |
http://arxiv.org/pdf/2402.09821v2
|
2024-07-15T10:15:12Z
|
2024-02-15T09:36:36Z
|
Diffusion Models for Audio Restoration
|
With the development of audio playback devices and fast data transmission, the demand for high sound quality is rising for both entertainment and communications. In this quest for better sound quality, challenges emerge from distortions and interferences originating at the recording side or caused by an imperfect transmission pipeline. To address this problem, audio restoration methods aim to recover clean sound signals from the corrupted input data. We present here audio restoration algorithms based on diffusion models, with a focus on speech enhancement and music restoration tasks. Traditional approaches, often grounded in handcrafted rules and statistical heuristics, have shaped our understanding of audio signals. In the past decades, there has been a notable shift towards data-driven methods that exploit the modeling capabilities of DNNs. Deep generative models, and among them diffusion models, have emerged as powerful techniques for learning complex data distributions. However, relying solely on DNN-based learning approaches carries the risk of reducing interpretability, particularly when employing end-to-end models. Nonetheless, data-driven approaches allow more flexibility in comparison to statistical model-based frameworks, whose performance depends on distributional and statistical assumptions that can be difficult to guarantee. Here, we aim to show that diffusion models can combine the best of both worlds and offer the opportunity to design audio restoration algorithms with a good degree of interpretability and a remarkable performance in terms of sound quality. We explain the diffusion formalism and its application to the conditional generation of clean audio signals. We believe that diffusion models open an exciting field of research with the potential to spawn new audio restoration algorithms that are natural-sounding and remain robust in difficult acoustic situations.
|
[
"['Jean-Marie Lemercier' 'Julius Richter' 'Simon Welker' 'Eloi Moliner'\n 'Vesa Välimäki' 'Timo Gerkmann']"
] |
null | null |
2402.09830
| null | null |
http://arxiv.org/pdf/2402.09830v1
|
2024-02-15T09:48:20Z
|
2024-02-15T09:48:20Z
|
Utilizing GANs for Fraud Detection: Model Training with Synthetic
Transaction Data
|
Anomaly detection is a critical challenge across various research domains, aiming to identify instances that deviate from normal data distributions. This paper explores the application of Generative Adversarial Networks (GANs) in fraud detection, comparing their advantages with traditional methods. GANs, a type of Artificial Neural Network (ANN), have shown promise in modeling complex data distributions, making them effective tools for anomaly detection. The paper systematically describes the principles of GANs and their derivative models, emphasizing their application in fraud detection across different datasets. By building a collection of adversarial verification images, we can effectively prevent fraud caused by bots or automated systems and ensure that the users in a transaction are real. The objective of the experiment is to design and implement a fake-face verification code and fraud detection system based on the Generative Adversarial Network (GAN) algorithm to enhance the security of the transaction process. The study demonstrates the potential of GANs in enhancing transaction security through deep learning techniques.
|
[
"['Mengran Zhu' 'Yulu Gong' 'Yafei Xiang' 'Hanyi Yu' 'Shuning Huo']"
] |
null | null |
2402.09834
| null | null |
http://arxiv.org/abs/2402.09834v2
|
2024-06-22T13:29:36Z
|
2024-02-15T09:55:39Z
|
All in One and One for All: A Simple yet Effective Method towards
Cross-domain Graph Pretraining
|
Large Language Models (LLMs) have revolutionized the fields of computer vision (CV) and natural language processing (NLP). One of the most notable advancements of LLMs is that a single model is trained on vast and diverse datasets spanning multiple domains -- a paradigm we term `All in One'. This methodology empowers LLMs with super generalization capabilities, facilitating an encompassing comprehension of varied data distributions. Leveraging these capabilities, a single LLM demonstrates remarkable versatility across a variety of domains -- a paradigm we term `One for All'. However, applying this idea to the graph field remains a formidable challenge, with cross-domain pretraining often resulting in negative transfer. This issue is particularly important in few-shot learning scenarios, where the paucity of training data necessitates the incorporation of external knowledge sources. In response to this challenge, we propose a novel approach called Graph COordinators for PrEtraining (GCOPE), that harnesses the underlying commonalities across diverse graph datasets to enhance few-shot learning. Our novel methodology involves a unification framework that amalgamates disparate graph datasets during the pretraining phase to distill and transfer meaningful knowledge to target tasks. Extensive experiments across multiple graph datasets demonstrate the superior efficacy of our approach. By successfully leveraging the synergistic potential of multiple graph datasets for pretraining, our work stands as a pioneering contribution to the realm of graph foundational model.
|
[
"['Haihong Zhao' 'Aochuan Chen' 'Xiangguo Sun' 'Hong Cheng' 'Jia Li']"
] |
null | null |
2402.09838
| null | null |
http://arxiv.org/pdf/2402.09838v2
|
2024-05-31T13:59:44Z
|
2024-02-15T10:00:13Z
|
Performative Reinforcement Learning in Gradually Shifting Environments
|
When Reinforcement Learning (RL) agents are deployed in practice, they might impact their environment and change its dynamics. We propose a new framework to model this phenomenon, where the current environment depends on the deployed policy as well as its previous dynamics. This is a generalization of Performative RL (PRL) [Mandal et al., 2023]. Unlike PRL, our framework allows us to model scenarios where the environment gradually adjusts to a deployed policy. We adapt two algorithms from the performative prediction literature to our setting and propose a novel algorithm called Mixed Delayed Repeated Retraining (MDRR). We provide conditions under which these algorithms converge and compare them using three metrics: number of retrainings, approximation guarantee, and number of samples per deployment. MDRR is the first algorithm in this setting which combines samples from multiple deployments in its training. This makes MDRR particularly suitable for scenarios where the environment's response strongly depends on its previous dynamics, which are common in practice. We experimentally compare the algorithms using a simulation-based testbed and our results show that MDRR converges significantly faster than previous approaches.
|
[
"['Ben Rank' 'Stelios Triantafyllou' 'Debmalya Mandal' 'Goran Radanovic']"
] |
null | null |
2402.09841
| null | null |
http://arxiv.org/pdf/2402.09841v1
|
2024-02-15T10:00:49Z
|
2024-02-15T10:00:49Z
|
LAPDoc: Layout-Aware Prompting for Documents
|
Recent advances in training large language models (LLMs) using massive amounts of solely textual data lead to strong generalization across many domains and tasks, including document-specific tasks. In contrast, there is a trend to train multi-modal transformer architectures tailored for document understanding that are designed specifically to fuse textual inputs with the corresponding document layout. This involves a separate fine-tuning step for which additional training data is required. At present, no document transformers with comparable generalization to LLMs are available. This raises the question of which type of model is to be preferred for document understanding tasks. In this paper we investigate the possibility of using purely text-based LLMs for document-specific tasks by using layout enrichment. We explore drop-in modifications and rule-based methods to enrich purely textual LLM prompts with layout information. In our experiments we investigate the effects on the commercial ChatGPT model and the open-source LLM Solar. We demonstrate that using our approach both LLMs show improved performance on various standard document benchmarks. In addition, we study the impact of noisy OCR and layout errors, as well as the limitations of LLMs when it comes to utilizing document layout. Our results indicate that layout enrichment can improve the performance of purely text-based LLMs for document understanding by up to 15% compared to just using plain document text. In conclusion, this approach should be considered when choosing between a text-based LLM and a multi-modal document transformer.
|
[
"['Marcel Lamott' 'Yves-Noel Weweler' 'Adrian Ulges' 'Faisal Shafait'\n 'Dirk Krechel' 'Darko Obradovic']"
] |
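One family of rule-based layout enrichments renders OCR words into a whitespace grid so that a plain-text LLM can "see" rough spatial structure. The sketch below is one such verbalizer under assumed inputs (text, x, y in pixels); the paper evaluates several enrichment rules, which need not match this one.

```python
def layout_prompt(words, cell=10):
    """Render OCR words (text, x, y) as a whitespace grid for a text-only LLM."""
    rows = {}
    for text, x, y in words:
        rows.setdefault(y // cell, []).append((x // cell, text))
    lines = []
    for r in sorted(rows):
        line = ""
        for c, text in sorted(rows[r]):
            line += " " * max(1, c - len(line)) + text  # pad to approximate column
        lines.append(line)
    return "\n".join(lines)
```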
null | null |
2402.09846
| null | null |
http://arxiv.org/abs/2402.09846v1
|
2024-02-15T10:05:18Z
|
2024-02-15T10:05:18Z
|
A Deep Learning Approach to Radar-based QPE
|
In this study, we propose a volume-to-point framework for quantitative precipitation estimation (QPE) based on the Quantitative Precipitation Estimation and Segregation Using Multiple Sensor (QPESUMS) Mosaic Radar data set. With a data volume consisting of the time series of gridded radar reflectivities over the Taiwan area, we used machine learning algorithms to establish a statistical model for QPE in weather stations. The model extracts spatial and temporal features from the input data volume and then associates these features with the location-specific precipitations. In contrast to QPE methods based on the Z-R relation, we leverage the machine learning algorithms to automatically detect the evolution and movement of weather systems and associate these patterns to a location with specific topographic attributes. Specifically, we evaluated this framework with the hourly precipitation data of 45 weather stations in Taipei during 2013-2016. In comparison to the operational QPE scheme used by the Central Weather Bureau, the volume-to-point framework performed comparably well in general cases and excelled in detecting heavy-rainfall events. By using the current results as the reference benchmark, the proposed method can integrate the heterogeneous data sources and potentially improve the forecast in extreme precipitation scenarios.
|
[
"['Ting-Shuo Yo' 'Shih-Hao Su' 'Jung-Lien Chu' 'Chiao-Wei Chang'\n 'Hung-Chi Kuo']"
] |
null | null |
2402.09849
| null | null |
http://arxiv.org/pdf/2402.09849v1
|
2024-02-15T10:11:28Z
|
2024-02-15T10:11:28Z
|
Recommendations for Baselines and Benchmarking Approximate Gaussian
Processes
|
Gaussian processes (GPs) are a mature and widely-used component of the ML toolbox. One of their desirable qualities is automatic hyperparameter selection, which allows for training without user intervention. However, in many realistic settings, approximations are typically needed, and these do require tuning. We argue that this requirement for tuning complicates evaluation, which has led to a lack of clear recommendations on which method should be used in which situation. To address this, we make recommendations for comparing GP approximations based on a specification of what a user should expect from a method. In addition, we develop a training procedure for the variational method of Titsias [2009] that leaves no choices to the user, and show that this is a strong baseline that meets our specification. We conclude that benchmarking according to our suggestions gives a clearer view of the current state of the field, and uncovers problems that are still open that future papers should address.
|
[
"['Sebastian W. Ober' 'Artem Artemev' 'Marcel Wagenländer'\n 'Rudolfs Grobins' 'Mark van der Wilk']"
] |
null | null |
2402.09867
| null | null |
http://arxiv.org/pdf/2402.09867v1
|
2024-02-15T10:50:42Z
|
2024-02-15T10:50:42Z
|
Characterizing Accuracy Trade-offs of EEG Applications on Embedded HMPs
|
Electroencephalography (EEG) recordings are analyzed using battery-powered wearable devices to monitor brain activities and neurological disorders. These applications require long and continuous processing to generate feasible results. However, wearable devices are constrained with limited energy and computation resources, owing to their small sizes for practical use cases. Embedded heterogeneous multi-core platforms (HMPs) can provide better performance within limited energy budgets for EEG applications. Error resilience of the EEG application pipeline can be exploited further to maximize the performance and energy gains with HMPs. However, disciplined tuning of approximation on embedded HMPs requires a thorough exploration of the accuracy-performance-power trade-off space. In this work, we characterize the error resilience of three EEG applications, including Epileptic Seizure Detection, Sleep Stage Classification, and Stress Detection on the real-world embedded HMP test-bed of the Odroid XU3 platform. We present a combinatorial evaluation of power-performance-accuracy trade-offs of EEG applications at different approximation, power, and performance levels to provide insights into the disciplined tuning of approximation in EEG applications on embedded platforms.
|
[
"['Zain Taufique' 'Muhammad Awais Bin Altaf' 'Antonio Miele'\n 'Pasi Liljeberg' 'Anil Kanduri']"
] |
null | null |
2402.09881
| null | null |
http://arxiv.org/pdf/2402.09881v1
|
2024-02-15T11:08:23Z
|
2024-02-15T11:08:23Z
|
Explaining Kernel Clustering via Decision Trees
|
Despite the growing popularity of explainable and interpretable machine learning, there is still surprisingly limited work on inherently interpretable clustering methods. Recently, there has been a surge of interest in explaining the classic k-means algorithm, leading to efficient algorithms that approximate k-means clusters using axis-aligned decision trees. However, interpretable variants of k-means have limited applicability in practice, where more flexible clustering methods are often needed to obtain useful partitions of the data. In this work, we investigate interpretable kernel clustering, and propose algorithms that construct decision trees to approximate the partitions induced by kernel k-means, a nonlinear extension of k-means. We further build on previous work on explainable k-means and demonstrate how a suitable choice of features allows preserving interpretability without sacrificing approximation guarantees on the interpretable model.
|
[
"['Maximilian Fleissner' 'Leena Chennuru Vankadara'\n 'Debarghya Ghoshdastidar']"
] |
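A quick way to reproduce the pipeline's flavor with standard tooling: approximate kernel k-means by running k-means in a Nystroem feature space, then fit a shallow axis-aligned decision tree on the raw features to mimic the induced partition. This is a hedged stand-in built from scikit-learn parts, not the authors' algorithm or its approximation guarantees.

```python
from sklearn.cluster import KMeans
from sklearn.kernel_approximation import Nystroem
from sklearn.tree import DecisionTreeClassifier

def tree_explained_kernel_kmeans(X, n_clusters, max_depth=3, seed=0):
    """Fit an interpretable tree that mimics (approximate) kernel k-means labels."""
    features = Nystroem(kernel="rbf", n_components=min(100, len(X)),
                        random_state=seed).fit_transform(X)
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=seed).fit_predict(features)
    tree = DecisionTreeClassifier(max_depth=max_depth,
                                  random_state=seed).fit(X, labels)
    return tree, labels
```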
null | null |
2402.09891
| null | null |
http://arxiv.org/pdf/2402.09891v1
|
2024-02-15T11:34:38Z
|
2024-02-15T11:34:38Z
|
Predictors from causal features do not generalize better to new domains
|
We study how well machine learning models trained on causal features generalize across domains. We consider 16 prediction tasks on tabular datasets covering applications in health, employment, education, social benefits, and politics. Each dataset comes with multiple domains, allowing us to test how well a model trained in one domain performs in another. For each prediction task, we select features that have a causal influence on the target of prediction. Our goal is to test the hypothesis that models trained on causal features generalize better across domains. Without exception, we find that predictors using all available features, regardless of causality, have better in-domain and out-of-domain accuracy than predictors using causal features. Moreover, even the absolute drop in accuracy from one domain to the other is no better for causal predictors than for models that use all features. If the goal is to generalize to new domains, practitioners might as well train the best possible model on all available features.
|
[
"['Vivian Y. Nastl' 'Moritz Hardt']"
] |
null | null |
2402.09897
| null | null |
http://arxiv.org/pdf/2402.09897v1
|
2024-02-15T11:45:34Z
|
2024-02-15T11:45:34Z
|
COVIDHealth: A Benchmark Twitter Dataset and Machine Learning based Web
Application for Classifying COVID-19 Discussions
|
The COVID-19 pandemic has had adverse effects on both physical and mental health. During this pandemic, numerous studies have focused on gaining insights into health-related perspectives from social media. In this study, our primary objective is to develop a machine learning-based web application for automatically classifying COVID-19-related discussions on social media. To achieve this, we label COVID-19-related Twitter data, provide benchmark classification results, and develop a web application. We collected data using the Twitter API and labeled a total of 6,667 tweets into five different classes: health risks, prevention, symptoms, transmission, and treatment. We extracted features using various feature extraction methods and applied them to seven different traditional machine learning algorithms, including Decision Tree, Random Forest, Stochastic Gradient Descent, Adaboost, K-Nearest Neighbour, Logistic Regression, and Linear SVC. Additionally, we used four deep learning algorithms: LSTM, CNN, RNN, and BERT, for classification. Overall, we achieved a maximum F1 score of 90.43% with the CNN algorithm in deep learning. The Linear SVC algorithm exhibited the highest F1 score at 86.13%, surpassing other traditional machine learning approaches. Our study not only contributes to the field of health-related data analysis but also provides a valuable resource in the form of a web-based tool for efficient data classification, which can aid in addressing public health challenges and increasing awareness during pandemics. We made the dataset and application publicly available, which can be downloaded from this link https://github.com/Bishal16/COVID19-Health-Related-Data-Classification-Website.
|
[
"['Mahathir Mohammad Bishal' 'Md. Rakibul Hassan Chowdory' 'Anik Das'\n 'Muhammad Ashad Kabir']"
] |
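The strongest traditional baseline reported above (Linear SVC) takes only a few lines with scikit-learn; the sketch below uses TF-IDF features, a generic choice that is not necessarily the exact feature extraction the paper used.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

CLASSES = ["health risks", "prevention", "symptoms", "transmission", "treatment"]

def build_classifier():
    """TF-IDF + LinearSVC baseline for 5-way COVID-19 tweet classification."""
    return make_pipeline(TfidfVectorizer(ngram_range=(1, 2), min_df=2),
                         LinearSVC())

# Usage: clf = build_classifier(); clf.fit(train_texts, train_labels)
```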
null | null |
2402.09900
| null | null |
http://arxiv.org/pdf/2402.09900v2
|
2024-03-17T15:16:28Z
|
2024-02-15T11:56:53Z
|
Revisiting Recurrent Reinforcement Learning with Memory Monoids
|
Memory models such as Recurrent Neural Networks (RNNs) and Transformers address Partially Observable Markov Decision Processes (POMDPs) by mapping trajectories to latent Markov states. Neither model scales particularly well to long sequences, especially compared to an emerging class of memory models sometimes called linear recurrent models. We discover that we can model the recurrent update of these models using a monoid, leading us to reformulate existing models using a novel memory monoid framework. We revisit the traditional approach to batching in recurrent RL, highlighting both theoretical and empirical deficiencies. We leverage the properties of memory monoids to propose a batching method that improves sample efficiency, increases the return, and simplifies the implementation of recurrent loss functions in RL.
|
[
"['Steven Morad' 'Chris Lu' 'Ryan Kortvelesy' 'Stephan Liwicki'\n 'Jakob Foerster' 'Amanda Prorok']"
] |
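The key observation, that a linear recurrent update forms a monoid, is compact enough to show directly. Below, memory elements are affine maps (A, b) acting on the state as s -> A s + b; because composition is associative, the running state is an ordinary scan, which could equally be computed with a parallel prefix scan. A sketch, not the paper's framework:

```python
import numpy as np

def scan(elements, combine, identity):
    """Inclusive scan with an associative `combine` (a monoid operation)."""
    out, acc = [], identity
    for e in elements:
        acc = combine(acc, e)
        out.append(acc)
    return out

# Linear recurrence s' = A s + b as a monoid element (A, b);
# combining x then y composes the two affine maps.
def combine(x, y):
    return (y[0] @ x[0], y[0] @ x[1] + y[1])

d = 4
identity = (np.eye(d), np.zeros(d))
steps = [(0.9 * np.eye(d), np.random.randn(d)) for _ in range(10)]
s0 = np.ones(d)
states = [A @ s0 + b for A, b in scan(steps, combine, identity)]
```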
null | null |
2402.09906
| null | null |
http://arxiv.org/pdf/2402.09906v2
|
2024-04-17T17:12:05Z
|
2024-02-15T12:12:19Z
|
Generative Representational Instruction Tuning
|
All text-based language problems can be reduced to either generation or embedding. Current models only perform well at one or the other. We introduce generative representational instruction tuning (GRIT) whereby a large language model is trained to handle both generative and embedding tasks by distinguishing between them through instructions. Compared to other open models, our resulting GritLM 7B sets a new state of the art on the Massive Text Embedding Benchmark (MTEB) and outperforms all models up to its size on a range of generative tasks. By scaling up further, GritLM 8x7B outperforms all open generative language models that we tried while still being among the best embedding models. Notably, we find that GRIT matches training on only generative or embedding data, thus we can unify both at no performance loss. Among other benefits, the unification via GRIT speeds up Retrieval-Augmented Generation (RAG) by > 60% for long documents, by no longer requiring separate retrieval and generation models. Models, code, etc. are freely available at https://github.com/ContextualAI/gritlm.
|
[
"['Niklas Muennighoff' 'Hongjin Su' 'Liang Wang' 'Nan Yang' 'Furu Wei'\n 'Tao Yu' 'Amanpreet Singh' 'Douwe Kiela']"
] |
null | null |
2402.09910
| null | null |
http://arxiv.org/pdf/2402.09910v2
|
2024-06-25T10:33:41Z
|
2024-02-15T12:17:15Z
|
DE-COP: Detecting Copyrighted Content in Language Models Training Data
|
How can we detect if copyrighted content was used in the training process of a language model, considering that the training data is typically undisclosed? We are motivated by the premise that a language model is likely to identify verbatim excerpts from its training text. We propose DE-COP, a method to determine whether a piece of copyrighted content was included in training. DE-COP's core approach is to probe an LLM with multiple-choice questions, whose options include both verbatim text and their paraphrases. We construct BookTection, a benchmark with excerpts from 165 books published prior and subsequent to a model's training cutoff, along with their paraphrases. Our experiments show that DE-COP surpasses the prior best method by 9.6% in detection performance (AUC) on models with logits available. Moreover, DE-COP also achieves an average accuracy of 72% for detecting suspect books on fully black-box models where prior methods give approximately 4% accuracy. The code and datasets are available at https://github.com/LeiLiLab/DE-COP.
|
[
"['André V. Duarte' 'Xuandong Zhao' 'Arlindo L. Oliveira' 'Lei Li']"
] |
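As a rough illustration of the probing setup described above, a quiz places the verbatim excerpt among paraphrases and asks the model to pick it out; above-chance accuracy is the memorization signal. The prompt format here is our own sketch, not the authors' exact wording:

```python
import random

def make_quiz(verbatim: str, paraphrases: list[str], seed: int = 0):
    """Shuffle the true excerpt among its paraphrases; return the
    multiple-choice prompt and the answer key."""
    options = [verbatim] + paraphrases
    random.Random(seed).shuffle(options)
    letters = "ABCD"
    answer = letters[options.index(verbatim)]
    lines = ["Which passage appears verbatim in the book?"]
    lines += [f"{letters[i]}. {opt}" for i, opt in enumerate(options)]
    return "\n".join(lines), answer

prompt, key = make_quiz(
    "It was the best of times, it was the worst of times.",
    ["Times were at once wonderful and terrible.",
     "The era was simultaneously the finest and the bleakest.",
     "It was an age of extremes, both good and bad."])
```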
null | null |
2402.09916
| null | null |
http://arxiv.org/abs/2402.09916v1
|
2024-02-15T12:39:57Z
|
2024-02-15T12:39:57Z
|
BUSTER: a "BUSiness Transaction Entity Recognition" dataset
|
Although Natural Language Processing has seen major breakthroughs in the last few years, transferring such advances into real-world business cases can be challenging. One of the reasons lies in the mismatch between popular benchmarks and actual data. Lack of supervision, unbalanced classes, noisy data and long documents often affect real problems in vertical domains such as finance, law and health. To support industry-oriented research, we present BUSTER, a BUSiness Transaction Entity Recognition dataset. The dataset consists of 3779 manually annotated documents on financial transactions. We establish several baselines exploiting both general-purpose and domain-specific language models. The best-performing model is also used to automatically annotate 6196 documents, which we release as an additional silver corpus to BUSTER.
|
[
"['Andrea Zugarini' 'Andrew Zamai' 'Marco Ernandes' 'Leonardo Rigutini']"
] |
null | null |
2402.09939
| null | null |
http://arxiv.org/pdf/2402.09939v1
|
2024-02-15T13:39:55Z
|
2024-02-15T13:39:55Z
|
Generative AI in the Construction Industry: A State-of-the-art Analysis
|
The construction industry is a vital sector of the global economy, but it faces many productivity challenges in various processes, such as design, planning, procurement, inspection, and maintenance. Generative artificial intelligence (AI), which can create novel and realistic data or content, such as text, image, video, or code, based on some input or prior knowledge, offers innovative and disruptive solutions to address these challenges. However, there is a gap in the literature on the current state, opportunities, and challenges of generative AI in the construction industry. This study aims to fill this gap by providing a state-of-the-art analysis of generative AI in construction, with three objectives: (1) to review and categorize the existing and emerging generative AI opportunities and challenges in the construction industry; (2) to propose a framework for construction firms to build customized generative AI solutions using their own data, comprising steps such as data collection, dataset curation, training a custom large language model (LLM), model evaluation, and deployment; and (3) to demonstrate the framework via a case study of developing a generative model for querying contract documents. The results show that retrieval augmented generation (RAG) improves the baseline LLM by 5.2%, 9.4%, and 4.8% in quality, relevance, and reproducibility, respectively. This study provides academics and construction professionals with a comprehensive analysis and practical framework to guide the adoption of generative AI techniques to enhance productivity, quality, safety, and sustainability across the construction industry.
|
[
"['Ridwan Taiwo' 'Idris Temitope Bello' 'Sulemana Fatoama Abdulai'\n 'Abdul-Mugis Yussif' 'Babatunde Abiodun Salami' 'Abdullahi Saka'\n 'Tarek Zayed']"
] |
null | null |
2402.09941
| null | null |
http://arxiv.org/pdf/2402.09941v1
|
2024-02-15T13:41:23Z
|
2024-02-15T13:41:23Z
|
FedLion: Faster Adaptive Federated Optimization with Fewer Communication
|
In Federated Learning (FL), a framework to train machine learning models across distributed data, well-known algorithms like FedAvg tend to have slow convergence rates, resulting in high communication costs during training. To address this challenge, we introduce FedLion, an adaptive federated optimization algorithm that seamlessly incorporates key elements from the recently proposed centralized adaptive algorithm, Lion (Chen et al. 2023), into the FL framework. Through comprehensive evaluations on two widely adopted FL benchmarks, we demonstrate that FedLion outperforms previous state-of-the-art adaptive algorithms, including FAFED (Wu et al. 2023) and FedDA. Moreover, thanks to the use of signed gradients in local training, FedLion substantially reduces data transmission requirements during uplink communication when compared to existing adaptive algorithms, further reducing communication costs. Last but not least, this work also includes a novel theoretical analysis, showcasing that FedLion attains a faster convergence rate than established FL algorithms like FedAvg.
|
[
"['Zhiwei Tang' 'Tsung-Hui Chang']"
] |
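For context, the ingredient FedLion inherits from Lion is a sign-based update, which is also what makes uplink transmission cheap (roughly one bit per coordinate). A minimal sketch of a single centralized Lion step, our illustration rather than the FedLion algorithm itself:

```python
import numpy as np

def lion_step(x, m, grad, lr=1e-4, beta1=0.9, beta2=0.99, wd=0.0):
    """One Lion update. The applied direction is the sign of an
    interpolated momentum, so it lies in {-1, 0, +1}^d and is cheap
    to communicate in a federated uplink."""
    update = np.sign(beta1 * m + (1 - beta1) * grad)
    x = x - lr * (update + wd * x)
    m = beta2 * m + (1 - beta2) * grad
    return x, m, update
```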
null | null |
2402.09947
| null | null |
http://arxiv.org/pdf/2402.09947v2
|
2024-06-14T17:18:11Z
|
2024-02-15T13:50:00Z
|
Explaining Probabilistic Models with Distributional Values
|
A large branch of explainable machine learning is grounded in cooperative game theory. However, research indicates that game-theoretic explanations may mislead or be hard to interpret. We argue that there is often a critical mismatch between what one wishes to explain (e.g. the output of a classifier) and what current methods such as SHAP explain (e.g. the scalar probability of a class). This paper addresses this gap for probabilistic models by generalising cooperative games and value operators. We introduce the distributional values, random variables that track changes in the model output (e.g. flipping of the predicted class) and derive their analytic expressions for games with Gaussian, Bernoulli and Categorical payoffs. We further establish several characterising properties, and show that our framework provides fine-grained and insightful explanations with case studies on vision and language models.
|
[
"['Luca Franceschi' 'Michele Donini' 'Cédric Archambeau' 'Matthias Seeger']"
] |
null | null |
2402.09948
| null | null |
http://arxiv.org/pdf/2402.09948v1
|
2024-02-15T13:51:21Z
|
2024-02-15T13:51:21Z
|
Neural 5G Indoor Localization with IMU Supervision
|
Radio signals are well suited for user localization because they are ubiquitous, can operate in the dark and maintain privacy. Many prior works learn mappings between channel state information (CSI) and position in a fully supervised manner. However, that approach relies on position labels which are very expensive to acquire. In this work, this requirement is relaxed by using pseudo-labels during deployment, which are calculated from an inertial measurement unit (IMU). We propose practical algorithms for IMU double integration and training of the localization system. We show decimeter-level accuracy on simulated and challenging real data of 5G measurements. Our IMU-supervised method performs similarly to its fully supervised counterpart but requires much less effort to deploy.
|
[
"['Aleksandr Ermolov' 'Shreya Kadambi' 'Maximilian Arnold'\n 'Mohammed Hirzallah' 'Roohollah Amiri' 'Deepak Singh Mahendar Singh'\n 'Srinivas Yerramalli' 'Daniel Dijkman' 'Fatih Porikli' 'Taesang Yoo'\n 'Bence Major']"
] |
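A naive form of IMU double integration, for illustration only (the paper proposes practical algorithms precisely because this toy version accumulates drift):

```python
import numpy as np

def double_integrate(acc: np.ndarray, dt: float) -> np.ndarray:
    """Acceleration -> velocity -> position by cumulative sums.
    Drift grows quickly, which is why the resulting positions are
    treated as pseudo-labels rather than ground truth."""
    vel = np.cumsum(acc, axis=0) * dt
    pos = np.cumsum(vel, axis=0) * dt
    return pos

acc = np.zeros((100, 2))
acc[:10] = [1.0, 0.0]                      # brief forward push
print(double_integrate(acc, dt=0.01)[-1])  # displacement after 1 s
```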
null | null |
2402.09949
| null | null |
http://arxiv.org/abs/2402.09949v2
|
2024-04-04T22:50:25Z
|
2024-02-15T13:52:23Z
|
Multi-word Tokenization for Sequence Compression
|
Large Language Models have proven highly successful at modelling a variety of tasks. However, this comes at a steep computational cost that hinders wider industrial uptake. In this paper, we present MWT: a Multi-Word Tokenizer that goes beyond word boundaries by representing frequent multi-word expressions as single tokens. MWTs produce a more compact and efficient tokenization that yields two benefits: (1) Increase in performance due to a greater coverage of input data given a fixed sequence length budget; (2) Faster and lighter inference due to the ability to reduce the sequence length with negligible drops in performance. Our results show that MWT is more robust across shorter sequence lengths, thus allowing for major speedups via early sequence truncation.
|
[
"['Leonidas Gee' 'Leonardo Rigutini' 'Marco Ernandes' 'Andrea Zugarini']"
] |
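A toy sketch of the idea (our simplification: we merge whitespace words, whereas MWT extends an existing subword tokenizer):

```python
from collections import Counter

def top_multiword_tokens(corpus: list[str], k: int = 1000) -> set:
    """Select the k most frequent word bigrams to become single tokens."""
    counts = Counter()
    for text in corpus:
        words = text.split()
        counts.update(zip(words, words[1:]))
    return {pair for pair, _ in counts.most_common(k)}

def tokenize(text: str, mwt: set) -> list[str]:
    """Greedy left-to-right merge of known bigrams into one token."""
    words, out, i = text.split(), [], 0
    while i < len(words):
        if i + 1 < len(words) and (words[i], words[i + 1]) in mwt:
            out.append(words[i] + "_" + words[i + 1])
            i += 2
        else:
            out.append(words[i])
            i += 1
    return out
```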
null | null |
2402.09954
| null | null |
http://arxiv.org/pdf/2402.09954v2
|
2024-02-17T06:11:58Z
|
2024-02-15T14:03:33Z
|
Crafting a Good Prompt or Providing Exemplary Dialogues? A Study of
In-Context Learning for Persona-based Dialogue Generation
|
Previous in-context learning (ICL) research has focused on tasks such as classification, machine translation, text2table, etc., while studies on whether ICL can improve human-like dialogue generation are scarce. Our work fills this gap by systematically investigating the ICL capabilities of large language models (LLMs) in persona-based dialogue generation, conducting extensive experiments on high-quality real human Chinese dialogue datasets. From experimental results, we draw three conclusions: 1) adjusting prompt instructions is the most direct, effective, and economical way to improve generation quality; 2) randomly retrieving demonstrations (demos) achieves the best results, possibly due to the greater diversity and the amount of effective information; counter-intuitively, retrieving demos with a context identical to the query performs the worst; 3) even when we destroy the multi-turn associations and single-turn semantics in the demos, increasing the number of demos still improves dialogue performance, proving that LLMs can learn from corrupted dialogue demos. Previous explanations of the ICL mechanism, such as $n$-gram induction head, cannot fully account for this phenomenon.
|
[
"['Jiashu Pu' 'Yajing Wan' 'Yuru Zhang' 'Jing Chen' 'Ling Cheng'\n 'Qian Shao' 'Yongzhu Chang' 'Tangjie Lv' 'Rongsheng Zhang']"
] |
null | null |
2402.09957
| null | null |
http://arxiv.org/pdf/2402.09957v1
|
2024-02-15T14:08:08Z
|
2024-02-15T14:08:08Z
|
On Designing Features for Condition Monitoring of Rotating Machines
|
Various methods for designing input features have been proposed for fault recognition in rotating machines using one-dimensional raw sensor data. The available methods are complex, rely on empirical approaches, and may differ depending on the condition monitoring data used. Therefore, this article proposes a novel algorithm to design input features that unifies the feature extraction process for different time-series sensor data. This new insight for designing/extracting input features is obtained through the lens of histogram theory. The proposed algorithm extracts discriminative input features suitable for classifiers ranging from simple models to deep neural networks. The designed input features are given as input to the classifier, with end-to-end training in a single framework, for machine condition recognition. The proposed scheme has been validated through three real-time datasets: a) an acoustic dataset, b) the CWRU vibration dataset, and c) the IMS vibration dataset. The real-time results and a comparative study show the effectiveness of the proposed scheme for predicting the machine's health states.
|
[
"['Seetaram Maurya' 'Nishchal K. Verma']"
] |
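A minimal sketch of a histogram-based input feature in the spirit of the abstract (our illustration; the proposed algorithm is more elaborate):

```python
import numpy as np

def histogram_features(window: np.ndarray, n_bins: int = 32) -> np.ndarray:
    """Normalized amplitude histogram of a raw 1-D sensor window,
    usable as a fixed-length feature vector for any classifier."""
    hist, _ = np.histogram(window, bins=n_bins)
    return hist / max(hist.sum(), 1)

rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 20 * np.pi, 2048)) + 0.1 * rng.normal(size=2048)
features = histogram_features(signal)  # length-32 vector summing to 1
```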
null | null |
2402.09961
| null | null |
http://arxiv.org/pdf/2402.09961v1
|
2024-02-15T14:15:51Z
|
2024-02-15T14:15:51Z
|
Enhancing Courier Scheduling in Crowdsourced Last-Mile Delivery through
Dynamic Shift Extensions: A Deep Reinforcement Learning Approach
|
Crowdsourced delivery platforms face complex scheduling challenges to match couriers and customer orders. We consider two types of crowdsourced couriers, namely, committed and occasional couriers, each with different compensation schemes. Crowdsourced delivery platforms usually schedule committed courier shifts based on predicted demand. Therefore, platforms may devise an offline schedule for committed couriers before the planning period. However, due to the unpredictability of demand, there are instances where it becomes necessary to make online adjustments to the offline schedule. In this study, we focus on the problem of dynamically adjusting the offline schedule through shift extensions for committed couriers. This problem is modeled as a sequential decision process. The objective is to maximize platform profit by determining the shift extensions of couriers and the assignments of requests to couriers. To solve the model, a Deep Q-Network (DQN) learning approach is developed. Comparing this model with the baseline policy where no extensions are allowed demonstrates the benefits that platforms can gain from allowing shift extensions, in terms of higher reward, lower lost-order costs, and fewer lost requests. Additionally, sensitivity analysis showed that the total extension compensation increases in a nonlinear manner with the arrival rate of requests, and in a linear manner with the arrival rate of occasional couriers. Regarding compensation sensitivity, the results showed that the normal scenario exhibited the highest average number of shift extensions and, consequently, the lowest average number of lost requests. These findings serve as evidence of the successful learning of such dynamics by the DQN algorithm.
|
[
"['Zead Saleh' 'Ahmad Al Hanbali' 'Ahmad Baubaid']"
] |
null | null |
2402.09963
| null | null |
http://arxiv.org/pdf/2402.09963v4
|
2024-05-27T17:01:29Z
|
2024-02-15T14:17:51Z
|
Why are Sensitive Functions Hard for Transformers?
|
Empirical studies have identified a range of learnability biases and limitations of transformers, such as a persistent difficulty in learning to compute simple formal languages such as PARITY, and a bias towards low-degree functions. However, theoretical understanding remains limited, with existing expressiveness theory either overpredicting or underpredicting realistic learning abilities. We prove that, under the transformer architecture, the loss landscape is constrained by the input-space sensitivity: Transformers whose output is sensitive to many parts of the input string inhabit isolated points in parameter space, leading to a low-sensitivity bias in generalization. We show theoretically and empirically that this theory unifies a broad array of empirical observations about the learning abilities and biases of transformers, such as their generalization bias towards low sensitivity and low degree, and difficulty in length generalization for PARITY. This shows that understanding transformers' inductive biases requires studying not just their in-principle expressivity, but also their loss landscape.
|
[
"['Michael Hahn' 'Mark Rofin']"
] |
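The notion of sensitivity at work here can be checked by brute force for small input lengths: PARITY attains the maximum, since flipping any single bit always flips the output.

```python
from itertools import product

def avg_sensitivity(f, n: int) -> float:
    """Average over all inputs of the number of single-bit flips
    that change the function's output."""
    total = 0
    for x in product([0, 1], repeat=n):
        for i in range(n):
            y = list(x)
            y[i] ^= 1
            total += f(x) != f(tuple(y))
    return total / 2 ** n

parity = lambda x: sum(x) % 2
majority = lambda x: int(sum(x) > len(x) / 2)
print(avg_sensitivity(parity, 8))    # 8.0 -- maximal
print(avg_sensitivity(majority, 8))  # much lower
```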
null | null |
2402.09965
| null | null |
http://arxiv.org/pdf/2402.09965v1
|
2023-11-30T01:49:52Z
|
2023-11-30T01:49:52Z
|
Hierarchy Representation of Data in Machine Learnings
|
Given several models that produce clear-cut judgment results on a set of data points, it is often the case that most models that correctly judge one target also correctly judge another, and conversely, that most models that misjudge one target also misjudge the other. We propose a method for visualizing this hierarchy among targets. This information is expected to be beneficial for model improvement.
|
[
"['Han Yegang' 'Park Minjun' 'Byun Duwon' 'Park Inkyu']"
] |
null | null |
2402.09970
| null | null |
http://arxiv.org/pdf/2402.09970v2
|
2024-05-27T09:23:24Z
|
2024-02-15T14:27:58Z
|
Accelerating Parallel Sampling of Diffusion Models
|
Diffusion models have emerged as state-of-the-art generative models for image generation. However, sampling from diffusion models is usually time-consuming due to the inherent autoregressive nature of their sampling process. In this work, we propose a novel approach that accelerates the sampling of diffusion models by parallelizing the autoregressive process. Specifically, we reformulate the sampling process as solving a system of triangular nonlinear equations through fixed-point iteration. With this innovative formulation, we explore several systematic techniques to further reduce the iteration steps required by the solving process. Applying these techniques, we introduce ParaTAA, a universal and training-free parallel sampling algorithm that can leverage extra computational and memory resources to increase the sampling speed. Our experiments demonstrate that ParaTAA can decrease the inference steps required by common sequential sampling algorithms such as DDIM and DDPM by a factor of 4$\sim$14. Notably, when applying ParaTAA with 100-step DDIM for Stable Diffusion, a widely-used text-to-image diffusion model, it can produce the same images as the sequential sampling in only 7 inference steps. The code is available at https://github.com/TZW1998/ParaTAA-Diffusion.
|
[
"['Zhiwei Tang' 'Jiasheng Tang' 'Hao Luo' 'Fan Wang' 'Tsung-Hui Chang']"
] |
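The core reformulation can be sketched as a Jacobi-style fixed-point iteration over the entire trajectory; this is a toy version and omits the triangular structure and acceleration techniques that ParaTAA uses to cut the iteration count:

```python
def parallel_sample(f, x_T, T, sweeps=20):
    """Sequential sampling computes x_{t-1} = f(x_t, t) one step at a
    time; here every step is updated from the previous sweep's
    trajectory, so the T denoiser calls in a sweep can run as one batch."""
    traj = [x_T.copy() for _ in range(T + 1)]   # traj[t] approximates x_t
    for _ in range(sweeps):
        new = [f(traj[t], t) for t in range(T, 0, -1)]  # parallelizable
        traj[:T] = new[::-1]                            # traj[T] stays x_T
    return traj[0]
```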
null | null |
2402.09977
| null | null |
http://arxiv.org/abs/2402.09977v1
|
2024-02-15T14:37:07Z
|
2024-02-15T14:37:07Z
|
Fast Vocabulary Transfer for Language Model Compression
|
Real-world business applications require a trade-off between language model performance and size. We propose a new method for model compression that relies on vocabulary transfer. We evaluate the method on various vertical domains and downstream tasks. Our results indicate that vocabulary transfer can be effectively used in combination with other compression techniques, yielding a significant reduction in model size and inference time while marginally compromising on performance.
|
[
"['Leonidas Gee' 'Andrea Zugarini' 'Leonardo Rigutini' 'Paolo Torroni']"
] |
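One common realization of vocabulary transfer, shown here as an illustrative sketch (the paper's exact procedure may differ in details), initializes each new in-domain token from its decomposition under the original tokenizer:

```python
import numpy as np

def transfer_embeddings(new_vocab, old_tokenize, old_emb: np.ndarray):
    """Initialize each new token's embedding as the mean of the
    embeddings of its pieces in the old vocabulary.
    `old_tokenize(token) -> list[int]` is an assumed callable."""
    E = np.zeros((len(new_vocab), old_emb.shape[1]))
    for i, tok in enumerate(new_vocab):
        ids = old_tokenize(tok)
        if ids:
            E[i] = old_emb[ids].mean(axis=0)
    return E
```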
null | null |
2402.09978
| null | null |
http://arxiv.org/pdf/2402.09978v1
|
2024-02-15T14:41:55Z
|
2024-02-15T14:41:55Z
|
Deep learning for the design of non-Hermitian topolectrical circuits
|
Non-Hermitian topological phases can produce some remarkable properties compared with their Hermitian counterparts, such as the breakdown of conventional bulk-boundary correspondence and the non-Hermitian topological edge mode. Here, we introduce several deep learning algorithms, based on the multi-layer perceptron (MLP) and the convolutional neural network (CNN), to predict the winding of eigenvalues of non-Hermitian Hamiltonians. Subsequently, we use the smallest module of the periodic circuit as one unit to construct high-dimensional circuit data features. Further, we use the Dense Convolutional Network (DenseNet), a type of convolutional neural network that utilizes dense connections between layers, to design a non-Hermitian topolectrical Chern circuit, as the DenseNet algorithm is more suitable for processing high-dimensional data. Our results demonstrate the effectiveness of the deep learning network in capturing the global topological characteristics of a non-Hermitian system based on training data.
|
[
"['Xi Chen' 'Jinyang Sun' 'Xiumei Wang' 'Hengxuan Jiang' 'Dandan Zhu'\n 'Xingping Zhou']"
] |
null | null |
2402.09982
| null | null |
http://arxiv.org/abs/2402.09982v1
|
2024-02-15T14:46:03Z
|
2024-02-15T14:46:03Z
|
Data Augmentation and Transfer Learning Approaches Applied to Facial
Expressions Recognition
|
Facial expression is the first thing we pay attention to when we want to understand a person's state of mind. Thus, the ability to recognize facial expressions automatically is a very interesting research field. In this paper, because of the small size of available training datasets, we propose a novel data augmentation technique that improves performance on the recognition task. We apply geometrical transformations and build GAN models from scratch that can generate new synthetic images for each emotion type. On the augmented datasets, we then fine-tune pretrained convolutional neural networks with different architectures. To measure the generalization ability of the models, we apply an extra-database protocol: we train models on the augmented versions of the training dataset and test them on two different databases. The combination of these techniques allows us to reach average accuracy values of the order of 85% for the InceptionResNetV2 model.
|
[
"['Enrico Randellini' 'Leonardo Rigutini' \"Claudio Sacca'\"]"
] |
null | null |
2402.09984
| null | null |
http://arxiv.org/pdf/2402.09984v1
|
2024-02-15T14:49:28Z
|
2024-02-15T14:49:28Z
|
Symmetry-Breaking Augmentations for Ad Hoc Teamwork
|
In many collaborative settings, artificial intelligence (AI) agents must be able to adapt to new teammates that use unknown or previously unobserved strategies. While often simple for humans, this can be challenging for AI agents. For example, if an AI agent learns to drive alongside others (a training set) that only drive on one side of the road, it may struggle to adapt this experience to coordinate with drivers on the opposite side, even if their behaviours are simply flipped along the left-right symmetry. To address this, we introduce symmetry-breaking augmentations (SBA), which increase diversity in the behaviour of training teammates by applying a symmetry-flipping operation. By learning a best-response to the augmented set of teammates, our agent is exposed to a wider range of behavioural conventions, improving performance when deployed with novel teammates. We demonstrate this experimentally in two settings, and show that our approach improves upon previous ad hoc teamwork results in the challenging card game Hanabi. We also propose a general metric for estimating symmetry-dependency amongst a given set of policies.
|
[
"['Ravi Hammond' 'Dustin Craggs' 'Mingyu Guo' 'Jakob Foerster' 'Ian Reid']"
] |
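A hedged sketch of the symmetry-flipping operation, assuming a setting where a left-right mirror acts index-wise on observations and actions (both mappings below are hypothetical placeholders, not the paper's environments):

```python
def mirror_obs(obs):
    """Assumption: a 1-D observation whose layout mirrors cleanly."""
    return obs[::-1]

def mirror_action(a: int, n_actions: int) -> int:
    """Assumption: action i corresponds to action n_actions - 1 - i."""
    return n_actions - 1 - a

def sba_policy(base_policy, n_actions: int, flip: bool):
    """Wrap a teammate policy with an optional symmetry flip, yielding a
    behaviourally distinct but strategically equivalent teammate."""
    def policy(obs):
        if not flip:
            return base_policy(obs)
        return mirror_action(base_policy(mirror_obs(obs)), n_actions)
    return policy
```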
null | null |
2402.09990
| null | null |
http://arxiv.org/pdf/2402.09990v1
|
2024-02-15T14:54:46Z
|
2024-02-15T14:54:46Z
|
TIAViz: A Browser-based Visualization Tool for Computational Pathology
Models
|
Digital pathology has gained significant traction in modern healthcare systems. This shift from optical microscopes to digital imagery brings with it the potential for improved diagnosis, efficiency, and the integration of AI tools into the pathologist's workflow. A critical aspect of this is visualization. Throughout the development of a machine learning (ML) model in digital pathology, it is crucial to have flexible, openly available tools to visualize models, from their outputs and predictions to the underlying annotations and images used to train or test a model. We introduce TIAViz, a Python-based visualization tool built into TIAToolbox which allows flexible, interactive, fully zoomable overlay of a wide variety of information onto whole slide images (WSIs), including graphs, heatmaps, segmentations, annotations and other WSIs. The UI is browser-based, allowing use either locally, on a remote machine, or on a server to provide publicly available demos. This tool is open source and is made available at: https://github.com/TissueImageAnalytics/tiatoolbox and via pip installation (pip install tiatoolbox) and conda as part of TIAToolbox.
|
[
"['Mark Eastwood' 'John Pocock' 'Mostafa Jahanifar' 'Adam Shephard'\n 'Skiros Habib' 'Ethar Alzaid' 'Abdullah Alsalemi' 'Jan Lukas Robertus'\n 'Nasir Rajpoot' 'Shan Raza' 'Fayyaz Minhas']"
] |
null | null |
2402.09992
| null | null |
http://arxiv.org/pdf/2402.09992v1
|
2024-02-15T14:55:38Z
|
2024-02-15T14:55:38Z
|
Risk-Sensitive Soft Actor-Critic for Robust Deep Reinforcement Learning
under Distribution Shifts
|
We study the robustness of deep reinforcement learning algorithms against distribution shifts within contextual multi-stage stochastic combinatorial optimization problems from the operations research domain. In this context, risk-sensitive algorithms promise to learn robust policies. While this field is of general interest to the reinforcement learning community, most studies to date focus on theoretical results rather than real-world performance. With this work, we aim to bridge this gap by formally deriving a novel risk-sensitive deep reinforcement learning algorithm while providing numerical evidence for its efficacy. Specifically, we introduce discrete Soft Actor-Critic for the entropic risk measure by deriving a version of the Bellman equation for the respective Q-values. We establish a corresponding policy improvement result and infer a practical algorithm. We introduce an environment that represents typical contextual multi-stage stochastic combinatorial optimization problems and perform numerical experiments to empirically validate our algorithm's robustness against realistic distribution shifts, without compromising performance on the training distribution. We show that our algorithm is superior to risk-neutral Soft Actor-Critic as well as to two benchmark approaches for robust deep reinforcement learning. Thereby, we provide the first structured analysis on the robustness of reinforcement learning under distribution shifts in the realm of contextual multi-stage stochastic combinatorial optimization problems.
|
[
"['Tobias Enders' 'James Harrison' 'Maximilian Schiffer']"
] |
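The entropic risk measure referenced above is $\rho_\beta(X) = \frac{1}{\beta}\log \mathbb{E}[e^{\beta X}]$, which recovers the mean as $\beta \to 0$ and is risk-averse for $\beta < 0$ on returns. A small numerically stable sketch:

```python
import numpy as np

def entropic_risk(returns: np.ndarray, beta: float) -> float:
    """rho_beta(X) = (1/beta) * log E[exp(beta * X)], computed with a
    log-sum-exp shift for numerical stability."""
    z = beta * returns
    m = z.max()
    return (m + np.log(np.mean(np.exp(z - m)))) / beta

r = np.array([1.0, 1.0, -5.0, 1.0])
print(entropic_risk(r, -1.0))  # well below the mean: the rare bad outcome is penalized
print(r.mean())                # -0.5
```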
null | null |
2402.09997
| null | null |
http://arxiv.org/pdf/2402.09997v1
|
2024-02-15T15:02:46Z
|
2024-02-15T15:02:46Z
|
LoraRetriever: Input-Aware LoRA Retrieval and Composition for Mixed
Tasks in the Wild
|
Low-Rank Adaptation (LoRA) provides an effective yet efficient solution for fine-tuning large language models (LLMs). The modular and plug-and-play nature of LoRA enables the integration of diverse domain-specific LoRAs to enhance the capabilities of LLMs. Previous research on exploiting multiple LoRAs either focuses on specific isolated downstream tasks or fixes the selection of LoRAs during training. However, in real-world scenarios, LLMs receive diverse prompts covering different tasks, and the pool of candidate LoRAs is often dynamically updated. To bridge this gap, we propose LoraRetriever, a retrieve-then-compose framework that adaptively retrieves and composes multiple LoRAs according to the input prompts. LoraRetriever contains three main components: firstly, identifying and retrieving LoRAs relevant to the given input; secondly, formulating strategies for effectively integrating the retrieved LoRAs; and thirdly, developing efficient batch inference to accommodate heterogeneous requests. Experimental results indicate that LoraRetriever consistently outperforms the baselines, highlighting its practical effectiveness and versatility.
|
[
"['Ziyu Zhao' 'Leilei Gan' 'Guoyin Wang' 'Wangchunshu Zhou' 'Hongxia Yang'\n 'Kun Kuang' 'Fei Wu']"
] |
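A minimal sketch of the retrieve-then-compose pattern; cosine retrieval plus uniform averaging is one illustrative composition strategy, not necessarily the one the framework settles on:

```python
import numpy as np

def retrieve_loras(prompt_emb, lora_embs: dict, k: int = 2) -> list:
    """Return the names of the k LoRAs whose descriptor embeddings are
    closest (cosine similarity) to the input prompt embedding."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    scores = {name: cos(prompt_emb, e) for name, e in lora_embs.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

def compose(base_W, lora_deltas: dict, picked: list):
    """Uniform averaging of the retrieved LoRA weight deltas."""
    return base_W + sum(lora_deltas[n] for n in picked) / len(picked)
```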
null | null |
2402.10001
| null | null |
http://arxiv.org/pdf/2402.10001v2
|
2024-06-04T12:34:25Z
|
2024-02-15T15:06:33Z
|
Privacy Attacks in Decentralized Learning
|
Decentralized Gradient Descent (D-GD) allows a set of users to perform collaborative learning without sharing their data by iteratively averaging local model updates with their neighbors in a network graph. The absence of direct communication between non-neighbor nodes might lead to the belief that users cannot infer precise information about the data of others. In this work, we demonstrate the opposite, by proposing the first attack against D-GD that enables a user (or set of users) to reconstruct the private data of other users outside their immediate neighborhood. Our approach is based on a reconstruction attack against the gossip averaging protocol, which we then extend to handle the additional challenges raised by D-GD. We validate the effectiveness of our attack on real graphs and datasets, showing that the number of users compromised by a single or a handful of attackers is often surprisingly large. We empirically investigate some of the factors that affect the performance of the attack, namely the graph topology, the number of attackers, and their position in the graph.
|
[
"['Abdellah El Mrini' 'Edwige Cyffers' 'Aurélien Bellet']"
] |
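For reference, one round of the gossip averaging protocol that the attack inverts (our sketch): every round is a fixed linear map $x \mapsto Wx$, so an attacker who knows the graph and observes its neighbors over time can assemble a linear system in the unknown private inputs.

```python
import numpy as np

def gossip_round(x: np.ndarray, A: np.ndarray) -> np.ndarray:
    """One synchronous gossip-averaging step: each node replaces its
    value by the mean over itself and its neighbors (A = adjacency)."""
    W = A + np.eye(len(A))
    W = W / W.sum(axis=1, keepdims=True)
    return W @ x
```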
null | null |
2402.10005
| null | null |
http://arxiv.org/pdf/2402.10005v1
|
2023-12-18T03:04:42Z
|
2023-12-18T03:04:42Z
|
ML-ASPA: A Contemplation of Machine Learning-based Acoustic Signal
Processing Analysis for Sounds, & Strains Emerging Technology
|
Acoustic data serves as a fundamental cornerstone in advancing scientific and engineering understanding across diverse disciplines, spanning biology, communications, and ocean and Earth science. This inquiry meticulously explores recent advancements and transformative potential within the domain of acoustics, specifically focusing on machine learning (ML) and deep learning. ML, comprising an extensive array of statistical techniques, proves indispensable for autonomously discerning and leveraging patterns within data. In contrast to traditional acoustics and signal processing, ML adopts a data-driven approach, unveiling intricate relationships between features and desired labels or actions, as well as among features themselves, given ample training data. The application of ML to expansive sets of training data facilitates the discovery of models elucidating complex acoustic phenomena such as human speech and reverberation. The dynamic evolution of ML in acoustics yields compelling results and holds substantial promise for the future. The advent of electronic stethoscopes and analogous recording and data logging devices has expanded the application of acoustic signal processing concepts to the analysis of bowel sounds. This paper critically reviews existing literature on acoustic signal processing for bowel sound analysis, outlining fundamental approaches and applicable machine learning principles. It chronicles historical progress in signal processing techniques that have facilitated the extraction of valuable information from bowel sounds, emphasizing advancements in noise reduction, segmentation, signal enhancement, feature extraction, sound localization, and machine learning techniques...
|
[
"['Ratul Ali' 'Aktarul Islam' 'Md. Shohel Rana' 'Saila Nasrin'\n 'Sohel Afzal Shajol' 'Professor Dr. A. H. M. Saifullah Sadi']"
] |
null | null |
2402.10009
| null | null |
http://arxiv.org/pdf/2402.10009v4
|
2024-05-29T11:27:24Z
|
2024-02-15T15:17:26Z
|
Zero-Shot Unsupervised and Text-Based Audio Editing Using DDPM Inversion
|
Editing signals using large pre-trained models, in a zero-shot manner, has recently seen rapid advancements in the image domain. However, this wave has yet to reach the audio domain. In this paper, we explore two zero-shot editing techniques for audio signals, which use DDPM inversion with pre-trained diffusion models. The first, which we coin ZEro-shot Text-based Audio (ZETA) editing, is adopted from the image domain. The second, named ZEro-shot UnSupervized (ZEUS) editing, is a novel approach for discovering semantically meaningful editing directions without supervision. When applied to music signals, this method exposes a range of musically interesting modifications, from controlling the participation of specific instruments to improvisations on the melody. Samples and code can be found at https://hilamanor.github.io/AudioEditing/.
|
[
"['Hila Manor' 'Tomer Michaeli']"
] |
null | null |
2402.10024
| null | null |
http://arxiv.org/pdf/2402.10024v2
|
2024-06-05T13:38:42Z
|
2024-02-15T15:43:05Z
|
Self-Augmented In-Context Learning for Unsupervised Word Translation
|
Recent work has shown that, while large language models (LLMs) demonstrate strong word translation or bilingual lexicon induction (BLI) capabilities in few-shot setups, they still cannot match the performance of 'traditional' mapping-based approaches in the unsupervised scenario where no seed translation pairs are available, especially for lower-resource languages. To address this challenge with LLMs, we propose self-augmented in-context learning (SAIL) for unsupervised BLI: starting from a zero-shot prompt, SAIL iteratively induces a set of high-confidence word translation pairs for in-context learning (ICL) from an LLM, which it then reapplies to the same LLM in the ICL fashion. Our method shows substantial gains over zero-shot prompting of LLMs on two established BLI benchmarks spanning a wide range of language pairs, also outperforming mapping-based baselines across the board. In addition to achieving state-of-the-art unsupervised BLI performance, we also conduct comprehensive analyses on SAIL and discuss its limitations.
|
[
"['Yaoyiran Li' 'Anna Korhonen' 'Ivan Vulić']"
] |
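The SAIL loop can be sketched as follows; `llm_translate` is a hypothetical stand-in for prompting the LLM with the current demonstrations and extracting a translation plus a confidence score:

```python
def sail(llm_translate, source_words, rounds: int = 3, thresh: float = 0.9):
    """Iteratively induce a high-confidence seed lexicon and feed it back
    to the same LLM as in-context demonstrations."""
    demos = []
    for _ in range(rounds):
        new = []
        for w in source_words:
            translation, confidence = llm_translate(w, demos)
            if confidence >= thresh:
                new.append((w, translation))
        demos = new
    return demos
```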