categories | doi | id | year | venue | link | updated | published | title | abstract | authors
string | string | string | float64 | string | string | string | string | string | string | list
---|---|---|---|---|---|---|---|---|---|---|
null | null | 2405.00136 | null | null | http://arxiv.org/pdf/2405.00136v2 | 2024-05-05T02:41:47Z | 2024-04-30T18:32:24Z | Data-Driven Permissible Safe Control with Barrier Certificates | This paper introduces a method of identifying a maximal set of safe strategies from data for stochastic systems with unknown dynamics using barrier certificates. The first step is learning the dynamics of the system via Gaussian process (GP) regression and obtaining probabilistic errors for this estimate. Then, we develop an algorithm for constructing piecewise stochastic barrier functions to find a maximal permissible strategy set using the learned GP model, which is based on sequentially pruning the worst controls until a maximal set is identified. The permissible strategies are guaranteed to maintain probabilistic safety for the true system. This is especially important for learning-enabled systems, because a rich strategy space enables additional data collection and complex behaviors while remaining safe. Case studies on linear and nonlinear systems demonstrate that increasing the size of the dataset for learning the system grows the permissible strategy set. | ['Rayan Mazouz', 'John Skovbekk', 'Frederik Baymler Mathiesen', 'Eric Frew', 'Luca Laurenti', 'Morteza Lahijanian'] |
null | null | 2405.00142 | null | null | http://arxiv.org/pdf/2405.00142v2 | 2024-05-02T00:44:21Z | 2024-04-30T18:39:41Z | Utilizing Machine Learning and 3D Neuroimaging to Predict Hearing Loss: A Comparative Analysis of Dimensionality Reduction and Regression Techniques | In this project, we have explored machine learning approaches for predicting hearing loss thresholds on the brain's gray matter 3D images. We have solved the problem statement in two phases. In the first phase, we used a 3D CNN model to reduce high-dimensional input into latent space and decode it into an original image to represent the input in rich feature space. In the second phase, we utilized this model to reduce input into rich features and used these features to train standard machine learning models for predicting hearing thresholds. We have experimented with autoencoders and variational autoencoders in the first phase for dimensionality reduction and explored random forest, XGBoost and multi-layer perceptron for regressing the thresholds. We split the given data set into training and testing sets and achieved an 8.80 range and 22.57 range for PT500 and PT4000 on the test set, respectively. We got the lowest RMSE using multi-layer perceptron among the other models. Our approach leverages the unique capabilities of VAEs to capture complex, non-linear relationships within high-dimensional neuroimaging data. We rigorously evaluated the models using various metrics, focusing on the root mean squared error (RMSE). The results highlight the efficacy of the multi-layer neural network model, which outperformed other techniques in terms of accuracy. This project advances the application of data mining in medical diagnostics and enhances our understanding of age-related hearing loss through innovative machine-learning frameworks. | ['Trinath Sai Subhash Reddy Pittala', 'Uma Maheswara R Meleti', 'Manasa Thatipamula'] |
null | null | 2405.00156 | null | null | http://arxiv.org/pdf/2405.00156v1 | 2024-04-30T19:06:37Z | 2024-04-30T19:06:37Z | Expanding the Horizon: Enabling Hybrid Quantum Transfer Learning for Long-Tailed Chest X-Ray Classification | Quantum machine learning (QML) has the potential for improving the multi-label classification of rare, albeit critical, diseases in large-scale chest x-ray (CXR) datasets due to theoretical quantum advantages over classical machine learning (CML) in sample efficiency and generalizability. While prior literature has explored QML with CXRs, it has focused on binary classification tasks with small datasets due to limited access to quantum hardware and computationally expensive simulations. To that end, we implemented a Jax-based framework that enables the simulation of medium-sized qubit architectures with significant improvements in wall-clock time over current software offerings. We evaluated the performance of our Jax-based framework in terms of efficiency and performance for hybrid quantum transfer learning for long-tailed classification across 8, 14, and 19 disease labels using large-scale CXR datasets. The Jax-based framework resulted in up to a 58% and 95% speed-up compared to PyTorch and TensorFlow implementations, respectively. However, compared to CML, QML demonstrated slower convergence and an average AUROC of 0.70, 0.73, and 0.74 for the classification of 8, 14, and 19 CXR disease labels. In comparison, the CML models had an average AUROC of 0.77, 0.78, and 0.80 respectively. In conclusion, our work presents an accessible implementation of hybrid quantum transfer learning for long-tailed CXR classification with a computationally efficient Jax-based framework. | ['Skylar Chan', 'Pranav Kulkarni', 'Paul H. Yi', 'Vishwa S. Parekh'] |
null | null | 2405.00158 | null | null | http://arxiv.org/pdf/2405.00158v1 | 2024-04-30T19:15:33Z | 2024-04-30T19:15:33Z | BayesBlend: Easy Model Blending using Pseudo-Bayesian Model Averaging, Stacking and Hierarchical Stacking in Python | Averaging predictions from multiple competing inferential models frequently outperforms predictions from any single model, provided that models are optimally weighted to maximize predictive performance. This is particularly the case in so-called $\mathcal{M}$-open settings where the true model is not in the set of candidate models, and may be neither mathematically reifiable nor known precisely. This practice of model averaging has a rich history in statistics and machine learning, and there are currently a number of methods to estimate the weights for constructing model-averaged predictive distributions. Nonetheless, there are few existing software packages that can estimate model weights from the full variety of methods available, and none that blend model predictions into a coherent predictive distribution according to the estimated weights. In this paper, we introduce the BayesBlend Python package, which provides a user-friendly programming interface to estimate weights and blend multiple (Bayesian) models' predictive distributions. BayesBlend implements pseudo-Bayesian model averaging, stacking and, uniquely, hierarchical Bayesian stacking to estimate model weights. We demonstrate the usage of BayesBlend with examples of insurance loss modeling. | ['Nathaniel Haines', 'Conor Goold'] |
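The pseudo-Bayesian model averaging that BayesBlend implements is, at its core, a softmax over per-model expected log predictive densities (ELPD). A minimal hand-rolled sketch of that weighting rule (illustrative only; the package itself also offers stacking and hierarchical stacking, and the ELPD values below are made up):

```python
import numpy as np

def pseudo_bma_weights(elpd):
    # w_k is proportional to exp(elpd_k); subtract the max for numerical stability
    elpd = np.asarray(elpd, dtype=float)
    z = np.exp(elpd - elpd.max())
    return z / z.sum()

# Hypothetical ELPD estimates for three candidate models
w = pseudo_bma_weights([-102.3, -100.0, -110.7])
```

The blended predictive distribution is then the mixture of the candidate models' predictive distributions with these weights.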
null | null | 2405.00166 | null | null | http://arxiv.org/pdf/2405.00166v1 | 2024-04-30T19:31:31Z | 2024-04-30T19:31:31Z | Discovering intrinsic multi-compartment pharmacometric models using Physics Informed Neural Networks | Pharmacometric models are pivotal across drug discovery and development, playing a decisive role in determining the progression of candidate molecules. However, the derivation of mathematical equations governing the system is a labor-intensive trial-and-error process, often constrained by tight timelines. In this study, we introduce PKINNs, a novel purely data-driven pharmacokinetic-informed neural network model. PKINNs efficiently discovers and models intrinsic multi-compartment-based pharmacometric structures, reliably forecasting their derivatives. The resulting models are both interpretable and explainable through Symbolic Regression methods. Our computational framework demonstrates the potential for closed-form model discovery in pharmacometric applications, addressing the labor-intensive nature of traditional model derivation. With the increasing availability of large datasets, this framework holds the potential to significantly enhance model-informed drug discovery. | ['Imran Nasim', 'Adam Nasim'] |
null | null | 2405.00172 | null | null | http://arxiv.org/pdf/2405.00172v1 | 2024-04-30T19:43:01Z | 2024-04-30T19:43:01Z | Re-visiting Skip-Gram Negative Sampling: Dimension Regularization for More Efficient Dissimilarity Preservation in Graph Embeddings | A wide range of graph embedding objectives decompose into two components: one that attracts the embeddings of nodes that are perceived as similar, and another that repels embeddings of nodes that are perceived as dissimilar. Because real-world graphs are sparse and the number of dissimilar pairs grows quadratically with the number of nodes, Skip-Gram Negative Sampling (SGNS) has emerged as a popular and efficient repulsion approach. SGNS repels each node from a sample of dissimilar nodes, as opposed to all dissimilar nodes. In this work, we show that node-wise repulsion is, in aggregate, an approximate re-centering of the node embedding dimensions. Such dimension operations are much more scalable than node operations. The dimension approach, in addition to being more efficient, yields a simpler geometric interpretation of the repulsion. Our result extends findings from the self-supervised learning literature to the skip-gram model, establishing a connection between skip-gram node contrast and dimension regularization. We show that in the limit of large graphs, under mild regularity conditions, the original node repulsion objective converges to optimization with dimension regularization. We use this observation to propose an algorithm augmentation framework that speeds up any existing algorithm, supervised or unsupervised, using SGNS. The framework prioritizes node attraction and replaces SGNS with dimension regularization. We instantiate this generic framework for LINE and node2vec and show that the augmented algorithms preserve downstream performance while dramatically increasing efficiency. | ['David Liu', 'Arjun Seshadri', 'Tina Eliassi-Rad', 'Johan Ugander'] |
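The paper's central observation, that aggregate node-wise repulsion acts as a re-centering of the embedding dimensions, can be checked numerically in a toy setting. Here the repulsion objective is simplified to the sum of all pairwise inner products (a stand-in for SGNS, chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
E = rng.normal(size=(50, 8))   # 50 node embeddings in 8 dimensions
n = E.shape[0]

# Gradient of sum_{i,j} e_i . e_j with respect to each e_i is sum_j e_j,
# i.e. n times the per-dimension mean -- a dimension operation, not a node one.
grad = np.tile(E.sum(axis=0), (n, 1))
E_new = E - grad / n           # one gradient step with learning rate 1/n
```

After the step every embedding dimension has mean zero, which is the "simpler geometric interpretation of the repulsion" the abstract refers to: repelling everything from everything is, in aggregate, subtracting the centroid.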
null | null | 2405.00182 | null | null | http://arxiv.org/pdf/2405.00182v1 | 2024-04-30T20:13:18Z | 2024-04-30T20:13:18Z | M-DEW: Extending Dynamic Ensemble Weighting to Handle Missing Values | Missing value imputation is a crucial preprocessing step for many machine learning problems. However, it is often considered as a separate subtask from downstream applications such as classification, regression, or clustering, and thus is not optimized together with them. We hypothesize that treating the imputation model and downstream task model together and optimizing over full pipelines will yield better results than treating them separately. Our work describes a novel AutoML technique for making downstream predictions with missing data that automatically handles preprocessing, model weighting, and selection during inference time, with minimal compute overhead. Specifically we develop M-DEW, a Dynamic missingness-aware Ensemble Weighting (DEW) approach, that constructs a set of two-stage imputation-prediction pipelines, trains each component separately, and dynamically calculates a set of pipeline weights for each sample during inference time. We thus extend previous work on dynamic ensemble weighting to handle missing data at the level of full imputation-prediction pipelines, improving performance and calibration on downstream machine learning tasks over standard model averaging techniques. M-DEW is shown to outperform the state-of-the-art in that it produces statistically significant reductions in model perplexity in 17 out of 18 experiments, while improving average precision in 13 out of 18 experiments. | ['Adam Catto', 'Nan Jia', 'Ansaf Salleb-Aouissi', 'Anita Raja'] |
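The per-sample pipeline weighting at the heart of M-DEW can be sketched as a softmax over negative per-pipeline error estimates for the sample at hand. This is a hedged simplification: the paper's exact weighting scheme may differ, and the numbers below are invented.

```python
import math

def dynamic_weights(per_pipeline_errors):
    # Higher estimated error for this sample -> lower weight for that
    # imputation-prediction pipeline (softmax over negative errors)
    z = [math.exp(-e) for e in per_pipeline_errors]
    s = sum(z)
    return [v / s for v in z]

def blend(predictions, weights):
    # Weighted average of the pipelines' predictions for one sample
    return sum(p * w for p, w in zip(predictions, weights))

w = dynamic_weights([0.2, 1.5, 0.4])   # three pipelines, per-sample errors
y_hat = blend([0.9, 0.1, 0.8], w)
```

Because the weights are recomputed per sample, a pipeline whose imputer handles this sample's missingness pattern well can dominate the ensemble for that sample alone.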
null | null | 2405.00184 | null | null | http://arxiv.org/pdf/2405.00184v1 | 2024-04-30T20:16:40Z | 2024-04-30T20:16:40Z | Semi-Supervised Hierarchical Multi-Label Classifier Based on Local Information | Scarcity of labeled data is a common problem in supervised classification, since hand-labeling can be time-consuming, expensive, or difficult; on the other hand, large amounts of unlabeled information can be found. The problem of scarcity of labeled data is even more notorious in hierarchical classification, because the data of a node is split among its children, which results in few instances associated to the deepest nodes of the hierarchy. In this work we propose the semi-supervised hierarchical multi-label classifier based on local information (SSHMC-BLI), which can be trained with labeled and unlabeled data to perform hierarchical classification tasks. The method can be applied to any type of hierarchical problem; here we focus on the most difficult case: hierarchies of DAG type, where the instances can be associated to multiple paths of labels which can finish in an internal node. SSHMC-BLI builds pseudo-labels for each unlabeled instance from the paths of labels of its labeled neighbors, while it considers whether the unlabeled instance is similar to its neighbors. Experiments on 12 challenging datasets from functional genomics show that making use of unlabeled data along with labeled data can help to improve the performance of a supervised hierarchical classifier trained only on labeled data, even with statistical significance. | ['Jonathan Serrano-Pérez', 'L. Enrique Sucar'] |
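The pseudo-labeling step described above, building a label set for an unlabeled instance from the label paths of its labeled neighbors, can be sketched as a frequency-threshold vote. The function name, threshold, and toy label paths are illustrative assumptions, not the paper's exact rule:

```python
from collections import Counter

def pseudo_label_from_neighbors(neighbor_paths, threshold=0.5):
    # Keep every label that appears in at least `threshold` of the labeled
    # neighbors' label paths (an illustrative variant of the BLI idea)
    k = len(neighbor_paths)
    counts = Counter(label for path in neighbor_paths for label in set(path))
    return {label for label, c in counts.items() if c / k >= threshold}

paths = [{'root', 'animal', 'dog'},
         {'root', 'animal', 'dog', 'puppy'},
         {'root', 'animal', 'cat'}]
labels = pseudo_label_from_neighbors(paths)
```

Note the result is itself a path prefix in the hierarchy ending at an internal node, which is exactly the DAG-type, internal-node-terminating label structure the abstract highlights.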
null | null | 2405.00202 | null | null | http://arxiv.org/pdf/2405.00202v1 | 2024-04-30T21:10:51Z | 2024-04-30T21:10:51Z | Leveraging Active Subspaces to Capture Epistemic Model Uncertainty in Deep Generative Models for Molecular Design | Deep generative models have been accelerating the inverse design process in material and drug design. Unlike their counterpart property predictors in typical molecular design frameworks, generative molecular design models have seen fewer efforts on uncertainty quantification (UQ) due to computational challenges in Bayesian inference posed by their large number of parameters. In this work, we focus on the junction-tree variational autoencoder (JT-VAE), a popular model for generative molecular design, and address this issue by leveraging the low dimensional active subspace to capture the uncertainty in the model parameters. Specifically, we approximate the posterior distribution over the active subspace parameters to estimate the epistemic model uncertainty in an extremely high dimensional parameter space. The proposed UQ scheme does not require alteration of the model architecture, making it readily applicable to any pre-trained model. Our experiments demonstrate the efficacy of the AS-based UQ and its potential impact on molecular optimization by exploring the model diversity under epistemic uncertainty. | ['A N M Nafiz Abeer', 'Sanket Jantre', 'Nathan M Urban', 'Byung-Jun Yoon'] |
null | null | 2405.00205 | null | null | http://arxiv.org/pdf/2405.00205v1 | 2024-04-30T21:16:38Z | 2024-04-30T21:16:38Z | A Logic for Reasoning About Aggregate-Combine Graph Neural Networks | We propose a modal logic in which counting modalities appear in linear inequalities. We show that each formula can be transformed into an equivalent graph neural network (GNN). We also show that a broad class of GNNs can be transformed efficiently into a formula, thus significantly improving upon the literature about the logical expressiveness of GNNs. We also show that the satisfiability problem is PSPACE-complete. These results bring together the promise of using standard logical methods for reasoning about GNNs and their properties, particularly in applications such as GNN querying, equivalence checking, etc. We prove that such natural problems can be solved in polynomial space. | ['Pierre Nunn', 'Marco Sälzer', 'François Schwarzentruber', 'Nicolas Troquard'] |
null | null | 2405.00213 | null | null | http://arxiv.org/pdf/2405.00213v1 | 2024-04-30T21:37:08Z | 2024-04-30T21:37:08Z | Block-As-Domain Adaptation for Workload Prediction from fNIRS Data | Functional near-infrared spectroscopy (fNIRS) is a non-intrusive way to measure cortical hemodynamic activity. Predicting cognitive workload from fNIRS data has taken on a diffuse set of methods. To be applicable in real-world settings, models are needed, which can perform well across different sessions as well as different subjects. However, most existing works assume that training and testing data come from the same subjects and/or cannot generalize well across never-before-seen subjects. Additional challenges imposed by fNIRS data include the high variations in inter-subject fNIRS data and also in intra-subject data collected across different blocks of sessions. To address these issues, we propose an effective method, referred to as the class-aware-block-aware domain adaptation (CABA-DA), which explicitly minimizes intra-session variance by viewing different blocks from the same subject and session as different domains. We minimize the intra-class domain discrepancy and maximize the inter-class domain discrepancy accordingly. In addition, we propose an MLPMixer-based model for cognitive load classification. Experimental results demonstrate the proposed model has better performance compared with three different baseline models on three publicly available datasets of cognitive workload. Two of them are collected from n-back tasks and one of them is from finger tapping. From our experiments, we also show the proposed contrastive learning method can also improve the baseline models we compared with. | ['Jiyang Wang', 'Ayse Altay', 'Senem Velipasalar'] |
null | null | 2405.00216 | null | null | http://arxiv.org/pdf/2405.00216v1 | 2024-04-30T21:41:53Z | 2024-04-30T21:41:53Z | Graphical Reasoning: LLM-based Semi-Open Relation Extraction | This paper presents a comprehensive exploration of relation extraction utilizing advanced language models, specifically Chain of Thought (CoT) and Graphical Reasoning (GRE) techniques. We demonstrate how leveraging in-context learning with GPT-3.5 can significantly enhance the extraction process, particularly through detailed example-based reasoning. Additionally, we introduce a novel graphical reasoning approach that dissects relation extraction into sequential sub-tasks, improving precision and adaptability in processing complex relational data. Our experiments, conducted on multiple datasets, including manually annotated data, show considerable improvements in performance metrics, underscoring the effectiveness of our methodologies. | ['Yicheng Tao', 'Yiqun Wang', 'Longju Bai'] |
null | null | 2405.00217 | null | null | http://arxiv.org/pdf/2405.00217v1 | 2024-04-30T21:52:15Z | 2024-04-30T21:52:15Z | GMC-PINNs: A new general Monte Carlo PINNs method for solving fractional partial differential equations on irregular domains | Physics-Informed Neural Networks (PINNs) have been widely used for solving partial differential equations (PDEs) of different types, including fractional PDEs (fPDEs) [29]. Herein, we propose a new general (quasi) Monte Carlo PINN for solving fPDEs on irregular domains. Specifically, instead of approximating fractional derivatives by Monte Carlo approximations of integrals as was done previously in [31], we use a more general Monte Carlo approximation method to solve different fPDEs, which is valid for fractional differentiation under any definition. Moreover, based on the ensemble probability density function, the generated nodes are all located in denser regions near the target point where we perform the differentiation. This has an unexpected connection with known finite difference methods on non-equidistant or nested grids, and hence our method inherits their advantages. At the same time, the generated nodes exhibit a block-like dense distribution, leading to a good computational efficiency of this approach. We present the framework for using this algorithm and apply it to several examples. Our results demonstrate the effectiveness of GMC-PINNs in dealing with irregular domain problems and show a higher computational efficiency compared to the original fPINN method. We also include comparisons with the Monte Carlo fPINN [31]. Finally, we use examples to demonstrate the effectiveness of the method in dealing with fuzzy boundary location problems, and then use the method to solve the coupled 3D fractional Bloch-Torrey equation defined in the ventricular domain of the human brain, and compare the results with classical numerical methods. | ['Shupeng Wang', 'George Em Karniadakis'] |
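The (quasi) Monte Carlo approximation at the core of GMC-PINNs reduces an integral to a sample average. A generic, seeded sketch of the plain Monte Carlo estimator is below; the paper's estimator is more sophisticated (it additionally concentrates samples near the point of differentiation), so this shows only the underlying principle:

```python
import numpy as np

def mc_integral(f, n=100_000, seed=0):
    # Estimate int_0^1 f(x) dx as the mean of f at n uniform samples,
    # with standard error O(1/sqrt(n))
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.0, 1.0, size=n)
    return f(x).mean()

est = mc_integral(lambda x: x ** 2)   # true value is 1/3
```

Replacing the uniform sampler with a density that concentrates near the target point, as the paper does, reduces variance for the singular kernels that arise in fractional derivatives.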
null | null | 2405.00218 | null | null | http://arxiv.org/pdf/2405.00218v2 | 2024-06-07T06:47:15Z | 2024-04-30T21:52:19Z | Constrained Decoding for Secure Code Generation | Code Large Language Models (Code LLMs) have been increasingly used by developers to boost productivity, but they often generate vulnerable code. Thus, there is an urgent need to ensure that code generated by Code LLMs is correct and secure. Previous research has primarily focused on generating secure code, overlooking the fact that secure code also needs to be correct. This oversight can lead to a false sense of security. Currently, the community lacks a method to measure actual progress in this area, and we need solutions that address both security and correctness of code generation. This paper introduces a new benchmark, CodeGuard+, along with two new metrics, to measure Code LLMs' ability to generate both secure and correct code. Using our new evaluation methods, we show that the state-of-the-art defense technique, prefix tuning, may not be as strong as previously believed, since it generates secure code but sacrifices functional correctness. We also demonstrate that different decoding methods significantly affect the security of Code LLMs. Furthermore, we explore a new defense direction: constrained decoding for secure code generation. We propose new constrained decoding techniques to generate secure code. Our results reveal that constrained decoding is more effective than prefix tuning to improve the security of Code LLMs, without requiring a specialized training dataset. Moreover, our evaluations over eight state-of-the-art Code LLMs show that constrained decoding has strong performance to improve the security of Code LLMs, and our technique outperforms GPT-4. | ['Yanjun Fu', 'Ethan Baker', 'Yu Ding', 'Yizheng Chen'] |
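Constrained decoding, in its simplest form, masks out tokens that would violate a constraint before each greedy selection step. A toy sketch of that idea (the "vocabulary", scores, and the choice of forbidden token are invented for illustration and are not the paper's technique):

```python
def constrained_greedy(logits_per_step, allowed):
    # At each decoding step, pick the highest-scoring token from the
    # allowed set only, ignoring tokens the constraint forbids
    out = []
    for logits in logits_per_step:
        out.append(max(allowed, key=lambda t: logits[t]))
    return out

# Toy per-step scores; suppose the insecure function 'gets' is forbidden
steps = [{'strcpy': 2.0, 'strncpy': 1.5, 'gets': 3.0},
         {'strcpy': 0.1, 'strncpy': 0.9, 'gets': 2.5}]
safe = constrained_greedy(steps, allowed={'strcpy', 'strncpy'})
```

The appeal the abstract points to is visible even here: the constraint is enforced at inference time, so no specialized training dataset is required.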
null | null | 2405.00219 | null | null | http://arxiv.org/pdf/2405.00219v1 | 2024-04-30T21:53:11Z | 2024-04-30T21:53:11Z | Machine Learning-based Estimation of Respiratory Fluctuations in a Healthy Adult Population using BOLD fMRI and Head Motion Parameters | Motivation: In many fMRI studies, respiratory signals are often missing or of poor quality. Therefore, it could be highly beneficial to have a tool to extract respiratory variation (RV) waveforms directly from fMRI data without the need for peripheral recording devices. Goal(s): Investigate the hypothesis that head motion parameters contain valuable information regarding respiratory pattern, which can help machine learning algorithms estimate the RV waveform. Approach: This study proposes a CNN model for reconstruction of RV waveforms using head motion parameters and BOLD signals. Results: This study showed that combining head motion parameters with BOLD signals enhances RV waveform estimation. Impact: It is expected that application of the proposed method will lower the cost of fMRI studies, reduce complexity, and decrease the burden on participants as they will not be required to wear a respiratory bellows. | ['Abdoljalil Addeh', 'Fernando Vega', 'Rebecca J. Williams', 'G. Bruce Pike', 'M. Ethan MacDonald'] |
null | null | 2405.00220 | null | null | http://arxiv.org/pdf/2405.00220v1 | 2024-04-30T21:55:49Z | 2024-04-30T21:55:49Z | Context-Aware Mobile Network Performance Prediction Using Network & Remote Sensing Data | Accurate estimation of Network Performance is crucial for several tasks in telecom networks. Telecom networks regularly serve a vast number of radio nodes. Each radio node provides services to end-users in the associated coverage areas. The task of predicting Network Performance for telecom networks necessitates considering complex spatio-temporal interactions and incorporating geospatial information where the radio nodes are deployed. Instead of relying on historical data alone, our approach augments network historical performance datasets with satellite imagery data. Our comprehensive experiments, using real-world data collected from multiple different regions of an operational network, show that the model is robust and can generalize across different scenarios. The results indicate that the model, utilizing satellite imagery, performs very well across the tested regions. Additionally, the model demonstrates a robust approach to the cold-start problem, offering a promising alternative for initial performance estimation in newly deployed sites. | ['Ali Shibli', 'Tahar Zanouda'] |
null | null | 2405.00236 | null | null | http://arxiv.org/pdf/2405.00236v1 | 2024-04-30T23:04:36Z | 2024-04-30T23:04:36Z | STT: Stateful Tracking with Transformers for Autonomous Driving | Tracking objects in three-dimensional space is critical for autonomous driving. To ensure safety while driving, the tracker must be able to reliably track objects across frames and accurately estimate their states such as velocity and acceleration in the present. Existing works frequently focus on the association task while either neglecting the model performance on state estimation or deploying complex heuristics to predict the states. In this paper, we propose STT, a Stateful Tracking model built with Transformers, that can consistently track objects in the scenes while also predicting their states accurately. STT consumes rich appearance, geometry, and motion signals through long term history of detections and is jointly optimized for both data association and state estimation tasks. Since the standard tracking metrics like MOTA and MOTP do not capture the combined performance of the two tasks in the wider spectrum of object states, we extend them with new metrics called S-MOTA and MOTPS that address this limitation. STT achieves competitive real-time performance on the Waymo Open Dataset. | ['Longlong Jing', 'Ruichi Yu', 'Xu Chen', 'Zhengli Zhao', 'Shiwei Sheng', 'Colin Graber', 'Qi Chen', 'Qinru Li', 'Shangxuan Wu', 'Han Deng', 'Sangjin Lee', 'Chris Sweeney', 'Qiurui He', 'Wei-Chih Hung', 'Tong He', 'Xingyi Zhou', 'Farshid Moussavi', 'Zijian Guo', 'Yin Zhou', 'Mingxing Tan', 'Weilong Yang', 'Congcong Li'] |
null | null | 2405.00239 | null | null | http://arxiv.org/pdf/2405.00239v1 | 2024-04-30T23:09:54Z | 2024-04-30T23:09:54Z | IgCONDA-PET: Implicitly-Guided Counterfactual Diffusion for Detecting Anomalies in PET Images | Minimizing the need for pixel-level annotated data for training PET anomaly segmentation networks is crucial, particularly due to time and cost constraints related to expert annotations. Current un-/weakly-supervised anomaly detection methods rely on autoencoder or generative adversarial networks trained only on healthy data, although these are more challenging to train. In this work, we present a weakly supervised and Implicitly guided COuNterfactual diffusion model for Detecting Anomalies in PET images, branded as IgCONDA-PET. The training is conditioned on image class labels (healthy vs. unhealthy) along with implicit guidance to generate counterfactuals for an unhealthy image with anomalies. The counterfactual generation process synthesizes the healthy counterpart for a given unhealthy image, and the difference between the two facilitates the identification of anomaly locations. The code is available at: https://github.com/igcondapet/IgCONDA-PET.git | ['Shadab Ahamed', 'Yixi Xu', 'Arman Rahmim'] |
null | null | 2405.00251 | null | null | http://arxiv.org/pdf/2405.00251v1 | 2024-04-30T23:49:26Z | 2024-04-30T23:49:26Z | Semantically Consistent Video Inpainting with Conditional Diffusion Models | Current state-of-the-art methods for video inpainting typically rely on optical flow or attention-based approaches to inpaint masked regions by propagating visual information across frames. While such approaches have led to significant progress on standard benchmarks, they struggle with tasks that require the synthesis of novel content that is not present in other frames. In this paper we reframe video inpainting as a conditional generative modeling problem and present a framework for solving such problems with conditional video diffusion models. We highlight the advantages of using a generative approach for this task, showing that our method is capable of generating diverse, high-quality inpaintings and synthesizing new content that is spatially, temporally, and semantically consistent with the provided context. | ['Dylan Green', 'William Harvey', 'Saeid Naderiparizi', 'Matthew Niedoba', 'Yunpeng Liu', 'Xiaoxuan Liang', 'Jonathan Lavington', 'Ke Zhang', 'Vasileios Lioutas', 'Setareh Dabiri', 'Adam Scibior', 'Berend Zwartsenberg', 'Frank Wood'] |
null | null | 2405.00252 | null | null | http://arxiv.org/pdf/2405.00252v1 | 2024-04-30T23:55:03Z | 2024-04-30T23:55:03Z | Hybrid Quantum-Classical Scheduling for Accelerating Neural Network Training with Newton's Gradient Descent | Optimization techniques in deep learning are predominantly led by first-order gradient methodologies, such as SGD. However, neural network training can greatly benefit from the rapid convergence characteristics of second-order optimization. Newton's GD stands out in this category, by rescaling the gradient using the inverse Hessian. Nevertheless, one of its major bottlenecks is matrix inversion, which is notably time-consuming in $O(N^3)$ time with weak scalability. Matrix inversion can be translated into solving a series of linear equations. Given that quantum linear solver algorithms (QLSAs), leveraging the principles of quantum superposition and entanglement, can operate within a $\text{polylog}(N)$ time frame, they present a promising approach with exponential acceleration. Specifically, one of the most recent QLSAs demonstrates a complexity scaling of $O(d \cdot \kappa \log(N \cdot \kappa / \epsilon))$, depending on the size $N$, condition number $\kappa$, error tolerance $\epsilon$, and quantum oracle sparsity $d$ of the matrix. However, this also implies that their potential exponential advantage may be hindered by certain properties (i.e., $\kappa$ and $d$). We propose Q-Newton, a hybrid quantum-classical scheduler for accelerating neural network training with Newton's GD. Q-Newton utilizes a streamlined scheduling module that coordinates between quantum and classical linear solvers, by estimating and reducing $\kappa$ and constructing $d$ for the quantum solver. Our evaluation showcases the potential for Q-Newton to significantly reduce the total training time compared to commonly used optimizers like SGD. We hypothesize a future scenario where the gate time of quantum machines is reduced, possibly realized by attoseconds physics. Our evaluation establishes an ambitious and promising target for the evolution of quantum computing. | ['Pingzhi Li', 'Junyu Liu', 'Hanrui Wang', 'Tianlong Chen'] |
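Newton's GD rescales the gradient by the inverse Hessian; as the abstract notes, in practice one solves the linear system $Hd = g$ rather than forming $H^{-1}$, and it is exactly this solve that Q-Newton routes to either a quantum or a classical linear solver. A classical NumPy sketch of the Newton step on a quadratic objective:

```python
import numpy as np

def newton_step(theta, grad, hess):
    # Solve H d = g instead of explicitly inverting H (cheaper and more stable)
    return theta - np.linalg.solve(hess, grad)

# Quadratic f(x) = 0.5 x^T A x - b^T x has gradient A x - b and Hessian A,
# so a single Newton step from any starting point lands on the minimizer A^{-1} b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x0 = np.array([10.0, -10.0])
x1 = newton_step(x0, A @ x0 - b, A)
```

The one-step convergence on quadratics is what makes the $O(N^3)$ solve worth paying for, and why replacing it with a $\text{polylog}(N)$ quantum solver (when $\kappa$ and $d$ cooperate) is attractive.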
null | null | 2405.00254 | null | null | http://arxiv.org/pdf/2405.00254v2 | 2024-05-27T14:08:40Z | 2024-04-30T23:57:23Z | RLHF from Heterogeneous Feedback via Personalization and Preference Aggregation | Reinforcement learning from human feedback (RLHF) has been an effective technique for aligning AI systems with human values, with remarkable successes in fine-tuning large-language models recently. Most existing RLHF paradigms make the underlying assumption that human preferences are relatively homogeneous, and can be encoded by a single reward model. In this paper, we focus on addressing the issues due to the inherent heterogeneity in human preferences, as well as their potential strategic behavior in providing feedback. Specifically, we propose two frameworks to address heterogeneous human feedback in principled ways: a personalization-based one and an aggregation-based one. For the former, we propose two approaches based on representation learning and clustering, respectively, for learning multiple reward models that trade off the bias (due to preference heterogeneity) and variance (due to the use of fewer data for learning each model by personalization). We then establish sample complexity guarantees for both approaches. For the latter, we aim to adhere to the single-model framework, as already deployed in the current RLHF paradigm, by carefully aggregating diverse and truthful preferences from humans. We propose two approaches based on reward and preference aggregation, respectively: the former utilizes both utilitarian and Leximin approaches to aggregate individual reward models, with sample complexity guarantees; the latter directly aggregates the human feedback in the form of probabilistic opinions. Under the probabilistic-opinion-feedback model, we also develop an approach to handle strategic human labelers who may bias and manipulate the aggregated preferences with untruthful feedback. Based on the ideas in mechanism design, our approach ensures truthful preference reporting, with the induced aggregation rule maximizing social welfare functions. | ['Chanwoo Park', 'Mingyang Liu', 'Dingwen Kong', 'Kaiqing Zhang', 'Asuman Ozdaglar'] |
null | null | 2405.00263 | null | null | http://arxiv.org/pdf/2405.00263v1 | 2024-05-01T00:46:22Z | 2024-05-01T00:46:22Z | Clover: Regressive Lightweight Speculative Decoding with Sequential
Knowledge | Large language models (LLMs) suffer from low efficiency due to the mismatch between the requirements of auto-regressive decoding and the design of most contemporary GPUs. Specifically, billions to trillions of parameters must be loaded to the GPU cache through its limited memory bandwidth for computation, but only a small batch of tokens is actually computed. Consequently, the GPU spends most of its time on memory transfer instead of computation. Recently, parallel decoding, a type of speculative decoding algorithm, is becoming more popular and has demonstrated impressive efficiency improvement in generation. It introduces extra decoding heads to large models, enabling them to predict multiple subsequent tokens simultaneously and verify these candidate continuations in a single decoding step. However, this approach deviates from the training objective of next token prediction used during pre-training, resulting in a low hit rate for candidate tokens. In this paper, we propose a new speculative decoding algorithm, Clover, which integrates sequential knowledge into the parallel decoding process. This enhancement improves the hit rate of speculators and thus boosts the overall efficiency. Clover transmits the sequential knowledge from pre-speculated tokens via the Regressive Connection, then employs an Attention Decoder to integrate these speculated tokens. Additionally, Clover incorporates an Augmenting Block that modifies the hidden states to better align with the purpose of speculative generation rather than next token prediction. The experimental results demonstrate that Clover outperforms the baseline by up to 91% on Baichuan-Small and 146% on Baichuan-Large, respectively, and exceeds the performance of the previously top-performing method, Medusa, by up to 37% on Baichuan-Small and 57% on Baichuan-Large, respectively. | [
"['Bin Xiao' 'Chunan Shi' 'Xiaonan Nie' 'Fan Yang' 'Xiangwei Deng' 'Lei Su'\n 'Weipeng Chen' 'Bin Cui']"
]
|
null | null | 2405.00282 | null | null | http://arxiv.org/pdf/2405.00282v1 | 2024-05-01T02:19:31Z | 2024-05-01T02:19:31Z | MF-OML: Online Mean-Field Reinforcement Learning with Occupation
Measures for Large Population Games | Reinforcement learning for multi-agent games has attracted lots of attention recently. However, given the challenge of solving Nash equilibria for large population games, existing works with guaranteed polynomial complexities either focus on variants of zero-sum and potential games, or aim at solving (coarse) correlated equilibria, or require access to simulators, or rely on certain assumptions that are hard to verify. This work proposes MF-OML (Mean-Field Occupation-Measure Learning), an online mean-field reinforcement learning algorithm for computing approximate Nash equilibria of large population sequential symmetric games. MF-OML is the first fully polynomial multi-agent reinforcement learning algorithm for provably solving Nash equilibria (up to mean-field approximation gaps that vanish as the number of players $N$ goes to infinity) beyond variants of zero-sum and potential games. When evaluated by the cumulative deviation from Nash equilibria, the algorithm is shown to achieve a high probability regret bound of $\tilde{O}(M^{3/4}+N^{-1/2}M)$ for games with the strong Lasry-Lions monotonicity condition, and a regret bound of $\tilde{O}(M^{11/12}+N^{-1/6}M)$ for games with only the Lasry-Lions monotonicity condition, where $M$ is the total number of episodes and $N$ is the number of agents of the game. As a byproduct, we also obtain the first tractable globally convergent computational algorithm for computing approximate Nash equilibria of monotone mean-field games. | [
"['Anran Hu' 'Junzi Zhang']"
]
|
null | null | 2405.00285 | null | null | http://arxiv.org/pdf/2405.00285v2 | 2024-05-06T19:05:23Z | 2024-05-01T02:26:13Z | iMTSP: Solving Min-Max Multiple Traveling Salesman Problem with
Imperative Learning | This paper considers a Min-Max Multiple Traveling Salesman Problem (MTSP), where the goal is to find a set of tours, one for each agent, to collectively visit all the cities while minimizing the length of the longest tour. Though MTSP has been widely studied, obtaining near-optimal solutions for large-scale problems is still challenging due to its NP-hardness. Recent efforts in data-driven methods face challenges of the need for hard-to-obtain supervision and issues with high variance in gradient estimations, leading to slow convergence and highly suboptimal solutions. We address these issues by reformulating MTSP as a bilevel optimization problem, using the concept of imperative learning (IL). This involves introducing an allocation network that decomposes the MTSP into multiple single-agent traveling salesman problems (TSPs). The longest tour from these TSP solutions is then used to self-supervise the allocation network, resulting in a new self-supervised, bilevel, end-to-end learning framework, which we refer to as imperative MTSP (iMTSP). Additionally, to tackle the high-variance gradient issues during the optimization, we introduce a control variate-based gradient estimation algorithm. Our experiments showed that these innovative designs enable our gradient estimator to converge 20% faster than the advanced reinforcement learning baseline and find up to 80% shorter tour length compared with Google OR-Tools MTSP solver, especially in large-scale problems (e.g. 1000 cities and 15 agents). | [
"['Yifan Guo' 'Zhongqiang Ren' 'Chen Wang']"
]
|
null | null | 2405.00287 | null | null | http://arxiv.org/pdf/2405.00287v1 | 2024-05-01T02:27:59Z | 2024-05-01T02:27:59Z | Stochastic Sampling for Contrastive Views and Hard Negative Samples in
Graph-based Collaborative Filtering | Graph-based collaborative filtering (CF) has emerged as a promising approach in recommendation systems. Despite its achievements, graph-based CF models face challenges due to data sparsity and negative sampling. In this paper, we propose a novel Stochastic sampling for i) COntrastive views and ii) hard NEgative samples (SCONE) to overcome these issues. By considering that they are both sampling tasks, we generate dynamic augmented views and diverse hard negative samples via our unified stochastic sampling framework based on score-based generative models. In our comprehensive evaluations with 6 benchmark datasets, our proposed SCONE significantly improves recommendation accuracy and robustness, and demonstrates the superiority of our approach over existing CF models. Furthermore, we prove the efficacy of user-item specific stochastic sampling for addressing the user sparsity and item popularity issues. The integration of the stochastic sampling and graph-based CF obtains the state-of-the-art in personalized recommendation systems, making significant strides in information-rich environments. | [
"['Chaejeong Lee' 'Jeongwhan Choi' 'Hyowon Wi' 'Sung-Bae Cho'\n 'Noseong Park']"
]
|
null | null | 2405.00303 | null | null | http://arxiv.org/pdf/2405.00303v2 | 2024-06-18T18:40:09Z | 2024-05-01T03:59:06Z | Joint Optimization of Piecewise Linear Ensembles | Tree ensembles achieve state-of-the-art performance on numerous prediction tasks. We propose Joint Optimization of Piecewise Linear ENsembles (JOPLEN), which jointly fits piecewise linear models at all leaf nodes of an existing tree ensemble. In addition to enhancing the expressiveness of an ensemble, JOPLEN allows several common penalties, including sparsity-promoting matrix norms and subspace-norms, to be applied to nonlinear prediction. We demonstrate the performance of JOPLEN on over 100 regression and classification datasets and with a variety of penalties. JOPLEN leads to improved prediction performance relative to not only standard random forest and gradient boosted tree ensembles, but also other methods for enhancing tree ensembles. We demonstrate that JOPLEN with a nuclear norm penalty learns subspace-aligned functions. Additionally, JOPLEN combined with a Dirty LASSO penalty is an effective feature selection method for nonlinear prediction in multitask learning. | [
"['Matt Raymond' 'Angela Violi' 'Clayton Scott']"
]
|
null | null | 2405.00304 | null | null | http://arxiv.org/pdf/2405.00304v1 | 2024-05-01T04:00:09Z | 2024-05-01T04:00:09Z | QUACK: Quantum Aligned Centroid Kernel | Quantum computing (QC) seems to show potential for application in machine learning (ML). In particular, quantum kernel methods (QKM) exhibit promising properties for use in supervised ML tasks. However, a major disadvantage of kernel methods is their unfavorable quadratic scaling with the number of training samples. Together with the limits imposed by currently available quantum hardware (NISQ devices) with their low qubit coherence times, small number of qubits, and high error rates, the use of QC in ML at an industrially relevant scale is currently impossible. As a small step in improving the potential applications of QKMs, we introduce QUACK, a quantum kernel algorithm whose time complexity scales linearly with the number of samples during training and is independent of the number of training samples in the inference stage. In the training process, only the kernel entries for the samples and the centers of the classes are calculated, i.e. the maximum shape of the kernel for n samples and c classes is (n, c). During training, the parameters of the quantum kernel and the positions of the centroids are optimized iteratively. In the inference stage, for every new sample the circuit is only evaluated for every centroid, i.e. c times. We show that the QUACK algorithm nevertheless provides satisfactory results and can perform at a similar level as classical kernel methods with quadratic scaling during training. In addition, our (simulated) algorithm is able to handle high-dimensional datasets such as MNIST with 784 features without any dimensionality reduction. | [
"['Kilian Tscharke' 'Sebastian Issel' 'Pascal Debus']"
]
|
null | null | 2405.00311 | null | null | http://arxiv.org/pdf/2405.00311v2 | 2024-07-11T15:03:49Z | 2024-05-01T04:28:44Z | Three-layer deep learning network random trees for fault detection in
chemical production process | With the development of technology, the chemical production process is becoming increasingly complex and large-scale, making fault detection particularly important. However, current detection methods struggle to address the complexities of large-scale production processes. In this paper, we integrate the strengths of deep learning and machine learning technologies, combining the advantages of bidirectional long short-term memory neural networks, fully connected neural networks, and the extra trees algorithm to propose a novel fault detection model named three-layer deep learning network random trees (TDLN-trees). First, the deep learning component extracts temporal features from industrial data, combining and transforming them into a higher-level data representation. Second, the machine learning component processes and classifies the features extracted in the first step. An experimental analysis based on the Tennessee Eastman process verifies the superiority of the proposed method. | [
"['Ming Lu' 'Zhen Gao' 'Ying Zou' 'Zuguo Chen' 'Pei Li']"
]
|
null | null | 2405.00314 | null | null | http://arxiv.org/pdf/2405.00314v1 | 2024-05-01T04:32:07Z | 2024-05-01T04:32:07Z | Model Quantization and Hardware Acceleration for Vision Transformers: A
Comprehensive Survey | Vision Transformers (ViTs) have recently garnered considerable attention, emerging as a promising alternative to convolutional neural networks (CNNs) in several vision-related applications. However, their large model sizes and high computational and memory demands hinder deployment, especially on resource-constrained devices. This underscores the necessity of algorithm-hardware co-design specific to ViTs, aiming to optimize their performance by tailoring both the algorithmic structure and the underlying hardware accelerator to each other's strengths. Model quantization, by converting high-precision numbers to lower-precision, reduces the computational demands and memory needs of ViTs, allowing the creation of hardware specifically optimized for these quantized algorithms, boosting efficiency. This article provides a comprehensive survey of ViTs quantization and its hardware acceleration. We first delve into the unique architectural attributes of ViTs and their runtime characteristics. Subsequently, we examine the fundamental principles of model quantization, followed by a comparative analysis of the state-of-the-art quantization techniques for ViTs. Additionally, we explore the hardware acceleration of quantized ViTs, highlighting the importance of hardware-friendly algorithm design. In conclusion, this article will discuss ongoing challenges and future research paths. We consistently maintain the related open-source materials at https://github.com/DD-DuDa/awesome-vit-quantization-acceleration. | [
"['Dayou Du' 'Gu Gong' 'Xiaowen Chu']"
]
|
null | null | 2405.00318 | null | null | http://arxiv.org/pdf/2405.00318v2 | 2024-05-07T23:54:23Z | 2024-05-01T04:51:10Z | Covariant spatio-temporal receptive fields for neuromorphic computing | Biological nervous systems constitute important sources of inspiration towards computers that are faster, cheaper, and more energy efficient. Neuromorphic disciplines view the brain as a coevolved system, simultaneously optimizing the hardware and the algorithms running on it. There are clear efficiency gains when bringing the computations into a physical substrate, but we presently lack theories to guide efficient implementations. Here, we present a principled computational model for neuromorphic systems in terms of spatio-temporal receptive fields, based on affine Gaussian kernels over space and leaky-integrator and leaky integrate-and-fire models over time. Our theory is provably covariant to spatial affine and temporal scaling transformations, with close similarities to the visual processing in mammalian brains. We use these spatio-temporal receptive fields as a prior in an event-based vision task, and show that this improves the training of spiking networks, which is otherwise known to be problematic for event-based vision. This work combines efforts within scale-space theory and computational neuroscience to identify theoretically well-founded ways to process spatio-temporal signals in neuromorphic systems. Our contributions are immediately relevant for signal processing and event-based vision, and can be extended to other processing tasks over space and time, such as memory and control. | [
"['Jens Egholm Pedersen' 'Jörg Conradt' 'Tony Lindeberg']"
]
|
null | null | 2405.00319 | null | null | http://arxiv.org/pdf/2405.00319v1 | 2024-05-01T04:55:51Z | 2024-05-01T04:55:51Z | Data Augmentation Policy Search for Long-Term Forecasting | Data augmentation serves as a popular regularization technique to combat overfitting challenges in neural networks. While automatic augmentation has demonstrated success in image classification tasks, its application to time-series problems, particularly in long-term forecasting, has received comparatively less attention. To address this gap, we introduce a time-series automatic augmentation approach named TSAA, which is both efficient and easy to implement. The solution involves tackling the associated bilevel optimization problem through a two-step process: initially training a non-augmented model for a limited number of epochs, followed by an iterative split procedure. During this iterative process, we alternate between identifying a robust augmentation policy through Bayesian optimization and refining the model while discarding suboptimal runs. Extensive evaluations on challenging univariate and multivariate forecasting benchmark problems demonstrate that TSAA consistently outperforms several robust baselines, suggesting its potential integration into prediction pipelines. | [
"['Liran Nochumsohn' 'Omri Azencot']"
]
|
null | null | 2405.00332 | null | null | http://arxiv.org/pdf/2405.00332v3 | 2024-05-03T17:53:26Z | 2024-05-01T05:52:05Z | A Careful Examination of Large Language Model Performance on Grade
School Arithmetic | Large language models (LLMs) have achieved impressive success on many benchmarks for mathematical reasoning. However, there is growing concern that some of this performance actually reflects dataset contamination, where data closely resembling benchmark questions leaks into the training data, instead of true reasoning ability. To investigate this claim rigorously, we commission Grade School Math 1000 (GSM1k). GSM1k is designed to mirror the style and complexity of the established GSM8k benchmark, the gold standard for measuring elementary mathematical reasoning. We ensure that the two benchmarks are comparable across important metrics such as human solve rates, number of steps in solution, answer magnitude, and more. When evaluating leading open- and closed-source LLMs on GSM1k, we observe accuracy drops of up to 13%, with several families of models (e.g., Phi and Mistral) showing evidence of systematic overfitting across almost all model sizes. At the same time, many models, especially those on the frontier, (e.g., Gemini/GPT/Claude) show minimal signs of overfitting. Further analysis suggests a positive relationship (Spearman's r^2=0.32) between a model's probability of generating an example from GSM8k and its performance gap between GSM8k and GSM1k, suggesting that many models may have partially memorized GSM8k. | [
"['Hugh Zhang' 'Jeff Da' 'Dean Lee' 'Vaughn Robinson' 'Catherine Wu'\n 'Will Song' 'Tiffany Zhao' 'Pranav Raja' 'Dylan Slack' 'Qin Lyu'\n 'Sean Hendryx' 'Russell Kaplan' 'Michele Lunati' 'Summer Yue']"
]
|
null | null | 2405.00334 | null | null | http://arxiv.org/pdf/2405.00334v2 | 2024-07-15T10:07:56Z | 2024-05-01T05:54:33Z | A Survey on Deep Active Learning: Recent Advances and New Frontiers | Active learning seeks to achieve strong performance with fewer training samples. It does this by iteratively asking an oracle to label newly selected samples in a human-in-the-loop manner. This technique has gained increasing popularity due to its broad applicability, yet survey papers on it, especially for deep learning-based active learning (DAL), remain scarce. Therefore, we conduct an advanced and comprehensive survey on DAL. We first introduce the reviewed paper collection and filtering process. Second, we formally define the DAL task and summarize the most influential baselines and widely used datasets. Third, we systematically provide a taxonomy of DAL methods from five perspectives, including annotation types, query strategies, deep model architectures, learning paradigms, and training processes, and objectively analyze their strengths and weaknesses. Then, we comprehensively summarize the main applications of DAL in Natural Language Processing (NLP), Computer Vision (CV), and Data Mining (DM), etc. Finally, we discuss challenges and perspectives after a detailed analysis of current studies. This work aims to serve as a useful and quick guide for researchers in overcoming difficulties in DAL. We hope that this survey will spur further progress in this burgeoning field. | [
"['Dongyuan Li' 'Zhen Wang' 'Yankai Chen' 'Renhe Jiang' 'Weiping Ding'\n 'Manabu Okumura']"
]
|
null | null | 2405.00348 | null | null | http://arxiv.org/pdf/2405.00348v1 | 2024-05-01T06:41:27Z | 2024-05-01T06:41:27Z | Practical Dataset Distillation Based on Deep Support Vectors | Conventional dataset distillation requires significant computational resources and assumes access to the entire dataset, an assumption impractical as it presumes all data resides on a central server. In this paper, we focus on dataset distillation in practical scenarios with access to only a fraction of the entire dataset. We introduce a novel distillation method that augments the conventional process by incorporating general model knowledge via the addition of Deep KKT (DKKT) loss. In practical settings, our approach showed improved performance compared to the baseline distribution matching distillation method on the CIFAR-10 dataset. Additionally, we present experimental evidence that Deep Support Vectors (DSVs) offer unique information to the original distillation, and their integration results in enhanced performance. | [
"['Hyunho Lee' 'Junhoo Lee' 'Nojun Kwak']"
]
|
null | null | 2405.00349 | null | null | http://arxiv.org/pdf/2405.00349v2 | 2024-05-05T19:11:25Z | 2024-05-01T06:50:18Z | A Self-explaining Neural Architecture for Generalizable Concept Learning | With the wide proliferation of Deep Neural Networks in high-stake applications, there is a growing demand for explainability behind their decision-making process. Concept learning models attempt to learn high-level 'concepts' - abstract entities that align with human understanding, and thus provide interpretability to DNN architectures. However, in this paper, we demonstrate that present SOTA concept learning approaches suffer from two major problems - lack of concept fidelity wherein the models fail to learn consistent concepts among similar classes and limited concept interoperability wherein the models fail to generalize learned concepts to new domains for the same task. Keeping these in mind, we propose a novel self-explaining architecture for concept learning across domains which - i) incorporates a new concept saliency network for representative concept selection, ii) utilizes contrastive learning to capture representative domain invariant concepts, and iii) uses a novel prototype-based concept grounding regularization to improve concept alignment across domains. We demonstrate the efficacy of our proposed approach over current SOTA concept learning approaches on four widely used real-world datasets. Empirical results show that our method improves both concept fidelity measured through concept overlap and concept interoperability measured through domain adaptation performance. | [
"['Sanchit Sinha' 'Guangzhi Xiong' 'Aidong Zhang']"
]
|
null | null | 2405.00358 | null | null | http://arxiv.org/pdf/2405.00358v1 | 2024-05-01T07:27:04Z | 2024-05-01T07:27:04Z | Arbitrary Time Information Modeling via Polynomial Approximation for
Temporal Knowledge Graph Embedding | Distinguished from traditional knowledge graphs (KGs), temporal knowledge graphs (TKGs) must explore and reason over temporally evolving facts adequately. However, existing TKG approaches still face two main challenges, i.e., the limited capability to model arbitrary timestamps continuously and the lack of rich inference patterns under temporal constraints. In this paper, we propose an innovative TKGE method (PTBox) via polynomial decomposition-based temporal representation and box embedding-based entity representation to tackle the above-mentioned problems. Specifically, we decompose time information by polynomials and then enhance the model's capability to represent arbitrary timestamps flexibly by incorporating the learnable temporal basis tensor. In addition, we model every entity as a hyperrectangle box and define each relation as a transformation on the head and tail entity boxes. The entity boxes can capture complex geometric structures and learn robust representations, improving the model's inductive capability for rich inference patterns. Theoretically, our PTBox can encode arbitrary time information or even unseen timestamps while capturing rich inference patterns and higher-arity relations of the knowledge base. Extensive experiments on real-world datasets demonstrate the effectiveness of our method. | [
"['Zhiyu Fang' 'Jingyan Qin' 'Xiaobin Zhu' 'Chun Yang' 'Xu-Cheng Yin']"
]
|
null | null | 2405.00385 | null | null | http://arxiv.org/pdf/2405.00385v1 | 2024-05-01T08:36:13Z | 2024-05-01T08:36:13Z | Variational Bayesian Methods for a Tree-Structured Stick-Breaking
Process Mixture of Gaussians | The Bayes coding algorithm for context tree sources is a successful example of Bayesian tree estimation in text compression in information theory. This algorithm provides an efficient parametric representation of the posterior tree distribution and exact updating of its parameters. We apply this algorithm to a clustering task in machine learning. More specifically, we apply it to Bayesian estimation of tree-structured stick-breaking process (TS-SBP) mixture models. For TS-SBP mixture models, only Markov chain Monte Carlo methods have been proposed so far, but no variational Bayesian methods have been proposed yet. In this paper, we propose a variational Bayesian method that has a subroutine similar to the Bayes coding algorithm for context tree sources. We confirm its behavior by a numerical experiment on a toy example. | [
"['Yuta Nakahara']"
]
|
null | null | 2405.00387 | null | null | http://arxiv.org/pdf/2405.00387v1 | 2024-05-01T08:38:07Z | 2024-05-01T08:38:07Z | Cell Switching in HAPS-Aided Networking: How the Obscurity of Traffic
Loads Affects the Decision | This study introduces the cell load estimation problem for cell switching approaches in cellular networks, presented specifically in a high-altitude platform station (HAPS)-assisted network. The problem arises from the fact that the traffic loads of sleeping base stations for the next time slot cannot be perfectly known, but they can rather be estimated, and any estimation error could result in divergence from the optimal decision, which subsequently affects the performance of energy efficiency. The traffic loads of the sleeping base stations for the next time slot are required because the switching decisions are made proactively in the current time slot. Two different Q-learning algorithms are developed; one is full-scale, focusing solely on the performance, while the other one is lightweight and addresses the computational cost. Results confirm that the estimation error is capable of changing cell switching decisions that yield performance divergence compared to no-error scenarios. Moreover, the developed Q-learning algorithms perform well since an insignificant difference (i.e., 0.3%) is observed between them and the optimum algorithm. | [
"['Berk Çiloğlu' 'Görkem Berkay Koç' 'Metin Ozturk' 'Halim Yanikomeroglu']"
]
|
null | null | 2405.00389 | null | null | http://arxiv.org/pdf/2405.00389v1 | 2024-05-01T08:42:22Z | 2024-05-01T08:42:22Z | Employing Federated Learning for Training Autonomous HVAC Systems | Buildings account for 40 % of global energy consumption. A considerable portion of building energy consumption stems from heating, ventilation, and air conditioning (HVAC), and thus implementing smart, energy-efficient HVAC systems has the potential to significantly impact the course of climate change. In recent years, model-free reinforcement learning algorithms have been increasingly assessed for this purpose due to their ability to learn and adapt purely from experience. They have been shown to outperform classical controllers in terms of energy cost and consumption, as well as thermal comfort. However, their weakness lies in their relatively poor data efficiency, requiring long periods of training to reach acceptable policies, making them inapplicable to real-world controllers directly. Hence, common research goals are to improve the learning speed, as well as to improve their ability to generalize, in order to facilitate transfer learning to unseen building environments. In this paper, we take a federated learning approach to training the reinforcement learning controller of an HVAC system. A global control policy is learned by aggregating local policies trained on multiple data centers located in different climate zones. The goal of the policy is to simultaneously minimize energy consumption and maximize thermal comfort. The federated optimization strategy indirectly increases both the rate at which experience data is collected and the variation in the data. We demonstrate through experimental evaluation that these effects lead to a faster learning speed, as well as greater generalization capabilities in the federated policy compared to any individually trained policy. | [
"['Fredrik Hagström' 'Vikas Garg' 'Fabricio Oliveira']"
]
|
null | null | 2405.00394 | null | null | http://arxiv.org/pdf/2405.00394v1 | 2024-05-01T08:49:22Z | 2024-05-01T08:49:22Z | Enhancing Mutual Trustworthiness in Federated Learning for Data-Rich
Smart Cities | Federated learning is a promising collaborative and privacy-preserving machine learning approach in data-rich smart cities. Nevertheless, the inherent heterogeneity of these urban environments presents a significant challenge in selecting trustworthy clients for collaborative model training. The usage of traditional approaches, such as the random client selection technique, poses several threats to the system's integrity due to the possibility of malicious client selection. Primarily, the existing literature focuses on assessing the trustworthiness of clients, neglecting the crucial aspect of trust in federated servers. To bridge this gap, in this work, we propose a novel framework that addresses the mutual trustworthiness in federated learning by considering the trust needs of both the client and the server. Our approach entails: (1) Creating preference functions for servers and clients, allowing them to rank each other based on trust scores, (2) Establishing a reputation-based recommendation system leveraging multiple clients to assess newly connected servers, (3) Assigning credibility scores to recommending devices for better server trustworthiness measurement, (4) Developing a trust assessment mechanism for smart devices using a statistical Interquartile Range (IQR) method, (5) Designing intelligent matching algorithms considering the preferences of both parties. Based on simulation and experimental results, our approach outperforms baseline methods by increasing trust levels, global model accuracy, and reducing non-trustworthy clients in the system. | [
"['Osama Wehbi' 'Sarhad Arisdakessian' 'Mohsen Guizani' 'Omar Abdel Wahab'\n 'Azzam Mourad' 'Hadi Otrok' 'Hoda Al khzaimi' 'Bassem Ouni']"
]
|
null | null | 2405.00410 | null | null | http://arxiv.org/pdf/2405.00410v2 | 2024-05-16T14:11:46Z | 2024-05-01T09:34:42Z | UCB-driven Utility Function Search for Multi-objective Reinforcement
Learning | In Multi-objective Reinforcement Learning (MORL) agents are tasked with optimising decision-making behaviours that trade-off between multiple, possibly conflicting, objectives. MORL based on decomposition is a family of solution methods that employ a number of utility functions to decompose the multi-objective problem into individual single-objective problems solved simultaneously in order to approximate a Pareto front of policies. We focus on the case of linear utility functions parameterised by weight vectors w. We introduce a method based on Upper Confidence Bound to efficiently search for the most promising weight vectors during different stages of the learning process, with the aim of maximising the hypervolume of the resulting Pareto front. The proposed method is shown to outperform various MORL baselines on Mujoco benchmark problems across different random seeds. The code is online at: https://github.com/SYCAMORE-1/ucb-MOPPO. | [
"['Yucheng Shi' 'Alexandros Agapitos' 'David Lynch' 'Giorgio Cruciata'\n 'Cengis Hasan' 'Hao Wang' 'Yayu Yao' 'Aleksandar Milenovic']"
]
|
null | null | 2405.00417 | null | null | http://arxiv.org/pdf/2405.00417v1 | 2024-05-01T09:55:31Z | 2024-05-01T09:55:31Z | Conformal Risk Control for Ordinal Classification | As a natural extension to the standard conformal prediction method, several conformal risk control methods have been recently developed and applied to various learning problems. In this work, we seek to control the conformal risk in expectation for ordinal classification tasks, which have broad applications to many real problems. For this purpose, we firstly formulated the ordinal classification task in the conformal risk control framework, and provided theoretic risk bounds of the risk control method. Then we proposed two types of loss functions specially designed for ordinal classification tasks, and developed corresponding algorithms to determine the prediction set for each case to control their risks at a desired level. We demonstrated the effectiveness of our proposed methods, and analyzed the difference between the two types of risks on three different datasets, including a simulated dataset, the UTKFace dataset and the diabetic retinopathy detection dataset. | [
"['Yunpeng Xu' 'Wenge Guo' 'Zhi Wei']"
]
|
null | null | 2405.00420 | null | null | http://arxiv.org/pdf/2405.00420v1 | 2024-05-01T09:58:57Z | 2024-05-01T09:58:57Z | Self-supervised Pre-training of Text Recognizers | In this paper, we investigate self-supervised pre-training methods for document text recognition. Nowadays, large unlabeled datasets can be collected for many research tasks, including text recognition, but it is costly to annotate them. Therefore, methods utilizing unlabeled data are being investigated. We study self-supervised pre-training methods based on masked label prediction using three different approaches -- Feature Quantization, VQ-VAE, and Post-Quantized AE. We also investigate joint-embedding approaches with VICReg and NT-Xent objectives, for which we propose an image shifting technique to prevent model collapse, in which the model relies solely on positional encoding while completely ignoring the input image. We perform our experiments on historical handwritten (Bentham) and historical printed datasets mainly to investigate the benefits of the self-supervised pre-training techniques with different amounts of annotated target domain data. We use transfer learning as strong baselines. The evaluation shows that the self-supervised pre-training on data from the target domain is very effective, but it struggles to outperform transfer learning from closely related domains. This paper is one of the first studies exploring self-supervised pre-training in document text recognition, and we believe that it will become a cornerstone for future research in this area. We made our implementation of the investigated methods publicly available at https://github.com/DCGM/pero-pretraining. | [
"['Martin Kišš' 'Michal Hradiš']"
]
|
null | null | 2405.00433 | null | null | http://arxiv.org/pdf/2405.00433v1 | 2024-05-01T10:33:36Z | 2024-05-01T10:33:36Z | Weight Sparsity Complements Activity Sparsity in Neuromorphic Language
Models | Activity and parameter sparsity are two standard methods of making neural networks computationally more efficient. Event-based architectures such as spiking neural networks (SNNs) naturally exhibit activity sparsity, and many methods exist to sparsify their connectivity by pruning weights. While the effect of weight pruning on feed-forward SNNs has been previously studied for computer vision tasks, the effects of pruning for complex sequence tasks like language modeling are less well studied since SNNs have traditionally struggled to achieve meaningful performance on these tasks. Using a recently published SNN-like architecture that works well on small-scale language modeling, we study the effects of weight pruning when combined with activity sparsity. Specifically, we study the trade-off between the multiplicative efficiency gains the combination affords and its effect on task performance for language modeling. To dissect the effects of the two sparsities, we conduct a comparative analysis between densely activated models and sparsely activated event-based models across varying degrees of connectivity sparsity. We demonstrate that sparse activity and sparse connectivity complement each other without a proportional drop in task performance for an event-based neural network trained on the Penn Treebank and WikiText-2 language modeling datasets. Our results suggest sparsely connected event-based neural networks are promising candidates for effective and efficient sequence modeling. | [
"['Rishav Mukherji' 'Mark Schöne' 'Khaleelulla Khan Nazeer'\n 'Christian Mayr' 'David Kappel' 'Anand Subramoney']"
]
|
null | null | 2405.00438 | null | null | http://arxiv.org/pdf/2405.00438v1 | 2024-05-01T10:43:55Z | 2024-05-01T10:43:55Z | MetaRM: Shifted Distributions Alignment via Meta-Learning | The success of Reinforcement Learning from Human Feedback (RLHF) in language model alignment is critically dependent on the capability of the reward model (RM). However, as the training process progresses, the output distribution of the policy model shifts, leading to the RM's reduced ability to distinguish between responses. This issue is further compounded when the RM, trained on a specific data distribution, struggles to generalize to examples outside of that distribution. These two issues can be viewed jointly as a single challenge posed by the shifted distribution of the environment. To surmount this challenge, we introduce MetaRM, a method leveraging meta-learning to align the RM with the shifted environment distribution. MetaRM is designed to train the RM by minimizing the data loss, particularly for data that can improve its ability to differentiate examples from the shifted target distribution. Extensive experiments demonstrate that MetaRM significantly improves the RM's distinguishing ability in iterative RLHF optimization, and also provides the capacity to identify subtle differences in out-of-distribution samples. | [
"['Shihan Dou' 'Yan Liu' 'Enyu Zhou' 'Tianlong Li' 'Haoxiang Jia'\n 'Limao Xiong' 'Xin Zhao' 'Junjie Ye' 'Rui Zheng' 'Tao Gui' 'Qi Zhang'\n 'Xuanjing Huang']"
]
|
null | null | 2405.00442 | null | null | http://arxiv.org/pdf/2405.00442v1 | 2024-05-01T10:53:54Z | 2024-05-01T10:53:54Z | Geometric Insights into Focal Loss: Reducing Curvature for Enhanced
Model Calibration | The key factor in implementing machine learning algorithms in decision-making situations is not only the accuracy of the model but also its confidence level. The confidence level of a model in a classification problem is often given by the output vector of a softmax function for convenience. However, these values are known to deviate significantly from the actual expected model confidence. This problem is called model calibration and has been studied extensively. One of the simplest techniques to tackle this task is focal loss, a generalization of cross-entropy obtained by introducing one positive parameter. Although many related studies exist because of the simplicity of the idea and its formalization, the theoretical analysis of its behavior is still insufficient. In this study, our objective is to understand the behavior of focal loss by reinterpreting this function geometrically. Our analysis suggests that focal loss reduces the curvature of the loss surface in training the model. This indicates that curvature may be one of the essential factors in achieving model calibration. We design numerical experiments to support this conjecture, revealing the behavior of focal loss and the relationship between calibration performance and curvature. | [
"['Masanari Kimura' 'Hiroki Naganuma']"
]
|
null | null | 2405.00449 | null | null | http://arxiv.org/pdf/2405.00449v1 | 2024-05-01T11:06:31Z | 2024-05-01T11:06:31Z | RAG-based Explainable Prediction of Road Users Behaviors for Automated
Driving using Knowledge Graphs and Large Language Models | Prediction of road users' behaviors in the context of autonomous driving has gained considerable attention from the scientific community in recent years. Most works focus on predicting behaviors based on kinematic information alone, a simplification of reality since road users are humans, and as such they are highly influenced by their surrounding context. In addition, a plethora of research works rely on powerful Deep Learning techniques, which exhibit high performance metrics in prediction tasks but may lack the ability to fully understand and exploit the contextual semantic information contained in the road scene, not to mention their inability to provide explainable predictions that can be understood by humans. In this work, we propose an explainable road users' behavior prediction system that integrates the reasoning abilities of Knowledge Graphs (KG) and the expressiveness capabilities of Large Language Models (LLM) by using Retrieval Augmented Generation (RAG) techniques. For that purpose, Knowledge Graph Embeddings (KGE) and Bayesian inference are combined to allow the deployment of a fully inductive reasoning system that enables the issuing of predictions that rely on legacy information contained in the graph as well as on current evidence gathered in real time by onboard sensors. Two use cases have been implemented following the proposed approach: 1) Prediction of pedestrians' crossing actions; 2) Prediction of lane change maneuvers. In both cases, the performance attained surpasses the current state of the art in terms of anticipation and F1-score, showing a promising avenue for future research in this field. | [
"['Mohamed Manzour Hussien' 'Angie Nataly Melo' 'Augusto Luis Ballardini'\n 'Carlota Salinas Maldonado' 'Rubén Izquierdo' 'Miguel Ángel Sotelo']"
]
|
null | null | 2405.00451 | null | null | http://arxiv.org/pdf/2405.00451v2 | 2024-06-17T22:11:49Z | 2024-05-01T11:10:24Z | Monte Carlo Tree Search Boosts Reasoning via Iterative Preference
Learning | We introduce an approach aimed at enhancing the reasoning capabilities of Large Language Models (LLMs) through an iterative preference learning process inspired by the successful strategy employed by AlphaZero. Our work leverages Monte Carlo Tree Search (MCTS) to iteratively collect preference data, utilizing its look-ahead ability to break down instance-level rewards into more granular step-level signals. To enhance consistency in intermediate steps, we combine outcome validation and stepwise self-evaluation, continually updating the quality assessment of newly generated data. The proposed algorithm employs Direct Preference Optimization (DPO) to update the LLM policy using this newly generated step-level preference data. Theoretical analysis reveals the importance of using on-policy sampled data for successful self-improvement. Extensive evaluations on various arithmetic and commonsense reasoning tasks demonstrate remarkable performance improvements over existing models. For instance, our approach outperforms the Mistral-7B Supervised Fine-Tuning (SFT) baseline on GSM8K, MATH, and ARC-C, with substantial increases in accuracy to 81.8% (+5.9%), 34.7% (+5.8%), and 76.4% (+15.8%), respectively. Additionally, our research delves into the training and inference compute tradeoff, providing insights into how our method effectively maximizes performance gains. Our code is publicly available at https://github.com/YuxiXie/MCTS-DPO. | [
"['Yuxi Xie' 'Anirudh Goyal' 'Wenyue Zheng' 'Min-Yen Kan'\n 'Timothy P. Lillicrap' 'Kenji Kawaguchi' 'Michael Shieh']"
]
|
null | null | 2405.00454 | null | null | http://arxiv.org/pdf/2405.00454v1 | 2024-05-01T11:16:02Z | 2024-05-01T11:16:02Z | Robust Semi-supervised Learning via $f$-Divergence and $α$-Rényi
Divergence | This paper investigates a range of empirical risk functions and regularization methods suitable for self-training methods in semi-supervised learning. These approaches draw inspiration from various divergence measures, such as $f$-divergences and $\alpha$-Rényi divergences. Building on the theoretical foundations of these divergences, we also provide valuable insights to enhance the understanding of our empirical risk functions and regularization techniques. In pseudo-labeling and entropy minimization, two self-training techniques for effective semi-supervised learning, the self-training process suffers from an inherent mismatch between the true labels and the pseudo-labels (noisy pseudo-labels), and some of our empirical risk functions are robust to such noisy pseudo-labels. Under some conditions, our empirical risk functions demonstrate better performance when compared to traditional self-training methods. | [
"['Gholamali Aminian' 'Amirhossien Bagheri' 'Mahyar JafariNodeh'\n 'Radmehr Karimian' 'Mohammad-Hossein Yassaee']"
]
|
null | null | 2405.00456 | null | null | http://arxiv.org/pdf/2405.00456v1 | 2024-05-01T11:26:31Z | 2024-05-01T11:26:31Z | Counterfactual Explanations for Deep Learning-Based Traffic Forecasting | Deep learning models are widely used in traffic forecasting and have achieved state-of-the-art prediction accuracy. However, the black-box nature of those models makes the results difficult to interpret by users. This study aims to leverage an Explainable AI approach, counterfactual explanations, to enhance the explainability and usability of deep learning-based traffic forecasting models. Specifically, the goal is to elucidate relationships between various input contextual features and their corresponding predictions. We present a comprehensive framework that generates counterfactual explanations for traffic forecasting and provides usable insights through the proposed scenario-driven counterfactual explanations. The study first implements a deep learning model to predict traffic speed based on historical traffic data and contextual variables. Counterfactual explanations are then used to illuminate how alterations in these input variables affect predicted outcomes, thereby enhancing the transparency of the deep learning model. We investigated the impact of contextual features on traffic speed prediction under varying spatial and temporal conditions. The scenario-driven counterfactual explanations integrate two types of user-defined constraints, directional and weighting constraints, to tailor the search for counterfactual explanations to specific use cases. These tailored explanations benefit machine learning practitioners who aim to understand the model's learning mechanisms and domain experts who seek insights for real-world applications. 
The results showcase the effectiveness of counterfactual explanations in revealing traffic patterns learned by deep learning models, showing their potential for interpreting black-box deep learning models used for spatiotemporal predictions in general. | [
"['Rushan Wang' 'Yanan Xin' 'Yatao Zhang' 'Fernando Perez-Cruz'\n 'Martin Raubal']"
]
|
null | null | 2405.00476 | null | null | http://arxiv.org/pdf/2405.00476v1 | 2024-05-01T12:23:16Z | 2024-05-01T12:23:16Z | A Comprehensive Survey of Dynamic Graph Neural Networks: Models,
Frameworks, Benchmarks, Experiments and Challenges | Dynamic Graph Neural Networks (GNNs) combine temporal information with GNNs to capture structural, temporal, and contextual relationships in dynamic graphs simultaneously, leading to enhanced performance in various applications. As the demand for dynamic GNNs continues to grow, numerous models and frameworks have emerged to cater to different application needs. There is a pressing need for a comprehensive survey that evaluates the performance, strengths, and limitations of various approaches in this domain. This paper aims to fill this gap by offering a thorough comparative analysis and experimental evaluation of dynamic GNNs. It covers 81 dynamic GNN models with a novel taxonomy, 12 dynamic GNN training frameworks, and commonly used benchmarks. We also report experimental results from testing nine representative dynamic GNN models and three frameworks on six standard graph datasets. Evaluation metrics focus on convergence accuracy, training efficiency, and GPU memory usage, enabling a thorough comparison of performance across various models and frameworks. From the analysis and evaluation results, we identify key challenges and offer principles for future research to enhance the design of models and frameworks in the dynamic GNNs field. | [
"['ZhengZhao Feng' 'Rui Wang' 'TianXing Wang' 'Mingli Song' 'Sai Wu'\n 'Shuibing He']"
]
|
null | null | 2405.00482 | null | null | http://arxiv.org/pdf/2405.00482v1 | 2024-05-01T12:46:57Z | 2024-05-01T12:46:57Z | PackVFL: Efficient HE Packing for Vertical Federated Learning | As an essential tool of secure distributed machine learning, vertical federated learning (VFL) based on homomorphic encryption (HE) suffers from severe efficiency problems due to data inflation and time-consuming operations. To this end, we propose PackVFL, an efficient VFL framework based on packed HE (PackedHE), to accelerate the existing HE-based VFL algorithms. PackVFL packs multiple cleartexts into one ciphertext and supports single-instruction-multiple-data (SIMD)-style parallelism. We focus on designing a high-performance matrix multiplication (MatMult) method since it takes up most of the ciphertext computation time in HE-based VFL. Besides, devising the MatMult method is also challenging for PackedHE because a slight difference in the packing scheme can substantially affect its computation and communication costs. Without a domain-specific design, directly applying SOTA MatMult methods can hardly achieve optimal performance. Therefore, we make a three-fold design: 1) we systematically explore the current design space of MatMult and quantify the complexity of existing approaches to provide guidance; 2) we propose a hybrid MatMult method according to the unique characteristics of VFL; 3) we adaptively apply our hybrid method in representative VFL algorithms, leveraging distinctive algorithmic properties to further improve efficiency. As the batch size, feature dimension and model size of VFL scale up to large sizes, PackVFL consistently delivers enhanced performance. Empirically, PackVFL propels existing VFL algorithms to new heights, achieving up to a 51.52X end-to-end speedup. This represents a substantial 34.51X greater speedup compared to the direct application of SOTA MatMult methods. | [
"['Liu Yang' 'Shuowei Cai' 'Di Chai' 'Junxue Zhang' 'Han Tian' 'Yilun Jin'\n 'Kun Guo' 'Kai Chen' 'Qiang Yang']"
]
|
null | null | 2405.00489 | null | null | http://arxiv.org/pdf/2405.00489v1 | 2024-05-01T12:56:14Z | 2024-05-01T12:56:14Z | Explainable Automatic Grading with Neural Additive Models | The use of automatic short answer grading (ASAG) models may help alleviate the time burden of grading while encouraging educators to frequently incorporate open-ended items in their curriculum. However, current state-of-the-art ASAG models are large neural networks (NN) often described as "black box", providing no explanation for which characteristics of an input are important for the produced output. This inexplicable nature can be frustrating to teachers and students when trying to interpret or learn from an automatically generated grade. To create a powerful yet intelligible ASAG model, we experiment with a type of model called a Neural Additive Model (NAM), which combines the performance of an NN with the explainability of an additive model. We use a Knowledge Integration (KI) framework from the learning sciences to guide feature engineering to create inputs that reflect whether a student includes certain ideas in their response. We hypothesize that indicating the inclusion (or exclusion) of predefined ideas as features will be sufficient for the NAM to have good predictive power and interpretability, as this may guide a human scorer using a KI rubric. We compare the performance of the NAM with another explainable model, logistic regression, using the same features, and with a non-explainable neural model, DeBERTa, that does not require feature engineering. | [
"['Aubrey Condor' 'Zachary Pardos']"
]
|
null | null | 2405.00491 | null | null | http://arxiv.org/pdf/2405.00491v1 | 2024-05-01T12:57:14Z | 2024-05-01T12:57:14Z | On the Relevance of Byzantine Robust Optimization Against Data Poisoning | The success of machine learning (ML) has been intimately linked with the availability of large amounts of data, typically collected from heterogeneous sources and processed on vast networks of computing devices (also called workers). Beyond accuracy, the use of ML in critical domains such as healthcare and autonomous driving calls for robustness against data poisoning and some faulty workers. The problem of Byzantine ML formalizes these robustness issues by considering a distributed ML environment in which workers (storing a portion of the global dataset) can deviate arbitrarily from the prescribed algorithm. Although the problem has attracted a lot of attention from a theoretical point of view, its practical importance for addressing realistic faults (where the behavior of any worker is locally constrained) remains unclear. It has been argued that the seemingly weaker threat model where only workers' local datasets get poisoned is more reasonable. We prove that, while tolerating a wider range of faulty behaviors, Byzantine ML yields solutions that are, in a precise sense, optimal even under the weaker data poisoning threat model. Then, we study a generic data poisoning model wherein some workers have fully-poisonous local data, i.e., their datasets are entirely corruptible, and the remainders have partially-poisonous local data, i.e., only a fraction of their local datasets is corruptible. We prove that Byzantine-robust schemes yield optimal solutions against both these forms of data poisoning, and that the former is more harmful when workers have heterogeneous local data. | [
"['Sadegh Farhadkhani' 'Rachid Guerraoui' 'Nirupam Gupta' 'Rafael Pinot']"
]
|
null | null | 2405.00505 | null | null | http://arxiv.org/pdf/2405.00505v1 | 2024-05-01T13:37:27Z | 2024-05-01T13:37:27Z | KVP10k: A Comprehensive Dataset for Key-Value Pair Extraction in
Business Documents | In recent years, the challenge of extracting information from business documents has emerged as a critical task, finding applications across numerous domains. This effort has attracted substantial interest from both industry and academia, highlighting its significance in the current technological landscape. Most datasets in this area are primarily focused on Key Information Extraction (KIE), where the extraction process revolves around extracting information using a specific, predefined set of keys. Unlike most existing datasets and benchmarks, our focus is on discovering key-value pairs (KVPs) without relying on predefined keys, navigating through an array of diverse templates and complex layouts. This task presents unique challenges, primarily due to the absence of comprehensive datasets and benchmarks tailored for non-predetermined KVP extraction. To address this gap, we introduce KVP10k, a new dataset and benchmark specifically designed for KVP extraction. The dataset contains 10707 richly annotated images. In our benchmark, we also introduce a new challenging task that combines elements of KIE as well as KVP in a single task. KVP10k sets itself apart with its extensive diversity in data and richly detailed annotations, paving the way for advancements in the field of information extraction from complex business documents. | [
"['Oshri Naparstek' 'Roi Pony' 'Inbar Shapira' 'Foad Abo Dahood'\n 'Ophir Azulai' 'Yevgeny Yaroker' 'Nadav Rubinstein' 'Maksym Lysak'\n 'Peter Staar' 'Ahmed Nassar' 'Nikolaos Livathinos' 'Christoph Auer'\n 'Elad Amrani' 'Idan Friedman' 'Orit Prince' 'Yevgeny Burshtein'\n 'Adi Raz Goldfarb' 'Udi Barzelay']"
]
|
null | null | 2405.00516 | null | null | http://arxiv.org/abs/2405.00516v1 | 2024-05-01T13:51:45Z | 2024-05-01T13:51:45Z | Navigating WebAI: Training Agents to Complete Web Tasks with Large
Language Models and Reinforcement Learning | Recent advancements in language models have demonstrated remarkable improvements in various natural language processing (NLP) tasks such as web navigation. Supervised learning (SL) approaches have achieved impressive performance while utilizing significantly less training data compared to previous methods. However, these SL-based models fall short when compared to reinforcement learning (RL) approaches, which have shown superior results. In this paper, we propose a novel approach that combines SL and RL techniques over the MiniWoB benchmark to leverage the strengths of both methods. We also address a critical limitation in previous models' understanding of HTML content, revealing a tendency to memorize target elements rather than comprehend the underlying structure. To rectify this, we propose methods to enhance true understanding and present a new baseline of results. Our experiments demonstrate that our approach outperforms previous SL methods on certain tasks using less data and narrows the performance gap with RL models, achieving 43.58% average accuracy in SL and 36.69% when combined with a multimodal RL approach. This study sets a new direction for future web navigation and offers insights into the limitations and potential of language modeling for computer tasks. | [
"['Lucas-Andreï Thil' 'Mirela Popa' 'Gerasimos Spanakis']"
]
|
null | null | 2405.00524 | null | null | http://arxiv.org/pdf/2405.00524v1 | 2024-05-01T13:58:28Z | 2024-05-01T13:58:28Z | FMLFS: A federated multi-label feature selection based on information
theory in IoT environment | In certain emerging applications such as health monitoring wearables and traffic monitoring systems, Internet-of-Things (IoT) devices generate or collect huge amounts of multi-label data. Within these datasets, each instance is linked to a set of labels. The presence of noisy, redundant, or irrelevant features in these datasets, along with the curse of dimensionality, poses challenges for multi-label classifiers. Feature selection (FS) proves to be an effective strategy in enhancing classifier performance and addressing these challenges. Yet, there is currently no distributed multi-label FS method documented in the literature that is suitable for distributed multi-label datasets within IoT environments. This paper introduces FMLFS, the first federated multi-label feature selection method. Here, mutual information between features and labels serves as the relevancy metric, while the correlation distance between features, derived from mutual information and joint entropy, is utilized as the redundancy measure. Following aggregation of these metrics on the edge server and employing Pareto-based bi-objective and crowding distance strategies, the sorted features are subsequently sent back to the IoT devices. The proposed method is evaluated through two scenarios: 1) transmitting reduced-size datasets to the edge server for centralized classifier usage, and 2) employing federated learning with reduced-size datasets. Evaluation across three metrics - performance, time complexity, and communication cost - demonstrates that FMLFS outperforms five other comparable methods in the literature and provides a good trade-off on three real-world datasets. | [
"['Afsaneh Mahanipour' 'Hana Khamfroush']"
]
|
null | null | 2405.00532 | null | null | http://arxiv.org/pdf/2405.00532v3 | 2024-07-03T06:34:31Z | 2024-05-01T14:05:52Z | ULLER: A Unified Language for Learning and Reasoning | The field of neuro-symbolic artificial intelligence (NeSy), which combines learning and reasoning, has recently experienced significant growth. There is now a wide variety of NeSy frameworks, each with its own specific language for expressing background knowledge and how to relate it to neural networks. This heterogeneity hinders accessibility for newcomers and makes comparing different NeSy frameworks challenging. We propose a unified language for NeSy, which we call ULLER, a Unified Language for LEarning and Reasoning. ULLER encompasses a wide variety of settings, while ensuring that knowledge described in it can be used in existing NeSy systems. ULLER has a neuro-symbolic first-order syntax for which we provide example semantics including classical, fuzzy, and probabilistic logics. We believe ULLER is a first step towards making NeSy research more accessible and comparable, paving the way for libraries that streamline training and evaluation across a multitude of semantics, knowledge bases, and NeSy systems. | [
"['Emile van Krieken' 'Samy Badreddine' 'Robin Manhaeve'\n 'Eleonora Giunchiglia']"
]
|
null | null | 2405.00555 | null | null | http://arxiv.org/pdf/2405.00555v1 | 2024-05-01T14:57:59Z | 2024-05-01T14:57:59Z | Derivative-based regularization for regression | In this work, we introduce a novel approach to regularization in multivariable regression problems. Our regularizer, called DLoss, penalises differences between the model's derivatives and derivatives of the data generating function as estimated from the training data. We call these estimated derivatives data derivatives. The goal of our method is to align the model to the data, not only in terms of target values but also in terms of the derivatives involved. To estimate data derivatives, we select (from the training data) 2-tuples of input-value pairs, using either nearest neighbour or random selection. On synthetic and real datasets, we evaluate the effectiveness of adding DLoss, with different weights, to the standard mean squared error loss. The experimental results show that with DLoss (using nearest neighbour selection) we obtain, on average, the best rank with respect to MSE on validation data sets, compared to no regularization, L2 regularization, and Dropout. | [
"['Enrico Lopedoto' 'Maksim Shekhunov' 'Vitaly Aksenov' 'Kizito Salako'\n 'Tillman Weyde']"
]
|
null | null | 2405.00556 | null | null | http://arxiv.org/pdf/2405.00556v1 | 2024-05-01T14:59:24Z | 2024-05-01T14:59:24Z | Swarm Learning: A Survey of Concepts, Applications, and Trends | Deep learning models have raised privacy and security concerns due to their reliance on large datasets on central servers. As the number of Internet of Things (IoT) devices increases, artificial intelligence (AI) will be crucial for resource management, data processing, and knowledge acquisition. To address those issues, federated learning (FL) has introduced a novel approach to building a versatile, large-scale machine learning framework that operates in a decentralized and hardware-agnostic manner. However, FL faces network bandwidth limitations and data breaches. To reduce the central dependency in FL and increase scalability, swarm learning (SL) has been proposed in collaboration with Hewlett Packard Enterprise (HPE). SL represents a decentralized machine learning framework that leverages blockchain technology for secure, scalable, and private data management. A blockchain-based network enables the exchange and aggregation of model parameters among participants, thus mitigating the risk of a single point of failure and eliminating communication bottlenecks. To the best of our knowledge, this survey is the first to introduce the principles of Swarm Learning, its architectural design, and its fields of application. In addition, it highlights numerous research avenues that require further exploration by academic and industry communities to unlock the full potential and applications of SL. | [
"['Elham Shammar' 'Xiaohui Cui' 'Mohammed A. A. Al-qaness']"
]
|
null | null | 2405.00570 | null | null | http://arxiv.org/pdf/2405.00570v1 | 2024-05-01T15:19:19Z | 2024-05-01T15:19:19Z | WEST GCN-LSTM: Weighted Stacked Spatio-Temporal Graph Neural Networks
for Regional Traffic Forecasting | Regional traffic forecasting is a critical challenge in urban mobility, with applications to various fields such as the Internet of Everything. In recent years, spatio-temporal graph neural networks have achieved state-of-the-art results in the context of numerous traffic forecasting challenges. This work aims at expanding upon the conventional spatio-temporal graph neural network architectures in a manner that may facilitate the inclusion of information regarding the examined regions, as well as the populations that traverse them, in order to establish a more efficient prediction model. The end-product of this scientific endeavour is a novel spatio-temporal graph neural network architecture that is referred to as WEST (WEighted STacked) GCN-LSTM. Furthermore, the inclusion of the aforementioned information is conducted via the use of two novel dedicated algorithms that are referred to as the Shared Borders Policy and the Adjustable Hops Policy. Through information fusion and distillation, the proposed solution manages to significantly outperform its competitors in the frame of an experimental evaluation that consists of 19 forecasting models, across several datasets. Finally, an additional ablation study determined that each of the components of the proposed solution contributes towards enhancing its overall performance. | [
"['Theodoros Theodoropoulos' 'Angelos-Christos Maroudis' 'Antonios Makris'\n 'Konstantinos Tserpes']"
]
|
null | null | 2405.00577 | null | null | http://arxiv.org/pdf/2405.00577v1 | 2024-05-01T15:29:55Z | 2024-05-01T15:29:55Z | Discovering robust biomarkers of neurological disorders from functional
MRI using graph neural networks: A Review | Graph neural networks (GNN) have emerged as a popular tool for modelling functional magnetic resonance imaging (fMRI) datasets. Many recent studies have reported significant improvements in disorder classification performance via more sophisticated GNN designs and highlighted salient features that could be potential biomarkers of the disorder. In this review, we provide an overview of how GNN and model explainability techniques have been applied on fMRI datasets for disorder prediction tasks, with a particular emphasis on the robustness of biomarkers produced for neurodegenerative diseases and neuropsychiatric disorders. We found that while most studies have performant models, salient features highlighted in these studies vary greatly across studies on the same disorder and little has been done to evaluate their robustness. To address these issues, we suggest establishing new standards that are based on objective evaluation metrics to determine the robustness of these potential biomarkers. We further highlight gaps in the existing literature and put together a prediction-attribution-evaluation framework that could set the foundations for future research on improving the robustness of potential biomarkers discovered via GNNs. | [
"['Yi Hao Chan' 'Deepank Girish' 'Sukrit Gupta' 'Jing Xia'\n 'Chockalingam Kasi' 'Yinan He' 'Conghao Wang' 'Jagath C. Rajapakse']"
]
|
null | null | 2405.00588 | null | null | http://arxiv.org/pdf/2405.00588v1 | 2024-05-01T15:51:15Z | 2024-05-01T15:51:15Z | Are Models Biased on Text without Gender-related Language? | Gender bias research has been pivotal in revealing undesirable behaviors in large language models, exposing serious gender stereotypes associated with occupations and emotions. A key observation in prior work is that models reinforce stereotypes as a consequence of the gendered correlations that are present in the training data. In this paper, we focus on bias where the effect from training data is unclear, and instead address the question: Do language models still exhibit gender bias in non-stereotypical settings? To do so, we introduce UnStereoEval (USE), a novel framework tailored for investigating gender bias in stereotype-free scenarios. USE defines a sentence-level score based on pretraining data statistics to determine if the sentence contains minimal word-gender associations. To systematically benchmark the fairness of popular language models in stereotype-free scenarios, we utilize USE to automatically generate benchmarks without any gender-related language. By leveraging USE's sentence-level score, we also repurpose prior gender bias benchmarks (Winobias and Winogender) for non-stereotypical evaluation. Surprisingly, we find low fairness across all 28 tested models. Concretely, models demonstrate fair behavior in only 9%-41% of stereotype-free sentences, suggesting that bias does not solely stem from the presence of gender-related words. These results raise important questions about where underlying model biases come from and highlight the need for more systematic and comprehensive bias evaluation. We release the full dataset and code at https://ucinlp.github.io/unstereo-eval. | [
"['Catarina G Belém' 'Preethi Seshadri' 'Yasaman Razeghi' 'Sameer Singh']"
]
|
null | null | 2405.00592 | null | null | http://arxiv.org/pdf/2405.00592v3 | 2024-06-26T16:56:06Z | 2024-05-01T15:59:00Z | Scaling and renormalization in high-dimensional regression | This paper presents a succinct derivation of the training and generalization performance of a variety of high-dimensional ridge regression models using the basic tools of random matrix theory and free probability. We provide an introduction and review of recent results on these topics, aimed at readers with backgrounds in physics and deep learning. Analytic formulas for the training and generalization errors are obtained in a few lines of algebra directly from the properties of the $S$-transform of free probability. This allows for a straightforward identification of the sources of power-law scaling in model performance. We compute the generalization error of a broad class of random feature models. We find that in all models, the $S$-transform corresponds to the train-test generalization gap, and yields an analogue of the generalized-cross-validation estimator. Using these techniques, we derive fine-grained bias-variance decompositions for a very general class of random feature models with structured covariates. These novel results allow us to discover a scaling regime for random feature models where the variance due to the features limits performance in the overparameterized setting. We also demonstrate how anisotropic weight structure in random feature models can limit performance and lead to nontrivial exponents for finite-width corrections in the overparameterized setting. Our results extend and provide a unifying perspective on earlier models of neural scaling laws. | [
"['Alexander Atanasov' 'Jacob A. Zavatone-Veth' 'Cengiz Pehlevan']"
]
|
null | null | 2405.00602 | null | null | http://arxiv.org/pdf/2405.00602v1 | 2024-05-01T16:13:54Z | 2024-05-01T16:13:54Z | Investigating Automatic Scoring and Feedback using Large Language Models | Automatic grading and feedback have long been studied using traditional machine learning and deep learning techniques based on language models. With the recent accessibility of high-performing large language models (LLMs) like LLaMA-2, there is an opportunity to investigate the use of these LLMs for automatic grading and feedback generation. Despite the increase in performance, LLMs require significant computational resources for fine-tuning and additional specific adjustments to enhance their performance for such tasks. To address these issues, Parameter Efficient Fine-tuning (PEFT) methods, such as LoRA and QLoRA, have been adopted to decrease memory and computational requirements in model fine-tuning. This paper explores the efficacy of PEFT-based quantized models, employing a classification or regression head, to fine-tune LLMs for automatically assigning continuous numerical grades to short answers and essays, as well as generating corresponding feedback. We conducted experiments on both proprietary and open-source datasets for our tasks. The results show that predictions of grade scores via fine-tuned LLMs are highly accurate, achieving less than 3% error in grade percentage on average. For providing graded feedback, fine-tuned 4-bit quantized LLaMA-2 13B models outperform competitive base models and achieve high similarity with subject matter expert feedback in terms of high BLEU and ROUGE scores and qualitatively in terms of feedback. The findings from this study provide important insights into the impacts of the emerging capabilities of using quantization approaches to fine-tune LLMs for various downstream tasks, such as automatic short answer scoring and feedback generation, at comparatively lower costs and latency. | [
"['Gloria Ashiya Katuka' 'Alexander Gain' 'Yen-Yun Yu']"
]
|
null | null | 2405.00614 | null | null | http://arxiv.org/pdf/2405.00614v1 | 2024-05-01T16:35:04Z | 2024-05-01T16:35:04Z | Multigroup Robustness | To address the shortcomings of real-world datasets, robust learning algorithms have been designed to overcome arbitrary and indiscriminate data corruption. However, practical processes of gathering data may lead to patterns of data corruption that are localized to specific partitions of the training dataset. Motivated by critical applications where the learned model is deployed to make predictions about people from a rich collection of overlapping subpopulations, we initiate the study of multigroup robust algorithms whose robustness guarantees for each subpopulation only degrade with the amount of data corruption inside that subpopulation. When the data corruption is not distributed uniformly over subpopulations, our algorithms provide more meaningful robustness guarantees than standard guarantees that are oblivious to how the data corruption and the affected subpopulations are related. Our techniques establish a new connection between multigroup fairness and robustness. | [
"['Lunjia Hu' 'Charlotte Peale' 'Judy Hanwen Shen']"
]
|
null | null | 2405.00622 | null | null | http://arxiv.org/pdf/2405.00622v1 | 2024-05-01T16:43:21Z | 2024-05-01T16:43:21Z | Causal Evaluation of Language Models | Causal reasoning is viewed as crucial for achieving human-level machine intelligence. Recent advances in language models have expanded the horizons of artificial intelligence across various domains, sparking inquiries into their potential for causal reasoning. In this work, we introduce Causal evaluation of Language Models (CaLM), which, to the best of our knowledge, is the first comprehensive benchmark for evaluating the causal reasoning capabilities of language models. First, we propose the CaLM framework, which establishes a foundational taxonomy consisting of four modules: causal target (i.e., what to evaluate), adaptation (i.e., how to obtain the results), metric (i.e., how to measure the results), and error (i.e., how to analyze the bad results). This taxonomy defines a broad evaluation design space while systematically selecting criteria and priorities. Second, we compose the CaLM dataset, comprising 126,334 data samples, to provide curated sets of causal targets, adaptations, metrics, and errors, offering extensive coverage for diverse research pursuits. Third, we conduct an extensive evaluation of 28 leading language models on a core set of 92 causal targets, 9 adaptations, 7 metrics, and 12 error types. Fourth, we perform detailed analyses of the evaluation results across various dimensions (e.g., adaptation, scale). Fifth, we present 50 high-level empirical findings across 9 dimensions (e.g., model), providing valuable guidance for future language model development. Finally, we develop a multifaceted platform, including a website, leaderboards, datasets, and toolkits, to support scalable and adaptable assessments. 
We envision CaLM as an ever-evolving benchmark for the community, systematically updated with new causal targets, adaptations, models, metrics, and error types to reflect ongoing research advancements. Project website is at https://opencausalab.github.io/CaLM. | [
"['Sirui Chen' 'Bo Peng' 'Meiqi Chen' 'Ruiqi Wang' 'Mengying Xu'\n 'Xingyu Zeng' 'Rui Zhao' 'Shengjie Zhao' 'Yu Qiao' 'Chaochao Lu']"
]
|
null | null | 2405.00625 | null | null | http://arxiv.org/pdf/2405.00625v1 | 2024-05-01T16:48:28Z | 2024-05-01T16:48:28Z | Queue-based Eco-Driving at Roundabouts with Reinforcement Learning | We address eco-driving at roundabouts in mixed traffic to enhance traffic flow and traffic efficiency in urban areas. The aim is to proactively optimize the speed of automated or non-automated connected vehicles (CVs), ensuring both an efficient approach and smooth entry into roundabouts. We incorporate the traffic situation ahead, i.e. preceding vehicles and waiting queues. Further, we develop two approaches: a rule-based and a Reinforcement Learning (RL) based eco-driving system, with both using the approach link and information from conflicting CVs for speed optimization. A fair comparison of rule-based and RL-based approaches is performed to explore RL as a viable alternative to classical optimization. Results show that both approaches outperform the baseline. Improvements significantly increase with growing traffic volumes, leading to the best results on average being obtained at high volumes. Near capacity, performance deteriorates, indicating limited applicability at capacity limits. Examining different CV penetration rates, a decline in performance is observed, but with substantial results still being achieved at lower CV rates. RL agents can discover effective policies for speed optimization in dynamic roundabout settings, but they do not offer a substantial advantage over classical approaches, especially at higher traffic volumes or lower CV penetration rates. | [
"['Anna-Lena Schlamp' 'Werner Huber' 'Stefanie Schmidtner']"
]
|
null | null | 2405.00627 | null | null | http://arxiv.org/pdf/2405.00627v1 | 2024-05-01T16:49:54Z | 2024-05-01T16:49:54Z | Koopman-based Deep Learning for Nonlinear System Estimation | Nonlinear differential equations are encountered as models of fluid flow, spiking neurons, and many other systems of interest in the real world. Common features of these systems are that their behaviors are difficult to describe exactly and invariably unmodeled dynamics present challenges in making precise predictions. In many cases the models exhibit extremely complicated behavior due to bifurcations and chaotic regimes. In this paper, we present a novel data-driven linear estimator that uses Koopman operator theory to extract finite-dimensional representations of complex nonlinear systems. The extracted model is used together with a deep reinforcement learning network that learns the optimal stepwise actions to predict future states of the original nonlinear system. Our estimator is also adaptive to a diffeomorphic transformation of the nonlinear system which enables transfer learning to compute state estimates of the transformed system without relearning from scratch. | [
"['Zexin Sun' 'Mingyu Chen' 'John Baillieul']"
]
|
null | null | 2405.00629 | null | null | http://arxiv.org/pdf/2405.00629v2 | 2024-05-23T08:42:25Z | 2024-05-01T16:54:12Z | HUGO -- Highlighting Unseen Grid Options: Combining Deep Reinforcement
Learning with a Heuristic Target Topology Approach | With the growth of Renewable Energy (RE) generation, the operation of power grids has become increasingly complex. One solution could be automated grid operation, where Deep Reinforcement Learning (DRL) has repeatedly shown significant potential in Learning to Run a Power Network (L2RPN) challenges. However, only individual actions at the substation level have been subjected to topology optimization by most existing DRL algorithms. In contrast, we take a more holistic approach by proposing specific Target Topologies (TTs) as actions. These topologies are selected based on their robustness. As part of this paper, we present a search algorithm to find the TTs and upgrade our previously developed DRL agent CurriculumAgent (CAgent) to a novel topology agent. We compare the upgraded agent to the previous CAgent and increase its L2RPN score significantly, by 10%. Further, we achieve a 25% better median survival time with our TTs included. Later analysis shows that almost all TTs are close to the base topology, explaining their robustness. | [
"['Malte Lehna' 'Clara Holzhüter' 'Sven Tomforde' 'Christoph Scholz']"
]
|
null | null | 2405.00636 | null | null | http://arxiv.org/pdf/2405.00636v1 | 2024-05-01T17:04:20Z | 2024-05-01T17:04:20Z | Robustness of graph embedding methods for community detection | This study investigates the robustness of graph embedding methods for community detection in the face of network perturbations, specifically edge deletions. Graph embedding techniques, which represent nodes as low-dimensional vectors, are widely used for various graph machine learning tasks due to their ability to capture structural properties of networks effectively. However, the impact of perturbations on the performance of these methods remains relatively understudied. The research considers state-of-the-art graph embedding methods from two families: matrix factorization (e.g., LE, LLE, HOPE, M-NMF) and random walk-based (e.g., DeepWalk, LINE, node2vec). Through experiments conducted on both synthetic and real-world networks, the study reveals varying degrees of robustness within each family of graph embedding methods. The robustness is found to be influenced by factors such as network size, initial community partition strength, and the type of perturbation. Notably, node2vec and LLE consistently demonstrate higher robustness for community detection across different scenarios, including networks with degree and community size heterogeneity. These findings highlight the importance of selecting an appropriate graph embedding method based on the specific characteristics of the network and the task at hand, particularly in scenarios where robustness to perturbations is crucial. | [
"['Zhi-Feng Wei' 'Pablo Moriano' 'Ramakrishnan Kannan']"
]
|
null | null | 2405.00642 | null | null | http://arxiv.org/pdf/2405.00642v1 | 2024-05-01T17:10:55Z | 2024-05-01T17:10:55Z | From Empirical Observations to Universality: Dynamics of Deep Learning
with Inputs Built on Gaussian mixture | This study broadens the scope of theoretical frameworks in deep learning by delving into the dynamics of neural networks with inputs that exhibit the structural characteristics of a Gaussian Mixture (GM). We analyzed how the dynamics of neural networks under GM-structured inputs diverge from the predictions of conventional theories based on simple Gaussian structures. A key finding of our work is the observed convergence of neural network dynamics towards conventional theory even with standardized GM inputs, highlighting an unexpected universality. We found that standardization, especially in conjunction with certain nonlinear functions, plays a critical role in this phenomenon. Consequently, despite the complex and varied nature of GM distributions, we demonstrate that neural networks exhibit asymptotic behaviors in line with predictions under simple Gaussian frameworks. | [
"['Jaeyong Bae' 'Hawoong Jeong']"
]
|
null | null | 2405.00645 | null | null | http://arxiv.org/pdf/2405.00645v1 | 2024-05-01T17:18:46Z | 2024-05-01T17:18:46Z | Gradient-based Automatic Per-Weight Mixed Precision Quantization for
Neural Networks On-Chip | Model size and inference speed at deployment time, are major challenges in many deep learning applications. A promising strategy to overcome these challenges is quantization. However, a straightforward uniform quantization to very low precision can result in significant accuracy loss. Mixed-precision quantization, based on the idea that certain parts of the network can accommodate lower precision without compromising performance compared to other parts, offers a potential solution. In this work, we present High Granularity Quantization (HGQ), an innovative quantization-aware training method designed to fine-tune the per-weight and per-activation precision in an automatic way for ultra-low latency and low power neural networks which are to be deployed on FPGAs. We demonstrate that HGQ can outperform existing methods by a substantial margin, achieving resource reduction by up to a factor of 20 and latency improvement by a factor of 5 while preserving accuracy. | [
"['Chang Sun' 'Thea K. Årrestad' 'Vladimir Loncar' 'Jennifer Ngadiuba'\n 'Maria Spiropulu']"
]
|
null | null | 2405.00646 | null | null | http://arxiv.org/pdf/2405.00646v1 | 2024-05-01T17:21:36Z | 2024-05-01T17:21:36Z | Learning to Compose: Improving Object Centric Learning by Injecting
Compositionality | Learning compositional representations is a key aspect of object-centric learning, as it enables flexible systematic generalization and supports complex visual reasoning. However, most of the existing approaches rely on an auto-encoding objective, while compositionality is implicitly imposed by the architectural or algorithmic bias in the encoder. This misalignment between the auto-encoding objective and learning compositionality often results in failure to capture meaningful object representations. In this study, we propose a novel objective that explicitly encourages compositionality of the representations. Built upon the existing object-centric learning framework (e.g., slot attention), our method incorporates an additional constraint that an arbitrary mixture of object representations from two images should be valid, by maximizing the likelihood of the composite data. We demonstrate that incorporating our objective into the existing framework consistently improves object-centric learning and enhances robustness to architectural choices. | [
"['Whie Jung' 'Jaehoon Yoo' 'Sungjin Ahn' 'Seunghoon Hong']"
]
|
null | null | 2405.00647 | null | null | http://arxiv.org/pdf/2405.00647v1 | 2024-05-01T17:24:20Z | 2024-05-01T17:24:20Z | Screening of BindingDB database ligands against EGFR, HER2, Estrogen,
Progesterone and NF-kB receptors based on machine learning and molecular
docking | Breast cancer, the second most prevalent cancer among women worldwide, necessitates the exploration of novel therapeutic approaches. To target the four subgroups of breast cancer "hormone receptor-positive and HER2-negative, hormone receptor-positive and HER2-positive, hormone receptor-negative and HER2-positive, and hormone receptor-negative and HER2-negative", it is crucial to inhibit specific targets such as EGFR, HER2, ER, NF-kB, and PR. In this study, we evaluated various methods for binary and multiclass classification. Among them, the GA-SVM-SVM:GA-SVM-SVM model was selected with an accuracy of 0.74, an F1-score of 0.73, and an AUC of 0.94 for virtual screening of ligands from the BindingDB database. This model successfully identified 4454, 803, 438, and 378 ligands with over 90% precision in both active/inactive and target prediction for the classes of EGFR+HER2, ER, NF-kB, and PR, respectively, from the BindingDB database. Based on the selected ligands, we created a dendrogram that categorizes different ligands based on their targets. This dendrogram aims to facilitate the exploration of chemical space for various therapeutic targets. Ligands that surpassed a 90% threshold in the product of activity probability and correct target selection probability were chosen for further investigation using molecular docking. The binding energy range for these ligands against their respective targets was calculated to be between -15 and -5 kcal/mol. Finally, based on general and common rules in medicinal chemistry, we selected 2, 3, 3, and 8 new ligands with high priority for further studies in the EGFR+HER2, ER, NF-kB, and PR classes, respectively. | [
"['Parham Rezaee' 'Shahab Rezaee' 'Malik Maaza' 'Seyed Shahriar Arab']"
]
|
null | null | 2405.00657 | null | null | http://arxiv.org/pdf/2405.00657v1 | 2024-05-01T17:37:50Z | 2024-05-01T17:37:50Z | RST-LoRA: A Discourse-Aware Low-Rank Adaptation for Long Document
Abstractive Summarization | For long document summarization, discourse structure is important to discern the key content of the text and the differences in importance level between sentences. Unfortunately, the integration of rhetorical structure theory (RST) into parameter-efficient fine-tuning strategies for long document summarization remains unexplored. Therefore, this paper introduces RST-LoRA and proposes four RST-aware variants to explicitly incorporate RST into the LoRA model. Our empirical evaluation demonstrates that incorporating the type and uncertainty of rhetorical relations can complementarily enhance the performance of LoRA in summarization tasks. Furthermore, the best-performing variant we introduced outperforms the vanilla LoRA and full-parameter fine-tuning models, as confirmed by multiple automatic and human evaluations, and even surpasses previous state-of-the-art methods. | [
"['Dongqi Pu' 'Vera Demberg']"
]
|
null | null | 2405.00662 | null | null | http://arxiv.org/pdf/2405.00662v1 | 2024-05-01T17:50:16Z | 2024-05-01T17:50:16Z | No Representation, No Trust: Connecting Representation, Collapse, and
Trust Issues in PPO | Reinforcement learning (RL) is inherently rife with non-stationarity since the states and rewards the agent observes during training depend on its changing policy. Therefore, networks in deep RL must be capable of adapting to new observations and fitting new targets. However, previous works have observed that networks in off-policy deep value-based methods exhibit a decrease in representation rank, often correlated with an inability to continue learning or a collapse in performance. Although this phenomenon has generally been attributed to neural network learning under non-stationarity, it has been overlooked in on-policy policy optimization methods which are often thought capable of training indefinitely. In this work, we empirically study representation dynamics in Proximal Policy Optimization (PPO) on the Atari and MuJoCo environments, revealing that PPO agents are also affected by feature rank deterioration and loss of plasticity. We show that this is aggravated with stronger non-stationarity, ultimately driving the actor's performance to collapse, regardless of the performance of the critic. We draw connections between representation collapse, performance collapse, and trust region issues in PPO, and present Proximal Feature Optimization (PFO), a novel auxiliary loss, that along with other interventions shows that regularizing the representation dynamics improves the performance of PPO agents. | [
"['Skander Moalla' 'Andrea Miele' 'Razvan Pascanu' 'Caglar Gulcehre']"
]
|
null | null | 2405.00664 | null | null | http://arxiv.org/pdf/2405.00664v1 | 2024-05-01T17:50:37Z | 2024-05-01T17:50:37Z | Is Bigger Edit Batch Size Always Better? -- An Empirical Study on Model
Editing with Llama-3 | This study presents a targeted model editing analysis focused on the latest large language model, Llama-3. We explore the efficacy of popular model editing techniques - ROME, MEMIT, and EMMET, which are designed for precise layer interventions. We identify the most effective layers for targeted edits through an evaluation that encompasses up to 4096 edits across three distinct strategies: sequential editing, batch editing, and a hybrid approach we call sequential-batch editing. Our findings indicate that increasing edit batch-sizes may degrade model performance more significantly than using smaller edit batches sequentially for an equal number of edits. With this, we argue that sequential model editing is an important component for scaling model editing methods, and future research should focus on methods that combine both batched and sequential editing. This observation suggests a potential limitation in current model editing methods, which push towards bigger edit batch sizes, and we hope it paves the way for future investigations into optimizing batch sizes and model editing performance. | [
"['Junsang Yoon' 'Akshat Gupta' 'Gopala Anumanchipalli']"
]
|
null | null | 2405.00675 | null | null | http://arxiv.org/pdf/2405.00675v4 | 2024-06-14T05:57:01Z | 2024-05-01T17:59:20Z | Self-Play Preference Optimization for Language Model Alignment | Traditional reinforcement learning from human feedback (RLHF) approaches relying on parametric models like the Bradley-Terry model fall short in capturing the intransitivity and irrationality in human preferences. Recent advancements suggest that directly working with preference probabilities can yield a more accurate reflection of human preferences, enabling more flexible and accurate language model alignment. In this paper, we propose a self-play-based method for language model alignment, which treats the problem as a constant-sum two-player game aimed at identifying the Nash equilibrium policy. Our approach, dubbed Self-Play Preference Optimization (SPPO), approximates the Nash equilibrium through iterative policy updates and enjoys a theoretical convergence guarantee. Our method can effectively increase the log-likelihood of the chosen response and decrease that of the rejected response, which cannot be trivially achieved by symmetric pairwise loss such as Direct Preference Optimization (DPO) and Identity Preference Optimization (IPO). In our experiments, using only 60k prompts (without responses) from the UltraFeedback dataset and without any prompt augmentation, by leveraging a pre-trained preference model PairRM with only 0.4B parameters, SPPO can obtain a model from fine-tuning Mistral-7B-Instruct-v0.2 that achieves the state-of-the-art length-controlled win-rate of 28.53% against GPT-4-Turbo on AlpacaEval 2.0. It also outperforms the (iterative) DPO and IPO on MT-Bench and the Open LLM Leaderboard. Starting from a stronger base model Llama-3-8B-Instruct, we are able to achieve a length-controlled win rate of 38.77%. Notably, the strong performance of SPPO is achieved without additional external supervision (e.g., responses, preferences, etc.) 
from GPT-4 or other stronger language models. Codes are available at https://github.com/uclaml/SPPO. | [
"['Yue Wu' 'Zhiqing Sun' 'Huizhuo Yuan' 'Kaixuan Ji' 'Yiming Yang'\n 'Quanquan Gu']"
]
|
null | null | 2405.00678 | null | null | http://arxiv.org/abs/2405.00678v1 | 2024-01-26T16:42:51Z | 2024-01-26T16:42:51Z | Low-cost modular devices for on-road vehicle detection and
characterisation | Detecting and characterising vehicles is one of the purposes of embedded systems used in intelligent environments. An analysis of a vehicle's characteristics can reveal inappropriate or dangerous behaviour. This detection makes it possible to sanction or notify emergency services to take early and practical actions. Vehicle detection and characterisation systems employ complex sensors such as video cameras, especially in urban environments. These sensors provide high precision and performance, although their price and computational requirements are directly proportional to their accuracy. This article introduces a system based on modular devices that is economical and has a low computational cost. These devices use ultrasonic sensors to detect the speed and length of vehicles. The measurement accuracy is improved through the collaboration of the device modules. The experiments were performed using multiple modules oriented to different angles. This module is coupled with another specifically designed to detect distance using the previous modules' speed and length data. The collaboration between different modules reduces the speed relative error to a range of 1 to 5, depending on the angle configuration used in the modules. | [
"['Jose-Luis Poza-Lujan' 'Pedro Uribe-Chavert' 'Juan-Luis Posadas-Yagüe']"
]
|
null | null | 2405.00688 | null | null | http://arxiv.org/pdf/2405.00688v1 | 2024-03-09T23:28:01Z | 2024-03-09T23:28:01Z | Understanding Social Perception, Interactions, and Safety Aspects of
Sidewalk Delivery Robots Using Sentiment Analysis | This article presents a comprehensive sentiment analysis (SA) of comments on YouTube videos related to Sidewalk Delivery Robots (SDRs). We manually annotated the collected YouTube comments with three sentiment labels: negative (0), positive (1), and neutral (2). We then constructed models for text sentiment classification and tested the models' performance on both binary and ternary classification tasks in terms of accuracy, precision, recall, and F1 score. Our results indicate that, in binary classification tasks, the Support Vector Machine (SVM) model using Term Frequency-Inverse Document Frequency (TF-IDF) and N-gram achieves the highest accuracy. In ternary classification tasks, the model using Bidirectional Encoder Representations from Transformers (BERT), Long Short-Term Memory Networks (LSTM) and Gated Recurrent Unit (GRU) significantly outperforms other machine learning models, achieving an accuracy, precision, recall, and F1 score of 0.78. Additionally, we employ the Latent Dirichlet Allocation model to generate 10 topics from the comments to explore the public's underlying views on SDRs. Drawing from these findings, we propose targeted recommendations for shaping future policies concerning SDRs. This work provides valuable insights for stakeholders in the SDR sector regarding social perception, interaction, and safety. | [
"['Yuchen Du' 'Tho V. Le']"
]
|
null | null | 2405.00695 | null | null | http://arxiv.org/pdf/2405.00695v1 | 2024-03-28T09:38:26Z | 2024-03-28T09:38:26Z | Joint torques prediction of a robotic arm using neural networks | Accurate dynamic models are crucial for many robotic applications. Traditional approaches to deriving these models are based on the application of Lagrangian or Newtonian mechanics. Although these methods provide a good insight into the physical behaviour of the system, they rely on the exact knowledge of parameters such as inertia, friction and joint flexibility. In addition, the system is often affected by uncertain and nonlinear effects, such as saturation and dead zones, which can be difficult to model. A popular alternative is the application of Machine Learning (ML) techniques - e.g., Neural Networks (NNs) - in the context of a "black-box" methodology. This paper reports on our experience with this approach for a real-life 6 degrees of freedom (DoF) manipulator. Specifically, we considered several NN architectures: single NN, multiple NNs, and cascade NN. We compared the performance of the system by using different policies for selecting the NN hyperparameters. Our experiments reveal that the best accuracy and performance are obtained by a cascade NN, in which we encode our prior physical knowledge about the dependencies between joints, complemented by an appropriate optimisation of the hyperparameters. | [
"[\"Giulia d'Addato\" 'Ruggero Carli' 'Eurico Pedrosa' 'Artur Pereira'\n 'Luigi Palopoli' 'Daniele Fontanelli']"
]
|
null | null | 2405.00697 | null | null | http://arxiv.org/pdf/2405.00697v1 | 2024-04-10T11:20:52Z | 2024-04-10T11:20:52Z | Pricing Catastrophe Bonds -- A Probabilistic Machine Learning Approach | This paper proposes a probabilistic machine learning method to price catastrophe (CAT) bonds in the primary market. The proposed method combines machine-learning-based predictive models with Conformal Prediction, an innovative algorithm that generates distribution-free probabilistic forecasts for CAT bond prices. Using primary market CAT bond transaction records between January 1999 and March 2021, the proposed method is found to be more robust and yields more accurate predictions of the bond spreads than traditional regression-based methods. Furthermore, the proposed method generates more informative prediction intervals than linear regression and identifies important nonlinear relationships between various risk factors and bond spreads, suggesting that linear regressions could misestimate the bond spreads. Overall, this paper demonstrates the potential of machine learning methods in improving the pricing of CAT bonds. | [
"['Xiaowei Chen' 'Hong Li' 'Yufan Lu' 'Rui Zhou']"
]
|
null | null | 2405.00699 | null | null | http://arxiv.org/pdf/2405.00699v1 | 2024-04-15T15:57:01Z | 2024-04-15T15:57:01Z | Direct Training Needs Regularisation: Anytime Optimal Inference Spiking
Neural Network | Spiking Neural Network (SNN) is acknowledged as the next generation of Artificial Neural Network (ANN) and holds great promise in effectively processing spatial-temporal information. However, the choice of timestep becomes crucial as it significantly impacts the accuracy of the neural network training. Specifically, a smaller timestep indicates better performance in efficient computing, resulting in reduced latency and operations. Yet using a small timestep may lead to low accuracy due to insufficient information presentation with few spikes. This observation motivates us to develop an SNN that is more reliable for an adaptive timestep by introducing a novel regularisation technique, namely Spatial-Temporal Regulariser (STR). Our approach regulates the ratio between the strength of spikes and membrane potential at each timestep. This effectively balances spatial and temporal performance during training, ultimately resulting in an Anytime Optimal Inference (AOI) SNN. Through extensive experiments on frame-based and event-based datasets, our method, in combination with cutoff based on softmax output, achieves state-of-the-art performance in terms of both latency and accuracy. Notably, with STR and cutoff, SNN achieves inference 2.14 to 2.89 times faster than the pre-configured timestep with a near-zero accuracy drop of 0.50% to 0.64% over the event-based datasets. Code available: https://github.com/Dengyu-Wu/AOI-SNN-Regularisation | [
"['Dengyu Wu' 'Yi Qi' 'Kaiwen Cai' 'Gaojie Jin' 'Xinping Yi'\n 'Xiaowei Huang']"
]
|
null | null | 2405.00705 | null | null | http://arxiv.org/pdf/2405.00705v1 | 2024-04-23T04:56:48Z | 2024-04-23T04:56:48Z | SHED: Shapley-Based Automated Dataset Refinement for Instruction
Fine-Tuning | The pre-trained Large Language Models (LLMs) can be adapted for many downstream tasks and tailored to align with human preferences through fine-tuning. Recent studies have discovered that LLMs can achieve desirable performance with only a small amount of high-quality data, suggesting that a large amount of the data in these extensive datasets is redundant or even harmful. Identifying high-quality data from vast datasets to curate small yet effective datasets has emerged as a critical challenge. In this paper, we introduce SHED, an automated dataset refinement framework based on Shapley value for instruction fine-tuning. SHED eliminates the need for human intervention or the use of commercial LLMs. Moreover, the datasets curated through SHED exhibit transferability, indicating they can be reused across different LLMs with consistently high performance. We conduct extensive experiments to evaluate the datasets curated by SHED. The results demonstrate SHED's superiority over state-of-the-art methods across various tasks and LLMs; notably, datasets comprising only 10% of the original data selected by SHED achieve performance comparable to or surpassing that of the full datasets. | [
"['Yexiao He' 'Ziyao Wang' 'Zheyu Shen' 'Guoheng Sun' 'Yucong Dai'\n 'Yongkai Wu' 'Hongyi Wang' 'Ang Li']"
]
|
null | null | 2405.00708 | null | null | http://arxiv.org/pdf/2405.00708v1 | 2024-04-23T19:57:03Z | 2024-04-23T19:57:03Z | Interactive Analysis of LLMs using Meaningful Counterfactuals | Counterfactual examples are useful for exploring the decision boundaries of machine learning models and determining feature attributions. How can we apply counterfactual-based methods to analyze and explain LLMs? We identify the following key challenges. First, the generated textual counterfactuals should be meaningful and readable to users and thus can be mentally compared to draw conclusions. Second, to make the solution scalable to long-form text, users should be equipped with tools to create batches of counterfactuals from perturbations at various granularity levels and interactively analyze the results. In this paper, we tackle the above challenges and contribute 1) a novel algorithm for generating batches of complete and meaningful textual counterfactuals by removing and replacing text segments in different granularities, and 2) LLM Analyzer, an interactive visualization tool to help users understand an LLM's behaviors by interactively inspecting and aggregating meaningful counterfactuals. We evaluate the proposed algorithm by the grammatical correctness of its generated counterfactuals using 1,000 samples from medical, legal, finance, education, and news datasets. In our experiments, 97.2% of the counterfactuals are grammatically correct. Through a use case, user studies, and feedback from experts, we demonstrate the usefulness and usability of the proposed interactive visualization tool. | [
"['Furui Cheng' 'Vilém Zouhar' 'Robin Shing Moon Chan' 'Daniel Fürst'\n 'Hendrik Strobelt' 'Mennatallah El-Assady']"
]
|
null | null | 2405.00709 | null | null | http://arxiv.org/pdf/2405.00709v1 | 2024-04-23T20:37:24Z | 2024-04-23T20:37:24Z | Evaluating Tool-Augmented Agents in Remote Sensing Platforms | Tool-augmented Large Language Models (LLMs) have shown impressive capabilities in remote sensing (RS) applications. However, existing benchmarks assume question-answering input templates over predefined image-text data pairs. These standalone instructions neglect the intricacies of realistic user-grounded tasks. Consider a geospatial analyst: they zoom in a map area, they draw a region over which to collect satellite imagery, and they succinctly ask "Detect all objects here". Where is `here`, if it is not explicitly hardcoded in the image-text template, but instead is implied by the system state, e.g., the live map positioning? To bridge this gap, we present GeoLLM-QA, a benchmark designed to capture long sequences of verbal, visual, and click-based actions on a real UI platform. Through in-depth evaluation of state-of-the-art LLMs over a diverse set of 1,000 tasks, we offer insights towards stronger agents for RS applications. | [
"['Simranjit Singh' 'Michael Fore' 'Dimitrios Stamoulis']"
]
|
null | null | 2405.00710 | null | null | http://arxiv.org/pdf/2405.00710v1 | 2024-04-24T21:48:43Z | 2024-04-24T21:48:43Z | Homonym Sense Disambiguation in the Georgian Language | This research proposes a novel approach to the Word Sense Disambiguation (WSD) task in the Georgian language, based on supervised fine-tuning of a pre-trained Large Language Model (LLM) on a dataset formed by filtering the Georgian Common Crawls corpus. The dataset is used to train a classifier for words with multiple senses. Additionally, we present experimental results of using LSTM for WSD. Accurately disambiguating homonyms is crucial in natural language processing. Georgian, an agglutinative language belonging to the Kartvelian language family, presents unique challenges in this context. The aim of this paper is to highlight the specific problems concerning homonym disambiguation in the Georgian language and to present our approach to solving them. The techniques discussed in the article achieve 95% accuracy for predicting lexical meanings of homonyms using a hand-classified dataset of over 7500 sentences. | [
"['Davit Melikidze' 'Alexander Gamkrelidze']"
]
|
null | null | 2405.00712 | null | null | http://arxiv.org/pdf/2405.00712v2 | 2024-05-04T03:48:19Z | 2024-04-25T10:07:56Z | SoK: Behind the Accuracy of Complex Human Activity Recognition Using
Deep Learning | Human Activity Recognition (HAR) is a well-studied field with research dating back to the 1980s. Over time, HAR technologies have evolved significantly from manual feature extraction, rule-based algorithms, and simple machine learning models to powerful deep learning models, from one sensor type to a diverse array of sensing modalities. The scope has also expanded from recognising a limited set of activities to encompassing a larger variety of both simple and complex activities. However, there still exist many challenges that hinder advancement in complex activity recognition using modern deep learning methods. In this paper, we comprehensively systematise factors leading to inaccuracy in complex HAR, such as data variety and model capacity. Among many sensor types, we give more attention to wearable and camera sensors due to their prevalence. Through this Systematisation of Knowledge (SoK) paper, readers can gain a solid understanding of the development history and existing challenges of HAR, different categorisations of activities, obstacles in deep learning-based complex HAR that impact accuracy, and potential research directions. | [
"['Duc-Anh Nguyen' 'Nhien-An Le-Khac']"
]
|
null | null | 2405.00715 | null | null | http://arxiv.org/pdf/2405.00715v4 | 2024-06-10T01:09:03Z | 2024-04-25T15:34:53Z | Adapting Open-Source Large Language Models for Cost-Effective,
Expert-Level Clinical Note Generation with On-Policy Reinforcement Learning | Proprietary Large Language Models (LLMs) such as GPT-4 and Gemini have demonstrated promising capabilities in clinical text summarization tasks. However, due to patient data privacy concerns and computational costs, many healthcare providers prefer using small, locally-hosted models over external generic LLMs. This study presents a comprehensive domain- and task-specific adaptation process for the open-source LLaMA-2 13 billion parameter model, enabling it to generate high-quality clinical notes from outpatient patient-doctor dialogues. Our process incorporates continued pre-training, supervised fine-tuning, and reinforcement learning from both AI and human feedback. We introduced a new approach, DistillDirect, for performing on-policy reinforcement learning with Gemini 1.0 Pro as the teacher model. Our resulting model, LLaMA-Clinic, can generate clinical notes comparable in quality to those authored by physicians. In a blinded physician reader study, the majority (90.4%) of individual evaluations rated the notes generated by LLaMA-Clinic as "acceptable" or higher across all three criteria: real-world readiness, completeness, and accuracy. In the more challenging "Assessment and Plan" section, LLaMA-Clinic scored higher (4.2/5) in real-world readiness than physician-authored notes (4.1/5). Our cost analysis for inference shows that our LLaMA-Clinic model achieves a 3.75-fold cost reduction compared to an external generic LLM service. Additionally, we highlight key considerations for future clinical note-generation tasks, emphasizing the importance of pre-defining a best-practice note format, rather than relying on LLMs to determine this for clinical practice. We have made our newly created synthetic clinic dialogue-note dataset and the physician feedback dataset publicly available to foster future research. | [
"['Hanyin Wang' 'Chufan Gao' 'Bolun Liu' 'Qiping Xu' 'Guleid Hussein'\n 'Mohamad El Labban' 'Kingsley Iheasirim' 'Hariprasad Korsapati'\n 'Chuck Outcalt' 'Jimeng Sun']"
]
|
null | null | 2405.00719 | null | null | http://arxiv.org/pdf/2405.00719v1 | 2024-04-25T18:00:46Z | 2024-04-25T18:00:46Z | EEG-Deformer: A Dense Convolutional Transformer for Brain-computer
Interfaces | Effectively learning the temporal dynamics in electroencephalogram (EEG) signals is challenging yet essential for decoding brain activities using brain-computer interfaces (BCIs). Although Transformers are popular for their long-term sequential learning ability in the BCI field, most methods combining Transformers with convolutional neural networks (CNNs) fail to capture the coarse-to-fine temporal dynamics of EEG signals. To overcome this limitation, we introduce EEG-Deformer, which incorporates two main novel components into a CNN-Transformer: (1) a Hierarchical Coarse-to-Fine Transformer (HCT) block that integrates a Fine-grained Temporal Learning (FTL) branch into Transformers, effectively discerning coarse-to-fine temporal patterns; and (2) a Dense Information Purification (DIP) module, which utilizes multi-level, purified temporal information to enhance decoding accuracy. Comprehensive experiments on three representative cognitive tasks consistently verify the generalizability of our proposed EEG-Deformer, demonstrating that it either outperforms existing state-of-the-art methods or is comparable to them. Visualization results show that EEG-Deformer learns from neurophysiologically meaningful brain regions for the corresponding cognitive tasks. The source code can be found at https://github.com/yi-ding-cs/EEG-Deformer. | [
"['Yi Ding' 'Yong Li' 'Hao Sun' 'Rui Liu' 'Chengxuan Tong' 'Cuntai Guan']"
]
|
null | null | 2405.00720 | null | null | http://arxiv.org/pdf/2405.00720v1 | 2024-04-25T19:04:15Z | 2024-04-25T19:04:15Z | A Novel Machine Learning-based Equalizer for a Downstream 100G PAM-4 PON | A frequency-calibrated SCINet (FC-SCINet) equalizer is proposed for downstream 100G PON with 28.7 dB path loss. At 5 km, FC-SCINet improves the BER by 88.87% compared to FFE and a 3-layer DNN with 10.57% lower complexity. | [
"['Chen Shao' 'Elias Giacoumidis' 'Shi Li' 'Jialei Li' 'Michael Faerber'\n 'Tobias Kaefer' 'Andre Richter']"
]
|
null | null | 2405.00721 | null | null | http://arxiv.org/pdf/2405.00721v1 | 2024-04-26T00:04:41Z | 2024-04-26T00:04:41Z | Optimizing Brain-Computer Interface Performance: Advancing EEG Signals
Channel Selection through Regularized CSP and SPEA II Multi-Objective
Optimization | Brain-computer interface systems and the recording of brain activity have garnered significant attention across a diverse spectrum of applications. EEG signals have emerged as a modality for recording neural electrical activity. Among the methodologies designed for feature extraction from EEG data, the method of RCSP has proven to be an effective approach, particularly in the context of MI tasks. RCSP exhibits efficacy in the discrimination and classification of EEG signals. In optimizing the performance of this method, our research extends to a comparative analysis with conventional CSP techniques, as well as optimized methodologies designed for similar applications. Notably, we employ the meta-heuristic multi-objective Strength Pareto Evolutionary Algorithm II (SPEA-II) as a pivotal component of our research paradigm. This is a state-of-the-art approach in the selection of a subset of channels from a multichannel EEG signal with MI tasks. Our main objective is to formulate an optimum channel selection strategy aimed at identifying the most pertinent subset of channels from the multi-dimensional electroencephalogram (EEG) signals. One of the primary objectives inherent to channel selection in the EEG signal analysis pertains to the reduction of the channel count, an approach that enhances user comfort when utilizing gel-based EEG electrodes. Additionally, within this research, we took advantage of ensemble learning models as a component of our decision-making. This technique serves to mitigate the challenges associated with overfitting, especially when confronted with an extensive array of potentially redundant EEG channels and data noise. Our findings not only affirm the performance of RCSP in MI-based BCI systems, but also underscore the significance of channel selection strategies and ensemble learning techniques in optimizing the performance of EEG signal classification. | [
"['M. Moein Esfahani' 'Hossein Sadati' 'Vince D Calhoun']"
]
|
null | null | 2405.00723 | null | null | http://arxiv.org/pdf/2405.00723v1 | 2024-04-26T13:09:50Z | 2024-04-26T13:09:50Z | EEG_RL-Net: Enhancing EEG MI Classification through Reinforcement
Learning-Optimised Graph Neural Networks | Brain-Computer Interfaces (BCIs) rely on accurately decoding electroencephalography (EEG) motor imagery (MI) signals for effective device control. Graph Neural Networks (GNNs) outperform Convolutional Neural Networks (CNNs) in this regard, by leveraging the spatial relationships between EEG electrodes through adjacency matrices. The EEG_GLT-Net framework, featuring the state-of-the-art EEG_GLT adjacency matrix method, has notably enhanced EEG MI signal classification, evidenced by an average accuracy of 83.95% across 20 subjects on the PhysioNet dataset. This significantly exceeds the 76.10% accuracy rate achieved using the Pearson Correlation Coefficient (PCC) method within the same framework. In this research, we advance the field by applying a Reinforcement Learning (RL) approach to the classification of EEG MI signals. Our innovative method empowers the RL agent, enabling not only the classification of EEG MI data points with higher accuracy, but also effective identification of EEG MI data points that are less distinct. We present the EEG_RL-Net, an enhancement of the EEG_GLT-Net framework, which incorporates the trained EEG GCN Block from EEG_GLT-Net at an adjacency matrix density of 13.39% alongside the RL-centric Dueling Deep Q Network (Dueling DQN) block. The EEG_RL-Net model showcases exceptional classification performance, achieving an unprecedented average accuracy of 96.40% across 20 subjects within 25 milliseconds. This model illustrates the transformative effect of RL in EEG MI time point classification. | [
"['Htoo Wai Aung' 'Jiao Jiao Li' 'Yang An' 'Steven W. Su']"
]
|
null | null | 2405.00724 | null | null | http://arxiv.org/pdf/2405.00724v1 | 2024-04-26T15:46:58Z | 2024-04-26T15:46:58Z | Baseline Drift Tolerant Signal Encoding for ECG Classification with Deep
Learning | Common artefacts such as baseline drift, rescaling, and noise critically limit the performance of machine learning-based automated ECG analysis and interpretation. This study proposes Derived Peak (DP) encoding, a non-parametric method that generates signed spikes corresponding to zero crossings of the signal's first- and second-order time derivatives. Notably, DP encoding is invariant to shift and scaling artefacts, and its implementation is further simplified by the absence of user-defined parameters. DP encoding was used to encode the 12-lead ECG data from the PTB-XL dataset (n=18,869 participants) and was fed to 1D-ResNet-18 models trained to identify myocardial infarction, conductive deficits and ST-segment abnormalities. Robustness to artefacts was assessed by corrupting ECG data with sinusoidal baseline drift, shift, rescaling and noise, before encoding. The addition of these artefacts resulted in a significant drop in accuracy for seven other methods from prior art, while DP encoding maintained a baseline AUC of 0.88 under drift, shift and rescaling. DP achieved superior performance to unencoded inputs in the presence of shift (AUC under 1mV shift: 0.91 vs 0.62), and rescaling artefacts (AUC 0.91 vs 0.79). Thus, DP encoding is a simple method by which robustness to common ECG artefacts may be improved for automated ECG analysis and interpretation. | [
"['Robert O Shea' 'Prabodh Katti' 'Bipin Rajendran']"
]
|
null | null | 2405.00725 | null | null | http://arxiv.org/pdf/2405.00725v2 | 2024-05-15T14:52:58Z | 2024-04-26T19:29:48Z | Federated Learning and Differential Privacy Techniques on Multi-hospital
Population-scale Electrocardiogram Data | This research paper explores ways to apply Federated Learning (FL) and Differential Privacy (DP) techniques to population-scale Electrocardiogram (ECG) data. The study learns a multi-label ECG classification model using FL and DP based on 1,565,849 ECG tracings from 7 hospitals in Alberta, Canada. The FL approach allowed collaborative model training without sharing raw data between hospitals while building robust ECG classification models for diagnosing various cardiac conditions. These accurate ECG classification models can facilitate the diagnoses while preserving patient confidentiality using FL and DP techniques. Our results show that the performance achieved using our implementation of the FL approach is comparable to that of the pooled approach, where the model is trained over the aggregating data from all hospitals. Furthermore, our findings suggest that hospitals with limited ECGs for training can benefit from adopting the FL model compared to single-site training. In addition, this study showcases the trade-off between model performance and data privacy by employing DP during model training. Our code is available at https://github.com/vikhyatt/Hospital-FL-DP. | [
"['Vikhyat Agrawal' 'Sunil Vasu Kalmady' 'Venkataseetharam Manoj Malipeddi'\n 'Manisimha Varma Manthena' 'Weijie Sun' 'Saiful Islam' 'Abram Hindle'\n 'Padma Kaul' 'Russell Greiner']"
]
|
null | null | 2405.00727 | null | null | http://arxiv.org/pdf/2405.00727v2 | 2024-05-06T08:15:43Z | 2024-04-26T21:35:05Z | Generalised envelope spectrum-based signal-to-noise objectives:
Formulation, optimisation and application for gear fault detection under
time-varying speed conditions | In vibration-based condition monitoring, optimal filter design improves fault detection by enhancing weak fault signatures within vibration signals. This process involves optimising a derived objective function from a defined objective. The objectives are often based on proxy health indicators to determine the filter's parameters. However, these indicators can be compromised by irrelevant extraneous signal components and fluctuating operational conditions, affecting the filter's efficacy. Fault detection primarily uses the fault component's prominence in the squared envelope spectrum, quantified by a squared envelope spectrum-based signal-to-noise ratio. New optimal filter objective functions are derived from the proposed generalised envelope spectrum-based signal-to-noise objective for machines operating under variable speed conditions. Instead of optimising proxy health indicators, the optimal filter coefficients of the formulation directly maximise the squared envelope spectrum-based signal-to-noise ratio over targeted frequency bands using standard gradient-based optimisers. Four derived objective functions from the proposed objective effectively outperform five prominent methods in tests on three experimental datasets. | [
"['Stephan Schmidt' 'Daniel N. Wilke' 'Konstantinos C. Gryllias']"
]
|
null | null | 2405.00732 | null | null | http://arxiv.org/pdf/2405.00732v1 | 2024-04-29T04:01:45Z | 2024-04-29T04:01:45Z | LoRA Land: 310 Fine-tuned LLMs that Rival GPT-4, A Technical Report | Low Rank Adaptation (LoRA) has emerged as one of the most widely adopted methods for Parameter Efficient Fine-Tuning (PEFT) of Large Language Models (LLMs). LoRA reduces the number of trainable parameters and memory usage while achieving comparable performance to full fine-tuning. We aim to assess the viability of training and serving LLMs fine-tuned with LoRA in real-world applications. First, we measure the quality of LLMs fine-tuned with quantized low rank adapters across 10 base models and 31 tasks for a total of 310 models. We find that 4-bit LoRA fine-tuned models outperform base models by 34 points and GPT-4 by 10 points on average. Second, we investigate the most effective base models for fine-tuning and assess the correlative and predictive capacities of task complexity heuristics in forecasting the outcomes of fine-tuning. Finally, we evaluate the latency and concurrency capabilities of LoRAX, an open-source Multi-LoRA inference server that facilitates the deployment of multiple LoRA fine-tuned models on a single GPU using shared base model weights and dynamic adapter loading. LoRAX powers LoRA Land, a web application that hosts 25 LoRA fine-tuned Mistral-7B LLMs on a single NVIDIA A100 GPU with 80GB memory. LoRA Land highlights the quality and cost-effectiveness of employing multiple specialized LLMs over a single, general-purpose LLM. | [
"['Justin Zhao' 'Timothy Wang' 'Wael Abid' 'Geoffrey Angus' 'Arnav Garg'\n 'Jeffery Kinnison' 'Alex Sherstinsky' 'Piero Molino' 'Travis Addair'\n 'Devvret Rishi']"
]
|
null | null | 2405.00734 | null | null | http://arxiv.org/pdf/2405.00734v1 | 2024-04-29T10:08:43Z | 2024-04-29T10:08:43Z | EEG-MACS: Manifold Attention and Confidence Stratification for EEG-based
Cross-Center Brain Disease Diagnosis under Unreliable Annotations | Cross-center data heterogeneity and annotation unreliability significantly challenge the intelligent diagnosis of diseases using brain signals. A notable example is the EEG-based diagnosis of neurodegenerative diseases, which features subtler abnormal neural dynamics typically observed in small-group settings. To advance this area, in this work, we introduce a transferable framework employing Manifold Attention and Confidence Stratification (MACS) to diagnose neurodegenerative disorders based on EEG signals sourced from four centers with unreliable annotations. The MACS framework's effectiveness stems from these features: 1) The Augmentor generates various EEG-represented brain variants to enrich the data space; 2) The Switcher enhances the feature space for trusted samples and reduces overfitting on incorrectly labeled samples; 3) The Encoder uses the Riemannian manifold and Euclidean metrics to capture spatiotemporal variations and dynamic synchronization in EEG; 4) The Projector, equipped with dual heads, monitors consistency across multiple brain variants and ensures diagnostic accuracy; 5) The Stratifier adaptively stratifies learned samples by confidence levels throughout the training process; 6) Forward and backpropagation in MACS are constrained by confidence stratification to stabilize the learning system amid unreliable annotations. Our subject-independent experiments, conducted on both neurocognitive and movement disorders using cross-center corpora, have demonstrated superior performance compared to existing related algorithms. This work not only improves EEG-based diagnostics for cross-center and small-setting brain diseases but also offers insights into extending MACS techniques to other data analyses, tackling data heterogeneity and annotation unreliability in multimedia and multimodal content understanding. | [
"['Zhenxi Song' 'Ruihan Qin' 'Huixia Ren' 'Zhen Liang' 'Yi Guo' 'Min Zhang'\n 'Zhiguo Zhang']"
]
|
null | null | 2405.00736 | null | null | http://arxiv.org/pdf/2405.00736v1 | 2024-04-29T15:40:19Z | 2024-04-29T15:40:19Z | Joint Signal Detection and Automatic Modulation Classification via Deep
Learning | Signal detection and modulation classification are two crucial tasks in various wireless communication systems. Different from prior works that investigate them independently, this paper studies the joint signal detection and automatic modulation classification (AMC) by considering a realistic and complex scenario, in which multiple signals with different modulation schemes coexist at different carrier frequencies. We first generate a coexisting RADIOML dataset (CRML23) to facilitate the joint design. Different from the publicly available AMC dataset ignoring the signal detection step and containing only one signal, our synthetic dataset covers the more realistic multiple-signal coexisting scenario. Then, we present a joint framework for detection and classification (JDM) for such a multiple-signal coexisting environment, which consists of two modules for signal detection and AMC, respectively. In particular, these two modules are interconnected using a designated data structure called "proposal". Finally, we conduct extensive simulations over the newly developed dataset, which demonstrate the effectiveness of our designs. Our code and dataset are now available as open-source (https://github.com/Singingkettle/ChangShuoRadioData). | [
"['Huijun Xing' 'Xuhui Zhang' 'Shuo Chang' 'Jinke Ren' 'Zixun Zhang'\n 'Jie Xu' 'Shuguang Cui']"
]
|