columns: categories (string) | doi (string) | id (string) | year (float64) | venue (string) | link (string) | updated (string) | published (string) | title (string) | abstract (string) | authors (list)

id: 2404.03200
categories: null | doi: null | year: null | venue: null
link: http://arxiv.org/pdf/2404.03200v1
updated: 2024-04-04T05:08:51Z
published: 2024-04-04T05:08:51Z
title: Future-Proofing Class Incremental Learning
abstract:
Exemplar-Free Class Incremental Learning is a highly challenging setting where replay memory is unavailable. Methods relying on frozen feature extractors have drawn attention recently in this setting due to their impressive performances and lower computational costs. However, those methods are highly dependent on the data used to train the feature extractor and may struggle when an insufficient number of classes is available during the first incremental step. To overcome this limitation, we propose to use a pre-trained text-to-image diffusion model in order to generate synthetic images of future classes and use them to train the feature extractor. Experiments on the standard benchmarks CIFAR100 and ImageNet-Subset demonstrate that our proposed method can be used to improve state-of-the-art methods for exemplar-free class incremental learning, especially in the most difficult settings where the first incremental step only contains few classes. Moreover, we show that using synthetic samples of future classes achieves higher performance than using real data from different classes, paving the way for better and less costly pre-training methods for incremental learning.
[ "['Quentin Jodelet' 'Xin Liu' 'Yin Jun Phua' 'Tsuyoshi Murata']" ]

id: 2404.03204
categories: null | doi: null | year: null | venue: null
link: http://arxiv.org/pdf/2404.03204v3
updated: 2024-05-19T21:34:28Z
published: 2024-04-04T05:15:07Z
title: RALL-E: Robust Codec Language Modeling with Chain-of-Thought Prompting for Text-to-Speech Synthesis
abstract:
We present RALL-E, a robust language modeling method for text-to-speech (TTS) synthesis. While previous work based on large language models (LLMs) shows impressive performance on zero-shot TTS, such methods often suffer from poor robustness, such as unstable prosody (weird pitch and rhythm/duration) and a high word error rate (WER), due to the autoregressive prediction style of language models. The core idea behind RALL-E is chain-of-thought (CoT) prompting, which decomposes the task into simpler steps to enhance the robustness of LLM-based TTS. To accomplish this idea, RALL-E first predicts prosody features (pitch and duration) of the input text and uses them as intermediate conditions to predict speech tokens in a CoT style. Second, RALL-E utilizes the predicted duration prompt to guide the computing of self-attention weights in the Transformer, forcing the model to focus on the corresponding phonemes and prosody features when predicting speech tokens. Results of comprehensive objective and subjective evaluations demonstrate that, compared to the powerful baseline method VALL-E, RALL-E significantly improves the WER of zero-shot TTS from $5.6\%$ (without reranking) and $1.7\%$ (with reranking) to $2.5\%$ and $1.0\%$, respectively. Furthermore, we demonstrate that RALL-E correctly synthesizes sentences that are hard for VALL-E and reduces the error rate from $68\%$ to $4\%$.
[ "['Detai Xin' 'Xu Tan' 'Kai Shen' 'Zeqian Ju' 'Dongchao Yang'\n 'Yuancheng Wang' 'Shinnosuke Takamichi' 'Hiroshi Saruwatari' 'Shujie Liu'\n 'Jinyu Li' 'Sheng Zhao']" ]

id: 2404.03208
categories: null | doi: null | year: null | venue: null
link: http://arxiv.org/pdf/2404.03208v2
updated: 2024-05-06T03:02:55Z
published: 2024-04-04T05:30:03Z
title: HiMAL: A Multimodal Hierarchical Multi-task Auxiliary Learning framework for predicting and explaining Alzheimer disease progression
abstract:
Objective: We aimed to develop and validate HiMAL (Hierarchical Multi-task Auxiliary Learning), a novel multimodal framework that predicts cognitive composite functions as auxiliary tasks while estimating the longitudinal risk of transition from Mild Cognitive Impairment (MCI) to Alzheimer Disease (AD). Methods: HiMAL utilized multimodal longitudinal visit data, including imaging features, cognitive assessment scores, and clinical variables from MCI patients in the Alzheimer Disease Neuroimaging Initiative (ADNI) dataset, to predict at each visit whether an MCI patient will progress to AD within the next 6 months. Performance of HiMAL was compared with state-of-the-art single-task and multi-task baselines using the area under the receiver operating characteristic curve (AUROC) and the area under the precision-recall curve (AUPRC). An ablation study was performed to assess the impact of each input modality on model performance. Additionally, longitudinal explanations regarding risk of disease progression were provided to interpret the predicted cognitive decline. Results: Out of 634 MCI patients (mean [IQR] age: 72.8 [67-78], 60% men), 209 (32%) progressed to AD. HiMAL showed better prediction performance compared to all single-modality single-task baselines (AUROC = 0.923 [0.915-0.937]; AUPRC = 0.623 [0.605-0.644]; all p<0.05). Ablation analysis highlighted that imaging features and cognition scores made the largest contributions to predicting disease progression. Discussion: Clinically informative model explanations anticipate cognitive decline 6 months in advance, aiding clinicians in future disease progression assessment. HiMAL relies on routinely collected EHR variables for proximal (6-month) prediction of AD onset, indicating its translational potential for point-of-care monitoring and management of high-risk patients.
[ "['Sayantan Kumar' 'Sean Yu' 'Andrew Michelson' 'Thomas Kannampallil'\n 'Philip Payne']" ]

id: 2404.03211
categories: null | doi: null | year: null | venue: null
link: http://arxiv.org/pdf/2404.03211v4
updated: 2024-06-09T13:11:36Z
published: 2024-04-04T05:35:59Z
title: Convergence Conditions of Online Regularized Statistical Learning in Reproducing Kernel Hilbert Space With Non-Stationary Data
abstract:
We study the convergence of recursive regularized learning algorithms in the reproducing kernel Hilbert space (RKHS) with dependent and non-stationary online data streams. Firstly, we study the mean square asymptotic stability of a class of random difference equations in RKHS, whose non-homogeneous terms are martingale difference sequences dependent on the homogeneous ones. Secondly, we introduce the concept of the random Tikhonov regularization path, and show that if the regularization path is slowly time-varying in some sense, then the output of the algorithm is consistent with the regularization path in mean square. Furthermore, if the data streams also satisfy the RKHS persistence of excitation condition, i.e., there exists a fixed-length time period such that the conditional expectation of the operators induced by the input data accumulated over every time period has a uniformly strictly positive compact lower bound in the sense of the operator order with respect to time, then the output of the algorithm is consistent with the unknown function in mean square. Finally, for the case with independent and non-identically distributed data streams, the algorithm achieves mean square consistency provided the marginal probability measures induced by the input data are slowly time-varying and the average measure over each fixed-length time period has a uniformly strictly positive lower bound.
[ "['Xiwei Zhang' 'Tao Li']" ]

id: 2404.03222
categories: null | doi: null | year: null | venue: null
link: http://arxiv.org/pdf/2404.03222v1
updated: 2024-04-04T06:10:57Z
published: 2024-04-04T06:10:57Z
title: Enabling Clean Energy Resilience with Machine Learning-Empowered Underground Hydrogen Storage
abstract:
To address the urgent challenge of climate change, there is a critical need to transition away from fossil fuels towards sustainable energy systems, with renewable energy sources playing a pivotal role. However, the inherent variability of renewable energy, without effective storage solutions, often leads to imbalances between energy supply and demand. Underground Hydrogen Storage (UHS) emerges as a promising long-term storage solution to bridge this gap, yet its widespread implementation is impeded by the high computational costs associated with high-fidelity UHS simulations. This paper introduces UHS from a data-driven perspective and outlines a roadmap for integrating machine learning into UHS, thereby facilitating the large-scale deployment of UHS.
[ "['Alvaro Carbonero' 'Shaowen Mao' 'Mohamed Mehana']" ]

id: 2404.03225
categories: null | doi: null | year: null | venue: null
link: http://arxiv.org/pdf/2404.03225v1
updated: 2024-04-04T06:20:22Z
published: 2024-04-04T06:20:22Z
title: FACTUAL: A Novel Framework for Contrastive Learning Based Robust SAR Image Classification
abstract:
Deep Learning (DL) Models for Synthetic Aperture Radar (SAR) Automatic Target Recognition (ATR), while delivering improved performance, have been shown to be quite vulnerable to adversarial attacks. Existing works improve robustness by training models on adversarial samples. However, by focusing mostly on attacks that manipulate images randomly, they neglect the real-world feasibility of such attacks. In this paper, we propose FACTUAL, a novel Contrastive Learning framework for Adversarial Training and robust SAR classification. FACTUAL consists of two components: (1) Differing from existing works, a novel perturbation scheme that incorporates realistic physical adversarial attacks (such as OTSA) to build a supervised adversarial pre-training network. This network utilizes class labels for clustering clean and perturbed images together into a more informative feature space. (2) A linear classifier cascaded after the encoder to use the computed representations to predict the target labels. By pre-training and fine-tuning our model on both clean and adversarial samples, we show that our model achieves high prediction accuracy in both cases. Our model achieves 99.7% accuracy on clean samples, and 89.6% on perturbed samples, both outperforming previous state-of-the-art methods.
[ "['Xu Wang' 'Tian Ye' 'Rajgopal Kannan' 'Viktor Prasanna']" ]

id: 2404.03227
categories: null | doi: null | year: null | venue: null
link: http://arxiv.org/pdf/2404.03227v1
updated: 2024-04-04T06:24:11Z
published: 2024-04-04T06:24:11Z
title: Decentralized Learning Strategies for Estimation Error Minimization with Graph Neural Networks
abstract:
We address the challenge of sampling and remote estimation for autoregressive Markovian processes in a multi-hop wireless network with statistically-identical agents. Agents cache the most recent samples from others and communicate over wireless collision channels governed by an underlying graph topology. Our goal is to minimize time-average estimation error and/or age of information with decentralized scalable sampling and transmission policies, considering both oblivious (where decision-making is independent of the physical processes) and non-oblivious policies (where decision-making depends on physical processes). We prove that in oblivious policies, minimizing estimation error is equivalent to minimizing the age of information. The complexity of the problem, especially the multi-dimensional action spaces and arbitrary network topologies, makes theoretical methods for finding optimal transmission policies intractable. We optimize the policies using a graphical multi-agent reinforcement learning framework, where each agent employs a permutation-equivariant graph neural network architecture. Theoretically, we prove that our proposed framework exhibits desirable transferability properties, allowing transmission policies trained on small- or moderate-size networks to be executed effectively on large-scale topologies. Numerical experiments demonstrate that (i) Our proposed framework outperforms state-of-the-art baselines; (ii) The trained policies are transferable to larger networks, and their performance gains increase with the number of agents; (iii) The training procedure withstands non-stationarity even if we utilize independent learning techniques; and, (iv) Recurrence is pivotal in both independent learning and centralized training and decentralized execution, and improves the resilience to non-stationarity in independent learning.
[ "['Xingran Chen' 'Navid NaderiAlizadeh' 'Alejandro Ribeiro'\n 'Shirin Saeedi Bidokhti']" ]

id: 2404.03239
categories: null | doi: null | year: null | venue: null
link: http://arxiv.org/pdf/2404.03239v1
updated: 2024-04-04T06:54:44Z
published: 2024-04-04T06:54:44Z
title: Exploring Emotions in Multi-componential Space using Interactive VR Games
abstract:
Emotion understanding is a complex process that involves multiple components. The ability to recognise emotions not only enables new context-awareness methods but also enhances the effectiveness of system interaction by perceiving and expressing emotions. Despite the attention given to discrete and dimensional models, neuroscientific evidence supports the view that emotions are complex and multi-faceted. One framework that resonates well with such findings is the Component Process Model (CPM), a theory that captures the complexity of emotions with five interconnected components: appraisal, expression, motivation, physiology and feeling. However, the relationship between the CPM and discrete emotions has not yet been fully explored. Therefore, to better understand the processes underlying emotions, we operationalised a data-driven approach using interactive Virtual Reality (VR) games and collected multimodal measures (self-reports, physiological and facial signals) from 39 participants. We used Machine Learning (ML) methods to identify the unique contributions of each component to emotion differentiation. Our results showed the role of different components in emotion differentiation, with the model including all components demonstrating the most significant contribution. Moreover, we found that at least five dimensions are needed to represent the variation of emotions in our dataset. These findings also have implications for using VR environments in emotion research and highlight the role of physiological signals in emotion recognition within such environments.
[ "['Rukshani Somarathna' 'Gelareh Mohammadi']" ]

id: 2404.03240
categories: null | doi: null | year: null | venue: null
link: http://arxiv.org/pdf/2404.03240v1
updated: 2024-04-04T06:56:32Z
published: 2024-04-04T06:56:32Z
title: Knowledge-Based Convolutional Neural Network for the Simulation and Prediction of Two-Phase Darcy Flows
abstract:
Physics-informed neural networks (PINNs) have gained significant prominence as a powerful tool in the field of scientific computing and simulations. Their ability to seamlessly integrate physical principles into deep learning architectures has revolutionized the approaches to solving complex problems in physics and engineering. However, a persistent challenge faced by mainstream PINNs lies in their handling of discontinuous input data, leading to inaccuracies in predictions. This study addresses these challenges by incorporating the discretized forms of the governing equations into the PINN framework. We propose to combine the power of neural networks with the dynamics imposed by the discretized differential equations. By discretizing the governing equations, the PINN learns to account for the discontinuities and accurately capture the underlying relationships between inputs and outputs, improving the accuracy compared to traditional interpolation techniques. Moreover, by leveraging the power of neural networks, the computational cost associated with numerical simulations is substantially reduced. We evaluate our model on a large-scale dataset for the prediction of pressure and saturation fields, demonstrating high accuracy compared to non-physically aware models.
[ "['Zakaria Elabid' 'Daniel Busby' 'Abdenour Hadid']" ]

id: 2404.03250
categories: null | doi: null | year: null | venue: null
link: http://arxiv.org/pdf/2404.03250v2
updated: 2024-05-27T11:37:12Z
published: 2024-04-04T07:09:43Z
title: Multi-task learning via robust regularized clustering with non-convex group penalties
abstract:
Multi-task learning (MTL) aims to improve estimation and prediction performance by sharing common information among related tasks. One natural assumption in MTL is that tasks are classified into clusters based on their characteristics. However, existing MTL methods based on this assumption often ignore outlier tasks that have large task-specific components or no relation to other tasks. To address this issue, we propose a novel MTL method called Multi-Task Learning via Robust Regularized Clustering (MTLRRC). MTLRRC incorporates robust regularization terms inspired by robust convex clustering, which is further extended to handle non-convex and group-sparse penalties. The extension allows MTLRRC to simultaneously perform robust task clustering and outlier task detection. The connection between the extended robust clustering and the multivariate M-estimator is also established. This provides an interpretation of the robustness of MTLRRC against outlier tasks. An efficient algorithm based on a modified alternating direction method of multipliers is developed for the estimation of the parameters. The effectiveness of MTLRRC is demonstrated through simulation studies and application to real data.
[ "['Akira Okazaki' 'Shuichi Kawano']" ]

id: 2404.03253
categories: null | doi: null | year: null | venue: null
link: http://arxiv.org/pdf/2404.03253v1
updated: 2024-04-04T07:19:31Z
published: 2024-04-04T07:19:31Z
title: A dataset of primary nasopharyngeal carcinoma MRI with multi-modalities segmentation
abstract:
Multi-modality magnetic resonance imaging data with various sequences facilitate the early diagnosis, tumor segmentation, and disease staging in the management of nasopharyngeal carcinoma (NPC). The lack of publicly available, comprehensive datasets limits advancements in diagnosis, treatment planning, and the development of machine learning algorithms for NPC. Addressing this critical need, we introduce the first comprehensive NPC MRI dataset, encompassing MR axial imaging of 277 primary NPC patients. This dataset includes T1-weighted, T2-weighted, and contrast-enhanced T1-weighted sequences, totaling 831 scans. In addition to the corresponding clinical data, manually annotated and labeled segmentations by experienced radiologists offer high-quality data resources from untreated primary NPC.
[ "['Yin Li' 'Qi Chen' 'Kai Wang' 'Meige Li' 'Liping Si' 'Yingwei Guo'\n 'Yu Xiong' 'Qixing Wang' 'Yang Qin' 'Ling Xu' 'Patrick van der Smagt'\n 'Jun Tang' 'Nutan Chen']" ]

id: 2404.03263
categories: null | doi: null | year: null | venue: null
link: http://arxiv.org/pdf/2404.03263v2
updated: 2024-05-03T06:08:30Z
published: 2024-04-04T07:38:11Z
title: On the Surprising Efficacy of Distillation as an Alternative to Pre-Training Small Models
abstract:
In this paper, we propose that small models may not need to absorb the cost of pre-training to reap its benefits. Instead, they can capitalize on the astonishing results achieved by modern, enormous models to a surprising degree. We observe that, when distilled on a task from a pre-trained teacher model, a small model can achieve or surpass the performance it would achieve if it were pre-trained and then fine-tuned on that task. To allow this phenomenon to be easily leveraged, we establish a connection reducing knowledge distillation to modern contrastive learning, opening two doors: (1) vastly different model architecture pairings can work for the distillation, and (2) most contrastive learning algorithms rooted in the theory of Noise Contrastive Estimation can be easily applied and used. We demonstrate this paradigm using pre-trained teacher models from open-source model hubs, Transformer- and convolution-based model combinations, and a novel distillation algorithm that massages the Alignment/Uniformity perspective of contrastive learning by Wang & Isola (2020) into a distillation objective. We choose this flavor of contrastive learning due to its low computational cost, an overarching theme of this work. We also observe that this phenomenon tends not to occur if the task is data-limited. However, this can be alleviated by leveraging yet another scale-inspired development: large, pre-trained generative models for dataset augmentation. Again, we use an open-source model, and our rudimentary prompts are sufficient to boost the small model's performance. Thus, we highlight a training method for small models that is up to 94% faster than the standard pre-training paradigm without sacrificing performance. For practitioners discouraged from fully utilizing modern foundation datasets for their small models due to the prohibitive scale, we believe our work keeps that door open.
[ "['Sean Farhat' 'Deming Chen']" ]

id: 2404.03272
categories: null | doi: null | year: null | venue: null
link: http://arxiv.org/pdf/2404.03272v1
updated: 2024-04-04T07:49:09Z
published: 2024-04-04T07:49:09Z
title: Cryptographic Hardness of Score Estimation
abstract:
We show that $L^2$-accurate score estimation, in the absence of strong assumptions on the data distribution, is computationally hard even when sample complexity is polynomial in the relevant problem parameters. Our reduction builds on the result of Chen et al. (ICLR 2023), who showed that the problem of generating samples from an unknown data distribution reduces to $L^2$-accurate score estimation. Our hard-to-estimate distributions are the "Gaussian pancakes" distributions, originally due to Diakonikolas et al. (FOCS 2017), which have been shown to be computationally indistinguishable from the standard Gaussian under widely believed hardness assumptions from lattice-based cryptography (Bruna et al., STOC 2021; Gupte et al., FOCS 2022).
[ "['Min Jae Song']" ]

id: 2404.03273
categories: null | doi: null | year: null | venue: null
link: http://arxiv.org/pdf/2404.03273v2
updated: 2024-04-25T08:57:48Z
published: 2024-04-04T07:55:46Z
title: Gaussian-Smoothed Sliced Probability Divergences
abstract:
The Gaussian smoothed sliced Wasserstein distance has been recently introduced for comparing probability distributions while preserving privacy on the data. It has been shown that it provides performance similar to its non-smoothed (non-private) counterpart. However, the computational and statistical properties of such a metric have not yet been well-established. This work investigates the theoretical properties of this distance as well as those of generalized versions denoted as Gaussian-smoothed sliced divergences. We first show that smoothing and slicing preserve the metric property and the weak topology. To study the sample complexity of such divergences, we then introduce $\hat{\hat{\mu}}_{n}$, the double empirical distribution for the smoothed-projected $\mu$. The distribution $\hat{\hat{\mu}}_{n}$ is the result of a double sampling process: one sampling according to the original distribution $\mu$, and a second according to the convolution of the projection of $\mu$ on the unit sphere and the Gaussian smoothing. We particularly focus on the Gaussian smoothed sliced Wasserstein distance and prove that it converges with a rate $O(n^{-1/2})$. We also derive other properties, including continuity, of different divergences with respect to the smoothing parameter. We support our theoretical findings with empirical studies in the context of privacy-preserving domain adaptation.
[ "['Mokhtar Z. Alaya' 'Alain Rakotomamonjy' 'Maxime Berar' 'Gilles Gasso']" ]

id: 2404.03290
categories: null | doi: null | year: null | venue: null
link: http://arxiv.org/pdf/2404.03290v1
updated: 2024-04-04T08:24:57Z
published: 2024-04-04T08:24:57Z
title: Learning-to-Optimize with PAC-Bayesian Guarantees: Theoretical Considerations and Practical Implementation
abstract:
We use PAC-Bayesian theory for the setting of learning-to-optimize. To the best of our knowledge, we present the first framework to learn optimization algorithms with provable generalization guarantees (PAC-Bayesian bounds) and an explicit trade-off between convergence guarantees and convergence speed, which contrasts with the typical worst-case analysis. Our learned optimization algorithms provably outperform related ones derived from a (deterministic) worst-case analysis. The results rely on PAC-Bayesian bounds for general, possibly unbounded loss functions based on exponential families. Then, we reformulate the learning procedure into a one-dimensional minimization problem and study the possibility of finding a global minimum. Furthermore, we provide a concrete algorithmic realization of the framework and new methodologies for learning-to-optimize, and we conduct four practically relevant experiments to support our theory. With this, we showcase that the provided learning framework yields optimization algorithms that provably outperform the state-of-the-art by orders of magnitude.
[ "['Michael Sucker' 'Jalal Fadili' 'Peter Ochs']" ]

id: 2404.03299
categories: null | doi: null | year: null | venue: null
link: http://arxiv.org/pdf/2404.03299v1
updated: 2024-04-04T08:48:30Z
published: 2024-04-04T08:48:30Z
title: SiloFuse: Cross-silo Synthetic Data Generation with Latent Tabular Diffusion Models
abstract:
Synthetic tabular data is crucial for sharing and augmenting data across silos, especially for enterprises with proprietary data. However, existing synthesizers are designed for centrally stored data. Hence, they struggle with real-world scenarios where features are distributed across multiple silos, necessitating on-premise data storage. We introduce SiloFuse, a novel generative framework for high-quality synthesis from cross-silo tabular data. To ensure privacy, SiloFuse utilizes a distributed latent tabular diffusion architecture. Through autoencoders, latent representations are learned for each client's features, masking their actual values. We employ stacked distributed training to improve communication efficiency, reducing the number of rounds to a single step. Under SiloFuse, we prove the impossibility of data reconstruction for vertically partitioned synthesis and quantify privacy risks through three attacks using our benchmark framework. Experimental results on nine datasets showcase SiloFuse's competence against centralized diffusion-based synthesizers. Notably, SiloFuse achieves resemblance and utility scores 43.8 and 29.8 percentage points higher, respectively, than GANs. Experiments on communication show stacked training's fixed cost compared to the growing costs of end-to-end training as the number of training iterations increases. Additionally, SiloFuse proves robust to feature permutations and varying numbers of clients.
[ "['Aditya Shankar' 'Hans Brouwer' 'Rihan Hai' 'Lydia Chen']" ]

id: 2404.03309
categories: null | doi: null | year: null | venue: null
link: http://arxiv.org/pdf/2404.03309v1
updated: 2024-04-04T09:08:04Z
published: 2024-04-04T09:08:04Z
title: Optimistic Online Non-stochastic Control via FTRL
abstract:
This paper brings the concept of "optimism" to the new and promising framework of online Non-stochastic Control (NSC). Namely, we study how NSC can benefit from a prediction oracle of unknown quality responsible for forecasting future costs. The posed problem is first reduced to an optimistic learning with delayed feedback problem, which is handled through the Optimistic Follow the Regularized Leader (OFTRL) algorithmic family. This reduction enables the design of OptFTRL-C, the first Disturbance Action Controller (DAC) with optimistic policy regret bounds. These new bounds are commensurate with the oracle's accuracy, ranging from $\mathcal{O}(1)$ for perfect predictions to the order-optimal $\mathcal{O}(\sqrt{T})$ even when all predictions fail. By addressing the challenge of incorporating untrusted predictions into control systems, our work contributes to the advancement of the NSC framework and paves the way towards effective and robust learning-based controllers.
[ "['Naram Mhaisen' 'George Iosifidis']" ]

id: 2404.03310
categories: null | doi: null | year: null | venue: null
link: http://arxiv.org/pdf/2404.03310v1
updated: 2024-04-04T09:12:13Z
published: 2024-04-04T09:12:13Z
title: Site-specific Deterministic Temperature and Humidity Forecasts with Explainable and Reliable Machine Learning
abstract:
Site-specific weather forecasts are essential to accurate prediction of power demand and are consequently of great interest to energy operators. However, weather forecasts from current numerical weather prediction (NWP) models lack the fine-scale detail to capture all important characteristics of localised real-world sites. Instead, they provide weather information representing a rectangular gridbox (usually kilometres in size). Even after post-processing and bias correction, area-averaged information is usually not optimal for specific sites. Prior work on site-optimised forecasts has focused on linear methods, weighted consensus averaging, time-series methods, and others. Recent developments in machine learning (ML) have prompted increasing interest in applying ML as a novel approach to this problem. In this study, we investigate the feasibility of optimising forecasts at sites by adopting the popular gradient-boosted decision tree model, supported by the Python version of the XGBoost package. Regression trees were trained with historical NWP output and site observations as training data, aimed at predicting temperature and dew point at multiple site locations across Australia. We developed a working ML framework, named 'Multi-SiteBoost', and initial test results show a significant improvement compared with gridded values from bias-corrected NWP models. The improvement from XGBoost is found to be comparable with non-ML methods reported in the literature. With the insights provided by SHapley Additive exPlanations (SHAP), this study also tests various approaches to understand the ML predictions and increase the reliability of the forecasts generated by ML.
[ "['MengMeng Han' 'Tennessee Leeuwenburg' 'Brad Murphy']" ]

id: 2404.03320
categories: null | doi: null | year: null | venue: null
link: http://arxiv.org/abs/2404.03320v1
updated: 2024-04-04T09:35:48Z
published: 2024-04-04T09:35:48Z
title: Exploring Lightweight Federated Learning for Distributed Load Forecasting
abstract:
Federated Learning (FL) is a distributed learning scheme that enables deep learning to be applied to sensitive data streams and applications in a privacy-preserving manner. This paper focuses on the use of FL for analyzing smart energy meter data with the aim of achieving accuracy comparable to state-of-the-art methods for load forecasting while ensuring the privacy of individual meter data. We show that with a lightweight fully connected deep neural network, we are able to achieve forecasting accuracy comparable to existing schemes, both at each meter source and at the aggregator, by utilising the FL framework. The use of lightweight models further reduces the energy and resource consumption caused by complex deep-learning models, making this approach ideally suited for deployment across resource-constrained smart meter systems. With our proposed lightweight model, we are able to achieve an overall average load forecasting RMSE of 0.17, with the model having a negligible energy overhead of 50 mWh when performing training and inference on an Arduino Uno platform.
[ "['Abhishek Duttagupta' 'Jin Zhao' 'Shanker Shreejith']" ]

id: 2404.03325
categories: null | doi: null | year: null | venue: null
link: http://arxiv.org/pdf/2404.03325v1
updated: 2024-04-04T09:52:22Z
published: 2024-04-04T09:52:22Z
title: Embodied Neuromorphic Artificial Intelligence for Robotics: Perspectives, Challenges, and Research Development Stack
abstract:
Robotic technologies have been indispensable for improving human productivity, as they help humans complete diverse, complex, and intensive tasks in a fast yet accurate and efficient way. Therefore, robotic technologies have been deployed in a wide range of applications, ranging from personal to industrial use-cases. However, current robotic technologies and their computing paradigm still lack embodied intelligence to efficiently interact with operational environments, respond with correct/expected actions, and adapt to changes in the environments. Toward this, recent advances in neuromorphic computing with Spiking Neural Networks (SNN) have demonstrated the potential to enable embodied intelligence for robotics through a bio-plausible computing paradigm that mimics how the biological brain works, known as "neuromorphic artificial intelligence (AI)". However, the field of neuromorphic AI-based robotics is still at an early stage, and therefore its development and deployment for solving real-world problems expose new challenges in different design aspects, such as accuracy, adaptability, efficiency, reliability, and security. To address these challenges, this paper discusses how we can enable embodied neuromorphic AI for robotic systems through our perspectives: (P1) Embodied intelligence based on effective learning rules, training mechanisms, and adaptability; (P2) Cross-layer optimizations for energy-efficient neuromorphic computing; (P3) Representative and fair benchmarks; (P4) Low-cost reliability and safety enhancements; (P5) Security and privacy for neuromorphic computing; and (P6) A synergistic development for energy-efficient and robust neuromorphic-based robotics. Furthermore, this paper identifies research challenges and opportunities, and elaborates our vision for future research development toward embodied neuromorphic AI for robotics.
[ "['Rachmad Vidya Wicaksana Putra' 'Alberto Marchisio' 'Fakhreddine Zayer'\n 'Jorge Dias' 'Muhammad Shafique']" ]

id: 2404.03329
categories: null | doi: null | year: null | venue: null
link: http://arxiv.org/pdf/2404.03329v2
updated: 2024-04-24T12:39:30Z
published: 2024-04-04T09:55:11Z
title: DeepFunction: Deep Metric Learning-based Imbalanced Classification for Diagnosing Threaded Pipe Connection Defects using Functional Data
abstract:
In modern manufacturing, most product lines are conforming. Few products are nonconforming, but with different defect types. The identification of defect types can help further root-cause diagnosis of production lines. With the development of sensing technology, signals of process variables can be collected in high resolution and can be regarded as multichannel functional data. They contain abundant information to characterize the process and help identify the defect types. Motivated by a real example from the pipe tightening process, we focus on defect classification where each sample consists of multichannel functional data. However, the available samples for each defect type are limited and imbalanced. Moreover, the functions are incomplete, since the pre-tightening process before the pipe tightening process is unobserved. Classifying defect samples based on imbalanced, multichannel, and incomplete functional data is important but challenging. Thus, we propose an innovative classification framework based on deep metric learning using functional data (DeepFunction). The framework leverages the power of deep metric learning to train on imbalanced datasets. A neural network specially crafted for processing functional data is also proposed to handle multichannel and incomplete functional data. The results from a real-world case study demonstrate the superior accuracy of our framework when compared to existing benchmarks.
[ "['Yukun Xie' 'Juan Du' 'Chen Zhang']" ]

id: 2404.03331
categories: null | doi: null | year: null | venue: null
link: http://arxiv.org/pdf/2404.03331v1
updated: 2024-04-04T09:57:29Z
published: 2024-04-04T09:57:29Z
title: LancBiO: dynamic Lanczos-aided bilevel optimization via Krylov subspace
abstract:
Bilevel optimization, with broad applications in machine learning, has an intricate hierarchical structure. Gradient-based methods have emerged as a common approach to large-scale bilevel problems. However, the computation of the hyper-gradient, which involves a Hessian inverse vector product, limits the efficiency and is regarded as a bottleneck. To circumvent the inverse, we construct a sequence of low-dimensional approximate Krylov subspaces with the aid of the Lanczos process. As a result, the constructed subspace is able to dynamically and incrementally approximate the Hessian inverse vector product with less effort, and thus leads to a favorable estimate of the hyper-gradient. Moreover, we propose a provable subspace-based framework for bilevel problems where one central step is to solve a small-size tridiagonal linear system. To the best of our knowledge, this is the first time that subspace techniques have been incorporated into bilevel optimization. This successful trial not only enjoys an $\mathcal{O}(\epsilon^{-1})$ convergence rate but also demonstrates efficiency in a synthetic problem and two deep learning tasks.
[ "['Bin Gao' 'Yan Yang' 'Ya-xiang Yuan']" ]

id: 2404.03340
categories: null | doi: null | year: null | venue: null
link: http://arxiv.org/pdf/2404.03340v1
updated: 2024-04-04T10:10:38Z
published: 2024-04-04T10:10:38Z
title: Meta Invariance Defense Towards Generalizable Robustness to Unknown Adversarial Attacks
abstract:
Despite providing high-performance solutions for computer vision tasks, the deep neural network (DNN) model has been proven to be extremely vulnerable to adversarial attacks. Current defenses mainly focus on known attacks, but adversarial robustness to unknown attacks is seriously overlooked. Besides, the commonly used adaptive learning and fine-tuning techniques are unsuitable for adversarial defense, since it is essentially a zero-shot problem when deployed. Thus, to tackle this challenge, we propose an attack-agnostic defense method named Meta Invariance Defense (MID). Specifically, various combinations of adversarial attacks are randomly sampled from a manually constructed Attacker Pool to constitute different defense tasks against unknown attacks, in which a student encoder is supervised by multi-consistency distillation to learn attack-invariant features via a meta principle. The proposed MID has two merits: 1) Full distillation from pixel-, feature- and prediction-level between benign and adversarial samples facilitates the discovery of attack-invariance. 2) The model simultaneously achieves robustness to imperceptible adversarial perturbations in high-level image classification and attack-suppression in low-level robust image regeneration. Theoretical and empirical studies on numerous benchmarks such as ImageNet verify the generalizable robustness and superiority of MID under various attacks.
[ "['Lei Zhang' 'Yuhang Zhou' 'Yi Yang' 'Xinbo Gao']" ]

id: 2404.03348
categories: null | doi: null | year: null | venue: null
link: http://arxiv.org/pdf/2404.03348v1
updated: 2024-04-04T10:28:55Z
published: 2024-04-04T10:28:55Z
title: Knowledge Distillation-Based Model Extraction Attack using Private Counterfactual Explanations
abstract:
In recent years, there has been a notable increase in the deployment of machine learning (ML) models as services (MLaaS) across diverse production software applications. In parallel, explainable AI (XAI) continues to evolve, addressing the necessity for transparency and trustworthiness in ML models. XAI techniques aim to enhance the transparency of ML models by providing insights, in terms of the model's explanations, into their decision-making process. Simultaneously, some MLaaS platforms now offer explanations alongside the ML prediction outputs. This setup has elevated concerns regarding vulnerabilities in MLaaS, particularly in relation to privacy leakage attacks such as model extraction attacks (MEA). This is due to the fact that explanations can unveil insights about the inner workings of the model which could be exploited by malicious users. In this work, we focus on investigating how model explanations, particularly generative adversarial network (GAN)-based counterfactual explanations (CFs), can be exploited for performing MEA within the MLaaS platform. We also delve into assessing the effectiveness of incorporating differential privacy (DP) as a mitigation strategy. To this end, we first propose a novel MEA methodology based on Knowledge Distillation (KD) to enhance the efficiency of extracting a substitute model of a target model exploiting CFs. Then, we devise an approach for training CF generators incorporating DP to generate private CFs. We conduct thorough experimental evaluations on real-world datasets and demonstrate that our proposed KD-based MEA can yield a high-fidelity substitute model with fewer queries than baseline approaches. Furthermore, our findings reveal that the inclusion of a privacy layer impacts the performance of the explainer and the quality of CFs, and results in a reduction in the MEA performance.
[ "['Fatima Ezzeddine' 'Omran Ayoub' 'Silvia Giordano']" ]

id: 2404.03359
categories: null | doi: null | year: null | venue: null
link: http://arxiv.org/pdf/2404.03359v1
updated: 2024-04-04T10:56:30Z
published: 2024-04-04T10:56:30Z
title: REACT: Revealing Evolutionary Action Consequence Trajectories for Interpretable Reinforcement Learning
abstract:
To enhance the interpretability of Reinforcement Learning (RL), we propose Revealing Evolutionary Action Consequence Trajectories (REACT). In contrast to the prevalent practice of validating RL models based on their optimal behavior learned during training, we posit that considering a range of edge-case trajectories provides a more comprehensive understanding of their inherent behavior. To induce such scenarios, we introduce a disturbance to the initial state, optimizing it through an evolutionary algorithm to generate a diverse population of demonstrations. To evaluate the fitness of trajectories, REACT incorporates a joint fitness function that encourages both local and global diversity in the encountered states and chosen actions. Through assessments with policies trained for varying durations in discrete and continuous environments, we demonstrate the descriptive power of REACT. Our results highlight its effectiveness in revealing nuanced aspects of RL models' behavior beyond optimal performance, thereby contributing to improved interpretability.
[ "['Philipp Altmann' 'Céline Davignon' 'Maximilian Zorn' 'Fabian Ritz'\n 'Claudia Linnhoff-Popien' 'Thomas Gabor']" ]

id: 2404.03368
categories: null | doi: null | year: null | venue: null
link: http://arxiv.org/pdf/2404.03368v1
updated: 2024-04-04T11:09:49Z
published: 2024-04-04T11:09:49Z
title: Graph Neural Networks for Electric and Hydraulic Data Fusion to Enhance Short-term Forecasting of Pumped-storage Hydroelectricity
abstract:
Pumped-storage hydropower plants (PSH) actively participate in grid power-frequency control and therefore often operate under dynamic conditions, which results in rapidly varying system states. Predicting these dynamically changing states is essential for comprehending the underlying sensor and machine conditions. This understanding aids in detecting anomalies and faults, ensuring the reliable operation of the connected power grid, and in identifying faulty and miscalibrated sensors. PSH are complex, highly interconnected systems encompassing electrical and hydraulic subsystems, each characterized by their respective underlying networks that can individually be represented as graphs. To take advantage of this relational inductive bias, graph neural networks (GNNs) have been separately applied to state forecasting tasks in the individual subsystems, but without considering their interdependencies. In PSH, however, these subsystems depend on the same control input, making their operations highly interdependent and interconnected. Consequently, hydraulic and electrical sensor data should be fused across PSH subsystems to improve state forecasting accuracy. This approach has not been explored in the GNN literature yet because many available PSH graphs are limited to their respective subsystem boundaries, which makes the method unsuitable to be applied directly. In this work, we introduce the application of spectral-temporal graph neural networks, which leverage self-attention mechanisms to concurrently capture and learn meaningful subsystem interdependencies and the dynamic patterns observed in electric and hydraulic sensors. Our method effectively fuses data from the PSH's subsystems by operating on a unified, system-wide graph, learned directly from the data. This approach leads to demonstrably improved state forecasting performance and enhanced generalizability.
[ "['Raffael Theiler' 'Olga Fink']" ]

id: 2404.03372
categories: null | doi: null | year: null | venue: null
link: http://arxiv.org/pdf/2404.03372v2
updated: 2024-04-11T02:59:07Z
published: 2024-04-04T11:16:16Z
title: Elementary Analysis of Policy Gradient Methods
abstract:
Projected policy gradient under the simplex parameterization, and policy gradient and natural policy gradient under the softmax parameterization, are fundamental algorithms in reinforcement learning. There has been a flurry of recent activity in studying these algorithms from the theoretical aspect. Despite this, their convergence behavior is still not fully understood, even given access to exact policy evaluations. In this paper, we focus on the discounted MDP setting and conduct a systematic study of the aforementioned policy optimization methods. Several novel results are presented, including 1) global linear convergence of projected policy gradient for any constant step size, 2) sublinear convergence of softmax policy gradient for any constant step size, 3) global linear convergence of softmax natural policy gradient for any constant step size, 4) global linear convergence of entropy-regularized softmax policy gradient for a wider range of constant step sizes than existing results, 5) a tight local linear convergence rate of entropy-regularized natural policy gradient, and 6) a new and concise local quadratic convergence rate of soft policy iteration without the assumption on the stationary distribution under the optimal policy. New and elementary analysis techniques have been developed to establish these results.
[ "['Jiacai Liu' 'Wenye Li' 'Ke Wei']" ]

id: 2404.03380
categories: null | doi: null | year: null | venue: null
link: http://arxiv.org/pdf/2404.03380v1
updated: 2024-04-04T11:26:51Z
published: 2024-04-04T11:26:51Z
title: On the Theoretical Expressive Power and the Design Space of Higher-Order Graph Transformers
abstract:
Graph transformers have recently received significant attention in graph learning, partly due to their ability to capture more global interactions via self-attention. Nevertheless, while higher-order graph neural networks have been reasonably well studied, the exploration of extending graph transformers to higher-order variants is just starting. Both theoretical understanding and empirical results are limited. In this paper, we provide a systematic study of the theoretical expressive power of order-$k$ graph transformers and sparse variants. We first show that an order-$k$ graph transformer without additional structural information is less expressive than the $k$-Weisfeiler-Lehman ($k$-WL) test despite its high computational cost. We then explore strategies to both sparsify and enhance the higher-order graph transformers, aiming to improve both their efficiency and expressiveness. Indeed, sparsification based on neighborhood information can enhance the expressive power, as it provides additional information about input graph structures. In particular, we show that a natural neighborhood-based sparse order-$k$ transformer model is not only computationally efficient, but also expressive -- as expressive as the $k$-WL test. We further study several other sparse graph attention models that are computationally efficient and provide their expressiveness analysis. Finally, we provide experimental results to show the effectiveness of the different sparsification strategies.
[ "['Cai Zhou' 'Rose Yu' 'Yusu Wang']" ]

id: 2404.03382
categories: null | doi: null | year: null | venue: null
link: http://arxiv.org/pdf/2404.03382v1
updated: 2024-04-04T11:29:05Z
published: 2024-04-04T11:29:05Z
title: DIDA: Denoised Imitation Learning based on Domain Adaptation
abstract:
Imitating skills from low-quality datasets, such as sub-optimal demonstrations and observations with distractors, is common in real-world applications. In this work, we focus on the problem of Learning from Noisy Demonstrations (LND), where the imitator is required to learn from data with noise that often occurs during the processes of data collection or transmission. Previous imitation learning (IL) methods improve the robustness of learned policies by injecting an adversarially learned Gaussian noise into pure expert data or utilizing additional ranking information, but they may fail in the LND setting. To alleviate the above problems, we propose Denoised Imitation learning based on Domain Adaptation (DIDA), which designs two discriminators to distinguish the noise level and expertise level of data, facilitating a feature encoder to learn task-related but domain-agnostic representations. Experimental results on MuJoCo demonstrate that DIDA can successfully handle challenging imitation tasks from demonstrations with various types of noise, outperforming most baseline methods.
[ "['Kaichen Huang' 'Hai-Hang Sun' 'Shenghua Wan' 'Minghao Shao' 'Shuai Feng'\n 'Le Gan' 'De-Chuan Zhan']" ]

id: 2404.03386
categories: null | doi: null | year: null | venue: null
link: http://arxiv.org/pdf/2404.03386v1
updated: 2024-04-04T11:37:55Z
published: 2024-04-04T11:37:55Z
title: SENSOR: Imitate Third-Person Expert's Behaviors via Active Sensoring
abstract:
In many real-world visual Imitation Learning (IL) scenarios, there is a misalignment between the agent's and the expert's perspectives, which might lead to the failure of imitation. Previous methods have generally solved this problem by domain alignment, which incurs extra computation and storage costs, and these methods fail to handle the \textit{hard cases} where the viewpoint gap is too large. To alleviate the above problems, we introduce active sensoring in the visual IL setting and propose a model-based SENSory imitatOR (SENSOR) to automatically change the agent's perspective to match the expert's. SENSOR jointly learns a world model to capture the dynamics of latent states, a sensor policy to control the camera, and a motor policy to control the agent. Experiments on visual locomotion tasks show that SENSOR can efficiently simulate the expert's perspective and strategy, and outperforms most baseline methods.
[ "['Kaichen Huang' 'Minghao Shao' 'Shenghua Wan' 'Hai-Hang Sun' 'Shuai Feng'\n 'Le Gan' 'De-Chuan Zhan']" ]

id: 2404.03411
categories: null | doi: null | year: null | venue: null
link: http://arxiv.org/pdf/2404.03411v1
updated: 2024-04-04T12:38:14Z
published: 2024-04-04T12:38:14Z
title: Red Teaming GPT-4V: Are GPT-4V Safe Against Uni/Multi-Modal Jailbreak Attacks?
abstract:
Various jailbreak attacks have been proposed to red-team Large Language Models (LLMs) and have revealed the vulnerable safeguards of LLMs. Besides, some methods are not limited to the textual modality and extend the jailbreak attack to Multimodal Large Language Models (MLLMs) by perturbing the visual input. However, the absence of a universal evaluation benchmark complicates performance reproduction and fair comparison. Moreover, there is a lack of comprehensive evaluation of closed-source state-of-the-art (SOTA) models, especially MLLMs, such as GPT-4V. To address these issues, this work first builds a comprehensive jailbreak evaluation dataset with 1445 harmful questions covering 11 different safety policies. Based on this dataset, extensive red-teaming experiments are conducted on 11 different LLMs and MLLMs, including both SOTA proprietary models and open-source models. We then conduct a deep analysis of the evaluated results and find that (1) GPT-4 and GPT-4V demonstrate better robustness against jailbreak attacks compared to open-source LLMs and MLLMs. (2) Llama2 and Qwen-VL-Chat are more robust compared to other open-source models. (3) The transferability of visual jailbreak methods is relatively limited compared to textual jailbreak methods. The dataset and code can be found here https://anonymous.4open.science/r/red_teaming_gpt4-C1CE/README.md .
[ "['Shuo Chen' 'Zhen Han' 'Bailan He' 'Zifeng Ding' 'Wenqian Yu'\n 'Philip Torr' 'Volker Tresp' 'Jindong Gu']" ]

id: 2404.03419
categories: null | doi: null | year: null | venue: null
link: http://arxiv.org/pdf/2404.03419v2
updated: 2024-04-13T14:57:37Z
published: 2024-04-04T12:54:13Z
title: Integrating Hyperparameter Search into Model-Free AutoML with Context-Free Grammars
abstract:
Automated Machine Learning (AutoML) has become increasingly popular in recent years due to its ability to reduce the amount of time and expertise required to design and develop machine learning systems. This is very important for the practice of machine learning, as it allows building strong baselines quickly, improving the efficiency of data scientists, and reducing the time to production. However, despite the advantages of AutoML, it faces several challenges, such as defining the solution space and exploring it efficiently. Recently, some approaches have been shown to be able to do so using tree-based search algorithms and context-free grammars. In particular, GramML presents a model-free reinforcement learning approach that leverages pipeline configuration grammars and operates using Monte Carlo tree search. However, one of the limitations of GramML is that it uses default hyperparameters, limiting the search problem to finding optimal pipeline structures for the available data preprocessors and models. In this work, we propose an extension to GramML that supports larger search spaces including hyperparameter search. We evaluated the approach using an OpenML benchmark and found significant improvements compared to other state-of-the-art techniques.
[ "['Hernán Ceferino Vázquez' 'Jorge Sanchez' 'Rafael Carrascosa']" ]

id: 2404.03426
categories: null | doi: null | year: null | venue: null
link: http://arxiv.org/pdf/2404.03426v1
updated: 2024-04-04T13:09:26Z
published: 2024-04-04T13:09:26Z
title: Accurate estimation of feature importance faithfulness for tree models
abstract:
In this paper, we consider a perturbation-based metric of predictive faithfulness of feature rankings (or attributions) that we call PGI squared. When applied to decision tree-based regression models, the metric can be computed accurately and efficiently for arbitrary independent feature perturbation distributions. In particular, the computation does not involve Monte Carlo sampling, which has been typically used for computing similar metrics and which is inherently prone to inaccuracies. Moreover, we propose a method of ranking features by their importance for the tree model's predictions based on PGI squared. Our experiments indicate that in some respects, the method may identify the globally important features better than the state-of-the-art SHAP explainer.
[ "['Mateusz Gajewski' 'Adam Karczmarz' 'Mateusz Rapicki' 'Piotr Sankowski']" ]

id: 2404.03434
categories: null | doi: null | year: null | venue: null
link: http://arxiv.org/pdf/2404.03434v1
updated: 2024-04-04T13:27:22Z
published: 2024-04-04T13:27:22Z
title: Learning From Simplicial Data Based on Random Walks and 1D Convolutions
abstract:
Triggered by limitations of graph-based deep learning methods in terms of computational expressivity and model flexibility, recent years have seen a surge of interest in computational models that operate on higher-order topological domains such as hypergraphs and simplicial complexes. While the increased expressivity of these models can indeed lead to a better classification performance and a more faithful representation of the underlying system, the computational cost of these higher-order models can increase dramatically. To this end, we here explore a simplicial complex neural network learning architecture based on random walks and fast 1D convolutions (SCRaWl), in which we can adjust the increase in computational cost by varying the length and number of random walks considered while accounting for higher-order relationships. Importantly, due to the random walk-based design, the expressivity of the proposed architecture is provably incomparable to that of existing message-passing simplicial neural networks. We empirically evaluate SCRaWl on real-world datasets and show that it outperforms other simplicial neural networks.
[ "['Florian Frantzen' 'Michael T. Schaub']" ]

id: 2404.03441
categories: null | doi: null | year: null | venue: null
link: http://arxiv.org/pdf/2404.03441v2
updated: 2024-04-16T21:32:47Z
published: 2024-04-04T13:39:06Z
title: Benchmarking ChatGPT on Algorithmic Reasoning
abstract:
We evaluate ChatGPT's ability to solve algorithm problems from the CLRS benchmark suite, which is designed for GNNs. The benchmark requires the use of a specified classical algorithm to solve a given problem. We find that ChatGPT outperforms specialist GNN models, using Python to successfully solve these problems. This raises new points in the discussion about learning algorithms with neural networks and how we think about what out-of-distribution testing looks like with web-scale training data.
[ "['Sean McLeish' 'Avi Schwarzschild' 'Tom Goldstein']" ]

id: 2404.03446
categories: null | doi: null | year: null | venue: null
link: http://arxiv.org/pdf/2404.03446v1
updated: 2024-04-04T13:46:52Z
published: 2024-04-04T13:46:52Z
title: SP$^2$OT: Semantic-Regularized Progressive Partial Optimal Transport for Imbalanced Clustering
abstract:
Deep clustering, which learns representations and semantic clustering without label information, poses a great challenge for deep learning-based approaches. Despite significant progress in recent years, most existing methods focus on uniformly distributed datasets, significantly limiting their practical applicability. In this paper, we propose a more practical problem setting named deep imbalanced clustering, where the underlying classes exhibit an imbalanced distribution. To address this challenge, we introduce a novel optimal transport-based pseudo-label learning framework. Our framework formulates pseudo-label generation as a Semantic-regularized Progressive Partial Optimal Transport (SP$^2$OT) problem, which progressively transports each sample to imbalanced clusters under several prior distribution and semantic relation constraints, thus generating high-quality and imbalance-aware pseudo-labels. To solve SP$^2$OT, we develop a Majorization-Minimization-based optimization algorithm. To be more precise, we employ the strategy of majorization to reformulate the SP$^2$OT problem into a Progressive Partial Optimal Transport problem, which can be transformed into an unbalanced optimal transport problem with augmented constraints and solved efficiently by a fast matrix scaling algorithm. Experiments on various datasets, including a human-curated long-tailed CIFAR100, challenging ImageNet-R, and large-scale subsets of fine-grained iNaturalist2018 datasets, demonstrate the superiority of our method.
[ "['Chuyu Zhang' 'Hui Ren' 'Xuming He']" ]

id: 2404.03453
categories: null | doi: null | year: null | venue: null
link: http://arxiv.org/pdf/2404.03453v1
updated: 2024-04-04T13:57:44Z
published: 2024-04-04T13:57:44Z
title: Conditioning of Banach Space Valued Gaussian Random Variables: An Approximation Approach Based on Martingales
abstract:
In this paper we investigate the conditional distributions of two Banach space valued, jointly Gaussian random variables. These conditional distributions are again Gaussian and their means and covariances are determined by a general approximation scheme based upon a martingale idea. We then apply our general results to the case of Gaussian processes with continuous paths conditioned to partial observations of their paths.
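For orientation, the finite-dimensional special case of the conditioning studied here is the classical formula: if $(X_1, X_2)$ are jointly Gaussian, then

```latex
X_1 \mid X_2 = x_2 \;\sim\; \mathcal{N}\left(\mu_{1|2},\, \Sigma_{1|2}\right),
\qquad
\mu_{1|2} = \mu_1 + \Sigma_{12}\Sigma_{22}^{-1}(x_2 - \mu_2),
\qquad
\Sigma_{1|2} = \Sigma_{11} - \Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21}.
```

The difficulty the paper addresses is making sense of the analogue of $\Sigma_{22}^{-1}$ in infinite-dimensional Banach spaces, where such an inverse does not exist in general, via a martingale-based approximation scheme.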
[ "['Ingo Steinwart']" ]
null
null
2404.03471
null
null
http://arxiv.org/pdf/2404.03471v2
2024-04-07T21:55:38Z
2024-04-04T14:24:06Z
The Impact of Unstated Norms in Bias Analysis of Language Models
Large language models (LLMs), trained on vast datasets, can carry biases that manifest in various forms, from overt discrimination to implicit stereotypes. One facet of bias is performance disparities in LLMs, often harming underprivileged groups, such as racial minorities. A common approach to quantifying bias is to use template-based bias probes, which explicitly state group membership (e.g. White) and evaluate whether the outcome of a task, sentiment analysis for instance, is invariant to a change of group membership (e.g. changing White to Black). This approach is widely used in bias quantification. However, in this work, we find evidence of an unexpectedly overlooked consequence of using template-based probes for LLM bias quantification: text examples associated with White ethnicities appear to be classified as exhibiting negative sentiment at elevated rates. We hypothesize that this scenario arises artificially through a mismatch between the pre-training text of LLMs and the templates used to measure bias, driven by reporting bias: unstated norms that imply group membership without explicit statement. Our finding highlights the potentially misleading impact of varying group membership through explicit mention in bias quantification.
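A minimal illustration of the kind of template-based probe being critiqued (the templates, groups, and default model choice here are invented examples, not the paper's materials):

```python
from transformers import pipeline  # assumes a default sentiment model is available

templates = ["The {} person walked into the store.",
             "A {} family moved in next door."]
groups = ["White", "Black", "Asian"]

clf = pipeline("sentiment-analysis")

# Bias probing checks whether the predicted sentiment is invariant to the
# explicitly stated group term -- the practice whose pitfalls the paper studies.
for template in templates:
    for group in groups:
        text = template.format(group)
        result = clf(text)[0]
        print(f"{text!r:55} -> {result['label']} ({result['score']:.3f})")
```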
[ "['Farnaz Kohankhaki' 'Jacob-Junqi Tian' 'David Emerson'\n 'Laleh Seyyed-Kalantari' 'Faiza Khan Khattak']" ]
null
null
2404.03473
null
null
http://arxiv.org/pdf/2404.03473v1
2024-04-04T14:26:47Z
2024-04-04T14:26:47Z
Generalization Bounds for Message Passing Networks on Mixture of Graphons
We study the generalization capabilities of Message Passing Neural Networks (MPNNs), a prevalent class of Graph Neural Networks (GNN). We derive generalization bounds specifically for MPNNs with normalized sum aggregation and mean aggregation. Our analysis is based on a data generation model incorporating a finite set of template graphons. Each graph within this framework is generated by sampling from one of the graphons with a certain degree of perturbation. In particular, we extend previous MPNN generalization results to a more realistic setting, which includes the following modifications: 1) we analyze simple random graphs with Bernoulli-distributed edges instead of weighted graphs; 2) we sample both graphs and graph signals from perturbed graphons instead of clean graphons; and 3) we analyze sparse graphs instead of dense graphs. In this more realistic and challenging scenario, we provide a generalization bound that decreases as the average number of nodes in the graphs increases. Our results imply that MPNNs with higher complexity than the size of the training set can still generalize effectively, as long as the graphs are sufficiently large.
[ "['Sohir Maskey' 'Gitta Kutyniok' 'Ron Levie']" ]
null
null
2404.03493
null
null
http://arxiv.org/pdf/2404.03493v2
2024-04-05T11:42:57Z
2024-04-04T14:48:26Z
A Methodology to Study the Impact of Spiking Neural Network Parameters considering Event-Based Automotive Data
Autonomous Driving (AD) systems are considered the future of human mobility and transportation. Solving computer vision tasks such as image classification and object detection/segmentation, with high accuracy and low power/energy consumption, is highly needed to realize AD systems in real life. These requirements can potentially be satisfied by Spiking Neural Networks (SNNs). However, state-of-the-art works on SNN-based AD systems still focus on proposing network models that can achieve high accuracy, and they have not systematically studied the roles of SNN parameters when used for learning event-based automotive data. Therefore, we still lack understanding of how to effectively develop SNN models for AD systems. Toward this, we propose a novel methodology to systematically study and analyze the impact of SNN parameters considering event-based automotive data, and then leverage this analysis to enhance SNN development. To do this, we first explore different settings of the SNN parameters that directly affect the learning mechanism (i.e., batch size, learning rate, neuron threshold potential, and weight decay), then analyze the accuracy results. Afterward, we propose techniques that jointly improve SNN accuracy and reduce training time. Experimental results show that our methodology can improve SNN models for AD systems over the state-of-the-art, as it achieves higher accuracy (i.e., 86%) for the NCARS dataset, and can also achieve iso-accuracy (i.e., ~85% with standard deviation less than 0.5%) while speeding up training time by 1.9x. In this manner, our research work provides a set of guidelines for SNN parameter enhancements, thereby enabling practical development of SNN-based AD systems.
[ "['Iqra Bano' 'Rachmad Vidya Wicaksana Putra' 'Alberto Marchisio'\n 'Muhammad Shafique']" ]
null
null
2404.03495
null
null
http://arxiv.org/pdf/2404.03495v1
2024-04-04T14:50:50Z
2024-04-04T14:50:50Z
About Test-time training for outlier detection
In this paper, we introduce DOUST, our method applying test-time training for outlier detection, significantly improving detection performance. After thoroughly evaluating our algorithm on common benchmark datasets, we discuss a frequently encountered problem and show that it disappears with a large enough test set. Thus, we conclude that under reasonable conditions, our algorithm can reach almost supervised performance even when no labeled outliers are given.
[ "['Simon Klüttermann' 'Emmanuel Müller']" ]
null
null
2404.03506
null
null
http://arxiv.org/pdf/2404.03506v1
2024-04-04T15:10:13Z
2024-04-04T15:10:13Z
CountARFactuals -- Generating plausible model-agnostic counterfactual explanations with adversarial random forests
Counterfactual explanations elucidate algorithmic decisions by pointing to scenarios that would have led to an alternative, desired outcome. By giving insight into the model's behavior, they point users towards possible actions and give grounds for contesting decisions. As a crucial factor in achieving these goals, counterfactuals must be plausible, i.e., describe realistic alternative scenarios within the data manifold. This paper leverages a recently developed generative modeling technique -- adversarial random forests (ARFs) -- to efficiently generate plausible counterfactuals in a model-agnostic way. ARFs can serve as a plausibility measure or directly generate counterfactual explanations. Our ARF-based approach surpasses the limitations of existing methods that aim to generate plausible counterfactual explanations: it is easy to train and computationally highly efficient, handles continuous and categorical data naturally, and allows integrating additional desiderata such as sparsity in a straightforward manner.
[ "['Susanne Dandl' 'Kristin Blesch' 'Timo Freiesleben' 'Gunnar König'\n 'Jan Kapar' 'Bernd Bischl' 'Marvin Wright']" ]
null
null
2404.03524
null
null
http://arxiv.org/pdf/2404.03524v1
2024-04-04T15:29:50Z
2024-04-04T15:29:50Z
Approximate Gradient Coding for Privacy-Flexible Federated Learning with Non-IID Data
This work focuses on the challenges of non-IID data and stragglers/dropouts in federated learning. We introduce and explore a privacy-flexible paradigm that models parts of the clients' local data as non-private, offering a more versatile and business-oriented perspective on privacy. Within this framework, we propose a data-driven strategy for mitigating the effects of label heterogeneity and client straggling on federated learning. Our solution combines both offline data sharing and approximate gradient coding techniques. Through numerical simulations using the MNIST dataset, we demonstrate that our approach enables achieving a deliberate trade-off between privacy and utility, leading to improved model convergence and accuracy while using an adaptable portion of non-private data.
[ "['Okko Makkonen' 'Sampo Niemelä' 'Camilla Hollanti' 'Serge Kas Hanna']" ]
null
null
2404.03528
null
null
http://arxiv.org/pdf/2404.03528v3
2024-06-05T13:39:56Z
2024-04-04T15:31:21Z
BanglaAutoKG: Automatic Bangla Knowledge Graph Construction with Semantic Neural Graph Filtering
Knowledge Graphs (KGs) have proven essential in information processing and reasoning applications because they link related entities and provide context-rich information, supporting efficient information retrieval and knowledge discovery, and presenting information flow in a highly effective manner. Despite being widely used globally, Bangla is relatively underrepresented in KGs due to a lack of comprehensive datasets, encoders, NER (named entity recognition) models, POS (part-of-speech) taggers, and lemmatizers, hindering efficient information processing and reasoning applications in the language. Addressing the KG scarcity in Bengali, we propose BanglaAutoKG, a pioneering framework that automatically constructs Bengali KGs from any Bangla text. We utilize multilingual LLMs to understand various languages and correlate entities and relations universally. By employing a translation dictionary to identify English equivalents and extracting word features from pre-trained BERT models, we construct the foundational KG. To reduce noise and align word embeddings with our goal, we employ graph-based polynomial filters. Lastly, we implement a GNN-based semantic filter, which elevates contextual understanding and trims unnecessary edges, culminating in the formation of the definitive KG. Empirical findings and case studies demonstrate the universal effectiveness of our model, which can autonomously construct semantically enriched KGs from any text.
[ "['Azmine Toushik Wasi' 'Taki Hasan Rafi' 'Raima Islam' 'Dong-Kyu Chae']" ]
null
null
2404.03543
null
null
http://arxiv.org/pdf/2404.03543v2
2024-04-06T04:29:25Z
2024-04-04T15:49:49Z
CodeEditorBench: Evaluating Code Editing Capability of Large Language Models
Large Language Models (LLMs) for code are rapidly evolving, with code editing emerging as a critical capability. We introduce CodeEditorBench, an evaluation framework designed to rigorously assess the performance of LLMs in code editing tasks, including debugging, translating, polishing, and requirement switching. Unlike existing benchmarks focusing solely on code generation, CodeEditorBench emphasizes real-world scenarios and practical aspects of software development. We curate diverse coding challenges and scenarios from five sources, covering various programming languages, complexity levels, and editing tasks. Evaluation of 19 LLMs reveals that closed-source models (particularly Gemini-Ultra and GPT-4) outperform open-source models in CodeEditorBench, highlighting differences in model performance based on problem types and prompt sensitivities. CodeEditorBench aims to catalyze advancements in LLMs by providing a robust platform for assessing code editing capabilities. We will release all prompts and datasets to enable the community to expand the dataset and benchmark emerging LLMs. By introducing CodeEditorBench, we contribute to the advancement of LLMs in code editing and provide a valuable resource for researchers and practitioners.
[ "['Jiawei Guo' 'Ziming Li' 'Xueling Liu' 'Kaijing Ma' 'Tianyu Zheng'\n 'Zhouliang Yu' 'Ding Pan' 'Yizhi LI' 'Ruibo Liu' 'Yue Wang' 'Shuyue Guo'\n 'Xingwei Qu' 'Xiang Yue' 'Ge Zhang' 'Wenhu Chen' 'Jie Fu']" ]
null
null
2404.03558
null
null
http://arxiv.org/pdf/2404.03558v1
2024-04-04T16:15:23Z
2024-04-04T16:15:23Z
How does Multi-Task Training Affect Transformer In-Context Capabilities? Investigations with Function Classes
Large language models (LLMs) have recently shown the extraordinary ability to perform unseen tasks based on few-shot examples provided as text, also known as in-context learning (ICL). While recent works have attempted to understand the mechanisms driving ICL, few have explored training strategies that incentivize these models to generalize to multiple tasks. Multi-task learning (MTL) for generalist models is a promising direction that offers transfer learning potential, enabling large parameterized models to be trained from simpler, related tasks. In this work, we investigate the combination of MTL with ICL to build models that efficiently learn tasks while being robust to out-of-distribution examples. We propose several effective curriculum learning strategies that allow ICL models to achieve higher data efficiency and more stable convergence. Our experiments reveal that ICL models can effectively learn difficult tasks by training on progressively harder tasks while mixing in prior tasks, denoted as mixed curriculum in this work. Our code and models are available at https://github.com/harmonbhasin/curriculum_learning_icl.
[ "['Harmon Bhasin' 'Timothy Ossowski' 'Yiqiao Zhong' 'Junjie Hu']" ]
null
null
2404.03574
null
null
http://arxiv.org/pdf/2404.03574v1
2024-04-04T16:38:49Z
2024-04-04T16:38:49Z
TinyVQA: Compact Multimodal Deep Neural Network for Visual Question Answering on Resource-Constrained Devices
Traditional machine learning models often require powerful hardware, making them unsuitable for deployment on resource-limited devices. Tiny Machine Learning (tinyML) has emerged as a promising approach for running machine learning models on these devices, but integrating multiple data modalities into tinyML models still remains a challenge due to increased complexity, latency, and power consumption. This paper proposes TinyVQA, a novel multimodal deep neural network for visual question answering tasks that can be deployed on resource-constrained tinyML hardware. TinyVQA leverages a supervised attention-based model to learn how to answer questions about images using both vision and language modalities. Distilled knowledge from the supervised attention-based VQA model trains the memory-aware compact TinyVQA model, and a low bit-width quantization technique is employed to further compress the model for deployment on tinyML devices. The TinyVQA model was evaluated on the FloodNet dataset, which is used for post-disaster damage assessment. The compact model achieved an accuracy of 79.5%, demonstrating the effectiveness of TinyVQA for real-world applications. Additionally, the model was deployed on a Crazyflie 2.0 drone, equipped with an AI deck and GAP8 microprocessor. The TinyVQA model achieved a low latency of 56 ms and consumed 693 mW of power while deployed on the tiny drone, showcasing its suitability for resource-constrained embedded systems.
[ "['Hasib-Al Rashid' 'Argho Sarkar' 'Aryya Gangopadhyay'\n 'Maryam Rahnemoonfar' 'Tinoosh Mohsenin']" ]
null
null
2404.03578
null
null
http://arxiv.org/pdf/2404.03578v1
2024-04-04T16:40:22Z
2024-04-04T16:40:22Z
Distributionally Robust Reinforcement Learning with Interactive Data Collection: Fundamental Hardness and Near-Optimal Algorithm
The sim-to-real gap, which represents the disparity between training and testing environments, poses a significant challenge in reinforcement learning (RL). A promising approach to addressing this challenge is distributionally robust RL, often framed as a robust Markov decision process (RMDP). In this framework, the objective is to find a robust policy that achieves good performance under the worst-case scenario among all environments within a pre-specified uncertainty set centered around the training environment. Unlike previous work, which relies on a generative model or a pre-collected offline dataset enjoying good coverage of the deployment environment, we tackle robust RL via interactive data collection, where the learner interacts with the training environment only and refines the policy through trial and error. In this robust RL paradigm, two main challenges emerge: managing distributional robustness, and striking a balance between exploration and exploitation during data collection. Initially, we establish that sample-efficient learning without additional assumptions is unattainable owing to the curse of support shift; i.e., the potential disjointedness of the distributional supports between the training and testing environments. To circumvent such a hardness result, we introduce the vanishing minimal value assumption to RMDPs with a total-variation (TV) distance robust set, postulating that the minimal value of the optimal robust value function is zero. We prove that such an assumption effectively eliminates the support shift issue for RMDPs with a TV distance robust set, and present an algorithm with a provable sample complexity guarantee. Our work takes an initial step towards uncovering the inherent difficulty of robust RL via interactive data collection and sufficient conditions for designing a sample-efficient algorithm accompanied by sharp sample complexity analysis.
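In this setting, the robust value the learner targets is defined by a worst case over a TV-distance ball around the nominal transition kernel; a standard way to write the robust Bellman optimality equation (notation ours, not necessarily the paper's) is

```latex
V^\star(s) \;=\; \max_{a \in \mathcal{A}} \Big[\, r(s,a)
  \;+\; \gamma \inf_{P' :\, \mathrm{TV}\left(P'(\cdot \mid s,a),\, P(\cdot \mid s,a)\right) \le \rho}
  \mathbb{E}_{s' \sim P'}\big[ V^\star(s') \big] \Big],
```

where $\rho$ is the radius of the uncertainty set. The vanishing minimal value assumption, $\min_s V^\star_{\mathrm{rob}}(s) = 0$, is what rules out the support-shift pathology in the analysis.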
[ "['Miao Lu' 'Han Zhong' 'Tong Zhang' 'Jose Blanchet']" ]
null
null
2404.03586
null
null
http://arxiv.org/pdf/2404.03586v1
2024-04-04T16:52:17Z
2024-04-04T16:52:17Z
Leveraging Interpolation Models and Error Bounds for Verifiable Scientific Machine Learning
Effective verification and validation techniques for modern scientific machine learning workflows are challenging to devise. Statistical methods are abundant and easily deployed, but often rely on speculative assumptions about the data and methods involved. Error bounds for classical interpolation techniques can provide mathematically rigorous estimates of accuracy, but often are difficult or impractical to determine computationally. In this work, we present a best-of-both-worlds approach to verifiable scientific machine learning by demonstrating that (1) multiple standard interpolation techniques have informative error bounds that can be computed or estimated efficiently; (2) comparative performance among distinct interpolants can aid in validation goals; (3) deploying interpolation methods on latent spaces generated by deep learning techniques enables some interpretability for black-box models. We present a detailed case study of our approach for predicting lift-drag ratios from airfoil images. Code developed for this work is available in a public GitHub repository.
[ "['Tyler Chang' 'Andrew Gillette' 'Romit Maulik']" ]
null
null
2404.03592
null
null
http://arxiv.org/pdf/2404.03592v3
2024-05-22T17:52:31Z
2024-04-04T17:00:37Z
ReFT: Representation Finetuning for Language Models
Parameter-efficient finetuning (PEFT) methods seek to adapt large neural models via updates to a small number of weights. However, much prior interpretability work has shown that representations encode rich semantic information, suggesting that editing representations might be a more powerful alternative. We pursue this hypothesis by developing a family of Representation Finetuning (ReFT) methods. ReFT methods operate on a frozen base model and learn task-specific interventions on hidden representations. We define a strong instance of the ReFT family, Low-rank Linear Subspace ReFT (LoReFT), and we identify an ablation of this method that trades some performance for increased efficiency. Both are drop-in replacements for existing PEFTs and learn interventions that are 15x--65x more parameter-efficient than LoRA. We showcase LoReFT on eight commonsense reasoning tasks, four arithmetic reasoning tasks, instruction-tuning, and GLUE. In all these evaluations, our ReFTs deliver the best balance of efficiency and performance, and almost always outperform state-of-the-art PEFTs. We release a generic ReFT training library publicly at https://github.com/stanfordnlp/pyreft.
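A hedged sketch of the LoReFT intervention as described: the hidden state is edited in a low-rank linear subspace via $h \mapsto h + R^\top(Wh + b - Rh)$. The paper keeps the rows of $R$ orthonormal throughout training; here orthogonality is enforced only at initialization, a simplification.

```python
import torch
import torch.nn as nn

class LoReFTIntervention(nn.Module):
    """Low-rank linear subspace intervention on a frozen model's hidden states:
    h <- h + R^T (W h + b - R h), with R an (r x d) projection matrix."""
    def __init__(self, d_model, rank):
        super().__init__()
        self.R = nn.Parameter(torch.empty(rank, d_model))
        nn.init.orthogonal_(self.R)        # orthonormal rows at init only (simplified)
        self.W = nn.Linear(d_model, rank)  # learned edit source, includes bias b

    def forward(self, h):                  # h: (..., d_model)
        proj = h @ self.R.T                # R h
        edit = self.W(h) - proj            # W h + b - R h
        return h + edit @ self.R           # h + R^T (W h + b - R h)
```

In use, such interventions are attached at chosen layers and token positions of a frozen base model, and only the intervention parameters are trained.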
[ "['Zhengxuan Wu' 'Aryaman Arora' 'Zheng Wang' 'Atticus Geiger'\n 'Dan Jurafsky' 'Christopher D. Manning' 'Christopher Potts']" ]
null
null
2404.03596
null
null
http://arxiv.org/pdf/2404.03596v1
2024-04-04T17:05:42Z
2024-04-04T17:05:42Z
Laser Learning Environment: A new environment for coordination-critical multi-agent tasks
We introduce the Laser Learning Environment (LLE), a collaborative multi-agent reinforcement learning environment in which coordination is central. In LLE, agents depend on each other to make progress (interdependence), must jointly take specific sequences of actions to succeed (perfect coordination), and accomplishing those joint actions does not yield any intermediate reward (zero-incentive dynamics). The challenge of such problems lies in the difficulty of escaping state space bottlenecks caused by interdependence steps, since escaping those bottlenecks is not rewarded. We test multiple state-of-the-art value-based MARL algorithms against LLE and show that they consistently fail at the collaborative task because of their inability to escape state space bottlenecks, even though they successfully achieve perfect coordination. We show that Q-learning extensions such as prioritized experience replay and n-step returns hinder exploration in environments with zero-incentive dynamics, and find that intrinsic curiosity with random network distillation is not sufficient to escape those bottlenecks. We demonstrate the need for novel methods to solve this problem and the relevance of LLE as a cooperative MARL benchmark.
[ "['Yannick Molinghen' 'Raphaël Avalos' 'Mark Van Achter' 'Ann Nowé'\n 'Tom Lenaerts']" ]
null
null
2404.03605
null
null
http://arxiv.org/pdf/2404.03605v1
2024-04-04T17:25:30Z
2024-04-04T17:25:30Z
Mitigating the Impact of Outlier Channels for Language Model Quantization with Activation Regularization
We consider the problem of accurate quantization for language models, where both the weights and activations are uniformly quantized to 4 bits per parameter, the lowest bitwidth format natively supported by GPU hardware. In this context, the key challenge is activation quantization: it is known that language models contain outlier channels whose values on average are orders of magnitude higher than those of other channels, which prevents accurate low-bitwidth quantization with known techniques. We systematically study this phenomenon and find that these outlier channels emerge early in training, and that they occur more frequently in layers with residual streams. We then propose a simple strategy which regularizes a layer's inputs via quantization-aware training (QAT) and its outputs via activation kurtosis regularization. We show that regularizing both the inputs and outputs is crucial for preventing the model from "migrating" the difficulty of input quantization to the weights, which makes post-training quantization (PTQ) of weights more difficult. When combined with weight PTQ, we show that our approach can obtain a W4A4 model that performs competitively with the standard-precision W16A16 baseline.
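A hedged sketch of what an activation kurtosis regularizer can look like: penalize the deviation of each channel's empirical kurtosis from the Gaussian value of 3, which discourages the heavy-tailed outlier channels that break 4-bit quantization. The exact regularizer used in the paper may differ.

```python
import torch

def kurtosis_penalty(x, target=3.0, eps=1e-6):
    """x: layer outputs of shape (..., channels). Returns a scalar penalty on
    per-channel kurtosis (an illustrative formulation)."""
    x = x.flatten(end_dim=-2)                         # (N, channels)
    mu = x.mean(dim=0)
    var = x.var(dim=0, unbiased=False) + eps
    kurt = ((x - mu) ** 4).mean(dim=0) / var ** 2     # per-channel kurtosis
    return ((kurt - target) ** 2).mean()

# Added to the task loss with a small coefficient, e.g.:
# loss = task_loss + 1e-4 * sum(kurtosis_penalty(h) for h in hidden_states)
```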
[ "['Aniruddha Nrusimha' 'Mayank Mishra' 'Naigang Wang' 'Dan Alistarh'\n 'Rameswar Panda' 'Yoon Kim']" ]
null
null
2404.03617
null
null
http://arxiv.org/pdf/2404.03617v2
2024-05-21T17:56:36Z
2024-04-04T17:39:41Z
On the Efficiency of Convolutional Neural Networks
Since the breakthrough performance of AlexNet in 2012, convolutional neural networks (convnets) have grown into extremely powerful vision models. Deep learning researchers have used convnets to perform vision tasks with accuracy that was unachievable a decade ago. Confronted with the immense computation that convnets use, deep learning researchers also became interested in efficiency. However, the engineers who deployed efficient convnets soon realized that they were slower than the previous generation, despite using fewer operations. Many reverted to older models that ran faster. Hence researchers switched the objective of their search from arithmetic complexity to latency and produced a new wave of models that performed better. Paradoxically, these models also used more operations. Skepticism grew among researchers and engineers alike about the relevance of arithmetic complexity. Contrary to the prevailing view that latency and arithmetic complexity are irreconcilable, a simple formula relates both through computational efficiency. This insight enabled us to co-optimize the separate factors that determine latency. We observed that the degenerate conv2d layers that produce the best accuracy--complexity trade-off also use significant memory resources and have low computational efficiency. We devised block fusion algorithms to implement all the layers of a residual block in a single kernel, thereby creating temporal locality, avoiding communication, and reducing workspace size. Our ConvFirst model with block-fusion kernels has less arithmetic complexity and greater computational efficiency than baseline models and kernels, and ran approximately four times as fast as ConvNeXt. We also created novel tools, including efficiency gap plots and waterline analysis. Our unified approach to convnet efficiency envisions a new era of models and kernels that achieve greater accuracy at lower cost.
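The "simple formula" alluded to can be paraphrased as follows (our rendering of the stated relation, not a quotation from the paper):

```latex
\text{latency} \;\approx\; \frac{\text{arithmetic complexity (ops)}}
  {\text{computational efficiency} \times \text{peak throughput (ops/s)}},
```

so a model with more operations can still be faster if its layers achieve a sufficiently higher fraction of the hardware's peak throughput, which is exactly the lever the block-fusion kernels pull.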
[ "['Andrew Lavin']" ]
null
null
2404.03626
null
null
http://arxiv.org/pdf/2404.03626v1
2024-04-04T17:48:28Z
2024-04-04T17:48:28Z
Training LLMs over Neurally Compressed Text
In this paper, we explore the idea of training large language models (LLMs) over highly compressed text. While standard subword tokenizers compress text by a small factor, neural text compressors can achieve much higher rates of compression. If it were possible to train LLMs directly over neurally compressed text, this would confer advantages in training and serving efficiency, as well as easier handling of long text spans. The main obstacle to this goal is that strong compression tends to produce opaque outputs that are not well-suited for learning. In particular, we find that text naïvely compressed via Arithmetic Coding is not readily learnable by LLMs. To overcome this, we propose Equal-Info Windows, a novel compression technique whereby text is segmented into blocks that each compress to the same bit length. Using this method, we demonstrate effective learning over neurally compressed text that improves with scale, and outperforms byte-level baselines by a wide margin on perplexity and inference speed benchmarks. While our method delivers worse perplexity than subword tokenizers for models trained with the same parameter count, it has the benefit of shorter sequence lengths. Shorter sequence lengths require fewer autoregressive generation steps, and reduce latency. Finally, we provide extensive analysis of the properties that contribute to learnability, and offer concrete suggestions for how to further improve the performance of high-compression tokenizers.
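A hedged sketch of the segmentation idea: grow each window until its compressed size reaches a fixed bit budget, then start a new window (and, in the real method, reset the compressor's state). The `compress_bits` interface below is a stand-in for an arithmetic coder, not an actual library API.

```python
def equal_info_windows(text, compress_bits, budget=128):
    """Split `text` into windows that each compress to roughly `budget` bits.
    compress_bits(s) -> int must return the compressed bit length of s."""
    windows, start = [], 0
    for end in range(1, len(text) + 1):
        if compress_bits(text[start:end]) >= budget:
            windows.append(text[start:end])
            start = end
    if start < len(text):
        windows.append(text[start:])      # trailing partial window
    return windows

# Toy stand-in: pretend every character costs 5 bits after compression.
demo = equal_info_windows("the quick brown fox jumps over the lazy dog " * 3,
                          compress_bits=lambda s: 5 * len(s))
print([len(w) for w in demo])             # near-equal window lengths
```

Because every window carries the same number of compressed bits, the LLM sees tokens with a stable information density, which is what makes the compressed stream learnable.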
[ "['Brian Lester' 'Jaehoon Lee' 'Alex Alemi' 'Jeffrey Pennington'\n 'Adam Roberts' 'Jascha Sohl-Dickstein' 'Noah Constant']" ]
null
null
2404.03635
null
null
http://arxiv.org/pdf/2404.03635v4
2024-06-02T04:56:32Z
2024-04-04T17:54:33Z
WorDepth: Variational Language Prior for Monocular Depth Estimation
Three-dimensional (3D) reconstruction from a single image is an ill-posed problem with inherent ambiguities, i.e., scale. Predicting a 3D scene from text description(s) is similarly ill-posed, i.e., the spatial arrangements of the objects described. We investigate the question of whether two inherently ambiguous modalities can be used in conjunction to produce metric-scaled reconstructions. To test this, we focus on monocular depth estimation, the problem of predicting a dense depth map from a single image, but with an additional text caption describing the scene. To this end, we begin by encoding the text caption as a mean and standard deviation; using a variational framework, we learn the distribution of the plausible metric reconstructions of 3D scenes corresponding to the text captions as a prior. To "select" a specific reconstruction or depth map, we encode the given image through a conditional sampler that samples from the latent space of the variational text encoder, which is then decoded to the output depth map. Our approach is trained alternately between the text and image branches: in one optimization step, we predict the mean and standard deviation from the text description and sample from a standard Gaussian, and in the other, we sample using an (image) conditional sampler. Once trained, we directly predict depth from the encoded text using the conditional sampler. We demonstrate our approach on indoor (NYUv2) and outdoor (KITTI) scenarios, where we show that language can consistently improve performance in both.
[ "['Ziyao Zeng' 'Daniel Wang' 'Fengyu Yang' 'Hyoungseob Park' 'Yangchao Wu'\n 'Stefano Soatto' 'Byung-Woo Hong' 'Dong Lao' 'Alex Wong']" ]
null
null
2404.03647
null
null
http://arxiv.org/pdf/2404.03647v1
2024-04-04T17:58:38Z
2024-04-04T17:58:38Z
Capabilities of Large Language Models in Control Engineering: A Benchmark Study on GPT-4, Claude 3 Opus, and Gemini 1.0 Ultra
In this paper, we explore the capabilities of state-of-the-art large language models (LLMs) such as GPT-4, Claude 3 Opus, and Gemini 1.0 Ultra in solving undergraduate-level control problems. Controls provides an interesting case study for LLM reasoning due to its combination of mathematical theory and engineering design. We introduce ControlBench, a benchmark dataset tailored to reflect the breadth, depth, and complexity of classical control design. We use this dataset to study and evaluate the problem-solving abilities of these LLMs in the context of control engineering. We present evaluations conducted by a panel of human experts, providing insights into the accuracy, reasoning, and explanatory prowess of LLMs in control engineering. Our analysis reveals the strengths and limitations of each LLM in the context of classical control, and our results imply that Claude 3 Opus has become the state-of-the-art LLM for solving undergraduate control problems. Our study serves as an initial step towards the broader goal of employing artificial general intelligence in control engineering.
[ "['Darioush Kevian' 'Usman Syed' 'Xingang Guo' 'Aaron Havens'\n 'Geir Dullerud' 'Peter Seiler' 'Lianhui Qin' 'Bin Hu']" ]
null
null
2404.03659
null
null
http://arxiv.org/pdf/2404.03659v1
2024-01-17T15:51:36Z
2024-01-17T15:51:36Z
Federated Unlearning for Human Activity Recognition
The rapid evolution of Internet of Things (IoT) technology has spurred the widespread adoption of Human Activity Recognition (HAR) in various daily life domains. Federated Learning (FL) is frequently utilized to build a global HAR model by aggregating user contributions without transmitting raw individual data. Despite substantial progress in user privacy protection with FL, challenges persist. Regulations like the General Data Protection Regulation (GDPR) empower users to request data removal, raising a new question in FL: How can a HAR client request data removal without compromising other clients' privacy? In response, we propose a lightweight machine unlearning method for refining the FL HAR model by selectively removing a portion of a client's training data. Our method employs a third-party dataset unrelated to model training. Using KL divergence as a loss function for fine-tuning, we aim to align the predicted probability distribution on forgotten data with that on the third-party dataset. Additionally, we introduce a membership inference evaluation method to assess unlearning effectiveness. Experimental results across diverse datasets show our method achieves unlearning accuracy comparable to retraining methods, resulting in speedups ranging from hundreds to thousands of times.
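A hedged sketch of the KL fine-tuning objective described (the pairing of batches is our assumption, not the paper's exact recipe):

```python
import torch
import torch.nn.functional as F

def unlearning_kl_loss(model, forget_x, third_party_x):
    """Push the model's predictive distribution on forgotten samples towards
    its own distribution on unrelated third-party samples, so that forgotten
    data becomes indistinguishable from never-seen data. Assumes the two
    batches have the same size."""
    with torch.no_grad():
        target = F.softmax(model(third_party_x), dim=-1)   # reference distribution
    log_pred = F.log_softmax(model(forget_x), dim=-1)
    return F.kl_div(log_pred, target, reduction="batchmean")
```

Fine-tuning the global model with such a loss on the forget set approximates retraining from scratch at a fraction of the cost, which is where the reported speedups come from.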
[ "['Kongyang Chen' 'Dongping zhang' 'Yaping Chai' 'Weibin Zhang'\n 'Shaowei Wang' 'Jiaxing Shen']" ]
null
null
2404.03660
null
null
http://arxiv.org/pdf/2404.03660v1
2024-01-24T04:41:03Z
2024-01-24T04:41:03Z
Machine Learning in Proton Exchange Membrane Water Electrolysis -- Part I: A Knowledge-Integrated Framework
In this study, we propose to adopt a novel framework, Knowledge-integrated Machine Learning, for advancing Proton Exchange Membrane Water Electrolysis (PEMWE) development. Given the significance of PEMWE in green hydrogen production and the inherent challenges in optimizing its performance, our framework aims to meld data-driven models with domain-specific insights systematically to address the domain challenges. We first identify the uncertainties originating from data acquisition conditions, data-driven model mechanisms, and domain expertise, highlighting their complementary characteristics in carrying information from different perspectives. Building upon this foundation, we showcase how to adeptly decompose knowledge and extract unique information to contribute to the data augmentation, modeling process, and knowledge discovery. We demonstrate a hierarchical three-level framework, termed the "Ladder of Knowledge-integrated Machine Learning", in the PEMWE context, applying it to three case studies within a context of cell degradation analysis to affirm its efficacy in interpolation, extrapolation, and information representation. This research lays the groundwork for more knowledge-informed enhancements in ML applications in engineering.
[ "['Xia Chen' 'Alexander Rex' 'Janis Woelke' 'Christoph Eckert'\n 'Boris Bensmann' 'Richard Hanke-Rauschenbach' 'Philipp Geyer']" ]
null
null
2404.03673
null
null
http://arxiv.org/pdf/2404.03673v2
2024-06-22T08:07:39Z
2024-03-25T15:40:22Z
RL for Consistency Models: Faster Reward Guided Text-to-Image Generation
Reinforcement learning (RL) has improved guided image generation with diffusion models by directly optimizing rewards that capture image quality, aesthetics, and instruction following capabilities. However, the resulting generative policies inherit the same iterative sampling process of diffusion models that causes slow generation. To overcome this limitation, consistency models were proposed as a new class of generative models that directly map noise to data, enabling image generation in as few as one sampling iteration. In this work, to optimize text-to-image generative models for task-specific rewards and enable fast training and inference, we propose a framework for fine-tuning consistency models via RL. Our framework, called Reinforcement Learning for Consistency Models (RLCM), frames the iterative inference process of a consistency model as an RL procedure. Compared to RL-finetuned diffusion models, RLCM trains significantly faster, improves the quality of the generation measured under the reward objectives, and speeds up the inference procedure by generating high quality images with as few as two inference steps. Experimentally, we show that RLCM can adapt text-to-image consistency models to objectives that are challenging to express with prompting, such as image compressibility, and those derived from human feedback, such as aesthetic quality. Our code is available at https://rlcm.owenoertell.com.
[ "['Owen Oertell' 'Jonathan D. Chang' 'Yiyi Zhang' 'Kianté Brantley'\n 'Wen Sun']" ]
null
null
2404.03676
null
null
http://arxiv.org/pdf/2404.03676v1
2024-02-15T15:15:11Z
2024-02-15T15:15:11Z
Neural Information Organizing and Processing -- Neural Machines
We present an informational synthesis of neural structures, processes, parameters, and characteristics that allows natural and artificial neural systems to be described and modeled in a unified way as neural machines. As general informational parameters, we propose absolute and relative neural power as global quantitative measures of a neural system's computing potential. Neural information organizing and processing follows the way nature manages neural information: by developing functions, functionalities, and circuits related to different internal or peripheral components, and to the whole system, through non-deterministic memorization, fragmentation, and aggregation of afferent and efferent information, with deep neural information processing consisting of multiple alternating fragmentation and aggregation stages. The relevant neural characteristics are integrated into a neural machine model that incorporates peripheral and interface components as well as central ones in a unitary manner. The proposed approach makes it possible to overcome technical constraints in artificial computational implementations of neural information processes and also provides a more relevant description of natural ones.
[ "['Iosif Iulian Petrila']" ]
null
null
2404.03678
null
null
http://arxiv.org/pdf/2404.03678v1
2024-03-28T09:51:28Z
2024-03-28T09:51:28Z
Machine learning augmented diagnostic testing to identify sources of variability in test performance
Diagnostic tests that can detect pre-clinical or sub-clinical infection are among the most powerful tools in our armoury of weapons to control infectious diseases. Considerable effort has therefore been devoted to improving diagnostic testing for human, plant and animal diseases, including strategies for targeting the use of diagnostic tests towards individuals who are more likely to be infected. Here, we follow other recent proposals to further refine this concept by using machine learning to assess the situational risk under which a diagnostic test is applied, in order to augment its interpretation. We develop this approach to predict the occurrence of breakdowns of cattle herds due to bovine tuberculosis, exploiting the availability of exceptionally detailed testing records. We show that, without compromising test specificity, test sensitivity can be improved so that the proportion of infected herds detected by the skin test rises by over 16 percentage points. While many risk factors are associated with an increased risk of becoming infected, of note are several factors suggesting that in some herds there is a higher risk of infection going undetected, including effects correlated to the veterinary practice conducting the test and the number of livestock moved off the herd.
[ "['Christopher J. Banks' 'Aeron Sanchez' 'Vicki Stewart' 'Kate Bowen'\n 'Graham Smith' 'Rowland R. Kao']" ]
null
null
2404.03683
null
null
http://arxiv.org/pdf/2404.03683v1
2024-04-01T06:50:52Z
2024-04-01T06:50:52Z
Stream of Search (SoS): Learning to Search in Language
Language models are rarely shown fruitful mistakes during training. They then struggle to look beyond the next token, suffering from a snowballing of errors and failing to predict the consequences of their actions several steps ahead. In this paper, we show how language models can be taught to search by representing the process of search in language, as a flattened string -- a stream of search (SoS). We propose a unified language for search that captures an array of different symbolic search strategies. We demonstrate our approach using the simple yet difficult game of Countdown, where the goal is to combine input numbers with arithmetic operations to reach a target number. We pretrain a transformer-based language model from scratch on a dataset of streams of search generated by heuristic solvers. We find that SoS pretraining increases search accuracy by 25% over models trained to predict only the optimal search trajectory. We further finetune this model with two policy improvement methods: Advantage-Induced Policy Alignment (APA) and Self-Taught Reasoner (STaR). The finetuned SoS models solve 36% of previously unsolved problems, including problems that cannot be solved by any of the heuristic solvers. Our results indicate that language models can learn to solve problems via search, self-improve to flexibly use different search strategies, and potentially discover new ones.
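To make the "flattened search string" concrete, here is a toy serializer for Countdown that records every attempted operation, including dead ends, as a single stream (the trace vocabulary is invented for illustration; division is omitted for brevity):

```python
from itertools import permutations

def countdown_stream(numbers, target):
    """Depth-first search over arithmetic combinations, serialized as a flat
    stream-of-search string that includes unproductive branches."""
    trace = []

    def dfs(nums):
        if target in nums:
            trace.append(f"GOAL {target}")
            return True
        if len(nums) == 1:
            trace.append("DEAD END")
            return False
        for a, b in permutations(nums, 2):
            for op, val in (("+", a + b), ("-", a - b), ("*", a * b)):
                trace.append(f"TRY {a}{op}{b}={val}")
                rest = list(nums)
                rest.remove(a)
                rest.remove(b)
                if dfs(rest + [val]):
                    return True
        return False

    dfs(list(numbers))
    return " ; ".join(trace)

# e.g. "... TRY 3*5=15 ; ... ; TRY 15-2=13 ; GOAL 13"
print(countdown_stream([3, 5, 2], target=13))
```

Training on such strings exposes the model to the structure of backtracking itself, rather than only to the final, optimal solution path.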
[ "['Kanishk Gandhi' 'Denise Lee' 'Gabriel Grand' 'Muxin Liu' 'Winson Cheng'\n 'Archit Sharma' 'Noah D. Goodman']" ]
null
null
2404.03686
null
null
http://arxiv.org/pdf/2404.03686v1
2024-04-01T20:41:28Z
2024-04-01T20:41:28Z
Securing Social Spaces: Harnessing Deep Learning to Eradicate Cyberbullying
In today's digital world, cyberbullying is a serious problem that can harm the mental and physical health of people who use social media. This paper explains how serious cyberbullying is and how it affects individuals exposed to it. It also stresses the importance of finding better ways to detect cyberbullying so that online spaces can be safer, and argues that more accurate tools for spotting cyberbullying will be valuable in the future. Our paper introduces a deep learning-based approach, primarily employing BERT and BiLSTM architectures, to effectively address cyberbullying. This approach is designed to analyse large volumes of posts and predict potential instances of cyberbullying in online spaces. Our results demonstrate the superiority of the hateBERT model, an extension of BERT focused on hate speech detection, among the five models, achieving an accuracy rate of 89.16%. This research is a significant contribution to "Computational Intelligence for Social Transformation," promising a safer and more inclusive digital landscape.
[ "['Rohan Biswas' 'Kasturi Ganguly' 'Arijit Das' 'Diganta Saha']" ]
null
null
2404.03687
null
null
http://arxiv.org/pdf/2404.03687v1
2024-04-01T20:44:28Z
2024-04-01T20:44:28Z
DRIVE: Dual Gradient-Based Rapid Iterative Pruning
Modern deep neural networks (DNNs) consist of millions of parameters, necessitating high-performance computing during training and inference. Pruning is one solution that significantly reduces the space and time complexities of DNNs. Traditional pruning methods that are applied post-training focus on streamlining inference, but there are recent efforts to leverage sparsity early on by pruning before training. Pruning methods such as iterative magnitude-based pruning (IMP) achieve up to a 90% parameter reduction while retaining accuracy comparable to the original model. However, this leads to impractical runtime as it relies on multiple train-prune-reset cycles to identify and eliminate redundant parameters. In contrast, training-agnostic early pruning methods such as SNIP and SynFlow offer fast pruning but fall short of the accuracy achieved by IMP at high sparsities. To bridge this gap, we present Dual Gradient-Based Rapid Iterative Pruning (DRIVE), which leverages dense training for the initial epochs to counteract the randomness inherent at initialization. Subsequently, it employs a unique dual gradient-based metric for parameter ranking. It has been experimentally demonstrated for VGG and ResNet architectures on CIFAR-10/100 and Tiny ImageNet, and for ResNet on ImageNet, that DRIVE consistently outperforms other training-agnostic early pruning methods in accuracy. Notably, DRIVE is 43$\times$ to 869$\times$ faster than IMP for pruning.
[ "['Dhananjay Saikumar' 'Blesson Varghese']" ]
null
null
2404.03689
null
null
http://arxiv.org/pdf/2404.03689v1
2024-04-02T03:13:05Z
2024-04-02T03:13:05Z
A Tutorial on Gaussian Process Learning-based Model Predictive Control
This tutorial provides a systematic introduction to Gaussian process learning-based model predictive control (GP-MPC), an advanced approach integrating Gaussian processes (GPs) with model predictive control (MPC) for enhanced control in complex systems. It begins with GP regression fundamentals, illustrating how GP models enrich MPC with enhanced predictive accuracy and robust handling of uncertainties. A central contribution of this tutorial is the first detailed, systematic mathematical formulation of GP-MPC in the literature, focusing on deriving the approximation of the propagation of means and variances for GP multi-step predictions. Practical applications in robotics control, such as path-following for mobile robots in challenging terrains and mixed-vehicle platooning, are discussed to demonstrate the real-world effectiveness and adaptability of GP-MPC. This tutorial aims to make GP-MPC accessible to researchers and practitioners, enriching the learning-based control field with in-depth theoretical and practical insights and fostering further innovations in complex system control.
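A hedged, minimal sketch of the GP-MPC loop: fit a GP to one-step dynamics and use its posterior mean inside a random-shooting MPC planner. The tutorial's central topic, propagating predictive variances over multiple steps, is deliberately omitted here, and the toy dynamics and cost are invented.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Fit a GP to one-step residual dynamics: x_{t+1} - x_t = f(x_t, u_t) + noise.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))          # columns: state x, control u
y = -0.1 * X[:, 0] + 0.3 * np.sin(X[:, 1]) + 0.01 * rng.standard_normal(200)
gp = GaussianProcessRegressor(RBF() + WhiteKernel()).fit(X, y)

def mpc_action(x0, horizon=5, n_candidates=64):
    """Random-shooting MPC using GP mean predictions for the rollouts."""
    candidates = rng.uniform(-1, 1, size=(n_candidates, horizon))
    costs = np.zeros(n_candidates)
    for i, u_seq in enumerate(candidates):
        x = x0
        for u in u_seq:
            x = x + gp.predict(np.array([[x, u]]))[0]  # mean one-step prediction
            costs[i] += x ** 2 + 0.1 * u ** 2          # quadratic tracking cost
    return candidates[np.argmin(costs), 0]             # receding horizon: first action

print(mpc_action(x0=0.8))
```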
[ "['Jie Wang' 'Youmin Zhang']" ]
null
null
2404.03693
null
null
http://arxiv.org/pdf/2404.03693v1
2024-04-03T02:41:16Z
2024-04-03T02:41:16Z
Improve Knowledge Distillation via Label Revision and Data Selection
Knowledge distillation (KD) has become a widely used technique in the field of model compression, which aims to transfer knowledge from a large teacher model to a lightweight student model for efficient network development. In addition to the supervision of the ground truth, the vanilla KD method regards the predictions of the teacher as soft labels to supervise the training of the student model. Based on vanilla KD, various approaches have been developed to further improve the performance of the student model. However, few of these previous methods have considered the reliability of the supervision from teacher models. Supervision from erroneous predictions may mislead the training of the student model. This paper therefore proposes to tackle this problem from two aspects: Label Revision, to rectify the incorrect supervision, and Data Selection, to select appropriate samples for distillation and reduce the impact of erroneous supervision. In the former, we propose to rectify the teacher's inaccurate predictions using the ground truth. In the latter, we introduce a data selection technique to choose suitable training samples to be supervised by the teacher, thereby reducing the impact of incorrect predictions to some extent. Experimental results demonstrate the effectiveness of our proposed method and show that our method can be combined with other distillation approaches, improving their performance.
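A hedged sketch of the two components: Label Revision (rectify the teacher's wrong top prediction with the ground truth) and Data Selection (only let the teacher supervise samples it handles confidently). The specific revision and selection rules below are our illustrative choices, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def revised_kd_loss(student_logits, teacher_logits, labels, T=4.0, conf_thresh=0.5):
    teacher_prob = F.softmax(teacher_logits / T, dim=-1)
    pred = teacher_prob.argmax(dim=-1)
    # Label Revision: where the teacher is wrong, swap the probability mass of
    # its (incorrect) top class with that of the true class.
    wrong = torch.nonzero(pred != labels, as_tuple=True)[0]
    revised = teacher_prob.clone()
    revised[wrong, labels[wrong]] = teacher_prob[wrong, pred[wrong]]
    revised[wrong, pred[wrong]] = teacher_prob[wrong, labels[wrong]]
    # Data Selection: distill only samples on which the teacher is confident.
    keep = (teacher_prob.max(dim=-1).values > conf_thresh).float()
    log_student = F.log_softmax(student_logits / T, dim=-1)
    kd = F.kl_div(log_student, revised, reduction="none").sum(dim=-1)
    return (T * T) * (kd * keep).mean()
```

The remaining samples would still be supervised by the ground-truth cross-entropy term, as in standard KD setups.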
[ "['Weichao Lan' 'Yiu-ming Cheung' 'Qing Xu' 'Buhua Liu' 'Zhikai Hu'\n 'Mengke Li' 'Zhenghua Chen']" ]
null
null
2404.03696
null
null
http://arxiv.org/pdf/2404.03696v2
2024-04-16T20:06:16Z
2024-04-03T15:17:29Z
Convolutional variational autoencoders for secure lossy image compression in remote sensing
The volume of remote sensing data is experiencing rapid growth, primarily due to the plethora of space and air platforms equipped with an array of sensors. Due to limited hardware and battery constraints, the data are transmitted back to Earth for processing. The large amounts of data, along with security concerns, call for new compression and encryption techniques capable of preserving reconstruction quality while minimizing the transmission cost of this data back to Earth. This study investigates image compression based on convolutional variational autoencoders (CVAEs), which are capable of substantially reducing the volume of transmitted data while guaranteeing secure lossy image reconstruction. CVAEs have been demonstrated to outperform conventional compression methods such as JPEG2000 by a substantial margin on compression benchmark datasets. The proposed model draws on the CVAE's capability to abstract data into highly insightful latent spaces and, combined with an entropy bottleneck, is capable of finding an optimal balance between compressibility and reconstruction quality. This balance is reached by optimizing over a composite loss function that represents the rate-distortion curve.
[ "['Alessandro Giuliano' 'S. Andrew Gadsden' 'Waleed Hilal' 'John Yawney']" ]
null
null
2404.03701
null
null
http://arxiv.org/pdf/2404.03701v2
2024-04-19T03:51:42Z
2024-04-04T00:49:05Z
Predictive Analytics of Varieties of Potatoes
We explore the application of machine learning algorithms to predict the suitability of Russet potato clones for advancement in breeding trials. Leveraging data from manually collected trials in the state of Oregon, we investigate the potential of a wide variety of state-of-the-art binary classification models. We conduct a comprehensive analysis of the dataset that includes preprocessing, feature engineering, and imputation to address missing values. We focus on several key metrics such as accuracy, F1-score, and Matthews correlation coefficient (MCC) for model evaluation. The top-performing models, namely the multi-layer perceptron classifier (MLPC), histogram-based gradient boosting classifier (HGBC), and a support vector machine classifier (SVC), demonstrate consistent and significant results. Variable selection further enhances model performance and identifies influential features in predicting trial outcomes. The findings emphasize the potential of machine learning in streamlining the selection process for potato varieties, offering benefits such as increased efficiency, substantial cost savings, and judicious resource utilization. Our study contributes insights into precision agriculture and showcases the relevance of advanced technologies for informed decision-making in breeding programs.
[ "['Fabiana Ferracina' 'Bala Krishnamoorthy' 'Mahantesh Halappanavar'\n 'Shengwei Hu' 'Vidyasagar Sathuvalli']" ]
null
null
2404.03702
null
null
http://arxiv.org/pdf/2404.03702v1
2024-04-04T02:43:56Z
2024-04-04T02:43:56Z
Personalized Federated Learning for Spatio-Temporal Forecasting: A Dual Semantic Alignment-Based Contrastive Approach
The existing federated learning (FL) methods for spatio-temporal forecasting fail to capture the inherent spatio-temporal heterogeneity, which calls for personalized FL (PFL) methods to model the spatio-temporally variant patterns. While the contrastive learning approach is promising in addressing spatio-temporal heterogeneity, existing methods are ineffective at determining negative pairs and can hardly be applied to the PFL paradigm. To tackle this limitation, we propose a novel PFL method, named Federated dUal sEmantic aLignment-based contraStive learning (FUELS), which can adaptively align positive and negative pairs based on semantic similarity, thereby injecting precise spatio-temporal heterogeneity into the latent representation space via auxiliary contrastive tasks. From the temporal perspective, a hard negative filtering module is introduced to dynamically align heterogeneous temporal representations for the supplemented intra-client contrastive task. From the spatial perspective, we design lightweight-but-efficient prototypes as client-level semantic representations, based on which the server evaluates spatial similarity and yields client-customized global prototypes for the supplemented inter-client contrastive task. Extensive experiments demonstrate that FUELS outperforms state-of-the-art methods, with communication cost decreasing by around 94%.
[ "['Qingxiang Liu' 'Sheng Sun' 'Yuxuan Liang' 'Jingjing Xue' 'Min Liu']" ]
null
null
2404.03703
null
null
http://arxiv.org/pdf/2404.03703v1
2024-04-04T07:49:39Z
2024-04-04T07:49:39Z
Mitigating analytical variability in fMRI results with style transfer
We propose a novel approach to improve the reproducibility of neuroimaging results by converting statistic maps across different functional MRI pipelines. We make the assumption that pipelines can be considered as a style component of data and propose to use different generative models, among which, Diffusion Models (DM) to convert data between pipelines. We design a new DM-based unsupervised multi-domain image-to-image transition framework and constrain the generation of 3D fMRI statistic maps using the latent space of an auxiliary classifier that distinguishes statistic maps from different pipelines. We extend traditional sampling techniques used in DM to improve the transition performance. Our experiments demonstrate that our proposed methods are successful: pipelines can indeed be transferred, providing an important source of data augmentation for future medical studies.
[ "['Elodie Germani' 'Elisa Fromont' 'Camille Maumet']" ]
null
null
2404.03704
null
null
http://arxiv.org/abs/2404.03704v1
2024-04-04T09:02:17Z
2024-04-04T09:02:17Z
Improvement of Performance in Freezing of Gait detection in Parkinson's Disease using Transformer networks and a single waist-worn triaxial accelerometer
Freezing of gait (FOG) is one of the most incapacitating symptoms in Parkinson's disease, affecting more than 50 percent of patients in advanced stages of the disease. The presence of FOG may lead to falls and a loss of independence, with a consequent reduction in quality of life. Wearable technology and artificial intelligence have been used for automatic FOG detection to optimize monitoring. However, differences between laboratory and daily-life conditions present challenges for the implementation of reliable detection systems. Consequently, improving FOG detection methods remains important to provide accurate monitoring mechanisms intended for free-living and real-time use. This paper presents advances in automatic FOG detection using a single body-worn triaxial accelerometer and a novel classification algorithm based on Transformers and convolutional networks. This study was performed with data from 21 patients who manifested FOG episodes while performing activities of daily living in a home setting. Results indicate that the proposed FOG-Transformer can bring a significant improvement in FOG detection using leave-one-subject-out cross-validation (LOSO CV). These results bring opportunities for the implementation of accurate monitoring systems for use in ambulatory or home settings.
[ "['Luis Sigcha' 'Luigi Borzì' 'Ignacio Pavón' 'Nélson Costa' 'Susana Costa'\n 'Pedro Arezes' 'Juan-Manuel López' 'Guillermo De Arcas']" ]
null
null
2404.03706
null
null
http://arxiv.org/pdf/2404.03706v1
2024-04-04T10:36:56Z
2024-04-04T10:36:56Z
Bi-level Guided Diffusion Models for Zero-Shot Medical Imaging Inverse Problems
In the realm of medical imaging, inverse problems aim to infer high-quality images from incomplete, noisy measurements, with the objective of minimizing expenses and risks to patients in clinical settings. Diffusion models have recently emerged as a promising approach to such practical challenges, proving particularly useful for the zero-shot inference of images from partially acquired measurements in Magnetic Resonance Imaging (MRI) and Computed Tomography (CT). A central challenge in this approach, however, is how to guide an unconditional prediction to conform to the measurement information. Existing methods rely on deficient projection or inefficient posterior score approximation guidance, which often leads to suboptimal performance. In this paper, we propose Bi-level Guided Diffusion Models (BGDM), a zero-shot imaging framework that efficiently steers the initial unconditional prediction through a bi-level guidance strategy. Specifically, BGDM first approximates an inner-level conditional posterior mean as an initial measurement-consistent reference point and then solves an outer-level proximal optimization objective to reinforce the measurement consistency. Our experimental findings, using publicly available MRI and CT medical datasets, reveal that BGDM is more effective and efficient compared to the baselines, faithfully generating high-fidelity medical images and substantially reducing hallucinatory artifacts in cases of severe degradation.
[ "['Hossein Askari' 'Fred Roosta' 'Hongfu Sun']" ]
null
null
2404.03707
null
null
http://arxiv.org/pdf/2404.03707v1
2024-04-04T10:54:38Z
2024-04-04T10:54:38Z
Investigating the Robustness of Counterfactual Learning to Rank Models: A Reproducibility Study
Counterfactual learning to rank (CLTR) has attracted extensive attention in the IR community for its ability to leverage massive logged user interaction data to train ranking models. While the CLTR models can be theoretically unbiased when the user behavior assumption is correct and the propensity estimation is accurate, their effectiveness is usually empirically evaluated via simulation-based experiments due to a lack of widely-available, large-scale, real click logs. However, the mainstream simulation-based experiments are somewhat limited as they often feature a single, deterministic production ranker and simplified user simulation models to generate the synthetic click logs. As a result, the robustness of CLTR models in complex and diverse situations is largely unknown and needs further investigation. To address this problem, in this paper, we aim to investigate the robustness of existing CLTR models in a reproducibility study with extensive simulation-based experiments that (1) use both deterministic and stochastic production rankers, each with different ranking performance, and (2) leverage multiple user simulation models with different user behavior assumptions. We find that the DLA models and IPS-DCM show better robustness under various simulation settings than IPS-PBM and PRS with offline propensity estimation. Besides, the existing CLTR models often fail to outperform the naive click baselines when the production ranker has relatively high ranking performance or certain randomness, which suggests an urgent need for developing new CLTR algorithms that work for these settings.
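For readers new to CLTR, the inverse propensity scoring idea that the studied models (IPS-PBM, IPS-DCM, DLA, ...) build on can be sketched in a few lines; the position-based propensities here are toy values.

```python
import numpy as np

def ips_relevance_estimate(clicks, ranks, propensity):
    """Inverse propensity scoring under a position-based model: reweight each
    observed click by the examination probability of its display rank, giving
    an unbiased relevance signal when the propensities are correct."""
    return clicks / propensity[ranks]

propensity = 1.0 / (1.0 + np.arange(10))   # assumed examination probabilities
clicks = np.array([1, 0, 1])               # click log for three impressions
ranks = np.array([0, 1, 3])                # 0-indexed display positions
print(ips_relevance_estimate(clicks, ranks, propensity))  # -> [1. 0. 4.]
```

The robustness question the paper studies is precisely what happens when these propensities are misestimated or the assumed user model is wrong.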
[ "['Zechun Niu' 'Jiaxin Mao' 'Qingyao Ai' 'Ji-Rong Wen']" ]
null
null
2404.03708
null
null
http://arxiv.org/pdf/2404.03708v1
2024-04-04T11:22:58Z
2024-04-04T11:22:58Z
Dendrites endow artificial neural networks with accurate, robust and parameter-efficient learning
Artificial neural networks (ANNs) are at the core of most deep learning (DL) algorithms that successfully tackle complex problems like image recognition, autonomous driving, and natural language processing. However, unlike biological brains, which tackle similar problems in a very efficient manner, DL algorithms require a large number of trainable parameters, making them energy-intensive and prone to overfitting. Here, we show that a new ANN architecture that incorporates the structured connectivity and restricted sampling properties of biological dendrites counteracts these limitations. We find that dendritic ANNs are more robust to overfitting and outperform traditional ANNs on several image classification tasks while using significantly fewer trainable parameters. This is achieved through the adoption of a different learning strategy, whereby most of the nodes respond to several classes, unlike classical ANNs that strive for class-specificity. These findings suggest that the incorporation of dendrites can make learning in ANNs precise, resilient, and parameter-efficient and shed new light on how biological features can impact the learning strategies of ANNs.
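A sketch of the "restricted sampling" ingredient, assuming PyTorch: each output unit is wired to a small random subset of inputs through a fixed binary mask. The class name and fan-in are hypothetical; the paper's dendritic architecture is richer than this single masked layer.

```python
import torch
import torch.nn as nn

class DendriticLayer(nn.Module):
    """Linear layer whose weight matrix is masked so each 'dendrite'
    (output unit) samples only a small, fixed subset of the inputs."""
    def __init__(self, in_features, out_features, fan_in=16):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        mask = torch.zeros(out_features, in_features)
        for i in range(out_features):
            idx = torch.randperm(in_features)[:fan_in]
            mask[i, idx] = 1.0               # fixed sparse connectivity
        self.register_buffer("mask", mask)

    def forward(self, x):
        return nn.functional.linear(x, self.linear.weight * self.mask,
                                    self.linear.bias)
```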
[ "['Spyridon Chavlis' 'Panayiota Poirazi']" ]
null
null
2404.03709
null
null
http://arxiv.org/abs/2404.03709v1
2024-04-04T11:51:26Z
2024-04-04T11:51:26Z
Proceedings 12th International Workshop on Theorem proving components for Educational software
The ThEdu series pursues the smooth transition from an intuitive way of doing mathematics at secondary school to a more formal approach to the subject in STEM education, while favouring software support for this transition by exploiting the power of theorem-proving technologies. What follows is a brief description of how the present volume contributes to this enterprise. The 12th International Workshop on Theorem Proving Components for Educational Software (ThEdu'23) was a satellite event of the 29th International Conference on Automated Deduction (CADE 2023), July 1-4, 2023, Rome, Italy. ThEdu'23 was very successful, with one invited talk, by Yves Bertot (Inria, France), "The challenges of using Type Theory to teach Mathematics", and seven regular contributions. An open call for papers was then issued, to which eight contributions were submitted. Seven submissions have been accepted by our reviewers, who jointly produced at least three careful reports on each of the contributions. The resulting revised papers are collected in the present volume. We, the volume editors, hope that this collection of papers will further promote the development of theorem-proving based software, and that it will help to improve the mutual understanding between computer scientists, mathematicians, and stakeholders in education. PC Chairs: Julien Narboux (University of Strasbourg, France); Walther Neuper (JKU, Johannes Kepler University, Linz, Austria); Pedro Quaresma (University of Coimbra, Portugal)
[ "['Julien Narboux' 'Walther Neuper' 'Pedro Quaresma']" ]
null
null
2404.03710
null
null
http://arxiv.org/pdf/2404.03710v1
2024-04-04T13:43:17Z
2024-04-04T13:43:17Z
Self-organized arrival system for urban air mobility
Urban air mobility is an innovative mode of transportation in which electric vertical takeoff and landing (eVTOL) vehicles operate between nodes called vertiports. We outline a self-organized vertiport arrival system based on deep reinforcement learning. The airspace around the vertiport is assumed to be circular, and the vehicles can freely operate inside. Each aircraft is considered an individual agent and follows a shared policy, resulting in decentralized actions that are based on local information. We investigate the development of the reinforcement learning policy during training and illustrate how the algorithm moves from suboptimal local holding patterns to a safe and efficient final policy. The latter is validated in simulation-based scenarios and also deployed on small-scale unmanned aerial vehicles to showcase its real-world usability.
[ "['Martin Waltz' 'Ostap Okhrin' 'Michael Schultz']" ]
null
null
2404.03713
null
null
http://arxiv.org/pdf/2404.03713v1
2024-04-04T17:46:20Z
2024-04-04T17:46:20Z
Explaining Explainability: Understanding Concept Activation Vectors
Recent interpretability methods propose using concept-based explanations to translate the internal representations of deep learning models into a language that humans are familiar with: concepts. This requires understanding which concepts are present in the representation space of a neural network. One popular method for finding concepts is Concept Activation Vectors (CAVs), which are learnt using a probe dataset of concept exemplars. In this work, we investigate three properties of CAVs. CAVs may be: (1) inconsistent between layers, (2) entangled with different concepts, and (3) spatially dependent. Each property provides both challenges and opportunities in interpreting models. We introduce tools designed to detect the presence of these properties, provide insight into how they affect the derived explanations, and provide recommendations to minimise their impact. Understanding these properties can be used to our advantage. For example, we introduce spatially dependent CAVs to test if a model is translation invariant with respect to a specific concept and class. Our experiments are performed on ImageNet and a new synthetic dataset, Elements. Elements is designed to capture a known ground truth relationship between concepts and classes. We release this dataset to facilitate further research in understanding and evaluating interpretability methods.
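A minimal sketch of how a CAV is typically learnt from a probe dataset, which is the standard construction this work investigates, using scikit-learn; the variable names are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def learn_cav(concept_acts, random_acts):
    """A CAV is the normal vector of a linear probe trained to separate
    a layer's activations on concept exemplars from those on random images."""
    X = np.vstack([concept_acts, random_acts])
    y = np.concatenate([np.ones(len(concept_acts)),
                        np.zeros(len(random_acts))])
    probe = LogisticRegression(max_iter=1000).fit(X, y)
    cav = probe.coef_[0]
    return cav / np.linalg.norm(cav)   # unit vector in activation space
```

The three properties studied above can then be probed directly, e.g., by comparing CAVs learnt at different layers (consistency) or from spatially distinct activation patches (spatial dependence).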
[ "['Angus Nicolson' 'Lisa Schut' 'J. Alison Noble' 'Yarin Gal']" ]
null
null
2404.03715
null
null
http://arxiv.org/pdf/2404.03715v1
2024-04-04T17:56:41Z
2024-04-04T17:56:41Z
Direct Nash Optimization: Teaching Language Models to Self-Improve with General Preferences
This paper studies post-training large language models (LLMs) using preference feedback from a powerful oracle to help a model iteratively improve over itself. The typical approach for post-training LLMs involves Reinforcement Learning from Human Feedback (RLHF), which traditionally separates reward learning and subsequent policy optimization. However, such a reward maximization approach is limited by the nature of "point-wise" rewards (such as the Bradley-Terry model), which cannot express complex intransitive or cyclic preference relations. While advances in RLHF show that reward learning and policy optimization can be merged into a single contrastive objective for stability, they still remain tethered to the reward maximization framework. Recently, a new wave of research sidesteps the reward maximization presumption in favor of directly optimizing over "pair-wise" or general preferences. In this paper, we introduce Direct Nash Optimization (DNO), a provable and scalable algorithm that marries the simplicity and stability of contrastive learning with the theoretical generality of optimizing general preferences. Because DNO is a batched on-policy algorithm using a regression-based objective, its implementation is straightforward and efficient. Moreover, DNO enjoys monotonic improvement across iterations, which helps it improve even over a strong teacher (such as GPT-4). In our experiments, a resulting 7B parameter Orca-2.5 model aligned by DNO achieves a state-of-the-art win rate against GPT-4-Turbo of 33% on AlpacaEval 2.0 (even after controlling for response length), an absolute gain of 26% (7% to 33%) over the initializing model. It outperforms models with far more parameters, including Mistral Large, Self-Rewarding LM (70B parameters), and older versions of GPT-4.
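A toy sketch of a batched, regression-based preference objective in the spirit described above, in PyTorch: the policy's implicit reward margin for a (winner, loser) pair is regressed toward the logit of the oracle's observed win rate. This simplification is our own illustration, not DNO's exact objective.

```python
import torch
import torch.nn.functional as F

def preference_regression_loss(logp_w, logp_l, ref_logp_w, ref_logp_l,
                               win_rate, beta=0.1):
    """Batched regression objective: push the implicit reward margin of the
    winning vs. losing response toward the logit of the oracle win rate,
    rather than maximizing a learned point-wise reward."""
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    target = torch.logit(win_rate.clamp(1e-4, 1.0 - 1e-4))
    return F.mse_loss(margin, target)
```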
[ "['Corby Rosset' 'Ching-An Cheng' 'Arindam Mitra' 'Michael Santacroce'\n 'Ahmed Awadallah' 'Tengyang Xie']" ]
null
null
2404.03729
null
null
http://arxiv.org/pdf/2404.03729v2
2024-04-09T22:53:57Z
2024-04-04T18:00:15Z
JUICER: Data-Efficient Imitation Learning for Robotic Assembly
While learning from demonstrations is powerful for acquiring visuomotor policies, high-performance imitation without large demonstration datasets remains challenging for tasks requiring precise, long-horizon manipulation. This paper proposes a pipeline for improving imitation learning performance with a small human demonstration budget. We apply our approach to assembly tasks that require precisely grasping, reorienting, and inserting multiple parts over long horizons and multiple task phases. Our pipeline combines expressive policy architectures and various techniques for dataset expansion and simulation-based data augmentation. These help expand dataset support and supervise the model with locally corrective actions near bottleneck regions requiring high precision. We demonstrate our pipeline on four furniture assembly tasks in simulation, enabling a manipulator to assemble up to five parts over nearly 2500 time steps directly from RGB images, outperforming imitation and data augmentation baselines. Project website: https://imitation-juicer.github.io/.
[ "['Lars Ankile' 'Anthony Simeonov' 'Idan Shenfeld' 'Pulkit Agrawal']" ]
null
null
2404.03753
null
null
http://arxiv.org/pdf/2404.03753v2
2024-04-19T19:56:29Z
2024-04-04T18:44:33Z
A Reinforcement Learning based Reset Policy for CDCL SAT Solvers
The restart policy is an important technique used in modern Conflict-Driven Clause Learning (CDCL) solvers, wherein some parts of the solver state are erased at certain intervals during the run of the solver. In most solvers, variable activities are preserved across restart boundaries, resulting in solvers continuing to search parts of the assignment tree that are not far from the one immediately prior to a restart. To enable the solver to search possibly "distant" parts of the assignment tree, we study the effect of resets, a variant of restarts which not only erases the assignment trail, but also randomizes the activity scores of the variables of the input formula after reset, thus potentially enabling a better global exploration of the search space. In this paper, we model the problem of whether to trigger a reset as a multi-armed bandit (MAB) problem, and propose two reinforcement learning (RL) based adaptive reset policies using the Upper Confidence Bound (UCB) and Thompson sampling algorithms. These two algorithms balance the exploration-exploitation tradeoff by adaptively choosing arms (reset vs. no reset) based on their estimated rewards during the solver's run. We implement our reset policies in four baseline SOTA CDCL solvers and compare the baselines against the reset versions on Satcoin benchmarks and SAT Competition instances. Our results show that the RL-based reset versions outperform the corresponding baseline solvers on both Satcoin and the SAT Competition instances, suggesting that our RL policy helps to dynamically and profitably adapt the reset frequency for any given input instance. We also introduce the concept of a partial reset, where at least a constant number of variable activities are retained across reset boundaries. Building on previous results, we show that there is an exponential separation between $O(1)$- and $\Omega(n)$-length partial resets.
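A compact sketch of the UCB variant: a two-armed UCB1 bandit consulted at each restart boundary. The reward signal named in the comment is an assumption for illustration; the paper defines its own reward.

```python
import math

class UCBResetPolicy:
    """Arm 0 = ordinary restart (keep activities), arm 1 = reset (randomize
    activities). Reward could be, e.g., search progress since the last
    restart (an illustrative assumption)."""
    def __init__(self, c=2.0):
        self.counts = [0, 0]
        self.values = [0.0, 0.0]
        self.c = c

    def choose(self):
        for arm in (0, 1):            # play each arm once before using UCB
            if self.counts[arm] == 0:
                return arm
        total = self.counts[0] + self.counts[1]
        def ucb(a):
            bonus = self.c * math.sqrt(math.log(total) / self.counts[a])
            return self.values[a] + bonus
        return max((0, 1), key=ucb)

    def update(self, arm, reward):    # incremental running-mean update
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]
```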
[ "['Chunxiao Li' 'Charlie Liu' 'Jonathan Chung' 'Zhengyang Lu' 'Piyush Jha'\n 'Vijay Ganesh']" ]
null
null
2404.03759
null
null
http://arxiv.org/pdf/2404.03759v1
2024-04-04T19:06:29Z
2024-04-04T19:06:29Z
Localized Distributional Robustness in Submodular Multi-Task Subset Selection
In this work, we approach the problem of multi-task submodular optimization from the perspective of local distributional robustness, within the neighborhood of a reference distribution which assigns an importance score to each task. We first propose augmenting the standard multi-task objective with a regularization term based on the relative entropy to the reference distribution. We then demonstrate through duality that this novel formulation is itself equivalent to the maximization of a submodular function, which may be efficiently carried out through standard greedy selection methods. This approach bridges the existing gap in the optimization of performance-robustness trade-offs in multi-task subset selection. To numerically validate our theoretical results, we test the proposed method in two different settings, one involving the selection of satellites in low Earth orbit constellations in the context of a sensor selection problem, and the other involving an image summarization task using neural networks. Our method is compared with two other algorithms focused on optimizing the performance of the worst-case task, and on directly optimizing the performance on the reference distribution itself. We conclude that our novel formulation produces a solution that is locally distributionally robust and computationally inexpensive.
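Since the regularized objective reduces (via duality) to maximizing a submodular function, the standard greedy rule applies directly; a generic sketch, with placeholder names:

```python
def greedy_subset(ground_set, f, budget):
    """Greedy maximization of a monotone submodular set function f: at each
    step, add the element with the largest marginal gain. For monotone
    submodular f this enjoys the classic (1 - 1/e) approximation guarantee."""
    selected = set()
    for _ in range(budget):
        candidates = [e for e in ground_set if e not in selected]
        if not candidates:
            break
        best = max(candidates, key=lambda e: f(selected | {e}) - f(selected))
        selected.add(best)
    return selected
```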
[ "['Ege C. Kaya' 'Abolfazl Hashemi']" ]
null
null
2404.03761
null
null
http://arxiv.org/pdf/2404.03761v1
2024-04-04T19:07:21Z
2024-04-04T19:07:21Z
Learning smooth functions in high dimensions: from sparse polynomials to deep neural networks
Learning approximations to smooth target functions of many variables from finite sets of pointwise samples is an important task in scientific computing and its many applications in computational science and engineering. Despite well over half a century of research on high-dimensional approximation, this remains a challenging problem. Yet, significant advances have been made in the last decade towards efficient methods for doing this, commencing with so-called sparse polynomial approximation methods and continuing most recently with methods based on Deep Neural Networks (DNNs). In tandem, there have been substantial advances in the relevant approximation theory and analysis of these techniques. In this work, we survey this recent progress. We describe the contemporary motivations for this problem, which stem from parametric models and computational uncertainty quantification; the relevant function classes, namely, classes of infinite-dimensional, Banach-valued, holomorphic functions; fundamental limits of learnability from finite data for these classes; and finally, sparse polynomial and DNN methods for efficiently learning such functions from finite data. For the latter, there is currently a significant gap between the approximation theory of DNNs and the practical performance of deep learning. Aiming to narrow this gap, we develop the topic of practical existence theory, which asserts the existence of dimension-independent DNN architectures and training strategies that achieve provably near-optimal generalization errors in terms of the amount of training data.
[ "['Ben Adcock' 'Simone Brugiapaglia' 'Nick Dexter' 'Sebastian Moraga']" ]
null
null
2404.03764
null
null
http://arxiv.org/pdf/2404.03764v1
2024-03-30T07:32:58Z
2024-03-30T07:32:58Z
CONCERT: Covariate-Elaborated Robust Local Information Transfer with Conditional Spike-and-Slab Prior
The popularity of transfer learning stems from the fact that it can borrow information from useful auxiliary datasets. Existing statistical transfer learning methods usually adopt a global similarity measure between the source data and the target data, which may lead to inefficiency when only local information is shared. In this paper, we propose a novel Bayesian transfer learning method named "CONCERT" to allow robust local information transfer for high-dimensional data analysis. A novel conditional spike-and-slab prior is introduced in the joint distribution of target and source parameters for information transfer. By incorporating covariate-specific priors, we can characterize the local similarities and make the sources work collaboratively to help improve the performance on the target. Distinguished from existing work, CONCERT is a one-step procedure, which achieves variable selection and information transfer simultaneously. Variable selection consistency is established for our CONCERT. To make our algorithm scalable, we adopt the variational Bayes framework to facilitate implementation. Extensive experiments and a genetic data analysis demonstrate the validity and the advantage of CONCERT over existing cutting-edge transfer learning methods. We also extend CONCERT to logistic models, with numerical studies showing its superiority over other methods.
[ "['Ruqian Zhang' 'Yijiao Zhang' 'Annie Qu' 'Zhongyi Zhu' 'Juan Shen']" ]
null
null
2404.03769
null
null
http://arxiv.org/pdf/2404.03769v1
2024-04-04T19:28:38Z
2024-04-04T19:28:38Z
On Extending the Automatic Test Markup Language (ATML) for Machine Learning
This paper addresses the urgent need for messaging standards in the operational test and evaluation (T&E) of machine learning (ML) applications, particularly in edge ML applications embedded in systems like robots, satellites, and unmanned vehicles. It examines the suitability of the IEEE Standard 1671 (IEEE Std 1671), known as the Automatic Test Markup Language (ATML), an XML-based standard originally developed for electronic systems, for ML application testing. The paper explores extending IEEE Std 1671 to encompass the unique challenges of ML applications, including the use of datasets and dependencies on software. Through modeling various tests such as adversarial robustness and drift detection, this paper offers a framework adaptable to specific applications, suggesting that minor modifications to ATML might suffice to address the novelties of ML. This paper differentiates ATML's focus on testing from other ML standards like Predictive Model Markup Language (PMML) or Open Neural Network Exchange (ONNX), which concentrate on ML model specification. We conclude that ATML is a promising tool for effective, near real-time operational T&E of ML applications, an essential aspect of AI lifecycle management, safety, and governance.
[ "['Tyler Cody' 'Bingtong Li' 'Peter A. Beling']" ]
null
null
2404.03774
null
null
http://arxiv.org/pdf/2404.03774v1
2024-04-04T19:35:41Z
2024-04-04T19:35:41Z
Exploration is Harder than Prediction: Cryptographically Separating Reinforcement Learning from Supervised Learning
Supervised learning is often computationally easy in practice. But to what extent does this mean that other modes of learning, such as reinforcement learning (RL), ought to be computationally easy by extension? In this work we show the first cryptographic separation between RL and supervised learning, by exhibiting a class of block MDPs and associated decoding functions where reward-free exploration is provably computationally harder than the associated regression problem. We also show that there is no computationally efficient algorithm for reward-directed RL in block MDPs, even when given access to an oracle for this regression problem. It is known that being able to perform regression in block MDPs is necessary for finding a good policy; our results suggest that it is not sufficient. Our separation lower bound uses a new robustness property of the Learning Parities with Noise (LPN) hardness assumption, which is crucial in handling the dependent nature of RL data. We argue that separations and oracle lower bounds, such as ours, are a more meaningful way to prove hardness of learning because the constructions better reflect the practical reality that supervised learning by itself is often not the computational bottleneck.
[ "['Noah Golowich' 'Ankur Moitra' 'Dhruv Rohatgi']" ]
null
null
2404.03775
null
null
http://arxiv.org/pdf/2404.03775v1
2024-04-04T19:36:47Z
2024-04-04T19:36:47Z
A Systems Theoretic Approach to Online Machine Learning
The machine learning formulation of online learning is incomplete from a systems theoretic perspective. Typically, machine learning research emphasizes domains and tasks, and a problem-solving worldview. It focuses on algorithm parameters, features, and samples, and neglects the perspective offered by considering system structure and system behavior or dynamics. Online learning is an active field of research and has been widely explored in terms of statistical theory and computational algorithms; however, in general, the literature still lacks formal systems-theoretic frameworks for modeling online learning systems and resolving systems-related concept drift issues. Furthermore, while the machine learning formulation serves to classify methods and literature, the systems theoretic formulation presented herein serves to provide a framework for the top-down design of online learning systems, including a novel definition of online learning and the identification of key design parameters. The framework is formulated in terms of input-output systems and is further divided into system structure and system behavior. Concept drift is a critical challenge faced in online learning, and this work formally approaches it as part of the system behavior characteristics. Healthcare provider fraud detection using machine learning is used as a case study throughout the paper to ground the discussion in a real-world online learning challenge.
[ "['Anli du Preez' 'Peter A. Beling' 'Tyler Cody']" ]
null
null
2404.03784
null
null
http://arxiv.org/pdf/2404.03784v1
2024-04-04T19:55:11Z
2024-04-04T19:55:11Z
Layerwise Early Stopping for Test Time Adaptation
Test Time Adaptation (TTA) addresses the problem of distribution shift by enabling pretrained models to learn new features on an unseen domain at test time. However, it poses a significant challenge to maintain a balance between learning new features and retaining useful pretrained features. In this paper, we propose Layerwise EArly STopping (LEAST) for TTA to address this problem. The key idea is to stop adapting individual layers during TTA if the features being learned do not appear beneficial for the new domain. For that purpose, we propose using a novel gradient-based metric to measure the relevance of the current learnt features to the new domain without the need for supervised labels. More specifically, we propose to use this metric to determine dynamically when to stop updating each layer during TTA. This enables a more balanced adaptation, restricted to layers benefiting from it, and only for a certain number of steps. Such an approach also has the added effect of limiting the forgetting of pretrained features useful for dealing with new domains. Through extensive experiments, we demonstrate that Layerwise Early Stopping improves the performance of existing TTA approaches across multiple datasets, domain shifts, model architectures, and TTA losses.
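A sketch of the layerwise stopping loop, assuming PyTorch; the specific relevance score used here (gradient norm against its running average) is our illustrative stand-in for the paper's gradient-based metric.

```python
import torch

def layerwise_tta_step(model, tta_loss, ema, frozen, ratio=0.1, decay=0.9):
    """One adaptation step with layerwise early stopping: after backprop of
    the unsupervised TTA loss, freeze any top-level module whose gradient
    signal has decayed well below its running average."""
    tta_loss.backward()
    for name, module in model.named_children():
        if name in frozen:
            continue
        grads = [p.grad.norm().item() for p in module.parameters()
                 if p.grad is not None]
        if not grads:
            continue
        g = sum(grads)
        ema[name] = decay * ema.get(name, g) + (1.0 - decay) * g
        if g < ratio * ema[name]:        # layer no longer learning usefully
            frozen.add(name)
            for p in module.parameters():
                p.requires_grad_(False)
```

Here `ema` is a persistent dict of running gradient norms and `frozen` a set of layer names, both carried across test-time steps.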
[ "['Sabyasachi Sahoo' 'Mostafa ElAraby' 'Jonas Ngnawe' 'Yann Pequignot'\n 'Frederic Precioso' 'Christian Gagne']" ]
null
null
2404.03800
null
null
http://arxiv.org/pdf/2404.03800v1
2024-04-04T20:44:56Z
2024-04-04T20:44:56Z
Learning Social Fairness Preferences from Non-Expert Stakeholder Opinions in Kidney Placement
Modern kidney placement incorporates several intelligent recommendation systems which exhibit social discrimination due to biases inherited from training data. Although initial attempts were made in the literature to study algorithmic fairness in kidney placement, these methods replace true outcomes with surgeons' decisions due to the long delays involved in recording such outcomes reliably. However, the replacement of true outcomes with surgeons' decisions disregards expert stakeholders' biases as well as social opinions of other stakeholders who do not possess medical expertise. This paper alleviates the latter concern and designs a novel fairness feedback survey to evaluate an acceptance rate predictor (ARP) that predicts a kidney's acceptance rate in a given kidney-match pair. The survey is launched on Prolific, a crowdsourcing platform, and public opinions are collected from 85 anonymous crowd participants. A novel social fairness preference learning algorithm is proposed based on minimizing social feedback regret computed using a novel logit-based fairness feedback model. The proposed model and learning algorithm are both validated using simulation experiments as well as Prolific data. Public preferences towards group fairness notions in the context of kidney placement have been estimated and discussed in detail. The specific ARP tested in the Prolific survey has been deemed fair by the participants.
[ "['Mukund Telukunta' 'Sukruth Rao' 'Gabriella Stickney'\n 'Venkata Sriram Siddardh Nadendla' 'Casey Canfield']" ]
null
null
2404.03804
null
null
http://arxiv.org/pdf/2404.03804v1
2024-04-04T20:51:37Z
2024-04-04T20:51:37Z
TransformerLSR: Attentive Joint Model of Longitudinal Data, Survival, and Recurrent Events with Concurrent Latent Structure
In applications such as biomedical studies, epidemiology, and social sciences, recurrent events often co-occur with longitudinal measurements and a terminal event, such as death. Therefore, jointly modeling longitudinal measurements, recurrent events, and survival data while accounting for their dependencies is critical. While joint models for the three components exist in statistical literature, many of these approaches are limited by heavy parametric assumptions and scalability issues. Recently, incorporating deep learning techniques into joint modeling has shown promising results. However, current methods only address joint modeling of longitudinal measurements at regularly spaced observation times and survival events, neglecting recurrent events. In this paper, we develop TransformerLSR, a flexible transformer-based deep modeling and inference framework to jointly model all three components simultaneously. TransformerLSR integrates deep temporal point processes into the joint modeling framework, treating recurrent and terminal events as two competing processes dependent on past longitudinal measurements and recurrent event times. Additionally, TransformerLSR introduces a novel trajectory representation and model architecture to potentially incorporate a priori knowledge of known latent structures among concurrent longitudinal variables. We demonstrate the effectiveness and necessity of TransformerLSR through simulation studies and analyzing a real-world medical dataset on patients after kidney transplantation.
[ "['Zhiyue Zhang' 'Yao Zhao' 'Yanxun Xu']" ]
null
null
2404.03813
null
null
http://arxiv.org/pdf/2404.03813v1
2024-04-04T21:39:47Z
2024-04-04T21:39:47Z
Agnostic Tomography of Stabilizer Product States
We define a quantum learning task called agnostic tomography, where given copies of an arbitrary state $\rho$ and a class of quantum states $\mathcal{C}$, the goal is to output a succinct description of a state that approximates $\rho$ at least as well as any state in $\mathcal{C}$ (up to some small error $\varepsilon$). This task generalizes ordinary quantum tomography of states in $\mathcal{C}$ and is more challenging because the learning algorithm must be robust to perturbations of $\rho$. We give an efficient agnostic tomography algorithm for the class $\mathcal{C}$ of $n$-qubit stabilizer product states. Assuming $\rho$ has fidelity at least $\tau$ with a stabilizer product state, the algorithm runs in time $n^{O(1 + \log(1/\tau))} / \varepsilon^2$. This runtime is quasipolynomial in all parameters, and polynomial if $\tau$ is a constant.
[ "['Sabee Grewal' 'Vishnu Iyer' 'William Kretschmer' 'Daniel Liang']" ]
null
null
2404.03827
null
null
http://arxiv.org/pdf/2404.03827v2
2024-06-12T18:57:21Z
2024-04-04T23:05:30Z
Uniform Memory Retrieval with Larger Capacity for Modern Hopfield Models
We propose a two-stage memory retrieval dynamics for modern Hopfield models, termed $\mathtt{U\text{-}Hop}$, with enhanced memory capacity. Our key contribution is a learnable feature map $\Phi$ which transforms the Hopfield energy function into kernel space. This transformation ensures convergence between the local minima of energy and the fixed points of retrieval dynamics within the kernel space. Consequently, the kernel norm induced by $\Phi$ serves as a novel similarity measure. It utilizes the stored memory patterns as learning data to enhance memory capacity across all modern Hopfield models. Specifically, we accomplish this by constructing a separation loss $\mathcal{L}_\Phi$ that separates the local minima of the kernelized energy by separating stored memory patterns in kernel space. Methodologically, the $\mathtt{U\text{-}Hop}$ memory retrieval process consists of: (Stage I) minimizing the separation loss for a more uniform memory (local minimum) distribution, followed by (Stage II) standard Hopfield energy minimization for memory retrieval. This results in a significant reduction of possible metastable states in the Hopfield energy function, thus enhancing memory capacity by preventing memory confusion. Empirically, with real-world datasets, we demonstrate that $\mathtt{U\text{-}Hop}$ outperforms all existing modern Hopfield models and state-of-the-art similarity measures, achieving substantial improvements in both associative memory retrieval and deep learning tasks. Code is available at https://github.com/MAGICS-LAB/UHop ; future updates are on arXiv:2404.03827
[ "['Dennis Wu' 'Jerry Yao-Chieh Hu' 'Teng-Yun Hsiao' 'Han Liu']" ]
null
null
2404.03828
null
null
http://arxiv.org/pdf/2404.03828v2
2024-06-26T20:50:18Z
2024-04-04T23:08:43Z
Outlier-Efficient Hopfield Layers for Large Transformer-Based Models
We introduce an Outlier-Efficient Modern Hopfield Model (termed $\mathrm{OutEffHop}$) and use it to address the outlier inefficiency problem of training gigantic transformer-based models. Our main contribution is a novel associative memory model facilitating \textit{outlier-efficient} associative memory retrievals. Interestingly, this memory model manifests a model-based interpretation of an outlier-efficient attention mechanism ($\mathrm{Softmax}_1$): it is an approximation of the memory retrieval process of $\mathrm{OutEffHop}$. Methodologically, this allows us to introduce novel outlier-efficient Hopfield layers as powerful alternatives to traditional attention mechanisms, with superior post-quantization performance. Theoretically, the Outlier-Efficient Modern Hopfield Model retains and improves the desirable properties of standard modern Hopfield models, including fixed point convergence and exponential storage capacity. Empirically, we demonstrate the efficacy of the proposed model across large-scale transformer-based and Hopfield-based models (including BERT, OPT, ViT, and STanHop-Net), benchmarking against state-of-the-art methods like $\mathtt{Clipped\_Softmax}$ and $\mathtt{Gated\_Attention}$. Notably, $\mathrm{OutEffHop}$ achieves an average reduction of 22+% in average kurtosis and 26+% in the maximum infinity norm of model outputs across four models. Code is available at \href{https://github.com/MAGICS-LAB/OutEffHop}{GitHub}; models are on \href{https://huggingface.co/collections/magicslabnu/outeffhop-6610fcede8d2cda23009a98f}{Hugging Face Hub}; future updates are on \href{https://arxiv.org/abs/2404.03828}{arXiv}.
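For reference, the $\mathrm{Softmax}_1$ attention variant mentioned above has a simple closed form, exp(x_i) / (1 + sum_j exp(x_j)); a numerically stable PyTorch sketch:

```python
import torch

def softmax_1(x, dim=-1):
    """Softmax with an extra 1 in the denominator:
    softmax_1(x)_i = exp(x_i) / (1 + sum_j exp(x_j)).
    Unlike ordinary softmax, a head can assign (near-)zero total weight.
    Shifting by max(x, 0) keeps the computation numerically stable while
    preserving the +1 term exactly."""
    m = x.max(dim=dim, keepdim=True).values.clamp(min=0.0)
    e = torch.exp(x - m)
    return e / (torch.exp(-m) + e.sum(dim=dim, keepdim=True))
```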
[ "['Jerry Yao-Chieh Hu' 'Pei-Hsuan Chang' 'Robin Luo' 'Hong-Yu Chen'\n 'Weijian Li' 'Wei-Po Wang' 'Han Liu']" ]
null
null
2404.03830
null
null
http://arxiv.org/pdf/2404.03830v2
2024-07-12T22:45:41Z
2024-04-04T23:13:32Z
BiSHop: Bi-Directional Cellular Learning for Tabular Data with Generalized Sparse Modern Hopfield Model
We introduce the \textbf{B}i-Directional \textbf{S}parse \textbf{Hop}field Network (\textbf{BiSHop}), a novel end-to-end framework for deep tabular learning. BiSHop handles the two major challenges of deep tabular learning: non-rotationally invariant data structure and feature sparsity in tabular data. Our key motivation comes from the recently established connection between associative memory and attention mechanisms. Consequently, BiSHop uses a dual-component approach, sequentially processing data both column-wise and row-wise through two interconnected directional learning modules. Computationally, these modules house layers of generalized sparse modern Hopfield layers, a sparse extension of the modern Hopfield model with adaptable sparsity. Methodologically, BiSHop facilitates multi-scale representation learning, capturing both intra-feature and inter-feature interactions, with adaptive sparsity at each scale. Empirically, through experiments on diverse real-world datasets, we demonstrate that BiSHop surpasses current SOTA methods with significantly fewer HPO runs, marking it as a robust solution for deep tabular learning.
[ "['Chenwei Xu' 'Yu-Chao Huang' 'Jerry Yao-Chieh Hu' 'Weijian Li'\n 'Ammar Gilani' 'Hsi-Sheng Goan' 'Han Liu']" ]
null
null
2404.03833
null
null
http://arxiv.org/pdf/2404.03833v1
2024-04-04T23:30:01Z
2024-04-04T23:30:01Z
An ExplainableFair Framework for Prediction of Substance Use Disorder Treatment Completion
Fairness of machine learning models in healthcare has drawn increasing attention from clinicians, researchers, and even the highest levels of government. On the other hand, the importance of developing and deploying interpretable or explainable models has been demonstrated, and is essential to increasing the trustworthiness and likelihood of adoption of these models. The objective of this study was to develop and implement a framework for addressing both of these issues - fairness and explainability. We propose an explainable fairness framework, first developing a model with optimized performance, and then using an in-processing approach to mitigate model biases relative to the sensitive attributes of race and sex. We then explore and visualize explanations of the model changes that lead to the fairness enhancement by examining the changes in the importance of features. Our resulting fairness-enhanced models retain high sensitivity with improved fairness, and the explanations of the fairness enhancement may provide helpful insights for healthcare providers to guide clinical decision-making and resource allocation.
[ "['Mary M. Lucas' 'Xiaoyang Wang' 'Chia-Hsuan Chang' 'Christopher C. Yang'\n 'Jacqueline E. Braughton' 'Quyen M. Ngo']" ]
null
null
2404.03843
null
null
http://arxiv.org/pdf/2404.03843v2
2024-05-13T22:37:49Z
2024-04-05T00:25:37Z
Scaling Motion Forecasting Models with Ensemble Distillation
Motion forecasting has become an increasingly critical component of autonomous robotic systems. Onboard compute budgets typically limit the accuracy of real-time systems. In this work we propose methods of improving motion forecasting systems subject to limited compute budgets by combining model ensemble and distillation techniques. The use of ensembles of deep neural networks has been shown to improve generalization accuracy in many application domains. We first demonstrate significant performance gains by creating a large ensemble of optimized single models. We then develop a generalized framework to distill motion forecasting model ensembles into small student models which retain high performance with a fraction of the computing cost. For this study we focus on the task of motion forecasting using real world data from autonomous driving systems. We develop ensemble models that are very competitive on the Waymo Open Motion Dataset (WOMD) and Argoverse leaderboards. From these ensembles, we train distilled student models which have high performance at a fraction of the compute costs. These experiments demonstrate distillation from ensembles as an effective method for improving accuracy of predictive models for robotic systems with limited compute budgets.
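A minimal sketch of the distillation step in its generic (classification) form, assuming PyTorch: the student matches the ensemble's averaged, softened predictions. Motion forecasts are trajectory distributions rather than class logits, so this illustrates the principle, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def ensemble_distill_loss(student_logits, ensemble_logits_list, T=2.0):
    """Match the student to the averaged softened predictive distribution of
    the ensemble. T > 1 softens targets; the usual T^2 factor keeps gradient
    magnitudes comparable across temperatures."""
    with torch.no_grad():
        teacher = torch.stack([F.softmax(l / T, dim=-1)
                               for l in ensemble_logits_list]).mean(dim=0)
    return F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                    teacher, reduction="batchmean") * T * T
```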
[ "['Scott Ettinger' 'Kratarth Goel' 'Avikalp Srivastava' 'Rami Al-Rfou']" ]
null
null
2404.03854
null
null
http://arxiv.org/pdf/2404.03854v2
2024-05-24T15:08:38Z
2024-04-05T01:17:25Z
Align as Ideal: Cross-Modal Alignment Binding for Federated Medical Vision-Language Pre-training
Vision-language pre-training (VLP) has arisen as an efficient scheme for multimodal representation learning, but it requires large-scale multimodal data for pre-training, which is an obstacle especially for medical applications. To overcome the data limitation, federated learning (FL) can be a promising strategy to scale up the dataset for medical VLP while protecting data privacy. However, client data are often heterogeneous in real-world scenarios, and we observe that local training on heterogeneous client data would distort the multimodal representation learning and lead to biased cross-modal alignment. To address this challenge, we propose a Federated Align as IDeal (FedAID) framework for federated VLP with robustness to data heterogeneity, to bind local clients with an ideal cross-modal alignment. Specifically, to reduce distortions on global-aggregated features while learning diverse semantics from client datasets during local training, we propose to bind the cross-modal aligned representation space learned by local models with an unbiased one via guidance-based regularization. Moreover, we employ a distribution-based min-max optimization to learn the unbiased cross-modal alignment at each communication turn of federated pre-training. Experiments on real-world datasets demonstrate that our method successfully promotes efficient federated multimodal learning for medical VLP with data heterogeneity.
[ "['Zitao Shuai' 'Liyue Shen']" ]
null
null
2404.03865
null
null
http://arxiv.org/pdf/2404.03865v1
2024-04-05T02:35:43Z
2024-04-05T02:35:43Z
FFN-SkipLLM: A Hidden Gem for Autoregressive Decoding with Adaptive Feed Forward Skipping
Autoregressive Large Language Models (e.g., LLaMA, GPTs) are omnipresent, achieving remarkable success in language understanding and generation. However, such impressive capability typically comes with a substantial model size, which presents significant challenges for autoregressive token-by-token generation. To mitigate the computational overhead incurred during generation, several early-exit and layer-dropping strategies have been proposed. Despite some promising success due to the redundancy across LLM layers on metrics like ROUGE-L/BLEU, our careful knowledge-intensive evaluation unveils issues such as generation collapse, hallucination of wrong facts, and a noticeable performance drop even at the trivial exit ratio of 10-15% of layers. We attribute these errors primarily to the ineffective handling of the KV cache through state copying during early exit. In this work, we observe the saturation of computationally expensive feed-forward blocks of LLM layers and propose FFN-SkipLLM, a novel fine-grained skip strategy for autoregressive LLMs. More specifically, FFN-SkipLLM is an input-adaptive feed-forward skipping strategy that can skip 25-30% of the FFN blocks of LLMs with a marginal change in performance on knowledge-intensive generation tasks, without any requirement to handle the KV cache. Our extensive experiments and ablations across benchmarks like MT-Bench, Factoid-QA, and variable-length text summarization illustrate how our simple and easy-to-use method can facilitate faster autoregressive decoding.
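A sketch of input-adaptive FFN skipping, assuming PyTorch; the cheap mean-token probe and the cosine threshold are our illustrative criterion, not the paper's exact rule.

```python
import torch
import torch.nn.functional as F

def adaptive_ffn(x, ffn, threshold=0.98):
    """Input-adaptive skipping of a residual FFN block: cheaply probe the
    block on the mean token and skip the full computation when the probe
    shows it would barely change the representation. Attention and the KV
    cache are untouched, so no state copying is needed."""
    probe_in = x.mean(dim=1)                    # (batch, d_model)
    probe_out = probe_in + ffn(probe_in)
    sim = F.cosine_similarity(probe_in, probe_out, dim=-1).mean()
    if sim > threshold:                         # FFN is saturated here
        return x
    return x + ffn(x)                           # standard residual FFN
```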
[ "['Ajay Jaiswal' 'Bodun Hu' 'Lu Yin' 'Yeonju Ro' 'Shiwei Liu'\n 'Tianlong Chen' 'Aditya Akella']" ]
null
null
2404.03868
null
null
http://arxiv.org/pdf/2404.03868v1
2024-04-05T02:53:51Z
2024-04-05T02:53:51Z
Extract, Define, Canonicalize: An LLM-based Framework for Knowledge Graph Construction
In this work, we are interested in automated methods for knowledge graph creation (KGC) from input text. Progress on large language models (LLMs) has prompted a series of recent works applying them to KGC, e.g., via zero/few-shot prompting. Despite successes on small domain-specific datasets, these models face difficulties scaling up to text common in many real-world applications. A principal issue is that in prior methods, the KG schema has to be included in the LLM prompt to generate valid triplets; larger and more complex schema easily exceed the LLMs' context window length. To address this problem, we propose a three-phase framework named Extract-Define-Canonicalize (EDC): open information extraction followed by schema definition and post-hoc canonicalization. EDC is flexible in that it can be applied to settings where a pre-defined target schema is available and when it is not; in the latter case, it constructs a schema automatically and applies self-canonicalization. To further improve performance, we introduce a trained component that retrieves schema elements relevant to the input text; this improves the LLMs' extraction performance in a retrieval-augmented generation-like manner. We demonstrate on three KGC benchmarks that EDC is able to extract high-quality triplets without any parameter tuning and with significantly larger schemas compared to prior works.
[ "['Bowen Zhang' 'Harold Soh']" ]
null
null
2404.03869
null
null
http://arxiv.org/pdf/2404.03869v1
2024-04-05T03:02:57Z
2024-04-05T03:02:57Z
Heterogeneous Multi-Agent Reinforcement Learning for Zero-Shot Scalable Collaboration
The rise of multi-agent systems, especially the success of multi-agent reinforcement learning (MARL), is reshaping our future across diverse domains like autonomous vehicle networks. However, MARL still faces significant challenges, particularly in achieving zero-shot scalability, which allows trained MARL models to be directly applied to unseen tasks with varying numbers of agents. In addition, real-world multi-agent systems usually contain agents with different functions and strategies, while existing scalable MARL methods only have limited heterogeneity. To address this, we propose a novel MARL framework named Scalable and Heterogeneous Proximal Policy Optimization (SHPPO), integrating heterogeneity into parameter-shared PPO-based MARL networks. We first leverage a latent network to adaptively learn strategy patterns for each agent. Second, we introduce a heterogeneous layer for decision-making, whose parameters are specifically generated by the learned latent variables. Our approach is scalable, as all the parameters are shared except for the heterogeneous layer, and it gains both inter-individual and temporal heterogeneity at the same time. We implement our approach on top of a state-of-the-art PPO-based backbone to obtain SHPPO, while our approach is agnostic to the backbone and can be seamlessly plugged into any parameter-shared MARL method. SHPPO exhibits superior performance over baselines such as MAPPO and HAPPO in classic MARL environments like the StarCraft Multi-Agent Challenge (SMAC) and Google Research Football (GRF), showcasing enhanced zero-shot scalability and offering insights, via visualization, into the learned latent representation's impact on team performance.
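A sketch of the "heterogeneous layer" idea, assuming PyTorch: a shared hypernetwork maps each agent's learned latent to that agent's decision-layer weights, so every other parameter can stay shared. Class and dimension names are hypothetical.

```python
import torch
import torch.nn as nn

class HeterogeneousHead(nn.Module):
    """Decision layer whose weights are generated per agent from a learned
    latent z; only this head differs across agents."""
    def __init__(self, latent_dim, in_dim, n_actions):
        super().__init__()
        self.in_dim, self.n_actions = in_dim, n_actions
        self.hyper = nn.Linear(latent_dim, in_dim * n_actions + n_actions)

    def forward(self, features, z):
        # features: (n_agents, in_dim), z: (n_agents, latent_dim)
        params = self.hyper(z)
        w = params[:, : self.in_dim * self.n_actions]
        b = params[:, self.in_dim * self.n_actions :]
        w = w.view(-1, self.n_actions, self.in_dim)
        logits = torch.bmm(w, features.unsqueeze(-1)).squeeze(-1) + b
        return logits                 # per-agent action logits
```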
[ "['Xudong Guo' 'Daming Shi' 'Junjie Yu' 'Wenhui Fan']" ]
null
null
2404.03870
null
null
http://arxiv.org/pdf/2404.03870v1
2024-04-05T03:11:24Z
2024-04-05T03:11:24Z
Optimizing Convolutional Neural Networks for Identifying Invasive Pollinator Apis Mellifera and Finding a Ligand drug to Protect California's Biodiversity
In North America, there are many diverse species of native bees crucial for the environment, which are the primary pollinators of most native floral species. The Californian agriculture industry imports European honeybees (Apis mellifera) primarily for pollinating almonds. Unfortunately, this has resulted in the unintended consequence of disrupting the native ecosystem and threatening many native bee species as they are outcompeted for food. Our first step toward protecting the native species is identification, using a Convolutional Neural Network (CNN) to differentiate common native bee species from invasive ones. Removing invasive colonies efficiently without harming native species is difficult, as pesticides cause myriad diseases in native species. Our approach seeks to prevent the formation of new queens, causing the colony's collapse. Workers secrete royal jelly, a substance that causes fertility and longevity; it is fed to future honeybee queens. Targeting the production of this substance is safe, as no native species use it; small organic molecules (ligands) prevent the proteins Apisimin and MRJP1 from combining and producing an oligomer used to form the substance. Ideal ligands bind to only one of these proteins, preventing them from joining together: they have a high affinity for one receptor and a significantly lower affinity for the other. We optimized the CNN to provide a framework for creating machine learning models that excel at differentiating between subspecies of insects by measuring the effects of image alteration and class grouping on model performance. The CNN is able to achieve an accuracy of 82% in differentiating between invasive and native bee species; 3 ligands have been identified as effective. Our new approach offers a promising solution to curb the spread of invasive bees within California through an identification and neutralization method.
[ "['Arnav Swaroop']" ]