categories: string
doi: string
id: string
year: float64
venue: string
link: string
updated: string
published: string
title: string
abstract: string
authors: list
null
null
2404.16789
null
null
http://arxiv.org/pdf/2404.16789v2
2024-06-30T02:19:00Z
2024-04-25T17:38:57Z
Continual Learning of Large Language Models: A Comprehensive Survey
The recent success of large language models (LLMs) trained on static, pre-collected, general datasets has sparked numerous research directions and applications. One such direction addresses the non-trivial challenge of integrating pre-trained LLMs into dynamic data distributions, task structures, and user preferences. Pre-trained LLMs, when tailored for specific needs, often experience significant performance degradation in previous knowledge domains -- a phenomenon known as "catastrophic forgetting". While extensively studied in the continual learning (CL) community, it presents new manifestations in the realm of LLMs. In this survey, we provide a comprehensive overview of the current research progress on LLMs within the context of CL. This survey is structured into four main sections: we first present an overview of continually learning LLMs, consisting of two directions of continuity: vertical continuity (or vertical continual learning), i.e., continual adaptation from general to specific capabilities, and horizontal continuity (or horizontal continual learning), i.e., continual adaptation across time and domains (Section 3). We then summarize three stages of learning LLMs in the context of modern CL: Continual Pre-Training (CPT), Domain-Adaptive Pre-training (DAP), and Continual Fine-Tuning (CFT) (Section 4). Then we provide an overview of evaluation protocols for continual learning with LLMs, along with the currently available data sources (Section 5). Finally, we discuss intriguing questions pertaining to continual learning for LLMs (Section 6). The full list of papers examined in this survey is available at https://github.com/Wang-ML-Lab/llm-continual-learning-survey.
Haizhou Shi, Zihao Xu, Hengyi Wang, Weiyi Qin, Wenyuan Wang, Yibin Wang, Zifeng Wang, Sayna Ebrahimi, Hao Wang
null
null
2404.16792
null
null
http://arxiv.org/pdf/2404.16792v2
2024-05-22T19:33:30Z
2024-04-25T17:39:50Z
Weak-to-Strong Extrapolation Expedites Alignment
The open-source community is experiencing a surge in the release of large language models (LLMs) that are trained to follow instructions and align with human preference. However, further training to improve them still requires expensive computational resources and data annotations. Is it possible to bypass additional training and cost-effectively acquire better-aligned models? Inspired by the literature on model interpolation, we propose a simple method called ExPO to boost LLMs' alignment with human preference. Utilizing a model that has undergone alignment training (e.g., via DPO or RLHF) and its initial SFT checkpoint, ExPO directly obtains a better-aligned model by extrapolating from the weights of the initial and the aligned models, which implicitly optimizes the alignment objective via first-order approximation. Through experiments with twelve open-source LLMs on HuggingFace, we demonstrate that ExPO consistently improves off-the-shelf DPO/RLHF models, as evaluated on the mainstream LLM benchmarks AlpacaEval 2.0 and MT-Bench. Moreover, ExPO exhibits remarkable scalability across various model sizes (from 1.8B to 70B) and capabilities. Through controlled experiments and further empirical analyses, we shed light on the essence of ExPO amplifying the reward signal learned during alignment training. Our work demonstrates the efficacy of model extrapolation in expediting the alignment of LLMs with human preference, suggesting a promising direction for future research.
Chujie Zheng, Ziqi Wang, Heng Ji, Minlie Huang, Nanyun Peng
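To make the extrapolation step described in the ExPO abstract above concrete, here is a minimal sketch of weight extrapolation between an SFT checkpoint and its aligned counterpart. The function name, the coefficient `alpha`, and its value are illustrative assumptions, not taken from the paper.

```python
def expo_extrapolate(sft_state, aligned_state, alpha=0.5):
    """Weight extrapolation as described in the ExPO abstract: move further
    along the update direction from the SFT checkpoint to the aligned
    checkpoint. `alpha` and its default are illustrative assumptions."""
    return {
        name: aligned_w + alpha * (aligned_w - sft_state[name])
        for name, aligned_w in aligned_state.items()
    }

# Usage sketch (model identifiers are hypothetical placeholders):
# sft = AutoModelForCausalLM.from_pretrained("org/model-sft").state_dict()
# dpo = AutoModelForCausalLM.from_pretrained("org/model-dpo").state_dict()
# better_aligned = expo_extrapolate(sft, dpo, alpha=0.3)
```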
null
null
2404.16795
null
null
http://arxiv.org/pdf/2404.16795v2
2024-06-07T20:39:25Z
2024-04-25T17:40:52Z
In-Context Freeze-Thaw Bayesian Optimization for Hyperparameter Optimization
With the increasing computational costs associated with deep learning, automated hyperparameter optimization methods, strongly relying on black-box Bayesian optimization (BO), face limitations. Freeze-thaw BO offers a promising grey-box alternative, strategically allocating scarce resources incrementally to different configurations. However, the frequent surrogate model updates inherent to this approach pose challenges for existing methods, requiring retraining or fine-tuning their neural network surrogates online, introducing overhead, instability, and hyper-hyperparameters. In this work, we propose FT-PFN, a novel surrogate for Freeze-thaw style BO. FT-PFN is a prior-data fitted network (PFN) that leverages the transformers' in-context learning ability to efficiently and reliably do Bayesian learning curve extrapolation in a single forward pass. Our empirical analysis across three benchmark suites shows that the predictions made by FT-PFN are more accurate and 10-100 times faster than those of the deep Gaussian process and deep ensemble surrogates used in previous work. Furthermore, we show that, when combined with our novel acquisition mechanism (MFPI-random), the resulting in-context freeze-thaw BO method (ifBO), yields new state-of-the-art performance in the same three families of deep learning HPO benchmarks considered in prior work.
Herilalaina Rakotoarison, Steven Adriaensen, Neeratyoy Mallik, Samir Garibov, Edward Bergman, Frank Hutter
null
null
2404.16804
null
null
http://arxiv.org/pdf/2404.16804v1
2024-04-25T17:51:10Z
2024-04-25T17:51:10Z
AAPL: Adding Attributes to Prompt Learning for Vision-Language Models
Recent advances in large pre-trained vision-language models have demonstrated remarkable performance on zero-shot downstream tasks. Building upon this, recent studies, such as CoOp and CoCoOp, have proposed the use of prompt learning, where context within a prompt is replaced with learnable vectors, leading to significant improvements over manually crafted prompts. However, the performance improvement for unseen classes is still marginal, and to tackle this problem, data augmentation has been frequently used in traditional zero-shot learning techniques. Through our experiments, we have identified important issues in CoOp and CoCoOp: the context learned through traditional image augmentation is biased toward seen classes, negatively impacting generalization to unseen classes. To address this problem, we propose adversarial token embedding to disentangle low-level visual augmentation features from high-level class information when inducing bias in learnable prompts. Through our novel mechanism called "Adding Attributes to Prompt Learning", AAPL, we guide the learnable context to effectively extract text features by focusing on high-level features for unseen classes. We have conducted experiments across 11 datasets, and overall, AAPL shows favorable performances compared to the existing methods in few-shot learning, zero-shot learning, cross-dataset, and domain generalization tasks.
Gahyeon Kim, Sohee Kim, Seokju Lee
null
null
2404.16814
null
null
http://arxiv.org/pdf/2404.16814v1
2024-04-25T17:56:45Z
2024-04-25T17:56:45Z
Meta-Transfer Derm-Diagnosis: Exploring Few-Shot Learning and Transfer Learning for Skin Disease Classification in Long-Tail Distribution
Addressing the challenges of rare diseases is difficult, especially with the limited number of reference images and a small patient population. This is more evident in rare skin diseases, where we encounter long-tailed data distributions that make it difficult to develop unbiased and broadly effective models. The diverse ways in which image datasets are gathered and their distinct purposes also add to these challenges. Our study conducts a detailed examination of the benefits and drawbacks of episodic and conventional training methodologies, adopting a few-shot learning approach alongside transfer learning. We evaluated our models using the ISIC2018, Derm7pt, and SD-198 datasets. With minimal labeled examples, our models showed substantial information gains and better performance compared to previously trained models. Our research emphasizes the improved ability to represent features in DenseNet121 and MobileNetV2 models, achieved by using pre-trained models on ImageNet to increase similarities within classes. Moreover, our experiments, ranging from 2-way to 5-way classifications with up to 10 examples, showed a growing success rate for traditional transfer learning methods as the number of examples increased. The addition of data augmentation techniques significantly improved our transfer learning based model performance, leading to higher performances than existing methods, especially in the SD-198 and ISIC2018 datasets. All source code related to this work will be made publicly available soon at the provided URL.
Zeynep Özdemir, Hacer Yalim Keles, Ömer Özgür Tanrıöver
null
null
2404.16823
null
null
http://arxiv.org/pdf/2404.16823v2
2024-05-22T22:44:28Z
2024-04-25T17:59:41Z
Learning Visuotactile Skills with Two Multifingered Hands
Aiming to replicate human-like dexterity, perceptual experiences, and motion patterns, we explore learning from human demonstrations using a bimanual system with multifingered hands and visuotactile data. Two significant challenges exist: the lack of an affordable and accessible teleoperation system suitable for a dual-arm setup with multifingered hands, and the scarcity of multifingered hand hardware equipped with touch sensing. To tackle the first challenge, we develop HATO, a low-cost hands-arms teleoperation system that leverages off-the-shelf electronics, complemented with a software suite that enables efficient data collection; the comprehensive software suite also supports multimodal data processing, scalable policy learning, and smooth policy deployment. To tackle the latter challenge, we introduce a novel hardware adaptation by repurposing two prosthetic hands equipped with touch sensors for research. Using visuotactile data collected from our system, we learn skills to complete long-horizon, high-precision tasks which are difficult to achieve without multifingered dexterity and touch feedback. Furthermore, we empirically investigate the effects of dataset size, sensing modality, and visual input preprocessing on policy learning. Our results mark a promising step forward in bimanual multifingered manipulation from visuotactile data. Videos, code, and datasets can be found at https://toruowo.github.io/hato/ .
Toru Lin, Yu Zhang, Qiyang Li, Haozhi Qi, Brent Yi, Sergey Levine, Jitendra Malik
null
null
2404.16828
null
null
http://arxiv.org/pdf/2404.16828v1
2024-04-25T17:59:56Z
2024-04-25T17:59:56Z
Made to Order: Discovering monotonic temporal changes via self-supervised video ordering
Our objective is to discover and localize monotonic temporal changes in a sequence of images. To achieve this, we exploit a simple proxy task of ordering a shuffled image sequence, with `time' serving as a supervisory signal since only changes that are monotonic with time can give rise to the correct ordering. We also introduce a flexible transformer-based model for general-purpose ordering of image sequences of arbitrary length with built-in attribution maps. After training, the model successfully discovers and localizes monotonic changes while ignoring cyclic and stochastic ones. We demonstrate applications of the model in multiple video settings covering different scene and object types, discovering both object-level and environmental changes in unseen sequences. We also demonstrate that the attention-based attribution maps function as effective prompts for segmenting the changing regions, and that the learned representations can be used for downstream applications. Finally, we show that the model achieves the state of the art on standard benchmarks for ordering a set of images.
Charig Yang, Weidi Xie, Andrew Zisserman
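The ordering proxy task in the abstract above can be illustrated with a toy objective: shuffle a sequence of frame embeddings and train a model to predict each frame's original position. This sketch assumes a fixed sequence length and a small off-the-shelf transformer encoder; it reproduces only the objective, not the paper's architecture or attribution maps.

```python
import torch
import torch.nn as nn

class OrderingHead(nn.Module):
    """Toy version of the shuffled-sequence ordering proxy task: given T
    frame embeddings in shuffled order, predict each frame's original
    position with a cross-entropy loss."""
    def __init__(self, dim, seq_len):
        super().__init__()
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True),
            num_layers=2,
        )
        self.classifier = nn.Linear(dim, seq_len)

    def forward(self, shuffled):                # (B, T, dim)
        return self.classifier(self.encoder(shuffled))  # (B, T, T) logits

B, T, dim = 8, 10, 64
frames = torch.randn(B, T, dim)
perm = torch.stack([torch.randperm(T) for _ in range(B)])    # per-sample shuffle
shuffled = torch.gather(frames, 1, perm.unsqueeze(-1).expand(-1, -1, dim))
logits = OrderingHead(dim, T)(shuffled)
# target for each shuffled position is the frame's original index
loss = nn.functional.cross_entropy(logits.reshape(-1, T), perm.reshape(-1))
```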
null
null
2404.16833
null
null
http://arxiv.org/pdf/2404.16833v1
2023-12-17T03:40:12Z
2023-12-17T03:40:12Z
Leaf-Based Plant Disease Detection and Explainable AI
The agricultural sector plays an essential role in the economic growth of a country. Specifically, in an Indian context, it is the critical source of livelihood for millions of people living in rural areas. Plant Disease is one of the significant factors affecting the agricultural sector. Plants get infected with diseases for various reasons, including synthetic fertilizers, archaic practices, environmental conditions, etc., which impact the farm yield and subsequently hinder the economy. To address this issue, researchers have explored many applications based on AI and Machine Learning techniques to detect plant diseases. This research survey provides a comprehensive understanding of common plant leaf diseases, evaluates traditional and deep learning techniques for disease detection, and summarizes available datasets. It also explores Explainable AI (XAI) to enhance the interpretability of deep learning models' decisions for end-users. By consolidating this knowledge, the survey offers valuable insights to researchers, practitioners, and stakeholders in the agricultural sector, fostering the development of efficient and transparent solutions for combating plant diseases and promoting sustainable agricultural practices.
Saurav Sagar, Mohammed Javed, David S Doermann
null
null
2404.16838
null
null
http://arxiv.org/pdf/2404.16838v1
2024-01-24T16:03:57Z
2024-01-24T16:03:57Z
Predicting SSH keys in Open SSH Memory dumps
As the digital landscape evolves, cybersecurity has become an indispensable focus of IT systems. Its ever-escalating challenges have amplified the importance of digital forensics, particularly in the analysis of heap dumps from main memory. In this context, the Secure Shell protocol (SSH), designed for encrypted communications, serves as both a safeguard and a potential veil for malicious activities. This research project focuses on predicting SSH keys in OpenSSH memory dumps, aiming to enhance protective measures against illicit access and enable the development of advanced security frameworks or tools like honeypots. This Master's thesis is situated within the broader SmartVMI project, and seeks to build upon existing research on key prediction in OpenSSH heap dumps. Utilizing machine learning (ML) and deep learning models, the study aims to refine features for embedding techniques and explore innovative methods for effective key detection based on recent advancements in Knowledge Graphs and ML. The objective is to accurately predict the presence and location of SSH keys within memory dumps. This work builds upon, and aims to enhance, the foundations laid by SSHkex and SmartKex, enriching both the methodology and the results of the original research while exploring the untapped potential of newly proposed approaches. The current thesis dives into memory graph modeling from raw binary heap dump files. Each memory graph can support a range of embeddings that can be used directly for model training, through classic ML models and graph neural networks. It offers an in-depth discussion of the current state of the art in key prediction for OpenSSH memory dumps, research questions, experimental setups, program development, and results, as well as potential future directions.
Florian Rascoussier
null
null
2404.16844
null
null
http://arxiv.org/pdf/2404.16844v1
2024-02-13T01:12:39Z
2024-02-13T01:12:39Z
Sugarcane Health Monitoring With Satellite Spectroscopy and Machine Learning: A Review
Research into large-scale crop monitoring has flourished due to increased accessibility to satellite imagery. This review delves into previously unexplored and under-explored areas in sugarcane health monitoring and disease/pest detection using satellite-based spectroscopy and Machine Learning (ML). It discusses key considerations in system development, including relevant satellites, vegetation indices, ML methods, factors influencing sugarcane reflectance, optimal growth conditions, common diseases, and traditional detection methods. Many studies highlight how factors like crop age, soil type, viewing angle, water content, recent weather patterns, and sugarcane variety can impact spectral reflectance, affecting the accuracy of health assessments via spectroscopy. However, these variables have not been fully considered in the literature. In addition, the current literature lacks comprehensive comparisons between ML techniques and vegetation indices. We address these gaps in this review. We discuss that, while current findings suggest the potential for an ML-driven satellite spectroscopy system for monitoring sugarcane health, further research is essential. This paper offers a comprehensive analysis of previous research to aid in unlocking this potential and advancing the development of an effective sugarcane health monitoring system using satellite technology.
Ethan Kane Waters, Carla Chia-Ming Chen, Mostafa Rahimi Azghadi
null
null
2404.16852
null
null
http://arxiv.org/pdf/2404.16852v1
2024-03-18T07:10:33Z
2024-03-18T07:10:33Z
A Disease Labeler for Chinese Chest X-Ray Report Generation
In the field of medical image analysis, the scarcity of Chinese chest X-ray report datasets has hindered the development of technology for generating Chinese chest X-ray reports. On one hand, the construction of a Chinese chest X-ray report dataset is limited by the time-consuming and costly process of accurate expert disease annotation. On the other hand, a single natural language generation metric is commonly used to evaluate the similarity between generated and ground-truth reports, while the clinical accuracy and effectiveness of the generated reports rely on an accurate disease labeler (classifier). To address the issues, this study proposes a disease labeler tailored for the generation of Chinese chest X-ray reports. This labeler leverages a dual BERT architecture to handle diagnostic reports and clinical information separately and constructs a hierarchical label learning algorithm based on the affiliation between diseases and body parts to enhance text classification performance. Utilizing this disease labeler, a Chinese chest X-ray report dataset comprising 51,262 report samples was established. Finally, experiments and analyses were conducted on a subset of expert-annotated Chinese chest X-ray reports, validating the effectiveness of the proposed disease labeler.
Mengwei Wang, Ruixin Yan, Zeyi Hou, Ning Lang, Xiuzhuang Zhou
null
null
2404.16866
null
null
http://arxiv.org/pdf/2404.16866v2
2024-05-27T07:23:26Z
2024-04-18T09:37:54Z
Functional Protein Design with Local Domain Alignment
The core challenge of de novo protein design lies in creating proteins with specific functions or properties, guided by certain conditions. Current models explore generating proteins using structural and evolutionary guidance, which only provide indirect conditions concerning functions and properties. However, textual annotations of proteins, especially the annotations for protein domains, which directly describe the protein's high-level functionalities, properties, and their correlation with target amino acid sequences, remain unexplored in the context of protein design tasks. In this paper, we propose Protein-Annotation Alignment Generation (PAAG), a multi-modality protein design framework that integrates the textual annotations extracted from protein databases for controllable generation in sequence space. Specifically, within a multi-level alignment module, PAAG can explicitly generate proteins containing specific domains conditioned on the corresponding domain annotations, and can even design novel proteins with flexible combinations of different kinds of annotations. Our experimental results underscore the superiority of the aligned protein representations from PAAG over 7 prediction tasks. Furthermore, PAAG demonstrates a nearly sixfold increase in generation success rate (24.7% vs 4.7% in zinc finger, and 54.3% vs 8.7% in the immunoglobulin domain) in comparison to the existing model.
Chaohao Yuan, Songyou Li, Geyan Ye, Yikun Zhang, Long-Kai Huang, Wenbing Huang, Wei Liu, Jianhua Yao, Yu Rong
null
null
2404.16870
null
null
http://arxiv.org/pdf/2404.16870v1
2024-04-20T11:11:47Z
2024-04-20T11:11:47Z
LEMDA: A Novel Feature Engineering Method for Intrusion Detection in IoT Systems
Intrusion detection systems (IDS) for the Internet of Things (IoT) systems can use AI-based models to ensure secure communications. IoT systems tend to have many connected devices producing massive amounts of data with high dimensionality, which requires complex models. Complex models have notorious problems such as overfitting, low interpretability, and high computational complexity. Adding model complexity penalty (i.e., regularization) can ease overfitting, but it barely helps interpretability and computational efficiency. Feature engineering can solve these issues; hence, it has become critical for IDS in large-scale IoT systems to reduce the size and dimensionality of data, resulting in less complex models with excellent performance, smaller data storage, and fast detection. This paper proposes a new feature engineering method called LEMDA (Light feature Engineering based on the Mean Decrease in Accuracy). LEMDA applies exponential decay and an optional sensitivity factor to select and create the most informative features. The proposed method has been evaluated and compared to other feature engineering methods using three IoT datasets and four AI/ML models. The results show that LEMDA improves the F1 score performance of all the IDS models by an average of 34% and reduces the average training and detection times in most cases.
Ali Ghubaish, Zebo Yang, Aiman Erbad, Raj Jain
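LEMDA builds on the mean decrease in accuracy (MDA). A generic permutation-importance sketch of that base statistic follows; LEMDA's exponential decay and optional sensitivity factor are paper-specific and are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

def mean_decrease_in_accuracy(model, X, y, seed=0):
    """Permutation-based mean decrease in accuracy (MDA): the accuracy drop
    observed when a feature's link to the target is broken by shuffling."""
    rng = np.random.default_rng(seed)
    base = accuracy_score(y, model.predict(X))
    drops = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])   # break feature j's link to y
        drops.append(base - accuracy_score(y, model.predict(Xp)))
    return np.array(drops)

# Toy data: only features 0 and 3 carry signal, so they should score highest.
X = np.random.rand(500, 8)
y = (X[:, 0] + X[:, 3] > 1.0).astype(int)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(mean_decrease_in_accuracy(clf, X, y).round(3))
```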
null
null
2404.16873
null
null
http://arxiv.org/pdf/2404.16873v1
2024-04-21T22:18:13Z
2024-04-21T22:18:13Z
AdvPrompter: Fast Adaptive Adversarial Prompting for LLMs
While Large Language Models (LLMs) have recently achieved remarkable successes, they are vulnerable to certain jailbreaking attacks that lead to generation of inappropriate or harmful content. Manual red-teaming requires finding adversarial prompts that cause such jailbreaking, e.g. by appending a suffix to a given instruction, which is inefficient and time-consuming. On the other hand, automatic adversarial prompt generation often leads to semantically meaningless attacks that can easily be detected by perplexity-based filters, may require gradient information from the TargetLLM, or do not scale well due to time-consuming discrete optimization processes over the token space. In this paper, we present a novel method that uses another LLM, called the AdvPrompter, to generate human-readable adversarial prompts in seconds, $\sim800\times$ faster than existing optimization-based approaches. We train the AdvPrompter using a novel algorithm that does not require access to the gradients of the TargetLLM. This process alternates between two steps: (1) generating high-quality target adversarial suffixes by optimizing the AdvPrompter predictions, and (2) low-rank fine-tuning of the AdvPrompter with the generated adversarial suffixes. The trained AdvPrompter generates suffixes that veil the input instruction without changing its meaning, such that the TargetLLM is lured to give a harmful response. Experimental results on popular open source TargetLLMs show state-of-the-art results on the AdvBench dataset, which also transfer to closed-source black-box LLM APIs. Further, we demonstrate that by fine-tuning on a synthetic dataset generated by AdvPrompter, LLMs can be made more robust against jailbreaking attacks while maintaining performance, i.e. high MMLU scores.
Anselm Paulus, Arman Zharmagambetov, Chuan Guo, Brandon Amos, Yuandong Tian
null
null
2404.16876
null
null
http://arxiv.org/pdf/2404.16876v1
2024-04-22T09:23:56Z
2024-04-22T09:23:56Z
AdaQAT: Adaptive Bit-Width Quantization-Aware Training
Large-scale deep neural networks (DNNs) have achieved remarkable success in many application scenarios. However, high computational complexity and energy costs of modern DNNs make their deployment on edge devices challenging. Model quantization is a common approach to deal with deployment constraints, but searching for optimized bit-widths can be challenging. In this work, we present Adaptive Bit-Width Quantization Aware Training (AdaQAT), a learning-based method that automatically optimizes weight and activation signal bit-widths during training for more efficient DNN inference. We use relaxed real-valued bit-widths that are updated using a gradient descent rule, but are otherwise discretized for all quantization operations. The result is a simple and flexible QAT approach for mixed-precision uniform quantization problems. Compared to other methods that are generally designed to be run on a pretrained network, AdaQAT works well in both training from scratch and fine-tuning scenarios. Initial results on the CIFAR-10 and ImageNet datasets using ResNet20 and ResNet18 models, respectively, indicate that our method is competitive with other state-of-the-art mixed-precision quantization approaches.
Cédric Gernigon, Silviu-Ioan Filip, Olivier Sentieys, Clément Coggiola, Mickael Bruno
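A minimal sketch of the mechanism the AdaQAT abstract describes: a relaxed real-valued bit-width that is rounded for every quantization operation but still receives gradients through straight-through estimators. Symmetric uniform quantization and all names below are simplifying assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class FakeQuantLearnableBits(nn.Module):
    """Keep a real-valued bit-width, discretize it for each quantization op,
    and let gradients reach it via straight-through estimators (STE)."""
    def __init__(self, init_bits=8.0):
        super().__init__()
        self.bits = nn.Parameter(torch.tensor(init_bits))

    def forward(self, x):
        b = self.bits + (self.bits.round() - self.bits).detach()  # STE round on bits
        qmax = 2.0 ** (b - 1.0) - 1.0            # signed symmetric range
        scale = x.detach().abs().max() / qmax
        z = x / scale
        z = z + (z.round() - z).detach()          # STE round on values
        z = torch.clamp(z, -qmax.detach(), qmax.detach())
        return z * scale

quant = FakeQuantLearnableBits(init_bits=4.0)
out = quant(torch.randn(16, 32))
out.sum().backward()                              # gradients reach the bit-width
print(quant.bits.grad)
```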
null
null
2404.16877
null
null
http://arxiv.org/pdf/2404.16877v1
2024-04-22T10:57:54Z
2024-04-22T10:57:54Z
Rapid Deployment of DNNs for Edge Computing via Structured Pruning at Initialization
Edge machine learning (ML) enables localized processing of data on devices and is underpinned by deep neural networks (DNNs). However, DNNs cannot be easily run on devices due to their substantial computing, memory and energy requirements for delivering performance that is comparable to cloud-based ML. Therefore, model compression techniques, such as pruning, have been considered. Existing pruning methods are problematic for edge ML since they: (1) Create compressed models that have limited runtime performance benefits (using unstructured pruning) or compromise the final model accuracy (using structured pruning), and (2) Require substantial compute resources and time for identifying a suitable compressed DNN model (using neural architecture search). In this paper, we explore a new avenue, referred to as Pruning-at-Initialization (PaI), using structured pruning to mitigate the above problems. We develop Reconvene, a system for rapidly generating pruned models suited for edge deployments using structured PaI. Reconvene systematically identifies and prunes DNN convolution layers that are least sensitive to structured pruning. Reconvene rapidly creates pruned DNNs within seconds that are up to 16.21x smaller and 2x faster while maintaining the same accuracy as an unstructured PaI counterpart.
Bailey J. Eccles, Leon Wong, Blesson Varghese
null
null
2404.16879
null
null
http://arxiv.org/pdf/2404.16879v1
2024-04-22T22:52:14Z
2024-04-22T22:52:14Z
Learning Control Barrier Functions and their application in Reinforcement Learning: A Survey
Reinforcement learning is a powerful technique for developing new robot behaviors. However, typical lack of safety guarantees constitutes a hurdle for its practical application on real robots. To address this issue, safe reinforcement learning aims to incorporate safety considerations, enabling faster transfer to real robots and facilitating lifelong learning. One promising approach within safe reinforcement learning is the use of control barrier functions. These functions provide a framework to ensure that the system remains in a safe state during the learning process. However, synthesizing control barrier functions is not straightforward and often requires ample domain knowledge. This challenge motivates the exploration of data-driven methods for automatically defining control barrier functions, which is highly appealing. We conduct a comprehensive review of the existing literature on safe reinforcement learning using control barrier functions. Additionally, we investigate various techniques for automatically learning the Control Barrier Functions, aiming to enhance the safety and efficacy of Reinforcement Learning in practical robot applications.
Maeva Guerrier, Hassan Fouad, Giovanni Beltrame
null
null
2404.16881
null
null
http://arxiv.org/pdf/2404.16881v1
2024-04-23T13:59:11Z
2024-04-23T13:59:11Z
On uncertainty-penalized Bayesian information criterion
The uncertainty-penalized information criterion (UBIC) has been proposed as a new model-selection criterion for data-driven partial differential equation (PDE) discovery. In this paper, we show that using the UBIC is equivalent to employing the conventional BIC to a set of overparameterized models derived from the potential regression models of different complexity measures. The result indicates that the asymptotic property of the UBIC and BIC holds indifferently.
Pongpisit Thanasutives, Ken-ichi Fukui
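For reference, the conventional BIC that the abstract compares against, for a model with k parameters, sample size n, and maximized likelihood; the exact form of the UBIC's uncertainty penalty is defined in the paper and is not reproduced here.

```latex
% Conventional BIC; \hat{L} is the maximized likelihood of the model.
% The UBIC adds an uncertainty penalty on top of this (see the paper).
\mathrm{BIC} = k \ln n - 2 \ln \hat{L}
```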
null
null
2404.16882
null
null
http://arxiv.org/pdf/2404.16882v1
2024-04-23T19:56:11Z
2024-04-23T19:56:11Z
ThermoPore: Predicting Part Porosity Based on Thermal Images Using Deep Learning
We present a deep learning approach for quantifying and localizing ex-situ porosity within Laser Powder Bed Fusion fabricated samples utilizing in-situ thermal image monitoring data. Our goal is to build the real time porosity map of parts based on thermal images acquired during the build. The quantification task builds upon the established Convolutional Neural Network model architecture to predict pore count and the localization task leverages the spatial and temporal attention mechanisms of the novel Video Vision Transformer model to indicate areas of expected porosity. Our model for porosity quantification achieved a $R^2$ score of 0.57 and our model for porosity localization produced an average IoU score of 0.32 and a maximum of 1.0. This work is setting the foundations of part porosity "Digital Twins" based on additive manufacturing monitoring data and can be applied downstream to reduce time-intensive post-inspection and testing activities during part qualification and certification. In addition, we seek to accelerate the acquisition of crucial insights normally only available through ex-situ part evaluation by means of machine learning analysis of in-situ process monitoring data.
Peter Myung-Won Pak, Francis Ogoke, Andrew Polonsky, Anthony Garland, Dan S. Bolintineanu, Dan R. Moser, Michael J. Heiden, Amir Barati Farimani
null
null
2404.16883
null
null
http://arxiv.org/pdf/2404.16883v1
2024-04-23T20:29:01Z
2024-04-23T20:29:01Z
Myopically Verifiable Probabilistic Certificates for Safe Control and Learning
This paper addresses the design of safety certificates for stochastic systems, with a focus on ensuring long-term safety through fast real-time control. In stochastic environments, set invariance-based methods that restrict the probability of risk events in infinitesimal time intervals may exhibit significant long-term risks due to cumulative uncertainties/risks. On the other hand, reachability-based approaches that account for the long-term future may require prohibitive computation in real-time decision making. To overcome this challenge involving stringent long-term safety vs. computation tradeoffs, we first introduce a novel technique termed `probabilistic invariance'. This technique characterizes the invariance conditions of the probability of interest. When the target probability is defined using long-term trajectories, this technique can be used to design myopic conditions/controllers with assured long-term safe probability. Then, we integrate this technique into safe control and learning. The proposed control methods efficiently assure long-term safety using neural networks or model predictive controllers with short outlook horizons. The proposed learning methods can be used to guarantee long-term safety during and after training. Finally, we demonstrate the performance of the proposed techniques in numerical simulations.
Zhuoyuan Wang, Haoming Jing, Christian Kurniawan, Albert Chern, Yorie Nakahira
null
null
2404.16884
null
null
http://arxiv.org/pdf/2404.16884v1
2024-04-23T20:33:17Z
2024-04-23T20:33:17Z
Aligning Knowledge Graphs Provided by Humans and Generated from Neural Networks in Specific Tasks
This paper develops an innovative method that enables neural networks to generate and utilize knowledge graphs, which describe their concept-level knowledge and optimize network parameters through alignment with human-provided knowledge. This research addresses a gap where traditionally, network-generated knowledge has been limited to applications in downstream symbolic analysis or enhancing network transparency. By integrating a novel autoencoder design with the Vector Symbolic Architecture (VSA), we have introduced auxiliary tasks that support end-to-end training. Our approach eschews traditional dependencies on ontologies or word embedding models, mining concepts from neural networks and directly aligning them with human knowledge. Experiments show that our method consistently captures network-generated concepts that align closely with human knowledge and can even uncover new, useful concepts not previously identified by humans. This plug-and-play strategy not only enhances the interpretability of neural networks but also facilitates the integration of symbolic logical reasoning within these systems.
Tangrui Li, Jun Zhou
null
null
2404.16885
null
null
http://arxiv.org/pdf/2404.16885v1
2024-04-23T23:14:30Z
2024-04-23T23:14:30Z
Adapting an Artificial Intelligence Sexually Transmitted Diseases Symptom Checker Tool for Mpox Detection: The HeHealth Experience
Artificial Intelligence applications have shown promise in the management of pandemics and have been widely used to assist the identification, classification, and diagnosis of medical images. In response to the global outbreak of Monkeypox (Mpox), the HeHealth.ai team leveraged an existing tool to screen for sexually transmitted diseases to develop a digital screening test for symptomatic Mpox through AI approaches. Prior to the global outbreak of Mpox, the team developed a smartphone app, where app users can use their own smartphone cameras to take pictures of their own penises to screen for symptomatic STD. The AI model was initially developed using 5000 cases and used a modified convolutional neural network to output prediction scores across visually diagnosable penis pathologies including Syphilis, Herpes Simplex Virus, and Human Papilloma Virus. From June 2022 to October 2022, a total of about 22,000 users downloaded the HeHealth app, and about 21,000 images have been analyzed using HeHealth AI technology. We then engaged in formative research, stakeholder engagement, rapid consolidation of images, a validation study, and implementation of the tool from July 2022. From July 2022 to October 2022, a total of 1000 Mpox-related images had been used to train the Mpox symptom checker tool. Our digital symptom checker tool showed an accuracy of 87% to rule in Mpox and 90% to rule out symptomatic Mpox. Several hurdles identified included issues of data privacy and security for app users, initial lack of data to train the AI tool, and the potential generalizability of input data. We offer several suggestions to help others get started on similar projects in emergency situations, including engaging a wide range of stakeholders, having a multidisciplinary team, prioritizing pragmatism, as well as the concept that big data in fact is made up of small data.
Rayner Kay Jin Tan, Dilruk Perera, Salomi Arasaratnam, Yudara Kularathne
null
null
2404.16886
null
null
http://arxiv.org/pdf/2404.16886v1
2024-04-24T00:34:44Z
2024-04-24T00:34:44Z
Review of Data-centric Time Series Analysis from Sample, Feature, and Period
Data is essential to performing time series analysis utilizing machine learning approaches, whether for classic models or today's large language models. A good time-series dataset is advantageous for the model's accuracy, robustness, and convergence, as well as task outcomes and costs. The emergence of data-centric AI represents a shift in the landscape from model refinement to prioritizing data quality. Even though time-series data processing methods frequently come up in a wide range of research fields, they haven't been well investigated as a specific topic. To fill the gap, in this paper, we systematically review different data-centric methods in time series analysis, covering a wide range of research topics. Based on the time-series data characteristics at sample, feature, and period, we propose a taxonomy for the reviewed data selection methods. In addition to discussing and summarizing their characteristics, benefits, and drawbacks targeting time-series data, we also introduce the challenges and opportunities by proposing recommendations, open problems, and possible research topics.
Chenxi Sun, Hongyan Li, Yaliang Li, Shenda Hong
null
null
2404.16887
null
null
http://arxiv.org/pdf/2404.16887v1
2024-04-24T00:46:19Z
2024-04-24T00:46:19Z
Anomaly Detection for Incident Response at Scale
We present a machine learning-based anomaly detection product, AI Detect and Respond (AIDR), that monitors Walmart's business and system health in real-time. During the validation over 3 months, the product served predictions from over 3000 models to more than 25 application, platform, and operation teams, covering 63% of major incidents and reducing the mean-time-to-detect (MTTD) by more than 7 minutes. Unlike previous anomaly detection methods, our solution leverages statistical, ML and deep learning models while continuing to incorporate rule-based static thresholds to incorporate domain-specific knowledge. Both univariate and multivariate ML models are deployed and maintained through distributed services for scalability and high availability. AIDR has a feedback loop that assesses model quality with a combination of drift detection algorithms and customer feedback. It also offers self-onboarding capabilities and customizability. AIDR has achieved success with various internal teams with lower time to detection and fewer false positives than previous methods. As we move forward, we aim to expand incident coverage and prevention, reduce noise, and integrate further with root cause recommendation (RCR) to enable an end-to-end AIDR experience.
Hanzhang Wang, Gowtham Kumar Tangirala, Gilkara Pranav Naidu, Charles Mayville, Arighna Roy, Joanne Sun, Ramesh Babu Mandava
null
null
2404.16890
null
null
http://arxiv.org/pdf/2404.16890v1
2024-04-24T09:12:04Z
2024-04-24T09:12:04Z
NEPENTHE: Entropy-Based Pruning as a Neural Network Depth's Reducer
While deep neural networks are highly effective at solving complex tasks, their computational demands can hinder their usefulness in real-time applications and with limited-resources systems. Besides, for many tasks it is known that these models are over-parametrized: neoteric works have broadly focused on reducing the width of these networks, rather than their depth. In this paper, we aim to reduce the depth of over-parametrized deep neural networks: we propose an eNtropy-basEd Pruning as a nEural Network depTH's rEducer (NEPENTHE) to alleviate deep neural networks' computational burden. Based on our theoretical finding, NEPENTHE focuses on un-structurally pruning connections in layers with low entropy to remove them entirely. We validate our approach on popular architectures such as MobileNet and Swin-T, showing that when encountering an over-parametrization regime, it can effectively linearize some layers (hence reducing the model's depth) with little to no performance loss. The code will be publicly available upon acceptance of the article.
Zhu Liao, Victor Quétu, Van-Tam Nguyen, Enzo Tartaglione
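One way to picture the entropy signal the NEPENTHE abstract refers to: treat each post-ReLU unit as a binary on/off variable and measure its Bernoulli entropy. A layer whose units are almost always on or always off behaves nearly linearly and is a natural candidate for removal. This proxy is an illustrative assumption; the paper's exact entropy definition and pruning rule may differ.

```python
import torch

def relu_state_entropy(layer_output):
    """Rough proxy for per-layer entropy: treat each post-ReLU unit as a
    Bernoulli variable (active / inactive) over a batch and average the
    per-unit binary entropies. Low values suggest a nearly-linear layer."""
    p = (layer_output > 0).float().mean(dim=0)           # P(unit active)
    p = p.clamp(1e-6, 1 - 1e-6)                          # avoid log(0)
    h = -(p * p.log2() + (1 - p) * (1 - p).log2())       # Bernoulli entropy
    return h.mean().item()

x = torch.randn(1024, 128)
print(relu_state_entropy(torch.relu(x @ torch.randn(128, 64))))
```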
null
null
2404.16893
null
null
http://arxiv.org/pdf/2404.16893v1
2024-04-24T23:22:37Z
2024-04-24T23:22:37Z
Automatic AI controller that can drive with confidence: steering vehicle with uncertainty knowledge
In safety-critical systems that interface with the real world, the role of uncertainty in decision-making is pivotal, particularly in the context of machine learning models. For the secure functioning of Cyber-Physical Systems (CPS), it is imperative to manage such uncertainty adeptly. In this research, we focus on the development of a vehicle's lateral control system using a machine learning framework. Specifically, we employ a Bayesian Neural Network (BNN), a probabilistic learning model, to address uncertainty quantification. This capability allows us to gauge the level of confidence or uncertainty in the model's predictions. The BNN based controller is trained using simulated data gathered from the vehicle traversing a single track and subsequently tested on various other tracks. We want to share two significant results: firstly, the trained model demonstrates the ability to adapt and effectively control the vehicle on multiple similar tracks. Secondly, the quantification of prediction confidence integrated into the controller serves as an early-warning system, signaling when the algorithm lacks confidence in its predictions and is therefore susceptible to failure. By establishing a confidence threshold, we can trigger manual intervention, ensuring that control is relinquished from the algorithm when it operates outside of safe parameters.
Neha Kumari, Sumit Kumar, Sneha Priya, Ayush Kumar, Akash Fogla
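A sketch of the confidence-gating idea in the abstract above, using MC dropout as a cheap stand-in for the paper's Bayesian Neural Network; the architecture, observation size, and threshold value are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SteeringNet(nn.Module):
    """Stand-in for the paper's BNN controller: MC dropout approximates
    predictive uncertainty by sampling stochastic forward passes."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(16, 64), nn.ReLU(),
                                 nn.Dropout(0.2), nn.Linear(64, 1))

    def predict_with_uncertainty(self, obs, n_samples=32):
        self.train()                      # keep dropout active at inference
        with torch.no_grad():
            preds = torch.stack([self.net(obs) for _ in range(n_samples)])
        return preds.mean(0), preds.std(0)

controller, obs = SteeringNet(), torch.randn(1, 16)
steer, sigma = controller.predict_with_uncertainty(obs)
if sigma.item() > 0.5:                    # illustrative confidence threshold
    print("low confidence: trigger manual intervention")
else:
    print(f"steering command: {steer.item():.3f}")
```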
null
null
2404.16894
null
null
http://arxiv.org/pdf/2404.16894v2
2024-04-29T20:14:46Z
2024-04-25T01:57:11Z
On TinyML and Cybersecurity: Electric Vehicle Charging Infrastructure Use Case
As technology advances, the use of Machine Learning (ML) in cybersecurity is becoming increasingly crucial to tackle the growing complexity of cyber threats. While traditional ML models can enhance cybersecurity, their high energy and resource demands limit their applications, leading to the emergence of Tiny Machine Learning (TinyML) as a more suitable solution for resource-constrained environments. TinyML is widely applied in areas such as smart homes, healthcare, and industrial automation. TinyML focuses on optimizing ML algorithms for small, low-power devices, enabling intelligent data processing directly on edge devices. This paper provides a comprehensive review of common challenges of TinyML techniques, such as power consumption, limited memory, and computational constraints; it also explores potential solutions to these challenges, such as energy harvesting, computational optimization techniques, and transfer learning for privacy preservation. On the other hand, this paper discusses TinyML's applications in advancing cybersecurity for Electric Vehicle Charging Infrastructures (EVCIs) as a representative use case. It presents an experimental case study that enhances cybersecurity in EVCI using TinyML, evaluated against traditional ML in terms of reduced delay and memory usage, with a slight trade-off in accuracy. Additionally, the study includes a practical setup using the ESP32 microcontroller in the PlatformIO environment, which provides a hands-on assessment of TinyML's application in cybersecurity for EVCI.
Fatemeh Dehrouyeh, Li Yang, Firouz Badrkhani Ajaei, Abdallah Shami
null
null
2404.16896
null
null
http://arxiv.org/pdf/2404.16896v1
2024-04-25T05:52:20Z
2024-04-25T05:52:20Z
A Neural-Network-Based Approach for Loose-Fitting Clothing
Since loose-fitting clothing contains dynamic modes that have proven to be difficult to predict via neural networks, we first illustrate how to coarsely approximate these modes with a real-time numerical algorithm specifically designed to mimic the most important ballistic features of a classical numerical simulation. Although there is some flexibility in the choice of the numerical algorithm used as a proxy for full simulation, it is essential that the stability and accuracy be independent from any time step restriction or similar requirements in order to facilitate real-time performance. In order to reduce the number of degrees of freedom that require approximations to their dynamics, we simulate rigid frames and use skinning to reconstruct a rough approximation to a desirable mesh; as one might expect, neural-network-based skinning seems to perform better than linear blend skinning in this scenario. Improved high frequency deformations are subsequently added to the skinned mesh via a quasistatic neural network (QNN). In contrast to recurrent neural networks that require a plethora of training data in order to adequately generalize to new examples, QNNs perform well with significantly less training data.
Yongxu Jin, Dalton Omens, Zhenglin Geng, Joseph Teran, Abishek Kumar, Kenji Tashiro, Ronald Fedkiw
null
null
2404.16897
null
null
http://arxiv.org/pdf/2404.16897v1
2024-04-25T06:04:34Z
2024-04-25T06:04:34Z
Exploring Learngene via Stage-wise Weight Sharing for Initializing Variable-sized Models
In practice, we usually need to build variable-sized models adapting for diverse resource constraints in different application scenarios, where weight initialization is an important step prior to training. The recently introduced Learngene framework first learns one compact part, termed the learngene, from a large well-trained model, after which the learngene is expanded to initialize variable-sized models. In this paper, we start from analysing the importance of guidance for the expansion of well-trained learngene layers, inspiring the design of a simple but highly effective Learngene approach termed SWS (Stage-wise Weight Sharing), where both learngene layers and their learning process critically contribute to providing knowledge and guidance for initializing models at varying scales. Specifically, to learn learngene layers, we build an auxiliary model comprising multiple stages where the layer weights in each stage are shared, after which we train it through distillation. Subsequently, we expand these learngene layers containing stage information at their corresponding stage to initialize models of variable depths. Extensive experiments on ImageNet-1K demonstrate that SWS achieves consistently better performance compared to many models trained from scratch, while reducing around 6.6x total training costs. In some cases, SWS performs better after only 1 epoch of tuning. When initializing variable-sized models adapting for different resource constraints, SWS achieves better results while reducing around 20x parameters stored to initialize these models and around 10x pre-training costs, in contrast to the pre-training and fine-tuning approach.
Shi-Yu Xia, Wenxuan Zhu, Xu Yang, Xin Geng
null
null
2404.16898
null
null
http://arxiv.org/pdf/2404.16898v1
2024-04-25T06:58:16Z
2024-04-25T06:58:16Z
How to Parameterize Asymmetric Quantization Ranges for Quantization-Aware Training
This paper investigates three different parameterizations of asymmetric uniform quantization for quantization-aware training: (1) scale and offset, (2) minimum and maximum, and (3) beta and gamma. We perform a comprehensive comparative analysis of these parameterizations' influence on quantization-aware training, using both controlled experiments and real-world large language models. Our particular focus is on their changing behavior in response to critical training hyperparameters, bit width and learning rate. Based on our investigation, we propose best practices to stabilize and accelerate quantization-aware training with learnable asymmetric quantization ranges.
Jaeseong You, Minseop Park, Kyunggeun Lee, Seokjun An, Chirag Patel, Markus Nagel
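Two of the three parameterizations named in the abstract above can be sketched directly: scale/offset quantization and the conversion from a min/max range to scale/offset, which shows the two are interchangeable. The beta/gamma parameterization is paper-specific and omitted; the signed 8-bit integer range is an illustrative assumption.

```python
import torch

def quantize_scale_offset(x, scale, offset, qmin=-128, qmax=127):
    """Asymmetric uniform quantization in the scale/offset parameterization:
    q = clamp(round(x / scale) + offset); dequantize with (q - offset) * scale."""
    q = torch.clamp(torch.round(x / scale) + offset, qmin, qmax)
    return (q - offset) * scale

def minmax_to_scale_offset(xmin, xmax, qmin=-128, qmax=127):
    """The min/max parameterization stores the representable range
    [xmin, xmax] and derives scale/offset from it."""
    scale = (xmax - xmin) / (qmax - qmin)
    offset = qmin - round(xmin / scale)
    return scale, offset

x = torch.randn(1000)
scale, offset = minmax_to_scale_offset(x.min().item(), x.max().item())
xq = quantize_scale_offset(x, scale, offset)
print((x - xq).abs().max())   # quantization error bounded by ~scale/2
```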
null
null
2404.16899
null
null
http://arxiv.org/pdf/2404.16899v1
2024-04-25T08:57:35Z
2024-04-25T08:57:35Z
mlr3summary: Concise and interpretable summaries for machine learning models
This work introduces a novel R package for concise, informative summaries of machine learning models. We take inspiration from the summary function for (generalized) linear models in R, but extend it in several directions: First, our summary function is model-agnostic and provides a unified summary output also for non-parametric machine learning models; Second, the summary output is more extensive and customizable -- it comprises information on the dataset, model performance, model complexity, model's estimated feature importances, feature effects, and fairness metrics; Third, models are evaluated based on resampling strategies for unbiased estimates of model performances, feature importances, etc. Overall, the clear, structured output should help to enhance and expedite the model selection process, making it a helpful tool for practitioners and researchers alike.
Susanne Dandl, Marc Becker, Bernd Bischl, Giuseppe Casalicchio, Ludwig Bothmann
null
null
2404.16900
null
null
http://arxiv.org/pdf/2404.16900v1
2024-04-25T08:58:41Z
2024-04-25T08:58:41Z
Space-Variant Total Variation boosted by learning techniques in few-view tomographic imaging
This paper focuses on the development of a space-variant regularization model for solving an under-determined linear inverse problem. The case study is a medical image reconstruction from few-view tomographic noisy data. The primary objective of the proposed optimization model is to achieve a good balance between denoising and the preservation of fine details and edges, overcoming the performance of the popular and largely used Total Variation (TV) regularization through the application of appropriate pixel-dependent weights. The proposed strategy leverages the role of gradient approximations for the computation of the space-variant TV weights. For this reason, a convolutional neural network is designed, to approximate both the ground truth image and its gradient using an elastic loss function in its training. Additionally, the paper provides a theoretical analysis of the proposed model, showing the uniqueness of its solution, and illustrates a Chambolle-Pock algorithm tailored to address the specific problem at hand. This comprehensive framework integrates innovative regularization techniques with advanced neural network capabilities, demonstrating promising results in achieving high-quality reconstructions from low-sampled tomographic data.
Elena Morotti, Davide Evangelista, Andrea Sebastiani, Elena Loli Piccolomini
null
null
2404.16907
null
null
http://arxiv.org/pdf/2404.16907v1
2024-04-25T12:48:11Z
2024-04-25T12:48:11Z
Season combinatorial intervention predictions with Salt & Peper
Interventions play a pivotal role in the study of complex biological systems. In drug discovery, genetic interventions (such as CRISPR base editing) have become central to both identifying potential therapeutic targets and understanding a drug's mechanism of action. With the advancement of CRISPR and the proliferation of genome-scale analyses such as transcriptomics, a new challenge is to navigate the vast combinatorial space of concurrent genetic interventions. Addressing this, our work concentrates on estimating the effects of pairwise genetic combinations on the cellular transcriptome. We introduce two novel contributions: Salt, a biologically-inspired baseline that posits the mostly additive nature of combination effects, and Peper, a deep learning model that extends Salt's additive assumption to achieve unprecedented accuracy. Our comprehensive comparison against existing state-of-the-art methods, grounded in diverse metrics, and our out-of-distribution analysis highlight the limitations of current models in realistic settings. This analysis underscores the necessity for improved modelling techniques and data acquisition strategies, paving the way for more effective exploration of genetic intervention effects.
Thomas Gaudelet, Alice Del Vecchio, Eli M Carrami, Juliana Cudini, Chantriolnt-Andreas Kapourani, Caroline Uhler, Lindsay Edwards
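The additive assumption behind the Salt baseline is simple enough to state in code: predict a pairwise intervention's transcriptomic effect as the sum of the single-intervention effects. Everything below (gene count, intervention names) is illustrative, and Peper's learned correction on top of this baseline is not sketched.

```python
import numpy as np

def additive_combination_effect(effect_a, effect_b):
    """Salt-style additive baseline: the effect of a pairwise genetic
    intervention on the transcriptome (expression change vs. control) is
    approximated as the sum of the single-intervention effects."""
    return effect_a + effect_b

n_genes = 2000
single_effects = {g: np.random.randn(n_genes) * 0.1 for g in ["KO_A", "KO_B"]}
pair_prediction = additive_combination_effect(single_effects["KO_A"],
                                              single_effects["KO_B"])
print(pair_prediction.shape)   # predicted expression change per gene
```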
null
null
2404.16908
null
null
http://arxiv.org/pdf/2404.16908v1
2024-04-25T13:14:32Z
2024-04-25T13:14:32Z
Closing the gap: Optimizing Guidance and Control Networks through Neural ODEs
We improve the accuracy of Guidance & Control Networks (G&CNETs), trained to represent the optimal control policies of a time-optimal transfer and a mass-optimal landing, respectively. In both cases we leverage the dynamics of the spacecraft, described by Ordinary Differential Equations which incorporate a neural network on their right-hand side (Neural ODEs). Since the neural dynamics is differentiable, the ODEs' sensitivities to the network parameters can be computed using the variational equations, thereby allowing us to update the G&CNET parameters based on the observed dynamics. We start with a straightforward regression task, training the G&CNETs on datasets of optimal trajectories using behavioural cloning. These networks are then refined using the Neural ODE sensitivities by minimizing the error between the final states and the target states. We demonstrate that for the orbital transfer, the final error to the target can be reduced by 99% on a single trajectory and by 70% on a batch of 500 trajectories. For the landing problem the reduction in error is around 98-99% (position) and 40-44% (velocity). This step significantly enhances the accuracy of G&CNETs, which instills greater confidence in their reliability for operational use. We also compare our results to the popular Dataset Aggregation method (DAgger) and allude to the strengths and weaknesses of both methods.
Sebastien Origer, Dario Izzo
null
null
2404.16911
null
null
http://arxiv.org/pdf/2404.16911v1
2024-04-25T13:54:31Z
2024-04-25T13:54:31Z
HEroBM: a deep equivariant graph neural network for universal backmapping from coarse-grained to all-atom representations
Molecular simulations have assumed a paramount role in the fields of chemistry, biology, and material sciences, being able to capture the intricate dynamic properties of systems. Within this realm, coarse-grained (CG) techniques have emerged as invaluable tools to sample large-scale systems and reach extended timescales by simplifying system representation. However, CG approaches come with a trade-off: they sacrifice atomistic details that might hold significant relevance in deciphering the investigated process. Therefore, a recommended approach is to identify key CG conformations and process them using backmapping methods, which retrieve atomistic coordinates. Currently, rule-based methods yield subpar geometries and rely on energy relaxation, resulting in less-than-optimal outcomes. Conversely, machine learning techniques offer higher accuracy but are either limited in transferability between systems or tied to specific CG mappings. In this work, we introduce HEroBM, a dynamic and scalable method that employs deep equivariant graph neural networks and a hierarchical approach to achieve high-resolution backmapping. HEroBM handles any type of CG mapping, offering a versatile and efficient protocol for reconstructing atomistic structures with high accuracy. Focused on local principles, HEroBM spans the entire chemical space and is transferable to systems of varying sizes. We illustrate the versatility of our framework through diverse biological systems, including a complex real-case scenario. Here, our end-to-end backmapping approach accurately generates the atomistic coordinates of a G protein-coupled receptor bound to an organic small molecule within a cholesterol/phospholipid bilayer.
Daniele Angioletti, Stefano Raniolo, Vittorio Limongelli
null
null
2404.16913
null
null
http://arxiv.org/pdf/2404.16913v1
2024-04-25T15:15:58Z
2024-04-25T15:15:58Z
DE-CGAN: Boosting rTMS Treatment Prediction with Diversity Enhancing Conditional Generative Adversarial Networks
Repetitive Transcranial Magnetic Stimulation (rTMS) is a well-supported, evidence-based treatment for depression. However, patterns of response to this treatment are inconsistent. Emerging evidence suggests that artificial intelligence can predict rTMS treatment outcomes for most patients using fMRI connectivity features. While these models can reliably predict treatment outcomes for many patients, for some underrepresented fMRI connectivity measures DNN models are unable to reliably predict treatment outcomes. As such, we propose a novel method, Diversity Enhancing Conditional Generative Adversarial Network (DE-CGAN), for oversampling these underrepresented examples. DE-CGAN creates synthetic examples in difficult-to-classify regions by first identifying these data points and then creating conditioned synthetic examples to enhance data diversity. Through empirical experiments we show that a classification model trained using a diversity enhanced training set outperforms traditional data augmentation techniques and existing benchmark results. This work shows that increasing the diversity of a training dataset can improve classification model performance. Furthermore, this work provides evidence for the utility of synthetic patients in providing larger, more robust datasets for both AI researchers and psychiatrists to explore variable relationships.
[ "['Matthew Squires' 'Xiaohui Tao' 'Soman Elangovan' 'Raj Gururajan'\n 'Haoran Xie' 'Xujuan Zhou' 'Yuefeng Li' 'U Rajendra Acharya']" ]
null
null
2404.16914
null
null
http://arxiv.org/pdf/2404.16914v1
2024-04-25T15:39:59Z
2024-04-25T15:39:59Z
Prediction Is All MoE Needs: Expert Load Distribution Goes from Fluctuating to Stabilizing
MoE facilitates the development of large models by making the computational complexity of the model no longer scale linearly with increasing parameters. The learned sparse gating network selects a set of experts for each token to be processed; however, this may lead to differences in the number of tokens processed by each expert over several successive iterations, i.e., expert load fluctuations, which reduce computational parallelization and resource utilization. To this end, in this work we traced and analyzed the load of each expert across training iterations for several large language models, and defined the transient state, with "obvious load fluctuation," and the stable state, with "temporal locality." Moreover, given the characteristics of these two states and the computational overhead, we deployed three classical prediction algorithms that achieve accurate expert load predictions. For the GPT3 350M model, the average error rates for predicting the expert load proportion over the next 1,000 and 2,000 steps are approximately 1.3% and 1.8%, respectively. This work can provide valuable guidance for expert placement and resource allocation in MoE model training. Building on these findings, we will propose an expert placement scheme for the transient and stable states in future work.
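The abstract does not name the three classical predictors, so as one plausible example, here is a sliding-window moving average over recent load proportions, with a percentage-error metric in the spirit of the reported numbers. The window size and Dirichlet toy data are assumptions.

```python
# Sliding-window moving average as one plausible classical predictor of
# expert load proportions.
import numpy as np

def predict_load(history, window=100):
    """history: (steps, n_experts) array of load proportions (rows sum to 1)."""
    pred = history[-window:].mean(axis=0)
    return pred / pred.sum()  # renormalize to a valid proportion

def error_rate(pred, actual):
    # Mean absolute error of the predicted proportions, in percent.
    return 100 * np.abs(pred - actual).mean()

rng = np.random.default_rng(0)
loads = rng.dirichlet(np.full(8, 50.0), size=2000)  # toy "stable state" loads
print(error_rate(predict_load(loads[:-1]), loads[-1]))
```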
[ "['Peizhuang Cong' 'Aomufei Yuan' 'Shimao Chen' 'Yuxuan Tian' 'Bowen Ye'\n 'Tong Yang']" ]
null
null
2404.16917
null
null
http://arxiv.org/pdf/2404.16917v1
2024-04-25T16:07:01Z
2024-04-25T16:07:01Z
Grad Queue : A probabilistic framework to reinforce sparse gradients
Informative gradients are often lost in large batch updates. We propose a robust mechanism to reinforce the sparse components within a random batch of data points. A finite queue of online gradients is used to determine their expected instantaneous statistics. We propose a function to measure the scarcity of incoming gradients using these statistics and establish the theoretical grounding of this mechanism. To minimize conflicting components within large mini-batches, samples with aligned objectives are grouped together by clustering based on the inherent feature space. Sparsity is measured for each centroid and weighted accordingly. A strong intuitive criterion to squeeze redundant information out of each cluster is the backbone of the system. It makes rare information indifferent to aggressive momentum and also exhibits superior performance over a larger mini-batch horizon. The effective length of the queue is kept variable to follow the local loss pattern. The contribution of our method is to restore intra-mini-batch diversity while at the same time widening the optimal batch boundary. Both of these collectively drive the optimizer deeper towards the minima. Our method has shown superior performance on CIFAR10, MNIST, and the Reuters news category dataset compared to mini-batch gradient descent.
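A minimal sketch of the queue mechanism, under our own simplifying assumptions: keep a finite deque of recent gradients, estimate per-component statistics, and boost components that deviate strongly from their running mean. The paper's actual scarcity function and clustering step are not reproduced here.

```python
# Finite gradient queue that estimates instantaneous statistics and
# up-weights rare (sparse) components; the scarcity function is an
# illustrative assumption.
from collections import deque
import numpy as np

class GradQueue:
    def __init__(self, maxlen=50):
        self.queue = deque(maxlen=maxlen)  # finite queue of online gradients

    def reinforce(self, grad, eps=1e-8):
        self.queue.append(grad)
        stack = np.stack(self.queue)
        mean, std = stack.mean(axis=0), stack.std(axis=0) + eps
        # Components far from their running mean are treated as scarce
        # information and amplified before the update.
        scarcity = np.abs(grad - mean) / std
        weight = 1.0 + scarcity / (1.0 + scarcity)  # bounded boost in [1, 2)
        return weight * grad

gq = GradQueue()
g = np.random.default_rng(1).normal(size=10)
print(gq.reinforce(g))
```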
[ "['Irfan Mohammad Al Hasib']" ]
null
null
2404.16918
null
null
http://arxiv.org/pdf/2404.16918v1
2024-04-25T17:16:13Z
2024-04-25T17:16:13Z
On-the-fly Data Augmentation for Forecasting with Deep Learning
Deep learning approaches are increasingly used to tackle forecasting tasks. A key factor in the successful application of these methods is a large enough training sample size, which is not always available. In these scenarios, synthetic data generation techniques are usually applied to augment the dataset. Data augmentation is typically applied before fitting a model. However, these approaches create a single augmented dataset, potentially limiting their effectiveness. This work introduces OnDAT (On-the-fly Data Augmentation for Time series) to address this issue by applying data augmentation during training and validation. Contrary to traditional methods that create a single, static augmented dataset beforehand, OnDAT performs augmentation on-the-fly. By generating a new augmented dataset on each iteration, the model is exposed to constantly changing variations of the data. We hypothesize this process enables a better exploration of the data space, which reduces the potential for overfitting and improves forecasting performance. We validated the proposed approach using a state-of-the-art deep learning forecasting method and 8 benchmark datasets containing a total of 75797 time series. The experiments suggest that OnDAT leads to better forecasting performance than a strategy that applies data augmentation before training, as well as a strategy that does not involve data augmentation. The method and experiments are publicly available.
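The core contrast with static augmentation can be shown in a few lines: a fresh augmented view is drawn inside the training loop, so no fixed augmented dataset ever exists. Gaussian jittering stands in for the paper's actual augmentation method here, as an assumption.

```python
# On-the-fly augmentation sketch: a new augmented view of each batch is
# generated per iteration instead of once before training.
import numpy as np

def augment(batch, rng, sigma=0.05):
    # New noise on every call: the model never sees the same view twice.
    return batch + rng.normal(scale=sigma, size=batch.shape)

def train(model_step, series, n_iters=1000, batch_size=32, seed=0):
    """model_step: callable performing one optimization step on a batch."""
    rng = np.random.default_rng(seed)
    for _ in range(n_iters):
        idx = rng.integers(0, len(series), size=batch_size)
        model_step(augment(series[idx], rng))  # augmentation happens per iteration
```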
[ "['Vitor Cerqueira' 'Moisés Santos' 'Yassine Baghoussi' 'Carlos Soares']" ]
null
null
2404.16920
null
null
http://arxiv.org/pdf/2404.16920v1
2024-04-25T17:26:41Z
2024-04-25T17:26:41Z
Structured Reinforcement Learning for Delay-Optimal Data Transmission in Dense mmWave Networks
We study the data packet transmission problem (mmDPT) in dense cell-free millimeter wave (mmWave) networks, i.e., users sending data packet requests to access points (APs) via uplinks and APs transmitting requested data packets to users via downlinks. Our objective is to minimize the average delay in the system due to APs' limited service capacity and unreliable wireless channels between APs and users. This problem can be formulated as a restless multi-armed bandits problem with fairness constraint (RMAB-F). Since finding the optimal policy for RMAB-F is intractable, existing learning algorithms are computationally expensive and not suitable for practical dynamic dense mmWave networks. In this paper, we propose a structured reinforcement learning (RL) solution for mmDPT by exploiting the inherent structure encoded in RMAB-F. To achieve this, we first design a low-complexity and provably asymptotically optimal index policy for RMAB-F. Then, we leverage this structure information to develop a structured RL algorithm called mmDPT-TS, which provably achieves an $\tilde{O}(\sqrt{T})$ Bayesian regret. More importantly, mmDPT-TS is computation-efficient and thus amenable to practical implementation, as it fully exploits the structure of the index policy for making decisions. Extensive emulation based on data collected in realistic mmWave networks demonstrates significant gains of mmDPT-TS over existing approaches.
[ "['Shufan Wang' 'Guojun Xiong' 'Shichen Zhang' 'Huacheng Zeng' 'Jian Li'\n 'Shivendra Panwar']" ]
null
null
2404.16921
null
null
http://arxiv.org/pdf/2404.16921v1
2024-04-25T17:52:19Z
2024-04-25T17:52:19Z
A Short Survey of Human Mobility Prediction in Epidemic Modeling from Transformers to LLMs
This paper provides a comprehensive survey of recent advancements in leveraging machine learning techniques, particularly Transformer models, for predicting human mobility patterns during epidemics. Understanding how people move during epidemics is essential for modeling the spread of diseases, informing epidemiological models, and devising effective response strategies in public health emergencies. Predicting mobility patterns can enable authorities to better anticipate the geographical and temporal spread of diseases, allocate resources more efficiently, and implement targeted interventions. We review a range of approaches utilizing both pretrained language models like BERT and Large Language Models (LLMs) tailored specifically for mobility prediction tasks. These models have demonstrated significant potential in capturing complex spatio-temporal dependencies and contextual patterns in textual data.
[ "['Christian N. Mayemba' \"D'Jeff K. Nkashama\" 'Jean Marie Tshimula'\n 'Maximilien V. Dialufuma' 'Jean Tshibangu Muabila' 'Mbuyi Mukendi Didier'\n 'Hugues Kanda' 'René Manassé Galekwa' 'Heber Dibwe Fita' 'Serge Mundele'\n 'Kalonji Kalala' 'Aristarque Ilunga' 'Lambert Mukendi Ntobo'\n 'Dominique Muteba' 'Aaron Aruna Abedi']" ]
null
null
2404.16954
null
null
http://arxiv.org/pdf/2404.16954v1
2024-04-25T18:06:47Z
2024-04-25T18:06:47Z
Taming False Positives in Out-of-Distribution Detection with Human Feedback
Robustness to out-of-distribution (OOD) samples is crucial for safely deploying machine learning models in the open world. Recent works have focused on designing scoring functions to quantify OOD uncertainty. Setting appropriate thresholds for these scoring functions for OOD detection is challenging as OOD samples are often unavailable up front. Typically, thresholds are set to achieve a desired true positive rate (TPR), e.g., $95\%$ TPR. However, this can lead to very high false positive rates (FPR), ranging from 60 to 96%, as observed in the Open-OOD benchmark. In safety-critical real-life applications, e.g., medical diagnosis, controlling the FPR is essential when dealing with various OOD samples dynamically. To address these challenges, we propose a mathematically grounded OOD detection framework that leverages expert feedback to \emph{safely} update the threshold on the fly. We provide theoretical results showing that it is guaranteed to meet the FPR constraint at all times while minimizing the use of human feedback. Another key feature of our framework is that it can work with any scoring function for OOD uncertainty quantification. Empirical evaluation of our system on synthetic and benchmark OOD datasets shows that our method can maintain FPR at most $5\%$ while maximizing TPR.
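A toy version of the feedback loop might look like the following: flag samples whose score exceeds the current threshold, collect expert labels on the flagged ones, and nudge the threshold to keep the empirical FPR within budget. This simplified update is our own illustration and lacks the paper's formal guarantee.

```python
# Feedback-driven threshold update sketch; the step rule is an assumption.
import numpy as np

def update_threshold(scores, expert_labels, tau, target_fpr=0.05, step=0.01):
    """scores: OOD scores of incoming samples; expert_labels: callable that
    returns a boolean array (True = truly OOD) for the flagged indices."""
    flagged = np.where(scores > tau)[0]
    if flagged.size == 0:
        return tau
    labels = np.asarray(expert_labels(flagged))  # human feedback on flags
    fpr = np.mean(~labels)  # flagged samples that were actually in-distribution
    # Raise the threshold when false positives exceed the budget,
    # otherwise lower it cautiously to recover true positives.
    return tau + step if fpr > target_fpr else tau - step
```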
[ "['Harit Vishwakarma' 'Heguang Lin' 'Ramya Korlakai Vinayak']" ]
null
null
2404.16956
null
null
http://arxiv.org/pdf/2404.16956v2
2024-05-17T23:48:47Z
2024-04-25T18:10:27Z
A Notion of Uniqueness for the Adversarial Bayes Classifier
We propose a new notion of uniqueness for the adversarial Bayes classifier in the setting of binary classification. Analyzing this concept produces a simple procedure for computing all adversarial Bayes classifiers for a well-motivated family of one dimensional data distributions. This characterization is then leveraged to show that, as the perturbation radius increases, certain regularity properties of adversarial Bayes classifiers improve. Various examples demonstrate that the boundary of the adversarial Bayes classifier frequently lies near the boundary of the Bayes classifier.
[ "['Natalie S. Frank']" ]
null
null
2404.16958
null
null
http://arxiv.org/abs/2404.16958v2
2024-07-02T08:53:09Z
2024-04-25T18:12:43Z
A Closer Look at Classification Evaluation Metrics and a Critical Reflection of Common Evaluation Practice
Classification systems are evaluated in a countless number of papers. However, we find that evaluation practice is often nebulous. Frequently, metrics are selected without arguments, and blurry terminology invites misconceptions. For instance, many works use so-called 'macro' metrics to rank systems (e.g., 'macro F1') but do not clearly specify what they would expect from such a 'macro' metric. This is problematic, since picking a metric can affect research findings, and thus any clarity in the process should be maximized. Starting from the intuitive concepts of bias and prevalence, we perform an analysis of common evaluation metrics. The analysis helps us understand the metrics' underlying properties, and how they align with expectations as found expressed in papers. Then we reflect on the practical situation in the field, and survey evaluation practice in recent shared tasks. We find that metric selection is often not supported with convincing arguments, an issue that can make a system ranking seem arbitrary. Our work aims at providing an overview and guidance for more informed and transparent metric selection, fostering meaningful evaluation.
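A small illustration of why the choice matters: on an imbalanced toy problem, micro F1 (which equals accuracy in single-label classification) and macro F1 (which averages per-class F1 regardless of prevalence) tell different stories. The data below is invented purely for illustration.

```python
# Macro vs. micro F1 on an imbalanced toy problem (scikit-learn).
from sklearn.metrics import f1_score

y_true = [0] * 90 + [1] * 10           # class 1 is rare
y_pred = [0] * 90 + [1] * 5 + [0] * 5  # half of the rare class is missed

print("micro F1:", f1_score(y_true, y_pred, average="micro"))  # dominated by class 0
print("macro F1:", f1_score(y_true, y_pred, average="macro"))  # penalized by class 1
```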
[ "['Juri Opitz']" ]
null
null
2404.16967
null
null
http://arxiv.org/pdf/2404.16967v1
2024-03-28T23:55:10Z
2024-03-28T23:55:10Z
ML2SC: Deploying Machine Learning Models as Smart Contracts on the Blockchain
With the growing concern of AI safety, there is a need to trust the computations done by machine learning (ML) models. Blockchain technology, known for recording data and running computations transparently and in a tamper-proof manner, can offer this trust. One significant challenge in deploying ML classifiers on-chain is that while ML models are typically written in Python using an ML library such as PyTorch, smart contracts deployed on EVM-compatible blockchains are written in Solidity. We introduce Machine Learning to Smart Contract (ML2SC), a PyTorch to Solidity translator that can automatically translate multi-layer perceptron (MLP) models written in PyTorch to Solidity smart contract versions. ML2SC uses a fixed-point math library to approximate floating-point computation. After deploying the generated smart contract, we can train our models off-chain using PyTorch and then transfer the acquired weights and biases to the smart contract using a function call. Finally, the model inference can also be done with a function call providing the input. We mathematically model the gas costs associated with deploying, updating model parameters, and running inference on these models on-chain, showing that the gas costs increase linearly in various parameters associated with an MLP. We present empirical results matching our modeling. We also evaluate the classification accuracy, showing that the outputs obtained by our transparent on-chain implementation are identical to those of the original off-chain implementation in PyTorch.
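To see how fixed-point math stands in for floating point on-chain, consider this Python emulation of an integer-only dot product using an assumed 1e18 scaling factor (a common Solidity fixed-point convention); ML2SC's actual library may differ.

```python
# Fixed-point emulation of a linear layer using only integer arithmetic,
# mirroring how on-chain fixed-point math libraries operate.
SCALE = 10**18  # assumed fixed-point scale

def to_fixed(x):
    return int(x * SCALE)

def fixed_dot(weights, inputs, bias):
    # All operands are integers; dividing by SCALE rescales after each
    # multiplication, as an on-chain library would.
    acc = bias
    for w, x in zip(weights, inputs):
        acc += (w * x) // SCALE
    return acc

w = [to_fixed(0.5), to_fixed(-1.25)]
x = [to_fixed(2.0), to_fixed(0.8)]
b = to_fixed(0.1)
print(fixed_dot(w, x, b) / SCALE)  # ~0.1 + 1.0 - 1.0 = 0.1
```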
[ "['Zhikai Li' 'Steve Vott' 'Bhaskar Krishnamachar']" ]
null
null
2404.16969
null
null
http://arxiv.org/pdf/2404.16969v2
2024-04-29T07:33:24Z
2024-04-25T18:42:25Z
COCOLA: Coherence-Oriented Contrastive Learning of Musical Audio Representations
We present COCOLA (Coherence-Oriented Contrastive Learning for Audio), a contrastive learning method for musical audio representations that captures the harmonic and rhythmic coherence between samples. Our method operates at the level of stems (or their combinations) composing music tracks and allows the objective evaluation of compositional models for music in the task of accompaniment generation. We also introduce a new baseline for compositional music generation called CompoNet, based on ControlNet, which generalizes the tasks of MSDM, and quantify it against the latter using COCOLA. We release all models trained on public datasets containing separate stems (MUSDB18-HQ, MoisesDB, Slakh2100, and CocoChorales).
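A generic sketch of coherence-oriented contrastive training: stems from the same track form positive pairs and the other tracks in the batch serve as negatives, scored with an InfoNCE-style loss. This illustrates the general recipe, not COCOLA's exact objective.

```python
# InfoNCE-style loss over stem embeddings: emb_a[i] and emb_b[i] come from
# coherent stems of the same track (positives); other rows act as negatives.
import torch
import torch.nn.functional as F

def contrastive_loss(emb_a, emb_b, temperature=0.1):
    a = F.normalize(emb_a, dim=1)
    b = F.normalize(emb_b, dim=1)
    logits = a @ b.t() / temperature  # pairwise cosine similarities
    targets = torch.arange(len(a))    # the matching index is the positive pair
    return F.cross_entropy(logits, targets)

loss = contrastive_loss(torch.randn(16, 128), torch.randn(16, 128))
```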
[ "['Ruben Ciranni' 'Emilian Postolache' 'Giorgio Mariani' 'Michele Mancusi'\n 'Luca Cosmo' 'Emanuele Rodolà']" ]
null
null
2404.16989
null
null
http://arxiv.org/pdf/2404.16989v1
2024-04-25T19:18:30Z
2024-04-25T19:18:30Z
IDIL: Imitation Learning of Intent-Driven Expert Behavior
When faced with accomplishing a task, human experts exhibit intentional behavior. Their unique intents shape their plans and decisions, resulting in experts demonstrating diverse behaviors to accomplish the same task. Due to the uncertainties encountered in the real world and their bounded rationality, experts sometimes adjust their intents, which in turn influences their behaviors during task execution. This paper introduces IDIL, a novel imitation learning algorithm to mimic these diverse intent-driven behaviors of experts. Iteratively, our approach estimates expert intent from heterogeneous demonstrations and then uses it to learn an intent-aware model of their behavior. Unlike contemporary approaches, IDIL is capable of addressing sequential tasks with high-dimensional state representations, while sidestepping the complexities and drawbacks associated with adversarial training (a mainstay of related techniques). Our empirical results suggest that the models generated by IDIL either match or surpass those produced by recent imitation learning benchmarks in metrics of task performance. Moreover, as it creates a generative model, IDIL demonstrates superior performance in intent inference metrics, crucial for human-agent interactions, and aptly captures a broad spectrum of expert behaviors.
[ "['Sangwon Seo' 'Vaibhav Unhelkar']" ]
null
null
2404.17019
null
null
http://arxiv.org/pdf/2404.17019v1
2024-04-25T20:25:12Z
2024-04-25T20:25:12Z
Neyman Meets Causal Machine Learning: Experimental Evaluation of Individualized Treatment Rules
A century ago, Neyman showed how to evaluate the efficacy of treatment using a randomized experiment under a minimal set of assumptions. This classical repeated sampling framework serves as a basis of routine experimental analyses conducted by today's scientists across disciplines. In this paper, we demonstrate that Neyman's methodology can also be used to experimentally evaluate the efficacy of individualized treatment rules (ITRs), which are derived by modern causal machine learning algorithms. In particular, we show how to account for additional uncertainty resulting from a training process based on cross-fitting. The primary advantage of Neyman's approach is that it can be applied to any ITR regardless of the properties of machine learning algorithms that are used to derive the ITR. We also show, somewhat surprisingly, that for certain metrics, it is more efficient to conduct this ex-post experimental evaluation of an ITR than to conduct an ex-ante experimental evaluation that randomly assigns some units to the ITR. Our analysis demonstrates that Neyman's repeated sampling framework is as relevant for causal inference today as it has been since its inception.
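To make the repeated-sampling idea concrete, here is a minimal sketch of evaluating a fixed ITR from a completely randomized experiment via inverse-probability weighting, which is consistent under randomization. It omits the paper's cross-fitting uncertainty correction, and all function names are hypothetical.

```python
# Simplified experimental evaluation of an ITR under complete randomization.
import numpy as np

def evaluate_itr(itr, X, T, Y):
    """itr: rule mapping covariates X to 0/1 treatments;
    T: realized binary treatment; Y: observed outcome."""
    d = itr(X)        # treatments the rule would assign
    p = T.mean()      # randomization probability of treatment
    # Inverse-probability weighting recovers E[Y(d(X))] under randomization.
    w = np.where(d == 1, T / p, (1 - T) / (1 - p))
    value = np.mean(w * Y)
    se = np.std(w * Y, ddof=1) / np.sqrt(len(Y))
    return value, se
```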
[ "['Michael Lingzhi Li' 'Kosuke Imai']" ]
null
null
2404.17023
null
null
http://arxiv.org/pdf/2404.17023v1
2024-04-25T20:28:43Z
2024-04-25T20:28:43Z
Out-of-Distribution Detection using Maximum Entropy Coding
Given a default distribution $P$ and a set of test data $x^M=\{x_1,x_2,\ldots,x_M\}$, this paper seeks to answer the question of whether it was likely that $x^M$ was generated by $P$. For discrete distributions, the definitive answer is in principle given by Kolmogorov-Martin-Löf randomness. In this paper we seek to generalize this to continuous distributions. We consider a set of statistics $T_1(x^M),T_2(x^M),\ldots$. To each statistic we associate its maximum entropy distribution and with this a universal source coder. The maximum entropy distributions are subsequently combined to give a total codelength, which is compared with $-\log P(x^M)$. We show that this approach satisfies a number of theoretical properties. For real-world data, $P$ usually is unknown. We transform data into a standard distribution in the latent space using a bidirectional generative network and use maximum entropy coding there. We compare the resulting method to other methods that also use generative neural networks to detect anomalies. In most cases, our results show better performance.
[ "['Mojtaba Abolfazli' 'Mohammad Zaeri Amirani' 'Anders Høst-Madsen'\n 'June Zhang' 'Andras Bratincsak']" ]
null
null
2404.17034
null
null
http://arxiv.org/pdf/2404.17034v1
2024-04-25T20:49:03Z
2024-04-25T20:49:03Z
Learning Actionable Counterfactual Explanations in Large State Spaces
Counterfactual explanations (CFEs) are sets of actions that an agent with a negative classification could take to achieve a (desired) positive classification, for consequential decisions such as loan applications, hiring, admissions, etc. In this work, we consider settings where optimal CFEs correspond to solutions of weighted set cover problems. In particular, there is a collection of actions that agents can perform that each have their own cost and each provide the agent with different sets of capabilities. The agent wants to perform the cheapest subset of actions that together provide all the needed capabilities to achieve a positive classification. Since this is an NP-hard optimization problem, we are interested in the question: can we, from training data (instances of agents and their optimal CFEs) learn a CFE generator that will quickly provide optimal sets of actions for new agents? In this work, we provide a deep-network learning procedure that we show experimentally is able to achieve strong performance at this task. We consider several problem formulations, including formulations in which the underlying "capabilities" and effects of actions are not explicitly provided, and so there is an informational challenge in addition to the computational challenge. Our problem can also be viewed as one of learning an optimal policy in a family of large but deterministic Markov Decision Processes (MDPs).
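For intuition, a greedy heuristic for the weighted set cover formulation looks roughly like the sketch below: repeatedly pick the action with the lowest cost per newly covered capability. This is a classical baseline, not the paper's learned CFE generator, and the action names, costs, and capabilities are invented.

```python
# Greedy weighted set cover baseline for CFEs; assumes `needed` is coverable.
def greedy_cfe(actions, needed):
    """actions: {name: (cost, set_of_capabilities)}; needed: set to cover."""
    chosen, covered, total = [], set(), 0.0
    while not needed <= covered:
        # Pick the action with the best cost per newly provided capability.
        name, (cost, caps) = min(
            ((n, a) for n, a in actions.items() if a[1] - covered),
            key=lambda item: item[1][0] / len(item[1][1] - covered),
        )
        chosen.append(name)
        covered |= caps
        total += cost
    return chosen, total

actions = {"course": (3.0, {"skill"}), "job": (5.0, {"income", "skill"}),
           "savings": (2.0, {"income"})}
print(greedy_cfe(actions, {"skill", "income"}))  # (['savings', 'course'], 5.0)
```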
[ "['Keziah Naggita' 'Matthew R. Walter' 'Avrim Blum']" ]
null
null
2404.17047
null
null
http://arxiv.org/pdf/2404.17047v2
2024-05-24T12:51:29Z
2024-04-25T21:14:24Z
Near to Mid-term Risks and Opportunities of Open-Source Generative AI
In the next few years, applications of Generative AI are expected to revolutionize a number of different areas, ranging from science & medicine to education. The potential for these seismic changes has triggered a lively debate about potential risks and resulted in calls for tighter regulation, in particular from some of the major tech companies who are leading in AI development. This regulation is likely to put at risk the budding field of open-source Generative AI. We argue for the responsible open sourcing of generative AI models in the near and medium term. To set the stage, we first introduce an AI openness taxonomy system and apply it to 40 current large language models. We then outline differential benefits and risks of open versus closed source AI and present potential risk mitigation, ranging from best practices to calls for technical and scientific contributions. We hope that this report will add a much needed missing voice to the current public discourse on near to mid-term AI safety and other societal impact.
[ "['Francisco Eiras' 'Aleksandar Petrov' 'Bertie Vidgen'\n 'Christian Schroeder de Witt' 'Fabio Pizzati' 'Katherine Elkins'\n 'Supratik Mukhopadhyay' 'Adel Bibi' 'Botos Csaba' 'Fabro Steibel'\n 'Fazl Barez' 'Genevieve Smith' 'Gianluca Guadagni' 'Jon Chun'\n 'Jordi Cabot' 'Joseph Marvin Imperial' 'Juan A. Nolazco-Flores'\n 'Lori Landay' 'Matthew Jackson' 'Paul Röttger' 'Philip H. S. Torr'\n 'Trevor Darrell' 'Yong Suk Lee' 'Jakob Foerster']" ]
null
null
2404.17048
null
null
http://arxiv.org/pdf/2404.17048v1
2024-04-25T21:15:15Z
2024-04-25T21:15:15Z
Transductive Spiking Graph Neural Networks for Loihi
Graph neural networks have emerged as a specialized branch of deep learning, designed to address problems where pairwise relations between objects are crucial. Recent advancements utilize graph convolutional neural networks to extract features within graph structures. Despite promising results, these methods face challenges in real-world applications due to sparse features, resulting in inefficient resource utilization. Recent studies draw inspiration from the mammalian brain and employ spiking neural networks to model and learn graph structures. However, these approaches are limited to traditional Von Neumann-based computing systems, which still face hardware inefficiencies. In this study, we present a fully neuromorphic implementation of spiking graph neural networks designed for Loihi 2. We optimize network parameters using Lava Bayesian Optimization, a novel hyperparameter optimization system compatible with neuromorphic computing architectures. We showcase the performance benefits of combining neuromorphic Bayesian optimization with our approach for citation graph classification using fixed-precision spiking neurons. Our results demonstrate the capability of integer-precision, Loihi 2 compatible spiking neural networks in performing citation graph classification with comparable accuracy to existing floating point implementations.
[ "['Shay Snyder' 'Victoria Clerico' 'Guojing Cong' 'Shruti Kulkarni'\n 'Catherine Schuman' 'Sumedh R. Risbud' 'Maryam Parsa']" ]
null
null
2404.17069
null
null
http://arxiv.org/pdf/2404.17069v1
2024-04-25T22:34:36Z
2024-04-25T22:34:36Z
Channel Modeling for FR3 Upper Mid-band via Generative Adversarial Networks
The upper mid-band (FR3) has recently been attracting interest for the new generation of mobile networks, as it provides a promising balance between spectrum availability and coverage, which are inherent limitations of the sub-6 GHz and millimeter wave bands, respectively. In order to efficiently design and optimize the network, channel modeling plays a key role, since FR3 systems are expected to operate at multiple frequency bands. Data-driven methods, especially generative adversarial networks (GANs), can capture the intricate relationships among data samples and provide an appropriate tool for FR3 channel modeling. In this work, we present the architecture, link state model, and path generative network of GAN-based FR3 channel modeling. The channels generated by our model closely match the ray-tracing simulated data.
[ "['Yaqi Hu' 'Mingsheng Yin' 'Marco Mezzavilla' 'Hao Guo' 'Sundeep Rangan']" ]
null
null
2404.17099
null
null
http://arxiv.org/pdf/2404.17099v1
2024-04-26T01:20:45Z
2024-04-26T01:20:45Z
Unleashing the Potential of Fractional Calculus in Graph Neural Networks with FROND
We introduce the FRactional-Order graph Neural Dynamical network (FROND), a new continuous graph neural network (GNN) framework. Unlike traditional continuous GNNs that rely on integer-order differential equations, FROND employs the Caputo fractional derivative to leverage the non-local properties of fractional calculus. This approach enables the capture of long-term dependencies in feature updates, moving beyond the Markovian update mechanisms in conventional integer-order models and offering enhanced capabilities in graph representation learning. We offer an interpretation of the node feature updating process in FROND from a non-Markovian random walk perspective when the feature updating is particularly governed by a diffusion process. We demonstrate analytically that oversmoothing can be mitigated in this setting. Experimentally, we validate the FROND framework by comparing the fractional adaptations of various established integer-order continuous GNNs, demonstrating their consistently improved performance and underscoring the framework's potential as an effective extension to enhance traditional continuous GNNs. The code is available at \url{https://github.com/zknus/ICLR2024-FROND}.
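To illustrate the memory effect that distinguishes fractional dynamics from integer-order ODEs, the sketch below uses the Grünwald-Letnikov discretization of a fractional derivative, where each update is a weighted sum over the entire state history. The choice of alpha, step size, and dynamics function is illustrative; FROND's actual solver may differ.

```python
# Grünwald-Letnikov sketch of a fractional-order update: slowly decaying
# weights over the whole history give the non-Markovian memory effect.
import numpy as np

def gl_weights(alpha, n):
    # w_k = (-1)^k * binom(alpha, k), via the standard recurrence.
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - alpha) / k
    return w

def fractional_evolve(x0, f, alpha=0.6, h=0.1, steps=50):
    xs = [x0]
    for n in range(1, steps + 1):
        w = gl_weights(alpha, n + 1)
        # History term: weighted sum of all past states (the memory).
        history = sum(w[k] * xs[n - k] for k in range(1, n + 1))
        xs.append(h**alpha * f(xs[-1]) - history)
    return np.array(xs)

traj = fractional_evolve(np.ones(4), f=lambda x: -x)  # toy diffusion-like decay
```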
[ "['Qiyu Kang' 'Kai Zhao' 'Qinxu Ding' 'Feng Ji' 'Xuhao Li' 'Wenfei Liang'\n 'Yang Song' 'Wee Peng Tay']" ]
null
null
2404.17110
null
null
http://arxiv.org/pdf/2404.17110v1
2024-04-26T01:57:12Z
2024-04-26T01:57:12Z
Software Vulnerability Prediction in Low-Resource Languages: An Empirical Study of CodeBERT and ChatGPT
Background: Software Vulnerability (SV) prediction in emerging languages is increasingly important to ensure software security in modern systems. However, these languages usually have limited SV data for developing high-performing prediction models. Aims: We conduct an empirical study to evaluate the impact of SV data scarcity in emerging languages on the state-of-the-art SV prediction model and investigate potential solutions to enhance the performance. Method: We train and test the state-of-the-art model based on CodeBERT with and without data sampling techniques for function-level and line-level SV prediction in three low-resource languages - Kotlin, Swift, and Rust. We also assess the effectiveness of ChatGPT for low-resource SV prediction given its recent success in other domains. Results: Compared to the original work in C/C++ with large data, CodeBERT's performance of function-level and line-level SV prediction significantly declines in low-resource languages, signifying the negative impact of data scarcity. Regarding remediation, data sampling techniques fail to improve CodeBERT; whereas, ChatGPT showcases promising results, substantially enhancing predictive performance by up to 34.4% for the function level and up to 53.5% for the line level. Conclusion: We have highlighted the challenge and made the first promising step for low-resource SV prediction, paving the way for future research in this direction.
[ "['Triet H. M. Le' 'M. Ali Babar' 'Tung Hoang Thai']" ]
null
null
2404.17113
null
null
http://arxiv.org/pdf/2404.17113v3
2024-05-23T12:43:15Z
2024-04-26T02:05:20Z
MER 2024: Semi-Supervised Learning, Noise Robustness, and Open-Vocabulary Multimodal Emotion Recognition
Multimodal emotion recognition is an important research topic in artificial intelligence. Over the past few decades, researchers have made remarkable progress by increasing dataset size and building more effective architectures. However, due to various reasons (such as complex environments and inaccurate annotations), current systems are hard to meet the demands of practical applications. Therefore, we organize a series of challenges around emotion recognition to further promote the development of this area. Last year, we launched MER2023, focusing on three topics: multi-label learning, noise robustness, and semi-supervised learning. This year, we continue to organize MER2024. In addition to expanding the dataset size, we introduce a new track around open-vocabulary emotion recognition. The main consideration for this track is that existing datasets often fix the label space and use majority voting to enhance annotator consistency, but this process may limit the model's ability to describe subtle emotions. In this track, we encourage participants to generate any number of labels in any category, aiming to describe the emotional state as accurately as possible. Our baseline is based on MERTools and the code is available at: https://github.com/zeroQiaoba/MERTools/tree/master/MER2024.
[ "['Zheng Lian' 'Haiyang Sun' 'Licai Sun' 'Zhuofan Wen' 'Siyuan Zhang'\n 'Shun Chen' 'Hao Gu' 'Jinming Zhao' 'Ziyang Ma' 'Xie Chen' 'Jiangyan Yi'\n 'Rui Liu' 'Kele Xu' 'Bin Liu' 'Erik Cambria' 'Guoying Zhao'\n 'Björn W. Schuller' 'Jianhua Tao']" ]
null
null
2404.17120
null
null
http://arxiv.org/pdf/2404.17120v2
2024-04-29T17:41:51Z
2024-04-26T02:29:26Z
Talking Nonsense: Probing Large Language Models' Understanding of Adversarial Gibberish Inputs
Large language models (LLMs) exhibit excellent ability to understand human languages, but do they also understand their own language that appears gibberish to us? In this work, we delve into this question, aiming to uncover the mechanisms underlying such behavior in LLMs. We employ the Greedy Coordinate Gradient optimizer to craft prompts that compel LLMs to generate coherent responses from seemingly nonsensical inputs. We call these inputs LM Babel, and this work systematically studies the behavior of LLMs manipulated by these prompts. We find that the manipulation efficiency depends on the target text's length and perplexity, with the Babel prompts often located in lower loss minima compared to natural prompts. We further examine the structure of the Babel prompts and evaluate their robustness. Notably, we find that guiding the model to generate harmful texts is no more difficult than guiding it to generate benign texts, suggesting a lack of alignment for out-of-distribution prompts.
[ "['Valeriia Cherepanova' 'James Zou']" ]
null
null
2404.17123
null
null
http://arxiv.org/pdf/2404.17123v2
2024-06-12T14:12:17Z
2024-04-26T02:40:03Z
Text Sentiment Analysis and Classification Based on Bidirectional Gated Recurrent Units (GRUs) Model
This paper explores the importance of text sentiment analysis and classification in the field of natural language processing, and proposes a new approach to sentiment analysis and classification based on the bidirectional gated recurrent units (GRUs) model. The study first analyses the word cloud model of the text with six sentiment labels, and then carries out data preprocessing, including removing special symbols, punctuation marks, numbers, stop words, and non-alphabetic parts. Subsequently, the dataset is divided into a training set and a test set. Through model training and testing, the accuracy on the validation set rises from 85% to 93%, an increase of 8 percentage points; at the same time, the validation loss decreases from 0.7 to 0.1 and stabilizes, with the model gradually approaching the actual values, showing that it can effectively classify text emotions. The confusion matrix shows that the accuracy of the model on the test set reaches 94.8%, the precision is 95.9%, the recall is 99.1%, and the F1 score is 97.4%, which indicates that the model has good generalisation ability and classification performance. Overall, the study demonstrates an effective method for text sentiment analysis and classification with satisfactory results.
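A minimal bidirectional GRU classifier in PyTorch illustrating the architecture described; the vocabulary size, embedding dimension, and hidden size are assumptions, with six output classes matching the six sentiment labels.

```python
# Minimal bidirectional GRU text classifier sketch (PyTorch).
import torch
import torch.nn as nn

class BiGRUClassifier(nn.Module):
    def __init__(self, vocab_size=20000, embed_dim=128, hidden=64, n_classes=6):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.gru = nn.GRU(embed_dim, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)  # forward + backward states

    def forward(self, token_ids):
        _, h = self.gru(self.embed(token_ids))
        # Concatenate the final forward and backward hidden states.
        return self.head(torch.cat([h[-2], h[-1]], dim=1))

logits = BiGRUClassifier()(torch.randint(1, 20000, (8, 50)))  # batch of 8 texts
```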
[ "['Wei Xu' 'Jianlong Chen' 'Zhicheng Ding' 'Jinyin Wang']" ]
null
null
2404.17126
null
null
http://arxiv.org/pdf/2404.17126v1
2024-04-26T02:43:45Z
2024-04-26T02:43:45Z
Deep Evidential Learning for Dose Prediction
In this work, we present a novel application of an uncertainty-quantification framework called Deep Evidential Learning in the domain of radiotherapy dose prediction. Using medical images of the Open Knowledge-Based Planning Challenge dataset, we found that this model can be effectively harnessed to yield uncertainty estimates that inherit correlations with prediction errors upon completion of network training. This was achieved only after reformulating the original loss function for a stable implementation. We found that (i) epistemic uncertainty was highly correlated with prediction errors, with various association indices comparable to or stronger than those for Monte-Carlo Dropout and Deep Ensemble methods; (ii) the median error varied with the uncertainty threshold much more linearly for epistemic uncertainty in Deep Evidential Learning relative to these other two conventional frameworks, indicative of a more uniformly calibrated sensitivity to model errors; (iii) relative to epistemic uncertainty, aleatoric uncertainty demonstrated a more significant shift in its distribution in response to Gaussian noise added to CT intensity, compatible with its interpretation as reflecting data noise. Collectively, our results suggest that Deep Evidential Learning is a promising approach that can endow deep-learning models in radiotherapy dose prediction with statistical robustness. Towards enhancing its clinical relevance, we demonstrate how we can use such a model to construct the predicted Dose-Volume-Histograms' confidence intervals.
[ "['Hai Siong Tan' 'Kuancheng Wang' 'Rafe Mcbeth']" ]
null
null
2404.17144
null
null
http://arxiv.org/pdf/2404.17144v1
2024-04-26T04:21:14Z
2024-04-26T04:21:14Z
Sensor Response-Time Reduction using Long-Short Term Memory Network Forecasting
The response time of a biosensor is a crucial metric in safety-critical applications such as medical diagnostics where an earlier diagnosis can markedly improve patient outcomes. However, the speed at which a biosensor reaches a final equilibrium state can be limited by poor mass transport and long molecular diffusion times that increase the time it takes target molecules to reach the active sensing region of a biosensor. While optimization of system and sensor design can promote molecules reaching the sensing element faster, a simpler and complementary approach for response time reduction that is widely applicable across all sensor platforms is to use time-series forecasting to predict the ultimate steady-state sensor response. In this work, we show that ensembles of long short-term memory (LSTM) networks can accurately predict equilibrium biosensor response from a small quantity of initial time-dependent biosensor measurements, allowing for significant reduction in response time by a mean and median factor of improvement of 18.6 and 5.1, respectively. The ensemble of models also provides simultaneous estimation of uncertainty, which is vital to provide confidence in the predictions and subsequent safety-related decisions that are made. This approach is demonstrated on real-time experimental data collected by exposing porous silicon biosensors to buffered protein solutions using a multi-channel fluidic cell that enables the automated measurement of 100 porous silicon biosensors in parallel. The dramatic improvement in sensor response time achieved using LSTM network ensembles and associated uncertainty quantification opens the door to trustworthy and faster responding biosensors, enabling more rapid medical diagnostics for improved patient outcomes and healthcare access, as well as quicker identification of toxins in food and the environment.
[ "['Simon J. Ward' 'Muhamed Baljevic' 'Sharon M. Weiss']" ]
null
null
2404.17147
null
null
http://arxiv.org/pdf/2404.17147v1
2024-04-26T04:34:45Z
2024-04-26T04:34:45Z
On the Federated Learning Framework for Cooperative Perception
Cooperative perception (CP) is essential to enhance the efficiency and safety of future transportation systems, requiring extensive data sharing among vehicles on the road, which raises significant privacy concerns. Federated learning offers a promising solution by enabling data privacy-preserving collaborative enhancements in perception, decision-making, and planning among connected and autonomous vehicles (CAVs). However, federated learning is impeded by significant challenges arising from data heterogeneity across diverse clients, potentially diminishing model accuracy and prolonging convergence periods. This study introduces a specialized federated learning framework for CP, termed the federated dynamic weighted aggregation (FedDWA) algorithm, facilitated by a dynamic adjusting loss (DALoss) function. This framework employs dynamic client weighting to direct model convergence and integrates a novel loss function that utilizes Kullback-Leibler divergence (KLD) to counteract the detrimental effects of non-independently and identically distributed (Non-IID) and unbalanced data. Utilizing the BEV transformer as the primary model, our rigorous testing on the OpenV2V dataset, augmented with FedBEVT data, demonstrates significant improvements in the average intersection over union (IoU). These results highlight the substantial potential of our federated learning framework to address data heterogeneity challenges in CP, thereby enhancing the accuracy of environmental perception models and facilitating more robust and efficient collaborative learning solutions in the transportation sector.
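As a rough sketch of the dynamic weighting idea, the snippet below computes aggregation weights from the KL divergence between each client's label distribution and the global average, down-weighting strongly Non-IID clients. The exact weighting and loss in FedDWA/DALoss are not reproduced; this only illustrates the mechanism.

```python
# KLD-based dynamic client weighting sketch for federated aggregation.
import numpy as np

def kld(p, q, eps=1e-12):
    p, q = p + eps, q + eps
    return np.sum(p * np.log(p / q))

def aggregate(client_params, client_label_dists):
    global_dist = np.mean(client_label_dists, axis=0)
    # Clients whose data diverges more from the global distribution
    # contribute less to the aggregated model.
    w = np.array([1.0 / (1.0 + kld(d, global_dist)) for d in client_label_dists])
    w /= w.sum()
    return sum(wi * p for wi, p in zip(w, client_params))

params = [np.ones(3), 2 * np.ones(3)]
dists = [np.array([0.5, 0.5]), np.array([0.9, 0.1])]
print(aggregate(params, dists))
```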
[ "['Zhenrong Zhang' 'Jianan Liu' 'Xi Zhou' 'Tao Huang' 'Qing-Long Han'\n 'Jingxin Liu' 'Hongbin Liu']" ]
null
null
2404.17157
null
null
http://arxiv.org/pdf/2404.17157v1
2024-04-26T05:01:08Z
2024-04-26T05:01:08Z
Neuro-Symbolic Embedding for Short and Effective Feature Selection via Autoregressive Generation
Feature selection aims to identify the optimal feature subset for enhancing downstream models. Effective feature selection can remove redundant features, save computational resources, accelerate the model learning process, and improve the model overall performance. However, existing works are often time-intensive to identify the effective feature subset within high-dimensional feature spaces. Meanwhile, these methods mainly utilize a single downstream task performance as the selection criterion, leading to the selected subsets that are not only redundant but also lack generalizability. To bridge these gaps, we reformulate feature selection through a neuro-symbolic lens and introduce a novel generative framework aimed at identifying short and effective feature subsets. More specifically, we found that feature ID tokens of the selected subset can be formulated as symbols to reflect the intricate correlations among features. Thus, in this framework, we first create a data collector to automatically collect numerous feature selection samples consisting of feature ID tokens, model performance, and the measurement of feature subset redundancy. Building on the collected data, an encoder-decoder-evaluator learning paradigm is developed to preserve the intelligence of feature selection into a continuous embedding space for efficient search. Within the learned embedding space, we leverage a multi-gradient search algorithm to find more robust and generalized embeddings with the objective of improving model performance and reducing feature subset redundancy. These embeddings are then utilized to reconstruct the feature ID tokens for executing the final feature selection. Ultimately, comprehensive experiments and case studies are conducted to validate the effectiveness of the proposed framework.
[ "['Nanxu Gong' 'Wangyang Ying' 'Dongjie Wang' 'Yanjie Fu']" ]
null
null
2404.17158
null
null
http://arxiv.org/pdf/2404.17158v1
2024-04-26T05:03:48Z
2024-04-26T05:03:48Z
Online $\mathrm{L}^{\natural}$-Convex Minimization
An online decision-making problem is a learning problem in which a player repeatedly makes decisions in order to minimize the long-term loss. These problems that emerge in applications often have nonlinear combinatorial objective functions, and developing algorithms for such problems has attracted considerable attention. An existing general framework for dealing with such objective functions is the online submodular minimization. However, practical problems are often out of the scope of this framework, since the domain of a submodular function is limited to a subset of the unit hypercube. To manage this limitation of the existing framework, we in this paper introduce the online $\mathrm{L}^{\natural}$-convex minimization, where an $\mathrm{L}^{\natural}$-convex function generalizes a submodular function so that the domain is a subset of the integer lattice. We propose computationally efficient algorithms for the online $\mathrm{L}^{\natural}$-convex function minimization in two major settings: the full information and the bandit settings. We analyze the regrets of these algorithms and show in particular that our algorithm for the full information setting obtains a tight regret bound up to a constant factor. We also demonstrate several motivating examples that illustrate the usefulness of the online $\mathrm{L}^{\natural}$-convex minimization.
[ "['Ken Yokoyama' 'Shinji Ito' 'Tatsuya Matsuoka' 'Kei Kimura'\n 'Makoto Yokoo']" ]
null
null
2404.17164
null
null
http://arxiv.org/pdf/2404.17164v1
2024-04-26T05:26:10Z
2024-04-26T05:26:10Z
DPGAN: A Dual-Path Generative Adversarial Network for Missing Data Imputation in Graphs
Missing data imputation poses a paramount challenge when dealing with graph data. Prior works are typically based on feature propagation or graph autoencoders to address this issue. However, these methods usually encounter the over-smoothing issue when dealing with missing data, as the graph neural network (GNN) modules are not explicitly designed for handling missing data. This paper proposes a novel framework, called Dual-Path Generative Adversarial Network (DPGAN), that can deal simultaneously with missing data and avoid over-smoothing problems. The crux of our work is that it admits both global and local representations of the input graph signal, which can capture long-range dependencies. It is realized via our proposed generator, consisting of two key components, i.e., MLPUNet++ and GraphUNet++. Our generator is trained with a designated discriminator via an adversarial process. In particular, to avoid assessing the entire graph as is done in the literature, our discriminator focuses on the local subgraph fidelity, thereby boosting the quality of the local imputation. The subgraph size is adjustable, allowing for control over the intensity of adversarial regularization. Comprehensive experiments across various benchmark datasets substantiate that DPGAN consistently rivals, if not outperforms, existing state-of-the-art imputation algorithms. The code is provided at \url{https://github.com/momoxia/DPGAN}.
[ "['Xindi Zheng' 'Yuwei Wu' 'Yu Pan' 'Wanyu Lin' 'Lei Ma' 'Jianjun Zhao']" ]
null
null
2404.17169
null
null
http://arxiv.org/pdf/2404.17169v1
2024-04-26T05:48:59Z
2024-04-26T05:48:59Z
FairGT: A Fairness-aware Graph Transformer
The design of Graph Transformers (GTs) generally neglects considerations for fairness, resulting in biased outcomes against certain sensitive subgroups. Since GTs encode graph information without relying on message-passing mechanisms, conventional fairness-aware graph learning methods are not directly applicable to address these issues. To tackle this challenge, we propose FairGT, a Fairness-aware Graph Transformer explicitly crafted to mitigate fairness concerns inherent in GTs. FairGT incorporates a meticulous structural feature selection strategy and a multi-hop node feature integration method, ensuring independence of sensitive features and bolstering fairness considerations. These fairness-aware graph information encodings seamlessly integrate into the Transformer framework for downstream tasks. We also prove that the proposed fair structural topology encoding, with adjacency matrix eigenvector selection and multi-hop integration, is theoretically effective. Empirical evaluations conducted across five real-world datasets demonstrate FairGT's superiority in fairness metrics over existing graph transformers, graph neural networks, and state-of-the-art fairness-aware graph learning approaches.
[ "['Renqiang Luo' 'Huafei Huang' 'Shuo Yu' 'Xiuzhen Zhang' 'Feng Xia']" ]
null
null
2404.17174
null
null
http://arxiv.org/pdf/2404.17174v1
2024-04-26T06:06:37Z
2024-04-26T06:06:37Z
Optimizing Cycle Life Prediction of Lithium-ion Batteries via a Physics-Informed Model
Accurately measuring the cycle lifetime of commercial lithium-ion batteries is crucial for performance and technology development. We introduce a novel hybrid approach combining a physics-based equation with a self-attention model to predict the cycle lifetimes of commercial lithium iron phosphate graphite cells via early-cycle data. After fitting capacity loss curves to this physics-based equation, we then use a self-attention layer to reconstruct entire battery capacity loss curves. Our model exhibits comparable performances to existing models while predicting more information: the entire capacity loss curve instead of cycle life. This provides more robustness and interpretability: our model does not need to be retrained for a different notion of end-of-life and is backed by physical intuition.
[ "['Constantin-Daniel Nicolae' 'Sara Sameer' 'Nathan Sun' 'Karena Yan']" ]
null
null
2404.17177
null
null
http://arxiv.org/pdf/2404.17177v1
2024-04-26T06:19:02Z
2024-04-26T06:19:02Z
RE-RFME: Real-Estate RFME Model for customer segmentation
Marketing is one of the high-cost activities for any online platform. With the increase in the number of customers, it is crucial to understand customers based on their dynamic behaviors to design effective marketing strategies. Customer segmentation is a widely used approach to group customers into different categories and design the marketing strategy targeting each group individually. Therefore, in this paper, we propose an end-to-end pipeline, RE-RFME, for segmenting customers into 4 groups: high value, promising, need attention, and need activation. Concretely, we propose a novel RFME (Recency, Frequency, Monetary and Engagement) model to track behavioral features of customers and segment them into different categories. Finally, we train the K-means clustering algorithm to cluster each user into one of the 4 categories. We show the effectiveness of the proposed approach on real-world Housing.com datasets for both website and mobile application users.
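A minimal sketch of the segmentation step, assuming an RFME feature table has already been built: standardize the four features and run K-means with four clusters. The feature construction below is synthetic and purely illustrative.

```python
# RFME feature table plus K-means into four segments (scikit-learn).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_users = 500
rfme = np.column_stack([
    rng.integers(1, 365, n_users),   # Recency: days since last visit
    rng.poisson(5, n_users),         # Frequency: number of visits
    rng.gamma(2.0, 50.0, n_users),   # Monetary: spend proxy
    rng.random(n_users),             # Engagement: interaction score
])

labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(
    StandardScaler().fit_transform(rfme)
)
# The four clusters map to: high value, promising, need attention, need activation.
```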
[ "['Anurag Kumar Pandey' 'Anil Goyal' 'Nikhil Sikka']" ]
null
null
2404.17186
null
null
http://arxiv.org/pdf/2404.17186v1
2024-04-26T06:40:54Z
2024-04-26T06:40:54Z
MCSDNet: Mesoscale Convective System Detection Network via Multi-scale Spatiotemporal Information
The accurate detection of Mesoscale Convective Systems (MCS) is crucial for meteorological monitoring due to their potential to cause significant destruction through severe weather phenomena such as hail, thunderstorms, and heavy rainfall. However, existing methods for MCS detection mostly target single-frame detection, which considers only static characteristics and ignores the temporal evolution in the life cycle of an MCS. In this paper, we propose a novel encoder-decoder neural network for MCS detection (MCSDNet). MCSDNet has a simple architecture and is easy to extend. Unlike previous models, MCSDNet targets multi-frame detection and leverages multi-scale spatiotemporal information to detect MCS regions in remote sensing imagery (RSI). To the best of our knowledge, it is the first work to utilize multi-scale spatiotemporal information to detect MCS regions. Firstly, we design a multi-scale spatiotemporal information module to extract multi-level semantics from different encoder levels, which enables our model to extract more detailed spatiotemporal features. Secondly, a Spatiotemporal Mix Unit (STMU) is introduced to MCSDNet to capture both intra-frame features and inter-frame correlations; STMU is a scalable module and can be replaced by other spatiotemporal modules, e.g., CNN, RNN, Transformer, or our proposed Dual Spatiotemporal Attention (DSTA). This means that future work on spatiotemporal modules can be easily integrated into our model. Finally, we present MCSRSI, the first publicly available dataset for multi-frame MCS detection based on visible channel images from the FY-4A satellite. We also conduct several experiments on MCSRSI and find that our proposed MCSDNet achieves the best performance on the MCS detection task compared to other baseline methods.
[ "['Jiajun Liang' 'Baoquan Zhang' 'Yunming Ye' 'Xutao Li' 'Chuyao Luo'\n 'Xukai Fu']" ]
null
null
2404.17187
null
null
http://arxiv.org/pdf/2404.17187v1
2024-04-26T06:44:52Z
2024-04-26T06:44:52Z
An Explainable Deep Reinforcement Learning Model for Warfarin Maintenance Dosing Using Policy Distillation and Action Forging
Deep Reinforcement Learning is an effective tool for drug dosing for chronic condition management. However, the final protocol is generally a black box without any justification for its prescribed doses. This paper addresses this issue by proposing an explainable dosing protocol for warfarin using a Proximal Policy Optimization method combined with Policy Distillation. We introduce Action Forging as an effective tool to achieve explainability. Our focus is on the maintenance dosing protocol. Results show that the final model is as easy to understand and deploy as the current dosing protocols and outperforms the baseline dosing algorithms.
[ "['Sadjad Anzabi Zadeh' 'W. Nick Street' 'Barrett W. Thomas']" ]
null
null
2404.17217
null
null
http://arxiv.org/abs/2404.17217v1
2024-04-26T07:46:25Z
2024-04-26T07:46:25Z
Cycling into the workshop: predictive maintenance for Barcelona's bike-sharing system
Bike-sharing systems have emerged as a significant element of urban mobility, providing an environmentally friendly transportation alternative. With the increasing integration of electric bikes alongside mechanical bikes, it is crucial to illuminate distinct usage patterns and their impact on maintenance. Accordingly, this research aims to develop a comprehensive understanding of mobility dynamics, distinguishing between different mobility modes, and introducing a novel predictive maintenance system tailored for bikes. By utilising a combination of trip information and maintenance data from Barcelona's bike-sharing system, Bicing, this study conducts an extensive analysis of mobility patterns and their relationship to failures of bike components. To accurately predict maintenance needs for essential bike parts, this research delves into various mobility metrics and applies statistical and machine learning survival models, including deep learning models. Due to their complexity, and with the objective of bolstering confidence in the system's predictions, interpretability techniques explain the main predictors of maintenance needs. The analysis reveals marked differences in the usage patterns of mechanical bikes and electric bikes, with a growing user preference for the latter despite their extra costs. These differences in mobility were found to have a considerable impact on the maintenance needs within the bike-sharing system. Moreover, the predictive maintenance models proved effective in forecasting these maintenance needs, capable of operating across an entire bike fleet. Despite challenges such as approximated bike usage metrics and data imbalances, the study successfully showcases the feasibility of an accurate predictive maintenance system capable of improving operational costs, bike availability, and security.
[ "['Jordi Grau-Escolano' 'Aleix Bassolas' 'Julian Vicens']" ]
null
null
2404.17249
null
null
http://arxiv.org/pdf/2404.17249v1
2024-04-26T08:41:55Z
2024-04-26T08:41:55Z
Making Better Use of Unlabelled Data in Bayesian Active Learning
Fully supervised models are predominant in Bayesian active learning. We argue that their neglect of the information present in unlabelled data harms not just predictive performance but also decisions about what data to acquire. Our proposed solution is a simple framework for semi-supervised Bayesian active learning. We find it produces better-performing models than either conventional Bayesian active learning or semi-supervised learning with randomly acquired data. It is also easier to scale up than the conventional approach. As well as supporting a shift towards semi-supervised models, our findings highlight the importance of studying models and acquisition methods in conjunction.
[ "['Freddie Bickford Smith' 'Adam Foster' 'Tom Rainforth']" ]
null
null
2404.17252
null
null
http://arxiv.org/pdf/2404.17252v1
2024-04-26T08:47:28Z
2024-04-26T08:47:28Z
Comparison of self-supervised in-domain and supervised out-domain transfer learning for bird species recognition
Transferring the weights of a pre-trained model to assist another task has become a crucial part of modern deep learning, particularly in data-scarce scenarios. Pre-training refers to the initial step of training models outside the current task of interest, typically on another dataset. It can be done via supervised models using human-annotated datasets or self-supervised models trained on unlabeled datasets. In both cases, many pre-trained models are available to fine-tune for the task of interest. Interestingly, research has shown that pre-trained models from ImageNet can be helpful for audio tasks despite being trained on image datasets. Hence, it's unclear whether in-domain models would be advantageous compared to competent out-domain models, such as convolutional neural networks from ImageNet. Our experiments will demonstrate the usefulness of in-domain models and datasets for bird species recognition by leveraging VICReg, a recent and powerful self-supervised method.
[ "['Houtan Ghaffari' 'Paul Devos']" ]
null
null
2404.17275
null
null
http://arxiv.org/pdf/2404.17275v1
2024-04-26T09:29:55Z
2024-04-26T09:29:55Z
Adversarial Reweighting with $α$-Power Maximization for Domain Adaptation
The practical Domain Adaptation (DA) tasks, e.g., Partial DA (PDA), open-set DA, universal DA, and test-time adaptation, have gained increasing attention in the machine learning community. In this paper, we propose a novel approach, dubbed Adversarial Reweighting with $\alpha$-Power Maximization (ARPM), for PDA where the source domain contains private classes absent in the target domain. In ARPM, we propose a novel adversarial reweighting model that adversarially learns to reweight source domain data to identify source-private class samples by assigning smaller weights to them, for mitigating potential negative transfer. Based on the adversarial reweighting, we train the transferable recognition model on the reweighted source distribution to be able to classify common class data. To reduce the prediction uncertainty of the recognition model on the target domain for PDA, we present an $\alpha$-power maximization mechanism in ARPM, which enriches the family of losses for reducing the prediction uncertainty for PDA. Extensive experimental results on five PDA benchmarks, i.e., Office-31, Office-Home, VisDA-2017, ImageNet-Caltech, and DomainNet, show that our method is superior to recent PDA methods. Ablation studies also confirm the effectiveness of components in our approach. To theoretically analyze our method, we deduce an upper bound of target domain expected error for PDA, which is approximately minimized in our approach. We further extend ARPM to open-set DA, universal DA, and test-time adaptation, and verify the usefulness through experiments.
[ "['Xiang Gu' 'Xi Yu' 'Yan Yang' 'Jian Sun' 'Zongben Xu']" ]
null
null
2404.17276
null
null
http://arxiv.org/pdf/2404.17276v1
2024-04-26T09:30:55Z
2024-04-26T09:30:55Z
Efficient Deterministic Renewable Energy Forecasting Guided by Multiple-Location Weather Data
Electricity generated from renewable energy sources has been established as an efficient remedy for both energy shortages and the environmental pollution stemming from conventional energy production methods. Solar and wind power are two of the most dominant renewable energy sources. Accurate forecasting of the energy generation of these sources facilitates their integration into electric grids by minimizing the negative impact of uncertainty on their management and operation. This paper proposes a novel methodology for deterministic wind and solar energy generation forecasting for multiple generation sites, utilizing multi-location weather forecasts. The method employs a U-shaped Temporal Convolutional Auto-Encoder (UTCAE) architecture for temporal processing of weather-related and energy-related time-series across each site. The Multi-sized Kernels convolutional Spatio-Temporal Attention (MKST-Attention), inspired by the multi-head scaled dot-product attention mechanism, is also proposed, aiming to efficiently transfer temporal patterns from weather data to energy data without a priori knowledge of the locations of the power stations and the locations of the provided weather data. The experimental evaluation, conducted on a day-ahead solar and wind energy forecasting scenario over five datasets, demonstrated that the proposed method achieves top results, outperforming all competitive state-of-the-art time-series forecasting methods.
[ "['Charalampos Symeonidis' 'Nikos Nikolaidis']" ]
null
null
2404.17284
null
null
http://arxiv.org/pdf/2404.17284v1
2024-04-26T09:40:23Z
2024-04-26T09:40:23Z
Machine Learning based prediction of Vanadium Redox Flow Battery temperature rise under different charge-discharge conditions
Accurate prediction of battery temperature rise is essential for designing an efficient thermal management scheme. In this paper, machine learning (ML) based prediction of Vanadium Redox Flow Battery (VRFB) thermal behavior during charge-discharge operation has been demonstrated for the first time. Considering different currents with a specified electrolyte flow rate, the temperature of a kW-scale VRFB system is studied through experiments. Three different ML algorithms, Linear Regression (LR), Support Vector Regression (SVR) and Extreme Gradient Boost (XGBoost), have been used for the prediction work. The training and validation of the ML algorithms have been done on a practical dataset of a 1 kW 6 kWh VRFB storage system under 40 A, 45 A, 50 A and 60 A charge-discharge currents and a flow rate of 10 L min$^{-1}$. A comparative analysis among the ML algorithms is done in terms of performance metrics such as the coefficient of determination (R$^2$), mean absolute error (MAE) and root mean square error (RMSE). It is observed that XGBoost shows the highest prediction accuracy, around 99%. The ML-based prediction results obtained in this work can be very useful for controlling the VRFB temperature rise during operation and act as an indicator for further development of an optimized thermal management system.
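A minimal sketch of the model comparison described above, using scikit-learn and the xgboost package (an assumed dependency); feature construction from the current and flow-rate measurements is left abstract.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error
from xgboost import XGBRegressor  # assumes the xgboost package is installed

def compare_models(X_train, y_train, X_val, y_val):
    """Fit LR, SVR and XGBoost regressors and report R^2, MAE and RMSE,
    mirroring the comparison carried out in the abstract."""
    models = {
        "LR": LinearRegression(),
        "SVR": SVR(kernel="rbf"),
        "XGBoost": XGBRegressor(n_estimators=300, learning_rate=0.05),
    }
    for name, model in models.items():
        model.fit(X_train, y_train)
        pred = model.predict(X_val)
        print(f"{name}: R2={r2_score(y_val, pred):.4f} "
              f"MAE={mean_absolute_error(y_val, pred):.4f} "
              f"RMSE={np.sqrt(mean_squared_error(y_val, pred)):.4f}")
```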
[ "['Anirudh Narayan D' 'Akshat Johar' 'Divye Kalra' 'Bhavya Ardeshna'\n 'Ankur Bhattacharjee']" ]
null
null
2404.17293
null
null
http://arxiv.org/abs/2404.17293v2
2024-06-19T00:52:16Z
2024-04-26T09:51:24Z
Lazy Data Practices Harm Fairness Research
Data practices shape research and practice on fairness in machine learning (fair ML). Critical data studies offer important reflections and critiques for the responsible advancement of the field by highlighting shortcomings and proposing recommendations for improvement. In this work, we present a comprehensive analysis of fair ML datasets, demonstrating how unreflective yet common practices hinder the reach and reliability of algorithmic fairness findings. We systematically study protected information encoded in tabular datasets and their usage in 280 experiments across 142 publications. Our analyses identify three main areas of concern: (1) a \textbf{lack of representation for certain protected attributes} in both data and evaluations; (2) the widespread \textbf{exclusion of minorities} during data preprocessing; and (3) \textbf{opaque data processing} threatening the generalization of fairness research. By conducting exemplary analyses on the utilization of prominent datasets, we demonstrate how unreflective data decisions disproportionately affect minority groups, fairness metrics, and resultant model comparisons. Additionally, we identify supplementary factors such as limitations in publicly available data, privacy considerations, and a general lack of awareness, which exacerbate these challenges. To address these issues, we propose a set of recommendations for data usage in fairness research centered on transparency and responsible inclusion. This study underscores the need for a critical reevaluation of data practices in fair ML and offers directions to improve both the sourcing and usage of datasets.
[ "['Jan Simson' 'Alessandro Fabris' 'Christoph Kern']" ]
null
null
2404.17323
null
null
http://arxiv.org/pdf/2404.17323v1
2024-04-26T11:07:09Z
2024-04-26T11:07:09Z
A Deep Dive into Effects of Structural Bias on CMA-ES Performance along Affine Trajectories
To guide the design of better iterative optimisation heuristics, it is imperative to understand how inherent structural biases within algorithm components affect the performance on a wide variety of search landscapes. This study explores the impact of structural bias in the modular Covariance Matrix Adaptation Evolution Strategy (modCMA), focusing on the roles of the various modules within the algorithm. Through an extensive investigation involving 435,456 configurations of modCMA, we identified key modules that significantly influence structural bias of various classes. Our analysis utilized the Deep-BIAS toolbox for structural bias detection and classification, complemented by SHAP analysis for quantifying module contributions. The performance of these configurations was tested on a sequence of affine-recombined functions, maintaining fixed optimum locations while gradually varying the landscape features. Our results demonstrate an interplay between module-induced structural bias and algorithm performance across different landscape characteristics.
[ "['Niki van Stein' 'Sarah L. Thomson' 'Anna V. Kononova']" ]
null
null
2404.17344
null
null
http://arxiv.org/pdf/2404.17344v1
2024-04-26T11:50:16Z
2024-04-26T11:50:16Z
Fast Evaluation of Additive Kernels: Feature Arrangement, Fourier Methods, and Kernel Derivatives
One of the main computational bottlenecks when working with kernel-based learning is dealing with the large and typically dense kernel matrix. Techniques for fast approximations of the matrix-vector product for these kernel matrices typically deteriorate in performance if the feature vectors reside in higher-dimensional feature spaces. Here we present a technique based on the non-equispaced fast Fourier transform (NFFT) with rigorous error analysis. We show that this approach is also well suited to approximating the matrix that arises when the kernel is differentiated with respect to the kernel hyperparameters, a problem often found in the training phase of methods such as Gaussian processes. We also provide an error analysis for this case. We illustrate the performance of the additive kernel scheme with fast matrix-vector products on a number of data sets. Our code is available at https://github.com/wagnertheresa/NFFTAddKer
[ "['Theresa Wagner' 'Franziska Nestler' 'Martin Stoll']" ]
null
null
2404.17350
null
null
http://arxiv.org/pdf/2404.17350v1
2024-04-26T11:57:17Z
2024-04-26T11:57:17Z
On the Road to Clarity: Exploring Explainable AI for World Models in a Driver Assistance System
In Autonomous Driving (AD), transparency and safety are paramount, as mistakes are costly. However, neural networks used in AD systems are generally considered black boxes. As a countermeasure, methods of explainable AI (XAI) are available, such as feature relevance estimation and dimensionality reduction. Coarse-graining techniques can also help reduce dimensionality and find interpretable global patterns. A specific coarse-graining method is Renormalization Groups from statistical physics, which has previously been applied to Restricted Boltzmann Machines (RBMs) to interpret unsupervised learning. We refine this technique by building a transparent backbone model for convolutional variational autoencoders (VAEs) that allows mapping latent values to input features and has performance comparable to trained black-box VAEs. Moreover, we propose a custom feature map visualization technique to analyze the internal convolutional layers in the VAE to explain internal causes of poor reconstruction that may lead to dangerous traffic scenarios in AD applications. In a second key contribution, we propose explanation and evaluation techniques for the internal dynamics and feature relevance of prediction networks. We test a long short-term memory (LSTM) network in the computer vision domain to evaluate the predictability, and in future applications potentially the safety, of prediction models. We showcase our methods by analyzing a VAE-LSTM world model that predicts pedestrian perception in an urban traffic situation.
[ "['Mohamed Roshdi' 'Julian Petzold' 'Mostafa Wahby' 'Hussein Ebrahim'\n 'Mladen Berekovic' 'Heiko Hamann']" ]
null
null
2404.17358
null
null
http://arxiv.org/pdf/2404.17358v2
2024-05-15T15:43:18Z
2024-04-26T12:16:08Z
Adversarial Consistency and the Uniqueness of the Adversarial Bayes Classifier
Adversarial training is a common technique for learning robust classifiers. Prior work showed that convex surrogate losses are not statistically consistent in the adversarial context -- or in other words, a minimizing sequence of the adversarial surrogate risk will not necessarily minimize the adversarial classification error. We connect the consistency of adversarial surrogate losses to properties of minimizers to the adversarial classification risk, known as \emph{adversarial Bayes classifiers}. Specifically, under reasonable distributional assumptions, a convex loss is statistically consistent for adversarial learning iff the adversarial Bayes classifier satisfies a certain notion of uniqueness.
[ "['Natalie S. Frank']" ]
null
null
2404.17365
null
null
http://arxiv.org/pdf/2404.17365v1
2024-04-26T12:30:32Z
2024-04-26T12:30:32Z
Similarity Equivariant Graph Neural Networks for Homogenization of Metamaterials
Soft, porous mechanical metamaterials exhibit pattern transformations that may have important applications in soft robotics, sound reduction and biomedicine. To design these innovative materials, it is important to be able to simulate them accurately and quickly, in order to tune their mechanical properties. Since conventional simulations using the finite element method entail a high computational cost, in this article we aim to develop a machine learning-based approach that scales favorably to serve as a surrogate model. To ensure that the model is also able to handle various microstructures, including those not encountered during training, we include the microstructure as part of the network input. Therefore, we introduce a graph neural network that predicts global quantities (energy, stress, stiffness) as well as the pattern transformations that occur (the kinematics). To make our model as accurate and data-efficient as possible, various symmetries are incorporated into the model. The starting point is an E(n)-equivariant graph neural network (which respects translation, rotation and reflection) that has periodic boundary conditions (i.e., it is in-/equivariant with respect to the choice of RVE), is scale in-/equivariant, can simulate large deformations, and can predict scalars, vectors as well as second and fourth order tensors (specifically energy, stress and stiffness). The incorporation of scale equivariance makes the model equivariant with respect to the similarities group, of which the Euclidean group E(n) is a subgroup. We show that this network is more accurate and data-efficient than graph neural networks with fewer symmetries. To create an efficient graph representation of the finite element discretization, we use only the internal geometrical hole boundaries from the finite element mesh to achieve a better speed-up and scaling with the mesh size.
[ "['Fleur Hendriks' 'Vlado Menkovski' 'Martin Doškář' 'Marc G. D. Geers'\n 'Ondřej Rokoš']" ]
null
null
2404.17371
null
null
http://arxiv.org/pdf/2404.17371v1
2024-04-26T12:43:19Z
2024-04-26T12:43:19Z
Estimating the Robustness Radius for Randomized Smoothing with 100$\times$ Sample Efficiency
Randomized smoothing (RS) has successfully been used to improve the robustness of predictions for deep neural networks (DNNs) by adding random noise to create multiple variations of an input, followed by deciding the consensus. To understand if an RS-enabled DNN is effective in the sampled input domains, it is mandatory to sample data points within the operational design domain, acquire the point-wise certificate regarding robustness radius, and compare it with pre-defined acceptance criteria. Consequently, ensuring that a point-wise robustness certificate for any given data point is obtained relatively cost-effectively is crucial. This work demonstrates that reducing the number of samples by one or two orders of magnitude can still enable the computation of a slightly smaller robustness radius (commonly ~20% radius reduction) with the same confidence. We provide the mathematical foundation for explaining the phenomenon while experimentally showing promising results on the standard CIFAR-10 and ImageNet datasets.
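For context, a standard point-wise certificate in randomized smoothing (in the style of Cohen et al.) combines a binomial lower confidence bound with the Gaussian quantile function; the sketch below shows why fewer samples shrink the certifiable radius. It is a generic illustration, not the paper's estimator.

```python
from scipy.stats import beta, norm

def certified_radius(counts_top, n_samples, sigma, alpha=0.001):
    """Point-wise robustness radius for a Gaussian-smoothed classifier:
    R = sigma * Phi^{-1}(p_lower), where p_lower is a (1 - alpha) lower
    confidence bound on the top-class probability from n_samples noisy queries.
    """
    if counts_top == 0:
        return 0.0
    # Clopper-Pearson lower confidence bound on the top-class probability.
    p_lower = beta.ppf(alpha, counts_top, n_samples - counts_top + 1)
    if p_lower <= 0.5:
        return 0.0  # abstain: no radius can be certified
    return sigma * norm.ppf(p_lower)

# Fewer samples -> looser p_lower -> smaller certified radius, which is the
# trade-off the abstract quantifies (~20% radius reduction for far fewer samples).
```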
[ "['Emmanouil Seferis' 'Stefanos Kollias' 'Chih-Hong Cheng']" ]
null
null
2404.17391
null
null
http://arxiv.org/abs/2404.17391v1
2024-04-26T13:09:35Z
2024-04-26T13:09:35Z
M3BAT: Unsupervised Domain Adaptation for Multimodal Mobile Sensing with Multi-Branch Adversarial Training
Over the years, multimodal mobile sensing has been used extensively for inferences regarding health and well-being, behavior, and context. However, a significant challenge hindering the widespread deployment of such models in real-world scenarios is the issue of distribution shift: the phenomenon where the distribution of data in the training set differs from the distribution of data in the deployment environment. While extensively explored in computer vision and natural language processing, and while prior research in mobile sensing briefly addresses this concern, current work primarily focuses on models dealing with a single modality of data, such as audio or accelerometer readings; consequently, there is little research on unsupervised domain adaptation for multimodal sensor data. To address this gap, we conducted extensive experiments with domain adversarial neural networks (DANN), showing that they can effectively handle distribution shifts in multimodal sensor data. Moreover, we propose M3BAT, a novel improvement over DANN: unsupervised domain adaptation for multimodal mobile sensing with multi-branch adversarial training, which accounts for the multimodality of sensor data during domain adaptation with multiple branches. Through extensive experiments conducted on two multimodal mobile sensing datasets, three inference tasks, and 14 source-target domain pairs, including both regression and classification, we demonstrate that our approach performs effectively on unseen domains. Compared to directly deploying a model trained in the source domain to the target domain, our model shows performance increases of up to 12% AUC (area under the receiver operating characteristic curve) on classification tasks and up to 0.13 MAE (mean absolute error) on regression tasks.
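The core DANN mechanism the experiments build on is the gradient reversal layer; a minimal PyTorch sketch follows. The multi-branch wiring that M3BAT adds per modality is only indicated in a comment, as the paper's exact architecture is not reproduced here.

```python
import torch
from torch.autograd import Function

class GradReverse(Function):
    """Gradient reversal layer, the core of DANN: the forward pass is the
    identity, the backward pass multiplies gradients by -lam, so the feature
    extractor learns domain-invariant features while the domain classifier
    tries to discriminate source from target."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)

# In a multi-branch variant (as M3BAT proposes), each sensing modality would
# pass through its own branch before the reversed domain classifier; that
# wiring is omitted here and left as an assumption.
```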
[ "['Lakmal Meegahapola' 'Hamza Hassoune' 'Daniel Gatica-Perez']" ]
null
null
2404.17398
null
null
http://arxiv.org/pdf/2404.17398v1
2024-04-26T13:19:27Z
2024-04-26T13:19:27Z
Online Policy Learning and Inference by Matrix Completion
Making online decisions can be challenging when features are sparse and orthogonal to historical ones, especially when the optimal policy is learned through collaborative filtering. We formulate the problem as a matrix completion bandit (MCB), where the expected reward under each arm is characterized by an unknown low-rank matrix. The $\epsilon$-greedy bandit and the online gradient descent algorithm are explored. Policy learning and regret performance are studied under a specific schedule for exploration probabilities and step sizes. A faster decaying exploration probability yields smaller regret but learns the optimal policy less accurately. We investigate an online debiasing method based on inverse propensity weighting (IPW) and a general framework for online policy inference. The IPW-based estimators are asymptotically normal under mild arm-optimality conditions. Numerical simulations corroborate our theoretical findings. Our methods are applied to the San Francisco parking pricing project data, revealing intriguing discoveries and outperforming the benchmark policy.
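A hedged sketch of the two ingredients named in the abstract: an $\epsilon$-greedy arm choice with a decaying exploration schedule, and an IPW-debiased reward estimate. The low-rank matrix estimation step is abstracted away into `estimated_rewards`.

```python
import numpy as np

def epsilon_greedy_step(estimated_rewards, t, n_arms, rng, eps0=1.0, decay=0.5):
    """One step of an epsilon-greedy bandit with schedule eps_t = eps0 * t^-decay.
    A faster decay lowers regret but makes the IPW-debiased estimates noisier,
    as the abstract notes. Returns the chosen arm and its propensity."""
    eps = eps0 * (t + 1) ** (-decay)
    greedy = int(np.argmax(estimated_rewards))
    probs = np.full(n_arms, eps / n_arms)
    probs[greedy] += 1.0 - eps
    arm = rng.choice(n_arms, p=probs)
    return arm, probs[arm]

def ipw_estimate(rewards, arms, propensities, arm, n_rounds):
    """Inverse propensity weighted estimate of the expected reward of `arm`,
    debiasing for the fact that arms were chosen adaptively."""
    mask = (arms == arm)
    return np.sum(rewards[mask] / propensities[mask]) / n_rounds
```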
[ "['Congyuan Duan' 'Jingyang Li' 'Dong Xia']" ]
null
null
2404.17399
null
null
http://arxiv.org/pdf/2404.17399v1
2024-04-26T13:21:30Z
2024-04-26T13:21:30Z
Evaluations of Machine Learning Privacy Defenses are Misleading
Empirical defenses for machine learning privacy forgo the provable guarantees of differential privacy in the hope of achieving higher utility while resisting realistic adversaries. We identify severe pitfalls in existing empirical privacy evaluations (based on membership inference attacks) that result in misleading conclusions. In particular, we show that prior evaluations fail to characterize the privacy leakage of the most vulnerable samples, use weak attacks, and avoid comparisons with practical differential privacy baselines. In 5 case studies of empirical privacy defenses, we find that prior evaluations underestimate privacy leakage by an order of magnitude. Under our stronger evaluation, none of the empirical defenses we study are competitive with a properly tuned, high-utility DP-SGD baseline (with vacuous provable guarantees).
[ "['Michael Aerni' 'Jie Zhang' 'Florian Tramèr']" ]
null
null
2404.17426
null
null
http://arxiv.org/pdf/2404.17426v1
2024-04-26T14:03:23Z
2024-04-26T14:03:23Z
One-Shot Image Restoration
Image restoration, or inverse problems in image processing, has long been an extensively studied topic. In recent years, supervised learning approaches have become a popular strategy for tackling this task. Unfortunately, most supervised learning-based methods are highly demanding in terms of computational resources and training data (sample complexity). In addition, trained models are sensitive to domain changes, such as varying acquisition systems, signal sampling rates, resolution and contrast. In this work, we try to answer a fundamental question: can supervised learning models generalize well solely by learning from one image, or even part of an image? If so, what is the minimal number of patches required to achieve acceptable generalization? To this end, we focus on an efficient patch-based learning framework that requires a single image input-output pair for training. Experimental results demonstrate the applicability, robustness and computational efficiency of the proposed approach for supervised image deblurring and super-resolution. Our results showcase significant improvements in learning models' sample efficiency, generalization and time complexity, which can hopefully be leveraged for future real-time applications and applied to other signals and modalities.
[ "['Deborah Pereg']" ]
null
null
2404.17429
null
null
http://arxiv.org/pdf/2404.17429v2
2024-05-01T15:53:49Z
2024-04-26T14:10:55Z
Separation capacity of linear reservoirs with random connectivity matrix
We argue that the success of reservoir computing lies within the separation capacity of the reservoirs and show that the expected separation capacity of random linear reservoirs is fully characterised by the spectral decomposition of an associated generalised matrix of moments. Of particular interest are reservoirs with Gaussian matrices that are either symmetric or whose entries are all independent. In the symmetric case, we prove that the separation capacity always deteriorates with time; while for short inputs, separation with large reservoirs is best achieved when the entries of the matrix are scaled with a factor $\rho_T/\sqrt{N}$, where $N$ is the dimension of the reservoir and $\rho_T$ depends on the maximum length of the input time series. In the i.i.d. case, we establish that optimal separation with large reservoirs is consistently achieved when the entries of the reservoir matrix are scaled with the exact factor $1/\sqrt{N}$. We further give upper bounds on the quality of separation as a function of the length of the time series. We complement this analysis with an investigation of the likelihood of this separation and the impact of the chosen architecture on separation consistency.
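To make the scaling claims concrete, here is a small numpy sketch of a driven linear reservoir with the two matrix ensembles discussed; `rho_T = 1.0` is a placeholder for the input-length-dependent factor in the symmetric case.

```python
import numpy as np

def linear_reservoir_state(u, n_reservoir, rng, iid=True):
    """Final state of a linear reservoir x_{t+1} = A x_t + b u_t driven by a
    scalar input series u. In the i.i.d. Gaussian case the abstract reports
    that the optimal entry scaling is exactly 1/sqrt(N); in the symmetric case
    the factor rho_T depends on the maximum input length (placeholder below).
    """
    N, T = n_reservoir, len(u)
    if iid:
        A = rng.standard_normal((N, N)) / np.sqrt(N)
    else:
        G = rng.standard_normal((N, N))
        rho_T = 1.0  # assumption: would be tuned per maximum input length
        A = rho_T * (G + G.T) / (np.sqrt(2) * np.sqrt(N))
    b = rng.standard_normal(N)
    x = np.zeros(N)
    for t in range(T):
        x = A @ x + b * u[t]
    return x  # two inputs are "separated" if their final states differ

rng = np.random.default_rng(0)
x1 = linear_reservoir_state(np.sin(np.linspace(0, 3, 50)), 200, rng)
```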
[ "['Youness Boutaib']" ]
null
null
2404.17442
null
null
http://arxiv.org/pdf/2404.17442v1
2024-04-26T14:28:18Z
2024-04-26T14:28:18Z
Uniform Generalization Bounds on Data-Dependent Hypothesis Sets via PAC-Bayesian Theory on Random Sets
We propose data-dependent uniform generalization bounds by approaching the problem from a PAC-Bayesian perspective. We first apply the PAC-Bayesian framework on `random sets' in a rigorous way, where the training algorithm is assumed to output a data-dependent hypothesis set after observing the training data. This approach allows us to prove data-dependent bounds, which can be applicable in numerous contexts. To highlight the power of our approach, we consider two main applications. First, we propose a PAC-Bayesian formulation of the recently developed fractal-dimension-based generalization bounds. The derived results are shown to be tighter and they unify the existing results around one simple proof technique. Second, we prove uniform bounds over the trajectories of continuous Langevin dynamics and stochastic gradient Langevin dynamics. These results provide novel information about the generalization properties of noisy algorithms.
[ "['Benjamin Dupuis' 'Paul Viallard' 'George Deligiannidis' 'Umut Simsekli']" ]
null
null
2404.17451
null
null
http://arxiv.org/pdf/2404.17451v1
2024-04-26T14:43:19Z
2024-04-26T14:43:19Z
Any-Quantile Probabilistic Forecasting of Short-Term Electricity Demand
Power systems operate under uncertainty originating from multiple factors that are impossible to account for deterministically. Distributional forecasting is used to control and mitigate risks associated with this uncertainty. Recent progress in deep learning has helped to significantly improve the accuracy of point forecasts, while accurate distributional forecasting still presents a significant challenge. In this paper, we propose a novel general approach for distributional forecasting capable of predicting arbitrary quantiles. We show that our general approach can be seamlessly applied to two distinct neural architectures leading to the state-of-the-art distributional forecasting results in the context of short-term electricity demand forecasting task. We empirically validate our method on 35 hourly electricity demand time-series for European countries. Our code is available here: https://github.com/boreshkinai/any-quantile.
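The generic mechanism behind any-quantile forecasting is to feed the quantile level to the network and train with the pinball loss, resampling the level each batch; a minimal PyTorch sketch follows (the paper's specific architectures are not reproduced here).

```python
import torch

def pinball_loss(y, y_hat, q):
    """Quantile (pinball) loss. Sampling q in (0,1) anew every batch and
    passing it to the model as an extra input lets a single network learn to
    output *any* quantile at inference time."""
    diff = y - y_hat
    return torch.mean(torch.maximum(q * diff, (q - 1) * diff))

# Training sketch (model and data are assumed):
#   q = torch.rand(batch_size, 1)        # random quantile levels
#   y_hat = model(x, q)                  # quantile level fed as input
#   loss = pinball_loss(y, y_hat, q)
```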
[ "['Slawek Smyl' 'Boris N. Oreshkin' 'Paweł Pełka' 'Grzegorz Dudek']" ]
null
null
2404.17452
null
null
http://arxiv.org/pdf/2404.17452v1
2024-04-26T14:47:40Z
2024-04-26T14:47:40Z
A Continuous Relaxation for Discrete Bayesian Optimization
Optimizing efficiently over discrete data with only a few available target observations is a challenge in Bayesian optimization. We propose a continuous relaxation of the objective function and show that inference and optimization can be computationally tractable. We consider in particular the optimization domain where very few observations and strict budgets exist, motivated by optimizing protein sequences for expensive-to-evaluate biochemical properties. The advantages of our approach are two-fold: the problem is treated in the continuous setting, and available prior knowledge over sequences can be incorporated directly. More specifically, we utilize available and learned distributions over the problem domain for a weighting of the Hellinger distance, which yields a covariance function. We show that the resulting acquisition function can be optimized with either continuous or discrete optimization algorithms, and we empirically assess our method on two biochemical sequence optimization tasks.
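A small sketch of the distance-to-kernel construction mentioned above: the Hellinger distance between (available or learned) distributions over the domain, turned into a covariance function by exponentiation. The exponentiated form and the lengthscale are assumptions for illustration, not the paper's exact weighting.

```python
import numpy as np

def hellinger_distance(p, q):
    """Hellinger distance between two discrete distributions, e.g.
    per-position amino-acid probabilities of two protein sequences."""
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

def hellinger_kernel(p, q, lengthscale=1.0):
    """A covariance function built from the squared Hellinger distance.
    Exponentiating a (Hilbertian) distance is one standard way to obtain a
    positive-definite kernel; this specific form is an assumption here."""
    return np.exp(-hellinger_distance(p, q) ** 2 / lengthscale ** 2)
```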
[ "['Richard Michael' 'Simon Bartels' 'Miguel González-Duque'\n 'Yevgen Zainchkovskyy' 'Jes Frellsen' 'Søren Hauberg' 'Wouter Boomsma']" ]
null
null
2404.17454
null
null
http://arxiv.org/pdf/2404.17454v2
2024-04-29T16:14:26Z
2024-04-26T14:48:24Z
Domain Adaptive and Fine-grained Anomaly Detection for Single-cell Sequencing Data and Beyond
Fine-grained anomalous cell detection from affected tissues is critical for clinical diagnosis and pathological research. Single-cell sequencing data provide unprecedented opportunities for this task. However, current anomaly detection methods struggle to handle domain shifts prevalent in multi-sample and multi-domain single-cell sequencing data, leading to suboptimal performance. Moreover, these methods fall short of distinguishing anomalous cells into pathologically distinct subtypes. In response, we propose ACSleuth, a novel, reconstruction deviation-guided generative framework that integrates the detection, domain adaptation, and fine-grained annotating of anomalous cells into a methodologically cohesive workflow. Notably, we present the first theoretical analysis of using reconstruction deviations output by generative models for anomaly detection in the presence of domain shifts. This analysis informs the development of a novel and superior maximum mean discrepancy-based anomaly scorer in ACSleuth. Extensive benchmarks over various single-cell data and other types of tabular data demonstrate ACSleuth's superiority over state-of-the-art methods in identifying and subtyping anomalies in multi-sample and multi-domain contexts. Our code is available at https://github.com/Catchxu/ACsleuth.
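For intuition, a generic (biased, V-statistic) estimator of the squared maximum mean discrepancy with an RBF kernel is sketched below; ACSleuth's scorer is derived from reconstruction deviations and differs in detail.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """RBF kernel matrix between rows of X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd_score(reference, candidates, gamma=1.0):
    """Squared MMD between a reference (normal) sample and a candidate sample;
    larger values indicate the candidates deviate more from the reference.
    This is only the generic estimator, not the paper's derived scorer."""
    kxx = rbf_kernel(reference, reference, gamma).mean()
    kyy = rbf_kernel(candidates, candidates, gamma).mean()
    kxy = rbf_kernel(reference, candidates, gamma).mean()
    return kxx + kyy - 2 * kxy
```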
[ "['Kaichen Xu' 'Yueyang Ding' 'Suyang Hou' 'Weiqiang Zhan' 'Nisang Chen'\n 'Jun Wang' 'Xiaobo Sun']" ]
null
null
2404.17461
null
null
http://arxiv.org/pdf/2404.17461v1
2024-04-26T14:57:56Z
2024-04-26T14:57:56Z
Multi-layer random features and the approximation power of neural networks
A neural architecture with randomly initialized weights, in the infinite-width limit, is equivalent to a Gaussian random field whose covariance function is the so-called Neural Network Gaussian Process kernel (NNGP). We prove that a reproducing kernel Hilbert space (RKHS) defined by the NNGP contains only functions that can be approximated by the architecture. To achieve a certain approximation error, the required number of neurons in each layer is defined by the RKHS norm of the target function. Moreover, the approximation can be constructed from a supervised dataset by a random multi-layer representation of an input vector, together with training of the last layer's weights. For a 2-layer NN and a domain equal to an $(n-1)$-dimensional sphere in $\mathbb{R}^n$, we compare the number of neurons required by Barron's theorem and by the multi-layer features construction. We show that if the eigenvalues of the integral operator of the NNGP decay slower than $k^{-n-\frac{2}{3}}$, where $k$ is the order of an eigenvalue, then our theorem guarantees a more succinct neural network approximation than Barron's theorem. We also perform computational experiments to verify our theoretical findings. Our experiments show that realistic neural networks easily learn target functions even when both theorems do not give any guarantees.
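As a concrete instance of the NNGP covariance, the sketch below implements the kernel of an infinitely wide 2-layer network with ReLU activations (the degree-1 arc-cosine kernel, up to the weight-variance convention); the ReLU choice is an assumption for illustration, not a restriction of the paper.

```python
import numpy as np

def nngp_relu_kernel(x, y):
    """NNGP covariance of a 2-layer infinitely wide network with ReLU
    activations and unit weight variance:
    K(x, y) = ||x|| ||y|| / (2 pi) * (sin t + (pi - t) cos t),
    where t is the angle between x and y."""
    nx, ny = np.linalg.norm(x), np.linalg.norm(y)
    cos_t = np.clip(x @ y / (nx * ny), -1.0, 1.0)
    theta = np.arccos(cos_t)
    return (nx * ny / (2 * np.pi)) * (np.sin(theta) + (np.pi - theta) * np.cos(theta))
```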
[ "['Rustem Takhanov']" ]
null
null
2404.17465
null
null
http://arxiv.org/pdf/2404.17465v4
2024-05-08T17:05:10Z
2024-04-26T15:08:27Z
Fast Abstracts and Student Forum Proceedings -- EDCC 2024 -- 19th European Dependable Computing Conference
The goal of the Fast Abstracts track is to bring together researchers and practitioners working on dependable computing to discuss work in progress or opinion pieces. Contributions are welcome from academia and industry. Fast Abstracts aim to serve as a rapid and flexible mechanism to: (i) Report on current work that may or may not be complete; (ii) Introduce new ideas to the community; (iii) State positions on controversial issues or open problems; (iv) Share lessons learnt from real-world dependability engineering; and (v) Debunk or question results from other papers based on contra-indications. The Student Forum aims at creating a vibrant and friendly environment where students can present and discuss their work, and exchange ideas and experiences with other students, researchers and industry. One of the key goals of the Forum is to provide students with feedback on their preliminary results that might help with their future research directions.
[ "['Simona Bernardi' 'Tommaso Zoppi']" ]
null
null
2404.17466
null
null
http://arxiv.org/pdf/2404.17466v1
2024-04-26T15:08:57Z
2024-04-26T15:08:57Z
FTL: Transfer Learning Nonlinear Plasma Dynamic Transitions in Low Dimensional Embeddings via Deep Neural Networks
Deep learning algorithms provide a new paradigm to study high-dimensional dynamical behaviors, such as those in fusion plasma systems. Development of novel model reduction methods, coupled with detection of abnormal modes in plasma physics, opens a unique opportunity for building efficient models to identify plasma instabilities for real-time control. Our Fusion Transfer Learning (FTL) model demonstrates success in reconstructing nonlinear kink mode structures by learning from a limited amount of nonlinear simulation data. The knowledge transfer process leverages a pre-trained neural encoder-decoder network, initially trained on linear simulations, to effectively capture nonlinear dynamics. The low-dimensional embeddings extract the coherent structures of interest, while preserving the inherent dynamics of the complex system. Experimental results highlight FTL's capacity to capture transitional behaviors and dynamical features in plasma dynamics -- a task often challenging for conventional methods. The model developed in this study is generalizable and can be extended broadly through transfer learning to address various magnetohydrodynamics (MHD) modes.
[ "['Zhe Bai' 'Xishuo Wei' 'William Tang' 'Leonid Oliker' 'Zhihong Lin'\n 'Samuel Williams']" ]
null
null
2404.17483
null
null
http://arxiv.org/pdf/2404.17483v5
2024-06-01T04:36:10Z
2024-04-26T15:34:04Z
Differentiable Pareto-Smoothed Weighting for High-Dimensional Heterogeneous Treatment Effect Estimation
There is a growing interest in estimating heterogeneous treatment effects across individuals using their high-dimensional feature attributes. Achieving high performance in such high-dimensional heterogeneous treatment effect estimation is challenging because in this setup, it is usual that some features induce sample selection bias while others do not but are predictive of potential outcomes. To avoid losing such predictive feature information, existing methods learn separate feature representations using inverse probability weighting (IPW). However, due to their numerically unstable IPW weights, these methods suffer from estimation bias under a finite sample setup. To develop a numerically robust estimator by weighted representation learning, we propose a differentiable Pareto-smoothed weighting framework that replaces extreme weight values in an end-to-end fashion. Our experimental results show that by effectively correcting the weight values, our proposed method outperforms the existing ones, including traditional weighting schemes. Our code is available at https://github.com/ychika/DPSW.
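To convey the idea of replacing extreme IPW weights in a differentiable, end-to-end way, here is a deliberately simple stand-in: a soft logarithmic truncation of weights above a threshold. The paper's actual mechanism is a differentiable form of Pareto smoothing, which this sketch does not implement.

```python
import torch

def soft_clip_weights(w, threshold=10.0):
    """Differentiable soft truncation of extreme importance weights: weights
    below the threshold pass through unchanged, larger ones grow only
    logarithmically. A simple stand-in for taming extreme IPW weights
    end-to-end; NOT the paper's differentiable Pareto smoothing."""
    excess = torch.clamp(w - threshold, min=0.0)
    return torch.clamp(w, max=threshold) + torch.log1p(excess)
```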
[ "['Yoichi Chikahara' 'Kansei Ushiyama']" ]
null
null
2404.17487
null
null
http://arxiv.org/pdf/2404.17487v1
2024-04-26T15:43:06Z
2024-04-26T15:43:06Z
Conformal Prediction with Learned Features
In this paper, we focus on the problem of conformal prediction with conditional guarantees. Prior work has shown that it is impossible to construct nontrivial prediction sets with full conditional coverage guarantees. A wealth of research has considered relaxations of full conditional guarantees, relying on some predefined uncertainty structures. Departing from this line of thinking, we propose Partition Learning Conformal Prediction (PLCP), a framework to improve conditional validity of prediction sets through learning uncertainty-guided features from the calibration data. We implement PLCP efficiently with alternating gradient descent, utilizing off-the-shelf machine learning models. We further analyze PLCP theoretically and provide conditional guarantees for infinite and finite sample sizes. Finally, our experimental results over four real-world and synthetic datasets show the superior performance of PLCP compared to state-of-the-art methods in terms of coverage and length in both classification and regression scenarios.
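For reference, the marginal split-conformal baseline that PLCP refines takes only a few lines; PLCP instead learns a partition of the feature space from calibration data and calibrates per learned group, which is not shown here.

```python
import numpy as np

def split_conformal_quantile(scores_cal, alpha=0.1):
    """Split conformal prediction baseline with marginal coverage: the
    ceil((n+1)(1-alpha))/n empirical quantile of calibration nonconformity
    scores. PLCP goes further by calibrating within learned groups."""
    n = len(scores_cal)
    k = int(np.ceil((n + 1) * (1 - alpha)))
    return np.sort(scores_cal)[min(k, n) - 1]

# Regression example: with score = |y - f(x)| on calibration data, the
# prediction set for a new x is [f(x) - qhat, f(x) + qhat], giving ~90%
# marginal coverage at alpha = 0.1.
```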
[ "['Shayan Kiyani' 'George Pappas' 'Hamed Hassani']" ]
null
null
2404.17488
null
null
http://arxiv.org/abs/2404.17488v1
2024-04-26T15:43:24Z
2024-04-26T15:43:24Z
Low Cost Machine Vision for Insect Classification
Preserving the number and diversity of insects is one of our society's most important goals in the area of environmental sustainability. A prerequisite for this is a systematic and up-scaled monitoring in order to detect correlations and identify countermeasures. Therefore, automatized monitoring using live traps is important, but so far there is no system that provides image data of sufficient detailed information for entomological classification. In this work, we present an imaging method as part of a multisensor system developed as a low-cost, scalable, open-source system that is adaptable to classical trap types. The image quality meets the requirements needed for classification in the taxonomic tree. Therefore, illumination and resolution have been optimized and motion artefacts have been suppressed. The system is evaluated exemplarily on a dataset consisting of 16 insect species of the same as well as different genus, family and order. We demonstrate that standard CNN-architectures like ResNet50 (pretrained on iNaturalist data) or MobileNet perform very well for the prediction task after re-training. Smaller custom made CNNs also lead to promising results. Classification accuracy of $>96%$ has been achieved. Moreover, it was proved that image cropping of insects is necessary for classification of species with high inter-class similarity.
[ "['Danja Brandt' 'Martin Tschaikner' 'Teodor Chiaburu' 'Henning Schmidt'\n 'Ilona Schrimpf' 'Alexandra Stadel' 'Ingeborg E. Beckers' 'Frank Haußer']" ]
null
null
2404.17489
null
null
http://arxiv.org/pdf/2404.17489v2
2024-04-30T14:11:15Z
2024-04-26T15:43:49Z
Tabular Data Contrastive Learning via Class-Conditioned and Feature-Correlation Based Augmentation
Contrastive learning is a model pre-training technique by first creating similar views of the original data, and then encouraging the data and its corresponding views to be close in the embedding space. Contrastive learning has witnessed success in image and natural language data, thanks to the domain-specific augmentation techniques that are both intuitive and effective. Nonetheless, in tabular domain, the predominant augmentation technique for creating views is through corrupting tabular entries via swapping values, which is not as sound or effective. We propose a simple yet powerful improvement to this augmentation technique: corrupting tabular data conditioned on class identity. Specifically, when corrupting a specific tabular entry from an anchor row, instead of randomly sampling a value in the same feature column from the entire table uniformly, we only sample from rows that are identified to be within the same class as the anchor row. We assume the semi-supervised learning setting, and adopt the pseudo labeling technique for obtaining class identities over all table rows. We also explore the novel idea of selecting features to be corrupted based on feature correlation structures. Extensive experiments show that the proposed approach consistently outperforms the conventional corruption method for tabular data classification tasks. Our code is available at https://github.com/willtop/Tabular-Class-Conditioned-SSL.
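The proposed augmentation is easy to state in code: when corrupting a feature of an anchor row, sample the replacement value only from rows sharing the anchor's (pseudo) class. A minimal numpy sketch with uniform feature selection follows (the correlation-guided variant is omitted).

```python
import numpy as np

def class_conditioned_corrupt(X, pseudo_labels, corrupt_frac, rng):
    """Create a contrastive 'view' of tabular data: for each row, a random
    subset of features is overwritten with values sampled from rows sharing
    the same (pseudo) class label, instead of from the whole table."""
    n, d = X.shape
    X_view = X.copy()
    n_corrupt = max(1, int(corrupt_frac * d))
    for i in range(n):
        same_class = np.flatnonzero(pseudo_labels == pseudo_labels[i])
        cols = rng.choice(d, size=n_corrupt, replace=False)
        donors = rng.choice(same_class, size=n_corrupt)
        X_view[i, cols] = X[donors, cols]
    return X_view
```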
[ "['Wei Cui' 'Rasa Hosseinzadeh' 'Junwei Ma' 'Tongzi Wu' 'Yi Sui'\n 'Keyvan Golestan']" ]
null
null
2404.17493
null
null
http://arxiv.org/pdf/2404.17493v1
2024-04-26T15:48:09Z
2024-04-26T15:48:09Z
Causally Abstracted Multi-armed Bandits
Multi-armed bandits (MAB) and causal MABs (CMAB) are established frameworks for decision-making problems. The majority of prior work typically studies and solves individual MAB and CMAB in isolation for a given problem and associated data. However, decision-makers are often faced with multiple related problems and multi-scale observations where joint formulations are needed in order to efficiently exploit the problem structures and data dependencies. Transfer learning for CMABs addresses the situation where models are defined on identical variables, although causal connections may differ. In this work, we extend transfer learning to setups involving CMABs defined on potentially different variables, with varying degrees of granularity, and related via an abstraction map. Formally, we introduce the problem of causally abstracted MABs (CAMABs) by relying on the theory of causal abstraction in order to express a rigorous abstraction map. We propose algorithms to learn in a CAMAB, and study their regret. We illustrate the limitations and the strengths of our algorithms on a real-world scenario related to online advertising.
[ "['Fabio Massimo Zennaro' 'Nicholas Bishop' 'Joel Dyer' 'Yorgos Felekis'\n 'Anisoara Calinescu' 'Michael Wooldridge' 'Theodoros Damoulas']" ]
null
null
2404.17495
null
null
http://arxiv.org/pdf/2404.17495v1
2024-04-26T15:51:20Z
2024-04-26T15:51:20Z
Q-Learning to navigate turbulence without a map
We consider the problem of olfactory search in a turbulent environment. We focus on agents that respond solely to odor stimuli, with no access to spatial perception or prior information about the odor location. We ask whether navigation strategies to a target can be learned robustly within a sequential decision-making framework. We develop a reinforcement learning algorithm using a small set of interpretable olfactory states and train it with realistic turbulent odor cues. By introducing a temporal memory, we demonstrate that two salient features of odor traces, discretized into a few olfactory states, are sufficient to learn navigation in a realistic odor plume. Performance is dictated by the sparse nature of turbulent plumes. An optimal memory exists which ignores blanks within the plume and activates a recovery strategy outside the plume. We obtain the best performance by letting agents learn their recovery strategy and show that it mostly consists of casting crosswind, similar to behavior observed in flying insects. The optimal strategy is robust to substantial changes in the odor plumes, suggesting minor parameter tuning may be sufficient to adapt to different environments.
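A hedged sketch of the learning core: tabular Q-learning over a handful of discrete olfactory states with an epsilon-greedy policy. The state and action encodings below are illustrative placeholders, not the paper's exact scheme.

```python
import numpy as np

def q_learning_step(Q, state, action, reward, next_state,
                    lr=0.1, gamma=0.99, eps=0.1, rng=None):
    """One tabular Q-learning update over discrete olfactory states (e.g.
    'detecting odor', 'in a blank', 'outside the plume') and navigation
    actions (e.g. surge upwind, cast crosswind). Returns the next action."""
    rng = rng or np.random.default_rng()
    td_target = reward + gamma * np.max(Q[next_state])
    Q[state, action] += lr * (td_target - Q[state, action])
    # Epsilon-greedy choice of the next action.
    if rng.random() < eps:
        return int(rng.integers(Q.shape[1]))
    return int(np.argmax(Q[next_state]))
```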
[ "['Marco Rando' 'Martin James' 'Alessandro Verri' 'Lorenzo Rosasco'\n 'Agnese Seminara']" ]