categories (string) | doi (string) | id (string) | year (float64) | venue (string) | link (string) | updated (string) | published (string) | title (string) | abstract (string) | authors (list)
---|---|---|---|---|---|---|---|---|---|---|
null | null | 2404.03876 | null | null | http://arxiv.org/pdf/2404.03876v3 | 2024-06-25T02:20:06Z | 2024-04-05T03:51:19Z | Accurately Classifying Out-Of-Distribution Data in Facial Recognition | Standard classification theory assumes that the distributions of images in the test and training sets are identical. Unfortunately, real-life scenarios typically feature unseen data ("out-of-distribution data") which is different from data in the training distribution ("in-distribution"). This issue is most prevalent in social justice problems where data from under-represented groups may appear in the test data without representing an equal proportion of the training data. This may result in a model returning confidently wrong decisions and predictions. We are interested in the following question: Can the performance of a neural network improve on facial images of out-of-distribution data when it is trained simultaneously on multiple datasets of in-distribution data? We approach this problem by incorporating the Outlier Exposure model and investigate how the model's performance changes when other datasets of facial images are incorporated. We observe that the accuracy and other metrics of the model can be increased by applying Outlier Exposure, incorporating a trainable weight parameter to increase the machine's emphasis on outlier images, and by re-weighting the importance of different class labels. We also experimented with whether sorting the images and determining outliers via image features would have more of an effect on the metrics than sorting by average pixel value. Our goal was to make models not only more accurate but also fairer by scanning an expanded range of images. We also tested the datasets in reverse order to see whether a fairer dataset with balanced features has an effect on the model's accuracy. | [
"['Gianluca Barone' 'Aashrit Cunchala' 'Rudy Nunez']"
]
|
null | null | 2404.03888 | null | null | http://arxiv.org/pdf/2404.03888v2 | 2024-05-09T03:51:01Z | 2024-04-05T04:34:43Z | A proximal policy optimization based intelligent home solar management | In the smart grid, prosumers can sell unused electricity back to the power grid, assuming they own renewable energy sources and storage units. Maximizing their profits under a dynamic electricity market is a problem that requires intelligent planning. To address this, we propose a framework based on Proximal Policy Optimization (PPO) using recurrent rewards. By using the reward information modeled effectively with PPO to maximize our objective, we achieved an improvement of over 30% over the other naive algorithms in accumulating total profits. This shows promise for getting reinforcement learning algorithms to perform the planning tasks required in complex domains like financial markets. We also introduce a novel method for embedding longs based on soliton waves that outperformed normal embedding in our use case with random floating point data augmentation. | [
"['Kode Creer' 'Imitiaz Parvez']"
]
|
null | null | 2404.03891 | null | null | http://arxiv.org/pdf/2404.03891v1 | 2024-04-05T04:58:34Z | 2024-04-05T04:58:34Z | Can only LLMs do Reasoning?: Potential of Small Language Models in Task
Planning | In robotics, the use of Large Language Models (LLMs) is becoming prevalent, especially for understanding human commands. In particular, LLMs are utilized as domain-agnostic task planners for high-level human commands. LLMs are capable of Chain-of-Thought (CoT) reasoning, and this allows LLMs to be task planners. However, we need to consider that modern robots still struggle to perform complex actions, and the domains where robots can be deployed are limited in practice. This leads us to pose a question: If small LMs can be trained to reason in chains within a single domain, would even small LMs be good task planners for the robots? To train smaller LMs to reason in chains, we build 'COmmand-STeps datasets' (COST) consisting of high-level commands along with corresponding actionable low-level steps, via LLMs. We release not only our datasets but also the prompt templates used to generate them, to allow anyone to build datasets for their domain. We compare GPT3.5 and GPT4 with the finetuned GPT2 for task domains, in tabletop and kitchen environments, and the result shows that GPT2-medium is comparable to GPT3.5 for task planning in a specific domain. Our dataset, code, and more output samples can be found at https://github.com/Gawon-Choi/small-LMs-Task-Planning | [
"['Gawon Choi' 'Hyemin Ahn']"
]
|
null | null | 2404.03892 | null | null | http://arxiv.org/pdf/2404.03892v3 | 2024-04-27T08:24:37Z | 2024-04-05T05:00:21Z | Enhancing Breast Cancer Diagnosis in Mammography: Evaluation and
Integration of Convolutional Neural Networks and Explainable AI | Deep learning (DL) models for diagnosing breast cancer from mammographic images often operate as "black boxes", making it difficult for healthcare professionals to trust and understand their decision-making processes. This study presents an integrated framework combining Convolutional Neural Networks (CNNs) and Explainable Artificial Intelligence (XAI) for the enhanced diagnosis of breast cancer using the CBIS-DDSM dataset. The methodology encompasses an elaborate data preprocessing pipeline and advanced data augmentation techniques to counteract dataset limitations; transfer learning using pre-trained networks such as VGG-16, Inception-V3, and ResNet was also employed. A focal point of our study is the evaluation of XAI's effectiveness in interpreting model predictions, highlighted by utilizing the Hausdorff measure to quantitatively assess the alignment between AI-generated explanations and expert annotations. This approach is critical for XAI in promoting trustworthiness and ethical fairness in AI-assisted diagnostics. The findings from our research illustrate the effective collaboration between CNNs and XAI in advancing diagnostic methods for breast cancer, thereby facilitating a more seamless integration of advanced AI technologies within clinical settings. By enhancing the interpretability of AI-driven decisions, this work lays the groundwork for improved collaboration between AI systems and medical practitioners, ultimately enriching patient care. Furthermore, the implications of our research extend well beyond the current methodologies, encouraging further research into how to combine multimodal data and improve AI explanations to meet the needs of clinical practice. | [
"['Maryam Ahmed' 'Tooba Bibi' 'Rizwan Ahmed Khan' 'Sidra Nasir']"
]
|
null | null | 2404.03898 | null | null | http://arxiv.org/pdf/2404.03898v1 | 2024-04-05T05:42:23Z | 2024-04-05T05:42:23Z | VoltaVision: A Transfer Learning model for electronic component
classification | In this paper, we analyze the effectiveness of transfer learning on classifying electronic components. Transfer learning reuses pre-trained models to save time and resources in building a robust classifier rather than learning from scratch. Our work introduces a lightweight CNN, coined as VoltaVision, and compares its performance against more complex models. We test the hypothesis that transferring knowledge from a similar task to our target domain yields better results than state-of-the-art models trained on general datasets. Our dataset and code for this work are available at https://github.com/AnasIshfaque/VoltaVision. | [
"['Anas Mohammad Ishfaqul Muktadir Osmani' 'Taimur Rahman' 'Salekul Islam']"
]
|
null | null | 2404.03900 | null | null | http://arxiv.org/pdf/2404.03900v1 | 2024-04-05T05:46:20Z | 2024-04-05T05:46:20Z | Nonparametric Modern Hopfield Models | We present a nonparametric construction for deep learning compatible modern Hopfield models and utilize this framework to debut an efficient variant. Our key contribution stems from interpreting the memory storage and retrieval processes in modern Hopfield models as a nonparametric regression problem subject to a set of query-memory pairs. Crucially, our framework not only recovers the known results from the original dense modern Hopfield model but also fills the void in the literature regarding efficient modern Hopfield models, by introducing *sparse-structured* modern Hopfield models with sub-quadratic complexity. We establish that this sparse model inherits the appealing theoretical properties of its dense analogue -- connection with transformer attention, fixed point convergence and exponential memory capacity -- even without knowing details of the Hopfield energy function. Additionally, we showcase the versatility of our framework by constructing a family of modern Hopfield models as extensions, including linear, random masked, top-K and positive random feature modern Hopfield models. Empirically, we validate the efficacy of our framework in both synthetic and realistic settings. | [
"['Jerry Yao-Chieh Hu' 'Bo-Yu Chen' 'Dennis Wu' 'Feng Ruan' 'Han Liu']"
]
|
null | null | 2404.03908 | null | null | http://arxiv.org/pdf/2404.03908v1 | 2024-04-05T06:15:58Z | 2024-04-05T06:15:58Z | Multi-Task Learning for Lung sound & Lung disease classification | In recent years, advancements in deep learning techniques have considerably enhanced the efficiency and accuracy of medical diagnostics. In this work, a novel approach using multi-task learning (MTL) for the simultaneous classification of lung sounds and lung diseases is proposed. Our proposed model leverages MTL with four different deep learning models such as 2D CNN, ResNet50, MobileNet and DenseNet to extract relevant features from the lung sound recordings. The ICBHI 2017 Respiratory Sound Database was employed in the current study. The MTL for MobileNet model performed better than the other models considered, with an accuracy of 74% for lung sound analysis and 91% for lung disease classification. Results of the experimentation demonstrate the efficacy of our approach in classifying both lung sounds and lung diseases concurrently. In this study, using the demographic data of the patients from the database, risk level computation for Chronic Obstructive Pulmonary Disease is also carried out. For this computation, three machine learning algorithms, namely Logistic Regression, SVM and Random Forest classifiers, were employed. Among these ML algorithms, the Random Forest classifier had the highest accuracy of 92%. This work helps in considerably reducing the physician's burden of not just diagnosing the pathology but also effectively communicating to the patient about the possible causes or outcomes. | [
"['Suma K V' 'Deepali Koppad' 'Preethi Kumar' 'Neha A Kantikar'\n 'Surabhi Ramesh']"
]
|
null | null | 2404.03913 | null | null | http://arxiv.org/pdf/2404.03913v1 | 2024-04-05T06:41:27Z | 2024-04-05T06:41:27Z | Concept Weaver: Enabling Multi-Concept Fusion in Text-to-Image Models | While there has been significant progress in customizing text-to-image generation models, generating images that combine multiple personalized concepts remains challenging. In this work, we introduce Concept Weaver, a method for composing customized text-to-image diffusion models at inference time. Specifically, the method breaks the process into two steps: creating a template image aligned with the semantics of input prompts, and then personalizing the template using a concept fusion strategy. The fusion strategy incorporates the appearance of the target concepts into the template image while retaining its structural details. The results indicate that our method can generate multiple custom concepts with higher identity fidelity compared to alternative approaches. Furthermore, the method is shown to seamlessly handle more than two concepts and closely follow the semantic meaning of the input prompt without blending appearances across different subjects. | [
"['Gihyun Kwon' 'Simon Jenni' 'Dingzeyu Li' 'Joon-Young Lee' 'Jong Chul Ye'\n 'Fabian Caba Heilbron']"
]
|
null | null | 2404.03936 | null | null | http://arxiv.org/abs/2404.03936v2 | 2024-04-11T13:02:58Z | 2024-04-05T07:44:17Z | Deep Learning for Satellite Image Time Series Analysis: A Review | Earth observation (EO) satellite missions have been providing detailed images about the state of the Earth and its land cover for over 50 years. Long term missions, such as NASA's Landsat, Terra, and Aqua satellites, and more recently, the ESA's Sentinel missions, record images of the entire world every few days. Although single images provide point-in-time data, repeated images of the same area, or satellite image time series (SITS) provide information about the changing state of vegetation and land use. These SITS are useful for modeling dynamic processes and seasonal changes such as plant phenology. They have potential benefits for many aspects of land and natural resource management, including applications in agricultural, forest, water, and disaster management, urban planning, and mining. However, the resulting satellite image time series (SITS) are complex, incorporating information from the temporal, spatial, and spectral dimensions. Therefore, deep learning methods are often deployed as they can analyze these complex relationships. This review presents a summary of the state-of-the-art methods of modelling environmental, agricultural, and other Earth observation variables from SITS data using deep learning methods. We aim to provide a resource for remote sensing experts interested in using deep learning techniques to enhance Earth observation models with temporal information. | [
"['Lynn Miller' 'Charlotte Pelletier' 'Geoffrey I. Webb']"
]
|
null | null | 2404.03948 | null | null | http://arxiv.org/abs/2404.03948v1 | 2024-04-05T08:27:36Z | 2024-04-05T08:27:36Z | Re-pseudonymization Strategies for Smart Meter Data Are Not Robust to
Deep Learning Profiling Attacks | Smart meters, devices measuring the electricity and gas consumption of a household, are currently being deployed at a fast rate throughout the world. The data they collect are extremely useful, including in the fight against climate change. However, these data and the information that can be inferred from them are highly sensitive. Re-pseudonymization, i.e., the frequent replacement of random identifiers over time, is widely used to share smart meter data while mitigating the risk of re-identification. We here show how, in spite of re-pseudonymization, households' consumption records can be pieced together with high accuracy in large-scale datasets. We propose the first deep learning-based profiling attack against re-pseudonymized smart meter data. Our attack combines neural network embeddings, which are used to extract features from weekly consumption records and are tailored to the smart meter identification task, with a nearest neighbor classifier. We evaluate six neural network architectures as the embedding model. Our results suggest that the Transformer and CNN-LSTM architectures vastly outperform previous methods as well as other architectures, successfully identifying the correct household 73.4% of the time among 5139 households based on electricity and gas consumption records (54.5% for electricity only). We further show that the features extracted by the embedding model maintain their effectiveness when transferred to a set of users disjoint from the one used to train the model. Finally, we extensively evaluate the robustness of our results. Taken together, our results strongly suggest that even frequent re-pseudonymization strategies can be reversed, strongly limiting their ability to prevent re-identification in practice. | [
"['Ana-Maria Cretu' 'Miruna Rusu' 'Yves-Alexandre de Montjoye']"
]
|
null | null | 2404.03969 | null | null | http://arxiv.org/pdf/2404.03969v1 | 2024-04-05T09:05:37Z | 2024-04-05T09:05:37Z | Transformers for molecular property prediction: Lessons learned from the
past five years | Molecular Property Prediction (MPP) is vital for drug discovery, crop protection, and environmental science. Over the last decades, diverse computational techniques have been developed, from using simple physical and chemical properties and molecular fingerprints in statistical models and classical machine learning to advanced deep learning approaches. In this review, we aim to distill insights from current research on employing transformer models for MPP. We analyze the currently available models and explore key questions that arise when training and fine-tuning a transformer model for MPP. These questions encompass the choice and scale of the pre-training data, optimal architecture selections, and promising pre-training objectives. Our analysis highlights areas not yet covered in current research, inviting further exploration to enhance the field's understanding. Additionally, we address the challenges in comparing different models, emphasizing the need for standardized data splitting and robust statistical analysis. | [
"['Afnan Sultan' 'Jochen Sieg' 'Miriam Mathea' 'Andrea Volkamer']"
]
|
null | null | 2404.03984 | null | null | http://arxiv.org/pdf/2404.03984v1 | 2024-04-05T09:39:47Z | 2024-04-05T09:39:47Z | ROMA-iQSS: An Objective Alignment Approach via State-Based Value
Learning and ROund-Robin Multi-Agent Scheduling | Effective multi-agent collaboration is imperative for solving complex, distributed problems. In this context, two key challenges must be addressed: first, autonomously identifying optimal objectives for collective outcomes; second, aligning these objectives among agents. Traditional frameworks, often reliant on centralized learning, struggle with scalability and efficiency in large multi-agent systems. To overcome these issues, we introduce a decentralized state-based value learning algorithm that enables agents to independently discover optimal states. Furthermore, we introduce a novel mechanism for multi-agent interaction, wherein less proficient agents follow and adopt policies from more experienced ones, thereby indirectly guiding their learning process. Our theoretical analysis shows that our approach leads decentralized agents to an optimal collective policy. Empirical experiments further demonstrate that our method outperforms existing decentralized state-based and action-based value learning strategies by effectively identifying and aligning optimal objectives. | [
"['Chi-Hui Lin' 'Joewie J. Koh' 'Alessandro Roncone' 'Lijun Chen']"
]
|
null | null | 2404.03988 | null | null | http://arxiv.org/pdf/2404.03988v1 | 2024-04-05T09:50:00Z | 2024-04-05T09:50:00Z | Model Selection with Model Zoo via Graph Learning | Pre-trained deep learning (DL) models are increasingly accessible in public repositories, i.e., model zoos. Given a new prediction task, finding the best model to fine-tune can be computationally intensive and costly, especially when the number of pre-trained models is large. Selecting the right pre-trained models is crucial, yet complicated by the diversity of models from various model families (like ResNet, ViT, Swin) and the hidden relationships between models and datasets. Existing methods, which utilize basic information from models and datasets to compute scores indicating model performance on target datasets, overlook the intrinsic relationships, limiting their effectiveness in model selection. In this study, we introduce TransferGraph, a novel framework that reformulates model selection as a graph learning problem. TransferGraph constructs a graph using extensive metadata extracted from models and datasets, while capturing their inherent relationships. Through comprehensive experiments across 16 real datasets, both images and texts, we demonstrate TransferGraph's effectiveness in capturing essential model-dataset relationships, yielding up to a 32% improvement in correlation between predicted performance and the actual fine-tuning results compared to the state-of-the-art methods. | [
"['Ziyu Li' 'Hilco van der Wilk' 'Danning Zhan' 'Megha Khosla'\n 'Alessandro Bozzon' 'Rihan Hai']"
]
|
null | null | 2404.03991 | null | null | http://arxiv.org/pdf/2404.03991v1 | 2024-04-05T10:01:31Z | 2024-04-05T10:01:31Z | Towards Efficient and Accurate CT Segmentation via Edge-Preserving
Probabilistic Downsampling | Downsampling images and labels, often necessitated by limited resources or to expedite network training, leads to the loss of small objects and thin boundaries. This undermines the segmentation network's capacity to interpret images accurately and predict detailed labels, resulting in diminished performance compared to processing at original resolutions. This situation exemplifies the trade-off between efficiency and accuracy, with higher downsampling factors further impairing segmentation outcomes. Preserving information during downsampling is especially critical for medical image segmentation tasks. To tackle this challenge, we introduce a novel method named Edge-preserving Probabilistic Downsampling (EPD). It utilizes class uncertainty within a local window to produce soft labels, with the window size dictating the downsampling factor. This enables a network to produce quality predictions at low resolutions. Beyond preserving edge details more effectively than conventional nearest-neighbor downsampling, employing a similar algorithm for images surpasses bilinear interpolation in image downsampling, enhancing overall performance. Our method significantly improved Intersection over Union (IoU) by 2.85%, 8.65%, and 11.89% when downsampling data to 1/2, 1/4, and 1/8, respectively, compared to conventional interpolation methods. | [
"['Shahzad Ali' 'Yu Rim Lee' 'Soo Young Park' 'Won Young Tak'\n 'Soon Ki Jung']"
]
|
null | null | 2404.03992 | null | null | http://arxiv.org/abs/2404.03992v1 | 2024-04-05T10:02:32Z | 2024-04-05T10:02:32Z | Rolling the dice for better deep learning performance: A study of
randomness techniques in deep neural networks | This paper investigates how various randomization techniques impact Deep Neural Networks (DNNs). Randomization, like weight noise and dropout, aids in reducing overfitting and enhancing generalization, but their interactions are poorly understood. The study categorizes randomness techniques into four types and proposes new methods: adding noise to the loss function and random masking of gradient updates. Using Particle Swarm Optimizer (PSO) for hyperparameter optimization, it explores optimal configurations across MNIST, FASHION-MNIST, CIFAR10, and CIFAR100 datasets. Over 30,000 configurations are evaluated, revealing data augmentation and weight initialization randomness as main performance contributors. Correlation analysis shows different optimizers prefer distinct randomization types. The complete implementation and dataset are available on GitHub. | [
"['Mohammed Ghaith Altarabichi' 'Sławomir Nowaczyk' 'Sepideh Pashami'\n 'Peyman Sheikholharam Mashhadi' 'Julia Handl']"
]
|
null | null | 2404.03996 | null | null | http://arxiv.org/abs/2404.03996v1 | 2024-04-05T10:15:24Z | 2024-04-05T10:15:24Z | Fast Genetic Algorithm for feature selection -- A qualitative
approximation approach | Evolutionary Algorithms (EAs) are often challenging to apply in real-world settings since evolutionary computations involve a large number of evaluations of a typically expensive fitness function. For example, an evaluation could involve training a new machine learning model. An approximation (also known as meta-model or a surrogate) of the true function can be used in such applications to alleviate the computation cost. In this paper, we propose a two-stage surrogate-assisted evolutionary approach to address the computational issues arising from using Genetic Algorithm (GA) for feature selection in a wrapper setting for large datasets. We define 'Approximation Usefulness' to capture the necessary conditions to ensure correctness of the EA computations when an approximation is used. Based on this definition, we propose a procedure to construct a lightweight qualitative meta-model by the active selection of data instances. We then use a meta-model to carry out the feature selection task. We apply this procedure to the GA-based algorithm CHC (Cross generational elitist selection, Heterogeneous recombination and Cataclysmic mutation) to create a Qualitative approXimations variant, CHCQX. We show that CHCQX converges faster to feature subset solutions of significantly higher accuracy (as compared to CHC), particularly for large datasets with over 100K instances. We also demonstrate the applicability of the thinking behind our approach more broadly to Swarm Intelligence (SI), another branch of the Evolutionary Computation (EC) paradigm with results of PSOQX, a qualitative approximation adaptation of the Particle Swarm Optimization (PSO) method. A GitHub repository with the complete implementation is available. | [
"['Mohammed Ghaith Altarabichi' 'Sławomir Nowaczyk' 'Sepideh Pashami'\n 'Peyman Sheikholharam Mashhadi']"
]
|
null | null | 2404.03997 | null | null | http://arxiv.org/pdf/2404.03997v1 | 2024-04-05T10:19:04Z | 2024-04-05T10:19:04Z | Demonstration Guided Multi-Objective Reinforcement Learning | Multi-objective reinforcement learning (MORL) is increasingly relevant due to its resemblance to real-world scenarios requiring trade-offs between multiple objectives. Catering to diverse user preferences, traditional reinforcement learning faces amplified challenges in MORL. To address the difficulty of training policies from scratch in MORL, we introduce demonstration-guided multi-objective reinforcement learning (DG-MORL). This novel approach utilizes prior demonstrations, aligns them with user preferences via corner weight support, and incorporates a self-evolving mechanism to refine suboptimal demonstrations. Our empirical studies demonstrate DG-MORL's superiority over existing MORL algorithms, establishing its robustness and efficacy, particularly under challenging conditions. We also provide an upper bound of the algorithm's sample complexity. | [
"['Junlin Lu' 'Patrick Mannion' 'Karl Mason']"
]
|
null | null | 2404.04001 | null | null | http://arxiv.org/pdf/2404.04001v1 | 2024-04-05T10:25:26Z | 2024-04-05T10:25:26Z | Approximate UMAP allows for high-rate online visualization of
high-dimensional data streams | In the BCI field, introspection and interpretation of brain signals are desired for providing feedback or to guide rapid paradigm prototyping but are challenging due to the high noise level and dimensionality of the signals. Deep neural networks are often introspected by transforming their learned feature representations into 2- or 3-dimensional subspace visualizations using projection algorithms like Uniform Manifold Approximation and Projection (UMAP). Unfortunately, these methods are computationally expensive, making the projection of data streams in real-time a non-trivial task. In this study, we introduce a novel variant of UMAP, called approximate UMAP (aUMAP). It aims at generating rapid projections for real-time introspection. To study its suitability for real-time projecting, we benchmark the methods against standard UMAP and its neural network counterpart parametric UMAP. Our results show that approximate UMAP delivers projections that replicate the projection space of standard UMAP while decreasing projection time by an order of magnitude and maintaining the same training time. | [
"['Peter Wassenaar' 'Pierre Guetschel' 'Michael Tangermann']"
]
|
null | null | 2404.04002 | null | null | http://arxiv.org/pdf/2404.04002v2 | 2024-04-09T09:35:24Z | 2024-04-05T10:25:40Z | Continual Learning with Weight Interpolation | Continual learning poses a fundamental challenge for modern machine learning systems, requiring models to adapt to new tasks while retaining knowledge from previous ones. Addressing this challenge necessitates the development of efficient algorithms capable of learning from data streams and accumulating knowledge over time. This paper proposes a novel approach to continual learning utilizing the weight consolidation method. Our method, a simple yet powerful technique, enhances robustness against catastrophic forgetting by interpolating between old and new model weights after each novel task, effectively merging two models to facilitate exploration of local minima emerging after arrival of new concepts. Moreover, we demonstrate that our approach can complement existing rehearsal-based replay approaches, improving their accuracy and further mitigating the forgetting phenomenon. Additionally, our method provides an intuitive mechanism for controlling the stability-plasticity trade-off. Experimental results showcase the significant performance enhancement that the proposed weight consolidation approach offers to state-of-the-art experience replay algorithms. Our algorithm can be downloaded from https://github.com/jedrzejkozal/weight-interpolation-cl. | [
"['Jędrzej Kozal' 'Jan Wasilewski' 'Bartosz Krawczyk' 'Michał Woźniak']"
]
|
null | null | 2404.04049 | null | null | http://arxiv.org/pdf/2404.04049v1 | 2024-04-05T12:05:20Z | 2024-04-05T12:05:20Z | Cycle Life Prediction for Lithium-ion Batteries: Machine Learning and
More | Batteries are dynamic systems with complicated nonlinear aging, highly dependent on cell design, chemistry, manufacturing, and operational conditions. Prediction of battery cycle life and estimation of aging states is important to accelerate battery R&D, testing, and to further the understanding of how batteries degrade. Beyond testing, battery management systems rely on real-time models and onboard diagnostics and prognostics for safe operation. Estimating the state of health and remaining useful life of a battery is important to optimize performance and use resources optimally. This tutorial begins with an overview of first-principles, machine learning, and hybrid battery models. Then, a typical pipeline for the development of interpretable machine learning models is explained and showcased for cycle life prediction from laboratory testing data. We highlight the challenges of machine learning models, motivating the incorporation of physics in hybrid modeling approaches, which are needed to decipher the aging trajectory of batteries but require more data and further work on the physics of battery degradation. The tutorial closes with a discussion on generalization and further research directions. | [
"['Joachim Schaeffer' 'Giacomo Galuppini' 'Jinwook Rhyu'\n 'Patrick A. Asinger' 'Robin Droop' 'Rolf Findeisen' 'Richard D. Braatz']"
]
|
null | null | 2404.04057 | null | null | http://arxiv.org/pdf/2404.04057v3 | 2024-05-24T17:20:46Z | 2024-04-05T12:30:19Z | Score identity Distillation: Exponentially Fast Distillation of
Pretrained Diffusion Models for One-Step Generation | We introduce Score identity Distillation (SiD), an innovative data-free method that distills the generative capabilities of pretrained diffusion models into a single-step generator. SiD not only facilitates an exponentially fast reduction in Fréchet inception distance (FID) during distillation but also approaches or even exceeds the FID performance of the original teacher diffusion models. By reformulating forward diffusion processes as semi-implicit distributions, we leverage three score-related identities to create an innovative loss mechanism. This mechanism achieves rapid FID reduction by training the generator using its own synthesized images, eliminating the need for real data or reverse-diffusion-based generation, all accomplished within significantly shortened generation time. Upon evaluation across four benchmark datasets, the SiD algorithm demonstrates high iteration efficiency during distillation and surpasses competing distillation approaches, whether they are one-step or few-step, data-free, or dependent on training data, in terms of generation quality. This achievement not only redefines the benchmarks for efficiency and effectiveness in diffusion distillation but also in the broader field of diffusion-based generation. The PyTorch implementation is available at https://github.com/mingyuanzhou/SiD | [
"['Mingyuan Zhou' 'Huangjie Zheng' 'Zhendong Wang' 'Mingzhang Yin'\n 'Hai Huang']"
]
|
null | null | 2404.04062 | null | null | http://arxiv.org/pdf/2404.04062v1 | 2024-04-05T12:37:08Z | 2024-04-05T12:37:08Z | Derivative-free tree optimization for complex systems | A tremendous range of design tasks in materials, physics, and biology can be formulated as finding the optimum of an objective function depending on many parameters without knowing its closed-form expression or the derivative. Traditional derivative-free optimization techniques often rely on strong assumptions about objective functions, thereby failing at optimizing non-convex systems beyond 100 dimensions. Here, we present a tree search method for derivative-free optimization that enables accelerated optimal design of high-dimensional complex systems. Specifically, we introduce stochastic tree expansion, dynamic upper confidence bound, and short-range backpropagation mechanism to evade local optimum, iteratively approximating the global optimum using machine learning models. This development effectively confronts the dimensionally challenging problems, achieving convergence to global optima across various benchmark functions up to 2,000 dimensions, surpassing the existing methods by 10- to 20-fold. Our method demonstrates wide applicability to a wide range of real-world complex systems spanning materials, physics, and biology, considerably outperforming state-of-the-art algorithms. This enables efficient autonomous knowledge discovery and facilitates self-driving virtual laboratories. Although we focus on problems within the realm of natural science, the advancements in optimization techniques achieved herein are applicable to a broader spectrum of challenges across all quantitative disciplines. | [
"['Ye Wei' 'Bo Peng' 'Ruiwen Xie' 'Yangtao Chen' 'Yu Qin' 'Peng Wen'\n 'Stefan Bauer' 'Po-Yen Tung']"
]
|
null | null | 2404.04064 | null | null | http://arxiv.org/pdf/2404.04064v1 | 2024-04-05T12:41:53Z | 2024-04-05T12:41:53Z | Fusing Dictionary Learning and Support Vector Machines for Unsupervised
Anomaly Detection | We study in this paper the improvement of one-class support vector machines (OC-SVM) through sparse representation techniques for unsupervised anomaly detection. As Dictionary Learning (DL) became recently a common analysis technique that reveals hidden sparse patterns of data, our approach uses this insight to endow unsupervised detection with more control on pattern finding and dimensions. We introduce a new anomaly detection model that unifies the OC-SVM and DL residual functions into a single composite objective, subsequently solved through K-SVD-type iterative algorithms. A closed-form of the alternating K-SVD iteration is explicitly derived for the new composite model and practical implementable schemes are discussed. The standard DL model is adapted for the Dictionary Pair Learning (DPL) context, where the usual sparsity constraints are naturally eliminated. Finally, we extend both objectives to the more general setting that allows the use of kernel functions. The empirical convergence properties of the resulting algorithms are provided and an in-depth analysis of their parametrization is performed while also demonstrating their numerical performance in comparison with existing methods. | [
"['Paul Irofti' 'Iulian-Andrei Hîji' 'Andrei Pătraşcu' 'Nicolae Cleju']"
]
|
null | null | 2404.04067 | null | null | http://arxiv.org/pdf/2404.04067v3 | 2024-06-24T12:32:41Z | 2024-04-05T12:51:37Z | CLUE: A Clinical Language Understanding Evaluation for LLMs | Large Language Models (LLMs) are expected to significantly contribute to patient care, diagnostics, and administrative processes. Emerging biomedical LLMs aim to address healthcare-specific challenges, including privacy demands and computational constraints. Assessing the models' suitability for this sensitive application area is of the utmost importance. However, evaluation has primarily been limited to non-clinical tasks, which do not reflect the complexity of practical clinical applications. To fill this gap, we present the Clinical Language Understanding Evaluation (CLUE), a benchmark tailored to evaluate LLMs on clinical tasks. CLUE includes six tasks to test the practical applicability of LLMs in complex healthcare settings. Our evaluation includes a total of 25 LLMs. In contrast to previous evaluations, CLUE shows a decrease in performance for nine out of twelve biomedical models. Our benchmark represents a step towards a standardized approach to evaluating and developing LLMs in healthcare to align future model development with the real-world needs of clinical application. We open-source all evaluation scripts and datasets for future research at https://github.com/TIO-IKIM/CLUE. | [
"['Amin Dada' 'Marie Bauer' 'Amanda Butler Contreras' 'Osman Alperen Koraş'\n 'Constantin Marc Seibold' 'Kaleb E Smith' 'Jens Kleesiek']"
]
|
null | null | 2404.04070 | null | null | http://arxiv.org/pdf/2404.04070v1 | 2024-04-05T12:54:09Z | 2024-04-05T12:54:09Z | Hierarchical Neural Additive Models for Interpretable Demand Forecasts | Demand forecasts are the crucial basis for numerous business decisions, ranging from inventory management to strategic facility planning. While machine learning (ML) approaches offer accuracy gains, their interpretability and acceptance are notoriously lacking. Addressing this dilemma, we introduce Hierarchical Neural Additive Models for time series (HNAM). HNAM expands upon Neural Additive Models (NAM) by introducing a time-series specific additive model with a level and interacting covariate components. Covariate interactions are only allowed according to a user-specified interaction hierarchy. For example, weekday effects may be estimated independently of other covariates, whereas a holiday effect may depend on the weekday and an additional promotion may depend on both former covariates that are lower in the interaction hierarchy. Thereby, HNAM yields an intuitive forecasting interface in which analysts can observe the contribution for each known covariate. We evaluate the proposed approach and benchmark its performance against other state-of-the-art machine learning and statistical models extensively on real-world retail data. The results reveal that HNAM offers competitive prediction performance whilst providing plausible explanations. | [
"['Leif Feddersen' 'Catherine Cleophas']"
]
|
null | null | 2404.04072 | null | null | http://arxiv.org/pdf/2404.04072v1 | 2024-04-05T12:58:07Z | 2024-04-05T12:58:07Z | Label Propagation for Zero-shot Classification with Vision-Language
Models | Vision-Language Models (VLMs) have demonstrated impressive performance on zero-shot classification, i.e. classification when provided merely with a list of class names. In this paper, we tackle the case of zero-shot classification in the presence of unlabeled data. We leverage the graph structure of the unlabeled data and introduce ZLaP, a method based on label propagation (LP) that utilizes geodesic distances for classification. We tailor LP to graphs containing both text and image features and further propose an efficient method for performing inductive inference based on a dual solution and a sparsification step. We perform extensive experiments to evaluate the effectiveness of our method on 14 common datasets and show that ZLaP outperforms the latest related works. Code: https://github.com/vladan-stojnic/ZLaP | [
"['Vladan Stojnić' 'Yannis Kalantidis' 'Giorgos Tolias']"
]
|
null | null | 2404.04102 | null | null | http://arxiv.org/pdf/2404.04102v2 | 2024-05-28T17:11:53Z | 2024-04-05T13:58:51Z | ROPO: Robust Preference Optimization for Large Language Models | Preference alignment is pivotal for empowering large language models (LLMs) to generate helpful and harmless responses. However, the performance of preference alignment is highly sensitive to the prevalent noise in the preference data. Recent efforts for this problem either marginally alleviate the impact of noise without the ability to actually reduce its presence, or rely on costly teacher LLMs prone to reward misgeneralization. To address these challenges, we propose the RObust Preference Optimization (ROPO) framework, an iterative alignment approach that integrates noise-tolerance and filtering of noisy samples without the aid of external models. Specifically, ROPO iteratively solves a constrained optimization problem, where we dynamically assign a quality-aware weight for each sample and constrain the sum of the weights to the number of samples we intend to retain. For noise-tolerant training and effective noise identification, we derive a robust loss by suppressing the gradients of samples with high uncertainty. We demonstrate both empirically and theoretically that the derived loss is critical for distinguishing noisy samples from clean ones. Furthermore, inspired by our derived loss, we propose a robustness-guided rejection sampling technique to compensate for the potential important information in discarded queries. Experiments on three widely-used datasets with Mistral-7B and Llama-2-7B demonstrate that ROPO significantly outperforms existing preference alignment methods, with its superiority growing as the noise rate increases. | [
"['Xize Liang' 'Chao Chen' 'Shuang Qiu' 'Jie Wang' 'Yue Wu' 'Zhihang Fu'\n 'Zhihao Shi' 'Feng Wu' 'Jieping Ye']"
]
|
null | null | 2404.04106 | null | null | http://arxiv.org/pdf/2404.04106v1 | 2024-04-05T14:02:04Z | 2024-04-05T14:02:04Z | Intervention-Assisted Policy Gradient Methods for Online Stochastic
Queuing Network Optimization: Technical Report | Deep Reinforcement Learning (DRL) offers a powerful approach to training neural network control policies for stochastic queuing networks (SQN). However, traditional DRL methods rely on offline simulations or static datasets, limiting their real-world application in SQN control. This work proposes Online Deep Reinforcement Learning-based Controls (ODRLC) as an alternative, where an intelligent agent interacts directly with a real environment and learns an optimal control policy from these online interactions. SQNs present a challenge for ODRLC due to the unbounded nature of the queues within the network resulting in an unbounded state-space. An unbounded state-space is particularly challenging for neural network policies as neural networks are notoriously poor at extrapolating to unseen states. To address this challenge, we propose an intervention-assisted framework that leverages strategic interventions from known stable policies to ensure the queue sizes remain bounded. This framework combines the learning power of neural networks with the guaranteed stability of classical control policies for SQNs. We introduce a method to design these intervention-assisted policies to ensure strong stability of the network. Furthermore, we extend foundational DRL theorems for intervention-assisted policies and develop two practical algorithms specifically for ODRLC of SQNs. Finally, we demonstrate through experiments that our proposed algorithms outperform both classical control approaches and prior ODRLC algorithms. | [
"['Jerrod Wigmore' 'Brooke Shrader' 'Eytan Modiano']"
]
|
null | null | 2404.04108 | null | null | http://arxiv.org/pdf/2404.04108v1 | 2024-04-05T14:04:07Z | 2024-04-05T14:04:07Z | Large language models as oracles for instantiating ontologies with
domain-specific knowledge | Background. Endowing intelligent systems with semantic data commonly requires designing and instantiating ontologies with domain-specific knowledge. Especially in the early phases, those activities are typically performed manually by human experts possibly leveraging on their own experience. The resulting process is therefore time-consuming, error-prone, and often biased by the personal background of the ontology designer. Objective. To mitigate that issue, we propose a novel domain-independent approach to automatically instantiate ontologies with domain-specific knowledge, by leveraging on large language models (LLMs) as oracles. Method. Starting from (i) an initial schema composed of inter-related classes and properties and (ii) a set of query templates, our method queries the LLM multiple times, and generates instances for both classes and properties from its replies. Thus, the ontology is automatically filled with domain-specific knowledge, compliant to the initial schema. As a result, the ontology is quickly and automatically enriched with manifold instances, which experts may consider to keep, adjust, discard, or complement according to their own needs and expertise. Contribution. We formalise our method in a general way and instantiate it over various LLMs, as well as on a concrete case study. We report experiments rooted in the nutritional domain where an ontology of food meals and their ingredients is semi-automatically instantiated from scratch, starting from a categorisation of meals and their relationships. There, we analyse the quality of the generated ontologies and compare ontologies attained by exploiting different LLMs. Finally, we provide a SWOT analysis of the proposed method. | [
"['Giovanni Ciatto' 'Andrea Agiollo' 'Matteo Magnini' 'Andrea Omicini']"
]
|
null | null | 2404.04111 | null | null | http://arxiv.org/pdf/2404.04111v1 | 2024-04-05T14:08:57Z | 2024-04-05T14:08:57Z | The Unreasonable Effectiveness Of Early Discarding After One Epoch In
Neural Network Hyperparameter Optimization | To reach high performance with deep learning, hyperparameter optimization (HPO) is essential. This process is usually time-consuming due to costly evaluations of neural networks. Early discarding techniques limit the resources granted to unpromising candidates by observing the empirical learning curves and canceling neural network training as soon as the lack of competitiveness of a candidate becomes evident. Despite two decades of research, little is understood about the trade-off between the aggressiveness of discarding and the loss of predictive performance. Our paper studies this trade-off for several commonly used discarding techniques such as successive halving and learning curve extrapolation. Our surprising finding is that these commonly used techniques offer minimal to no added value compared to the simple strategy of discarding after a constant number of epochs of training. The chosen number of epochs depends mostly on the available compute budget. We call this approach i-Epoch (i being the constant number of epochs with which neural networks are trained) and suggest to assess the quality of early discarding techniques by comparing how their Pareto-Front (in consumed training epochs and predictive performance) complement the Pareto-Front of i-Epoch. | [
"['Romain Egele' 'Felix Mohr' 'Tom Viering' 'Prasanna Balaprakash']"
]
|
null | null | 2404.04118 | null | null | http://arxiv.org/pdf/2404.04118v1 | 2024-04-05T14:18:06Z | 2024-04-05T14:18:06Z | GNNBENCH: Fair and Productive Benchmarking for Single-GPU GNN System | We hypothesize that the absence of a standardized benchmark has allowed several fundamental pitfalls in GNN System design and evaluation that the community has overlooked. In this work, we propose GNNBench, a plug-and-play benchmarking platform focused on system innovation. GNNBench presents a new protocol to exchange their captive tensor data, supports custom classes in System APIs, and allows automatic integration of the same system module to many deep learning frameworks, such as PyTorch and TensorFlow. To demonstrate the importance of such a benchmark framework, we integrated several GNN systems. Our results show that integration with GNNBench helped us identify several measurement issues that deserve attention from the community. | [
"['Yidong Gong' 'Pradeep Kumar']"
]
|
null | null | 2404.04125 | null | null | http://arxiv.org/pdf/2404.04125v2 | 2024-04-08T21:14:43Z | 2024-04-04T17:58:02Z | No "Zero-Shot" Without Exponential Data: Pretraining Concept Frequency
Determines Multimodal Model Performance | Web-crawled pretraining datasets underlie the impressive "zero-shot" evaluation performance of multimodal models, such as CLIP for classification/retrieval and Stable-Diffusion for image generation. However, it is unclear how meaningful the notion of "zero-shot" generalization is for such multimodal models, as it is not known to what extent their pretraining datasets encompass the downstream concepts targeted during "zero-shot" evaluation. In this work, we ask: How is the performance of multimodal models on downstream concepts influenced by the frequency of these concepts in their pretraining datasets? We comprehensively investigate this question across 34 models and five standard pretraining datasets (CC-3M, CC-12M, YFCC-15M, LAION-400M, LAION-Aesthetics), generating over 300GB of data artifacts. We consistently find that, far from exhibiting "zero-shot" generalization, multimodal models require exponentially more data to achieve linear improvements in downstream "zero-shot" performance, following a sample inefficient log-linear scaling trend. This trend persists even when controlling for sample-level similarity between pretraining and downstream datasets, and testing on purely synthetic data distributions. Furthermore, upon benchmarking models on long-tailed data sampled based on our analysis, we demonstrate that multimodal models across the board perform poorly. We contribute this long-tail test set as the "Let it Wag!" benchmark to further research in this direction. Taken together, our study reveals an exponential need for training data which implies that the key to "zero-shot" generalization capabilities under large-scale training paradigms remains to be found. | [
"['Vishaal Udandarao' 'Ameya Prabhu' 'Adhiraj Ghosh' 'Yash Sharma'\n 'Philip H. S. Torr' 'Adel Bibi' 'Samuel Albanie' 'Matthias Bethge']"
]
|
null | null | 2404.04126 | null | null | http://arxiv.org/pdf/2404.04126v1 | 2024-04-05T14:23:43Z | 2024-04-05T14:23:43Z | Generalizable Temperature Nowcasting with Physics-Constrained RNNs for
Predictive Maintenance of Wind Turbine Components | Machine learning plays an important role in the operation of current wind energy production systems. One central application is predictive maintenance to increase efficiency and lower electricity costs by reducing downtimes. Integrating physics-based knowledge in neural networks to enforce their physical plausibility is a promising method to improve current approaches, but incomplete system information often impedes their application in real world scenarios. We describe a simple and efficient way for physics-constrained deep learning-based predictive maintenance for wind turbine gearbox bearings with partial system knowledge. The approach is based on temperature nowcasting constrained by physics, where unknown system coefficients are treated as learnable neural network parameters. Results show improved generalization performance to unseen environments compared to a baseline neural network, which is especially important in low data scenarios often encountered in real-world applications. | [
"['Johannes Exenberger' 'Matteo Di Salvo' 'Thomas Hirsch' 'Franz Wotawa'\n 'Gerald Schweiger']"
]
|
null | null | 2404.04140 | null | null | http://arxiv.org/pdf/2404.04140v1 | 2024-04-05T14:39:13Z | 2024-04-05T14:39:13Z | Improving Detection in Aerial Images by Capturing Inter-Object
Relationships | In many image domains, the spatial distribution of objects in a scene exhibits meaningful patterns governed by their semantic relationships. In most modern detection pipelines, however, the detection proposals are processed independently, overlooking the underlying relationships between objects. In this work, we introduce a transformer-based approach to capture these inter-object relationships to refine classification and regression outcomes for detected objects. Building on two-stage detectors, we tokenize the region of interest (RoI) proposals to be processed by a transformer encoder. Specific spatial and geometric relations are incorporated into the attention weights and adaptively modulated and regularized. Experimental results demonstrate that the proposed method achieves consistent performance improvement on three benchmarks including DOTA-v1.0, DOTA-v1.5, and HRSC 2016, especially ranking first on both DOTA-v1.5 and HRSC 2016. Specifically, our new method has an increase of 1.59 mAP on DOTA-v1.0, 4.88 mAP on DOTA-v1.5, and 2.1 mAP on HRSC 2016, respectively, compared to the baselines. | [
"['Botao Ren' 'Botian Xu' 'Yifan Pu' 'Jingyi Wang' 'Zhidong Deng']"
]
|
null | null | 2404.04169 | null | null | http://arxiv.org/pdf/2404.04169v1 | 2024-04-05T15:22:02Z | 2024-04-05T15:22:02Z | Do Sentence Transformers Learn Quasi-Geospatial Concepts from General
Text? | Sentence transformers are language models designed to perform semantic search. This study investigates the capacity of sentence transformers, fine-tuned on general question-answering datasets for asymmetric semantic search, to associate descriptions of human-generated routes across Great Britain with queries often used to describe hiking experiences. We find that sentence transformers have some zero-shot capabilities to understand quasi-geospatial concepts, such as route types and difficulty, suggesting their potential utility for routing recommendation systems. | [
"['Ilya Ilyankou' 'Aldo Lipani' 'Stefano Cavazzi' 'Xiaowei Gao'\n 'James Haworth']"
]
|
null | null | 2404.04173 | null | null | http://arxiv.org/pdf/2404.04173v1 | 2024-04-05T15:32:49Z | 2024-04-05T15:32:49Z | H3DFact: Heterogeneous 3D Integrated CIM for Factorization with
Holographic Perceptual Representations | Disentangling attributes of various sensory signals is central to human-like perception and reasoning and a critical task for higher-order cognitive and neuro-symbolic AI systems. An elegant approach to represent this intricate factorization is via high-dimensional holographic vectors drawing on brain-inspired vector symbolic architectures. However, holographic factorization involves iterative computation with high-dimensional matrix-vector multiplications and suffers from non-convergence problems. In this paper, we present H3DFact, a heterogeneous 3D integrated in-memory compute engine capable of efficiently factorizing high-dimensional holographic representations. H3DFact exploits the computation-in-superposition capability of holographic vectors and the intrinsic stochasticity associated with memristive-based 3D compute-in-memory. Evaluated on large-scale factorization and perceptual problems, H3DFact demonstrates superior capability in factorization accuracy and operational capacity by up to five orders of magnitude, with 5.5x compute density, 1.2x energy efficiency improvements, and 5.9x less silicon footprint compared to iso-capacity 2D designs. | [
"['Zishen Wan' 'Che-Kai Liu' 'Mohamed Ibrahim' 'Hanchen Yang'\n 'Samuel Spetalnick' 'Tushar Krishna' 'Arijit Raychowdhury']"
]
|
null | null | 2404.04188 | null | null | http://arxiv.org/abs/2404.04188v1 | 2024-04-05T16:01:21Z | 2024-04-05T16:01:21Z | Reliable Feature Selection for Adversarially Robust Cyber-Attack
Detection | The growing cybersecurity threats make it essential to use high-quality data to train Machine Learning (ML) models for network traffic analysis, without noisy or missing data. By selecting the most relevant features for cyber-attack detection, it is possible to improve both the robustness and computational efficiency of the models used in a cybersecurity system. This work presents a feature selection and consensus process that combines multiple methods and applies them to several network datasets. Two different feature sets were selected and were used to train multiple ML models with regular and adversarial training. Finally, an adversarial evasion robustness benchmark was performed to analyze the reliability of the different feature sets and their impact on the susceptibility of the models to adversarial examples. By using an improved dataset with more data diversity, selecting the best time-related features and a more specific feature set, and performing adversarial training, the ML models were able to achieve a better adversarially robust generalization. The robustness of the models was significantly improved without their generalization to regular traffic flows being affected, without increases of false alarms, and without requiring too many computational resources, which enables a reliable detection of suspicious activity and perturbed traffic flows in enterprise computer networks. | [
"['João Vitorino' 'Miguel Silva' 'Eva Maia' 'Isabel Praça']"
]
|
null | null | 2404.04199 | null | null | http://arxiv.org/pdf/2404.04199v1 | 2024-04-05T16:13:35Z | 2024-04-05T16:13:35Z | Exploring Probabilistic Models for Semi-supervised Learning | This thesis studies advanced probabilistic models, including both their theoretical foundations and practical applications, for different semi-supervised learning (SSL) tasks. The proposed probabilistic methods are able to improve the safety of AI systems in real applications by providing reliable uncertainty estimates quickly, and at the same time, achieve competitive performance compared to their deterministic counterparts. The experimental results indicate that the methods proposed in the thesis have great value in safety-critical areas, such as the autonomous driving or medical imaging analysis domain, and pave the way for the future discovery of highly effective and efficient probabilistic approaches in the SSL sector. | [
"['Jianfeng Wang']"
]
|
null | null | 2404.04205 | null | null | http://arxiv.org/pdf/2404.04205v1 | 2024-04-05T16:30:45Z | 2024-04-05T16:30:45Z | Enhancing IoT Intelligence: A Transformer-based Reinforcement Learning
Methodology | The proliferation of the Internet of Things (IoT) has led to an explosion of data generated by interconnected devices, presenting both opportunities and challenges for intelligent decision-making in complex environments. Traditional Reinforcement Learning (RL) approaches often struggle to fully harness this data due to their limited ability to process and interpret the intricate patterns and dependencies inherent in IoT applications. This paper introduces a novel framework that integrates transformer architectures with Proximal Policy Optimization (PPO) to address these challenges. By leveraging the self-attention mechanism of transformers, our approach enhances RL agents' capacity for understanding and acting within dynamic IoT environments, leading to improved decision-making processes. We demonstrate the effectiveness of our method across various IoT scenarios, from smart home automation to industrial control systems, showing marked improvements in decision-making efficiency and adaptability. Our contributions include a detailed exploration of the transformer's role in processing heterogeneous IoT data, a comprehensive evaluation of the framework's performance in diverse environments, and a benchmark against traditional RL methods. The results indicate significant advancements in enabling RL agents to navigate the complexities of IoT ecosystems, highlighting the potential of our approach to revolutionize intelligent automation and decision-making in the IoT landscape. | [
"['Gaith Rjoub' 'Saidul Islam' 'Jamal Bentahar' 'Mohammed Amin Almaiah'\n 'Rana Alrawashdeh']"
]
|
null | null | 2404.04219 | null | null | http://arxiv.org/pdf/2404.04219v1 | 2024-04-05T17:05:45Z | 2024-04-05T17:05:45Z | Continual Policy Distillation of Reinforcement Learning-based
Controllers for Soft Robotic In-Hand Manipulation | Dexterous manipulation, often facilitated by multi-fingered robotic hands, holds strong promise for real-world applications. Soft robotic hands, due to their compliant nature, offer flexibility and adaptability during object grasping and manipulation. Yet, benefits come with challenges, particularly in the control development for finger coordination. Reinforcement Learning (RL) can be employed to train object-specific in-hand manipulation policies, but this limits adaptability and generalizability. We introduce a Continual Policy Distillation (CPD) framework to acquire a versatile controller for in-hand manipulation, to rotate objects of different shapes and sizes within a four-fingered soft gripper. The framework leverages Policy Distillation (PD) to transfer knowledge from expert policies to a continually evolving student policy network. Exemplar-based rehearsal methods are then integrated to mitigate catastrophic forgetting and enhance generalization. The performance of the CPD framework over various replay strategies demonstrates its effectiveness in consolidating knowledge from multiple experts and achieving versatile and adaptive behaviours for in-hand manipulation tasks. | [
"['Lanpei Li' 'Enrico Donato' 'Vincenzo Lomonaco' 'Egidio Falotico']"
]
|
null | null | 2404.04220 | null | null | http://arxiv.org/pdf/2404.04220v1 | 2024-04-05T17:06:03Z | 2024-04-05T17:06:03Z | Multi-modal perception for soft robotic interactions using generative
models | Perception is essential for the active interaction of physical agents with the external environment. The integration of multiple sensory modalities, such as touch and vision, enhances this perceptual process, creating a more comprehensive and robust understanding of the world. Such fusion is particularly useful for highly deformable bodies such as soft robots. Developing a compact, yet comprehensive state representation from multi-sensory inputs can pave the way for the development of complex control strategies. This paper introduces a perception model that harmonizes data from diverse modalities to build a holistic state representation and assimilate essential information. The model relies on the causality between sensory input and robotic actions, employing a generative model to efficiently compress fused information and predict the next observation. We present, for the first time, a study on how touch can be predicted from vision and proprioception on soft robots, on the importance of cross-modal generation, and on why this is essential for soft robotic interactions in unstructured environments. | [
"['Enrico Donato' 'Egidio Falotico' 'Thomas George Thuruthel']"
]
|
null | null | 2404.04224 | null | null | http://arxiv.org/pdf/2404.04224v1 | 2024-04-05T17:15:48Z | 2024-04-05T17:15:48Z | Active Causal Learning for Decoding Chemical Complexities with Targeted
Interventions | Predicting and enhancing inherent properties based on molecular structures is paramount to design tasks in medicine, materials science, and environmental management. Most of the current machine learning and deep learning approaches have become standard for predictions, but they face challenges when applied across different datasets due to reliance on correlations between molecular representation and target properties. These approaches typically depend on large datasets to capture the diversity within the chemical space, facilitating a more accurate approximation, interpolation, or extrapolation of the chemical behavior of molecules. In our research, we introduce an active learning approach that discerns underlying cause-effect relationships through strategic sampling with the use of a graph loss function. This method identifies the smallest subset of the dataset capable of encoding the most information representative of a much larger chemical space. The identified causal relations are then leveraged to conduct systematic interventions, optimizing the design task within a chemical space that the models have not encountered previously. While our implementation focused on the QM9 quantum-chemical dataset for a specific design task (finding molecules with a large dipole moment), our active causal learning approach, driven by intelligent sampling and interventions, holds potential for broader applications in molecular and materials design and discovery. | [
"['Zachary R. Fox' 'Ayana Ghosh']"
]
|
null | null | 2404.04225 | null | null | http://arxiv.org/pdf/2404.04225v1 | 2024-04-05T17:16:10Z | 2024-04-05T17:16:10Z | Twins in rotational spectroscopy: Does a rotational spectrum uniquely
identify a molecule? | Rotational spectroscopy is the most accurate method for determining structures of molecules in the gas phase. It is often assumed that a rotational spectrum is a unique "fingerprint" of a molecule. The availability of large molecular databases and the development of artificial intelligence methods for spectroscopy makes the testing of this assumption timely. In this paper, we pose the determination of molecular structures from rotational spectra as an inverse problem. Within this framework, we adopt a funnel-based approach to search for molecular twins, which are two or more molecules, which have similar rotational spectra but distinctly different molecular structures. We demonstrate that there are twins within standard levels of computational accuracy by generating rotational constants for many molecules from several large molecular databases, indicating the inverse problem is ill-posed. However, some twins can be distinguished by increasing the accuracy of the theoretical methods or by performing additional experiments. | [
"['Marcus Schwarting' 'Nathan A. Seifert' 'Michael J. Davis' 'Ben Blaiszik'\n 'Ian Foster' 'Kirill Prozument']"
]
|
null | null | 2404.04234 | null | null | http://arxiv.org/pdf/2404.04234v3 | 2024-06-07T22:01:05Z | 2024-04-05T17:29:47Z | player2vec: A Language Modeling Approach to Understand Player Behavior
in Games | Methods for learning latent user representations from historical behavior logs have gained traction for recommendation tasks in e-commerce, content streaming, and other settings. However, this area still remains relatively underexplored in video and mobile gaming contexts. In this work, we present a novel method for overcoming this limitation by extending a long-range Transformer model from the natural language processing domain to player behavior data. We discuss specifics of behavior tracking in games and propose preprocessing and tokenization approaches by viewing in-game events in an analogous way to words in sentences, thus enabling learning player representations in a self-supervised manner in the absence of ground-truth annotations. We experimentally demonstrate the efficacy of the proposed approach in fitting the distribution of behavior events by evaluating intrinsic language modeling metrics. Furthermore, we qualitatively analyze the emerging structure of the learned embedding space and show its value for generating insights into behavior patterns to inform downstream applications. | [
"['Tianze Wang' 'Maryam Honari-Jahromi' 'Styliani Katsarou' 'Olga Mikheeva'\n 'Theodoros Panagiotakopoulos' 'Sahar Asadi' 'Oleg Smirnov']"
]
|
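A toy sketch of the tokenization idea described in the player2vec row above: in-game events are flattened into discrete "words" so a language model can be trained over event sequences. The event names and fields here are invented for illustration and are not the paper's schema.

```python
events = [
    {"type": "level_start", "level": 3},
    {"type": "booster_used", "booster": "hammer"},
    {"type": "level_complete", "level": 3, "stars": 2},
]

def tokenize(event):
    """Flatten one event dict into a single discrete token ("word")."""
    extras = [f"{k}={v}" for k, v in sorted(event.items()) if k != "type"]
    return event["type"] + (":" + ",".join(extras) if extras else "")

print([tokenize(e) for e in events])
# ['level_start:level=3', 'booster_used:booster=hammer',
#  'level_complete:level=3,stars=2']
```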
null | null | 2404.04240 | null | null | http://arxiv.org/pdf/2404.04240v2 | 2024-05-31T17:43:54Z | 2024-04-05T17:41:52Z | Dynamic Conditional Optimal Transport through Simulation-Free Flows | We study the geometry of conditional optimal transport (COT) and prove a dynamical formulation which generalizes the Benamou-Brenier Theorem. Equipped with these tools, we propose a simulation-free flow-based method for conditional generative modeling. Our method couples an arbitrary source distribution to a specified target distribution through a triangular COT plan, and a conditional generative model is obtained by approximating the geodesic path of measures induced by this COT plan. Our theory and methods are applicable in infinite-dimensional settings, making them well suited for a wide class of Bayesian inverse problems. Empirically, we demonstrate that our method is competitive on several challenging conditional generation tasks, including an infinite-dimensional inverse problem. | [
"['Gavin Kerrigan' 'Giosue Migliorini' 'Padhraic Smyth']"
]
|
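For reference, the classical Benamou-Brenier theorem that the conditional result above generalizes expresses the squared 2-Wasserstein distance as a kinetic-energy minimization over paths of measures (standard background material, not a result of the paper):

```latex
% Classical Benamou-Brenier dynamical formulation of optimal transport:
W_2^2(\mu, \nu) \;=\; \min_{(\rho_t,\, v_t)} \int_0^1 \!\int \lVert v_t(x) \rVert^2 \, d\rho_t(x)\, dt
\quad \text{s.t.} \quad \partial_t \rho_t + \nabla \cdot (\rho_t v_t) = 0,
\qquad \rho_0 = \mu, \quad \rho_1 = \nu .
```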
null | null | 2404.04242 | null | null | http://arxiv.org/pdf/2404.04242v1 | 2024-04-05T17:45:07Z | 2024-04-05T17:45:07Z | Physical Property Understanding from Language-Embedded Feature Fields | Can computers perceive the physical properties of objects solely through vision? Research in cognitive science and vision science has shown that humans excel at identifying materials and estimating their physical properties based purely on visual appearance. In this paper, we present a novel approach for dense prediction of the physical properties of objects using a collection of images. Inspired by how humans reason about physics through vision, we leverage large language models to propose candidate materials for each object. We then construct a language-embedded point cloud and estimate the physical properties of each 3D point using a zero-shot kernel regression approach. Our method is accurate, annotation-free, and applicable to any object in the open world. Experiments demonstrate the effectiveness of the proposed approach in various physical property reasoning tasks, such as estimating the mass of common objects, as well as other properties like friction and hardness. | [
"['Albert J. Zhai' 'Yuan Shen' 'Emily Y. Chen' 'Gloria X. Wang'\n 'Xinlei Wang' 'Sheng Wang' 'Kaiyu Guan' 'Shenlong Wang']"
]
|
null | null | 2404.04245 | null | null | http://arxiv.org/pdf/2404.04245v1 | 2024-04-05T17:51:58Z | 2024-04-05T17:51:58Z | Evaluating Adversarial Robustness: A Comparison Of FGSM, Carlini-Wagner
Attacks, And The Role of Distillation as Defense Mechanism | This technical report delves into an in-depth exploration of adversarial attacks specifically targeted at Deep Neural Networks (DNNs) utilized for image classification. The study also investigates defense mechanisms aimed at bolstering the robustness of machine learning models. The research focuses on comprehending the ramifications of two prominent attack methodologies: the Fast Gradient Sign Method (FGSM) and the Carlini-Wagner (CW) approach. These attacks are examined concerning three pre-trained image classifiers: Resnext50_32x4d, DenseNet-201, and VGG-19, utilizing the Tiny-ImageNet dataset. Furthermore, the study proposes the robustness of defensive distillation as a defense mechanism to counter FGSM and CW attacks. This defense mechanism is evaluated using the CIFAR-10 dataset, where CNN models, specifically resnet101 and Resnext50_32x4d, serve as the teacher and student models, respectively. The proposed defensive distillation model exhibits effectiveness in thwarting attacks such as FGSM. However, it is noted to remain susceptible to more sophisticated techniques like the CW attack. The document presents a meticulous validation of the proposed scheme. It provides detailed and comprehensive results, elucidating the efficacy and limitations of the defense mechanisms employed. Through rigorous experimentation and analysis, the study offers insights into the dynamics of adversarial attacks on DNNs, as well as the effectiveness of defensive strategies in mitigating their impact. | [
"['Trilokesh Ranjan Sarkar' 'Nilanjan Das' 'Pralay Sankar Maitra'\n 'Bijoy Some' 'Ritwik Saha' 'Orijita Adhikary' 'Bishal Bose' 'Jaydip Sen']"
]
|
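A minimal PyTorch sketch of the FGSM attack studied above; the model, data, and epsilon value are placeholders, and this is generic FGSM rather than the authors' exact experimental code:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Return adversarial examples x' = x + epsilon * sign(grad_x loss)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # One signed-gradient step, then clamp back to the valid pixel range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```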
null | null | 2404.04253 | null | null | http://arxiv.org/pdf/2404.04253v1 | 2024-04-05T17:58:37Z | 2024-04-05T17:58:37Z | Growing Q-Networks: Solving Continuous Control Tasks with Adaptive
Control Resolution | Recent reinforcement learning approaches have shown surprisingly strong capabilities of bang-bang policies for solving continuous control benchmarks. The underlying coarse action space discretizations often yield favourable exploration characteristics while final performance does not visibly suffer in the absence of action penalization in line with optimal control theory. In robotics applications, smooth control signals are commonly preferred to reduce system wear and improve energy efficiency, but action costs can be detrimental to exploration during early training. In this work, we aim to bridge this performance gap by growing discrete action spaces from coarse to fine control resolution, taking advantage of recent results in decoupled Q-learning to scale our approach to high-dimensional action spaces up to dim(A) = 38. Our work indicates that an adaptive control resolution in combination with value decomposition yields simple critic-only algorithms that achieve surprisingly strong performance on continuous control tasks. | [
"['Tim Seyde' 'Peter Werner' 'Wilko Schwarting' 'Markus Wulfmeier'\n 'Daniela Rus']"
]
|
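An illustrative sketch of the coarse-to-fine action discretization idea above (the schedule and grid sizes are invented assumptions, not the paper's configuration):

```python
import numpy as np

def action_grid(num_levels, low=-1.0, high=1.0):
    """Discrete action levels for one control dimension."""
    return np.linspace(low, high, num_levels)

# Coarse-to-fine schedule: start bang-bang (2 levels), refine over training.
schedule = {0: 2, 100_000: 3, 300_000: 5, 600_000: 9}

def levels_at(step):
    return max(v for s, v in schedule.items() if step >= s)

for step in (0, 150_000, 700_000):
    print(step, action_grid(levels_at(step)))
```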
null | null | 2404.04254 | null | null | http://arxiv.org/pdf/2404.04254v1 | 2024-04-05T17:58:52Z | 2024-04-05T17:58:52Z | Watermark-based Detection and Attribution of AI-Generated Content | Several companies--such as Google, Microsoft, and OpenAI--have deployed techniques to watermark AI-generated content to enable proactive detection. However, existing literature mainly focuses on user-agnostic detection. Attribution aims to further trace back the user of a generative-AI service who generated a given content detected as AI-generated. Despite its growing importance, attribution is largely unexplored. In this work, we aim to bridge this gap by providing the first systematic study on watermark-based, user-aware detection and attribution of AI-generated content. Specifically, we theoretically study the detection and attribution performance via rigorous probabilistic analysis. Moreover, we develop an efficient algorithm to select watermarks for the users to enhance attribution performance. Both our theoretical and empirical results show that watermark-based detection and attribution inherit the accuracy and (non-)robustness properties of the watermarking method. | [
"['Zhengyuan Jiang' 'Moyang Guo' 'Yuepeng Hu' 'Neil Zhenqiang Gong']"
]
|
null | null | 2404.04265 | null | null | http://arxiv.org/pdf/2404.04265v1 | 2024-03-18T16:27:33Z | 2024-03-18T16:27:33Z | Accelerating Matrix Factorization by Dynamic Pruning for Fast
Recommendation | Matrix factorization (MF) is a widely used collaborative filtering (CF) algorithm for recommendation systems (RSs), due to its high prediction accuracy, great flexibility and high efficiency in big data processing. However, with the dramatically increased number of users/items in current RSs, the computational complexity for training an MF model largely increases. Many existing works have accelerated MF, by either putting in additional computational resources or utilizing parallel systems, introducing a large cost. In this paper, we propose algorithmic methods to accelerate MF, without inducing any additional computational resources. Specifically, we observe fine-grained structured sparsity in the decomposed feature matrices when considering a certain threshold. The fine-grained structured sparsity causes a large amount of unnecessary operations during both matrix multiplication and latent factor update, increasing the computational time of the MF training process. Based on the observation, we first propose to rearrange the feature matrices based on joint sparsity, which potentially makes a latent vector with a smaller index more dense than that with a larger index. The feature matrix rearrangement is designed to limit the error caused by the subsequent pruning process. We then propose to prune the insignificant latent factors by an early stopping process during both matrix multiplication and latent factor update. The pruning process is dynamically performed according to the sparsity of the latent factors for different users/items, to accelerate the process. The experiments show that our method can achieve 1.2-1.65x speedups, with up to 20.08% error increase, compared with the conventional MF training process. We also show that the proposed methods remain applicable under different hyperparameters including optimizer, optimization strategy and initialization method. | [
"['Yining Wu' 'Shengyu Duan' 'Gaole Sai' 'Chenhong Cao' 'Guobing Zou']"
]
|
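A toy sketch of the dynamic pruning idea above: during SGD on matrix factorization, latent dimensions whose factors have fallen below a threshold are skipped. This simplifies the paper's method (which also rearranges the feature matrices by joint sparsity first); all values here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k = 50, 40, 8
R = rng.random((n_users, n_items))           # toy rating matrix
P = 0.1 * rng.standard_normal((n_users, k))  # user latent factors
Q = 0.1 * rng.standard_normal((n_items, k))  # item latent factors
lr, reg, tau = 0.01, 0.02, 1e-3              # tau is the pruning threshold

for u in range(n_users):
    for i in range(n_items):
        err = R[u, i] - P[u] @ Q[i]
        # Skip ("prune") latent dimensions where both factors are near zero.
        active = (np.abs(P[u]) > tau) | (np.abs(Q[i]) > tau)
        p_old = P[u].copy()
        P[u, active] += lr * (err * Q[i, active] - reg * P[u, active])
        Q[i, active] += lr * (err * p_old[active] - reg * Q[i, active])
```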
null | null | 2404.04269 | null | null | http://arxiv.org/pdf/2404.04269v1 | 2024-03-19T23:27:15Z | 2024-03-19T23:27:15Z | Algorithmic Collective Action in Recommender Systems: Promoting Songs by
Reordering Playlists | We investigate algorithmic collective action in transformer-based recommender systems. Our use case is a collective of fans aiming to promote the visibility of an artist by strategically placing one of their songs in the existing playlists they control. The success of the collective is measured by the increase in test-time recommendations of the targeted song. We introduce two easily implementable strategies towards this goal and test their efficacy on a publicly available recommender system model released by a major music streaming platform. Our findings reveal that even small collectives (controlling less than 0.01% of the training data) can achieve up to 25x amplification of recommendations by strategically choosing the position at which to insert the song. We then focus on investigating the externalities of the strategy. We find that the performance loss for the platform is negligible, and the recommendations of other songs are largely preserved, minimally impairing the user experience of participants. Moreover, the costs are evenly distributed among other artists. Taken together, our findings demonstrate how collective action strategies can be effective while not necessarily being adversarial, raising new questions around incentives, social dynamics, and equilibria in recommender systems. | [
"['Joachim Baumann' 'Celestine Mendler-Dünner']"
]
|
null | null | 2404.04270 | null | null | http://arxiv.org/pdf/2404.04270v1 | 2024-03-22T00:29:06Z | 2024-03-22T00:29:06Z | Accelerating Recommender Model Training by Dynamically Skipping Stale
Embeddings | Training recommendation models poses significant challenges regarding resource utilization and performance. Prior research has proposed an approach that categorizes embeddings into popular and non-popular classes to reduce the training time for recommendation models. We observe that, even among the popular embeddings, certain embeddings undergo rapid training and exhibit minimal subsequent variation, resulting in saturation. Consequently, updates to these embeddings lack any contribution to model quality. This paper presents Slipstream, a software framework that identifies stale embeddings on the fly and skips their updates to enhance performance. This capability enables Slipstream to achieve substantial speedup, optimize CPU-GPU bandwidth usage, and eliminate unnecessary memory access. Slipstream showcases training time reductions of 2x, 2.4x, 1.2x, and 1.175x across real-world datasets and configurations, compared to Baseline XDL, Intel-optimized DLRM, FAE, and Hotline, respectively. | [
"['Yassaman Ebrahimzadeh Maboud' 'Muhammad Adnan' 'Divya Mahajan'\n 'Prashant J. Nair']"
]
|
null | null | 2404.04282 | null | null | http://arxiv.org/pdf/2404.04282v1 | 2024-04-03T07:27:59Z | 2024-04-03T07:27:59Z | Analyzing Economic Convergence Across the Americas: A Survival Analysis
Approach to GDP per Capita Trajectories | By integrating survival analysis, machine learning algorithms, and economic interpretation, this research examines the temporal dynamics associated with attaining a 5 percent rise in purchasing power parity-adjusted GDP per capita over a period of 120 months (2013-2022). A comparative investigation reveals that DeepSurv is proficient at capturing non-linear interactions, although standard models exhibit comparable performance under certain circumstances. The weight matrix evaluates the economic ramifications of vulnerabilities, risks, and capacities. In order to meet the GDPpc objective, the findings emphasize the need for a balanced approach to risk-taking, strategic vulnerability reduction, and investment in governmental capacities and social cohesiveness. Policy guidelines promote individualized approaches that take into account the complex dynamics at play while making decisions. | [
"['Diego Vallarino']"
]
|
null | null | 2404.04283 | null | null | http://arxiv.org/pdf/2404.04283v1 | 2024-04-03T17:48:31Z | 2024-04-03T17:48:31Z | Translation-based Video-to-Video Synthesis | Translation-based Video Synthesis (TVS) has emerged as a vital research area in computer vision, aiming to facilitate the transformation of videos between distinct domains while preserving both temporal continuity and underlying content features. This technique has found wide-ranging applications, encompassing video super-resolution, colorization, segmentation, and more, by extending the capabilities of traditional image-to-image translation to the temporal domain. One of the principal challenges faced in TVS is the inherent risk of introducing flickering artifacts and inconsistencies between frames during the synthesis process. This is particularly challenging due to the necessity of ensuring smooth and coherent transitions between video frames. Efforts to tackle this challenge have prompted the development of diverse strategies and algorithms aimed at mitigating these unwanted consequences. This comprehensive review extensively examines the latest progress in the realm of TVS. It thoroughly investigates emerging methodologies, shedding light on the fundamental concepts and mechanisms utilized for proficient video synthesis. This survey also illuminates their inherent strengths, limitations, appropriate applications, and potential avenues for future development. | [
"['Pratim Saha' 'Chengcui Zhang']"
]
|
null | null | 2404.04286 | null | null | http://arxiv.org/pdf/2404.04286v1 | 2024-04-04T02:01:25Z | 2024-04-04T02:01:25Z | Language Model Evolution: An Iterated Learning Perspective | With the widespread adoption of Large Language Models (LLMs), the prevalence of iterative interactions among these models is anticipated to increase. Notably, recent advancements in multi-round self-improving methods allow LLMs to generate new examples for training subsequent models. At the same time, multi-agent LLM systems, involving automated interactions among agents, are also increasing in prominence. Thus, in both short and long terms, LLMs may actively engage in an evolutionary process. We draw parallels between the behavior of LLMs and the evolution of human culture, as the latter has been extensively studied by cognitive scientists for decades. Our approach involves leveraging Iterated Learning (IL), a Bayesian framework that elucidates how subtle biases are magnified during human cultural evolution, to explain some behaviors of LLMs. This paper outlines key characteristics of agents' behavior in the Bayesian-IL framework, including predictions that are supported by experimental verification with various LLMs. This theoretical framework could help to more effectively predict and guide the evolution of LLMs in desired directions. | [
"['Yi Ren' 'Shangmin Guo' 'Linlu Qiu' 'Bailin Wang' 'Danica J. Sutherland']"
]
|
null | null | 2404.04289 | null | null | http://arxiv.org/abs/2404.04289v1 | 2024-04-04T03:01:57Z | 2024-04-04T03:01:57Z | Designing for Human-Agent Alignment: Understanding what humans want from
their agents | Our ability to build autonomous agents that leverage Generative AI continues to increase by the day. As builders and users of such agents it is unclear what parameters we need to align on before the agents start performing tasks on our behalf. To discover these parameters, we ran a qualitative empirical research study about designing agents that can negotiate during a fictional yet relatable task of selling a camera online. We found that for an agent to perform the task successfully, humans/users and agents need to align over 6 dimensions: 1) Knowledge Schema Alignment 2) Autonomy and Agency Alignment 3) Operational Alignment and Training 4) Reputational Heuristics Alignment 5) Ethics Alignment and 6) Human Engagement Alignment. These empirical findings expand previous work related to process and specification alignment and the need for values and safety in Human-AI interactions. Subsequently we discuss three design directions for designers who are imagining a world filled with Human-Agent collaborations. | [
"['Nitesh Goyal' 'Minsuk Chang' 'Michael Terry']"
]
|
null | null | 2404.04291 | null | null | http://arxiv.org/pdf/2404.04291v1 | 2024-04-04T05:38:44Z | 2024-04-04T05:38:44Z | Investigating Regularization of Self-Play Language Models | This paper explores the effects of various forms of regularization in the context of language model alignment via self-play. While both reinforcement learning from human feedback (RLHF) and direct preference optimization (DPO) require collecting costly human-annotated pairwise preferences, the self-play fine-tuning (SPIN) approach replaces the rejected answers by data generated from the previous iterate. However, the SPIN method presents a performance instability issue in the learning phase, which can be mitigated by playing against a mixture of the two previous iterates. In the same vein, we propose in this work to address this issue from two perspectives: first, by incorporating an additional Kullback-Leibler (KL) regularization to stay in the proximity of the reference policy; second, by using the idea of fictitious play which smooths the opponent policy across all previous iterations. In particular, we show that the KL-based regularizer boils down to replacing the previous policy by its geometric mixture with the base policy inside of the SPIN loss function. We finally discuss empirical results on MT-Bench as well as on the Hugging Face Open LLM Leaderboard. | [
"['Reda Alami' 'Abdalgader Abubaker' 'Mastane Achab'\n 'Mohamed El Amine Seddik' 'Salem Lahlou']"
]
|
null | null | 2404.04295 | null | null | http://arxiv.org/pdf/2404.04295v1 | 2024-04-04T17:03:47Z | 2024-04-04T17:03:47Z | Transducers with Pronunciation-aware Embeddings for Automatic Speech
Recognition | This paper proposes Transducers with Pronunciation-aware Embeddings (PET). Unlike conventional Transducers where the decoder embeddings for different tokens are trained independently, the PET model's decoder embedding incorporates shared components for text tokens with the same or similar pronunciations. With experiments conducted in multiple datasets in Mandarin Chinese and Korean, we show that PET models consistently improve speech recognition accuracy compared to conventional Transducers. Our investigation also uncovers a phenomenon that we call error chain reactions. Instead of recognition errors being evenly spread throughout an utterance, they tend to group together, with subsequent errors often following earlier ones. Our analysis shows that PET models effectively mitigate this issue by substantially reducing the likelihood of the model generating additional errors following a prior one. Our implementation will be open-sourced with the NeMo toolkit. | [
"['Hainan Xu' 'Zhehuai Chen' 'Fei Jia' 'Boris Ginsburg']"
]
|
null | null | 2404.04298 | null | null | http://arxiv.org/pdf/2404.04298v1 | 2024-04-04T20:27:37Z | 2024-04-04T20:27:37Z | SELF-[IN]CORRECT: LLMs Struggle with Refining Self-Generated Responses | Can LLMs continually improve their previous outputs for better results? An affirmative answer would require LLMs to be better at discriminating among previously-generated alternatives, than generating initial responses. We explore the validity of this hypothesis in practice. We first introduce a unified framework that allows us to compare the generative and discriminative capability of any model on any task. Then, in our resulting experimental analysis of several LLMs, we do not observe the performance of those models on discrimination to be reliably better than generation. We hope these findings inform the growing literature on self-improvement AI systems. | [
"['Dongwei Jiang' 'Jingyu Zhang' 'Orion Weller' 'Nathaniel Weir'\n 'Benjamin Van Durme' 'Daniel Khashabi']"
]
|
null | null | 2404.04308 | null | null | http://arxiv.org/pdf/2404.04308v1 | 2024-04-05T07:31:24Z | 2024-04-05T07:31:24Z | Visual Knowledge in the Big Model Era: Retrospect and Prospect | Visual knowledge is a new form of knowledge representation that can encapsulate visual concepts and their relations in a succinct, comprehensive, and interpretable manner, with a deep root in cognitive psychology. As the knowledge about the visual world has been identified as an indispensable component of human cognition and intelligence, visual knowledge is poised to have a pivotal role in establishing machine intelligence. With the recent advance of Artificial Intelligence (AI) techniques, large AI models (or foundation models) have emerged as a potent tool capable of extracting versatile patterns from broad data as implicit knowledge, and abstracting them into an outrageous amount of numeric parameters. To pave the way for creating visual knowledge empowered AI machines in this coming wave, we present a timely review that investigates the origins and development of visual knowledge in the pre-big model era, and accentuates the opportunities and unique role of visual knowledge in the big model era. | [
"['Wenguan Wang' 'Yi Yang' 'Yunhe Pan']"
]
|
null | null | 2404.04310 | null | null | http://arxiv.org/pdf/2404.04310v1 | 2024-04-05T10:29:18Z | 2024-04-05T10:29:18Z | Suppressing Modulation Instability with Reinforcement Learning | Modulation instability is a phenomenon of spontaneous pattern formation in nonlinear media, oftentimes leading to an unpredictable behaviour and a degradation of a signal of interest. We propose an approach based on reinforcement learning to suppress the unstable modes by optimizing the parameters for the time modulation of the potential in the nonlinear system. We test our approach in 1D and 2D cases and propose a new class of physically-meaningful reward functions to guarantee tamed instability. | [
"['Nikolay Kalmykov' 'Rishat Zagidullin' 'Oleg Rogov' 'Sergey Rykovanov'\n 'Dmitry V. Dylov']"
]
|
null | null | 2404.04311 | null | null | http://arxiv.org/pdf/2404.04311v1 | 2024-04-05T11:03:36Z | 2024-04-05T11:03:36Z | A Real-time Anomaly Detection Using Convolutional Autoencoder with
Dynamic Threshold | The majority of modern consumer-level energy data is generated by real-time smart metering systems. These frequently contain anomalies, which prevent reliable estimates of the series' evolution. This work introduces a hybrid modeling approach combining statistics and a Convolutional Autoencoder with a dynamic threshold. The threshold is determined based on Mahalanobis distance and moving averages. It has been tested using real-life energy consumption data collected from smart metering systems. The solution includes a real-time, meter-level anomaly detection system that connects to an advanced monitoring system. This makes a substantial contribution by detecting unusual data movements and delivering an early warning. Early detection and subsequent troubleshooting can financially benefit organizations and consumers and prevent disasters from occurring. | [
"['Sarit Maitra' 'Sukanya Kundu' 'Aishwarya Shankar']"
]
|
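A simplified sketch of a dynamic anomaly threshold in the spirit of the paper above; for brevity it uses a moving mean and standard deviation of reconstruction errors rather than the full Mahalanobis-distance computation the authors describe, and the window and multiplier are assumptions:

```python
import numpy as np

def dynamic_threshold_flags(errors, window=48, k=3.0):
    """Flag points whose error exceeds moving mean + k * moving std."""
    errors = np.asarray(errors, dtype=float)
    flags = np.zeros(len(errors), dtype=bool)
    for t in range(window, len(errors)):
        hist = errors[t - window:t]
        flags[t] = errors[t] > hist.mean() + k * hist.std()
    return flags

# Example: reconstruction errors from an autoencoder, with one injected spike.
errs = np.abs(np.random.default_rng(0).standard_normal(200)) * 0.1
errs[150] = 2.0
print(np.flatnonzero(dynamic_threshold_flags(errs)))
```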
null | null | 2404.04312 | null | null | http://arxiv.org/pdf/2404.04312v1 | 2024-04-05T12:03:19Z | 2024-04-05T12:03:19Z | Half-Space Feature Learning in Neural Networks | There currently exist two extreme viewpoints for neural network feature learning -- (i) Neural networks simply implement a kernel method (a la NTK) and hence no features are learned (ii) Neural networks can represent (and hence learn) intricate hierarchical features suitable for the data. We argue in this paper neither interpretation is likely to be correct based on a novel viewpoint. Neural networks can be viewed as a mixture of experts, where each expert corresponds to a (number of layers length) path through a sequence of hidden units. We use this alternate interpretation to motivate a model, called the Deep Linearly Gated Network (DLGN), which sits midway between deep linear networks and ReLU networks. Unlike deep linear networks, the DLGN is capable of learning non-linear features (which are then linearly combined), and unlike ReLU networks these features are ultimately simple -- each feature is effectively an indicator function for a region compactly described as an intersection of (number of layers) half-spaces in the input space. This viewpoint allows for a comprehensive global visualization of features, unlike the local visualizations for neurons based on saliency/activation/gradient maps. Feature learning in DLGNs is shown to happen and the mechanism with which this happens is through learning half-spaces in the input space that contain smooth regions of the target function. Due to the structure of DLGNs, the neurons in later layers are fundamentally the same as those in earlier layers -- they all represent a half-space -- however, the dynamics of gradient descent impart a distinct clustering to the later layer neurons. We hypothesize that ReLU networks also have similar feature learning behaviour. | [
"['Mahesh Lorik Yadav' 'Harish Guruprasad Ramaswamy'\n 'Chandrashekar Lakshminarayanan']"
]
|
null | null | 2404.04314 | null | null | http://arxiv.org/pdf/2404.04314v1 | 2024-04-05T13:18:10Z | 2024-04-05T13:18:10Z | Faraday: Synthetic Smart Meter Generator for the smart grid | Access to smart meter data is essential to rapid and successful transitions to electrified grids, underpinned by flexibility delivered by low carbon technologies, such as electric vehicles (EV) and heat pumps, and powered by renewable energy. Yet little of this data is available for research and modelling purposes due to consumer privacy protections. Whilst many are calling for raw datasets to be unlocked through regulatory changes, we believe this approach will take too long. Synthetic data addresses these challenges directly by overcoming privacy issues. In this paper, we present Faraday, a Variational Auto-encoder (VAE)-based model trained on over 300 million smart meter data readings from an energy supplier in the UK, with information such as property type and low carbon technologies (LCTs) ownership. The model produces household-level synthetic load profiles conditioned on these labels, and we compare its outputs against actual substation readings to show how the model can be used for real-world applications by grid modellers interested in modelling energy grids of the future. | [
"['Sheng Chai' 'Gus Chadney']"
]
|
null | null | 2404.04316 | null | null | http://arxiv.org/pdf/2404.04316v2 | 2024-06-07T03:54:01Z | 2024-04-05T15:28:44Z | Parameter Efficient Quasi-Orthogonal Fine-Tuning via Givens Rotation | With the increasingly powerful performances and enormous scales of pretrained models, promoting parameter efficiency in fine-tuning has become a crucial need for effective and efficient adaptation to various downstream tasks. One representative line of fine-tuning methods is Orthogonal Fine-tuning (OFT), which rigorously preserves the angular distances within the parameter space to preserve the pretrained knowledge. Despite the empirical effectiveness, OFT still suffers low parameter efficiency at $\mathcal{O}(d^2)$ and limited capability of downstream adaptation. Inspired by Givens rotation, in this paper, we propose quasi-Givens Orthogonal Fine-Tuning (qGOFT) to address the problems. We first use $\mathcal{O}(d)$ Givens rotations to accomplish arbitrary orthogonal transformation in $SO(d)$ with provable equivalence, reducing parameter complexity from $\mathcal{O}(d^2)$ to $\mathcal{O}(d)$. Then we introduce flexible norm and relative angular adjustments under soft orthogonality regularization to enhance the adaptation capability of downstream semantic deviations. Extensive experiments on various tasks and pretrained models validate the effectiveness of our methods. | [
"['Xinyu Ma' 'Xu Chu' 'Zhibang Yang' 'Yang Lin' 'Xin Gao' 'Junfeng Zhao']"
]
|
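A minimal sketch of the core idea above: composing $\mathcal{O}(d)$ parameterized Givens rotations to obtain a trainable orthogonal transform. This is an assumption-laden illustration (adjacent-plane rotations, random initialization), not the qGOFT implementation:

```python
import torch

def givens(d, i, j, theta):
    """Rotation by theta in the (i, j) coordinate plane of R^d."""
    G = torch.eye(d)
    c, s = torch.cos(theta), torch.sin(theta)
    G[i, i], G[j, j] = c, c
    G[i, j], G[j, i] = -s, s
    return G

d = 4
thetas = torch.randn(d - 1, requires_grad=True)  # O(d) trainable angles

Q = torch.eye(d)
for m in range(d - 1):            # compose rotations in adjacent planes
    Q = givens(d, m, m + 1, thetas[m]) @ Q

# Q is orthogonal by construction, whatever the angles are.
print(torch.allclose(Q @ Q.T, torch.eye(d), atol=1e-5))
```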
null | null | 2404.04317 | null | null | http://arxiv.org/pdf/2404.04317v1 | 2024-04-05T17:47:50Z | 2024-04-05T17:47:50Z | DeepLINK-T: deep learning inference for time series data using knockoffs
and LSTM | High-dimensional longitudinal time series data is prevalent across various real-world applications. Many such applications can be modeled as regression problems with high-dimensional time series covariates. Deep learning has been a popular and powerful tool for fitting these regression models. Yet, the development of interpretable and reproducible deep-learning models is challenging and remains underexplored. This study introduces a novel method, Deep Learning Inference using Knockoffs for Time series data (DeepLINK-T), focusing on the selection of significant time series variables in regression while controlling the false discovery rate (FDR) at a predetermined level. DeepLINK-T combines deep learning with knockoff inference to control FDR in feature selection for time series models, accommodating a wide variety of feature distributions. It addresses dependencies across time and features by leveraging a time-varying latent factor structure in time series covariates. Three key ingredients for DeepLINK-T are 1) a Long Short-Term Memory (LSTM) autoencoder for generating time series knockoff variables, 2) an LSTM prediction network using both original and knockoff variables, and 3) the application of the knockoffs framework for variable selection with FDR control. Extensive simulation studies have been conducted to evaluate DeepLINK-T's performance, showing its capability to control FDR effectively while demonstrating superior feature selection power for high-dimensional longitudinal time series data compared to its non-time series counterpart. DeepLINK-T is further applied to three metagenomic data sets, validating its practical utility and effectiveness, and underscoring its potential in real-world applications. | [
"['Wenxuan Zuo' 'Zifan Zhu' 'Yuxuan Du' 'Yi-Chun Yeh' 'Jed A. Fuhrman'\n 'Jinchi Lv' 'Yingying Fan' 'Fengzhu Sun']"
]
|
null | null | 2404.04326 | null | null | http://arxiv.org/pdf/2404.04326v1 | 2024-04-05T18:00:07Z | 2024-04-05T18:00:07Z | Hypothesis Generation with Large Language Models | Effective generation of novel hypotheses is instrumental to scientific progress. So far, researchers have been the main powerhouse behind hypothesis generation by painstaking data analysis and thinking (also known as the Eureka moment). In this paper, we examine the potential of large language models (LLMs) to generate hypotheses. We focus on hypothesis generation based on data (i.e., labeled examples). To enable LLMs to handle arbitrarily long contexts, we generate initial hypotheses from a small number of examples and then update them iteratively to improve the quality of hypotheses. Inspired by multi-armed bandits, we design a reward function to inform the exploitation-exploration tradeoff in the update process. Our algorithm is able to generate hypotheses that enable much better predictive performance than few-shot prompting in classification tasks, improving accuracy by 31.7% on a synthetic dataset and by 13.9%, 3.3%, and 24.9% on three real-world datasets. We also outperform supervised learning by 12.8% and 11.2% on two challenging real-world datasets. Furthermore, we find that the generated hypotheses not only corroborate human-verified theories but also uncover new insights for the tasks. | [
"['Yangqiaoyu Zhou' 'Haokun Liu' 'Tejes Srivastava' 'Hongyuan Mei'\n 'Chenhao Tan']"
]
|
null | null | 2404.04351 | null | null | http://arxiv.org/pdf/2404.04351v1 | 2024-04-05T18:44:54Z | 2024-04-05T18:44:54Z | Assisting humans in complex comparisons: automated information
comparison at scale | Generative Large Language Models enable efficient analytics across knowledge domains, rivalling human experts in information comparisons. However, the applications of LLMs for information comparisons face scalability challenges due to the difficulties in maintaining information across large contexts and overcoming model token limitations. To address these challenges, we developed the novel Abstractive Summarization & Criteria-driven Comparison Endpoint (ASC$^2$End) system to automate information comparison at scale. Our system employs Semantic Text Similarity comparisons for generating evidence-supported analyses. We utilize proven data-handling strategies such as abstractive summarization and retrieval augmented generation to overcome token limitations and retain relevant information during model inference. Prompts were designed using zero-shot strategies to contextualize information for improved model reasoning. We evaluated abstractive summarization using ROUGE scoring and assessed the generated comparison quality using survey responses. Models evaluated on the ASC$^2$End system show desirable results providing insights on the expected performance of the system. ASC$^2$End is a novel system and tool that enables accurate, automated information comparison at scale across knowledge domains, overcoming limitations in context length and retrieval. | [
"['Truman Yuen' 'Graham A. Watt' 'Yuri Lawryshyn']"
]
|
null | null | 2404.04356 | null | null | http://arxiv.org/pdf/2404.04356v1 | 2024-04-05T18:56:00Z | 2024-04-05T18:56:00Z | Pixel-wise RL on Diffusion Models: Reinforcement Learning from Rich
Feedback | Latent diffusion models are the state-of-the-art for synthetic image generation. To align these models with human preferences, training the models using reinforcement learning on human feedback is crucial. Black et al. (2024) introduced denoising diffusion policy optimisation (DDPO), which accounts for the iterative denoising nature of the generation by modelling it as a Markov chain with a final reward. As the reward is a single value that determines the model's performance on the entire image, the model has to navigate a very sparse reward landscape and so requires a large sample count. In this work, we extend DDPO by presenting the Pixel-wise Policy Optimisation (PXPO) algorithm, which can take feedback for each pixel, providing a more nuanced reward to the model. | [
"['Mo Kordzanganeh' 'Danial Keshvary' 'Nariman Arian']"
]
|
null | null | 2404.04360 | null | null | http://arxiv.org/pdf/2404.04360v1 | 2024-04-05T19:14:14Z | 2024-04-05T19:14:14Z | Prompt Public Large Language Models to Synthesize Data for Private
On-device Applications | Pre-training on public data is an effective method to improve the performance for federated learning (FL) with differential privacy (DP). This paper investigates how large language models (LLMs) trained on public data can improve the quality of pre-training data for the on-device language models trained with DP and FL. We carefully design LLM prompts to filter and transform existing public data, and generate new data to resemble the real user data distribution. The model pre-trained on our synthetic dataset achieves relative improvement of 19.0% and 22.8% in next word prediction accuracy compared to the baseline model pre-trained on a standard public dataset, when evaluated over the real user data in Gboard (Google Keyboard, a production mobile keyboard application). Furthermore, our method achieves evaluation accuracy better than or comparable to the baseline during the DP FL fine-tuning over millions of mobile devices, and our final model outperforms the baseline in production A/B testing. Our experiments demonstrate the strengths of LLMs in synthesizing data close to the private distribution even without accessing the private data, and also suggest future research directions to further reduce the distribution gap. | [
"['Shanshan Wu' 'Zheng Xu' 'Yanxiang Zhang' 'Yuanbo Zhang' 'Daniel Ramage']"
]
|
null | null | 2404.04375 | null | null | http://arxiv.org/pdf/2404.04375v1 | 2024-04-05T19:36:26Z | 2024-04-05T19:36:26Z | Compositional Estimation of Lipschitz Constants for Deep Neural Networks | The Lipschitz constant plays a crucial role in certifying the robustness of neural networks to input perturbations and adversarial attacks, as well as the stability and safety of systems with neural network controllers. Therefore, estimation of tight bounds on the Lipschitz constant of neural networks is a well-studied topic. However, typical approaches involve solving a large matrix verification problem, the computational cost of which grows significantly for deeper networks. In this letter, we provide a compositional approach to estimate Lipschitz constants for deep feedforward neural networks by obtaining an exact decomposition of the large matrix verification problem into smaller sub-problems. We further obtain a closed-form solution that applies to most common neural network activation functions, which will enable rapid robustness and stability certificates for neural networks deployed in online control settings. Finally, we demonstrate through numerical experiments that our approach provides a steep reduction in computation time while yielding Lipschitz bounds that are very close to those achieved by state-of-the-art approaches. | [
"['Yuezhu Xu' 'S. Sivaranjani']"
]
|
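For intuition on the compositional theme above: the simplest compositional Lipschitz bound for a feedforward network is the product of layer spectral norms. The paper's decomposition yields much tighter bounds; this sketch only illustrates the per-layer composition under that simplifying assumption:

```python
import numpy as np

def naive_lipschitz_bound(weights):
    """Product of layer spectral norms: ||W_L||_2 * ... * ||W_1||_2."""
    bound = 1.0
    for W in weights:
        bound *= np.linalg.norm(W, ord=2)  # largest singular value
    return bound

rng = np.random.default_rng(1)
weights = [rng.standard_normal((64, 32)), rng.standard_normal((16, 64))]
print(naive_lipschitz_bound(weights))
```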
null | null | 2404.04388 | null | null | http://arxiv.org/pdf/2404.04388v2 | 2024-07-09T12:36:12Z | 2024-04-05T20:12:02Z | Mining Potentially Explanatory Patterns via Partial Solutions | Genetic Algorithms have established their capability for solving many complex optimization problems. Even as good solutions are produced, the user's understanding of a problem is not necessarily improved, which can lead to a lack of confidence in the results. To mitigate this issue, explainability aims to give insight to the user by presenting them with the knowledge obtained by the algorithm. In this paper we introduce Partial Solutions in order to improve the explainability of solutions to combinatorial optimization problems. Partial Solutions represent beneficial traits found by analyzing a population, and are presented to the user for explainability, but also provide an explicit model from which new solutions can be generated. We present an algorithm that assembles a collection of Partial Solutions chosen to strike a balance between high fitness, simplicity and atomicity. Experiments with standard benchmarks show that the proposed algorithm is able to find Partial Solutions which improve explainability at reasonable computational cost without affecting search performance. | [
"['GianCarlo Catalano' 'Alexander E. I. Brownlee' 'David Cairns'\n 'John McCall' 'Russell Ainslie']"
]
|
null | null | 2404.04393 | null | null | http://arxiv.org/pdf/2404.04393v1 | 2024-04-05T20:36:30Z | 2024-04-05T20:36:30Z | Counting Like Transformers: Compiling Temporal Counting Logic Into
Softmax Transformers | Deriving formal bounds on the expressivity of transformers, as well as studying transformers that are constructed to implement known algorithms, are both effective methods for better understanding the computational power of transformers. Towards both ends, we introduce the temporal counting logic $\textbf{K}_\text{t}$[#] alongside the RASP variant $\textbf{C-RASP}$. We show they are equivalent to each other, and that together they are the best-known lower bound on the formal expressivity of future-masked soft attention transformers with unbounded input size. We prove this by showing all $\textbf{K}_\text{t}$[#] formulas can be compiled into these transformers. As a case study, we demonstrate on paper how to use $\textbf{C-RASP}$ to construct simple transformer language models that, using greedy decoding, can only generate sentences that have given properties formally specified in $\textbf{K}_\text{t}$[#]. | [
"['Andy Yang' 'David Chiang']"
]
|
null | null | 2404.04397 | null | null | http://arxiv.org/pdf/2404.04397v1 | 2024-04-05T20:50:06Z | 2024-04-05T20:50:06Z | Generating Synthetic Ground Truth Distributions for Multi-step
Trajectory Prediction using Probabilistic Composite Bézier Curves | An appropriate data basis is one of the most important prerequisites for training and evaluating probabilistic trajectory prediction models based on neural networks. In this regard, a common shortcoming of current benchmark datasets is their limitation to sets of sample trajectories and a lack of actual ground truth distributions, which prevents the use of more expressive error metrics, such as the Wasserstein distance for model evaluation. Towards this end, this paper proposes a novel approach to synthetic dataset generation based on composite probabilistic Bézier curves, which is capable of generating ground truth data in terms of probability distributions over full trajectories. This allows the calculation of arbitrary posterior distributions. The paper showcases an exemplary trajectory prediction model evaluation using generated ground truth distribution data. | [
"['Ronny Hug' 'Stefan Becker' 'Wolfgang Hübner' 'Michael Arens']"
]
|
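A small sketch of a probabilistic Bézier curve in the spirit of the paper above: Gaussian control points induce a Gaussian over curve points at each parameter value (assuming independent control points; the means and covariances below are illustrative assumptions):

```python
import numpy as np
from math import comb

def bernstein(n, i, t):
    """Bernstein basis polynomial b_{i,n}(t)."""
    return comb(n, i) * t**i * (1 - t) ** (n - i)

mus = np.array([[0.0, 0.0], [1.0, 2.0], [3.0, 2.0], [4.0, 0.0]])  # CP means
sigmas = [0.05 * np.eye(2) for _ in mus]                          # CP covariances
n = len(mus) - 1

def curve_distribution(t):
    """Gaussian (mean, cov) of the curve point at parameter t."""
    w = np.array([bernstein(n, i, t) for i in range(n + 1)])
    return w @ mus, sum(wi**2 * Si for wi, Si in zip(w, sigmas))

print(curve_distribution(0.5))
```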
null | null | 2404.04399 | null | null | http://arxiv.org/pdf/2404.04399v1 | 2024-04-05T20:56:15Z | 2024-04-05T20:56:15Z | Longitudinal Targeted Minimum Loss-based Estimation with
Temporal-Difference Heterogeneous Transformer | We propose Deep Longitudinal Targeted Minimum Loss-based Estimation (Deep LTMLE), a novel approach to estimate the counterfactual mean of outcome under dynamic treatment policies in longitudinal problem settings. Our approach utilizes a transformer architecture with heterogeneous type embedding trained using temporal-difference learning. After obtaining an initial estimate using the transformer, following the targeted minimum loss-based likelihood estimation (TMLE) framework, we statistically corrected for the bias commonly associated with machine learning algorithms. Furthermore, our method also facilitates statistical inference by enabling the provision of 95% confidence intervals grounded in asymptotic statistical theory. Simulation results demonstrate our method's superior performance over existing approaches, particularly in complex, long time-horizon scenarios. It remains effective in small-sample, short-duration contexts, matching the performance of asymptotically efficient estimators. To demonstrate our method in practice, we applied our method to estimate counterfactual mean outcomes for standard versus intensive blood pressure management strategies in a real-world cardiovascular epidemiology cohort study. | [
"['Toru Shirakawa' 'Yi Li' 'Yulun Wu' 'Sky Qiu' 'Yuxuan Li' 'Mingduo Zhao'\n 'Hiroyasu Iso' 'Mark van der Laan']"
]
|
null | null | 2404.04405 | null | null | http://arxiv.org/pdf/2404.04405v1 | 2024-04-05T21:03:11Z | 2024-04-05T21:03:11Z | Dynamic Switch Layers For Unsupervised Learning | On-device machine learning (ODML) enables intelligent applications on resource-constrained devices. However, power consumption poses a major challenge, forcing a trade-off between model accuracy and power efficiency that often limits model complexity. The previously established Gated Compression (GC) layers offer a solution, enabling power efficiency without sacrificing model performance by selectively gating samples that lack signals of interest. However, their reliance on ground truth labels limits GC layers to supervised tasks. This work introduces the Dynamic Switch Layer (DSL), extending the benefits of GC layers to unsupervised learning scenarios, and maintaining power efficiency without the need for labeled data. The DSL builds upon the GC architecture, leveraging a dynamic pathway selection, and adapting model complexity in response to the innate structure of the data. We integrate the DSL into the SoundStream architecture and demonstrate that by routing up to 80% of samples through a lightweight pass we achieve a 12.3x reduction in the amount of computation performed and a 20.9x reduction in model size. This reduces the on-device inference latency by up to 26.5% and improves power efficiency by up to 21.4% without impacting model performance. | [
"['Haiguang Li' 'Usama Pervaiz' 'Michał Matuszak' 'Robert Kamara'\n 'Gilles Roux' 'Trausti Thormundsson' 'Joseph Antognini']"
]
|
null | null | 2404.04420 | null | null | http://arxiv.org/abs/2404.04420v1 | 2024-04-05T21:41:20Z | 2024-04-05T21:41:20Z | The NES Video-Music Database: A Dataset of Symbolic Video Game Music
Paired with Gameplay Videos | Neural models are one of the most popular approaches for music generation, yet there aren't standard large datasets tailored for learning music directly from game data. To address this research gap, we introduce a novel dataset named NES-VMDB, containing 98,940 gameplay videos from 389 NES games, each paired with its original soundtrack in symbolic format (MIDI). NES-VMDB is built upon the Nintendo Entertainment System Music Database (NES-MDB), encompassing 5,278 music pieces from 397 NES games. Our approach involves collecting long-play videos for 389 games of the original dataset, slicing them into 15-second-long clips, and extracting the audio from each clip. Subsequently, we apply an audio fingerprinting algorithm (similar to Shazam) to automatically identify the corresponding piece in the NES-MDB dataset. Additionally, we introduce a baseline method based on the Controllable Music Transformer to generate NES music conditioned on gameplay clips. We evaluated this approach with objective metrics, and the results showed that the conditional CMT improves musical structural quality when compared to its unconditional counterpart. Moreover, we used a neural classifier to predict the game genre of the generated pieces. Results showed that the CMT generator can learn correlations between gameplay videos and game genres, but further research has to be conducted to achieve human-level performance. | [
"['Igor Cardoso' 'Rubens O. Moraes' 'Lucas N. Ferreira']"
]
|
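The NES-VMDB entry above describes a slicing pipeline (15-second clips plus extracted audio for fingerprint matching). A minimal sketch of that step follows, assuming ffmpeg is on PATH; the file names and sample rate are placeholders, not the authors' settings.

```python
import os
import subprocess

def slice_clip(src: str, start_s: int, out_stem: str) -> None:
    # 15-second video clip (stream copy, no re-encode)
    subprocess.run(["ffmpeg", "-y", "-ss", str(start_s), "-i", src,
                    "-t", "15", "-c", "copy", f"{out_stem}.mp4"], check=True)
    # matching audio-only track for fingerprinting against NES-MDB
    subprocess.run(["ffmpeg", "-y", "-ss", str(start_s), "-i", src,
                    "-t", "15", "-vn", "-acodec", "pcm_s16le",
                    "-ar", "22050", f"{out_stem}.wav"], check=True)

os.makedirs("clips", exist_ok=True)
for i, t in enumerate(range(0, 300, 15)):        # first 5 minutes only
    slice_clip("longplay_example.mp4", t, f"clips/clip_{i:04d}")
```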
null | null | 2404.04425 | null | null | http://arxiv.org/pdf/2404.04425v1 | 2024-04-05T21:47:32Z | 2024-04-05T21:47:32Z | Bayesian Additive Regression Networks | We apply Bayesian Additive Regression Tree (BART) principles to training an ensemble of small neural networks for regression tasks. Using Markov Chain Monte Carlo, we sample from the posterior distribution of neural networks that have a single hidden layer. To create an ensemble of these, we apply Gibbs sampling to update each network against the residual target value (i.e. subtracting the effect of the other networks). We demonstrate the effectiveness of this technique on several benchmark regression problems, comparing it to equivalent shallow neural networks, BART, and ordinary least squares. Our Bayesian Additive Regression Networks (BARN) provide more consistent and often more accurate results. On test data benchmarks, BARN averaged between 5 and 20 percent lower root mean square error. This error performance does come at the cost, however, of greater computation time. BARN sometimes takes on the order of a minute where competing methods take a second or less. But, BARN without cross-validated hyperparameter tuning takes about the same amount of computation time as other tuned methods. Yet BARN is still typically more accurate. | [
"['Danielle Van Boxel']"
]
|
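A point-estimate analogue of the BARN idea above: each small network is refit against the residual left by the other ensemble members, mirroring the Gibbs-style residual updates. The full method samples networks with MCMC; this sketch only does deterministic backfitting, and the data and sizes are synthetic.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(0, 0.1, 500)

n_nets, n_sweeps = 5, 3
nets = [MLPRegressor(hidden_layer_sizes=(4,), max_iter=2000, random_state=k)
        for k in range(n_nets)]
preds = np.zeros((n_nets, len(y)))

for _ in range(n_sweeps):
    for k, net in enumerate(nets):
        residual = y - preds.sum(axis=0) + preds[k]   # remove the others' fit
        net.fit(X, residual)
        preds[k] = net.predict(X)

rmse = np.sqrt(np.mean((y - preds.sum(axis=0)) ** 2))
print(f"ensemble training RMSE: {rmse:.3f}")
```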
null | null | 2404.04434 | null | null | http://arxiv.org/pdf/2404.04434v1 | 2024-04-05T22:21:49Z | 2024-04-05T22:21:49Z | Robust Few-Shot Ensemble Learning with Focal Diversity-Based Pruning | This paper presents FusionShot, a focal diversity optimized few-shot ensemble learning approach for boosting the robustness and generalization performance of pre-trained few-shot models. The paper makes three original contributions. First, we explore the unique characteristics of few-shot learning to ensemble multiple few-shot (FS) models by creating three alternative fusion channels. Second, we introduce the concept of focal error diversity to learn the most efficient ensemble teaming strategy, rather than assuming that an ensemble of a larger number of base models will outperform those sub-ensembles of smaller size. We develop a focal-diversity ensemble pruning method to effectively prune out the candidate ensembles with low ensemble error diversity and recommend top-$K$ FS ensembles with the highest focal error diversity. Finally, we capture the complex non-linear patterns of ensemble few-shot predictions by designing the learn-to-combine algorithm, which can learn the diverse weight assignments for robust ensemble fusion over different member models. Extensive experiments on representative few-shot benchmarks show that the top-$K$ ensembles recommended by FusionShot can outperform the representative SOTA few-shot models on novel tasks (different distributions and unknown at training), and can prevail over existing few-shot learners in both cross-domain settings and adversarial settings. For reproducibility purposes, FusionShot trained models, results, and code are made available at https://github.com/sftekin/fusionshot | [
"['Selim Furkan Tekin' 'Fatih Ilhan' 'Tiansheng Huang' 'Sihao Hu'\n 'Ka-Ho Chow' 'Margaret L. Loper' 'Ling Liu']"
]
|
null | null | 2404.04439 | null | null | http://arxiv.org/pdf/2404.04439v1 | 2024-04-05T22:48:57Z | 2024-04-05T22:48:57Z | Rethinking Non-Negative Matrix Factorization with Implicit Neural
Representations | Non-negative Matrix Factorization (NMF) is a powerful technique for analyzing regularly-sampled data, i.e., data that can be stored in a matrix. For audio, this has led to numerous applications using time-frequency (TF) representations like the Short-Time Fourier Transform. However, extending these applications to irregularly-spaced TF representations, like the Constant-Q transform, wavelets, or sinusoidal analysis models, has not been possible since these representations cannot be directly stored in matrix form. In this paper, we formulate NMF in terms of continuous functions (instead of fixed vectors) and show that NMF can be extended to a wider variety of signal classes that need not be regularly sampled. | [
"['Krishna Subramani' 'Paris Smaragdis' 'Takuya Higuchi' 'Mehrez Souden']"
]
|
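For reference against the continuous formulation in the entry above, here is the classical fixed-grid baseline it generalizes: NMF with Lee-Seung multiplicative updates on a regularly-sampled magnitude matrix V ≈ W @ H. The rank and iteration count are arbitrary choices for the sketch.

```python
import numpy as np

def nmf(V: np.ndarray, rank: int, n_iter: int = 200, eps: float = 1e-9):
    rng = np.random.default_rng(0)
    W = rng.random((V.shape[0], rank))
    H = rng.random((rank, V.shape[1]))
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # multiplicative update for H
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # multiplicative update for W
    return W, H

# stand-in for an STFT magnitude spectrogram
V = np.abs(np.random.default_rng(1).normal(size=(257, 400)))
W, H = nmf(V, rank=8)
print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))  # relative fit error
```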
null | null | 2404.04454 | null | null | http://arxiv.org/pdf/2404.04454v1 | 2024-04-05T23:56:50Z | 2024-04-05T23:56:50Z | Implicit Bias of AdamW: $\ell_\infty$ Norm Constrained Optimization | Adam with decoupled weight decay, also known as AdamW, is widely acclaimed for its superior performance in language modeling tasks, surpassing Adam with $\ell_2$ regularization in terms of generalization and optimization. However, this advantage is not theoretically well-understood. One challenge here is that though intuitively Adam with $\ell_2$ regularization optimizes the $\ell_2$ regularized loss, it is not clear if AdamW optimizes a specific objective. In this work, we make progress toward understanding the benefit of AdamW by showing that it implicitly performs constrained optimization. More concretely, we show that, in the full-batch setting, if AdamW converges with any non-increasing learning rate schedule whose partial sum diverges, it must converge to a KKT point of the original loss under the constraint that the $\ell_\infty$ norm of the parameter is bounded by the inverse of the weight decay factor. This result is built on the observation that Adam can be viewed as a smoothed version of SignGD, which is the normalized steepest descent with respect to the $\ell_\infty$ norm, and on a surprising connection between normalized steepest descent with weight decay and Frank-Wolfe. | [
"['Shuo Xie' 'Zhiyuan Li']"
]
|
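A single-parameter numerical sketch of the distinction drawn in the AdamW entry above: Adam with $\ell_2$ regularization folds the decay into the gradient moments, while AdamW applies it directly, and the latter stays near the implicit box $|\theta| \le 1/\lambda$ from the paper's result. The constant learning rate means AdamW only hovers near the constraint boundary rather than converging exactly; all constants are ours.

```python
import numpy as np

def adam_family(grad_fn, theta, lam, decoupled, lr=1e-2, steps=5000,
                b1=0.9, b2=0.999, eps=1e-8):
    m = np.zeros_like(theta)
    v = np.zeros_like(theta)
    for t in range(1, steps + 1):
        g = grad_fn(theta)
        if not decoupled:
            g = g + lam * theta              # Adam + l2: decay mixed into moments
        m = b1 * m + (1 - b1) * g
        v = b2 * v + (1 - b2) * g ** 2
        mhat = m / (1 - b1 ** t)
        vhat = v / (1 - b2 ** t)
        theta = theta - lr * mhat / (np.sqrt(vhat) + eps)
        if decoupled:
            theta = theta - lr * lam * theta # AdamW: decoupled weight decay
    return theta

grad = lambda th: 2.0 * (th - 5.0)           # loss (theta - 5)^2, minimum at 5
lam = 1.0                                    # implicit box: |theta| <= 1/lam = 1
print(adam_family(grad, np.array([0.0]), lam, decoupled=False))  # ~10/3 (regularized optimum)
print(adam_family(grad, np.array([0.0]), lam, decoupled=True))   # ~1.0 (constraint boundary)
```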
null | null | 2404.04456 | null | null | http://arxiv.org/pdf/2404.04456v1 | 2024-04-06T00:04:19Z | 2024-04-06T00:04:19Z | Beyond the Known: Adversarial Autoencoders in Novelty Detection | In novelty detection, the goal is to decide if a new data point should be categorized as an inlier or an outlier, given a training dataset that primarily captures the inlier distribution. Recent approaches typically use deep encoder and decoder network frameworks to derive a reconstruction error, and employ this error either to determine a novelty score, or as the basis for a one-class classifier. In this research, we use a similar framework but with a lightweight deep network, and we adopt a probabilistic score with reconstruction error. Our methodology calculates the probability of whether the sample comes from the inlier distribution or not. This work makes two key contributions. The first is that we compute the novelty probability by linearizing the manifold that holds the structure of the inlier distribution. This allows us to interpret how the probability is distributed and can be determined in relation to the local coordinates of the manifold tangent space. The second contribution is that we improve the training protocol for the network. Our results indicate that our approach is effective at learning the target class, and it outperforms recent state-of-the-art methods on several benchmark datasets. | [
"['Muhammad Asad' 'Ihsan Ullah' 'Ganesh Sistu' 'Michael G. Madden']"
]
|
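A linearized-manifold stand-in for the novelty-detection entry above: PCA fit on inliers plays the role of the (locally linear) inlier manifold, and the probabilistic score comes from a Gaussian fit to inlier reconstruction errors. The paper uses a deep adversarial autoencoder; everything here, including the tail-probability scoring rule, is our simplified assumption.

```python
import numpy as np
from sklearn.decomposition import PCA
from scipy.stats import norm

rng = np.random.default_rng(0)
# inliers live near a 2-D subspace of R^20, plus small noise
inliers = rng.normal(size=(1000, 2)) @ rng.normal(size=(2, 20)) \
    + rng.normal(0, 0.05, size=(1000, 20))
X_train, X_val = inliers[:800], inliers[800:]

pca = PCA(n_components=2).fit(X_train)

def recon_error(X: np.ndarray) -> np.ndarray:
    return np.linalg.norm(X - pca.inverse_transform(pca.transform(X)), axis=1)

mu, sigma = recon_error(X_val).mean(), recon_error(X_val).std() + 1e-12

def inlier_pvalue(X: np.ndarray) -> np.ndarray:
    # tail probability under the inlier error model; small values flag novelty
    return norm.sf(recon_error(X), loc=mu, scale=sigma)

outliers = rng.normal(size=(5, 20)) * 3.0
print(inlier_pvalue(X_val[:5]))   # moderate values: consistent with inliers
print(inlier_pvalue(outliers))    # near 0: flagged as novel
```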
null | null | 2404.04467 | null | null | http://arxiv.org/pdf/2404.04467v1 | 2024-04-06T01:39:51Z | 2024-04-06T01:39:51Z | Demand Balancing in Primal-Dual Optimization for Blind Network Revenue
Management | This paper proposes a practically efficient algorithm with optimal theoretical regret which solves the classical network revenue management (NRM) problem with unknown, nonparametric demand. Over a time horizon of length $T$, in each time period the retailer needs to decide prices of $N$ types of products which are produced based on $M$ types of resources with unreplenishable initial inventory. When demand is nonparametric with some mild assumptions, Miao and Wang (2021) is the first paper which proposes an algorithm with $O(\text{poly}(N,M,\ln(T))\sqrt{T})$ type of regret (in particular, $\tilde O(N^{3.5}\sqrt{T})$ plus additional high-order terms that are $o(\sqrt{T})$ with sufficiently large $T \gg N$). In this paper, we improve the previous result by proposing a primal-dual optimization algorithm which is not only more practical, but also achieves an improved regret of $\tilde O(N^{3.25}\sqrt{T})$ free from additional high-order terms. A key technical contribution of the proposed algorithm is the so-called demand balancing, which pairs the primal solution (i.e., the price) in each time period with another price to offset the violation of complementary slackness on resource inventory constraints. Numerical experiments compared with several benchmark algorithms further illustrate the effectiveness of our algorithm. | [
"['Sentao Miao' 'Yining Wang']"
]
|
null | null | 2404.04475 | null | null | http://arxiv.org/pdf/2404.04475v1 | 2024-04-06T02:29:02Z | 2024-04-06T02:29:02Z | Length-Controlled AlpacaEval: A Simple Way to Debias Automatic
Evaluators | LLM-based auto-annotators have become a key component of the LLM development process due to their cost-effectiveness and scalability compared to human-based evaluation. However, these auto-annotators can introduce complex biases that are hard to remove. Even simple, known confounders such as preference for longer outputs remain in existing automated evaluation metrics. We propose a simple regression analysis approach for controlling biases in auto-evaluations. As a real case study, we focus on reducing the length bias of AlpacaEval, a fast and affordable benchmark for chat LLMs that uses LLMs to estimate response quality. Despite being highly correlated with human preferences, AlpacaEval is known to favor models that generate longer outputs. We introduce a length-controlled AlpacaEval that aims to answer the counterfactual question: "What would the preference be if the model's and baseline's output had the same length?". To achieve this, we first fit a generalized linear model to predict the biased output of interest (auto-annotator preferences) based on the mediators we want to control for (length difference) and other relevant features. We then obtain length-controlled preferences by predicting preferences while conditioning the GLM with a zero difference in lengths. Length-controlling not only improves the robustness of the metric to manipulations in model verbosity; we also find that it increases the Spearman correlation with LMSYS' Chatbot Arena from 0.94 to 0.98. We release the code and leaderboard at https://tatsu-lab.github.io/alpaca_eval/. | [
"['Yann Dubois' 'Balázs Galambosi' 'Percy Liang' 'Tatsunori B. Hashimoto']"
]
|
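A minimal version of the debiasing recipe in the AlpacaEval entry above: fit a GLM for annotator preference with the length difference as a mediator, then predict with the length term zeroed out. The data-generating process and feature names are synthetic assumptions; the real system conditions on additional features.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 4000
quality_gap = rng.normal(size=n)                    # latent model-vs-baseline quality
len_diff = 0.6 * quality_gap + rng.normal(size=n)   # verbosity correlates with quality
pref = rng.binomial(1, 1 / (1 + np.exp(-(1.0 * quality_gap + 0.8 * len_diff))))

X = np.column_stack([quality_gap, len_diff])
glm = LogisticRegression().fit(X, pref)

raw_winrate = glm.predict_proba(X)[:, 1].mean()
X_lc = X.copy()
X_lc[:, 1] = 0.0                                    # counterfactual: equal lengths
lc_winrate = glm.predict_proba(X_lc)[:, 1].mean()
print(f"raw win rate: {raw_winrate:.3f}, length-controlled: {lc_winrate:.3f}")
```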
null | null | 2404.04476 | null | null | http://arxiv.org/pdf/2404.04476v1 | 2024-04-06T02:33:04Z | 2024-04-06T02:33:04Z | DELTA: Decoupling Long-Tailed Online Continual Learning | A significant challenge in achieving ubiquitous Artificial Intelligence is the limited ability of models to rapidly learn new information in real-world scenarios where data follows long-tailed distributions, all while avoiding forgetting previously acquired knowledge. In this work, we study the under-explored problem of Long-Tailed Online Continual Learning (LTOCL), which aims to learn new tasks from sequentially arriving class-imbalanced data streams. Each data point is observed only once during training, without knowledge of the task data distribution. We present DELTA, a decoupled learning approach designed to enhance learning representations and address the substantial imbalance in LTOCL. We enhance the learning process by adapting supervised contrastive learning to attract similar samples and repel dissimilar (out-of-class) samples. Further, by balancing gradients during training using an equalization loss, DELTA significantly enhances learning outcomes and successfully mitigates catastrophic forgetting. Through extensive evaluation, we demonstrate that DELTA improves the capacity for incremental learning, surpassing existing OCL methods. Our results suggest considerable promise for applying OCL in real-world applications. | [
"['Siddeshwar Raghavan' 'Jiangpeng He' 'Fengqing Zhu']"
]
|
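A standard supervised contrastive loss of the kind the DELTA entry above adapts: same-class samples are attracted and different-class samples repelled in embedding space. This is the generic SupCon objective, not the paper's exact variant; the temperature is an arbitrary choice.

```python
import torch
import torch.nn.functional as F

def supcon_loss(z: torch.Tensor, labels: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    z = F.normalize(z, dim=1)
    n = z.size(0)
    sim = (z @ z.T) / tau
    not_self = ~torch.eye(n, dtype=torch.bool)
    pos = (labels[None, :] == labels[:, None]) & not_self   # same-class pairs
    sim = sim.masked_fill(~not_self, float("-inf"))         # drop self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_count = pos.sum(1)
    valid = pos_count > 0                                   # anchors with >=1 positive
    # average log-probability over each anchor's positives
    loss = -(log_prob.masked_fill(~pos, 0.0)).sum(1)[valid] / pos_count[valid]
    return loss.mean()

z = torch.randn(16, 128, requires_grad=True)
labels = torch.randint(0, 4, (16,))
supcon_loss(z, labels).backward()
```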
null | null | 2404.04481 | null | null | http://arxiv.org/pdf/2404.04481v1 | 2024-04-06T03:11:31Z | 2024-04-06T03:11:31Z | Joint Identifiability of Cross-Domain Recommendation via Hierarchical
Subspace Disentanglement | Cross-Domain Recommendation (CDR) seeks to enable effective knowledge transfer across domains. Existing works rely on either representation alignment or transformation bridges, but they struggle to disentangle domain-shared from domain-specific latent factors. Specifically, while CDR describes user representations as a joint distribution over two domains, these methods fail to account for its joint identifiability as they primarily fixate on the marginal distribution within a particular domain. Such a failure may overlook the conditionality between two domains and how it contributes to latent factor disentanglement, leading to negative transfer when domains are weakly correlated. In this study, we explore what should and should not be transferred in cross-domain user representations from a causality perspective. We propose a Hierarchical subspace disentanglement approach to explore the Joint IDentifiability of the cross-domain joint distribution, termed HJID, to preserve domain-specific behaviors from domain-shared factors. HJID organizes user representations into layers: generic shallow subspaces and domain-oriented deep subspaces. We first encode the generic pattern in the shallow subspace by minimizing the Maximum Mean Discrepancy of the initial layers' activations. Then, to dissect how domain-oriented latent factors are encoded in deeper layers' activations, we construct a cross-domain causality-based data generation graph, which identifies cross-domain consistent and domain-specific components, adhering to the Minimal Change principle. This allows HJID to maintain stability whilst discovering unique factors for different domains, all within a generative framework of invertible transformations that guarantee the joint identifiability. With experiments on real-world datasets, we show that HJID outperforms SOTA methods on a range of strongly and weakly correlated CDR tasks. | [
"['Jing Du' 'Zesheng Ye' 'Bin Guo' 'Zhiwen Yu' 'Lina Yao']"
]
|
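The HJID entry above aligns shallow-layer activations by minimizing Maximum Mean Discrepancy. Below is the standard biased Gaussian-kernel MMD^2 estimator between two activation batches, written as our own minimal helper; the bandwidth is an arbitrary choice.

```python
import torch

def mmd2(x: torch.Tensor, y: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    # biased estimator: E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)]
    def gram(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        d2 = torch.cdist(a, b) ** 2
        return torch.exp(-d2 / (2 * sigma ** 2))
    return gram(x, x).mean() + gram(y, y).mean() - 2 * gram(x, y).mean()

xa = torch.randn(256, 32)            # domain-A shallow activations
xb = torch.randn(256, 32) + 0.5      # domain-B shallow activations, shifted
print(mmd2(xa, xb).item())           # > 0; shrinks as the distributions align
```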
null | null | 2404.04490 | null | null | http://arxiv.org/pdf/2404.04490v1 | 2024-04-06T03:46:42Z | 2024-04-06T03:46:42Z | Hyperparameter Optimization for SecureBoost via Constrained
Multi-Objective Federated Learning | SecureBoost is a tree-boosting algorithm that leverages homomorphic encryption (HE) to protect data privacy in vertical federated learning. SecureBoost and its variants have been widely adopted in fields such as finance and healthcare. However, the hyperparameters of SecureBoost are typically configured heuristically for optimizing model performance (i.e., utility) solely, assuming that privacy is secured. Our study found that SecureBoost and some of its variants are still vulnerable to label leakage. This vulnerability may lead the current heuristic hyperparameter configuration of SecureBoost to a suboptimal trade-off between utility, privacy, and efficiency, which are pivotal elements toward a trustworthy federated learning system. To address this issue, we propose the Constrained Multi-Objective SecureBoost (CMOSB) algorithm, which aims to approximate Pareto optimal solutions, each of which is a set of hyperparameters achieving an optimal trade-off between utility loss, training cost, and privacy leakage. We design measurements of the three objectives, including a novel label inference attack named instance clustering attack (ICA) to measure the privacy leakage of SecureBoost. Additionally, we provide two countermeasures against ICA. The experimental results demonstrate that the CMOSB yields superior hyperparameters over those optimized by grid search and Bayesian optimization regarding the trade-off between utility loss, training cost, and privacy leakage. | [
"['Yan Kang' 'Ziyao Ren' 'Lixin Fan' 'Linghua Yang' 'Yongxin Tong'\n 'Qiang Yang']"
]
|
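The CMOSB entry above returns a Pareto front over (utility loss, training cost, privacy leakage). The helper below keeps the non-dominated hyperparameter candidates from a batch of objective triples; the objective values are synthetic placeholders, and the search itself is out of scope here.

```python
import numpy as np

def pareto_front(costs: np.ndarray) -> np.ndarray:
    # costs: (n, m), all objectives to be minimized; returns a boolean keep-mask
    n = costs.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        if keep[i]:
            # i is dominated if some j is <= everywhere and < somewhere
            dominated = (np.all(costs <= costs[i], axis=1)
                         & np.any(costs < costs[i], axis=1))
            keep[i] = not dominated.any()
    return keep

rng = np.random.default_rng(0)
objs = rng.random((50, 3))           # [utility loss, training cost, leakage]
mask = pareto_front(objs)
print(f"{mask.sum()} non-dominated candidates out of {len(objs)}")
```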
null | null | 2404.04491 | null | null | http://arxiv.org/abs/2404.04491v1 | 2024-04-06T03:48:11Z | 2024-04-06T03:48:11Z | Galaxy 3D Shape Recovery using Mixture Density Network | Since the turn of the century, astronomers have been exploiting the rich information afforded by combining stellar kinematic maps and imaging in an attempt to recover the intrinsic, three-dimensional (3D) shape of a galaxy. A common intrinsic shape recovery method relies on an expected monotonic relationship between the intrinsic misalignment of the kinematic and morphological axes and the triaxiality parameter. Recent studies have, however, cast doubt on underlying assumptions relating shape and intrinsic kinematic misalignment. In this work, we aim to recover the 3D shape of individual galaxies from their projected stellar kinematic and flux distributions using a supervised machine learning approach with a mixture density network (MDN). Using a mock dataset of the EAGLE hydrodynamical cosmological simulation, we train the MDN model for a carefully selected set of common kinematic and photometric parameters. Compared to previous methods, we demonstrate potential improvements achieved with the MDN model to retrieve the 3D galaxy shape along with the uncertainties, especially for prolate and triaxial systems. We make specific recommendations for recovering galaxy intrinsic shapes relevant for current and future integral field spectroscopic galaxy surveys. | [
"['Suk Yee Yong' 'K. E. Harborne' 'Caroline Foster' 'Robert Bassett'\n 'Gregory B. Poole' 'Mitchell Cavanagh']"
]
|
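A generic skeleton of the mixture density network used in the entry above: the head predicts mixture weights, means, and scales, and training minimizes the mixture negative log-likelihood, which is what yields shape estimates with uncertainties. Layer sizes, component count, and names are illustrative, not the paper's configuration.

```python
import torch
import torch.nn as nn

class MDN(nn.Module):
    def __init__(self, in_dim: int, n_components: int = 5):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, 64), nn.Tanh())
        self.pi = nn.Linear(64, n_components)        # mixture logits
        self.mu = nn.Linear(64, n_components)        # component means
        self.log_sigma = nn.Linear(64, n_components) # log of component scales

    def forward(self, x):
        h = self.body(x)
        return self.pi(h), self.mu(h), self.log_sigma(h)

def mdn_nll(pi_logits, mu, log_sigma, y):
    # log N(y | mu_k, sigma_k) per component, then log-sum-exp over the mixture
    log_pi = torch.log_softmax(pi_logits, dim=-1)
    dist = torch.distributions.Normal(mu, log_sigma.exp())
    log_prob = dist.log_prob(y.unsqueeze(-1))        # (batch, K)
    return -torch.logsumexp(log_pi + log_prob, dim=-1).mean()

model = MDN(in_dim=8)
x, y = torch.randn(128, 8), torch.randn(128)
loss = mdn_nll(*model(x), y)
loss.backward()
print(loss.item())
```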
null | null | 2404.04498 | null | null | http://arxiv.org/pdf/2404.04498v2 | 2024-06-15T06:58:56Z | 2024-04-06T04:22:48Z | Bayesian Inference for Consistent Predictions in Overparameterized
Nonlinear Regression | The remarkable generalization performance of large-scale models has been challenging the conventional wisdom of statistical learning theory. Although recent theoretical studies have shed light on this behavior in linear models and nonlinear classifiers, a comprehensive understanding of overparameterization in nonlinear regression models is still lacking. This study explores the predictive properties of overparameterized nonlinear regression within the Bayesian framework, extending the adaptive-prior methodology to account for the intrinsic spectral structure of the data. Posterior contraction is established for generalized linear and single-neuron models with Lipschitz continuous activation functions, demonstrating the consistency of the proposed approach's predictions. Moreover, the Bayesian framework enables uncertainty estimation of the predictions. The proposed method was validated via numerical simulations and a real data application, showing its ability to achieve accurate predictions and reliable uncertainty estimates. This work provides a theoretical understanding of the advantages of overparameterization and a principled Bayesian approach to large nonlinear models. | [
"['Tomoya Wakayama']"
]
|
null | null | 2404.04500 | null | null | http://arxiv.org/pdf/2404.04500v1 | 2024-04-06T04:43:06Z | 2024-04-06T04:43:06Z | Trustless Audits without Revealing Data or Models | There is an increasing conflict between business incentives to hide models and data as trade secrets, and the societal need for algorithmic transparency. For example, a rightsholder wishing to know whether their copyrighted works have been used during training must convince the model provider to allow a third party to audit the model and data. Finding a mutually agreeable third party is difficult, and the associated costs often make this approach impractical. In this work, we show that it is possible to simultaneously allow model providers to keep their model weights (but not architecture) and data secret while allowing other parties to trustlessly audit model and data properties. We do this by designing a protocol called ZkAudit in which model providers publish cryptographic commitments of datasets and model weights, alongside a zero-knowledge proof (ZKP) certifying that published commitments are derived from training the model. Model providers can then respond to audit requests by privately computing any function F of the dataset (or model) and releasing the output of F alongside another ZKP certifying the correct execution of F. To enable ZkAudit, we develop new methods of computing ZKPs for SGD on modern neural nets for simple recommender systems and image classification models capable of high accuracies on ImageNet. Empirically, we show it is possible to provide trustless audits of DNNs, including copyright, censorship, and counterfactual audits with little to no loss in accuracy. | [
"['Suppakit Waiwitlikhit' 'Ion Stoica' 'Yi Sun' 'Tatsunori Hashimoto'\n 'Daniel Kang']"
]
|
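Only the first step of the ZkAudit protocol above, publishing a binding commitment to model weights, can be sketched simply; here it is with a salted hash. The actual system uses ZK-friendly commitments and proofs of training, which this does not attempt to reproduce.

```python
import hashlib
import os

def commit(weights_bytes: bytes) -> tuple[bytes, bytes]:
    salt = os.urandom(32)
    digest = hashlib.sha256(salt + weights_bytes).digest()
    return digest, salt            # publish digest; keep salt and weights private

def verify(digest: bytes, salt: bytes, weights_bytes: bytes) -> bool:
    return hashlib.sha256(salt + weights_bytes).digest() == digest

weights = b"\x00\x01\x02\x03"      # stand-in for serialized model weights
digest, salt = commit(weights)
assert verify(digest, salt, weights)
assert not verify(digest, salt, weights + b"tampered")
```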
null | null | 2404.04509 | null | null | http://arxiv.org/pdf/2404.04509v1 | 2024-04-06T05:34:12Z | 2024-04-06T05:34:12Z | Distributed No-Regret Learning for Multi-Stage Systems with End-to-End
Bandit Feedback | This paper studies multi-stage systems with end-to-end bandit feedback. In such systems, each job needs to go through multiple stages, each managed by a different agent, before generating an outcome. Each agent can only control its own action and learn the final outcome of the job. It has neither knowledge of nor control over actions taken by agents in the next stage. The goal of this paper is to develop distributed online learning algorithms that achieve sublinear regret in adversarial environments. The setting of this paper significantly expands the traditional multi-armed bandit problem, which considers only one agent and one stage. In addition to the exploration-exploitation dilemma in the traditional multi-armed bandit problem, we show that the consideration of multiple stages introduces a third component, education, where an agent needs to choose its actions to facilitate the learning of agents in the next stage. To solve this newly introduced exploration-exploitation-education trilemma, we propose a simple distributed online learning algorithm, $\epsilon$-EXP3. We theoretically prove that the $\epsilon$-EXP3 algorithm is a no-regret policy that achieves sublinear regret. Simulation results show that the $\epsilon$-EXP3 algorithm significantly outperforms existing no-regret online learning algorithms for the traditional multi-armed bandit problem. | [
"['I-Hong Hou']"
]
|
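For context on the entry above, here is the textbook single-agent EXP3 baseline that $\epsilon$-EXP3 extends: exponential weights with importance-weighted reward estimates plus uniform exploration. This is our own generic implementation, not the paper's multi-stage distributed variant.

```python
import numpy as np

def exp3(reward_fn, n_arms: int, horizon: int, gamma: float = 0.1, seed: int = 0):
    rng = np.random.default_rng(seed)
    w = np.ones(n_arms)
    total = 0.0
    for _ in range(horizon):
        p = (1 - gamma) * w / w.sum() + gamma / n_arms   # mix in exploration
        arm = rng.choice(n_arms, p=p)
        r = reward_fn(arm)                               # bandit feedback in [0, 1]
        total += r
        w[arm] *= np.exp(gamma * r / (p[arm] * n_arms))  # importance-weighted update
        w /= w.max()                                     # rescale to avoid overflow
    return total

means = np.array([0.3, 0.5, 0.8])
rng = np.random.default_rng(1)
reward = lambda a: float(rng.random() < means[a])        # Bernoulli arms
print(exp3(reward, n_arms=3, horizon=5000))              # approaches 0.8 * horizon
```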
null | null | 2404.04510 | null | null | http://arxiv.org/pdf/2404.04510v1 | 2024-04-06T05:44:53Z | 2024-04-06T05:44:53Z | IITK at SemEval-2024 Task 2: Exploring the Capabilities of LLMs for Safe
Biomedical Natural Language Inference for Clinical Trials | Large Language Models (LLMs) have demonstrated state-of-the-art performance in various natural language processing (NLP) tasks across multiple domains, yet they are prone to shortcut learning and factual inconsistencies. This research investigates LLMs' robustness, consistency, and faithful reasoning when performing Natural Language Inference (NLI) on breast cancer Clinical Trial Reports (CTRs) in the context of SemEval 2024 Task 2: Safe Biomedical Natural Language Inference for Clinical Trials. We examine the reasoning capabilities of LLMs and their adeptness at logical problem-solving. A comparative analysis is conducted on pre-trained language models (PLMs), GPT-3.5, and Gemini Pro under zero-shot settings using the Retrieval-Augmented Generation (RAG) framework, integrating various reasoning chains. The evaluation yields an F1 score of 0.69, a consistency score of 0.71, and a faithfulness score of 0.90 on the test dataset. | [
"['Shreyasi Mandal' 'Ashutosh Modi']"
]
|
null | null | 2404.04513 | null | null | http://arxiv.org/pdf/2404.04513v1 | 2024-04-06T05:58:42Z | 2024-04-06T05:58:42Z | IITK at SemEval-2024 Task 1: Contrastive Learning and Autoencoders for
Semantic Textual Relatedness in Multilingual Texts | This paper describes our system developed for the SemEval-2024 Task 1: Semantic Textual Relatedness. The challenge is focused on automatically detecting the degree of relatedness between pairs of sentences for 14 languages including both high and low-resource Asian and African languages. Our team participated in two subtasks consisting of Track A: supervised and Track B: unsupervised. This paper focuses on a BERT-based contrastive learning and similarity-metric-based approach primarily for the supervised track while exploring autoencoders for the unsupervised track. It also aims at creating a bigram relatedness corpus using a negative sampling strategy, thereby producing refined word embeddings. | [
"['Udvas Basak' 'Rajarshi Dutta' 'Shivam Pandey' 'Ashutosh Modi']"
]
|
null | null | 2404.04520 | null | null | http://arxiv.org/pdf/2404.04520v1 | 2024-04-06T06:28:02Z | 2024-04-06T06:28:02Z | IITK at SemEval-2024 Task 4: Hierarchical Embeddings for Detection of
Persuasion Techniques in Memes | Memes are one of the most popular types of content used in an online disinformation campaign. They are primarily effective on social media platforms since they can easily reach many users. Memes in a disinformation campaign achieve their goal of influencing the users through several rhetorical and psychological techniques, such as causal oversimplification, name-calling, and smear. The SemEval 2024 Task 4 \textit{Multilingual Detection of Persuasion Technique in Memes} on identifying such techniques in the memes is divided across three sub-tasks: ($\mathbf{1}$) Hierarchical multi-label classification using only textual content of the meme, ($\mathbf{2}$) Hierarchical multi-label classification using both textual and visual content of the meme and ($\mathbf{3}$) Binary classification of whether the meme contains a persuasion technique or not using its textual and visual content. This paper proposes an ensemble of Class Definition Prediction (CDP) and hyperbolic embeddings-based approaches for this task. We enhance meme classification accuracy and comprehensiveness by integrating HypEmo's hierarchical label embeddings (Chen et al., 2023) and a multi-task learning framework for emotion prediction. We achieve hierarchical F1-scores of 0.60, 0.67, and 0.48 on the respective sub-tasks. | [
"['Shreenaga Chikoti' 'Shrey Mehta' 'Ashutosh Modi']"
]
|
null | null | 2404.04522 | null | null | http://arxiv.org/pdf/2404.04522v2 | 2024-04-12T00:18:06Z | 2024-04-06T06:44:41Z | Q-PEFT: Query-dependent Parameter Efficient Fine-tuning for Text
Reranking with Large Language Models | Parameter Efficient Fine-Tuning (PEFT) methods have been extensively utilized in Large Language Models (LLMs) to improve downstream tasks without the cost of fine-tuning the whole LLMs. Recent studies have shown how to effectively use PEFT for fine-tuning LLMs in ranking tasks with convincing performance; however, there are some limitations, including the learned prompt being fixed for different documents, overfitting to specific tasks, and low adaptation ability. In this paper, we introduce a query-dependent parameter efficient fine-tuning (Q-PEFT) approach for text reranking to leak the information of the true queries to LLMs and then make the generation of true queries from input documents much easier. Specifically, we utilize the query to extract the top-$k$ tokens from concatenated documents, serving as contextual clues. We further augment Q-PEFT by substituting the retrieval mechanism with a multi-head attention layer to achieve end-to-end training and cover all the tokens in the documents, guiding the LLMs to generate more document-specific synthetic queries, thereby further improving the reranking performance. Extensive experiments are conducted on four public datasets, demonstrating the effectiveness of our proposed approach. | [
"['Zhiyuan Peng' 'Xuyang Wu' 'Qifan Wang' 'Sravanthi Rajanala' 'Yi Fang']"
]
|
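The token-extraction step in the Q-PEFT entry above, in miniature: score document tokens against a query embedding and keep the top-$k$ as contextual clues. The embeddings, scoring function, and $k$ here are placeholders; the paper's second variant replaces this hard top-$k$ with a multi-head attention layer.

```python
import torch

def topk_clue_tokens(query_emb: torch.Tensor, doc_token_embs: torch.Tensor,
                     doc_tokens: list[str], k: int = 5) -> list[str]:
    scores = doc_token_embs @ query_emb            # dot-product relevance per token
    idx = scores.topk(min(k, len(doc_tokens))).indices
    return [doc_tokens[i] for i in idx.tolist()]

doc_tokens = ["the", "transformer", "reranks", "retrieved", "passages", "today"]
doc_embs = torch.randn(len(doc_tokens), 16)        # stand-in token embeddings
query_emb = torch.randn(16)                        # stand-in query embedding
print(topk_clue_tokens(query_emb, doc_embs, doc_tokens, k=3))
```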
null | null | 2404.04525 | null | null | http://arxiv.org/pdf/2404.04525v1 | 2024-04-06T06:47:44Z | 2024-04-06T06:47:44Z | IITK at SemEval-2024 Task 10: Who is the speaker? Improving Emotion
Recognition and Flip Reasoning in Conversations via Speaker Embeddings | This paper presents our approach for the SemEval-2024 Task 10: Emotion Discovery and Reasoning its Flip in Conversations. For the Emotion Recognition in Conversations (ERC) task, we utilize a masked-memory network along with speaker participation. We propose a transformer-based speaker-centric model for the Emotion Flip Reasoning (EFR) task. We also introduce Probable Trigger Zone, a region of the conversation that is more likely to contain the utterances causing the emotion to flip. For sub-task 3, the proposed approach achieves a 5.9 (F1 score) improvement over the task baseline. The ablation study results highlight the significance of various design choices in the proposed method. | [
"['Shubham Patel' 'Divyaksh Shukla' 'Ashutosh Modi']"
]
|
null | null | 2404.04534 | null | null | http://arxiv.org/pdf/2404.04534v2 | 2024-05-19T11:40:42Z | 2024-04-06T07:21:41Z | Impact of Fairness Regulations on Institutions' Policies and Population
Qualifications | The proliferation of algorithmic systems has fueled discussions surrounding the regulation and control of their social impact. Herein, we consider a system whose primary objective is to maximize utility by selecting the most qualified individuals. To promote demographic parity in the selection algorithm, we consider penalizing discrimination across social groups. We examine conditions under which a discrimination penalty can effectively reduce disparity in the selection. Additionally, we explore the implications of such a penalty when individual qualifications may evolve over time in response to the imposed penalizing policy. We identify scenarios where the penalty could hinder the natural attainment of equity within the population. Moreover, we propose certain conditions that can counteract this undesirable outcome, thus ensuring fairness. | [
"['Hamidreza Montaseri' 'Amin Gohari']"
]
|
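A toy numeric version of the penalized selection problem in the fairness entry above: pick per-group score thresholds maximizing utility minus a demographic-parity penalty on the gap in selection rates. The score distributions, penalty weight, and scaling are our own assumptions, and the grid search stands in for the institution's policy choice.

```python
import numpy as np

rng = np.random.default_rng(0)
scores = {"A": rng.normal(0.2, 1, 5000), "B": rng.normal(-0.2, 1, 5000)}

def objective(tA: float, tB: float, lam: float) -> float:
    util = scores["A"][scores["A"] > tA].sum() + scores["B"][scores["B"] > tB].sum()
    gap = abs((scores["A"] > tA).mean() - (scores["B"] > tB).mean())
    return util - lam * gap * 10000        # penalty scaled to utility units

grid = np.linspace(-1, 2, 61)
for lam in (0.0, 1.0):
    tA, tB = max(((a, b) for a in grid for b in grid),
                 key=lambda t: objective(*t, lam))
    gap = abs((scores["A"] > tA).mean() - (scores["B"] > tB).mean())
    print(f"lambda={lam}: thresholds=({tA:.2f}, {tB:.2f}), rate gap={gap:.3f}")
```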
null | null | 2404.04547 | null | null | http://arxiv.org/abs/2404.04547v1 | 2024-04-06T08:07:48Z | 2024-04-06T08:07:48Z | Exhaustive Exploitation of Nature-inspired Computation for Cancer
Screening in an Ensemble Manner | Accurate screening of cancer types is crucial for effective cancer detection and precise treatment selection. However, the association between gene expression profiles and tumors is often limited to a small number of biomarker genes. While computational methods using nature-inspired algorithms have shown promise in selecting predictive genes, existing techniques are limited by inefficient search and poor generalization across diverse datasets. This study presents a framework termed Evolutionary Optimized Diverse Ensemble Learning (EODE) to improve ensemble learning for cancer classification from gene expression data. The EODE methodology combines an intelligent grey wolf optimization algorithm for selective feature space reduction, guided random injection modeling for ensemble diversity enhancement, and subset model optimization for synergistic classifier combinations. Extensive experiments were conducted across 35 gene expression benchmark datasets encompassing varied cancer types. Results demonstrated that EODE obtained significantly improved screening accuracy over individual and conventionally aggregated models. The integrated optimization of advanced feature selection, directed specialized modeling, and cooperative classifier ensembles helps address key challenges in current nature-inspired approaches. This provides an effective framework for robust and generalized ensemble learning with gene expression biomarkers. Specifically, we have open-sourced EODE on GitHub at https://github.com/wangxb96/EODE. | [
"['Xubin Wang' 'Yunhe Wang' 'Zhiqing Ma' 'Ka-Chun Wong' 'Xiangtao Li']"
]
|
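The search engine of the EODE pipeline above in isolation: a minimal grey wolf optimization loop for continuous minimization. The paper applies a selective variant to feature-subset search; this is the textbook continuous form with our own bounds and schedule.

```python
import numpy as np

def gwo(f, dim: int, n_wolves: int = 20, iters: int = 200, seed: int = 0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5, 5, size=(n_wolves, dim))
    for t in range(iters):
        order = np.argsort([f(x) for x in X])
        # three fittest wolves lead the pack (copies, since X mutates below)
        alpha, beta, delta = (X[j].copy() for j in order[:3])
        a = 2 - 2 * t / iters                 # anneal exploration -> exploitation
        for i in range(n_wolves):
            pos = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A_coef = 2 * a * r1 - a
                C = 2 * r2
                D = np.abs(C * leader - X[i])
                pos += leader - A_coef * D
            X[i] = pos / 3.0                  # average of the three leader pulls
    return min(X, key=f)

sphere = lambda x: float(np.sum(x ** 2))
print(gwo(sphere, dim=5))                     # approaches the origin
```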
null | null | 2404.04549 | null | null | http://arxiv.org/pdf/2404.04549v1 | 2024-04-06T08:17:07Z | 2024-04-06T08:17:07Z | Efficient Learning Using Spiking Neural Networks Equipped With Affine
Encoders and Decoders | We study the learning problem associated with spiking neural networks. Specifically, we consider hypothesis sets of spiking neural networks with affine temporal encoders and decoders and simple spiking neurons having only positive synaptic weights. We demonstrate that the positivity of the weights continues to enable a wide range of expressivity results, including rate-optimal approximation of smooth functions or approximation without the curse of dimensionality. Moreover, positive-weight spiking neural networks are shown to depend continuously on their parameters, which facilitates classical covering number-based generalization statements. Finally, we observe that from a generalization perspective, contrary to feedforward neural networks or previous results for general spiking neural networks, the depth has little to no adverse effect on the generalization capabilities. | [
"['A. Martina Neuman' 'Philipp Christian Petersen']"
]
|
null | null | 2404.04559 | null | null | http://arxiv.org/pdf/2404.04559v1 | 2024-04-06T08:53:26Z | 2024-04-06T08:53:26Z | Spectral GNN via Two-dimensional (2-D) Graph Convolution | Spectral Graph Neural Networks (GNNs) have achieved tremendous success in graph learning. As an essential part of spectral GNNs, spectral graph convolution extracts crucial frequency information in graph data, leading to superior performance of spectral GNNs in downstream tasks. However, in this paper, we show that existing spectral GNNs retain critical drawbacks in performing spectral graph convolution. Specifically, considering the spectral graph convolution as a construction operation towards a target output, we prove that existing popular convolution paradigms cannot construct the target output under mild conditions on input graph signals, causing spectral GNNs to fall into suboptimal solutions. To address these issues, we rethink the spectral graph convolution from a more general two-dimensional (2-D) signal convolution perspective and propose a new convolution paradigm, named 2-D graph convolution. We prove that 2-D graph convolution unifies existing graph convolution paradigms, and is capable of constructing arbitrary target outputs. Based on the proposed 2-D graph convolution, we further propose ChebNet2D, an efficient and effective GNN implementation of 2-D graph convolution by applying Chebyshev interpolation. Extensive experiments on benchmark datasets demonstrate both the effectiveness and the efficiency of ChebNet2D. | [
"['Guoming Li' 'Jian Yang' 'Shangsong Liang' 'Dongsheng Luo']"
]
|
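The ChebNet2D entry above builds on Chebyshev interpolation; shown here is the standard Chebyshev graph-filtering primitive it extends, filtering a graph signal with a degree-$K$ polynomial of the scaled Laplacian via the three-term recurrence. The filter coefficients are arbitrary placeholders, and the rescaling assumes the normalized Laplacian's largest eigenvalue is at most 2.

```python
import numpy as np

def cheb_filter(L: np.ndarray, x: np.ndarray, theta: np.ndarray) -> np.ndarray:
    # Rescale eigenvalues of L into [-1, 1]; for the symmetric normalized
    # Laplacian, lambda_max <= 2, so L_tilde = L - I suffices.
    n = L.shape[0]
    L_tilde = L - np.eye(n)
    Tk_prev, Tk = x, L_tilde @ x                 # T_0 x and T_1 x
    out = theta[0] * Tk_prev + theta[1] * Tk
    for k in range(2, len(theta)):
        Tk_prev, Tk = Tk, 2 * L_tilde @ Tk - Tk_prev  # T_k = 2 L~ T_{k-1} - T_{k-2}
        out += theta[k] * Tk
    return out

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # path graph
d = A.sum(1)
L = np.eye(3) - A / np.sqrt(np.outer(d, d))      # symmetric normalized Laplacian
x = np.array([1.0, -1.0, 0.5])
print(cheb_filter(L, x, theta=np.array([0.5, 0.3, 0.2])))
```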