categories (string) | doi (string) | id (string) | year (float64) | venue (string) | link (string) | updated (string) | published (string) | title (string) | abstract (string) | authors (list)
---|---|---|---|---|---|---|---|---|---|---
null | null | 2404.13954 | null | null | http://arxiv.org/pdf/2404.13954v1 | 2024-04-22T07:54:56Z | 2024-04-22T07:54:56Z | A survey of air combat behavior modeling using machine learning | With the recent advances in machine learning, creating agents that behave realistically in simulated air combat has become a growing field of interest. This survey explores the application of machine learning techniques for modeling air combat behavior, motivated by the potential to enhance simulation-based pilot training. Current simulated entities tend to lack realistic behavior, and traditional behavior modeling is labor-intensive and prone to loss of essential domain knowledge between development steps. Advancements in reinforcement learning and imitation learning algorithms have demonstrated that agents may learn complex behavior from data, which could be faster and more scalable than manual methods. Yet, making adaptive agents capable of performing tactical maneuvers and operating weapons and sensors still poses a significant challenge. The survey examines applications, behavior model types, prevalent machine learning methods, and the technical and human challenges in developing adaptive and realistically behaving agents. Another challenge is the transfer of agents from learning environments to military simulation systems and the consequent demand for standardization. Four primary recommendations are presented regarding increased emphasis on beyond-visual-range scenarios, multi-agent machine learning and cooperation, utilization of hierarchical behavior models, and initiatives for standardization and research collaboration. These recommendations aim to address current issues and guide the development of more comprehensive, adaptable, and realistic machine learning-based behavior models for air combat applications. | [
"['Patrick Ribu Gorton' 'Andreas Strand' 'Karsten Brathen']"
]
|
null | null | 2404.13964 | null | null | http://arxiv.org/pdf/2404.13964v3 | 2024-04-24T16:04:26Z | 2024-04-22T08:10:38Z | An Economic Solution to Copyright Challenges of Generative AI | Generative artificial intelligence (AI) systems are trained on large data corpora to generate new pieces of text, images, videos, and other media. There is growing concern that such systems may infringe on the copyright interests of training data contributors. To address the copyright challenges of generative AI, we propose a framework that compensates copyright owners proportionally to their contributions to the creation of AI-generated content. The metric for contributions is quantitatively determined by leveraging the probabilistic nature of modern generative AI models and using techniques from cooperative game theory in economics. This framework enables a platform where AI developers benefit from access to high-quality training data, thus improving model performance. Meanwhile, copyright owners receive fair compensation, driving the continued provision of relevant data for generative model training. Experiments demonstrate that our framework successfully identifies the most relevant data sources used in artwork generation, ensuring a fair and interpretable distribution of revenues among copyright owners. | [
"['Jiachen T. Wang' 'Zhun Deng' 'Hiroaki Chiba-Okabe' 'Boaz Barak'\n 'Weijie J. Su']"
]
|
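The contribution metric above draws on cooperative game theory, whose canonical attribution scheme is the Shapley value. Below is a minimal Monte Carlo Shapley sketch for splitting revenue among data owners; the `utility` function is a caller-supplied assumption, and this is only the generic game-theoretic idea, not the paper's actual probability-based algorithm.

```python
import random

def shapley_values(contributors, utility, n_samples=1000, seed=0):
    """Monte Carlo estimate of Shapley values for each contributor.

    `utility(coalition)` maps a set of contributors to the value (e.g.,
    generation quality) achievable using only their data.
    """
    rng = random.Random(seed)
    values = {c: 0.0 for c in contributors}
    for _ in range(n_samples):
        order = list(contributors)
        rng.shuffle(order)
        coalition = set()
        prev = utility(coalition)
        for c in order:
            coalition.add(c)
            cur = utility(coalition)
            values[c] += (cur - prev) / n_samples  # average marginal contribution
            prev = cur
    return values

# Toy usage: three owners, utility = coalition size (a symmetric game),
# so each owner should receive an equal share of 1.0.
print(shapley_values(["A", "B", "C"], utility=len))
```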
null | null | 2404.13990 | null | null | http://arxiv.org/pdf/2404.13990v1 | 2024-04-22T08:57:46Z | 2024-04-22T08:57:46Z | QCore: Data-Efficient, On-Device Continual Calibration for Quantized
Models -- Extended Version | We are witnessing an increasing availability of streaming data that may contain valuable information on the underlying processes. It is thus attractive to be able to deploy machine learning models on edge devices near sensors such that decisions can be made instantaneously, rather than first having to transmit incoming data to servers. To enable deployment on edge devices with limited storage and computational capabilities, the full-precision parameters in standard models can be quantized to use fewer bits. The resulting quantized models are then calibrated using back-propagation and full training data to ensure accuracy. This one-time calibration works for deployments in static environments. However, model deployment in dynamic edge environments calls for continual calibration to adaptively adjust quantized models to fit new incoming data, which may have different distributions. The first difficulty in enabling continual calibration on the edge is that the full training data may be too large and thus not always available on edge devices. The second difficulty is that the use of back-propagation on the edge for repeated calibration is too expensive. We propose QCore to enable continual calibration on the edge. First, it compresses the full training data into a small subset to enable effective calibration of quantized models with different bit-widths. We also propose means of updating the subset when new streaming data arrives to reflect changes in the environment, while not forgetting earlier training data. Second, we propose a small bit-flipping network that works with the subset to update quantized model parameters, thus enabling efficient continual calibration without back-propagation. An experimental study, conducted with real-world data in a continual learning setting, offers insight into the properties of QCore and shows that it is capable of outperforming strong baseline methods. | [
"['David Campos' 'Bin Yang' 'Tung Kieu' 'Miao Zhang' 'Chenjuan Guo'\n 'Christian S. Jensen']"
]
|
null | null | 2404.14006 | null | null | http://arxiv.org/pdf/2404.14006v1 | 2024-04-22T09:16:14Z | 2024-04-22T09:16:14Z | Distilled Datamodel with Reverse Gradient Matching | The proliferation of large-scale AI models trained on extensive datasets has revolutionized machine learning. With these models taking on increasingly central roles in various applications, the need to understand their behavior and enhance interpretability has become paramount. To investigate the impact of changes in training data on a pre-trained model, a common approach is leave-one-out retraining. This entails systematically altering the training dataset by removing specific samples to observe resulting changes within the model. However, retraining the model for each altered dataset presents a significant computational challenge, given the need to perform this operation for every dataset variation. In this paper, we introduce an efficient framework for assessing data impact, comprising offline training and online evaluation stages. During the offline training phase, we approximate the influence of training data on the target model through a distilled synset, formulated as a reversed gradient matching problem. For online evaluation, we expedite the leave-one-out process using the synset, which is then utilized to compute the attribution matrix based on the evaluation objective. Experimental evaluations, including training data attribution and assessments of data quality, demonstrate that our proposed method achieves comparable model behavior evaluation while significantly speeding up the process compared to the direct retraining method. | [
"['Jingwen Ye' 'Ruonan Yu' 'Songhua Liu' 'Xinchao Wang']"
]
|
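The distilled synset above is optimized through gradient matching. The PyTorch sketch below shows a generic gradient-matching objective with a simple cosine distance; the function name, the loss form, and its direction are illustrative assumptions, not the paper's reversed formulation.

```python
import torch
import torch.nn.functional as F

def gradient_matching_loss(model, real_batch, syn_images, syn_labels):
    """Distance between the gradients induced by a real batch and by a
    synthetic set (hypothetical simplification): the synthetic set is
    optimized so that training on it mimics training on the real data.
    """
    real_x, real_y = real_batch
    params = [p for p in model.parameters() if p.requires_grad]

    g_real = torch.autograd.grad(
        F.cross_entropy(model(real_x), real_y), params)
    g_syn = torch.autograd.grad(
        F.cross_entropy(model(syn_images), syn_labels), params,
        create_graph=True)  # keep the graph so the loss backprops into syn_images

    # Sum of (1 - cosine similarity) over parameter tensors.
    return sum(1 - F.cosine_similarity(a.flatten(), b.flatten(), dim=0)
               for a, b in zip(g_real, g_syn))
```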
null | null | 2404.14016 | null | null | http://arxiv.org/pdf/2404.14016v1 | 2024-04-22T09:29:14Z | 2024-04-22T09:29:14Z | Ungeneralizable Examples | The training of contemporary deep learning models heavily relies on publicly available data, posing a risk of unauthorized access to online data and raising concerns about data privacy. Current approaches to creating unlearnable data involve incorporating small, specially designed noises, but these methods strictly limit data usability, overlooking its potential usage in authorized scenarios. In this paper, we extend the concept of unlearnable data to conditional data learnability and introduce **U**n**G**eneralizable **E**xamples (UGEs). UGEs exhibit learnability for authorized users while maintaining unlearnability for potential hackers. The protector defines the authorized network and optimizes UGEs to match the gradients of the original data and its ungeneralizable version, ensuring learnability. To prevent unauthorized learning, UGEs are trained by maximizing a designated distance loss in a common feature space. Additionally, to further safeguard the authorized side from potential attacks, we introduce additional undistillation optimization. Experimental results on multiple datasets and various networks demonstrate that the proposed UGEs framework preserves data usability while reducing training performance on hacker networks, even under different types of attacks. | [
"['Jingwen Ye' 'Xinchao Wang']"
]
|
null | null | 2404.14017 | null | null | http://arxiv.org/abs/2404.14017v1 | 2024-04-22T09:32:38Z | 2024-04-22T09:32:38Z | Hybrid Ensemble-Based Travel Mode Prediction | Travel mode choice (TMC) prediction, which can be formulated as a classification task, helps in understanding what makes citizens choose different modes of transport for individual trips. This is also a major step towards fostering sustainable transportation. As behaviour may evolve over time, we also face the question of detecting concept drift in the data. This necessitates using appropriate methods to address potential concept drift. In particular, it is necessary to decide whether batch or stream mining methods should be used to develop periodically updated TMC models. To address the challenge of the development of TMC models, we propose the novel Incremental Ensemble of Batch and Stream Models (IEBSM) method aimed at adapting travel mode choice classifiers to concept drift possibly occurring in the data. It relies on the combination of drift detectors with batch learning and stream mining models. We compare it against batch and incremental learners, including methods relying on active drift detection. Experiments with varied travel mode data sets representing both city and country levels show that the IEBSM method both detects drift in travel mode data and successfully adapts the models to evolving travel mode choice data. The method has a higher rank than batch and stream learners. | [
"['Paweł Golik' 'Maciej Grzenda' 'Elżbieta Sienkiewicz']"
]
|
null | null | 2404.14027 | null | null | http://arxiv.org/pdf/2404.14027v3 | 2024-06-12T13:43:50Z | 2024-04-22T09:43:03Z | OccFeat: Self-supervised Occupancy Feature Prediction for Pretraining
BEV Segmentation Networks | We introduce a self-supervised pretraining method, called OccFeat, for camera-only Bird's-Eye-View (BEV) segmentation networks. With OccFeat, we pretrain a BEV network via occupancy prediction and feature distillation tasks. Occupancy prediction provides a 3D geometric understanding of the scene to the model. However, the geometry learned is class-agnostic. Hence, we add semantic information to the model in the 3D space through distillation from a self-supervised pretrained image foundation model. Models pretrained with our method exhibit improved BEV semantic segmentation performance, particularly in low-data scenarios. Moreover, empirical results affirm the efficacy of integrating feature distillation with 3D occupancy prediction in our pretraining approach. Repository: https://github.com/valeoai/Occfeat | [
"['Sophia Sirko-Galouchenko' 'Alexandre Boulch' 'Spyros Gidaris'\n 'Andrei Bursuc' 'Antonin Vobecky' 'Patrick Pérez' 'Renaud Marlet']"
]
|
null | null | 2404.14033 | null | null | http://arxiv.org/pdf/2404.14033v1 | 2024-04-22T09:50:11Z | 2024-04-22T09:50:11Z | Apodotiko: Enabling Efficient Serverless Federated Learning in
Heterogeneous Environments | Federated Learning (FL) is an emerging machine learning paradigm that enables the collaborative training of a shared global model across distributed clients while keeping the data decentralized. Recent works on designing systems for efficient FL have shown that utilizing serverless computing technologies, particularly Function-as-a-Service (FaaS) for FL, can enhance resource efficiency, reduce training costs, and alleviate the complex infrastructure management burden on data holders. However, current serverless FL systems still suffer from the presence of stragglers, i.e., slow clients that impede the collaborative training process. While strategies aimed at mitigating stragglers in these systems have been proposed, they overlook the diverse hardware resource configurations among FL clients. To this end, we present Apodotiko, a novel asynchronous training strategy designed for serverless FL. Our strategy incorporates a scoring mechanism that evaluates each client's hardware capacity and dataset size to intelligently prioritize and select clients for each training round, thereby minimizing the effects of stragglers on system performance. We comprehensively evaluate Apodotiko across diverse datasets, considering a mix of CPU and GPU clients, and compare its performance against five other FL training strategies. Results from our experiments demonstrate that Apodotiko outperforms other FL training strategies, achieving an average speedup of 2.75x and a maximum speedup of 7.03x. Furthermore, our strategy significantly reduces cold starts by a factor of four on average, demonstrating suitability in serverless environments. | [
"['Mohak Chadha' 'Alexander Jensen' 'Jianfeng Gu' 'Osama Abboud'\n 'Michael Gerndt']"
]
|
null | null | 2404.14047 | null | null | http://arxiv.org/pdf/2404.14047v1 | 2024-04-22T10:03:03Z | 2024-04-22T10:03:03Z | How Good Are Low-bit Quantized LLaMA3 Models? An Empirical Study | Meta's LLaMA family has become one of the most powerful open-source Large Language Model (LLM) series. Notably, LLaMA3 models have recently been released and achieve impressive performance across various domains with super-large-scale pre-training on over 15T tokens of data. Given the wide application of low-bit quantization for LLMs in resource-limited scenarios, we explore LLaMA3's capabilities when quantized to low bit-widths. This exploration holds the potential to unveil new insights and challenges for low-bit quantization of LLaMA3 and other forthcoming LLMs, especially in addressing the performance degradation problems encountered in LLM compression. Specifically, we evaluate 10 existing post-training quantization and LoRA fine-tuning methods for LLaMA3 at 1-8 bits on diverse datasets to comprehensively reveal LLaMA3's low-bit quantization performance. Our experimental results indicate that LLaMA3 still suffers non-negligible degradation in these scenarios, especially at ultra-low bit-widths. This highlights the significant performance gap at low bit-widths that needs to be bridged in future developments. We expect this empirical study to prove valuable in advancing future models, pushing LLMs to lower bit-widths with higher accuracy while remaining practical. Our project is released at https://github.com/Macaronlin/LLaMA3-Quantization and quantized LLaMA3 models are released at https://huggingface.co/LLMQ. | [
"['Wei Huang' 'Xudong Ma' 'Haotong Qin' 'Xingyu Zheng' 'Chengtao Lv'\n 'Hong Chen' 'Jie Luo' 'Xiaojuan Qi' 'Xianglong Liu' 'Michele Magno']"
]
|
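As background for the quantization settings evaluated above, here is a minimal round-to-nearest (RTN) weight-quantization sketch. It is a generic baseline, not one of the ten methods from the paper, which add calibration, grouping, and error compensation on top of something like this.

```python
import torch

def quantize_rtn(w: torch.Tensor, bits: int = 4) -> torch.Tensor:
    """Per-tensor symmetric round-to-nearest (RTN) weight quantization,
    returning dequantized ("fake-quant") weights for evaluation."""
    qmax = 2 ** (bits - 1) - 1                 # e.g. 7 for signed 4-bit
    scale = w.abs().max().clamp(min=1e-8) / qmax
    q = torch.clamp(torch.round(w / scale), -qmax - 1, qmax)
    return q * scale

w = torch.randn(256, 256)
for b in (8, 4, 2):
    err = (w - quantize_rtn(w, bits=b)).abs().mean()
    print(b, float(err))                       # error grows as bit-width shrinks
```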
null | null | 2404.14061 | null | null | http://arxiv.org/pdf/2404.14061v2 | 2024-04-25T06:40:22Z | 2024-04-22T10:19:02Z | FedTAD: Topology-aware Data-free Knowledge Distillation for Subgraph
Federated Learning | Subgraph federated learning (subgraph-FL) is a new distributed paradigm that facilitates the collaborative training of graph neural networks (GNNs) by multi-client subgraphs. Unfortunately, a significant challenge of subgraph-FL arises from subgraph heterogeneity, which stems from node and topology variation and impairs the performance of the global GNN. Despite various studies, the impact mechanism of subgraph heterogeneity has not yet been thoroughly investigated. To this end, we decouple node and topology variation, revealing that they correspond to differences in label distribution and structure homophily. Remarkably, these variations lead to significant differences in the class-wise knowledge reliability of multiple local GNNs, misguiding the model aggregation to varying degrees. Building on this insight, we propose a topology-aware data-free knowledge distillation technique (FedTAD), which enhances reliable knowledge transfer from the local models to the global model. Extensive experiments on six public datasets consistently demonstrate the superiority of FedTAD over state-of-the-art baselines. | [
"['Yinlin Zhu' 'Xunkai Li' 'Zhengyu Wu' 'Di Wu' 'Miao Hu' 'Rong-Hua Li']"
]
|
null | null | 2404.14062 | null | null | http://arxiv.org/pdf/2404.14062v1 | 2024-04-22T10:19:16Z | 2024-04-22T10:19:16Z | GatedLexiconNet: A Comprehensive End-to-End Handwritten Paragraph Text
Recognition System | The Handwritten Text Recognition problem has been a challenge for researchers for the last few decades, especially in the domain of computer vision, a subdomain of pattern recognition. Variability of texts amongst writers, cursiveness, and different font styles of handwritten texts with degradation of historical text images make it a challenging problem. Recognizing scanned document images in neural network-based systems typically involves a two-step approach: segmentation and recognition. However, this method has several drawbacks. These shortcomings encompass challenges in identifying text regions, analyzing layout diversity within pages, and establishing accurate ground truth segmentation. Consequently, these processes are prone to errors, leading to bottlenecks in achieving high recognition accuracies. Thus, in this study, we present an end-to-end paragraph recognition system that incorporates internal line segmentation and an encoder based on gated convolutional layers. Gating is a mechanism that controls the flow of information and allows the adaptive selection of the most relevant features in handwritten text recognition models. The attention module plays an important role in performing internal line segmentation, allowing the page to be processed line-by-line. During the decoding step, we have integrated a connectionist temporal classification-based word beam search decoder as a post-processing step. In this work, we have extended the existing LexiconNet by carefully applying and utilizing gated convolutional layers in the existing deep neural network. Our results at line and page levels also favour our new GatedLexiconNet. This study reported character error rates of 2.27% on IAM, 0.9% on RIMES, and 2.13% on READ-2016, and word error rates of 5.73% on IAM, 2.76% on RIMES, and 6.52% on READ-2016 datasets. | [
"['Lalita Kumari' 'Sukhdeep Singh' 'Vaibhav Varish Singh Rathore'\n 'Anuj Sharma']"
]
|
null | null | 2404.14063 | null | null | http://arxiv.org/abs/2404.14063v1 | 2024-04-22T10:20:41Z | 2024-04-22T10:20:41Z | LVNS-RAVE: Diversified audio generation with RAVE and Latent Vector
Novelty Search | Evolutionary Algorithms and Generative Deep Learning have been two of the most powerful tools for sound generation tasks. However, they have limitations: Evolutionary Algorithms require complicated designs, posing challenges in control and achieving realistic sound generation. Generative Deep Learning models often copy from the dataset and lack creativity. In this paper, we propose LVNS-RAVE, a method to combine Evolutionary Algorithms and Generative Deep Learning to produce realistic and novel sounds. We use the RAVE model as the sound generator and the VGGish model as a novelty evaluator in the Latent Vector Novelty Search (LVNS) algorithm. The reported experiments show that the method can successfully generate diversified, novel audio samples under different mutation setups using different pre-trained RAVE models. The characteristics of the generation process can be easily controlled with the mutation parameters. The proposed algorithm can be a creative tool for sound artists and musicians. | [
"['Jinyue Guo' 'Anna-Maria Christodoulou' 'Balint Laczko' 'Kyrre Glette']"
]
|
null | null | 2404.14064 | null | null | http://arxiv.org/pdf/2404.14064v2 | 2024-06-21T14:12:54Z | 2024-04-22T10:21:41Z | Multi-view Disentanglement for Reinforcement Learning with Multiple
Cameras | The performance of image-based Reinforcement Learning (RL) agents can vary depending on the position of the camera used to capture the images. Training on multiple cameras simultaneously, including a first-person egocentric camera, can leverage information from different camera perspectives to improve the performance of RL. However, hardware constraints may limit the availability of multiple cameras in real-world deployment. Additionally, cameras may become damaged in the real world, preventing access to all cameras that were used during training. To overcome these hardware constraints, we propose Multi-View Disentanglement (MVD), which uses multiple cameras to learn a policy that is robust to a reduction in the number of cameras, generalising to any single camera from the training set. Our approach is a self-supervised auxiliary task for RL that learns a disentangled representation from multiple cameras, with a shared representation that is aligned across all cameras to allow generalisation to a single camera, and a private representation that is camera-specific. We show experimentally that an RL agent trained on a single third-person camera is unable to learn an optimal policy in many control tasks, but our approach, benefiting from multiple cameras during training, is able to solve the task using only the same single third-person camera. | [
"['Mhairi Dunion' 'Stefano V. Albrecht']"
]
|
null | null | 2404.14068 | null | null | http://arxiv.org/pdf/2404.14068v1 | 2024-04-22T10:26:49Z | 2024-04-22T10:26:49Z | Holistic Safety and Responsibility Evaluations of Advanced AI Models | Safety and responsibility evaluations of advanced AI models are a critical but developing field of research and practice. In the development of Google DeepMind's advanced AI models, we innovated on and applied a broad set of approaches to safety evaluation. In this report, we summarise and share elements of our evolving approach as well as lessons learned for a broad audience. Key lessons learned include: First, theoretical underpinnings and frameworks are invaluable to organise the breadth of risk domains, modalities, forms, metrics, and goals. Second, theory and practice of safety evaluation development each benefit from collaboration to clarify goals, methods and challenges, and facilitate the transfer of insights between different stakeholders and disciplines. Third, similar key methods, lessons, and institutions apply across the range of concerns in responsibility and safety - including established and emerging harms. For this reason, it is important that the wide range of actors across the safety evaluation and safety research communities work together to develop, refine and implement novel evaluation approaches and best practices, rather than operating in silos. The report concludes by outlining the clear need to rapidly advance the science of evaluations, to integrate new evaluations into the development and governance of AI, to establish scientifically-grounded norms and standards, and to promote a robust evaluation ecosystem. | [
"['Laura Weidinger' 'Joslyn Barnhart' 'Jenny Brennan'\n 'Christina Butterfield' 'Susie Young' 'Will Hawkins'\n 'Lisa Anne Hendricks' 'Ramona Comanescu' 'Oscar Chang' 'Mikel Rodriguez'\n 'Jennifer Beroshi' 'Dawn Bloxwich' 'Lev Proleev' 'Jilin Chen'\n 'Sebastian Farquhar' 'Lewis Ho' 'Iason Gabriel' 'Allan Dafoe'\n 'William Isaac']"
]
|
null | null | 2404.14073 | null | null | http://arxiv.org/pdf/2404.14073v1 | 2024-04-22T10:34:58Z | 2024-04-22T10:34:58Z | Towards Robust Trajectory Representations: Isolating Environmental
Confounders with Causal Learning | Trajectory modeling refers to characterizing human movement behavior, serving as a pivotal step in understanding mobility patterns. Nevertheless, existing studies typically ignore the confounding effects of geospatial context, leading to the acquisition of spurious correlations and limited generalization capabilities. To bridge this gap, we initially formulate a Structural Causal Model (SCM) to decipher the trajectory representation learning process from a causal perspective. Building upon the SCM, we further present a Trajectory modeling framework (TrajCL) based on Causal Learning, which leverages the backdoor adjustment theory as an intervention tool to eliminate the spurious correlations between geospatial context and trajectories. Extensive experiments on two real-world datasets verify that TrajCL markedly enhances performance in trajectory classification tasks while showcasing superior generalization and interpretability. | [
"['Kang Luo' 'Yuanshao Zhu' 'Wei Chen' 'Kun Wang' 'Zhengyang Zhou'\n 'Sijie Ruan' 'Yuxuan Liang']"
]
|
null | null | 2404.14076 | null | null | http://arxiv.org/pdf/2404.14076v2 | 2024-07-15T08:45:00Z | 2024-04-22T10:45:59Z | Towards noise contrastive estimation with soft targets for conditional
models | Soft targets combined with the cross-entropy loss have been shown to improve the generalization performance of deep neural networks on supervised classification tasks. The standard cross-entropy loss, however, assumes data to be categorically distributed, which may often not be the case in practice. In contrast, InfoNCE does not rely on such an explicit assumption but instead implicitly estimates the true conditional through negative sampling. Unfortunately, it cannot be combined with soft targets in its standard formulation, hindering its use in combination with sophisticated training strategies. In this paper, we address this limitation by proposing a loss function that is compatible with probabilistic targets. Our new soft target InfoNCE loss is conceptually simple, efficient to compute, and can be motivated through the framework of noise contrastive estimation. Using a toy example, we demonstrate shortcomings of the categorical distribution assumption of cross-entropy, and discuss implications of sampling from soft distributions. We observe that soft target InfoNCE performs on par with strong soft target cross-entropy baselines and outperforms hard target NLL and InfoNCE losses on popular benchmarks, including ImageNet. Finally, we provide a simple implementation of our loss, geared towards supervised classification and fully compatible with deep classification models trained with cross-entropy. | [
"['Johannes Hugger' 'Virginie Uhlmann']"
]
|
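To illustrate the general flavor of combining soft targets with an InfoNCE-style objective, here is a minimal PyTorch sketch: hard positive indices are replaced by a probability vector over candidates. This is a generic construction assumed for illustration, not the paper's derived loss.

```python
import torch
import torch.nn.functional as F

def soft_target_contrastive_loss(logits, soft_targets):
    """Cross-entropy between a soft target distribution and the softmax
    over similarity logits (one row per anchor, one column per candidate)."""
    log_probs = F.log_softmax(logits, dim=-1)
    return -(soft_targets * log_probs).sum(dim=-1).mean()

logits = torch.randn(8, 100)                      # 8 anchors, 100 candidates
targets = F.softmax(torch.randn(8, 100), dim=-1)  # probabilistic targets
print(soft_target_contrastive_loss(logits, targets))
```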
null | null | 2404.14107 | null | null | http://arxiv.org/pdf/2404.14107v1 | 2024-04-22T11:52:23Z | 2024-04-22T11:52:23Z | PGNAA Spectral Classification of Aluminium and Copper Alloys with
Machine Learning | In this paper, we explore the optimization of metal recycling with a focus on real-time differentiation between alloys of copper and aluminium. Spectral data, obtained through Prompt Gamma Neutron Activation Analysis (PGNAA), is utilized for classification. The study compares data from two detectors, cerium bromide (CeBr$_{3}$) and high purity germanium (HPGe), considering their energy resolution and sensitivity. We test various data generation, preprocessing, and classification methods, with Maximum Likelihood Classifier (MLC) and Conditional Variational Autoencoder (CVAE) yielding the best results. The study also highlights the impact of different detector types on classification accuracy, with CeBr$_{3}$ excelling in short measurement times and HPGe performing better in longer durations. The findings suggest the importance of selecting the appropriate detector and methodology based on specific application requirements. | [
"['Henrik Folz' 'Joshua Henjes' 'Annika Heuer' 'Joscha Lahl'\n 'Philipp Olfert' 'Bjarne Seen' 'Sebastian Stabenau' 'Kai Krycki'\n 'Markus Lange-Hegermann' 'Helmand Shayan']"
]
|
null | null | 2404.14146 | null | null | http://arxiv.org/pdf/2404.14146v3 | 2024-05-05T18:51:05Z | 2024-04-22T12:55:04Z | Physics-based reward driven image analysis in microscopy | The rise of electron microscopy has expanded our ability to acquire nanometer and atomically resolved images of complex materials. The resulting vast datasets are typically analyzed by human operators, an intrinsically challenging process due to the multiple possible analysis steps and the corresponding need to build and optimize complex analysis workflows. We present a methodology based on the concept of a Reward Function coupled with Bayesian Optimization, to optimize image analysis workflows dynamically. The Reward Function is engineered to closely align with the experimental objectives and broader context and is quantifiable upon completion of the analysis. Here, cross-section, high-angle annular dark field (HAADF) images of ion-irradiated $(Y, Dy)Ba_2Cu_3O_{7-\delta}$ thin-films were used as a model system. The reward functions were formed based on the expected materials density and atomic spacings and used to drive multi-objective optimization of the classical Laplacian-of-Gaussian (LoG) method. These results can be benchmarked against the DCNN segmentation. This optimized LoG* compares favorably against DCNN in the presence of additional noise. We further extend the reward function approach towards the identification of partially-disordered regions, creating a physics-driven reward function and action space of high-dimensional clustering. We posit that, with a correct definition, the reward function approach allows real-time optimization of complex analysis workflows at much higher speeds and lower computational costs than classical DCNN-based inference, ensuring the attainment of results that are both precise and aligned with the human-defined objectives. | [
"['Kamyar Barakati' 'Hui Yuan' 'Amit Goyal' 'Sergei V. Kalinin']"
]
|
null | null | 2404.14161 | null | null | http://arxiv.org/pdf/2404.14161v2 | 2024-05-25T08:10:27Z | 2024-04-22T13:20:01Z | Tensor-Valued Time and Inference Path Optimization in Differential
Equation-Based Generative Modeling | In the field of generative modeling based on differential equations, conventional methods utilize scalar-valued time during both the training and inference phases. This work introduces, for the first time, a tensor-valued time that expands the conventional scalar-valued time into multiple dimensions. Additionally, we propose a novel path optimization problem designed to adaptively determine multidimensional inference trajectories using a predetermined differential equation solver and a fixed number of function evaluations. Our approach leverages the stochastic interpolant framework, simulation dynamics, and adversarial training to optimize the inference pathway. Notably, incorporating tensor-valued time during training improves some models' inference performance, even without path optimization. When the adaptive, multidimensional path derived from our optimization process is employed, further performance gains are achieved despite the fixed solver configurations. The introduction of tensor-valued time not only enhances the efficiency of models but also opens new avenues for exploration in training and inference methodologies, highlighting the potential of adaptive multidimensional paths. | [
"['Dohoon Lee' 'Kyogu Lee']"
]
|
null | null | 2404.14164 | null | null | http://arxiv.org/pdf/2404.14164v1 | 2024-04-22T13:26:42Z | 2024-04-22T13:26:42Z | New Solutions Based on the Generalized Eigenvalue Problem for the Data
Collaboration Analysis | In recent years, the accumulation of data across various institutions has garnered attention for the technology of confidential data analysis, which improves analytical accuracy by sharing data between multiple institutions while protecting sensitive information. Among these methods, Data Collaboration Analysis (DCA) is noted for its efficiency in terms of computational cost and communication load, facilitating data sharing and analysis across different institutions while safeguarding confidential information. However, existing optimization problems for determining the necessary collaborative functions have faced challenges, such as the optimal solution for the collaborative representation often being a zero matrix and the difficulty in understanding the process of deriving solutions. This research addresses these issues by formulating the optimization problem through the segmentation of matrices into column vectors and proposing a solution method based on the generalized eigenvalue problem. Additionally, we demonstrate methods for constructing collaborative functions more effectively through weighting and the selection of efficient algorithms suited to specific situations. Experiments using real-world datasets have shown that our proposed formulation and solution for the collaborative function optimization problem achieve superior predictive accuracy compared to existing methods. | [
"['Yuta Kawakami' 'Yuichi Takano' 'Akira Imakura']"
]
|
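The computational core the method above builds on is the generalized symmetric eigenvalue problem $Av = \lambda Bv$, which SciPy solves directly without forming $B^{-1}A$. A minimal sketch follows; the matrices are random stand-ins, not an actual Data Collaboration construction.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 10))
A = X.T @ X                      # symmetric (positive semi-definite)
B = np.eye(10) + 0.1 * A         # symmetric positive definite

eigvals, eigvecs = eigh(A, B)    # solves A v = lambda B v
v = eigvecs[:, -1]               # eigenvector of the largest eigenvalue
assert np.allclose(A @ v, eigvals[-1] * (B @ v))  # defining relation holds
```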
null | null | 2404.14188 | null | null | http://arxiv.org/pdf/2404.14188v1 | 2024-04-22T13:58:36Z | 2024-04-22T13:58:36Z | Experimental Validation of Ultrasound Beamforming with End-to-End Deep
Learning for Single Plane Wave Imaging | Ultrafast ultrasound imaging insonifies a medium with one or a combination of a few plane waves at different beam-steered angles instead of many focused waves. It can achieve much higher frame rates, but often at the cost of reduced image quality. Deep learning approaches have been proposed to mitigate this disadvantage, in particular for single plane wave imaging. Predominantly, image-to-image post-processing networks or fully learned data-to-image neural networks are used. Both construct their mappings in a purely data-driven fashion and require expressive networks and large amounts of training data to perform well. In contrast, we consider data-to-image networks which incorporate conventional image formation techniques as differentiable layers in the network architecture. This allows for end-to-end training with small amounts of training data. In this work, using f-k migration as an image formation layer is evaluated in depth with experimental data. We acquired a data collection designed for benchmarking data-driven plane wave imaging approaches using a realistic breast mimicking phantom and an ultrasound calibration phantom. The evaluation considers global and local image similarity measures and contrast, resolution and lesion detectability analysis. The results show that the proposed network architecture is capable of improving the image quality of single plane wave images on all evaluation metrics. Furthermore, these image quality improvements can be achieved with surprisingly small amounts of training data. | [
"['Ryan A. L. Schoop' 'Gijs Hendriks' 'Tristan van Leeuwen'\n 'Chris L. de Korte' 'Felix Lucka']"
]
|
null | null | 2404.14197 | null | null | http://arxiv.org/pdf/2404.14197v2 | 2024-06-12T09:01:19Z | 2024-04-22T14:06:35Z | SOFTS: Efficient Multivariate Time Series Forecasting with Series-Core
Fusion | Multivariate time series forecasting plays a crucial role in various fields such as finance, traffic management, energy, and healthcare. Recent studies have highlighted the advantages of channel independence to resist distribution drift but neglect channel correlations, limiting further enhancements. Several methods utilize mechanisms like attention or mixer to address this by capturing channel correlations, but they either introduce excessive complexity or rely too heavily on the correlation to achieve satisfactory results under distribution drifts, particularly with a large number of channels. Addressing this gap, this paper presents an efficient MLP-based model, the Series-cOre Fused Time Series forecaster (SOFTS), which incorporates a novel STar Aggregate-Redistribute (STAR) module. Unlike traditional approaches that manage channel interactions through distributed structures, e.g., attention, STAR employs a centralized strategy to improve efficiency and reduce reliance on the quality of each channel. It aggregates all series to form a global core representation, which is then dispatched and fused with individual series representations to facilitate channel interactions effectively. SOFTS achieves superior performance over existing state-of-the-art methods with only linear complexity. The broad applicability of the STAR module across different forecasting models is also demonstrated empirically. For further research and development, we have made our code publicly available at https://github.com/Secilia-Cxy/SOFTS. | [
"['Lu Han' 'Xu-Yang Chen' 'Han-Jia Ye' 'De-Chuan Zhan']"
]
|
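Schematically, the aggregate-dispatch-fuse pattern described above can be written in a few lines of PyTorch. The sketch below uses mean pooling and single linear layers as guesses; the actual STAR module in the repository linked above may differ in its pooling and layer details.

```python
import torch
import torch.nn as nn

class STARSketch(nn.Module):
    """Aggregate all channel (series) embeddings into one global core,
    then dispatch the core back and fuse it with each channel."""
    def __init__(self, d_model: int):
        super().__init__()
        self.core_gen = nn.Linear(d_model, d_model)
        self.fuse = nn.Linear(2 * d_model, d_model)

    def forward(self, h):                                  # h: (B, channels, D)
        core = self.core_gen(h).mean(dim=1, keepdim=True)  # aggregate: (B, 1, D)
        core = core.expand(-1, h.size(1), -1)              # dispatch to channels
        return self.fuse(torch.cat([h, core], dim=-1))     # fuse per channel

h = torch.randn(4, 7, 64)          # 4 samples, 7 series
print(STARSketch(64)(h).shape)     # torch.Size([4, 7, 64])
```

The centralized core keeps the interaction cost linear in the number of channels, in contrast to the quadratic cost of pairwise attention.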
null | null | 2404.14202 | null | null | http://arxiv.org/pdf/2404.14202v2 | 2024-05-24T09:58:55Z | 2024-04-22T14:11:54Z | An Adaptive Approach for Infinitely Many-armed Bandits under Generalized
Rotting Constraints | In this study, we consider the infinitely many-armed bandit problems in a rested rotting setting, where the mean reward of an arm may decrease with each pull, while otherwise, it remains unchanged. We explore two scenarios regarding the rotting of rewards: one in which the cumulative amount of rotting is bounded by $V_T$, referred to as the slow-rotting case, and the other in which the cumulative number of rotting instances is bounded by $S_T$, referred to as the abrupt-rotting case. To address the challenge posed by rotting rewards, we introduce an algorithm that utilizes UCB with an adaptive sliding window, designed to manage the bias and variance trade-off arising due to rotting rewards. Our proposed algorithm achieves tight regret bounds for both slow and abrupt rotting scenarios. Lastly, we demonstrate the performance of our algorithm using numerical experiments. | [
"['Jung-hun Kim' 'Milan Vojnovic' 'Se-Young Yun']"
]
|
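A sliding-window UCB is easy to sketch: only recent rewards inform each arm's estimate, so the estimates can track rotting means. The fixed window below is a simplification for illustration; the paper's algorithm adapts the window to manage the bias-variance trade-off.

```python
import math
import random

def sliding_window_ucb(arms, horizon, window=200, c=2.0, seed=0):
    """UCB where each arm's mean is estimated from its last `window` pulls.
    `arms` is a list of callables that sample a reward given an RNG."""
    rng = random.Random(seed)
    history = [[] for _ in arms]
    total = 0.0
    for t in range(1, horizon + 1):
        def ucb(i):
            h = history[i][-window:]
            if not h:
                return float("inf")       # pull every arm at least once
            return sum(h) / len(h) + math.sqrt(c * math.log(t) / len(h))
        i = max(range(len(arms)), key=ucb)
        r = arms[i](rng)
        history[i].append(r)
        total += r
    return total

# Arm 0 rots after 300 pulls; arm 1 is steady at 0.5.
pulls = [0]
def rotting(rng):
    pulls[0] += 1
    return (0.9 if pulls[0] <= 300 else 0.1) + rng.gauss(0, 0.05)

print(sliding_window_ucb([rotting, lambda rng: 0.5 + rng.gauss(0, 0.05)],
                         horizon=2000))
```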
null | null | 2404.14212 | null | null | http://arxiv.org/pdf/2404.14212v2 | 2024-04-26T08:05:37Z | 2024-04-22T14:21:37Z | Toward Routing River Water in Land Surface Models with Recurrent Neural
Networks | Machine learning is playing an increasing role in hydrology, supplementing or replacing physics-based models. One notable example is the use of recurrent neural networks (RNNs) for forecasting streamflow given observed precipitation and geographic characteristics. Training of such a model over the continental United States has demonstrated that a single set of model parameters can be used across independent catchments, and that RNNs can outperform physics-based models. In this work, we take a next step and study the performance of RNNs for river routing in land surface models (LSMs). Instead of observed precipitation, the LSM-RNN uses instantaneous runoff calculated from physics-based models as an input. We train the model with data from river basins spanning the globe and test it in streamflow hindcasts. The model demonstrates skill at generalization across basins (predicting streamflow in unseen catchments) and across time (predicting streamflow during years not used in training). We compare the predictions from the LSM-RNN to an existing physics-based model calibrated with a similar dataset and find that the LSM-RNN outperforms the physics-based model. Our results give further evidence that RNNs are effective for global streamflow prediction from runoff inputs and motivate the development of complete routing models that can capture nested sub-basin connections. | [
"['Mauricio Lima' 'Katherine Deck' 'Oliver R. A. Dunbar' 'Tapio Schneider']"
]
|
null | null | 2404.14233 | null | null | http://arxiv.org/pdf/2404.14233v1 | 2024-04-22T14:46:10Z | 2024-04-22T14:46:10Z | Detecting and Mitigating Hallucination in Large Vision Language Models
via Fine-Grained AI Feedback | The rapidly developing Large Vision Language Models (LVLMs) have shown notable capabilities on a range of multi-modal tasks, but still face the hallucination phenomenon, where the generated texts do not align with the given contexts, significantly restricting the usage of LVLMs. Most previous work detects and mitigates hallucination at the coarse-grained level or requires expensive annotation (e.g., labeling by proprietary models or human experts). To address these issues, we propose detecting and mitigating hallucinations in LVLMs via fine-grained AI feedback. The basic idea is that we generate a small sentence-level hallucination annotation dataset with proprietary models, on which we train a hallucination detection model that performs sentence-level hallucination detection, covering primary hallucination types (i.e., object, attribute, and relationship). Then, we propose a detect-then-rewrite pipeline to automatically construct a preference dataset for training the hallucination-mitigating model. Furthermore, we propose differentiating the severity of hallucinations and introduce Hallucination Severity-Aware Direct Preference Optimization (HSA-DPO) for mitigating hallucination in LVLMs by incorporating the severity of hallucinations into preference learning. Extensive experiments demonstrate the effectiveness of our method. | [
"['Wenyi Xiao' 'Ziwei Huang' 'Leilei Gan' 'Wanggui He' 'Haoyuan Li'\n 'Zhelun Yu' 'Hao Jiang' 'Fei Wu' 'Linchao Zhu']"
]
|
null | null | 2404.14240 | null | null | http://arxiv.org/pdf/2404.14240v1 | 2024-04-22T14:49:46Z | 2024-04-22T14:49:46Z | Collaborative Filtering Based on Diffusion Models: Unveiling the
Potential of High-Order Connectivity | A recent study has shown that diffusion models are well-suited for modeling the generative process of user-item interactions in recommender systems due to their denoising nature. However, existing diffusion model-based recommender systems do not explicitly leverage high-order connectivities that contain crucial collaborative signals for accurate recommendations. Addressing this gap, we propose CF-Diff, a new diffusion model-based collaborative filtering (CF) method, which is capable of making full use of collaborative signals along with multi-hop neighbors. Specifically, the forward-diffusion process adds random noise to user-item interactions, while the reverse-denoising process accommodates our own learning model, named cross-attention-guided multi-hop autoencoder (CAM-AE), to gradually recover the original user-item interactions. CAM-AE consists of two core modules: 1) the attention-aided AE module, responsible for precisely learning latent representations of user-item interactions while preserving the model's complexity at manageable levels, and 2) the multi-hop cross-attention module, which judiciously harnesses high-order connectivity information to capture enhanced collaborative signals. Through comprehensive experiments on three real-world datasets, we demonstrate that CF-Diff is (a) Superior: outperforming benchmark recommendation methods, achieving remarkable gains up to 7.29% compared to the best competitor, (b) Theoretically-validated: reducing computations while ensuring that the embeddings generated by our model closely approximate those from the original cross-attention, and (c) Scalable: proving the computational efficiency that scales linearly with the number of users or items. | [
"['Yu Hou' 'Jin-Duk Park' 'Won-Yong Shin']"
]
|
null | null | 2404.14243 | null | null | http://arxiv.org/pdf/2404.14243v1 | 2024-04-22T14:56:36Z | 2024-04-22T14:56:36Z | Turbo-CF: Matrix Decomposition-Free Graph Filtering for Fast
Recommendation | A series of graph filtering (GF)-based collaborative filtering (CF) methods showcases state-of-the-art recommendation accuracy by using a low-pass filter (LPF) without a training process. However, conventional GF-based CF approaches mostly perform matrix decomposition on the item-item similarity graph to realize the ideal LPF, which results in a non-trivial computational cost and thus makes them less practical in scenarios where rapid recommendations are essential. In this paper, we propose Turbo-CF, a GF-based CF method that is both training-free and matrix decomposition-free. Turbo-CF employs a polynomial graph filter to circumvent the issue of expensive matrix decompositions, enabling us to make full use of modern computer hardware components (i.e., GPUs). Specifically, Turbo-CF first constructs an item-item similarity graph whose edge weights are effectively regulated. Then, our own polynomial LPFs are designed to retain only low-frequency signals without explicit matrix decompositions. We demonstrate that Turbo-CF is extremely fast yet accurate, achieving a runtime of less than 1 second on real-world benchmark datasets while achieving recommendation accuracies comparable to those of the best competitors. | [
"['Jin-Duk Park' 'Yong-Min Shin' 'Won-Yong Shin']"
]
|
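The decomposition-free recipe above amounts to applying a matrix polynomial of a normalized item-item similarity to the interaction matrix. A minimal NumPy sketch follows; the coefficients and the symmetric normalization are arbitrary choices for illustration, not Turbo-CF's tuned filter.

```python
import numpy as np

def polynomial_lpf_scores(R, coeffs=(1.0, 0.5, 0.25)):
    """Score items via sum_k coeffs[k] * R @ P^k, where P is a normalized
    item-item similarity graph. No eigendecomposition is ever formed."""
    S = R.T @ R                                  # raw item co-occurrence
    d = np.maximum(S.sum(axis=1), 1e-12)
    P = S / np.sqrt(np.outer(d, d))              # symmetric normalization
    out = np.zeros_like(R, dtype=float)
    RPk = R.astype(float)
    for c in coeffs:                             # accumulate c_k * R P^k
        out += c * RPk
        RPk = RPk @ P
    return out                                   # rank unseen items per user

R = (np.random.rand(100, 50) < 0.05).astype(float)  # toy implicit feedback
scores = polynomial_lpf_scores(R)
```

Everything here is dense matrix multiplication, which is exactly the workload GPUs accelerate well, hence the sub-second runtimes claimed above.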
null | null | 2404.14244 | null | null | http://arxiv.org/pdf/2404.14244v1 | 2024-04-22T14:57:17Z | 2024-04-22T14:57:17Z | AI-Generated Faces in the Real World: A Large-Scale Case Study of
Twitter Profile Images | Recent advances in the field of generative artificial intelligence (AI) have blurred the lines between authentic and machine-generated content, making it almost impossible for humans to distinguish between such media. One notable consequence is the use of AI-generated images for fake profiles on social media. While several types of disinformation campaigns and similar incidents have been reported in the past, a systematic analysis has been lacking. In this work, we conduct the first large-scale investigation of the prevalence of AI-generated profile pictures on Twitter. We tackle the challenges of a real-world measurement study by carefully integrating various data sources and designing a multi-stage detection pipeline. Our analysis of nearly 15 million Twitter profile pictures shows that 0.052% were artificially generated, confirming their notable presence on the platform. We comprehensively examine the characteristics of these accounts and their tweet content, and uncover patterns of coordinated inauthentic behavior. The results also reveal several motives, including spamming and political amplification campaigns. Our research reaffirms the need for effective detection and mitigation strategies to cope with the potential negative effects of generative AI in the future. | [
"['Jonas Ricker' 'Dennis Assenmacher' 'Thorsten Holz' 'Asja Fischer'\n 'Erwin Quiring']"
]
|
null | null | 2404.14265 | null | null | http://arxiv.org/pdf/2404.14265v1 | 2024-04-22T15:12:47Z | 2024-04-22T15:12:47Z | Deep Learning as Ricci Flow | Deep neural networks (DNNs) are powerful tools for approximating the distribution of complex data. It is known that data passing through a trained DNN classifier undergoes a series of geometric and topological simplifications. While some progress has been made toward understanding these transformations in neural networks with smooth activation functions, an understanding in the more general setting of non-smooth activation functions, such as the rectified linear unit (ReLU), which tend to perform better, is required. Here we propose that the geometric transformations performed by DNNs during classification tasks have parallels to those expected under Hamilton's Ricci flow - a tool from differential geometry that evolves a manifold by smoothing its curvature, in order to identify its topology. To illustrate this idea, we present a computational framework to quantify the geometric changes that occur as data passes through successive layers of a DNN, and use this framework to motivate a notion of `global Ricci network flow' that can be used to assess a DNN's ability to disentangle complex data geometries to solve classification problems. By training more than $1,500$ DNN classifiers of different widths and depths on synthetic and real-world data, we show that the strength of global Ricci network flow-like behaviour correlates with accuracy for well-trained DNNs, independently of depth, width and data set. Our findings motivate the use of tools from differential and discrete geometry to the problem of explainability in deep learning. | [
"['Anthony Baptista' 'Alessandro Barp' 'Tapabrata Chakraborti'\n 'Chris Harbron' 'Ben D. MacArthur' 'Christopher R. S. Banerji']"
]
|
null | null | 2404.14270 | null | null | http://arxiv.org/pdf/2404.14270v1 | 2024-04-22T15:15:50Z | 2024-04-22T15:15:50Z | What do Transformers Know about Government? | This paper investigates what insights about linguistic features and what knowledge about the structure of natural language can be obtained from the encodings in transformer language models. In particular, we explore how BERT encodes the government relation between constituents in a sentence. We use several probing classifiers, and data from two morphologically rich languages. Our experiments show that information about government is encoded across all transformer layers, but predominantly in the early layers of the model. We find that, for both languages, a small number of attention heads encode enough information about the government relations to enable us to train a classifier capable of discovering new, previously unknown types of government, never seen in the training data. Currently, data is lacking for the research community working on grammatical constructions, and government in particular. We release the Government Bank -- a dataset defining the government relations for thousands of lemmas in the languages in our experiments. | [
"['Jue Hou' 'Anisia Katinskaia' 'Lari Kotilainen'\n 'Sathianpong Trangcasanchai' 'Anh-Duc Vu' 'Roman Yangarber']"
]
|
null | null | 2404.14271 | null | null | http://arxiv.org/pdf/2404.14271v1 | 2024-04-22T15:16:59Z | 2024-04-22T15:16:59Z | Sparse Explanations of Neural Networks Using Pruned Layer-Wise Relevance
Propagation | Explainability is a key component in many applications involving deep neural networks (DNNs). However, current explanation methods for DNNs commonly leave it to the human observer to distinguish relevant explanations from spurious noise. This is not feasible anymore when going from easily human-accessible data such as images to more complex data such as genome sequences. To facilitate the accessibility of DNN outputs from such complex data and to increase explainability, we present a modification of the widely used explanation method layer-wise relevance propagation. Our approach enforces sparsity directly by pruning the relevance propagation for the different layers. Thereby, we achieve sparser relevance attributions for the input features as well as for the intermediate layers. As the relevance propagation is input-specific, we aim to prune the relevance propagation rather than the underlying model architecture. This allows pruning different neurons for different inputs and hence might be better suited to the local nature of explanation methods. To demonstrate the efficacy of our method, we evaluate it on two types of data, images and genomic sequences. We show that our modification indeed leads to noise reduction and concentrates relevance on the most important features compared to the baseline. | [
"['Paulo Yanez Sarmiento' 'Simon Witzke' 'Nadja Klein' 'Bernhard Y. Renard']"
]
|
null | null | 2404.14276 | null | null | http://arxiv.org/pdf/2404.14276v1 | 2024-04-22T15:26:24Z | 2024-04-22T15:26:24Z | A Bayesian Approach for Prioritising Driving Behaviour Investigations in
Telematic Auto Insurance Policies | Automotive insurers increasingly have access to telematic information via black-box recorders installed in the insured vehicle, and wish to identify undesirable behaviour which may signify increased risk or uninsured activities. However, identification of such behaviour with machine learning is non-trivial, and results are far from perfect, requiring human investigation to verify suspected cases. An appropriately formed priority score, generated by automated analysis of GPS data, allows underwriters to make more efficient use of their time, improving detection of the behaviour under investigation. An example of such behaviour is the use of a privately insured vehicle for commercial purposes, such as delivering meals and parcels. We first make use of trip GPS and accelerometer data, augmented by geospatial information, to train an imperfect classifier for delivery driving on a per-trip basis. We make use of a mixture of Beta-Binomial distributions to model the propensity of a policyholder to undertake trips which result in a positive classification as being drawn from either a rare high-scoring or common low-scoring group, and learn the parameters of this model using MCMC. This model provides us with a posterior probability that any policyholder will be a regular generator of automated alerts given any number of trips and alerts. This posterior probability is converted to a priority score, which was used to select the most valuable candidates for manual investigation. Testing over a 1-year period ranked policyholders by likelihood of commercial driving activity on a weekly basis. The top 0.9% have been reviewed at least once by the underwriters at the time of writing, and of those 99.4% have been confirmed as correctly identified, showing the approach has achieved a significant improvement in efficiency of human resource allocation compared to manual searching. | [
"['Mark McLeod' 'Bernardo Perez-Orozco' 'Nika Lee' 'Davide Zilli']"
]
|
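The posterior that drives the priority score above is a standard two-component mixture computation. The sketch below uses SciPy's Beta-Binomial distribution; all parameter values are made up for illustration, whereas the paper learns them from data via MCMC.

```python
from scipy.stats import betabinom

def posterior_high_group(k, n, w_high=0.01,
                         a_high=5, b_high=2, a_low=1, b_low=20):
    """P(policyholder belongs to the rare high-scoring group | k alerts
    out of n trips), under a two-component Beta-Binomial mixture."""
    like_high = betabinom.pmf(k, n, a_high, b_high)
    like_low = betabinom.pmf(k, n, a_low, b_low)
    evidence = w_high * like_high + (1 - w_high) * like_low
    return w_high * like_high / evidence

# Many alerts relative to trips pushes the posterior toward the rare group.
print(posterior_high_group(k=12, n=40))
print(posterior_high_group(k=1, n=40))
```

Ranking policyholders by this posterior (or a monotone transform of it) yields the priority score used to allocate investigator time.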
null | null | 2404.14312 | null | null | http://arxiv.org/pdf/2404.14312v3 | 2024-06-02T00:20:16Z | 2024-04-22T16:16:06Z | Structure-preserving neural networks for the regularized entropy-based
closure of the Boltzmann moment system | The main challenge of large-scale numerical simulation of radiation transport is the high memory and computation time requirements of discretization methods for kinetic equations. In this work, we derive and investigate a neural network-based approximation to the entropy closure method to accurately compute the solution of the multi-dimensional moment system with a low memory footprint and competitive computational time. We extend methods developed for the standard entropy-based closure to the context of regularized entropy-based closures. The main idea is to interpret structure-preserving neural network approximations of the regularized entropy closure as a two-stage approximation to the original entropy closure. We conduct a numerical analysis of this approximation and investigate optimal parameter choices. Our numerical experiments demonstrate that the method has a much lower memory footprint than traditional methods with competitive computation times and simulation accuracy. | [
"['Steffen Schotthöfer' 'M. Paul Laiu' 'Martin Frank' 'Cory D. Hauck']"
]
|
null | null | 2404.14319 | null | null | http://arxiv.org/pdf/2404.14319v1 | 2024-04-22T16:30:03Z | 2024-04-22T16:30:03Z | Multi-Agent Hybrid SAC for Joint SS-DSA in CRNs | Opportunistic spectrum access has the potential to increase the efficiency of spectrum utilization in cognitive radio networks (CRNs). In CRNs, both spectrum sensing and resource allocation (SSRA) are critical to maximizing system throughput while minimizing collisions of secondary users with the primary network. However, many works in dynamic spectrum access do not consider the impact of imperfect sensing information such as mis-detected channels, which the additional information available in joint SSRA can help remediate. In this work, we examine joint SSRA as an optimization which seeks to maximize a CRN's net communication rate subject to constraints on channel sensing, channel access, and transmit power. Given the non-trivial nature of the problem, we leverage multi-agent reinforcement learning to enable a network of secondary users to dynamically access unoccupied spectrum via only local test statistics, formulated under the energy detection paradigm of spectrum sensing. In doing so, we develop a novel multi-agent implementation of hybrid soft actor critic, MHSAC, based on the QMIX mixing scheme. Through experiments, we find that our SSRA algorithm, HySSRA, is successful in maximizing the CRN's utilization of spectrum resources while also limiting its interference with the primary network, and outperforms the current state-of-the-art by a wide margin. We also explore the impact of wireless variations such as coherence time on the efficacy of the system. | [
"['David R. Nickel' 'Anindya Bijoy Das' 'David J. Love'\n 'Christopher G. Brinton']"
]
|
null | null | 2404.14322 | null | null | http://arxiv.org/pdf/2404.14322v2 | 2024-05-07T13:21:19Z | 2024-04-22T16:33:06Z | A Novel Approach to Chest X-ray Lung Segmentation Using U-net and
Modified Convolutional Block Attention Module | Lung segmentation in chest X-ray images is of paramount importance as it plays a crucial role in the diagnosis and treatment of various lung diseases. This paper presents a novel approach for lung segmentation in chest X-ray images by integrating U-net with attention mechanisms. The proposed method enhances the U-net architecture by incorporating a Convolutional Block Attention Module (CBAM), which unifies three distinct attention mechanisms: channel attention, spatial attention, and pixel attention. The channel attention mechanism enables the model to concentrate on the most informative features across various channels. The spatial attention mechanism enhances the model's precision in localization by focusing on significant spatial locations. Lastly, the pixel attention mechanism empowers the model to focus on individual pixels, further refining the model's focus and thereby improving the accuracy of segmentation. The adoption of the proposed CBAM in conjunction with the U-net architecture marks a significant advancement in the field of medical imaging, with potential implications for improving diagnostic precision and patient outcomes. The efficacy of this method is validated against contemporary state-of-the-art techniques, showcasing its superiority in segmentation performance. | [
"['Mohammad Ali Labbaf Khaniki' 'Mohammad Manthouri']"
]
|
null | null | 2404.14326 | null | null | http://arxiv.org/pdf/2404.14326v1 | 2024-04-22T16:38:41Z | 2024-04-22T16:38:41Z | Machine Learning Techniques for MRI Data Processing at Expanding Scale | Imaging sites around the world generate growing amounts of medical scan data with ever more versatile and affordable technology. Large-scale studies acquire MRI for tens of thousands of participants, together with metadata ranging from lifestyle questionnaires to biochemical assays, genetic analyses and more. These large datasets encode substantial information about human health and hold considerable potential for machine learning training and analysis. This chapter examines ongoing large-scale studies and the challenge of distribution shifts between them. Transfer learning for overcoming such shifts is discussed, together with federated learning for safe access to distributed training data securely held at multiple institutions. Finally, representation learning is reviewed as a methodology for encoding embeddings that express abstract relationships in multi-modal input formats. | [
"['Taro Langner']"
]
|
null | null | 2404.14332 | null | null | http://arxiv.org/pdf/2404.14332v1 | 2024-04-22T16:47:10Z | 2024-04-22T16:47:10Z | Full Event Particle-Level Unfolding with Variable-Length Latent
Variational Diffusion | The measurements performed by particle physics experiments must account for the imperfect response of the detectors used to observe the interactions. One approach, unfolding, statistically adjusts the experimental data for detector effects. Recently, generative machine learning models have shown promise for performing unbinned unfolding in a high number of dimensions. However, all current generative approaches are limited to unfolding a fixed set of observables, making them unable to perform full-event unfolding in the variable dimensional environment of collider data. A novel modification to the variational latent diffusion model (VLD) approach to generative unfolding is presented, which allows for unfolding of high- and variable-dimensional feature spaces. The performance of this method is evaluated in the context of semi-leptonic top quark pair production at the Large Hadron Collider. | [
"['Alexander Shmakov' 'Kevin Greif' 'Michael James Fenton' 'Aishik Ghosh'\n 'Pierre Baldi' 'Daniel Whiteson']"
]
|
null | null | 2404.14358 | null | null | http://arxiv.org/pdf/2404.14358v1 | 2024-04-22T17:12:58Z | 2024-04-22T17:12:58Z | A General Continuous-Time Formulation of Stochastic ADMM and Its
Variants | Stochastic versions of the alternating direction method of multipliers (ADMM) and its variants play a key role in many modern large-scale machine learning problems. In this work, we introduce a unified algorithmic framework called generalized stochastic ADMM and investigate its continuous-time analysis. The generalized framework includes many stochastic ADMM variants such as standard, linearized and gradient-based ADMM. Our continuous-time analysis provides new insights into stochastic ADMM and its variants, and we rigorously prove that under proper scaling, the trajectory of stochastic ADMM weakly converges to the solution of a stochastic differential equation with small noise. Our analysis also provides a theoretical explanation of why the relaxation parameter should be chosen between 0 and 2. | [
"['Chris Junchi Li']"
]
|
null | null | 2404.14367 | null | null | http://arxiv.org/pdf/2404.14367v3 | 2024-06-02T22:00:42Z | 2024-04-22T17:20:18Z | Preference Fine-Tuning of LLMs Should Leverage Suboptimal, On-Policy
Data | Learning from preference labels plays a crucial role in fine-tuning large language models. There are several distinct approaches for preference fine-tuning, including supervised learning, on-policy reinforcement learning (RL), and contrastive learning. Different methods come with different implementation tradeoffs and performance differences, and existing empirical findings present different conclusions, for instance, some results show that online RL is quite important to attain good fine-tuning results, while others find (offline) contrastive or even purely supervised methods sufficient. This raises a natural question: what kind of approaches are important for fine-tuning with preference data and why? In this paper, we answer this question by performing a rigorous analysis of a number of fine-tuning techniques on didactic and full-scale LLM problems. Our main finding is that, in general, approaches that use on-policy sampling or attempt to push down the likelihood on certain responses (i.e., employ a "negative gradient") outperform offline and maximum likelihood objectives. We conceptualize our insights and unify methods that use on-policy sampling or negative gradient under a notion of mode-seeking objectives for categorical distributions. Mode-seeking objectives are able to alter probability mass on specific bins of a categorical distribution at a fast rate compared to maximum likelihood, allowing them to relocate masses across bins more effectively. Our analysis prescribes actionable insights for preference fine-tuning of LLMs and informs how data should be collected for maximal improvement. | [
"['Fahim Tajwar' 'Anikait Singh' 'Archit Sharma' 'Rafael Rafailov'\n 'Jeff Schneider' 'Tengyang Xie' 'Stefano Ermon' 'Chelsea Finn'\n 'Aviral Kumar']"
]
|
null | null | 2404.14388 | null | null | http://arxiv.org/abs/2404.14388v1 | 2024-04-22T17:46:29Z | 2024-04-22T17:46:29Z | STROOBnet Optimization via GPU-Accelerated Proximal Recurrence
Strategies | Spatiotemporal networks' observational capabilities are crucial for accurate data gathering and informed decisions across multiple sectors. This study focuses on the Spatiotemporal Ranged Observer-Observable Bipartite Network (STROOBnet), linking observational nodes (e.g., surveillance cameras) to events within defined geographical regions, enabling efficient monitoring. Using data from Real-Time Crime Camera (RTCC) systems and Calls for Service (CFS) in New Orleans, where RTCC combats rising crime amidst reduced police presence, we address the network's initial observational imbalances. Aiming for uniform observational efficacy, we propose the Proximal Recurrence approach. It outperformed traditional clustering methods like k-means and DBSCAN by offering holistic event frequency and spatial consideration, enhancing observational coverage. | [
"['Ted Edward Holmberg' 'Mahdi Abdelguerfi' 'Elias Ioup']"
]
|
null | null | 2404.14389 | null | null | http://arxiv.org/pdf/2404.14389v1 | 2024-04-22T17:50:27Z | 2024-04-22T17:50:27Z | Poisoning Attacks on Federated Learning-based Wireless Traffic
Prediction | Federated Learning (FL) offers a distributed framework to train a global control model across multiple base stations without compromising the privacy of their local network data. This makes it ideal for applications like wireless traffic prediction (WTP), which plays a crucial role in optimizing network resources, enabling proactive traffic flow management, and enhancing the reliability of downstream communication-aided applications, such as IoT devices, autonomous vehicles, and industrial automation systems. Despite its promise, the security aspects of FL-based distributed wireless systems, particularly in regression-based WTP problems, remain inadequately investigated. In this paper, we introduce a novel fake traffic injection (FTI) attack, designed to undermine the FL-based WTP system by injecting fabricated traffic distributions with minimal knowledge. We further propose a defense mechanism, termed global-local inconsistency detection (GLID), which strategically removes abnormal model parameters that deviate beyond a specific percentile range estimated through statistical methods in each dimension. Extensive experimental evaluations, performed on real-world wireless traffic datasets, demonstrate that both our attack and defense strategies significantly outperform existing baselines. | [
"['Zifan Zhang' 'Minghong Fang' 'Jiayuan Huang' 'Yuchen Liu']"
]
|
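The GLID defense sketched in the abstract above lends itself to a compact illustration. The snippet below is a simplified, assumption-laden reading of the idea — per-dimension percentile filtering of client updates before aggregation — with made-up percentile bounds and toy data; the paper's actual estimator may differ.

```python
import numpy as np

def glid_filter(client_params, lower_pct=10, upper_pct=90):
    """Sketch of a GLID-style defense: for each parameter dimension,
    drop client values falling outside a percentile range before averaging.
    `client_params` is an (n_clients, n_params) array of model updates."""
    lo = np.percentile(client_params, lower_pct, axis=0)
    hi = np.percentile(client_params, upper_pct, axis=0)
    mask = (client_params >= lo) & (client_params <= hi)
    # Average only the values judged consistent in each dimension.
    sums = np.where(mask, client_params, 0.0).sum(axis=0)
    counts = np.maximum(mask.sum(axis=0), 1)
    return sums / counts

# Example: 8 benign clients plus 2 that inject fabricated (FTI-like) updates.
rng = np.random.default_rng(0)
benign = rng.normal(0.0, 0.1, size=(8, 5))
poisoned = rng.normal(5.0, 0.1, size=(2, 5))
updates = np.vstack([benign, poisoned])
print(glid_filter(updates))  # close to the benign mean, not pulled toward 5.0
```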
null | null | 2404.14395 | null | null | http://arxiv.org/pdf/2404.14395v1 | 2024-04-22T17:55:56Z | 2024-04-22T17:55:56Z | PARAMANU-GANITA: Language Model with Mathematical Capabilities | In this paper, we present Paramanu-Ganita, a novel 208-million-parameter Auto Regressive (AR) decoder-based language model for mathematics. The model is pretrained from scratch at a context size of 4096 on our curated mixed mathematical corpus. We evaluate our model on both the perplexity metric and the GSM8k mathematical benchmark. Paramanu-Ganita, despite being 35 times smaller than 7B LLMs, outperformed generalist LLMs such as LLaMa-1 7B by 28.4% points, LLaMa-2 7B by 27.6% points, Falcon 7B by 32.6% points, PaLM 8B by 35.3% points, and math-specialised LLMs such as Minerva 8B by 23.2% points and LLEMMA-7B by 3.0% points in GSM8k test accuracy. Paramanu-Ganita also outperformed giant LLMs like PaLM 62B by 6.4% points, Falcon 40B by 19.8% points, LLaMa-1 33B by 3.8% points and Vicuna 13B by 11.8% points. The large, significant margin of improvement of our math model over existing LLMs signifies that the reasoning capabilities of a language model are not restricted to LLMs with a humongous number of parameters. Paramanu-Ganita took 146 hours of A100 training, whereas the math-specialised LLM LLEMMA 7B was trained for the equivalent of 23,000 A100 hours. Thus, our approach of pretraining powerful domain-specialised language models from scratch for domain adaptation is much more cost-effective than performing continual training of LLMs for domain adaptation. Hence, we conclude that strong mathematical reasoning abilities of a language model do not require giant LLMs or immense computing power. Finally, we note that we have trained Paramanu-Ganita on only a part of our entire mathematical corpus and have yet to explore the full potential of our model. | [
"['Mitodru Niyogi' 'Arnab Bhattacharya']"
]
|
null | null | 2404.14397 | null | null | http://arxiv.org/pdf/2404.14397v1 | 2024-04-22T17:56:26Z | 2024-04-22T17:56:26Z | RTP-LX: Can LLMs Evaluate Toxicity in Multilingual Scenarios? | Large language models (LLMs) and small language models (SLMs) are being adopted at remarkable speed, although their safety still remains a serious concern. With the advent of multilingual S/LLMs, the question now becomes a matter of scale: can we expand multilingual safety evaluations of these models with the same velocity at which they are deployed? To this end we introduce RTP-LX, a human-transcreated and human-annotated corpus of toxic prompts and outputs in 28 languages. RTP-LX follows participatory design practices, and a portion of the corpus is especially designed to detect culturally-specific toxic language. We evaluate seven S/LLMs on their ability to detect toxic content in a culturally-sensitive, multilingual scenario. We find that, although they typically score acceptably in terms of accuracy, they have low agreement with human judges when holistically judging the toxicity of a prompt, and have difficulty discerning harm in context-dependent scenarios, particularly with subtle-yet-harmful content (e.g. microaggressions, bias). We release this dataset to help further reduce harmful uses of these models and improve their safe deployment. | [
"['Adrian de Wynter' 'Ishaan Watts' 'Nektar Ege Altıntoprak'\n 'Tua Wongsangaroonsri' 'Minghui Zhang' 'Noura Farra' 'Lena Baur'\n 'Samantha Claudet' 'Pavel Gajdusek' 'Can Gören' 'Qilong Gu'\n 'Anna Kaminska' 'Tomasz Kaminski' 'Ruby Kuo' 'Akiko Kyuba' 'Jongho Lee'\n 'Kartik Mathur' 'Petter Merok' 'Ivana Milovanović' 'Nani Paananen'\n 'Vesa-Matti Paananen' 'Anna Pavlenko' 'Bruno Pereira Vidal'\n 'Luciano Strika' 'Yueh Tsao' 'Davide Turcato' 'Oleksandr Vakhno'\n 'Judit Velcsov' 'Anna Vickers' 'Stéphanie Visser' 'Herdyan Widarmanto'\n 'Andrey Zaikin' 'Si-Qing Chen']"
]
|
null | null | 2404.14402 | null | null | http://arxiv.org/pdf/2404.14402v1 | 2024-04-22T17:58:36Z | 2024-04-22T17:58:36Z | A mean curvature flow arising in adversarial training | We connect adversarial training for binary classification to a geometric evolution equation for the decision boundary. Relying on a perspective that recasts adversarial training as a regularization problem, we introduce a modified training scheme that constitutes a minimizing movements scheme for a nonlocal perimeter functional. We prove that the scheme is monotone and consistent as the adversarial budget vanishes and the perimeter localizes, and as a consequence we rigorously show that the scheme approximates a weighted mean curvature flow. This highlights that the efficacy of adversarial training may be due to locally minimizing the length of the decision boundary. In our analysis, we introduce a variety of tools for working with the subdifferential of a supremal-type nonlocal total variation and its regularity properties. | [
"['Leon Bungert' 'Tim Laux' 'Kerrek Stinson']"
]
|
null | null | 2404.14408 | null | null | http://arxiv.org/pdf/2404.14408v2 | 2024-05-23T16:41:41Z | 2024-04-22T17:59:29Z | SpaceByte: Towards Deleting Tokenization from Large Language Modeling | Tokenization is widely used in large language models because it significantly improves performance. However, tokenization imposes several disadvantages, such as performance biases, increased adversarial vulnerability, decreased character-level modeling performance, and increased modeling complexity. To address these disadvantages without sacrificing performance, we propose SpaceByte, a novel byte-level decoder architecture that closes the performance gap between byte-level and subword autoregressive language modeling. SpaceByte consists of a byte-level Transformer model, but with extra larger transformer blocks inserted in the middle of the layers. We find that performance is significantly improved by applying these larger blocks only after certain bytes, such as space characters, which typically denote word boundaries. Our experiments show that for a fixed training and inference compute budget, SpaceByte outperforms other byte-level architectures and roughly matches the performance of tokenized Transformer architectures. | [
"['Kevin Slagle']"
]
|
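SpaceByte's key architectural rule — apply the large global transformer blocks only at bytes that likely open a new word — can be illustrated with a small boundary finder. The byte set and rule below are a simplification assumed for illustration, not necessarily the paper's exact criterion.

```python
def global_block_positions(byte_seq: bytes):
    """Sketch of a SpaceByte-style insertion rule: the expensive global
    transformer blocks run only at bytes that likely start a new word,
    e.g. a non-space byte that follows a space-like character."""
    spacelike = set(b" \t\n.,;:!?")
    positions = [0]  # always apply at the first byte
    for i in range(1, len(byte_seq)):
        if byte_seq[i - 1] in spacelike and byte_seq[i] not in spacelike:
            positions.append(i)
    return positions

text = b"tokenization imposes several disadvantages"
print(global_block_positions(text))   # [0, 13, 21, 29]
```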
null | null | 2404.14416 | null | null | http://arxiv.org/pdf/2404.14416v1 | 2024-04-05T11:01:50Z | 2024-04-05T11:01:50Z | Conditional diffusion models for downscaling & bias correction of Earth
system model precipitation | Climate change exacerbates extreme weather events like heavy rainfall and flooding. As these events cause severe losses of property and lives, accurate high-resolution simulation of precipitation is imperative. However, existing Earth System Models (ESMs) struggle with resolving small-scale dynamics and suffer from biases, especially for extreme events. Traditional statistical bias correction and downscaling methods fall short in improving spatial structure, while recent deep learning methods lack controllability over the output and suffer from unstable training. Here, we propose a novel machine learning framework for simultaneous bias correction and downscaling. We train a generative diffusion model in a supervised way purely on observational data. We map observational and ESM data to a shared embedding space, where both are unbiased towards each other and train a conditional diffusion model to reverse the mapping. Our method can be used to correct any ESM field, as the training is independent of the ESM. Our approach ensures statistical fidelity, preserves large-scale spatial patterns and outperforms existing methods especially regarding extreme events and small-scale spatial features that are crucial for impact assessments. | [
"['Michael Aich' 'Philipp Hess' 'Baoxiang Pan' 'Sebastian Bathiany'\n 'Yu Huang' 'Niklas Boers']"
]
|
null | null | 2404.14418 | null | null | http://arxiv.org/pdf/2404.14418v1 | 2024-04-12T20:23:02Z | 2024-04-12T20:23:02Z | Mitigating Cascading Effects in Large Adversarial Graph Environments | A significant amount of society's infrastructure can be modeled using graph structures, from electric and communication grids, to traffic networks, to social networks. Each of these domains are also susceptible to the cascading spread of negative impacts, whether this be overloaded devices in the power grid or the reach of a social media post containing misinformation. The potential harm of a cascade is compounded when considering a malicious attack by an adversary that is intended to maximize the cascading impact. However, by exploiting knowledge of the cascading dynamics, targets with the largest cascading impact can be preemptively prioritized for defense, and the damage an adversary can inflict can be mitigated. While game theory provides tools for finding an optimal preemptive defense strategy, existing methods struggle to scale to the context of large graph environments because of the combinatorial explosion of possible actions that occurs when the attacker and defender can each choose multiple targets in the graph simultaneously. The proposed method enables a data-driven deep learning approach that uses multi-node representation learning and counterfactual data augmentation to generalize to the full combinatorial action space by training on a variety of small restricted subsets of the action space. We demonstrate through experiments that the proposed method is capable of identifying defense strategies that are less exploitable than SOTA methods for large graphs, while still being able to produce strategies near the Nash equilibrium for small-scale scenarios for which it can be computed. Moreover, the proposed method demonstrates superior prediction accuracy on a validation set of unseen cascades compared to other deep learning approaches. | [
"['James D. Cunningham' 'Conrad S. Tucker']"
]
|
null | null | 2404.14419 | null | null | http://arxiv.org/pdf/2404.14419v1 | 2024-04-14T07:06:12Z | 2024-04-14T07:06:12Z | Enhancing Fault Detection for Large Language Models via Mutation-Based
Confidence Smoothing | Large language models (LLMs) have achieved great success in multiple application domains and have recently attracted huge attention from different research communities. Unfortunately, even for the best LLM, there still exist many faults, i.e., inputs that the LLM cannot predict correctly. Such faults harm the usability of LLMs, and revealing them quickly is important but challenging. The reasons are twofold: 1) the heavy labeling effort for preparing the test data, and 2) accessing closed-source LLMs such as GPT4 is costly. To handle this problem, in the traditional deep learning testing field, test selection methods have been proposed for efficiently testing deep learning models by prioritizing faults. However, the usefulness of these methods on LLMs is unclear and under exploration. In this paper, we first study the effectiveness of existing fault detection methods for LLMs. Experimental results on four different tasks (including both code tasks and natural language processing tasks) and four LLMs (e.g., LLaMA and GPT4) demonstrated that existing fault detection methods cannot perform well on LLMs (e.g., seven out of eight methods perform worse than random selection on LLaMA). To enhance existing fault detection methods, we propose MuCS, a prompt Mutation-based prediction Confidence Smoothing method for LLMs. Concretely, we mutate the prompts and compute the average prediction confidence of all mutants as the input of fault detection methods. The results show that our proposed solution significantly enhances existing methods, improving test relative coverage by up to 97.64%. | [
"['Qiang Hu' 'Jin Wen' 'Maxime Cordy' 'Yuheng Huang' 'Xiaofei Xie' 'Lei Ma']"
]
|
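The core of MuCS as described above — mutate the prompt, average the prediction confidence over mutants — fits in a few lines. The mutation operator below is a toy stand-in, and the `model_confidence` wrapper is an assumed interface for illustration; the paper's mutators are more varied.

```python
import random

def mutate_prompt(prompt: str, rng: random.Random) -> str:
    """Toy mutation: occasionally swap two adjacent words.
    Real mutations could be synonym swaps, paraphrases, etc."""
    words = prompt.split()
    if len(words) > 3 and rng.random() < 0.5:
        i = rng.randrange(len(words) - 1)
        words[i], words[i + 1] = words[i + 1], words[i]
    return " ".join(words)

def smoothed_confidence(model_confidence, prompt, n_mutants=8, seed=0):
    """MuCS-style smoothing: average the model's prediction confidence
    over the original prompt and several mutants, then feed the result
    to a downstream fault-detection / test-prioritization method."""
    rng = random.Random(seed)
    prompts = [prompt] + [mutate_prompt(prompt, rng) for _ in range(n_mutants)]
    scores = [model_confidence(p) for p in prompts]
    return sum(scores) / len(scores)

# `model_confidence` would wrap an LLM call returning max class probability.
print(smoothed_confidence(lambda p: 0.8 if "sort" in p else 0.6,
                          "write a function to sort a list"))
```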
null | null | 2404.14433 | null | null | http://arxiv.org/pdf/2404.14433v1 | 2024-04-19T11:05:13Z | 2024-04-19T11:05:13Z | KATO: Knowledge Alignment and Transfer for Transistor Sizing of
Different Design and Technology | Automatic transistor sizing in circuit design continues to be a formidable challenge. Although Bayesian optimization (BO) has achieved significant success, it is circuit-specific, limiting the accumulation and transfer of design knowledge for broader applications. This paper proposes (1) efficient automatic kernel construction, (2) the first transfer learning across different circuits and technology nodes for BO, and (3) a selective transfer learning scheme to ensure only useful knowledge is utilized. These three novel components are integrated into BO with Multi-objective Acquisition Ensemble (MACE) to form Knowledge Alignment and Transfer Optimization (KATO) to deliver state-of-the-art performance: up to 2x simulation reduction and 1.2x design improvement over the baselines. | [
"['Wei W. Xing' 'Weijian Fan' 'Zhuohua Liu' 'Yuan Yao' 'Yuanqi Hu']"
]
|
null | null | 2404.14436 | null | null | http://arxiv.org/pdf/2404.14436v1 | 2024-04-19T20:03:30Z | 2024-04-19T20:03:30Z | Investigating Resource-efficient Neutron/Gamma Classification ML Models
Targeting eFPGAs | There has been considerable interest and resulting progress in implementing machine learning (ML) models in hardware over the last several years from the particle and nuclear physics communities. A big driver has been the release of the Python package, hls4ml, which has enabled porting models specified and trained using Python ML libraries to register transfer level (RTL) code. So far, the primary end targets have been commercial FPGAs or synthesized custom blocks on ASICs. However, recent developments in open-source embedded FPGA (eFPGA) frameworks now provide an alternate, more flexible pathway for implementing ML models in hardware. These customized eFPGA fabrics can be integrated as part of an overall chip design. In general, the decision between a fully custom, eFPGA, or commercial FPGA ML implementation will depend on the details of the end-use application. In this work, we explored the parameter space for eFPGA implementations of fully-connected neural network (fcNN) and boosted decision tree (BDT) models using the task of neutron/gamma classification with a specific focus on resource efficiency. We used data collected using an AmBe sealed source incident on Stilbene, which was optically coupled to an OnSemi J-series SiPM to generate training and test data for this study. We investigated relevant input features and the effects of bit-resolution and sampling rate as well as trade-offs in hyperparameters for both ML architectures while tracking total resource usage. The performance metric used to track model performance was the calculated neutron efficiency at a gamma leakage of 10$^{-3}$. The results of the study will be used to aid the specification of an eFPGA fabric, which will be integrated as part of a test chip. | [
"['Jyothisraj Johnson' 'Billy Boxer' 'Tarun Prakash' 'Carl Grace'\n 'Peter Sorensen' 'Mani Tripathi']"
]
|
null | null | 2404.14441 | null | null | http://arxiv.org/pdf/2404.14441v1 | 2024-04-20T00:21:06Z | 2024-04-20T00:21:06Z | Optimizing Contrail Detection: A Deep Learning Approach with
EfficientNet-b4 Encoding | In the pursuit of environmental sustainability, the aviation industry faces the challenge of minimizing its ecological footprint. Among the key solutions is contrail avoidance, targeting the linear ice-crystal clouds produced by aircraft exhaust. These contrails exacerbate global warming by trapping atmospheric heat, necessitating precise segmentation and comprehensive analysis of contrail images to gauge their environmental impact. However, this segmentation task is complex due to the varying appearances of contrails under different atmospheric conditions and potential misalignment issues in predictive modeling. This paper presents an innovative deep-learning approach utilizing the EfficientNet-b4 encoder for feature extraction, seamlessly integrating misalignment correction, soft labeling, and pseudo-labeling techniques to enhance the accuracy and efficiency of contrail detection in satellite imagery. The proposed methodology aims to redefine contrail image analysis and contribute to the objectives of sustainable aviation by providing a robust framework for precise contrail detection and analysis in satellite imagery, thus aiding in the mitigation of aviation's environmental impact. | [
"['Qunwei Lin' 'Qian Leng' 'Zhicheng Ding' 'Chao Yan' 'Xiaonan Xu']"
]
|
null | null | 2404.14442 | null | null | http://arxiv.org/pdf/2404.14442v2 | 2024-04-24T04:22:51Z | 2024-04-20T01:16:27Z | Unified ODE Analysis of Smooth Q-Learning Algorithms | Convergence of Q-learning has been the focus of extensive research over the past several decades. Recently, an asymptotic convergence analysis for Q-learning was introduced using a switching system framework. This approach applies the so-called ordinary differential equation (ODE) approach to prove the convergence of the asynchronous Q-learning modeled as a continuous-time switching system, where notions from switching system theory are used to prove its asymptotic stability without using explicit Lyapunov arguments. However, to prove stability, restrictive conditions, such as quasi-monotonicity, must be satisfied for the underlying switching systems, which makes it hard to easily generalize the analysis method to other reinforcement learning algorithms, such as the smooth Q-learning variants. In this paper, we present a more general and unified convergence analysis that improves upon the switching system approach and can analyze Q-learning and its smooth variants. The proposed analysis is motivated by previous work on the convergence of synchronous Q-learning based on $p$-norm serving as a Lyapunov function. However, the proposed analysis addresses more general ODE models that can cover both asynchronous Q-learning and its smooth versions with simpler frameworks. | [
"['Donghwan Lee']"
]
|
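A minimal sketch of the kind of smooth Q-learning update the unified ODE analysis above covers: the hard max over next-state actions is replaced by a log-sum-exp softening. The specific `beta`-parameterized softening below is one assumed example of a smooth variant, not the paper's only case; as `beta` grows it recovers the standard max.

```python
import numpy as np

def smooth_q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99, beta=10.0):
    """One asynchronous smooth Q-learning step: the hard max over next
    actions is replaced by a log-sum-exp softening, illustrating the
    class of smooth variants covered by the unified ODE analysis."""
    soft_max = np.log(np.sum(np.exp(beta * Q[s_next]))) / beta
    td_target = r + gamma * soft_max
    Q[s, a] += alpha * (td_target - Q[s, a])
    return Q

# Tiny example: 2 states, 2 actions.
Q = np.zeros((2, 2))
Q = smooth_q_update(Q, s=0, a=1, r=1.0, s_next=1)
print(Q)
```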
null | null | 2404.14444 | null | null | http://arxiv.org/pdf/2404.14444v1 | 2024-04-20T05:13:14Z | 2024-04-20T05:13:14Z | Practical Battery Health Monitoring using Uncertainty-Aware Bayesian
Neural Network | Battery health monitoring and prediction are critically important in the era of electric mobility with a huge impact on safety, sustainability, and economic aspects. Existing research often focuses on prediction accuracy but tends to neglect practical factors that may hinder the technology's deployment in real-world applications. In this paper, we address these practical considerations and develop models based on the Bayesian neural network for predicting battery end-of-life. Our models use sensor data related to battery health and apply distributions, rather than single-point estimates, for each parameter of the models. This allows the models to capture the inherent randomness and uncertainty of battery health, which leads to not only accurate predictions but also quantifiable uncertainty. We conducted an experimental study and demonstrated the effectiveness of our proposed models, with a prediction error rate averaging 13.9%, and as low as 2.9% for certain tested batteries. Additionally, all predictions include quantifiable certainty, which improved by 66% from the initial to the mid-life stage of the battery. This research has practical value for battery technologies and contributes to accelerating technology adoption in the industry. | [
"['Yunyi Zhao' 'Zhang Wei' 'Qingyu Yan' 'Man-Fai Ng' 'B. Sivaneasan'\n 'Cheng Xiang']"
]
|
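To make the uncertainty-aware idea concrete without reproducing a full Bayesian neural network, the sketch below substitutes a conjugate Bayesian linear model — an admitted simplification, with toy data — to show how placing distributions over parameters yields a predictive mean plus a quantifiable variance.

```python
import numpy as np

def bayesian_linear_posterior(X, y, alpha=1.0, sigma2=0.25):
    """Minimal Bayesian regression sketch in the spirit of the paper's
    uncertainty-aware approach (the paper uses Bayesian neural networks;
    a conjugate linear model keeps the idea self-contained).
    Returns a predictive function giving mean and variance."""
    d = X.shape[1]
    S_inv = np.eye(d) / alpha + X.T @ X / sigma2   # posterior precision
    S = np.linalg.inv(S_inv)
    m = S @ X.T @ y / sigma2                       # posterior mean
    def predict(x_new):
        mean = x_new @ m
        var = sigma2 + x_new @ S @ x_new           # noise + parameter uncertainty
        return mean, var
    return predict

# Toy "health indicator vs. cycle count" data.
cycles = np.arange(0, 10, dtype=float)
X = np.stack([np.ones_like(cycles), cycles], axis=1)
y = 100 - 3.0 * cycles + np.random.default_rng(1).normal(0, 0.5, 10)
predict = bayesian_linear_posterior(X, y)
mean, var = predict(np.array([1.0, 12.0]))   # extrapolate to cycle 12
print(f"predicted health {mean:.1f} +/- {np.sqrt(var):.2f}")
```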
null | null | 2404.14445 | null | null | http://arxiv.org/pdf/2404.14445v1 | 2024-04-20T08:08:28Z | 2024-04-20T08:08:28Z | A Multi-Faceted Evaluation Framework for Assessing Synthetic Data
Generated by Large Language Models | The rapid advancements in generative AI and large language models (LLMs) have opened up new avenues for producing synthetic data, particularly in the realm of structured tabular formats, such as product reviews. Despite the potential benefits, concerns regarding privacy leakage have surfaced, especially when personal information is utilized in the training datasets. In addition, there is an absence of a comprehensive evaluation framework capable of quantitatively measuring the quality of the generated synthetic data and their utility for downstream tasks. In response to this gap, we introduce SynEval, an open-source evaluation framework designed to assess the fidelity, utility, and privacy preservation of synthetically generated tabular data via a suite of diverse evaluation metrics. We validate the efficacy of our proposed framework - SynEval - by applying it to synthetic product review data generated by three state-of-the-art LLMs: ChatGPT, Claude, and Llama. Our experimental findings illuminate the trade-offs between various evaluation metrics in the context of synthetic data generation. Furthermore, SynEval stands as a critical instrument for researchers and practitioners engaged with synthetic tabular data, empowering them to judiciously determine the suitability of the generated data for their specific applications, with an emphasis on upholding user privacy. | [
"['Yefeng Yuan' 'Yuhong Liu' 'Liang Cheng']"
]
|
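A single fidelity metric of the kind such a framework aggregates can be sketched directly. The total-variation comparison of per-column histograms below is an illustrative assumption, not SynEval's actual metric suite, which also spans utility and privacy.

```python
import numpy as np

def marginal_fidelity(real, synth, bins=10):
    """One simple fidelity metric for synthetic tabular data:
    total-variation distance between the histogram of a real column
    and its synthetic counterpart (0 = identical distributions)."""
    lo = min(real.min(), synth.min())
    hi = max(real.max(), synth.max())
    h_real, _ = np.histogram(real, bins=bins, range=(lo, hi))
    h_synth, _ = np.histogram(synth, bins=bins, range=(lo, hi))
    p = h_real / h_real.sum()
    q = h_synth / h_synth.sum()
    return 0.5 * np.abs(p - q).sum()

rng = np.random.default_rng(0)
real_ratings = rng.normal(4.0, 0.5, 1000).clip(1, 5)
synth_ratings = rng.normal(3.7, 0.7, 1000).clip(1, 5)
print(f"TV distance: {marginal_fidelity(real_ratings, synth_ratings):.3f}")
```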
null | null | 2404.14447 | null | null | http://arxiv.org/pdf/2404.14447v1 | 2024-04-20T10:28:24Z | 2024-04-20T10:28:24Z | A Novel A.I Enhanced Reservoir Characterization with a Combined Mixture
of Experts -- NVIDIA Modulus based Physics Informed Neural Operator Forward
Model | We have developed an advanced workflow for reservoir characterization, effectively addressing the challenges of reservoir history matching through a novel approach. This method integrates a Physics Informed Neural Operator (PINO) as a forward model within a sophisticated Cluster Classify Regress (CCR) framework. The process is enhanced by an adaptive Regularized Ensemble Kalman Inversion (aREKI), optimized for rapid uncertainty quantification in reservoir history matching. This innovative workflow parameterizes unknown permeability and porosity fields, capturing non-Gaussian posterior measures with techniques such as a variational convolution autoencoder and the CCR. Serving as exotic priors and a supervised model, the CCR synergizes with the PINO surrogate to accurately simulate the nonlinear dynamics of Peaceman well equations. The CCR approach allows for flexibility in applying distinct machine learning algorithms across its stages. Updates to the PINO reservoir surrogate are driven by a loss function derived from supervised data, initial conditions, and residuals of governing black oil PDEs. Our integrated model, termed PINO-Res-Sim, outputs crucial parameters including pressures, saturations, and production rates for oil, water, and gas. Validated against traditional simulators through controlled experiments on synthetic reservoirs and the Norne field, the methodology showed remarkable accuracy. Additionally, the PINO-Res-Sim in the aREKI workflow efficiently recovered unknown fields with a computational speedup of 100 to 6000 times faster than conventional methods. The learning phase for PINO-Res-Sim, conducted on an NVIDIA H100, was impressively efficient, compatible with ensemble-based methods for complex computational tasks. | [
"['Clement Etienam' 'Yang Juntao' 'Issam Said' 'Oleg Ovcharenko'\n 'Kaustubh Tangsali' 'Pavel Dimitrov' 'Ken Hester']"
]
|
null | null | 2404.14449 | null | null | http://arxiv.org/pdf/2404.14449v1 | 2024-04-20T16:48:18Z | 2024-04-20T16:48:18Z | Predicting Question Quality on StackOverflow with Neural Networks | The wealth of information available through the Internet and social media is unprecedented. Within computing fields, websites such as Stack Overflow are considered important sources for users seeking solutions to their computing and programming issues. However, like other social media platforms, Stack Overflow contains a mixture of relevant and irrelevant information. In this paper, we evaluated neural network models to predict the quality of questions on Stack Overflow, as an example of Question Answering (QA) communities. Our results demonstrate the effectiveness of neural network models compared to baseline machine learning models, achieving an accuracy of 80%. Furthermore, our findings indicate that the number of layers in the neural network model can significantly impact its performance. | [
"['Mohammad Al-Ramahi' 'Izzat Alsmadi' 'Abdullah Wahbeh']"
]
|
null | null | 2404.14451 | null | null | http://arxiv.org/pdf/2404.14451v1 | 2024-04-20T19:22:05Z | 2024-04-20T19:22:05Z | Generative Subspace Adversarial Active Learning for Outlier Detection in
Multiple Views of High-dimensional Data | Outlier detection in high-dimensional tabular data is an important task in data mining, essential for many downstream tasks and applications. Existing unsupervised outlier detection algorithms face one or more problems, including inlier assumption (IA), curse of dimensionality (CD), and multiple views (MV). To address these issues, we introduce Generative Subspace Adversarial Active Learning (GSAAL), a novel approach that uses a Generative Adversarial Network with multiple adversaries. These adversaries learn the marginal class probability functions over different data subspaces, while a single generator in the full space models the entire distribution of the inlier class. GSAAL is specifically designed to address the MV limitation while also handling the IA and CD, being the only method to do so. We provide a comprehensive mathematical formulation of MV, convergence guarantees for the discriminators, and scalability results for GSAAL. Our extensive experiments demonstrate the effectiveness and scalability of GSAAL, highlighting its superior performance compared to other popular OD methods, especially in MV scenarios. | [
"['Jose Cribeiro-Ramallo' 'Vadim Arzamasov' 'Federico Matteucci'\n 'Denis Wambold' 'Klemens Böhm']"
]
|
null | null | 2404.14455 | null | null | http://arxiv.org/pdf/2404.14455v1 | 2024-04-21T09:48:09Z | 2024-04-21T09:48:09Z | A Neuro-Symbolic Explainer for Rare Events: A Case Study on Predictive
Maintenance | Predictive Maintenance applications are increasingly complex, with interactions between many components. Black box models are popular approaches based on deep learning techniques due to their predictive accuracy. This paper proposes a neural-symbolic architecture that uses an online rule-learning algorithm to explain when the black box model predicts failures. The proposed system solves two problems in parallel: anomaly detection and explanation of the anomaly. For the first problem, we use a state-of-the-art unsupervised autoencoder. For the second problem, we train a rule learning system that learns a mapping from the input features to the autoencoder reconstruction error. Both systems run online and in parallel. The autoencoder signals an alarm for the examples with a reconstruction error that exceeds a threshold. The causes of the alarm signal are hard for humans to understand because they result from a nonlinear combination of sensor data. The rule triggered by that example describes the relationship between the input features and the autoencoder reconstruction error. The rule explains the failure signal by indicating which sensors contribute to the alarm and allowing the identification of the component involved in the failure. The system can present global explanations for the black box model and local explanations for why the black box model predicts a failure. We evaluate the proposed system in a real-world case study of Metro do Porto and provide explanations that illustrate its benefits. | [
"['João Gama' 'Rita P. Ribeiro' 'Saulo Mastelini' 'Narjes Davarid'\n 'Bruno Veloso']"
]
|
null | null | 2404.14456 | null | null | http://arxiv.org/pdf/2404.14456v1 | 2024-04-21T11:21:47Z | 2024-04-21T11:21:47Z | Multifidelity Surrogate Models: A New Data Fusion Perspective | Multifidelity surrogate modelling combines data of varying accuracy and cost from different sources. It strategically uses low-fidelity models for rapid evaluations, saving computational resources, and high-fidelity models for detailed refinement. It improves decision-making by addressing uncertainties and surpassing the limits of single-fidelity models, which either oversimplify or are computationally intensive. Blending high-fidelity data for detailed responses with frequent low-fidelity data for quick approximations facilitates design optimisation in various domains. Despite progress in interpolation, regression, enhanced sampling, error estimation, variable fidelity, and data fusion techniques, challenges persist in selecting fidelity levels and developing efficient data fusion methods. This study proposes a new fusion approach to construct multi-fidelity surrogate models by constructing gradient-only surrogates that use only gradients to construct regression surfaces. Results are demonstrated on foundational example problems that isolate and illustrate the fusion approach's efficacy, avoiding the need for complex examples that obfuscate the main concept. | [
"['Daniel N Wilke']"
]
|
null | null | 2404.14457 | null | null | http://arxiv.org/pdf/2404.14457v1 | 2024-04-21T15:00:25Z | 2024-04-21T15:00:25Z | Graph Coloring Using Heat Diffusion | Graph coloring is a problem with varied applications in industry and science such as scheduling, resource allocation, and circuit design. The purpose of this paper is to establish whether a new gradient-based iterative solver framework known as heat diffusion can solve the graph coloring problem. We propose a solution to the graph coloring problem using the heat diffusion framework. We compare the solutions against popular methods and establish the competitiveness of the heat diffusion method for the graph coloring problem. | [
"['Vivek Chaudhary']"
]
|
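For intuition about gradient-based iterative solvers for coloring, here is a generic softmax-relaxation sketch that minimizes the expected number of conflicting edges by gradient descent. This is not the authors' heat diffusion scheme — just a minimal member of the same family of methods, on a toy graph.

```python
import numpy as np

def gradient_coloring(adj, k, steps=2000, lr=0.1, seed=0):
    """Gradient-based relaxation for k-coloring: each vertex holds logits
    over k colors; we minimize the expected number of conflicting edges
    under the softmax assignment, then round to hard colors."""
    rng = np.random.default_rng(seed)
    logits = rng.normal(size=(adj.shape[0], k))
    for _ in range(steps):
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        grad_p = adj @ p                      # d(expected conflicts)/dp
        # Backprop through the row-wise softmax.
        grad_logits = p * (grad_p - (grad_p * p).sum(axis=1, keepdims=True))
        logits -= lr * grad_logits
    colors = logits.argmax(axis=1)
    conflicts = sum(adj[u, v] and colors[u] == colors[v]
                    for u in range(len(colors)) for v in range(u))
    return colors, conflicts

# A 5-cycle is 3-colorable:
A = np.zeros((5, 5), dtype=int)
for u, v in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]:
    A[u, v] = A[v, u] = 1
print(gradient_coloring(A, k=3))
```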
null | null | 2404.14460 | null | null | http://arxiv.org/pdf/2404.14460v1 | 2024-04-21T21:56:39Z | 2024-04-21T21:56:39Z | Inference of Causal Networks using a Topological Threshold | We propose a constraint-based algorithm, which automatically determines causal relevance thresholds, to infer causal networks from data. We call these topological thresholds. We present two methods for determining the threshold: the first seeks a set of edges that leaves no disconnected nodes in the network; the second seeks a causal large connected component in the data. We tested these methods on both discrete synthetic and real data, and compared the results with those obtained for the PC algorithm, which we took as the benchmark. We show that this novel algorithm is generally faster and more accurate than the PC algorithm. The algorithm for determining the thresholds requires choosing a measure of causality. We tested our methods for Fisher Correlations, commonly used in the PC algorithm (for instance in \cite{kalisch2005}), and further proposed a discrete and asymmetric measure of causality, that we called Net Influence, which provided very good results when inferring causal networks from discrete data. This metric allows for inferring directionality of the edges in the process of applying the thresholds, speeding up the inference of causal DAGs. | [
"['Filipe Barroso' 'Diogo Gomes' 'Gareth J. Baxter']"
]
|
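The first threshold rule above — the largest threshold that leaves no node disconnected — has a closed form: it is the smallest of the nodes' strongest connections. A minimal sketch, assuming a symmetric matrix of causality scores:

```python
import numpy as np

def topological_threshold(C):
    """Largest threshold t such that the graph with edges {|C_ij| >= t}
    leaves no node isolated: raising t above the weakest node's strongest
    connection would disconnect that node."""
    A = np.abs(C).copy()
    np.fill_diagonal(A, 0.0)
    strongest = A.max(axis=1)       # best edge available to each node
    t = strongest.min()             # the binding constraint
    edges = (A >= t)
    return t, edges

C = np.array([[1.0, 0.8, 0.1],
              [0.8, 1.0, 0.3],
              [0.1, 0.3, 1.0]])
t, edges = topological_threshold(C)
print(t)        # 0.3: node 2's strongest link caps the threshold
print(edges)
```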
null | null | 2404.14461 | null | null | http://arxiv.org/pdf/2404.14461v2 | 2024-06-06T12:45:52Z | 2024-04-22T05:08:53Z | Competition Report: Finding Universal Jailbreak Backdoors in Aligned
LLMs | Large language models are aligned to be safe, preventing users from generating harmful content like misinformation or instructions for illegal activities. However, previous work has shown that the alignment process is vulnerable to poisoning attacks. Adversaries can manipulate the safety training data to inject backdoors that act like a universal sudo command: adding the backdoor string to any prompt enables harmful responses from models that, otherwise, behave safely. Our competition, co-located at IEEE SaTML 2024, challenged participants to find universal backdoors in several large language models. This report summarizes the key findings and promising ideas for future research. | [
"['Javier Rando' 'Francesco Croce' 'Kryštof Mitka' 'Stepan Shabalin'\n 'Maksym Andriushchenko' 'Nicolas Flammarion' 'Florian Tramèr']"
]
|
null | null | 2404.14462 | null | null | http://arxiv.org/pdf/2404.14462v2 | 2024-04-24T03:52:49Z | 2024-04-22T06:19:46Z | Towards smaller, faster decoder-only transformers: Architectural
variants and their implications | Research on Large Language Models (LLMs) has recently seen exponential growth, largely focused on transformer-based architectures, as introduced by [1] and further advanced by the decoder-only variations in [2]. Contemporary studies typically aim to improve model capabilities by increasing both the architecture's complexity and the volume of training data. However, research exploring how to reduce model sizes while maintaining performance is limited. This study introduces three modifications to the decoder-only transformer architecture: ParallelGPT (p-gpt), LinearlyCompressedGPT (lc-gpt), and ConvCompressedGPT (cc-gpt). These variants achieve comparable performance to conventional architectures in code generation tasks while benefiting from reduced model sizes and faster training times. We open-source the model weights and codebase to support future research and development in this domain. | [
"['Sathya Krishnan Suresh' 'Shunmugapriya P']"
]
|
null | null | 2404.14463 | null | null | http://arxiv.org/pdf/2404.14463v1 | 2024-04-22T09:07:50Z | 2024-04-22T09:07:50Z | DAIC-WOZ: On the Validity of Using the Therapist's prompts in Automatic
Depression Detection from Clinical Interviews | Automatic depression detection from conversational data has gained significant interest in recent years. The DAIC-WOZ dataset, interviews conducted by a human-controlled virtual agent, has been widely used for this task. Recent studies have reported enhanced performance when incorporating interviewer's prompts into the model. In this work, we hypothesize that this improvement might be mainly due to a bias present in these prompts, rather than the proposed architectures and methods. Through ablation experiments and qualitative analysis, we discover that models using interviewer's prompts learn to focus on a specific region of the interviews, where questions about past experiences with mental health issues are asked, and use them as discriminative shortcuts to detect depressed participants. In contrast, models using participant responses gather evidence from across the entire interview. Finally, to highlight the magnitude of this bias, we achieve a 0.90 F1 score by intentionally exploiting it, the highest result reported to date on this dataset using only textual information. Our findings underline the need for caution when incorporating interviewers' prompts into models, as they may inadvertently learn to exploit targeted prompts, rather than learning to characterize the language and behavior that are genuinely indicative of the patient's mental health condition. | [
"['Sergio Burdisso' 'Ernesto Reyes-Ramírez' 'Esaú Villatoro-Tello'\n 'Fernando Sánchez-Vega' 'Pastor López-Monroy' 'Petr Motlicek']"
]
|
null | null | 2404.14497 | null | null | http://arxiv.org/pdf/2404.14497v1 | 2024-04-22T18:02:17Z | 2024-04-22T18:02:17Z | Mapping Wireless Networks into Digital Reality through Joint Vertical
and Horizontal Learning | In recent years, the complexity of 5G and beyond wireless networks has escalated, prompting a need for innovative frameworks to facilitate flexible management and efficient deployment. The concept of digital twins (DTs) has emerged as a solution to enable real-time monitoring, predictive configurations, and decision-making processes. While existing works primarily focus on leveraging DTs to optimize wireless networks, a detailed mapping methodology for creating virtual representations of network infrastructure and properties is still lacking. In this context, we introduce VH-Twin, a novel time-series data-driven framework that effectively maps wireless networks into digital reality. VH-Twin distinguishes itself through complementary vertical twinning (V-twinning) and horizontal twinning (H-twinning) stages, followed by a periodic clustering mechanism used to virtualize network regions based on their distinct geological and wireless characteristics. Specifically, V-twinning exploits distributed learning techniques to initialize a global twin model collaboratively from virtualized network clusters. H-twinning, on the other hand, is implemented with an asynchronous mapping scheme that dynamically updates twin models in response to network or environmental changes. Leveraging real-world wireless traffic data within a cellular wireless network, comprehensive experiments are conducted to verify that VH-Twin can effectively construct, deploy, and maintain network DTs. Parametric analysis also offers insights into how to strike a balance between twinning efficiency and model accuracy at scale. | [
"['Zifan Zhang' 'Mingzhe Chen' 'Zhaohui Yang' 'Yuchen Liu']"
]
|
null | null | 2404.14507 | null | null | http://arxiv.org/pdf/2404.14507v1 | 2024-04-22T18:18:41Z | 2024-04-22T18:18:41Z | Align Your Steps: Optimizing Sampling Schedules in Diffusion Models | Diffusion models (DMs) have established themselves as the state-of-the-art generative modeling approach in the visual domain and beyond. A crucial drawback of DMs is their slow sampling speed, relying on many sequential function evaluations through large neural networks. Sampling from DMs can be seen as solving a differential equation through a discretized set of noise levels known as the sampling schedule. While past works primarily focused on deriving efficient solvers, little attention has been given to finding optimal sampling schedules, and the entire literature relies on hand-crafted heuristics. In this work, for the first time, we propose a general and principled approach to optimizing the sampling schedules of DMs for high-quality outputs, called $\textit{Align Your Steps}$. We leverage methods from stochastic calculus and find optimal schedules specific to different solvers, trained DMs and datasets. We evaluate our novel approach on several image, video as well as 2D toy data synthesis benchmarks, using a variety of different samplers, and observe that our optimized schedules outperform previous hand-crafted schedules in almost all experiments. Our method demonstrates the untapped potential of sampling schedule optimization, especially in the few-step synthesis regime. | [
"['Amirmojtaba Sabour' 'Sanja Fidler' 'Karsten Kreis']"
]
|
null | null | 2404.14523 | null | null | http://arxiv.org/abs/2404.14523v1 | 2024-04-22T18:45:40Z | 2024-04-22T18:45:40Z | Edge-Assisted ML-Aided Uncertainty-Aware Vehicle Collision Avoidance at
Urban Intersections | Intersection crossing represents one of the most dangerous sections of the road infrastructure and Connected Vehicles (CVs) can serve as a revolutionary solution to the problem. In this work, we present a novel framework that detects preemptively collisions at urban crossroads, exploiting the Multi-access Edge Computing (MEC) platform of 5G networks. At the MEC, an Intersection Manager (IM) collects information from both vehicles and the road infrastructure to create a holistic view of the area of interest. Based on the historical data collected, the IM leverages the capabilities of an encoder-decoder recurrent neural network to predict, with high accuracy, the future vehicles' trajectories. As, however, accuracy is not a sufficient measure of how much we can trust a model, trajectory predictions are additionally associated with a measure of uncertainty towards confident collision forecasting and avoidance. Hence, contrary to any other approach in the state of the art, an uncertainty-aware collision prediction framework is developed that is shown to detect well in advance (and with high reliability) if two vehicles are on a collision course. Subsequently, collision detection triggers a number of alarms that signal the colliding vehicles to brake. Under real-world settings, thanks to the preemptive capabilities of the proposed approach, all the simulated imminent dangers are averted. | [
"['Dinesh Cyril Selvaraj' 'Christian Vitale' 'Tania Panayiotou'\n 'Panayiotis Kolios' 'Carla Fabiana Chiasserini' 'Georgios Ellinas']"
]
|
null | null | 2404.14527 | null | null | http://arxiv.org/pdf/2404.14527v3 | 2024-06-28T01:24:22Z | 2024-04-22T18:56:18Z | Mélange: Cost Efficient Large Language Model Serving by Exploiting GPU
Heterogeneity | Large language models (LLMs) are increasingly integrated into many online services, yet they remain cost-prohibitive to deploy due to the requirement of expensive GPU instances. Prior work has addressed the high cost of LLM serving by improving the inference engine, but less attention has been given to selecting the most cost-efficient GPU type(s) for a specific LLM service. There is a large and growing landscape of GPU types and, within these options, higher cost does not always lead to increased performance. Instead, through a comprehensive investigation, we find that three key LLM service characteristics (request size, request rate, SLO) strongly influence GPU cost efficiency, and differing GPU types are most cost efficient for differing LLM service settings. As a result, the most cost-efficient allocation for a given service is typically a mix of heterogeneous GPU types. Based on this analysis, we introduce Mélange, a GPU allocation framework that navigates these diverse LLM service characteristics and heterogeneous GPU option space to automatically and efficiently derive the minimal-cost GPU allocation for a given LLM service. We formulate the GPU allocation task as a cost-aware bin packing problem where GPUs are bins and items are slices of the service workload. Our formulation's constraints account for a service's unique characteristics, allowing Mélange to be flexible to support diverse service settings and heterogeneity-aware to adapt the GPU allocation to a specific service. Compared to using only a single GPU type, Mélange reduces deployment costs by up to 77% in conversational settings, 33% in document-based settings, and 51% in a mixed setting. | [
"['Tyler Griggs' 'Xiaoxuan Liu' 'Jiaxiang Yu' 'Doyoung Kim'\n 'Wei-Lin Chiang' 'Alvin Cheung' 'Ion Stoica']"
]
|
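The bin-packing view described above can be sketched greedily. Note the capacities, costs, and SLO predicate below are invented placeholders, and Mélange itself solves an integer program rather than the greedy pass shown here.

```python
def min_cost_allocation(workload_slices, gpu_types):
    """Greedy sketch of cost-aware GPU allocation: pack workload slices
    (requests/sec each) onto GPU instances, choosing among SLO-feasible
    GPU types the one with the lowest cost per unit of capacity."""
    allocation = []
    for slice_rate, slo_ok in workload_slices:
        candidates = [g for g in gpu_types if slo_ok(g["name"])]
        best = min(candidates, key=lambda g: g["cost"] / g["capacity"])
        n = -(-slice_rate // best["capacity"])   # ceiling division
        allocation.append((best["name"], n))
    total = sum(n * next(g["cost"] for g in gpu_types if g["name"] == name)
                for name, n in allocation)
    return allocation, total

gpus = [{"name": "A10G", "capacity": 4, "cost": 1.0},
        {"name": "A100", "capacity": 20, "cost": 4.0}]
# Two slices: small requests (any GPU meets SLO), large ones (A100 only).
slices = [(10, lambda name: True), (30, lambda name: name == "A100")]
print(min_cost_allocation(slices, gpus))
```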
null | null | 2404.14551 | null | null | http://arxiv.org/pdf/2404.14551v1 | 2024-04-22T19:46:07Z | 2024-04-22T19:46:07Z | Learning S-Matrix Phases with Neural Operators | We use Fourier Neural Operators (FNOs) to study the relation between the modulus and phase of amplitudes in $2\to 2$ elastic scattering at fixed energies. Unlike previous approaches, we do not employ the integral relation imposed by unitarity, but instead train FNOs to discover it from many samples of amplitudes with finite partial wave expansions. When trained only on true samples, the FNO correctly predicts (unique or ambiguous) phases of amplitudes with infinite partial wave expansions. When also trained on false samples, it can rate the quality of its prediction by producing a true/false classifying index. We observe that the value of this index is strongly correlated with the violation of the unitarity constraint for the predicted phase, and present examples where it delineates the boundary between allowed and disallowed profiles of the modulus. Our application of FNOs is unconventional: it involves a simultaneous regression-classification task and emphasizes the role of statistics in ensembles of NOs. We comment on the merits and limitations of the approach and its potential as a new methodology in Theoretical Physics. | [
"['V. Niarchos' 'C. Papageorgakis']"
]
|
null | null | 2404.14552 | null | null | http://arxiv.org/pdf/2404.14552v1 | 2024-04-22T19:46:16Z | 2024-04-22T19:46:16Z | Generalizing Multi-Step Inverse Models for Representation Learning to
Finite-Memory POMDPs | Discovering an informative, or agent-centric, state representation that encodes only the relevant information while discarding the irrelevant is a key challenge towards scaling reinforcement learning algorithms and efficiently applying them to downstream tasks. Prior works studied this problem in high-dimensional Markovian environments, when the current observation may be a complex object but is sufficient to decode the informative state. In this work, we consider the problem of discovering the agent-centric state in the more challenging high-dimensional non-Markovian setting, when the state can be decoded from a sequence of past observations. We establish that generalized inverse models can be adapted for learning agent-centric state representation for this task. Our results include asymptotic theory in the deterministic dynamics setting as well as counter-examples for alternative intuitive algorithms. We complement these findings with a thorough empirical study on the agent-centric state discovery abilities of the different alternatives we put forward. Particularly notable is our analysis of past actions, where we show that these can be a double-edged sword: making the algorithms more successful when used correctly and causing dramatic failure when used incorrectly. | [
"['Lili Wu' 'Ben Evans' 'Riashat Islam' 'Raihan Seraj' 'Yonathan Efroni'\n 'Alex Lamb']"
]
|
null | null | 2404.14586 | null | null | http://arxiv.org/pdf/2404.14586v1 | 2024-04-22T21:22:12Z | 2024-04-22T21:22:12Z | Latency-Distortion Tradeoffs in Communicating Classification Results
over Noisy Channels | In this work, the problem of communicating decisions of a classifier over a noisy channel is considered. With machine learning based models being used in a variety of time-sensitive applications, transmission of these decisions in a reliable and timely manner is of significant importance. To this end, we study the scenario where a probability vector (representing the decisions of a classifier) at the transmitter needs to be transmitted over a noisy channel. Assuming that the distortion between the original probability vector and the reconstructed one at the receiver is measured via f-divergence, we study the trade-off between transmission latency and the distortion. We completely analyze this trade-off using uniform, lattice, and sparse lattice-based quantization techniques to encode the probability vector by first characterizing bit budgets for each technique given a requirement on the allowed source distortion. These bounds are then combined with results from finite-blocklength literature to provide a framework for analyzing the effects of both quantization distortion and distortion due to decoding error probability (i.e., channel effects) on the incurred transmission latency. Our results show that there is an interesting interplay between source distortion (i.e., distortion for the probability vector measured via f-divergence) and the subsequent channel encoding/decoding parameters; and indicate that a joint design of these parameters is crucial to navigate the latency-distortion tradeoff. We study the impact of changing different parameters (e.g. number of classes, SNR, source distortion) on the latency-distortion tradeoff and perform experiments on AWGN and fading channels. Our results indicate that sparse lattice-based quantization is the most effective at minimizing latency across various regimes and for sparse, high-dimensional probability vectors (i.e., high number of classes). | [
"['Noel Teku' 'Sudarshan Adiga' 'Ravi Tandon']"
]
|
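The simplest point on the trade-off above — uniform quantization of the probability vector followed by renormalization, with distortion measured by an f-divergence (KL here) — can be sketched directly; the bit budgets and toy vector are illustrative.

```python
import numpy as np

def uniform_quantize_simplex(p, bits_per_entry):
    """Uniformly quantize a probability vector and renormalize, then
    measure the distortion as a KL divergence (one member of the
    f-divergence family). Fewer bits -> lower latency, more distortion."""
    levels = 2 ** bits_per_entry - 1
    q = np.round(p * levels) / levels
    q = np.maximum(q, 1e-12)        # avoid zero bins before renormalizing
    q /= q.sum()
    kl = np.sum(p * np.log(p / q))
    return q, kl

p = np.array([0.7, 0.2, 0.05, 0.05])
for b in (2, 4, 8):
    q, kl = uniform_quantize_simplex(p, b)
    print(b, q.round(4), f"KL={kl:.5f}")
```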
null | null | 2404.14588 | null | null | http://arxiv.org/pdf/2404.14588v1 | 2024-04-22T21:30:11Z | 2024-04-22T21:30:11Z | Brain-Inspired Continual Learning-Robust Feature Distillation and
Re-Consolidation for Class Incremental Learning | Artificial intelligence (AI) and neuroscience share a rich history, with advancements in neuroscience shaping the development of AI systems capable of human-like knowledge retention. Leveraging insights from neuroscience and existing research in adversarial and continual learning, we introduce a novel framework comprising two core concepts: feature distillation and re-consolidation. Our framework, named Robust Rehearsal, addresses the challenge of catastrophic forgetting inherent in continual learning (CL) systems by distilling and rehearsing robust features. Inspired by the mammalian brain's memory consolidation process, Robust Rehearsal aims to emulate the rehearsal of distilled experiences during learning tasks. Additionally, it mimics memory re-consolidation, where new experiences influence the integration of past experiences to mitigate forgetting. Extensive experiments conducted on CIFAR10, CIFAR100, and real-world helicopter attitude datasets showcase the superior performance of CL models trained with Robust Rehearsal compared to baseline methods. Furthermore, examining different optimization training objectives-joint, continual, and adversarial learning-we highlight the crucial role of feature learning in model performance. This underscores the significance of rehearsing CL-robust samples in mitigating catastrophic forgetting. In conclusion, aligning CL approaches with neuroscience insights offers promising solutions to the challenge of catastrophic forgetting, paving the way for more robust and human-like AI systems. | [
"['Hikmat Khan' 'Nidhal Carla Bouaynaya' 'Ghulam Rasool']"
]
|
null | null | 2404.14602 | null | null | http://arxiv.org/pdf/2404.14602v1 | 2024-04-22T21:58:23Z | 2024-04-22T21:58:23Z | Adaptive Bayesian Optimization for High-Precision Motion Systems | Controller tuning and parameter optimization are crucial in system design to improve closed-loop system performance. Bayesian optimization has been established as an efficient model-free controller tuning and adaptation method. However, Bayesian optimization methods are computationally expensive and therefore difficult to use in real-time critical scenarios. In this work, we propose a real-time purely data-driven, model-free approach for adaptive control, by online tuning low-level controller parameters. We base our algorithm on GoOSE, an algorithm for safe and sample-efficient Bayesian optimization, for handling performance and stability criteria. We introduce multiple computational and algorithmic modifications for computational efficiency and parallelization of optimization steps. We further evaluate the algorithm's performance on a real precision-motion system utilized in semiconductor industry applications by modifying the payload and reference stepsize and comparing it to an interpolated constrained optimization-based baseline approach. | [
"['Christopher König' 'Raamadaas Krishnadas' 'Efe C. Balta'\n 'Alisa Rupenyan']"
]
|
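The core of the row above is a Bayesian-optimization loop over controller parameters. A minimal sketch with a GP surrogate and a lower-confidence-bound acquisition; GoOSE's safety constraints and the paper's parallelization modifications are omitted, and `closed_loop_cost` is a hypothetical stand-in for an experiment on the motion system.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def closed_loop_cost(gain):        # assumed plant response, not a real system
    return (gain - 0.7) ** 2 + 0.01 * np.random.randn()

rng = np.random.default_rng(0)
X = rng.uniform(0, 2, size=(5, 1))      # initial controller gains
y = np.array([closed_loop_cost(x[0]) for x in X])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
for _ in range(20):
    gp.fit(X, y)
    cand = rng.uniform(0, 2, size=(256, 1))       # random candidate gains
    mu, sd = gp.predict(cand, return_std=True)
    x_next = cand[np.argmin(mu - 2.0 * sd)]       # LCB acquisition
    X = np.vstack([X, x_next])
    y = np.append(y, closed_loop_cost(x_next[0]))

print("best gain:", X[np.argmin(y), 0], "cost:", y.min())
```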
null | null | 2404.14618 | null | null | http://arxiv.org/pdf/2404.14618v1 | 2024-04-22T23:06:42Z | 2024-04-22T23:06:42Z | Hybrid LLM: Cost-Efficient and Quality-Aware Query Routing | Large language models (LLMs) excel in most NLP tasks but also require expensive cloud servers for deployment due to their size, while smaller models that can be deployed on lower-cost (e.g., edge) devices tend to lag behind in terms of response quality. Therefore, in this work, we propose a hybrid inference approach that combines their respective strengths to save cost and maintain quality. Our approach uses a router that assigns queries to the small or large model based on the predicted query difficulty and the desired quality level. The desired quality level can be tuned dynamically at test time to seamlessly trade quality for cost as per the scenario requirements. In experiments, our approach allows us to make up to 40% fewer calls to the large model, with no drop in response quality. | [
"['Dujian Ding' 'Ankur Mallick' 'Chi Wang' 'Robert Sim'\n 'Subhabrata Mukherjee' 'Victor Ruhle' 'Laks V. S. Lakshmanan'\n 'Ahmed Hassan Awadallah']"
]
|
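The router above reduces to a thresholded difficulty score, with the threshold acting as the test-time quality/cost knob. A sketch where `difficulty` is a trivial length-based proxy, standing in for the paper's trained difficulty predictor.

```python
def difficulty(query: str) -> float:
    # Hypothetical proxy: longer queries are treated as harder.
    return min(1.0, len(query.split()) / 50)

def route(query: str, quality_level: float) -> str:
    # Raising quality_level sends more traffic to the large model.
    return "large" if difficulty(query) > (1.0 - quality_level) else "small"

for q in ["capital of France?",
          "prove the spectral theorem for compact self-adjoint operators " * 3]:
    print(route(q, quality_level=0.9), "<-", q[:40])
```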
null | null | 2404.14619 | null | null | http://arxiv.org/pdf/2404.14619v2 | 2024-05-02T00:30:57Z | 2024-04-22T23:12:03Z | OpenELM: An Efficient Language Model Family with Open Training and
Inference Framework | The reproducibility and transparency of large language models are crucial for advancing open research, ensuring the trustworthiness of results, and enabling investigations into data and model biases, as well as potential risks. To this end, we release OpenELM, a state-of-the-art open language model. OpenELM uses a layer-wise scaling strategy to efficiently allocate parameters within each layer of the transformer model, leading to enhanced accuracy. For example, with a parameter budget of approximately one billion parameters, OpenELM exhibits a 2.36% improvement in accuracy compared to OLMo while requiring $2\times$ fewer pre-training tokens. Diverging from prior practices that only provide model weights and inference code, and pre-train on private datasets, our release includes the complete framework for training and evaluation of the language model on publicly available datasets, including training logs, multiple checkpoints, and pre-training configurations. We also release code to convert models to the MLX library for inference and fine-tuning on Apple devices. This comprehensive release aims to empower and strengthen the open research community, paving the way for future open research endeavors. Our source code along with pre-trained model weights and training recipes is available at \url{https://github.com/apple/corenet}. Additionally, the models can be found on HuggingFace at: \url{https://huggingface.co/apple/OpenELM}. | [
"['Sachin Mehta' 'Mohammad Hossein Sekhavat' 'Qingqing Cao'\n 'Maxwell Horton' 'Yanzi Jin' 'Chenfan Sun' 'Iman Mirzadeh'\n 'Mahyar Najibi' 'Dmitry Belenko' 'Peter Zatloukal' 'Mohammad Rastegari']"
]
|
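Layer-wise scaling, as described above, varies each transformer layer's width and head count across depth instead of repeating one identical block. The linear schedule below is an assumption for illustration; the actual schedules ship in OpenELM's released configurations.

```python
def layerwise_dims(num_layers, d_min, d_max, head_dim=64):
    """Interpolate per-layer width across depth, keeping it head-divisible."""
    dims = []
    for i in range(num_layers):
        t = i / max(1, num_layers - 1)
        d = int(d_min + t * (d_max - d_min))
        d -= d % head_dim                 # keep divisible by the head size
        dims.append((d, d // head_dim))   # (layer width, number of heads)
    return dims

for layer, (d, h) in enumerate(layerwise_dims(8, 512, 1280)):
    print(f"layer {layer}: width={d}, heads={h}")
```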
null | null | 2404.14620 | null | null | http://arxiv.org/pdf/2404.14620v1 | 2024-04-22T23:12:58Z | 2024-04-22T23:12:58Z | Fairness Incentives in Response to Unfair Dynamic Pricing | The use of dynamic pricing by profit-maximizing firms gives rise to demand fairness concerns, measured by discrepancies in consumer groups' demand responses to a given pricing strategy. Notably, dynamic pricing may result in buyer distributions unreflective of those of the underlying population, which can be problematic in markets where fair representation is socially desirable. To address this, policy makers might leverage tools such as taxation and subsidy to adapt policy mechanisms dependent upon their social objective. In this paper, we explore the potential for AI methods to assist such intervention strategies. To this end, we design a basic simulated economy, wherein we introduce a dynamic social planner (SP) to generate corporate taxation schedules geared to incentivizing firms towards adopting fair pricing behaviours, and to use the collected tax budget to subsidize consumption among underrepresented groups. To cover a range of possible policy scenarios, we formulate our social planner's learning problem as a multi-armed bandit, a contextual bandit and finally as a full reinforcement learning (RL) problem, evaluating welfare outcomes from each case. To alleviate the difficulty in retaining meaningful tax rates that apply to less frequently occurring brackets, we introduce FairReplayBuffer, which ensures that our RL agent samples experiences uniformly across a discretized fairness space. We find that, upon deploying a learned tax and redistribution policy, social welfare improves on that of the fairness-agnostic baseline, approaches that of the analytically optimal fairness-aware baseline in the multi-armed and contextual bandit settings, and surpasses it by 13.19% in the full RL setting. | [
"['Jesse Thibodeau' 'Hadi Nekoei' 'Afaf Taïk' 'Janarthanan Rajendran'\n 'Golnoosh Farnadi']"
]
|
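The FairReplayBuffer described above can be sketched as bucketed sampling: experiences are binned by a discretized fairness value, and bins are sampled uniformly so rare fairness regimes are not drowned out. The bin count and the skewed toy stream below are assumptions.

```python
import random
from collections import defaultdict

class FairReplayBuffer:
    def __init__(self, num_bins=10):
        self.bins = defaultdict(list)
        self.num_bins = num_bins

    def add(self, transition, fairness):        # fairness value in [0, 1]
        b = min(int(fairness * self.num_bins), self.num_bins - 1)
        self.bins[b].append(transition)

    def sample(self, batch_size):
        occupied = [b for b in self.bins.values() if b]
        # Uniform over fairness bins first, then uniform within a bin.
        return [random.choice(random.choice(occupied))
                for _ in range(batch_size)]

buf = FairReplayBuffer()
for _ in range(1000):                            # a fairness-skewed stream
    buf.add(("state", "action", "reward"), random.betavariate(2, 8))
print(len(buf.sample(32)), "transitions drawn uniformly across fairness bins")
```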
null | null | 2404.14625 | null | null | http://arxiv.org/abs/2404.14625v1 | 2024-04-22T23:40:03Z | 2024-04-22T23:40:03Z | Towards Multi-Morphology Controllers with Diversity and Knowledge
Distillation | Finding controllers that perform well across multiple morphologies is an important milestone for large-scale robotics, in line with recent advances via foundation models in other areas of machine learning. However, the challenges of learning a single controller to control multiple morphologies make the `one robot one task' paradigm dominant in the field. To alleviate these challenges, we present a pipeline that: (1) leverages Quality Diversity algorithms like MAP-Elites to create a dataset of many single-task/single-morphology teacher controllers, then (2) distills those diverse controllers into a single multi-morphology controller that performs well across many different body plans by mimicking the sensory-action patterns of the teacher controllers via supervised learning. The distilled controller scales well with the number of teachers/morphologies and shows emergent properties. It generalizes to unseen morphologies in a zero-shot manner, providing robustness to morphological perturbations and instant damage recovery. Lastly, the distilled controller is also independent of the teacher controllers -- we can distill the teacher's knowledge into any controller model, making our approach synergistic with architectural improvements and existing training algorithms for teacher controllers. | [
"['Alican Mertan' 'Nick Cheney']"
]
|
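Stage (2) of the pipeline above is behavior cloning of many single-morphology teachers into one student. A toy sketch where linear maps stand in for MAP-Elites teachers and a scalar encodes the morphology; a real run would regress on rollout data from the archive.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
obs_dim, act_dim, n_teachers = 8, 2, 5
teachers = [rng.standard_normal((obs_dim, act_dim)) for _ in range(n_teachers)]

X, Y = [], []
for m, W in enumerate(teachers):                  # "roll out" each teacher
    obs = rng.standard_normal((500, obs_dim))
    morph = np.full((500, 1), m / n_teachers)     # simple morphology encoding
    X.append(np.hstack([obs, morph]))
    Y.append(obs @ W)                             # teacher sensory-action map

student = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500)
student.fit(np.vstack(X), np.vstack(Y))           # one multi-morphology policy
print("distillation fit R^2:", student.score(np.vstack(X), np.vstack(Y)))
```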
null | null | 2404.14631 | null | null | http://arxiv.org/pdf/2404.14631v1 | 2024-04-23T00:05:48Z | 2024-04-23T00:05:48Z | Learning Word Embedding with Better Distance Weighting and Window Size
Scheduling | Distributed word representation (a.k.a. word embedding) is a key focus in natural language processing (NLP). As a highly successful word embedding model, Word2Vec offers an efficient method for learning distributed word representations on large datasets. However, Word2Vec lacks consideration for distances between center and context words. We propose two novel methods, Learnable Formulated Weights (LFW) and Epoch-based Dynamic Window Size (EDWS), to incorporate distance information into two variants of Word2Vec, the Continuous Bag-of-Words (CBOW) model and the Continuous Skip-gram (Skip-gram) model. For CBOW, LFW uses a formula with learnable parameters that best reflects the relationship of influence and distance between words to calculate distance-related weights for average pooling, providing insights for future NLP text modeling research. For Skip-gram, we improve its dynamic window size strategy to introduce distance information in a more balanced way. Experiments prove the effectiveness of LFW and EDWS in enhancing Word2Vec's performance, surpassing previous state-of-the-art methods. | [
"['Chaohao Yang']"
]
|
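The LFW idea above replaces CBOW's plain average pooling with a learnable, distance-dependent weighting of context vectors. A sketch assuming a softmax over `-a*d + b` as the weight form (the paper's exact parameterization may differ); `a` and `b` would be trained jointly with the embeddings.

```python
import torch

def lfw_pool(context_vecs, distances, a, b):
    """context_vecs: (window, dim); distances: offsets from the center word."""
    logits = -a * distances + b            # learnable a, b shape the decay
    weights = torch.softmax(logits, dim=0)
    return (weights.unsqueeze(1) * context_vecs).sum(dim=0)

ctx = torch.randn(6, 100)                              # 6 context words
dist = torch.tensor([3., 2., 1., 1., 2., 3.])          # |position offset|
a = torch.nn.Parameter(torch.tensor(0.5))
b = torch.nn.Parameter(torch.tensor(0.0))
pooled = lfw_pool(ctx, dist, a, b)                     # feeds the CBOW head
print(pooled.shape)                                    # torch.Size([100])
```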
null | null | 2404.14635 | null | null | http://arxiv.org/pdf/2404.14635v1 | 2024-04-23T00:18:20Z | 2024-04-23T00:18:20Z | Digital Twins for forecasting and decision optimisation with machine
learning: applications in wastewater treatment | Prediction and optimisation are two widely used techniques that have found many applications in solving real-world problems. While prediction is concerned with estimating the unknown future values of a variable, optimisation is concerned with optimising the decision given all the available data. These methods are used together to solve problems for sequential decision-making where often we need to predict the future values of variables and then use them for determining the optimal decisions. This paradigm is known as forecast and optimise and has numerous applications, e.g., forecast demand for a product and then optimise inventory, forecast energy demand and schedule generations, forecast demand for a service and schedule staff, to name a few. In this extended abstract, we review a digital twin that was developed and applied in wastewater treatment in Urban Utility to improve their operational efficiency. While the current study is tailored to the case study problem, the underlying principles can be used to solve similar problems in other domains. | [
"['Matthew Colwell' 'Mahdi Abolghasemi']"
]
|
null | null | 2404.14642 | null | null | http://arxiv.org/pdf/2404.14642v1 | 2024-04-23T00:39:26Z | 2024-04-23T00:39:26Z | Uncertainty Quantification on Graph Learning: A Survey | Graphical models, including Graph Neural Networks (GNNs) and Probabilistic Graphical Models (PGMs), have demonstrated their exceptional capabilities across numerous fields. These models necessitate effective uncertainty quantification to ensure reliable decision-making amid the challenges posed by model training discrepancies and unpredictable testing scenarios. This survey examines recent works that address uncertainty quantification within the model architectures, training, and inference of GNNs and PGMs. We aim to provide an overview of the current landscape of uncertainty in graphical models by organizing the recent methods into uncertainty representation and handling. By summarizing state-of-the-art methods, this survey seeks to deepen the understanding of uncertainty quantification in graphical models, thereby increasing their effectiveness and safety in critical applications. | [
"['Chao Chen' 'Chenghua Guo' 'Rui Xu' 'Xiangwen Liao' 'Xi Zhang'\n 'Sihong Xie' 'Hui Xiong' 'Philip Yu']"
]
|
null | null | 2404.14651 | null | null | http://arxiv.org/pdf/2404.14651v2 | 2024-06-30T03:50:46Z | 2024-04-23T01:18:28Z | Forecasting the Forced van der Pol Equation with Frequent Phase Shifts
Using Reservoir Computing | We tested the performance of reservoir computing (RC) in predicting the dynamics of a certain non-autonomous dynamical system. Specifically, we considered a van der Pol oscillator subjected to a periodic external force with frequent phase shifts. The reservoir computer, which was trained and optimized with simulation data generated for a particular phase shift, was designed to predict the oscillation dynamics under periodic external forces with different phase shifts. The results suggest that if the training data have some complexity, it is possible to quantitatively predict the oscillation dynamics exposed to different phase shifts. The setting of this study was motivated by the problem of predicting the state of the circadian rhythm of shift workers and designing a better shift work schedule for each individual. Our results suggest that RC could be exploited for such applications. | [
"['Sho Kuno' 'Hiroshi Kori']"
]
|
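The setup above is reproducible in miniature with an echo state network: drive a random reservoir with the forced van der Pol state plus the forcing signal, then fit a linear readout by ridge regression. All numbers below (reservoir size, spectral radius, forcing amplitude) are illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)
dt, T, mu = 0.01, 20000, 1.0
x, v, inputs = 0.1, 0.0, []
for t in range(T):                                   # forced van der Pol
    f = 0.5 * np.sin(0.02 * t)                       # periodic forcing
    x, v = x + dt * v, v + dt * (mu * (1 - x ** 2) * v - x + f)
    inputs.append(np.array([x, f]))

N = 200                                              # reservoir size
Win = rng.uniform(-0.5, 0.5, (N, 2))
W = rng.standard_normal((N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))      # spectral radius 0.9
r, states = np.zeros(N), []
for u in inputs:
    r = np.tanh(W @ r + Win @ u)
    states.append(r.copy())

R, Y = np.array(states[:-1]), np.array([u[0] for u in inputs[1:]])
Wout = np.linalg.solve(R.T @ R + 1e-6 * np.eye(N), R.T @ Y)  # ridge readout
print("one-step RMSE:", np.sqrt(np.mean((R @ Wout - Y) ** 2)))
```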
null | null | 2404.14653 | null | null | http://arxiv.org/pdf/2404.14653v1 | 2024-04-23T01:19:19Z | 2024-04-23T01:19:19Z | Machine Vision Based Assessment of Fall Color Changes in Apple Trees:
Exploring Relationship with Leaf Nitrogen Concentration | Apple trees, being deciduous, shed their leaves each year, which is preceded by a change in leaf color from green to yellow (also known as senescence) during the fall season. The rate and timing of color change are affected by a number of factors, including nitrogen (N) deficiencies. The green color of leaves is highly dependent on the chlorophyll content, which in turn depends on the nitrogen concentration in the leaves. The assessment of leaf color can give vital information on the nutrient status of the tree, and a machine vision-based system to capture and quantify these timings and changes in leaf color can be a great tool for that purpose. This study is based on data collected during the fall of 2021 and 2023 at a commercial orchard using a ground-based stereo-vision sensor for five weeks. The point cloud obtained from the sensor was segmented to get just the tree in the foreground. The study involved the segmentation of the trees in a natural background using point cloud data and quantification of the color using a custom-defined metric, \textit{yellowness index}, varying from $-1$ to $+1$ ($-1$ being completely green and $+1$ being completely yellow), which gives the proportion of yellow leaves on a tree. The performances of a K-means-based algorithm and a gradient-boosting algorithm were compared for \textit{yellowness index} calculation. The segmentation method proposed in the study was able to estimate the \textit{yellowness index} on the trees with $R^2 = 0.72$. The results showed that the metric was able to capture the gradual color transition from green to yellow over the study duration. It was also observed that trees with lower nitrogen showed the color transition to yellow earlier than trees with higher nitrogen. The onset of color transition during both years aligned with the $29^{th}$ week post-full bloom. | [
"['Achyut Paudel' 'Jostan Brown' 'Priyanka Upadhyaya' 'Atif Bilal Asad'\n 'Safal Kshetri' 'Manoj Karkee' 'Joseph R. Davidson' 'Cindy Grimm'\n 'Ashley Thompson']"
]
|
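The yellowness index above can be approximated from the segmented tree's leaf points by a two-cluster split on a green-versus-yellow color feature. The `G - B` feature and the linear map of the yellow fraction onto $[-1, +1]$ are simplifying assumptions standing in for the paper's K-means pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans

def yellowness_index(rgb_points):
    """rgb_points: (n, 3) colors of leaf points from the segmented tree."""
    g = rgb_points[:, 1].astype(float)
    b = rgb_points[:, 2].astype(float)
    feat = (g - b).reshape(-1, 1)            # yellow leaves: high G, low B
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(feat)
    yellow_cluster = np.argmax([feat[labels == k].mean() for k in (0, 1)])
    frac_yellow = (labels == yellow_cluster).mean()
    return 2.0 * frac_yellow - 1.0           # -1 all green ... +1 all yellow

pts = np.random.randint(0, 256, size=(5000, 3))   # placeholder point colors
print("yellowness index:", round(yellowness_index(pts), 3))
```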
null | null | 2404.14661 | null | null | http://arxiv.org/pdf/2404.14661v1 | 2024-04-23T01:45:55Z | 2024-04-23T01:45:55Z | First Mapping the Canopy Height of Primeval Forests in the Tallest Tree
Area of Asia | We have developed the world's first canopy height map of the distribution area of world-level giant trees. This mapping is crucial for discovering more individual and community world-level giant trees, and for analyzing and quantifying the effectiveness of biodiversity conservation measures in the Yarlung Tsangpo Grand Canyon (YTGC) National Nature Reserve. We propose a method to map the canopy height of the primeval forest within the world-level giant tree distribution area using deep learning driven by the fusion of spaceborne LiDAR (Global Ecosystem Dynamics Investigation (GEDI) and ICESat-2) with Sentinel-2 satellite imagery. We customized a pyramid-receptive-fields depthwise-separable CNN (PRFXception), an architecture tailored to mapping primeval forest canopy height, to infer canopy height at the footprint level of GEDI and ICESat-2 from Sentinel-2 optical imagery with a 10-meter spatial resolution. We conducted a field survey of 227 permanent plots using a stratified sampling method and measured several giant trees using UAV-LS. The predicted canopy height was compared with ICESat-2 and GEDI validation data (RMSE = 7.56 m, MAE = 6.07 m, ME = -0.98 m, R^2 = 0.58), UAV-LS point clouds (RMSE = 5.75 m, MAE = 3.72 m, ME = 0.82 m, R^2 = 0.65), and ground survey data (RMSE = 6.75 m, MAE = 5.56 m, ME = 2.14 m, R^2 = 0.60). We mapped the potential distribution of world-level giant trees and discovered two previously undetected giant tree communities with an 89% probability of having trees 80-100 m tall, potentially taller than Asia's tallest tree. This paper provides scientific evidence confirming southeastern Tibet--northwestern Yunnan as the fourth global distribution center of world-level giant trees, supporting initiatives to include the YTGC giant tree distribution area within the scope of China's national park conservation. | [
"['Guangpeng Fan' 'Fei Yan' 'Xiangquan Zeng' 'Qingtao Xu' 'Ruoyoulan Wang'\n 'Binghong Zhang' 'Jialing Zhou' 'Liangliang Nan' 'Jinhu Wang'\n 'Zhiwei Zhang' 'Jia Wang']"
]
|
null | null | 2404.14662 | null | null | http://arxiv.org/pdf/2404.14662v1 | 2024-04-23T01:46:32Z | 2024-04-23T01:46:32Z | NExT: Teaching Large Language Models to Reason about Code Execution | A fundamental skill among human developers is the ability to understand and reason about program execution. As an example, a programmer can mentally simulate code execution in natural language to debug and repair code (aka. rubber duck debugging). However, large language models (LLMs) of code are typically trained on the surface textual form of programs, thus may lack a semantic understanding of how programs execute at run-time. To address this issue, we propose NExT, a method to teach LLMs to inspect the execution traces of programs (variable states of executed lines) and reason about their run-time behavior through chain-of-thought (CoT) rationales. Specifically, NExT uses self-training to bootstrap a synthetic training set of execution-aware rationales that lead to correct task solutions (e.g., fixed programs) without laborious manual annotation. Experiments on program repair tasks based on MBPP and HumanEval demonstrate that NExT improves the fix rate of a PaLM 2 model, by 26.1% and 14.3% absolute, respectively, with significantly improved rationale quality as verified by automated metrics and human raters. Our model can also generalize to scenarios where program traces are absent at test-time. | [
"['Ansong Ni' 'Miltiadis Allamanis' 'Arman Cohan' 'Yinlin Deng'\n 'Kensen Shi' 'Charles Sutton' 'Pengcheng Yin']"
]
|
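NExT conditions its rationales on execution traces, i.e., the variable states of executed lines. A minimal sketch of collecting such a trace for a buggy Python function with `sys.settrace`; the serialization format is an assumption, not the paper's exact prompt template.

```python
import sys

def trace_program(fn, *args):
    """Record 'line N: {locals}' for each executed line of fn."""
    lines = []
    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is fn.__code__:
            lines.append(f"line {frame.f_lineno}: {dict(frame.f_locals)}")
        return tracer
    sys.settrace(tracer)
    try:
        fn(*args)
    finally:
        sys.settrace(None)
    return "\n".join(lines)

def buggy_mean(xs):
    total = 0
    for x in xs:
        total += x
    return total / (len(xs) - 1)   # off-by-one bug for the model to spot

print(trace_program(buggy_mean, [2, 4, 6]))   # trace spliced into the prompt
```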
null | null | 2404.14664 | null | null | http://arxiv.org/pdf/2404.14664v1 | 2024-04-23T01:49:12Z | 2024-04-23T01:49:12Z | Employing Layerwised Unsupervised Learning to Lessen Data and Loss
Requirements in Forward-Forward Algorithms | Recent deep learning models such as ChatGPT utilizing the back-propagation algorithm have exhibited remarkable performance. However, the disparity between the biological brain processes and the back-propagation algorithm has been noted. The Forward-Forward algorithm, which trains deep learning models solely through the forward pass, has emerged to address this. Although the Forward-Forward algorithm cannot replace back-propagation due to limitations such as having to use special input and loss functions, it has the potential to be useful in special situations where back-propagation is difficult to use. To work around this limitation and verify usability, we propose an Unsupervised Forward-Forward algorithm. Using an unsupervised learning model enables training with usual loss functions and inputs without restriction. Through this approach, we lead to stable learning and enable versatile utilization across various datasets and tasks. From a usability perspective, given the characteristics of the Forward-Forward algorithm and the advantages of the proposed method, we anticipate its practical application even in scenarios such as federated learning, where deep learning layers need to be trained separately in physically distributed environments. | [
"['Taewook Hwang' 'Hyein Seo' 'Sangkeun Jung']"
]
|
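The Forward-Forward training in the row above is local to each layer: raise a "goodness" score (sum of squared activations) on positive inputs and lower it on negatives, with no backward pass across layers. A minimal PyTorch sketch following Hinton's original formulation; the paper's unsupervised construction of positive/negative pairs is not shown, and the random tensors below are placeholders.

```python
import torch

class FFLayer(torch.nn.Module):
    def __init__(self, d_in, d_out, theta=2.0):
        super().__init__()
        self.lin = torch.nn.Linear(d_in, d_out)
        self.opt = torch.optim.Adam(self.parameters(), lr=1e-3)
        self.theta = theta

    def forward(self, x):
        x = torch.nn.functional.normalize(x, dim=1)  # pass direction, not goodness
        return torch.relu(self.lin(x))

    def train_step(self, x_pos, x_neg):
        g_pos = self.forward(x_pos).pow(2).sum(1)    # goodness of positives
        g_neg = self.forward(x_neg).pow(2).sum(1)    # goodness of negatives
        loss = torch.nn.functional.softplus(
            torch.cat([self.theta - g_pos, g_neg - self.theta])).mean()
        self.opt.zero_grad(); loss.backward(); self.opt.step()
        return self.forward(x_pos).detach(), self.forward(x_neg).detach()

layers = [FFLayer(784, 256), FFLayer(256, 256)]
x_pos, x_neg = torch.randn(64, 784), torch.randn(64, 784)
for layer in layers:                                 # layer-wise, detached
    x_pos, x_neg = layer.train_step(x_pos, x_neg)
```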
null | null | 2404.14674 | null | null | http://arxiv.org/pdf/2404.14674v1 | 2024-04-23T02:00:58Z | 2024-04-23T02:00:58Z | HOIN: High-Order Implicit Neural Representations | Implicit neural representations (INR) suffer from worsening spectral bias, which results in overly smooth solutions to the inverse problem. To deal with this problem, we propose a universal framework for processing inverse problems called \textbf{High-Order Implicit Neural Representations (HOIN)}. By refining the traditional cascade structure to foster high-order interactions among features, HOIN enhances the model's expressive power and mitigates spectral bias through its neural tangent kernel's (NTK) strong diagonal properties, accelerating and optimizing inverse problem resolution. By analyzing the model's expression space, high-order derivatives, and the NTK matrix, we theoretically validate the feasibility of HOIN. HOIN realizes 1 to 3 dB improvements in most inverse problems, establishing a new state-of-the-art recovery quality and training efficiency, thus providing a new general paradigm for INR and paving the way for it to solve the inverse problem. | [
"['Yang Chen' 'Ruituo Wu' 'Yipeng Liu' 'Ce Zhu']"
]
|
null | null | 2404.14680 | null | null | http://arxiv.org/pdf/2404.14680v1 | 2024-04-23T02:19:35Z | 2024-04-23T02:19:35Z | Automated Multi-Language to English Machine Translation Using Generative
Pre-Trained Transformers | The task of accurate and efficient language translation is an extremely important information processing task. Machine learning-enabled, automated translation that is accurate and fast is often a large topic of interest in the machine learning and data science communities. In this study, we examine using local Generative Pretrained Transformer (GPT) models to perform automated zero-shot, black-box, sentence-wise, multi-natural-language translation into English text. We benchmark 16 different open-source GPT models, with no custom fine-tuning, from the Huggingface LLM repository for translating 50 different non-English languages into English using translated TED Talk transcripts as the reference dataset. These GPT model inference calls are performed strictly locally, on single A100 Nvidia GPUs. The reported benchmark metrics are language translation accuracy, using the BLEU, GLEU, METEOR, and chrF text overlap measures, and wall-clock time for each sentence translation. The overall best-performing GPT model for translating into English text for the BLEU metric is ReMM-v2-L2-13B with a mean score across all tested languages of $0.152$, for the GLEU metric is ReMM-v2-L2-13B with a mean score across all tested languages of $0.256$, for the chrF metric is Llama2-chat-AYT-13B with a mean score across all tested languages of $0.448$, and for the METEOR metric is ReMM-v2-L2-13B with a mean score across all tested languages of $0.438$. | [
"['Elijah Pelofske' 'Vincent Urias' 'Lorie M. Liebrock']"
]
|
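Two of the metrics named above, BLEU and chrF, are provided by the sacrebleu package (`pip install sacrebleu`); METEOR and GLEU would come from other packages such as nltk. A minimal sentence-wise scoring sketch with made-up hypothesis/reference pairs:

```python
import sacrebleu

hypotheses = ["The cat sits on the mat.", "He went at home."]
references = [["The cat is sitting on the mat."], ["He went home."]]

for hyp, refs in zip(hypotheses, references):
    bleu = sacrebleu.sentence_bleu(hyp, refs)   # 0-100 scale
    chrf = sacrebleu.sentence_chrf(hyp, refs)
    print(f"BLEU={bleu.score:5.1f}  chrF={chrf.score:5.1f} | {hyp}")
```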
null | null | 2404.14688 | null | null | http://arxiv.org/pdf/2404.14688v2 | 2024-05-22T16:43:48Z | 2024-04-23T02:36:47Z | FMint: Bridging Human Designed and Data Pretrained Models for
Differential Equation Foundation Model | In this paper, we propose a pre-trained foundation model \textbf{FMint} (\textbf{F}oundation \textbf{M}odel based on \textbf{In}i\textbf{t}ialization), designed to speed up large-scale simulations of various differential equations with high accuracy via error correction. Human-designed simulation algorithms excel at capturing the fundamental physics of engineering problems, but often need to balance the trade-off between accuracy and efficiency. While deep learning methods offer innovative solutions across numerous scientific fields, they frequently fall short in domain-specific knowledge. FMint bridges these gaps through conditioning on the initial coarse solutions obtained from conventional human-designed algorithms, and trained to obtain refined solutions for various differential equations. Based on the backbone of large language models, we adapt the in-context learning scheme to learn a universal error correction method for dynamical systems from given prompted sequences of coarse solutions. The model is pre-trained on a corpus of 600K ordinary differential equations (ODEs), and we conduct extensive experiments on both in-distribution and out-of-distribution tasks. FMint outperforms various baselines on large-scale simulation, and demonstrates its capability in generalization to unseen ODEs. Our approach achieves an accuracy improvement of 1 to 2 orders of magnitude over state-of-the-art dynamical system simulators, and delivers a 5X speedup compared to traditional numerical algorithms. | [
"['Zezheng Song' 'Jiaxin Yuan' 'Haizhao Yang']"
]
|
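The FMint pattern above is "coarse conventional solver plus learned correction." A sketch pairing a large-step Euler integrator with an untrained stub in place of the pre-trained corrector, on a toy ODE:

```python
import numpy as np

def coarse_euler(f, y0, t0, t1, n_steps):
    """Cheap, low-accuracy initialization trajectory."""
    ys, y, h = [y0], y0, (t1 - t0) / n_steps
    for i in range(n_steps):
        y = y + h * f(t0 + i * h, y)
        ys.append(y)
    return np.array(ys)

def correction_model(coarse_traj):
    # Placeholder for the pre-trained corrector: maps a prompted sequence
    # of coarse solutions to refined increments (zeros here).
    return np.zeros_like(coarse_traj)

f = lambda t, y: -y                                  # toy ODE: y' = -y
coarse = coarse_euler(f, np.array([1.0]), 0.0, 5.0, n_steps=10)
refined = coarse + correction_model(coarse)
print("refined endpoint:", refined[-1], "exact:", np.exp(-5.0))
```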
null | null | 2404.14689 | null | null | http://arxiv.org/pdf/2404.14689v1 | 2024-04-23T02:36:54Z | 2024-04-23T02:36:54Z | Interpretable Prediction and Feature Selection for Survival Analysis | Survival analysis is widely used as a technique to model time-to-event data when some data is censored, particularly in healthcare for predicting future patient risk. In such settings, survival models must be both accurate and interpretable so that users (such as doctors) can trust the model and understand model predictions. While most literature focuses on discrimination, interpretability is equally as important. A successful interpretable model should be able to describe how changing each feature impacts the outcome, and should only use a small number of features. In this paper, we present DyS (pronounced ``dice''), a new survival analysis model that achieves both strong discrimination and interpretability. DyS is a feature-sparse Generalized Additive Model, combining feature selection and interpretable prediction into one model. While DyS works well for all survival analysis problems, it is particularly useful for large (in $n$ and $p$) survival datasets such as those commonly found in observational healthcare studies. Empirical studies show that DyS competes with other state-of-the-art machine learning models for survival analysis, while being highly interpretable. | [
"['Mike Van Ness' 'Madeleine Udell']"
]
|
null | null | 2404.14700 | null | null | http://arxiv.org/pdf/2404.14700v3 | 2024-04-25T03:38:46Z | 2024-04-23T02:57:46Z | FlashSpeech: Efficient Zero-Shot Speech Synthesis | Recent progress in large-scale zero-shot speech synthesis has been significantly advanced by language models and diffusion models. However, the generation process of both methods is slow and computationally intensive. Efficient speech synthesis using a lower computing budget to achieve quality on par with previous work remains a significant challenge. In this paper, we present FlashSpeech, a large-scale zero-shot speech synthesis system with approximately 5% of the inference time compared with previous work. FlashSpeech is built on the latent consistency model and applies a novel adversarial consistency training approach that can train from scratch without the need for a pre-trained diffusion model as the teacher. Furthermore, a new prosody generator module enhances the diversity of prosody, making the rhythm of the speech sound more natural. The generation processes of FlashSpeech can be achieved efficiently with one or two sampling steps while maintaining high audio quality and high similarity to the audio prompt for zero-shot speech generation. Our experimental results demonstrate the superior performance of FlashSpeech. Notably, FlashSpeech can be about 20 times faster than other zero-shot speech synthesis systems while maintaining comparable performance in terms of voice quality and similarity. Furthermore, FlashSpeech demonstrates its versatility by efficiently performing tasks like voice conversion, speech editing, and diverse speech sampling. Audio samples can be found in https://flashspeech.github.io/. | [
"['Zhen Ye' 'Zeqian Ju' 'Haohe Liu' 'Xu Tan' 'Jianyi Chen' 'Yiwen Lu'\n 'Peiwen Sun' 'Jiahao Pan' 'Weizhen Bian' 'Shulin He' 'Qifeng Liu'\n 'Yike Guo' 'Wei Xue']"
]
|
null | null | 2404.14701 | null | null | http://arxiv.org/pdf/2404.14701v1 | 2024-04-23T03:01:09Z | 2024-04-23T03:01:09Z | Deep neural networks for choice analysis: Enhancing behavioral
regularity with gradient regularization | Deep neural networks (DNNs) frequently present behaviorally irregular patterns, significantly limiting their practical potential and theoretical validity in travel behavior modeling. This study proposes strong and weak behavioral regularities as novel metrics to evaluate the monotonicity of individual demand functions (a.k.a. law of demand), and further designs a constrained optimization framework with six gradient regularizers to enhance DNNs' behavioral regularity. The proposed framework is applied to travel survey data from Chicago and London to examine the trade-off between predictive power and behavioral regularity for large vs. small sample scenarios and in-domain vs. out-of-domain generalizations. The results demonstrate that, unlike models with strong behavioral foundations such as the multinomial logit, the benchmark DNNs cannot guarantee behavioral regularity. However, gradient regularization (GR) increases DNNs' behavioral regularity by around 6 percentage points (pp) while retaining their relatively high predictive power. In the small sample scenario, GR is more effective than in the large sample scenario, simultaneously improving behavioral regularity by about 20 pp and log-likelihood by around 1.7%. Compared with the in-domain generalization of DNNs, GR works more effectively in out-of-domain generalization: it drastically improves the behavioral regularity of poorly performing benchmark DNNs by around 65 pp, indicating the criticality of behavioral regularization for enhancing model transferability and application in forecasting. Moreover, the proposed framework is applicable to other NN-based choice models such as TasteNets. Future studies could use behavioral regularity as a metric along with log-likelihood in evaluating travel demand models, and investigate other methods to further enhance behavioral regularity when adopting complex machine learning models. | [
"['Siqi Feng' 'Rui Yao' 'Stephane Hess' 'Ricardo A. Daziano'\n 'Timothy Brathwaite' 'Joan Walker' 'Shenhao Wang']"
]
|
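One reading of the gradient regularizers above: penalize violations of the law of demand by constraining the sign of d(choice probability)/d(cost). A PyTorch sketch in which the model, the feature layout (feature 0 = cost of the first mode), and the penalty weight are toy assumptions rather than any of the paper's six regularizers.

```python
import torch

model = torch.nn.Sequential(torch.nn.Linear(5, 32), torch.nn.ReLU(),
                            torch.nn.Linear(32, 3))      # 3 travel modes
x = torch.randn(128, 5, requires_grad=True)              # feature 0 = cost
y = torch.randint(0, 3, (128,))

probs = torch.softmax(model(x), dim=1)
grad_cost = torch.autograd.grad(probs[:, 0].sum(), x,
                                create_graph=True)[0][:, 0]
reg = torch.relu(grad_cost).pow(2).mean()   # demand must not rise with cost
loss = torch.nn.functional.cross_entropy(model(x), y) + 10.0 * reg
loss.backward()
print("NLL + gradient regularizer:", float(loss))
```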
null | null | 2404.14721 | null | null | http://arxiv.org/pdf/2404.14721v1 | 2024-04-23T03:52:44Z | 2024-04-23T03:52:44Z | Dynamically Anchored Prompting for Task-Imbalanced Continual Learning | Existing continual learning literature relies heavily on a strong assumption that tasks arrive with a balanced data stream, which is often unrealistic in real-world applications. In this work, we explore task-imbalanced continual learning (TICL) scenarios where the distribution of task data is non-uniform across the whole learning process. We find that imbalanced tasks significantly challenge the capability of models to control the trade-off between stability and plasticity from the perspective of recent prompt-based continual learning methods. On top of the above finding, we propose Dynamically Anchored Prompting (DAP), a prompt-based method that only maintains a single general prompt to adapt to the shifts within a task stream dynamically. This general prompt is regularized in the prompt space with two specifically designed prompt anchors, called boosting anchor and stabilizing anchor, to balance stability and plasticity in TICL. Remarkably, DAP achieves this balance by only storing a prompt across the data stream, therefore offering a substantial advantage in rehearsal-free CL. Extensive experiments demonstrate that the proposed DAP results in 4.5% to 15% absolute improvements over state-of-the-art methods on benchmarks under task-imbalanced settings. Our code is available at https://github.com/chenxing6666/DAP | [
"['Chenxing Hong' 'Yan Jin' 'Zhiqi Kang' 'Yizhou Chen' 'Mengke Li'\n 'Yang Lu' 'Hanzi Wang']"
]
|
null | null | 2404.14728 | null | null | http://arxiv.org/pdf/2404.14728v1 | 2024-04-23T04:06:08Z | 2024-04-23T04:06:08Z | Novel Topological Machine Learning Methodology for Stream-of-Quality
Modeling in Smart Manufacturing | This paper presents a topological analytics approach within the 5-level Cyber-Physical Systems (CPS) architecture for the Stream-of-Quality assessment in smart manufacturing. The proposed methodology not only enables real-time quality monitoring and predictive analytics but also discovers the hidden relationships between quality features and process parameters across different manufacturing processes. A case study in additive manufacturing was used to demonstrate the feasibility of the proposed methodology to maintain high product quality and adapt to product quality variations. This paper demonstrates how topological graph visualization can be effectively used for the real-time identification of new representative data through the Stream-of-Quality assessment. | [
"['Jay Lee' 'Dai-Yan Ji' 'Yuan-Ming Hsu']"
]
|
null | null | 2404.14743 | null | null | http://arxiv.org/pdf/2404.14743v1 | 2024-04-23T04:51:02Z | 2024-04-23T04:51:02Z | Gradient Guidance for Diffusion Models: An Optimization Perspective | Diffusion models have demonstrated empirical successes in various applications and can be adapted to task-specific needs via guidance. This paper introduces a form of gradient guidance for adapting or fine-tuning diffusion models towards user-specified optimization objectives. We study the theoretic aspects of a guided score-based sampling process, linking the gradient-guided diffusion model to first-order optimization. We show that adding gradient guidance to the sampling process of a pre-trained diffusion model is essentially equivalent to solving a regularized optimization problem, where the regularization term acts as a prior determined by the pre-training data. Diffusion models are able to learn data's latent subspace; however, explicitly adding the gradient of an external objective function to the sampling process would jeopardize the structure in generated samples. To remedy this issue, we consider a modified form of gradient guidance based on a forward prediction loss, which leverages the pre-trained score function to preserve the latent structure in generated samples. We further consider an iteratively fine-tuned version of gradient-guided diffusion where one can query gradients at newly generated data points and update the score network using new samples. This process mimics a first-order optimization iteration in expectation, for which we prove an O(1/K) convergence rate to the global optimum when the objective function is concave. | [
"['Yingqing Guo' 'Hui Yuan' 'Yukang Yang' 'Minshuo Chen' 'Mengdi Wang']"
]
|
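The basic form of gradient guidance above adds the objective's gradient to the pre-trained score at every reverse step (the paper's forward-prediction-loss variant that preserves latent structure is not shown). A sketch with a stub score network and an assumed `guidance_scale`:

```python
import torch

def score_net(x, t):          # stub for a pre-trained score function
    return -x                 # score of a standard Gaussian prior

def objective(x):             # user-specified objective to increase
    return -(x - 2.0).pow(2).sum(dim=1)

def guided_sample(n, dim, steps=100, guidance_scale=0.1):
    x = torch.randn(n, dim)
    for t in range(steps, 0, -1):
        x = x.detach().requires_grad_(True)
        g = torch.autograd.grad(objective(x).sum(), x)[0]
        drift = score_net(x, t) + guidance_scale * g    # guided score
        x = x + 0.01 * drift + (0.02 ** 0.5) * torch.randn_like(x)
    return x.detach()

print(guided_sample(4, 2).mean(dim=0))   # samples nudged toward x = 2
```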
null | null | 2404.14746 | null | null | http://arxiv.org/pdf/2404.14746v1 | 2024-04-23T04:57:44Z | 2024-04-23T04:57:44Z | A Customer Level Fraudulent Activity Detection Benchmark for Enhancing
Machine Learning Model Research and Evaluation | In the field of fraud detection, the availability of comprehensive and privacy-compliant datasets is crucial for advancing machine learning research and developing effective anti-fraud systems. Traditional datasets often focus on transaction-level information, which, while useful, overlooks the broader context of customer behavior patterns that are essential for detecting sophisticated fraud schemes. The scarcity of such data, primarily due to privacy concerns, significantly hampers the development and testing of predictive models that can operate effectively at the customer level. Addressing this gap, our study introduces a benchmark that contains structured datasets specifically designed for customer-level fraud detection. The benchmark not only adheres to strict privacy guidelines to ensure user confidentiality but also provides a rich source of information by encapsulating customer-centric features. We have developed the benchmark that allows for the comprehensive evaluation of various machine learning models, facilitating a deeper understanding of their strengths and weaknesses in predicting fraudulent activities. Through this work, we seek to bridge the existing gap in data availability, offering researchers and practitioners a valuable resource that empowers the development of next-generation fraud detection techniques. | [
"['Phoebe Jing' 'Yijing Gao' 'Xianlong Zeng']"
]
|
null | null | 2404.14749 | null | null | http://arxiv.org/pdf/2404.14749v2 | 2024-04-26T22:16:59Z | 2024-04-23T05:11:08Z | Semantic Cells: Evolutional Process to Acquire Sense Diversity of Items | Previous models for learning the semantic vectors of items and their groups, such as words, sentences, nodes, and graphs, using distributed representation have been based on the assumption that the basic sense of an item corresponds to one vector composed of dimensions corresponding to hidden contexts in the target real world, from which multiple senses of the item are obtained by conforming to lexical databases or adapting to the context. However, there may be multiple senses of an item, which are hardly assimilated and change or evolve dynamically following the contextual shift even within a document or a restricted period. This is a process similar to the evolution or adaptation of a living entity with/to environmental shifts. Setting the scope of disambiguation of items for sensemaking, the author presents a method in which a word or item in the data embraces multiple semantic vectors that evolve via interaction with others, similar to a cell embracing chromosomes crossing over with each other. We obtained two preliminary results: (1) the role of a word that evolves to acquire the largest or lower-middle variance of semantic vectors tends to be explainable by the author of the text; (2) the epicenters of earthquakes that acquire larger variance via crossover, corresponding to the interaction with diverse areas of land crust, are likely to correspond to the epicenters of forthcoming large earthquakes. | [
"['Yukio Ohsawa' 'Dingming Xue' 'Kaira Sekiguchi']"
]
|
null | null | 2404.14754 | null | null | http://arxiv.org/abs/2404.14754v1 | 2024-04-23T05:32:22Z | 2024-04-23T05:32:22Z | Skip the Benchmark: Generating System-Level High-Level Synthesis Data
using Generative Machine Learning | High-Level Synthesis (HLS) Design Space Exploration (DSE) is a widely accepted approach for efficiently exploring Pareto-optimal and optimal hardware solutions during the HLS process. Several HLS benchmarks and datasets are available for the research community to evaluate their methodologies. Unfortunately, these resources are limited and may not be sufficient for complex, multi-component system-level explorations. Generating new data using existing HLS benchmarks can be cumbersome, given the expertise and time required to effectively generate data for different HLS designs and directives. As a result, synthetic data has been used in prior work to evaluate system-level HLS DSE. However, the fidelity of the synthetic data to real data is often unclear, leading to uncertainty about the quality of system-level HLS DSE. This paper proposes a novel approach, called Vaegan, that employs generative machine learning to generate synthetic data that is robust enough to support complex system-level HLS DSE experiments that would be unattainable with only the currently available data. We explore and adapt a Variational Autoencoder (VAE) and Generative Adversarial Network (GAN) for this task and evaluate our approach using state-of-the-art datasets and metrics. We compare our approach to prior works and show that Vaegan effectively generates synthetic HLS data that closely mirrors the ground truth's distribution. | [
"['Yuchao Liao' 'Tosiron Adegbija' 'Roman Lysecky' 'Ravi Tandon']"
]
|
null | null | 2404.14757 | null | null | http://arxiv.org/pdf/2404.14757v1 | 2024-04-23T05:43:44Z | 2024-04-23T05:43:44Z | Integrating Mamba and Transformer for Long-Short Range Time Series
Forecasting | Time series forecasting is an important problem and plays a key role in a variety of applications including weather forecasting, stock markets, and scientific simulations. Although transformers have proven to be effective in capturing dependency, the quadratic complexity of the attention mechanism prevents their further adoption in long-range time series forecasting, limiting them to short-range modeling. Recent progress on state space models (SSMs) has shown impressive performance on modeling long-range dependency due to their subquadratic complexity. Mamba, as a representative SSM, enjoys linear time complexity and has achieved strong scalability on tasks that require scaling to long sequences, such as language, audio, and genomics. In this paper, we propose to leverage a hybrid framework Mambaformer that internally combines Mamba for long-range dependency, and Transformer for short-range dependency, for long-short range forecasting. To the best of our knowledge, this is the first paper to combine the Mamba and Transformer architectures for time series data. We investigate possible hybrid architectures combining Mamba layers and attention layers for long-short range time series forecasting. The comparative study shows that the Mambaformer family can outperform Mamba and Transformer in the long-short range time series forecasting problem. The code is available at https://github.com/XiongxiaoXu/Mambaformerin-Time-Series. | [
"['Xiongxiao Xu' 'Yueqing Liang' 'Baixiang Huang' 'Zhiling Lan' 'Kai Shu']"
]
|
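The hybrid layer above interleaves a linear-time sequence block with self-attention. In this sketch a GRU stands in for the Mamba block so the example stays self-contained; a faithful implementation would drop in a real Mamba layer (e.g., from the mamba-ssm package) in its place.

```python
import torch

class HybridBlock(torch.nn.Module):
    def __init__(self, d):
        super().__init__()
        self.long = torch.nn.GRU(d, d, batch_first=True)     # Mamba stand-in
        self.attn = torch.nn.MultiheadAttention(d, 4, batch_first=True)
        self.n1 = torch.nn.LayerNorm(d)
        self.n2 = torch.nn.LayerNorm(d)

    def forward(self, x):
        x = self.n1(x + self.long(x)[0])                     # long-range pass
        a, _ = self.attn(x, x, x, need_weights=False)        # short-range pass
        return self.n2(x + a)

seq = torch.randn(8, 96, 64)       # (batch, lookback window, channels)
print(HybridBlock(64)(seq).shape)  # torch.Size([8, 96, 64])
```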
null | null | 2404.14758 | null | null | http://arxiv.org/pdf/2404.14758v1 | 2024-04-23T05:45:52Z | 2024-04-23T05:45:52Z | Second-order Information Promotes Mini-Batch Robustness in
Variance-Reduced Gradients | We show that, for finite-sum minimization problems, incorporating partial second-order information of the objective function can dramatically improve the robustness to mini-batch size of variance-reduced stochastic gradient methods, making them more scalable while retaining their benefits over traditional Newton-type approaches. We demonstrate this phenomenon on a prototypical stochastic second-order algorithm, called Mini-Batch Stochastic Variance-Reduced Newton ($\texttt{Mb-SVRN}$), which combines variance-reduced gradient estimates with access to an approximate Hessian oracle. In particular, we show that when the data size $n$ is sufficiently large, i.e., $n \gg \alpha^2\kappa$, where $\kappa$ is the condition number and $\alpha$ is the Hessian approximation factor, then $\texttt{Mb-SVRN}$ achieves a fast linear convergence rate that is independent of the gradient mini-batch size $b$, as long as $b$ is in the range between $1$ and $b_{\max} = O(n/(\alpha \log n))$. Only after increasing the mini-batch size past this critical point $b_{\max}$ does the method begin to transition into a standard Newton-type algorithm, which is much more sensitive to the Hessian approximation quality. We demonstrate this phenomenon empirically on benchmark optimization tasks, showing that, after tuning the step size, the convergence rate of $\texttt{Mb-SVRN}$ remains fast for a wide range of mini-batch sizes, and that the dependence of the phase transition point $b_{\max}$ on the Hessian approximation factor $\alpha$ aligns with our theoretical predictions. | [
"['Sachin Garg' 'Albert S. Berahas' 'Michał Dereziński']"
]
|
null | null | 2404.14760 | null | null | http://arxiv.org/pdf/2404.14760v2 | 2024-05-29T16:18:02Z | 2024-04-23T05:51:45Z | Retrieval Augmented Generation for Domain-specific Question Answering | Question answering (QA) has become an important application in the advanced development of large language models. General pre-trained large language models for question-answering are not trained to properly understand the knowledge or terminology for a specific domain, such as finance, healthcare, education, and customer service for a product. To better cater to domain-specific understanding, we build an in-house question-answering system for Adobe products. We propose a novel framework to compile a large question-answer database and develop the approach for retrieval-aware finetuning of a Large Language model. We showcase that fine-tuning the retriever leads to major improvements in the final generation. Our overall approach reduces hallucinations during generation while keeping in context the latest retrieval information for contextual grounding. | [
"['Sanat Sharma' 'David Seunghyun Yoon' 'Franck Dernoncourt'\n 'Dewang Sultania' 'Karishma Bagga' 'Mengjiao Zhang' 'Trung Bui'\n 'Varun Kotte']"
]
|
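The retrieval-aware pipeline above boils down to: retrieve from the compiled QA database, then ground the generation prompt in what was retrieved. A minimal sketch in which token-overlap scoring stands in for the finetuned retriever and the assembled prompt is returned in place of an actual LLM call; the document strings are placeholders, not real product documentation.

```python
def retrieve(question, docs, k=2):
    """Rank docs by token overlap with the question (retriever stand-in)."""
    q_tokens = set(question.lower().split())
    scores = [len(q_tokens & set(d.lower().split())) for d in docs]
    top = sorted(range(len(docs)), key=lambda i: -scores[i])[:k]
    return [docs[i] for i in top]

docs = ["To export a PDF, choose the export option in the editor.",
        "Brush presets can be reset from the preferences panel.",
        "Team licenses are managed in the admin console."]

def answer(question):
    context = "\n".join(retrieve(question, docs))
    return (f"Answer using only this context:\n{context}\n\n"
            f"Q: {question}\nA:")               # would go to the finetuned LLM

print(answer("How do I export my file as a PDF?"))
```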