categories: string
doi: string
id: string
year: float64
venue: string
link: string
updated: string
published: string
title: string
abstract: string
authors: list
null
null
2404.09962
null
null
http://arxiv.org/pdf/2404.09962v1
2024-04-15T17:39:44Z
2024-04-15T17:39:44Z
Invariant Subspace Decomposition
We consider the task of predicting a response Y from a set of covariates X in settings where the conditional distribution of Y given X changes over time. For this to be feasible, assumptions on how the conditional distribution changes over time are required. Existing approaches assume, for example, that changes occur smoothly over time so that short-term prediction using only the recent past becomes feasible. In this work, we propose a novel invariance-based framework for linear conditionals, called Invariant Subspace Decomposition (ISD), that splits the conditional distribution into a time-invariant and a residual time-dependent component. As we show, this decomposition can be utilized both for zero-shot and time-adaptation prediction tasks, that is, settings where either no or a small amount of training data is available at the time points we want to predict Y at, respectively. We propose a practical estimation procedure, which automatically infers the decomposition using tools from approximate joint matrix diagonalization. Furthermore, we provide finite sample guarantees for the proposed estimator and demonstrate empirically that it indeed improves on approaches that do not use the additional invariant structure.
[ "['Margherita Lazzaretto' 'Jonas Peters' 'Niklas Pfister']" ]
null
null
2404.09964
null
null
http://arxiv.org/pdf/2404.09964v1
2024-04-15T17:40:23Z
2024-04-15T17:40:23Z
Design and Analysis of Efficient Attention in Transformers for Social Group Activity Recognition
Social group activity recognition is a challenging task extended from group activity recognition, where social groups must be recognized with their activities and group members. Existing methods tackle this task by leveraging region features of individuals following existing group activity recognition methods. However, the effectiveness of region features is susceptible to person localization and variable semantics of individual actions. To overcome these issues, we propose leveraging attention modules in transformers to generate social group features. In this method, multiple embeddings are used to aggregate features for a social group, each of which is assigned to a group member without duplication. Due to this non-duplicated assignment, the number of embeddings must be significant to avoid missing group members and thus renders attention in transformers ineffective. To find optimal attention designs with a large number of embeddings, we explore several design choices of queries for feature aggregation and self-attention modules in transformer decoders. Extensive experimental results show that the proposed method achieves state-of-the-art performance and verify that the proposed attention designs are highly effective on social group activity recognition.
[ "['Masato Tamura']" ]
null
null
2404.09967
null
null
http://arxiv.org/pdf/2404.09967v2
2024-05-24T16:29:38Z
2024-04-15T17:45:36Z
Ctrl-Adapter: An Efficient and Versatile Framework for Adapting Diverse Controls to Any Diffusion Model
ControlNets are widely used for adding spatial control to text-to-image diffusion models with different conditions, such as depth maps, scribbles/sketches, and human poses. However, when it comes to controllable video generation, ControlNets cannot be directly integrated into new backbones due to feature space mismatches, and training ControlNets for new backbones can be a significant burden for many users. Furthermore, applying ControlNets independently to different frames cannot effectively maintain object temporal consistency. To address these challenges, we introduce Ctrl-Adapter, an efficient and versatile framework that adds diverse controls to any image/video diffusion model through the adaptation of pretrained ControlNets. Ctrl-Adapter offers strong and diverse capabilities, including image and video control, sparse-frame video control, fine-grained patch-level multi-condition control (via an MoE router), zero-shot adaptation to unseen conditions, and support for a variety of downstream tasks beyond spatial control, including video editing, video style transfer, and text-guided motion control. With six diverse U-Net/DiT-based image/video diffusion models (SDXL, PixArt-$\alpha$, I2VGen-XL, SVD, Latte, Hotshot-XL), Ctrl-Adapter matches the performance of pretrained ControlNets on COCO and achieves the state-of-the-art on DAVIS 2017 with significantly lower computation (< 10 GPU hours).
[ "['Han Lin' 'Jaemin Cho' 'Abhay Zala' 'Mohit Bansal']" ]
null
null
2404.09995
null
null
http://arxiv.org/pdf/2404.09995v1
2024-04-15T17:59:57Z
2024-04-15T17:59:57Z
Taming Latent Diffusion Model for Neural Radiance Field Inpainting
Neural Radiance Field (NeRF) is a representation for 3D reconstruction from multi-view images. Despite some recent work showing preliminary success in editing a reconstructed NeRF with diffusion prior, they remain struggling to synthesize reasonable geometry in completely uncovered regions. One major reason is the high diversity of synthetic contents from the diffusion model, which hinders the radiance field from converging to a crisp and deterministic geometry. Moreover, applying latent diffusion models on real data often yields a textural shift incoherent to the image condition due to auto-encoding errors. These two problems are further reinforced with the use of pixel-distance losses. To address these issues, we propose tempering the diffusion model's stochasticity with per-scene customization and mitigating the textural shift with masked adversarial training. During the analyses, we also found the commonly used pixel and perceptual losses are harmful in the NeRF inpainting task. Through rigorous experiments, our framework yields state-of-the-art NeRF inpainting results on various real-world scenes. Project page: https://hubert0527.github.io/MALD-NeRF
[ "['Chieh Hubert Lin' 'Changil Kim' 'Jia-Bin Huang' 'Qinbo Li' 'Chih-Yao Ma'\n 'Johannes Kopf' 'Ming-Hsuan Yang' 'Hung-Yu Tseng']" ]
null
null
2404.10003
null
null
http://arxiv.org/pdf/2404.10003v1
2024-04-05T17:13:51Z
2024-04-05T17:13:51Z
Lightweight Geometric Deep Learning for Molecular Modelling in Catalyst Discovery
New technology for energy storage is necessary for the large-scale adoption of renewable energy sources like wind and solar. The ability to discover suitable catalysts is crucial for making energy storage more cost-effective and scalable. The Open Catalyst Project aims to apply advances in graph neural networks (GNNs) to accelerate progress in catalyst discovery, replacing Density Functional Theory-based (DFT) approaches that are computationally burdensome. Current approaches involve scaling GNNs to over 1 billion parameters, pushing the problem out of reach for the vast majority of machine learning practitioners around the world. This study aims to evaluate the performance and insights gained from using more lightweight approaches for this task that are more approachable for smaller teams, to encourage participation from individuals from diverse backgrounds. By implementing robust design patterns like geometric and symmetric message passing, we were able to train a GNN model that reached an MAE of 0.0748 in predicting the per-atom forces of adsorbate-surface interactions, rivaling established model architectures like SchNet and DimeNet++ while using only a fraction of the trainable parameters.
[ "['Patrick Geitner']" ]
null
null
2404.10004
null
null
http://arxiv.org/pdf/2404.10004v1
2024-04-10T02:25:27Z
2024-04-10T02:25:27Z
A Strategy Transfer and Decision Support Approach for Epidemic Control in Experience Shortage Scenarios
Epidemic outbreaks can cause critical health concerns and severe global economic crises. For countries or regions with new infectious disease outbreaks, it is essential to generate preventive strategies by learning lessons from others with similar risk profiles. A Strategy Transfer and Decision Support Approach (STDSA) is proposed based on the profile similarity evaluation. There are four steps in this method: (1) The similarity evaluation indicators are determined from three dimensions, i.e., the Basis of National Epidemic Prevention & Control, Social Resilience, and Infection Situation. (2) The data related to the indicators are collected and preprocessed. (3) The first round of screening on the preprocessed dataset is conducted through an improved collaborative filtering algorithm to calculate the preliminary similarity result from the perspective of the infection situation. (4) Finally, the K-Means model is used for the second round of screening to obtain the final similarity values. The approach will be applied to decision-making support in the context of COVID-19. Our results demonstrate that the recommendations generated by the STDSA model are more accurate and aligned better with the actual situation than those produced by pure K-means models. This study will provide new insights into preventing and controlling epidemics in regions that lack experience.
[ "['X. Xiao' 'P. Chen' 'X. Cao' 'K. Liu' 'L. Deng' 'D. Zhao' 'Z. Chen'\n 'Q. Deng' 'F. Yu' 'H. Zhang']" ]
null
null
2404.10010
null
null
http://arxiv.org/pdf/2404.10010v1
2024-04-12T05:51:28Z
2024-04-12T05:51:28Z
Kinematics Modeling of Peroxy Free Radicals: A Deep Reinforcement Learning Approach
Tropospheric ozone, known as a concerning air pollutant, has been associated with health issues including asthma, bronchitis, and impaired lung function. The rates at which peroxy radicals react with NO play a critical role in the overall formation and depletion of tropospheric ozone. However, obtaining comprehensive kinetic data for these reactions remains challenging. Traditional approaches to determine rate constants are costly and technically intricate. Fortunately, the emergence of machine learning-based models offers a less resource and time-intensive alternative for acquiring kinetics information. In this study, we leveraged deep reinforcement learning to predict ranges of rate constants ($k$) with exceptional accuracy, achieving a testing set accuracy of 100%. To analyze reactivity trends based on the molecular structure of peroxy radicals, we employed 51 global descriptors as input parameters. These descriptors were derived from optimized minimum energy geometries of peroxy radicals using the quantum composite G3B3 method. Through the application of Integrated Gradients (IGs), we gained valuable insights into the significance of the various descriptors in relation to reaction rates. We successfully validated and contextualized our findings by conducting cross-comparisons with established trends in the existing literature. These results establish a solid foundation for pioneering advancements in chemistry, where computer analysis serves as an inspirational source driving innovation.
[ "['Subhadarsi Nayak' 'Hrithwik Shalu' 'Joseph Stember']" ]
null
null
2404.10017
null
null
http://arxiv.org/pdf/2404.10017v1
2024-04-14T15:11:27Z
2024-04-14T15:11:27Z
Model-based Offline Quantum Reinforcement Learning
This paper presents the first algorithm for model-based offline quantum reinforcement learning and demonstrates its functionality on the cart-pole benchmark. The model and the policy to be optimized are each implemented as variational quantum circuits. The model is trained by gradient descent to fit a pre-recorded data set. The policy is optimized with a gradient-free optimization scheme using the return estimate given by the model as the fitness function. This model-based approach allows, in principle, full realization on a quantum computer during the optimization phase and gives hope that a quantum advantage can be achieved as soon as sufficiently powerful quantum computers are available.
[ "['Simon Eisenmann' 'Daniel Hein' 'Steffen Udluft' 'Thomas A. Runkler']" ]
null
null
2404.10019
null
null
http://arxiv.org/pdf/2404.10019v1
2024-04-14T20:52:19Z
2024-04-14T20:52:19Z
Can AI Understand Our Universe? Test of Fine-Tuning GPT by Astrophysical Data
ChatGPT has been the most talked-about concept in recent months, captivating both professionals and the general public alike, and has sparked discussions about the changes that artificial intelligence (AI) will bring to the world. As physicists and astrophysicists, we are curious whether scientific data can be correctly analyzed by large language models (LLMs) and yield accurate physics. In this article, we fine-tune the generative pre-trained transformer (GPT) model on astronomical data from the observations of galaxies, quasars, stars, gamma-ray bursts (GRBs), and the simulations of black holes (BHs). The fine-tuned model demonstrates its capability to classify astrophysical phenomena, distinguish between two types of GRBs, deduce the redshift of quasars, and estimate BH parameters. We regard this as a successful test, marking the LLM's proven efficacy in scientific research. With the ever-growing volume of multidisciplinary data and the advancement of AI technology, we look forward to the emergence of a more fundamental and comprehensive understanding of our universe. This article also shares some interesting thoughts on data collection and AI design. Using the approach of understanding the universe - looking outward at data and inward for fundamental building blocks - as a guideline, we propose a method of series expansion for AI, suggesting ways to train and control AI that is smarter than humans.
[ "['Yu Wang' 'Shu-Rui Zhang' 'Aidin Momtaz' 'Rahim Moradi'\n 'Fatemeh Rastegarnia' 'Narek Sahakyan' 'Soroush Shakeri' 'Liang Li']" ]
null
null
2404.10024
null
null
http://arxiv.org/pdf/2404.10024v1
2024-04-15T06:38:21Z
2024-04-15T06:38:21Z
ClimODE: Climate and Weather Forecasting with Physics-informed Neural ODEs
Climate and weather prediction traditionally relies on complex numerical simulations of atmospheric physics. Deep learning approaches, such as transformers, have recently challenged the simulation paradigm with complex network forecasts. However, they often act as data-driven black-box models that neglect the underlying physics and lack uncertainty quantification. We address these limitations with ClimODE, a spatiotemporal continuous-time process that implements a key principle of advection from statistical mechanics, namely, weather changes due to a spatial movement of quantities over time. ClimODE models precise weather evolution with value-conserving dynamics, learning global weather transport as a neural flow, which also enables estimating the uncertainty in predictions. Our approach outperforms existing data-driven methods in global and regional forecasting with an order of magnitude smaller parameterization, establishing a new state of the art.
[ "['Yogesh Verma' 'Markus Heinonen' 'Vikas Garg']" ]
null
null
2404.10026
null
null
http://arxiv.org/pdf/2404.10026v1
2024-04-15T09:07:19Z
2024-04-15T09:07:19Z
Distributed Federated Learning-Based Deep Learning Model for Privacy MRI Brain Tumor Detection
Distributed training can facilitate the processing of large medical image datasets, and improve the accuracy and efficiency of disease diagnosis while protecting patient privacy, which is crucial for achieving efficient medical image analysis and accelerating medical research progress. This paper presents an innovative approach to medical image classification, leveraging Federated Learning (FL) to address the dual challenges of data privacy and efficient disease diagnosis. Traditional Centralized Machine Learning models, despite their widespread use in medical imaging for tasks such as disease diagnosis, raise significant privacy concerns due to the sensitive nature of patient data. As an alternative, FL emerges as a promising solution by allowing the training of a collective global model across local clients without centralizing the data, thus preserving privacy. Focusing on the application of FL in Magnetic Resonance Imaging (MRI) brain tumor detection, this study demonstrates the effectiveness of the Federated Learning framework coupled with EfficientNet-B0 and the FedAvg algorithm in enhancing both privacy and diagnostic accuracy. Through a meticulous selection of preprocessing methods, algorithms, and hyperparameters, and a comparative analysis of various Convolutional Neural Network (CNN) architectures, the research uncovers optimal strategies for image classification. The experimental results reveal that EfficientNet-B0 outperforms other models like ResNet in handling data heterogeneity and achieving higher accuracy and lower loss, highlighting the potential of FL in overcoming the limitations of traditional models. The study underscores the significance of addressing data heterogeneity and proposes further research directions for broadening the applicability of FL in medical image analysis.
[ "['Lisang Zhou' 'Meng Wang' 'Ning Zhou']" ]
null
null
2404.10029
null
null
http://arxiv.org/pdf/2404.10029v1
2024-04-15T12:32:20Z
2024-04-15T12:32:20Z
Federated Learning on Riemannian Manifolds with Differential Privacy
In recent years, federated learning (FL) has emerged as a prominent paradigm in distributed machine learning. Despite the partial safeguarding of agents' information within FL systems, a malicious adversary can potentially infer sensitive information through various means. In this paper, we propose a generic private FL framework defined on Riemannian manifolds (PriRFed) based on the differential privacy (DP) technique. We analyze the privacy guarantee while establishing the convergence properties. To the best of our knowledge, this is the first federated learning framework on Riemannian manifold with a privacy guarantee and convergence results. Numerical simulations are performed on synthetic and real-world datasets to showcase the efficacy of the proposed PriRFed approach.
[ "['Zhenwei Huang' 'Wen Huang' 'Pratik Jawanpuria' 'Bamdev Mishra']" ]
null
null
2404.10030
null
null
http://arxiv.org/pdf/2404.10030v1
2024-04-15T13:34:27Z
2024-04-15T13:34:27Z
Hyperspectral Reconstruction of Skin Through Fusion of Scattering Transform Features
Hyperspectral imagery (HSI) is an established technique with an array of applications, but its use is limited due to both practical and technical issues associated with spectral devices. The goal of the ICASSP 2024 'Hyper-Skin' Challenge is to extract skin HSI from matching RGB images and an infrared band. To address this problem we propose a model using features of the scattering transform - a type of convolutional neural network with predefined filters. Our model matches and inverts those features, rather than the pixel values, reducing the complexity of matching while grouping similar features together, resulting in an improved learning process.
[ "['Wojciech Czaja' 'Jeremiah Emidih' 'Brandon Kolstoe' 'Richard G. Spencer']" ]
null
null
2404.10031
null
null
http://arxiv.org/pdf/2404.10031v1
2024-04-15T13:51:05Z
2024-04-15T13:51:05Z
Emergent Language Symbolic Autoencoder (ELSA) with Weak Supervision to Model Hierarchical Brain Networks
Brain networks display a hierarchical organization, a complexity that poses a challenge for existing deep learning models, often structured as flat classifiers, leading to difficulties in interpretability and the 'black box' issue. To bridge this gap, we propose a novel architecture: a symbolic autoencoder informed by weak supervision and an Emergent Language (EL) framework. This model moves beyond traditional flat classifiers by producing hierarchical clusters and corresponding imagery, subsequently represented through symbolic sentences to improve the clinical interpretability of hierarchically organized data such as intrinsic brain networks, which can be characterized using resting-state fMRI images. Our innovation includes a generalized hierarchical loss function designed to ensure that both sentences and images accurately reflect the hierarchical structure of functional brain networks. This enables us to model functional brain networks from a broader perspective down to more granular details. Furthermore, we introduce a quantitative method to assess the hierarchical consistency of these symbolic representations. Our qualitative analyses show that our model successfully generates hierarchically organized, clinically interpretable images, a finding supported by our quantitative evaluations. We find that our best performing loss function leads to a hierarchical consistency of over 97% when identifying images corresponding to brain networks. This approach not only advances the interpretability of deep learning models in neuroimaging analysis but also represents a significant step towards modeling the intricate hierarchical nature of brain networks.
[ "['Ammar Ahmed Pallikonda Latheef' 'Alberto Santamaria-Pang'\n 'Craig K Jones' 'Haris I Sair']" ]
null
null
2404.10032
null
null
http://arxiv.org/pdf/2404.10032v1
2024-04-15T16:37:44Z
2024-04-15T16:37:44Z
Detecting AI Generated Text Based on NLP and Machine Learning Approaches
Recent advances in natural language processing (NLP) may enable artificial intelligence (AI) models to generate writing that is identical to human written form in the future. This might have profound ethical, legal, and social repercussions. This study aims to address this problem by offering an accurate AI detector model that can differentiate between electronically produced text and human-written text. Our approach includes machine learning methods such as an XGB classifier, an SVM, and BERT-based deep learning models. Furthermore, our results show that BERT performs better than previous models in identifying information generated by AI from information provided by humans. We provide a comprehensive analysis of the current state of AI-generated text identification in our assessment of pertinent studies. Our testing yielded positive findings, showing that our strategy is successful, with BERT emerging as the most probable answer. We analyze the research's societal implications, highlighting the possible advantages for various industries while addressing sustainability issues pertaining to morality and the environment. The XGB classifier and SVM give 0.84 and 0.81 accuracy in this article, respectively. The greatest accuracy in this research is provided by the BERT model, which achieves an accuracy of 0.93.
[ "['Nuzhat Prova']" ]
null
null
2404.10034
null
null
http://arxiv.org/pdf/2404.10034v1
2024-04-15T17:25:21Z
2024-04-15T17:25:21Z
Realistic Model Selection for Weakly Supervised Object Localization
Weakly Supervised Object Localization (WSOL) allows for training deep learning models for classification and localization, using only global class-level labels. The lack of bounding box (bbox) supervision during training represents a considerable challenge for hyper-parameter search and model selection. Earlier WSOL works implicitly observed localization performance over a test set which leads to biased performance evaluation. More recently, a better WSOL protocol has been proposed, where a validation set with bbox annotations is held out for model selection. Although it does not rely on the test set, this protocol is unrealistic since bboxes are not available in real-world applications, and when available, it is better to use them directly to fit model weights. Our initial empirical analysis shows that the localization performance of a model declines significantly when using only image-class labels for model selection (compared to using bounding-box annotations). This suggests that adding bounding-box labels is preferable for selecting the best model for localization. In this paper, we introduce a new WSOL validation protocol that provides a localization signal without the need for manual bbox annotations. In particular, we leverage noisy pseudo boxes from an off-the-shelf ROI proposal generator such as Selective-Search, CLIP, and RPN pretrained models for model selection. Our experimental results with several WSOL methods on ILSVRC and CUB-200-2011 datasets show that our noisy boxes allow selecting models with performance close to those selected using ground truth boxes, and better than models selected using only image-class labels.
[ "['Shakeeb Murtaza' 'Soufiane Belharbi' 'Marco Pedersoli' 'Eric Granger']" ]
null
null
2404.10044
null
null
http://arxiv.org/pdf/2404.10044v3
2024-06-18T18:45:36Z
2024-04-15T18:00:03Z
Variational quantum simulation: a case study for understanding warm starts
The barren plateau phenomenon, characterized by loss gradients that vanish exponentially with system size, poses a challenge to scaling variational quantum algorithms. Here we explore the potential of warm starts, whereby one initializes closer to a solution in the hope of enjoying larger loss variances. Focusing on an iterative variational method for learning shorter-depth circuits for quantum real and imaginary time evolution, we conduct a case study to elucidate the potential and limitations of warm starts. We start by proving that the iterative variational algorithm will exhibit substantial (at worst vanishing polynomially in system size) gradients in a small region around the initializations at each time-step. Convexity guarantees for these regions are then established, suggesting trainability for polynomial size time-steps. However, our study highlights scenarios where a good minimum shifts outside the region with trainability guarantees. Our analysis leaves open the question whether such minima jumps necessitate optimization across barren plateau landscapes or whether there exist gradient flows, i.e., fertile valleys away from the plateau with substantial gradients, that allow for training.
[ "['Ricard Puig' 'Marc Drudis' 'Supanut Thanasilp' 'Zoë Holmes']" ]
null
null
2404.10091
null
null
http://arxiv.org/pdf/2404.10091v1
2024-04-15T18:58:39Z
2024-04-15T18:58:39Z
Empowering Federated Learning with Implicit Gossiping: Mitigating Connection Unreliability Amidst Unknown and Arbitrary Dynamics
Federated learning is a popular distributed learning approach for training a machine learning model without disclosing raw data. It consists of a parameter server and a possibly large collection of clients (e.g., in cross-device federated learning) that may operate in congested and changing environments. In this paper, we study federated learning in the presence of stochastic and dynamic communication failures wherein the uplink between the parameter server and client $i$ is on with unknown probability $p_i^t$ in round $t$. Furthermore, we allow the dynamics of $p_i^t$ to be arbitrary. We first demonstrate that when the $p_i^t$'s vary across clients, the most widely adopted federated learning algorithm, Federated Average (FedAvg), experiences significant bias. To address this observation, we propose Federated Postponed Broadcast (FedPBC), a simple variant of FedAvg. FedPBC differs from FedAvg in that the parameter server postpones broadcasting the global model till the end of each round. Despite uplink failures, we show that FedPBC converges to a stationary point of the original non-convex objective. On the technical front, postponing the global model broadcasts enables implicit gossiping among the clients with active links in round $t$. Despite the time-varying nature of $p_i^t$, we can bound the perturbation of the global model dynamics using techniques to control gossip-type information mixing errors. Extensive experiments have been conducted on real-world datasets over diversified unreliable uplink patterns to corroborate our analysis.
[ "['Ming Xiang' 'Stratis Ioannidis' 'Edmund Yeh' 'Carlee Joe-Wong' 'Lili Su']" ]
null
null
2404.10094
null
null
http://arxiv.org/pdf/2404.10094v1
2024-04-15T19:01:20Z
2024-04-15T19:01:20Z
Towards DNA-Encoded Library Generation with GFlowNets
DNA-encoded libraries (DELs) are a powerful approach for rapidly screening large numbers of diverse compounds. One of the key challenges in using DELs is library design, which involves choosing the building blocks that will be combinatorially combined to produce the final library. In this paper we consider the task of protein-protein interaction (PPI) biased DEL design. To this end, we evaluate several machine learning algorithms on the PPI modulation task and use them as a reward for the proposed GFlowNet-based generative approach. We additionally investigate the possibility of using structural information about building blocks to design a hierarchical action space for the GFlowNet. The observed results indicate that GFlowNets are a promising approach for generating diverse combinatorial library candidates.
[ "['Michał Koziarski' 'Mohammed Abukalam' 'Vedant Shah' 'Louis Vaillancourt'\n 'Doris Alexandra Schuetz' 'Moksh Jain' 'Almer van der Sloot'\n 'Mathieu Bourgey' 'Anne Marinier' 'Yoshua Bengio']" ]
null
null
2404.10097
null
null
http://arxiv.org/pdf/2404.10097v1
2024-04-15T19:08:48Z
2024-04-15T19:08:48Z
LegalPro-BERT: Classification of Legal Provisions by fine-tuning BERT Large Language Model
A contract is a type of legal document commonly used in organizations. Contract review is an integral and repetitive process to avoid business risk and liability. Contract analysis requires the identification and classification of key provisions and paragraphs within an agreement. Identification and validation of contract clauses can be a time-consuming and challenging task demanding the services of trained and expensive lawyers, paralegals or other legal assistants. Classification of legal provisions in contracts using artificial intelligence and natural language processing is complex due to the requirement of domain-specialized legal language for model training and the scarcity of sufficient labeled data in the legal domain. Using general-purpose models is not effective in this context due to the use of specialized legal vocabulary in contracts which may not be recognized by a general model. To address this problem, we propose the use of a pre-trained large language model which is subsequently calibrated on legal taxonomy. We propose LegalPro-BERT, a BERT transformer architecture model that we fine-tune to efficiently handle classification task for legal provisions. We conducted experiments to measure and compare metrics with current benchmark results. We found that LegalPro-BERT outperforms the previous benchmark used for comparison in this research.
[ "['Amit Tewari']" ]
null
null
2404.10099
null
null
http://arxiv.org/pdf/2404.10099v1
2024-04-15T19:15:32Z
2024-04-15T19:15:32Z
Feature selection in linear SVMs via hard cardinality constraint: a scalable SDP decomposition approach
In this paper, we study the embedded feature selection problem in linear Support Vector Machines (SVMs), in which a cardinality constraint is employed, leading to a fully explainable selection model. The problem is NP-hard due to the presence of the cardinality constraint, even though the original linear SVM amounts to a problem solvable in polynomial time. To handle the hard problem, we first introduce two mixed-integer formulations for which novel SDP relaxations are proposed. Exploiting the sparsity pattern of the relaxations, we decompose the problems and obtain equivalent relaxations in a much smaller cone, making the conic approaches scalable. To make the best usage of the decomposed relaxations, we propose heuristics using the information of its optimal solution. Moreover, an exact procedure is proposed by solving a sequence of mixed-integer decomposed SDPs. Numerical results on classical benchmarking datasets are reported, showing the efficiency and effectiveness of our approach.
[ "['Immanuel Bomze' \"Federico D'Onofrio\" 'Laura Palagi' 'Bo Peng']" ]
null
null
2404.10108
null
null
http://arxiv.org/pdf/2404.10108v2
2024-04-22T17:53:08Z
2024-04-15T19:43:16Z
GeoAI Reproducibility and Replicability: a computational and spatial perspective
GeoAI has emerged as an exciting interdisciplinary research area that combines spatial theories and data with cutting-edge AI models to address geospatial problems in a novel, data-driven manner. While GeoAI research has flourished in the GIScience literature, its reproducibility and replicability (R&R), fundamental principles that determine the reusability, reliability, and scientific rigor of research findings, have rarely been discussed. This paper aims to provide an in-depth analysis of this topic from both computational and spatial perspectives. We first categorize the major goals for reproducing GeoAI research, namely, validation (repeatability), learning and adapting the method for solving a similar or new problem (reproducibility), and examining the generalizability of the research findings (replicability). Each of these goals requires different levels of understanding of GeoAI, as well as different methods to ensure its success. We then discuss the factors that may cause the lack of R&R in GeoAI research, with an emphasis on (1) the selection and use of training data; (2) the uncertainty that resides in the GeoAI model design, training, deployment, and inference processes; and more importantly (3) the inherent spatial heterogeneity of geospatial data and processes. We use a deep learning-based image analysis task as an example to demonstrate the results' uncertainty and spatial variance caused by different factors. The findings reiterate the importance of knowledge sharing, as well as the generation of a "replicability map" that incorporates spatial autocorrelation and spatial heterogeneity into consideration in quantifying the spatial replicability of GeoAI research.
[ "['Wenwen Li' 'Chia-Yu Hsu' 'Sizhe Wang' 'Peter Kedron']" ]
null
null
2404.10110
null
null
http://arxiv.org/pdf/2404.10110v1
2024-04-15T19:45:07Z
2024-04-15T19:45:07Z
Communication-Efficient Hybrid Federated Learning for E-health with Horizontal and Vertical Data Partitioning
E-health allows smart devices and medical institutions to collaboratively collect patients' data, which is trained by Artificial Intelligence (AI) technologies to help doctors make diagnosis. By allowing multiple devices to train models collaboratively, federated learning is a promising solution to address the communication and privacy issues in e-health. However, applying federated learning in e-health faces many challenges. First, medical data is both horizontally and vertically partitioned. Since single Horizontal Federated Learning (HFL) or Vertical Federated Learning (VFL) techniques cannot deal with both types of data partitioning, directly applying them may consume excessive communication cost due to transmitting a part of raw data when requiring high modeling accuracy. Second, a naive combination of HFL and VFL has limitations including low training efficiency, unsound convergence analysis, and lack of parameter tuning strategies. In this paper, we provide a thorough study on an effective integration of HFL and VFL, to achieve communication efficiency and overcome the above limitations when data is both horizontally and vertically partitioned. Specifically, we propose a hybrid federated learning framework with one intermediate result exchange and two aggregation phases. Based on this framework, we develop a Hybrid Stochastic Gradient Descent (HSGD) algorithm to train models. Then, we theoretically analyze the convergence upper bound of the proposed algorithm. Using the convergence results, we design adaptive strategies to adjust the training parameters and shrink the size of transmitted data. Experimental results validate that the proposed HSGD algorithm can achieve the desired accuracy while reducing communication cost, and they also verify the effectiveness of the adaptive strategies.
[ "['Chong Yu' 'Shuaiqi Shen' 'Shiqiang Wang' 'Kuan Zhang' 'Hai Zhao']" ]
null
null
2404.10115
null
null
http://arxiv.org/pdf/2404.10115v1
2024-04-15T20:07:44Z
2024-04-15T20:07:44Z
Multiple-Input Fourier Neural Operator (MIFNO) for source-dependent 3D elastodynamics
Numerical simulations are essential tools to evaluate the solution of the wave equation in complex settings, such as three-dimensional (3D) domains with heterogeneous properties. However, their application is limited by high computational costs, and existing surrogate models lack the flexibility of numerical solvers. This work introduces the Multiple-Input Fourier Neural Operator (MIFNO) to deal with structured 3D fields representing material properties as well as vectors describing the source characteristics. The MIFNO is applied to the problem of elastic wave propagation in the Earth's crust. It is trained on the HEMEW^S-3D database containing 30000 earthquake simulations in different heterogeneous domains with random source positions and orientations. Outputs are time- and space-dependent surface wavefields. The MIFNO predictions are assessed as good to excellent based on Goodness-Of-Fit (GOF) criteria. Wave arrival times and wave fronts' propagation are very accurate since 80% of the predictions have an excellent phase GOF. The fluctuation amplitudes are good for 87% of the predictions. The envelope score is hindered by the small-scale fluctuations that are challenging to capture due to the complex physical phenomena associated with high-frequency features. Nevertheless, the MIFNO can generalize to sources located outside the training domain and it shows good generalization ability to a real complex overthrust geology. When focusing on a region of interest, transfer learning improves the accuracy with limited additional costs, since GOF scores improved by more than 1 GOF unit with only 500 additional specific samples. The MIFNO is the first surrogate model offering the flexibility of an earthquake simulator with varying sources and material properties. Its good accuracy and massive speed-up offer new perspectives to replace numerical simulations in many-query problems.
[ "['Fanny Lehmann' 'Filippo Gatti' 'Didier Clouteau']" ]
null
null
2404.10122
null
null
http://arxiv.org/pdf/2404.10122v1
2024-04-15T20:19:18Z
2024-04-15T20:19:18Z
Online Estimation via Offline Estimation: An Information-Theoretic Framework
The classical theory of statistical estimation aims to estimate a parameter of interest under data generated from a fixed design ("offline estimation"), while the contemporary theory of online learning provides algorithms for estimation under adaptively chosen covariates ("online estimation"). Motivated by connections between estimation and interactive decision making, we ask: is it possible to convert offline estimation algorithms into online estimation algorithms in a black-box fashion? We investigate this question from an information-theoretic perspective by introducing a new framework, Oracle-Efficient Online Estimation (OEOE), where the learner can only interact with the data stream indirectly through a sequence of offline estimators produced by a black-box algorithm operating on the stream. Our main results settle the statistical and computational complexity of online estimation in this framework. $\bullet$ Statistical complexity. We show that information-theoretically, there exist algorithms that achieve near-optimal online estimation error via black-box offline estimation oracles, and give a nearly-tight characterization for minimax rates in the OEOE framework. $\bullet$ Computational complexity. We show that the guarantees above cannot be achieved in a computationally efficient fashion in general, but give a refined characterization for the special case of conditional density estimation: computationally efficient online estimation via black-box offline estimation is possible whenever it is possible via unrestricted algorithms. Finally, we apply our results to give offline oracle-efficient algorithms for interactive decision making.
[ "['Dylan J. Foster' 'Yanjun Han' 'Jian Qian' 'Alexander Rakhlin']" ]
null
null
2404.10124
null
null
http://arxiv.org/pdf/2404.10124v1
2024-04-15T20:21:05Z
2024-04-15T20:21:05Z
Epistemic Uncertainty Quantification For Pre-trained Neural Network
Epistemic uncertainty quantification (UQ) identifies where models lack knowledge. Traditional UQ methods, often based on Bayesian neural networks, are not suitable for pre-trained non-Bayesian models. Our study addresses quantifying epistemic uncertainty for any pre-trained model, which does not need the original training data or model modifications and can ensure broad applicability regardless of network architectures or training techniques. Specifically, we propose a gradient-based approach to assess epistemic uncertainty, analyzing the gradients of outputs relative to model parameters, and thereby indicating necessary model adjustments to accurately represent the inputs. We first explore theoretical guarantees of gradient-based methods for epistemic UQ, questioning the view that this uncertainty is only calculable through differences between multiple models. We further improve gradient-driven UQ by using class-specific weights for integrating gradients and emphasizing distinct contributions from neural network layers. Additionally, we enhance UQ accuracy by combining gradient and perturbation methods to refine the gradients. We evaluate our approach on out-of-distribution detection, uncertainty calibration, and active learning, demonstrating its superiority over current state-of-the-art UQ methods for pre-trained models.
[ "['Hanjing Wang' 'Qiang Ji']" ]
null
null
2404.10135
null
null
http://arxiv.org/pdf/2404.10135v2
2024-04-19T22:44:29Z
2024-04-15T21:01:31Z
Using Long Short-term Memory (LSTM) to merge precipitation data over mountainous area in Sierra Nevada
Obtaining reliable precipitation estimation with high resolutions in time and space is of great importance to hydrological studies. However, accurately estimating precipitation is a challenging task over high mountainous complex terrain. The three widely used precipitation measurement approaches, namely rainfall gauges, precipitation radars, and satellite-based precipitation sensors, have their own pros and cons in producing reliable precipitation products over complex areas. One way to decrease the detection error probability and improve data reliability is precipitation data merging. With the rapid advancements in computational capabilities and the escalating volume and diversity of earth observational data, Deep Learning (DL) models have gained considerable attention in geoscience. In this study, a deep learning technique, namely Long Short-term Memory (LSTM), was employed to merge a radar-based precipitation product and the satellite-based Global Precipitation Measurement (GPM) Integrated Multi-Satellite Retrievals for GPM (IMERG) precipitation product at hourly scale. The merged results are compared with the widely used reanalysis precipitation product, Multi-Radar Multi-Sensor (MRMS), and assessed against gauge observational data from the California Data Exchange Center (CDEC). The findings indicated that the LSTM-based merged precipitation notably underestimated gauge observations and, at times, failed to provide meaningful estimates, showing predominantly near-zero values. Relying solely on individual Quantitative Precipitation Estimates (QPEs) without additional meteorological input proved insufficient for generating reliable merged QPE. However, the merged results effectively captured the temporal trends of the observations, outperforming MRMS in this aspect. This suggested that incorporating bias correction techniques could potentially enhance the accuracy of the merged product.
[ "['Yihan Wang' 'Lujun Zhang']" ]
null
null
2404.10136
null
null
http://arxiv.org/pdf/2404.10136v1
2024-04-15T21:02:48Z
2024-04-15T21:02:48Z
Language Model Cascades: Token-level uncertainty and beyond
Recent advances in language models (LMs) have led to significant improvements in quality on complex NLP tasks, but at the expense of increased inference costs. Cascading offers a simple strategy to achieve more favorable cost-quality tradeoffs: here, a small model is invoked for most "easy" instances, while a few "hard" instances are deferred to the large model. While the principles underpinning cascading are well-studied for classification tasks - with deferral based on predicted class uncertainty favored theoretically and practically - a similar understanding is lacking for generative LM tasks. In this work, we initiate a systematic study of deferral rules for LM cascades. We begin by examining the natural extension of predicted class uncertainty to generative LM tasks, namely, the predicted sequence uncertainty. We show that this measure suffers from the length bias problem, either over- or under-emphasizing outputs based on their lengths. This is because LMs produce a sequence of uncertainty values, one for each output token; and moreover, the number of output tokens is variable across examples. To mitigate this issue, we propose to exploit the richer token-level uncertainty information implicit in generative LMs. We argue that naive predicted sequence uncertainty corresponds to a simple aggregation of these uncertainties. By contrast, we show that incorporating token-level uncertainty through learned post-hoc deferral rules can significantly outperform such simple aggregation strategies, via experiments on a range of natural language benchmarks with FLAN-T5 models. We further show that incorporating embeddings from the smaller model and intermediate layers of the larger model can give an additional boost in the overall cost-quality tradeoff.
[ "['Neha Gupta' 'Harikrishna Narasimhan' 'Wittawat Jitkrittum'\n 'Ankit Singh Rawat' 'Aditya Krishna Menon' 'Sanjiv Kumar']" ]
null
null
2404.10148
null
null
http://arxiv.org/pdf/2404.10148v1
2024-04-15T21:35:25Z
2024-04-15T21:35:25Z
Node Similarities under Random Projections: Limits and Pathological Cases
Random Projections have been widely used to generate embeddings for various graph tasks due to their computational efficiency. The majority of applications have been justified through the Johnson-Lindenstrauss Lemma. In this paper, we take a step further and investigate how well dot product and cosine similarity are preserved by Random Projections. Our analysis provides new theoretical results, identifies pathological cases, and tests them with numerical experiments. We find that, for nodes of lower or higher degrees, the method produces especially unreliable embeddings for the dot product, regardless of whether the adjacency matrix or its normalized version, the transition matrix, is used. With respect to the statistical noise introduced by Random Projections, we show that cosine similarity produces remarkably more precise approximations.
[ "['Tvrtko Tadić' 'Cassiano Becker' 'Jennifer Neville']" ]
null
null
2404.10155
null
null
http://arxiv.org/pdf/2404.10155v1
2024-04-15T22:02:58Z
2024-04-15T22:02:58Z
Quality Assessment of Prompts Used in Code Generation
Large Language Models (LLMs) are gaining popularity among software engineers. A crucial aspect of developing effective code-generation LLMs is to evaluate these models using a robust benchmark. Evaluation benchmarks with quality issues can provide a false sense of performance. In this work, we conduct the first-of-its-kind study of the quality of prompts within benchmarks used to compare the performance of different code generation models. To conduct this study, we analyzed 3,566 prompts from 9 code generation benchmarks to identify quality issues in them. We also investigated whether fixing the identified quality issues in the benchmarks' prompts affects a model's performance. We also studied memorization issues of the evaluation dataset, which can put into question a benchmark's trustworthiness. We found that code generation evaluation benchmarks mainly focused on Python and coding exercises and had very limited contextual dependencies to challenge the model. These datasets and the developers' prompts suffer from quality issues like spelling and grammatical errors, unclear sentences that fail to express developers' intent, and improper documentation style. Fixing all these issues in the benchmarks can lead to better performance for Python code generation, but no significant improvement was observed for Java code generation. We also found evidence that GPT-3.5-Turbo and CodeGen-2.5 models possibly have data contamination issues.
[ "['Mohammed Latif Siddiq' 'Simantika Dristi' 'Joy Saha'\n 'Joanna C. S. Santos']" ]
null
null
2404.10157
null
null
http://arxiv.org/pdf/2404.10157v1
2024-04-15T22:13:35Z
2024-04-15T22:13:35Z
Salient Object-Aware Background Generation using Text-Guided Diffusion Models
Generating background scenes for salient objects plays a crucial role across various domains including creative design and e-commerce, as it enhances the presentation and context of subjects by integrating them into tailored environments. Background generation can be framed as a task of text-conditioned outpainting, where the goal is to extend image content beyond a salient object's boundaries on a blank background. Although popular diffusion models for text-guided inpainting can also be used for outpainting by mask inversion, they are trained to fill in missing parts of an image rather than to place an object into a scene. Consequently, when used for background creation, inpainting models frequently extend the salient object's boundaries and thereby change the object's identity, which is a phenomenon we call "object expansion." This paper introduces a model for adapting inpainting diffusion models to the salient object outpainting task using Stable Diffusion and ControlNet architectures. We present a series of qualitative and quantitative results across models and datasets, including a newly proposed metric to measure object expansion that does not require any human labeling. Compared to Stable Diffusion 2.0 Inpainting, our proposed approach reduces object expansion by 3.6x on average with no degradation in standard visual metrics across multiple datasets.
[ "['Amir Erfan Eshratifar' 'Joao V. B. Soares' 'Kapil Thadani'\n 'Shaunak Mishra' 'Mikhail Kuznetsov' 'Yueh-Ning Ku' 'Paloma de Juan']" ]
null
null
2404.10162
null
null
http://arxiv.org/pdf/2404.10162v1
2024-04-15T22:25:54Z
2024-04-15T22:25:54Z
Optimal Kernel Tuning Parameter Prediction using Deep Sequence Models
GPU kernels have come to the forefront of computing due to their utility in varied fields, from high-performance computing to machine learning. A typical GPU compute kernel is invoked millions, if not billions, of times in a typical application, which makes their performance highly critical. Due to the unknown nature of the optimization surface, an exhaustive search is required to discover the global optimum, which is infeasible due to the possible exponential number of parameter combinations. In this work, we propose a methodology that uses deep sequence-to-sequence models to predict the optimal tuning parameters governing compute kernels. This work considers the prediction of kernel parameters as a sequence-to-sequence translation problem, borrowing models from the Natural Language Processing (NLP) domain. Parameters describing the input, output and weight tensors are considered as the input language to the model that emits the corresponding kernel parameters. In essence, the model translates the problem parameter language to kernel parameter language. The core contributions of this work are: a) proposing that a sequence-to-sequence model can accurately learn the performance dynamics of a GPU compute kernel; b) a novel network architecture that predicts the kernel tuning parameters for GPU kernels; and c) a constrained beam search that incorporates the physical limits of the GPU hardware as well as other expert knowledge, reducing the search space. The proposed algorithm can achieve more than 90% accuracy on various convolutional kernels in MIOpen, the AMD machine learning primitives library. As a result, the proposed technique can reduce the development time and compute resources required to tune unseen input configurations, resulting in shorter development cycles, reduced development costs, and better user experience.
[ "['Khawir Mahmood' 'Jehandad Khan' 'Hammad Afzal']" ]
null
null
2404.10163
null
null
http://arxiv.org/pdf/2404.10163v2
2024-04-21T03:17:23Z
2024-04-15T22:26:27Z
EyeFormer: Predicting Personalized Scanpaths with Transformer-Guided Reinforcement Learning
From a visual perception perspective, modern graphical user interfaces (GUIs) comprise a complex graphics-rich two-dimensional visuospatial arrangement of text, images, and interactive objects such as buttons and menus. While existing models can accurately predict regions and objects that are likely to attract attention ``on average'', so far there is no scanpath model capable of predicting scanpaths for an individual. To close this gap, we introduce EyeFormer, which leverages a Transformer architecture as a policy network to guide a deep reinforcement learning algorithm that controls gaze locations. Our model has the unique capability of producing personalized predictions when given a few user scanpath samples. It can predict full scanpath information, including fixation positions and duration, across individuals and various stimulus types. Additionally, we demonstrate applications in GUI layout optimization driven by our model. Our software and models will be publicly available.
[ "['Yue Jiang' 'Zixin Guo' 'Hamed Rezazadegan Tavakoli' 'Luis A. Leiva'\n 'Antti Oulasvirta']" ]
null
null
2404.10166
null
null
http://arxiv.org/pdf/2404.10166v1
2024-04-15T22:32:50Z
2024-04-15T22:32:50Z
Self-Supervised Learning Featuring Small-Scale Image Dataset for Treatable Retinal Diseases Classification
Automated medical diagnosis through image-based neural networks has increased in popularity and matured over the years. Nevertheless, it is confined by the scarcity of medical images and the expensive labor annotation costs. Self-Supervised Learning (SSL) is a good alternative to Transfer Learning (TL) and is suitable for imbalanced image datasets. In this study, we assess four pretrained SSL models and two TL models in treatable retinal diseases classification using small-scale Optical Coherence Tomography (OCT) training sets ranging from 125 to 4,000 images with balanced or imbalanced distributions. The proposed SSL model achieves the state-of-the-art accuracy of 98.84% using only 4,000 training images. Our results suggest the SSL models provide superior performance under both the balanced and imbalanced training scenarios. The SSL model with the MoCo-v2 scheme has consistently good performance under the imbalanced scenario and, especially, surpasses the other models when the training set is smaller than 500 images.
[ "['Luffina C. Huang' 'Darren J. Chiu' 'Manish Mehta']" ]
null
null
2404.10176
null
null
http://arxiv.org/pdf/2404.10176v1
2024-04-15T23:07:57Z
2024-04-15T23:07:57Z
Multi-objective evolutionary GAN for tabular data synthesis
Synthetic data has a key role to play in data sharing by statistical agencies and other generators of statistical data products. Generative Adversarial Networks (GANs), typically applied to image synthesis, are also a promising method for tabular data synthesis. However, there are unique challenges in tabular data compared to images, e.g., tabular data may contain both continuous and discrete variables and require conditional sampling, and, critically, the data should possess high utility and low disclosure risk (the risk of re-identifying a population unit or learning something new about them), providing an opportunity for multi-objective (MO) optimization. Inspired by MO GANs for images, this paper proposes a smart MO evolutionary conditional tabular GAN (SMOE-CTGAN). This approach models conditional synthetic data by applying conditional vectors in training, and uses concepts from MO optimisation to balance disclosure risk against utility. Our results indicate that SMOE-CTGAN is able to discover synthetic datasets with different risk and utility levels for multiple national census datasets. We also find a sweet spot in the early stage of training where a competitive utility and extremely low risk are achieved, by using an Improvement Score. The full code can be downloaded from https://github.com/HuskyNian/SMO_EGAN_pytorch.
[ "['Nian Ran' 'Bahrul Ilmi Nasution' 'Claire Little' 'Richard Allmendinger'\n 'Mark Elliot']" ]
null
null
2404.10177
null
null
http://arxiv.org/pdf/2404.10177v1
2024-03-20T14:22:12Z
2024-03-20T14:22:12Z
Consistent Diffusion Meets Tweedie: Training Exact Ambient Diffusion Models with Noisy Data
Ambient diffusion is a recently proposed framework for training diffusion models using corrupted data. Both Ambient Diffusion and alternative SURE-based approaches for learning diffusion models from corrupted data resort to approximations which deteriorate performance. We present the first framework for training diffusion models that provably sample from the uncorrupted distribution given only noisy training data, solving an open problem in this space. Our key technical contribution is a method that uses a double application of Tweedie's formula and a consistency loss function that allows us to extend sampling at noise levels below the observed data noise. We also provide further evidence that diffusion models memorize from their training sets by identifying extremely corrupted images that are almost perfectly reconstructed, raising copyright and privacy concerns. Our method for training using corrupted samples can be used to mitigate this problem. We demonstrate this by fine-tuning Stable Diffusion XL to generate samples from a distribution using only noisy samples. Our framework reduces the amount of memorization of the fine-tuning dataset, while maintaining competitive performance.
[ "['Giannis Daras' 'Alexandros G. Dimakis' 'Constantinos Daskalakis']" ]
null
null
2404.10179
null
null
http://arxiv.org/pdf/2404.10179v2
2024-04-17T14:36:27Z
2024-03-13T17:50:32Z
Scaling Instructable Agents Across Many Simulated Worlds
Building embodied AI systems that can follow arbitrary language instructions in any 3D environment is a key challenge for creating general AI. Accomplishing this goal requires learning to ground language in perception and embodied actions, in order to accomplish complex tasks. The Scalable, Instructable, Multiworld Agent (SIMA) project tackles this by training agents to follow free-form instructions across a diverse range of virtual 3D environments, including curated research environments as well as open-ended, commercial video games. Our goal is to develop an instructable agent that can accomplish anything a human can do in any simulated 3D environment. Our approach focuses on language-driven generality while imposing minimal assumptions. Our agents interact with environments in real-time using a generic, human-like interface: the inputs are image observations and language instructions and the outputs are keyboard-and-mouse actions. This general approach is challenging, but it allows agents to ground language across many visually complex and semantically rich environments while also allowing us to readily run agents in new environments. In this paper we describe our motivation and goal, the initial progress we have made, and promising preliminary results on several diverse research environments and a variety of commercial video games.
[ "['SIMA Team' 'Maria Abi Raad' 'Arun Ahuja' 'Catarina Barros'\n 'Frederic Besse' 'Andrew Bolt' 'Adrian Bolton' 'Bethanie Brownfield'\n 'Gavin Buttimore' 'Max Cant' 'Sarah Chakera' 'Stephanie C. Y. Chan'\n 'Jeff Clune' 'Adrian Collister' 'Vikki Copeman' 'Alex Cullum'\n 'Ishita Dasgupta' 'Dario de Cesare' 'Julia Di Trapani' 'Yani Donchev'\n 'Emma Dunleavy' 'Martin Engelcke' 'Ryan Faulkner' 'Frankie Garcia'\n 'Charles Gbadamosi' 'Zhitao Gong' 'Lucy Gonzales' 'Kshitij Gupta'\n 'Karol Gregor' 'Arne Olav Hallingstad' 'Tim Harley' 'Sam Haves'\n 'Felix Hill' 'Ed Hirst' 'Drew A. Hudson' 'Jony Hudson'\n 'Steph Hughes-Fitt' 'Danilo J. Rezende' 'Mimi Jasarevic' 'Laura Kampis'\n 'Rosemary Ke' 'Thomas Keck' 'Junkyung Kim' 'Oscar Knagg'\n 'Kavya Kopparapu' 'Andrew Lampinen' 'Shane Legg' 'Alexander Lerchner'\n 'Marjorie Limont' 'Yulan Liu' 'Maria Loks-Thompson' 'Joseph Marino'\n 'Kathryn Martin Cussons' 'Loic Matthey' 'Siobhan Mcloughlin'\n 'Piermaria Mendolicchio' 'Hamza Merzic' 'Anna Mitenkova'\n 'Alexandre Moufarek' 'Valeria Oliveira' 'Yanko Oliveira'\n 'Hannah Openshaw' 'Renke Pan' 'Aneesh Pappu' 'Alex Platonov'\n 'Ollie Purkiss' 'David Reichert' 'John Reid' 'Pierre Harvey Richemond'\n 'Tyson Roberts' 'Giles Ruscoe' 'Jaume Sanchez Elias' 'Tasha Sandars'\n 'Daniel P. Sawyer' 'Tim Scholtes' 'Guy Simmons' 'Daniel Slater'\n 'Hubert Soyer' 'Heiko Strathmann' 'Peter Stys' 'Allison C. Tam'\n 'Denis Teplyashin' 'Tayfun Terzi' 'Davide Vercelli' 'Bojan Vujatovic'\n 'Marcus Wainwright' 'Jane X. Wang' 'Zhengdong Wang' 'Daan Wierstra'\n 'Duncan Williams' 'Nathaniel Wong' 'Sarah York' 'Nick Young']" ]
null
null
2404.10180
null
null
http://arxiv.org/pdf/2404.10180v2
2024-04-23T13:43:26Z
2024-04-15T23:28:13Z
Deferred NAM: Low-latency Top-K Context Injection via Deferred Context Encoding for Non-Streaming ASR
Contextual biasing enables speech recognizers to transcribe important phrases in the speaker's context, such as contact names, even if they are rare in, or absent from, the training data. Attention-based biasing is a leading approach which allows for full end-to-end cotraining of the recognizer and biasing system and requires no separate inference-time components. Such biasers typically consist of a context encoder; followed by a context filter which narrows down the context to apply, improving per-step inference time; and, finally, context application via cross attention. Though much work has gone into optimizing per-frame performance, the context encoder is at least as important: recognition cannot begin before context encoding ends. Here, we show the lightweight phrase selection pass can be moved before context encoding, resulting in a speedup of up to 16.1 times and enabling biasing to scale to 20K phrases with a maximum pre-decoding delay under 33ms. With the addition of phrase- and wordpiece-level cross-entropy losses, our technique also achieves up to a 37.5% relative WER reduction over the baseline without the losses and lightweight phrase selection pass.
[ "['Zelin Wu' 'Gan Song' 'Christopher Li' 'Pat Rondon' 'Zhong Meng'\n 'Xavier Velez' 'Weiran Wang' 'Diamantino Caseiro' 'Golan Pundak'\n 'Tsendsuren Munkhdalai' 'Angad Chandorkar' 'Rohit Prabhavalkar']" ]
null
null
2404.10188
null
null
http://arxiv.org/pdf/2404.10188v1
2024-04-16T00:04:46Z
2024-04-16T00:04:46Z
Smart Pilot Assignment for IoT in Massive MIMO Systems: A Path Towards Scalable IoT Infrastructure
5G sets the foundation for an era of creativity with its faster speeds, increased data throughput, reduced latency, and enhanced IoT connectivity, all enabled by Massive MIMO (M-MIMO) technology. M-MIMO boosts network efficiency and enhances user experience by employing intelligent user scheduling. This paper presents a user scheduling scheme and pilot assignment strategy designed for IoT devices, emphasizing the mitigation of pilot contamination, a key obstacle to improving spectral efficiency (SE) and system scalability in M-MIMO networks. We utilize a user clustering-based pilot allocation scheme to boost IoT device scalability in M-MIMO systems. Additionally, our smart pilot allocation minimizes interference and enhances SE by treating pilot assignment as a graph coloring problem, optimizing it through integer linear programming (ILP). Recognizing the computational complexity of ILP, we introduce a binary search-based heuristic predicated on an interference threshold to expedite the computation while maintaining a near-optimal solution. The simulation results show a significant decrease in the required pilot overhead (about 17%) and a substantial enhancement in SE (about 8-14%).
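As a rough illustration of the graph-coloring view of pilot assignment (users that interfere above a threshold must receive different pilots), the sketch below implements a greedy coloring heuristic. It is a simplification, not the paper's ILP formulation or its binary-search procedure; `interference` and `threshold` are hypothetical inputs.

```python
def assign_pilots(n_users, interference, threshold):
    """Greedy graph-coloring heuristic for pilot assignment: users u, v may share
    a pilot (color) only if interference[u][v] < threshold. Colors are pilot ids.
    A sketch under assumed inputs, not the ILP formulation from the paper."""
    pilot = [None] * n_users
    # Color high-interference users first (largest-degree-first heuristic).
    order = sorted(range(n_users), key=lambda u: -sum(interference[u]))
    for u in order:
        used = {pilot[v] for v in range(n_users)
                if pilot[v] is not None and interference[u][v] >= threshold}
        c = 0
        while c in used:
            c += 1
        pilot[u] = c
    return pilot  # max(pilot) + 1 pilots are needed at this threshold
```

A binary search over `threshold`, as the abstract suggests, would then look for the smallest interference level at which the number of required pilots fits the available budget.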
[ "['Muhammad Kamran Saeed' 'Ashfaq Khokhar']" ]
null
null
2404.10201
null
null
http://arxiv.org/pdf/2404.10201v2
2024-04-25T05:09:49Z
2024-04-16T00:56:36Z
Private Vector Mean Estimation in the Shuffle Model: Optimal Rates Require Many Messages
We study the problem of private vector mean estimation in the shuffle model of privacy where $n$ users each have a unit vector $v^{(i)} \in \mathbb{R}^d$. We propose a new multi-message protocol that achieves the optimal error using $\tilde{\mathcal{O}}\left(\min(n\varepsilon^2,d)\right)$ messages per user. Moreover, we show that any (unbiased) protocol that achieves optimal error requires each user to send $\Omega(\min(n\varepsilon^2,d)/\log(n))$ messages, demonstrating the optimality of our message complexity up to logarithmic factors. Additionally, we study the single-message setting and design a protocol that achieves mean squared error $\mathcal{O}(dn^{d/(d+2)}\varepsilon^{-4/(d+2)})$. Moreover, we show that any single-message protocol must incur mean squared error $\Omega(dn^{d/(d+2)})$, showing that our protocol is optimal in the standard setting where $\varepsilon = \Theta(1)$. Finally, we study robustness to malicious users and show that malicious users can incur large additive error with a single shuffler.
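For contrast with the multi-message protocols studied above, a minimal single-shot baseline for private vector mean estimation is the Gaussian mechanism in the central model. The sketch below is illustrative only — it is not the shuffle-model protocol from the abstract — and assumes unit-norm rows with add/remove neighboring datasets, giving the sum an L2 sensitivity of 1.

```python
import numpy as np

def gaussian_mechanism_mean(vectors: np.ndarray, eps: float, delta: float) -> np.ndarray:
    """Central-DP Gaussian-mechanism baseline for vector mean estimation.
    Rows are unit vectors, so the L2 sensitivity of the sum is 1 under
    add/remove neighbors. NOT the multi-message shuffle protocol above."""
    n, d = vectors.shape
    sigma = np.sqrt(2.0 * np.log(1.25 / delta)) / eps  # classic (eps, delta) calibration
    noisy_sum = vectors.sum(axis=0) + np.random.normal(0.0, sigma, size=d)
    return noisy_sum / n
```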
[ "['Hilal Asi' 'Vitaly Feldman' 'Jelani Nelson' 'Huy L. Nguyen'\n 'Kunal Talwar' 'Samson Zhou']" ]
null
null
2404.10202
null
null
http://arxiv.org/pdf/2404.10202v1
2024-04-16T00:58:46Z
2024-04-16T00:58:46Z
Towards a Novel Perspective on Adversarial Examples Driven by Frequency
Enhancing our understanding of adversarial examples is crucial for the secure application of machine learning models in real-world scenarios. A prevalent method for analyzing adversarial examples is through a frequency-based approach. However, existing research indicates that attacks designed to exploit low-frequency or high-frequency information can enhance attack performance, leading to an unclear relationship between adversarial perturbations and different frequency components. In this paper, we seek to demystify this relationship by exploring the characteristics of adversarial perturbations within the frequency domain. We employ wavelet packet decomposition for detailed frequency analysis of adversarial examples and conduct statistical examinations across various frequency bands. Intriguingly, our findings indicate that significant adversarial perturbations are present within the high-frequency components of low-frequency bands. Drawing on this insight, we propose a black-box adversarial attack algorithm based on combining different frequency bands. Experiments conducted on multiple datasets and models demonstrate that combining low-frequency bands and high-frequency components of low-frequency bands can significantly enhance attack efficiency. The average attack success rate reaches 99%, surpassing attacks that utilize a single frequency segment. Additionally, we introduce the normalized disturbance visibility index as a solution to the limitations of $L_2$ norm in assessing continuous and discrete perturbations.
[ "['Zhun Zhang' 'Yi Zeng' 'Qihe Liu' 'Shijie Zhou']" ]
null
null
2404.10204
null
null
http://arxiv.org/abs/2404.10204v1
2024-04-16T01:10:09Z
2024-04-16T01:10:09Z
The Impact of Machine Learning on Society: An Analysis of Current Trends and Future Implications
Machine learning (ML) is a rapidly evolving field of technology that has the potential to greatly impact society in a variety of ways. However, there are also concerns about the potential negative effects of ML on society, such as job displacement and privacy issues. This research aimed to conduct a comprehensive analysis of the current and future impact of ML on society. The research included a thorough literature review, case studies, and surveys to gather data on the economic impact of ML, ethical and privacy implications, and public perceptions of the technology. The survey was conducted on 150 respondents from different areas. The case studies conducted were on the impact of ML on healthcare, finance, transportation, and manufacturing. The findings of this research revealed that the majority of respondents have a moderate level of familiarity with the concept of ML, believe that it has the potential to benefit society, and think that society should prioritize the development and use of ML. Based on these findings, it was recommended that more research be conducted on the impact of ML on society; that stronger regulations and laws be developed to protect the privacy and rights of individuals when it comes to ML; that transparency and accountability in ML decision-making processes be increased; and that public education and awareness about ML be enhanced.
[ "['Md Kamrul Hossain Siam' 'Manidipa Bhattacharjee' 'Shakik Mahmud'\n 'Md. Saem Sarkar' 'Md. Masud Rana']" ]
null
null
2404.10207
null
null
http://arxiv.org/pdf/2404.10207v1
2024-04-16T01:20:51Z
2024-04-16T01:20:51Z
HELLINGER-UCB: A novel algorithm for stochastic multi-armed bandit problem and cold start problem in recommender system
In this paper, we study the stochastic multi-armed bandit problem, where the reward is driven by an unknown random variable. We propose a new variant of the Upper Confidence Bound (UCB) algorithm called Hellinger-UCB, which leverages the squared Hellinger distance to build the upper confidence bound. We prove that Hellinger-UCB reaches the theoretical lower bound and show that it has a solid statistical interpretation. Numerical experiments comparing Hellinger-UCB with other variants of the UCB algorithm show that it is effective over finite time horizons. As a real-world example, we apply the Hellinger-UCB algorithm to solve the cold-start problem for a content recommender system of a financial app. Under reasonable assumptions, the Hellinger-UCB algorithm offers a convenient but important low-latency property. An online experiment also illustrates that Hellinger-UCB outperforms both KL-UCB and UCB1 in the sense of a higher click-through rate (CTR).
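For intuition, a Hellinger-style upper confidence bound for Bernoulli rewards can be computed by inverting the squared Hellinger distance around the empirical mean. The sketch below assumes a confidence radius of the form $c\log(t)/n$; the paper's exact constants and distributional family may differ.

```python
import numpy as np

def hellinger_sq(p: float, q: float) -> float:
    """Squared Hellinger distance between Bernoulli(p) and Bernoulli(q)."""
    return 1.0 - (np.sqrt(p * q) + np.sqrt((1.0 - p) * (1.0 - q)))

def hellinger_ucb(p_hat: float, n: int, t: int, c: float = 1.0) -> float:
    """Largest q >= p_hat with H^2(p_hat, q) <= c * log(t) / n, found by binary
    search (H^2 is increasing in q on [p_hat, 1]). The radius is an assumed
    form for illustration, not the paper's exact bound."""
    radius = c * np.log(max(t, 2)) / max(n, 1)
    lo, hi = p_hat, 1.0
    for _ in range(30):
        mid = 0.5 * (lo + hi)
        if hellinger_sq(p_hat, mid) <= radius:
            lo = mid
        else:
            hi = mid
    return lo
```

The bandit loop itself is standard: at each round, play the arm with the largest `hellinger_ucb` value and update that arm's empirical mean.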
[ "['Ruibo Yang' 'Jiazhou Wang' 'Andrew Mullhaupt']" ]
null
null
2404.10209
null
null
http://arxiv.org/pdf/2404.10209v3
2024-04-24T23:50:13Z
2024-04-16T01:38:34Z
Demonstration of DB-GPT: Next Generation Data Interaction System Empowered by Large Language Models
The recent breakthroughs in large language models (LLMs) are poised to transform many areas of software. Technologies for interacting with data are particularly entangled with LLMs, as efficient and intuitive data interaction is paramount. In this paper, we present DB-GPT, a revolutionary and product-ready Python library that integrates LLMs into traditional data interaction tasks to enhance user experience and accessibility. DB-GPT is designed to understand data interaction tasks described by natural language and provide context-aware responses powered by LLMs, making it an indispensable tool for users ranging from novice to expert. Its system design supports deployment across local, distributed, and cloud environments. Beyond handling basic data interaction tasks like Text-to-SQL with LLMs, it can handle complex tasks like generative data analysis through a Multi-Agents framework and the Agentic Workflow Expression Language (AWEL). The Service-oriented Multi-model Management Framework (SMMF) ensures data privacy and security, enabling users to employ DB-GPT with private LLMs. Additionally, DB-GPT offers a series of product-ready features designed to enable users to integrate DB-GPT within their product environments easily. The code of DB-GPT is available on GitHub (https://github.com/eosphoros-ai/DB-GPT), where it already has over 10.7k stars. Please install DB-GPT for your own usage with the instructions (https://github.com/eosphoros-ai/DB-GPT#install) and watch a 5-minute introduction video on YouTube (https://youtu.be/n_8RI1ENyl4) to further investigate DB-GPT.
[ "['Siqiao Xue' 'Danrui Qi' 'Caigao Jiang' 'Wenhui Shi' 'Fangyin Cheng'\n 'Keting Chen' 'Hongjun Yang' 'Zhiping Zhang' 'Jianshan He'\n 'Hongyang Zhang' 'Ganglin Wei' 'Wang Zhao' 'Fan Zhou' 'Hong Yi'\n 'Shaodong Liu' 'Hongjun Yang' 'Faqiang Chen']" ]
null
null
2404.10211
null
null
http://arxiv.org/pdf/2404.10211v1
2024-04-16T01:45:18Z
2024-04-16T01:45:18Z
Anomaly Correction of Business Processes Using Transformer Autoencoder
An event log records all events that occur during the execution of a business process, so detecting and correcting anomalies in event logs can provide a reliable guarantee for subsequent process analysis. Previous works mainly include next-event-prediction-based methods and autoencoder-based methods. These methods cannot accurately and efficiently detect and correct anomalies at the same time, and they all rely on a set threshold to detect anomalies. To solve these problems, we propose a business process anomaly correction method based on a Transformer autoencoder. By using a self-attention mechanism and an autoencoder structure, it can efficiently process event sequences of arbitrary length and directly output corrected business process instances, so it can adapt to various scenarios. At the same time, anomaly detection is transformed into a classification problem by means of self-supervised learning, so there is no need to set a specific threshold. Experimental results on several real-life event logs show that the proposed method is superior to previous methods in terms of anomaly detection accuracy and anomaly correction results while ensuring high running efficiency.
[ "['Ziyou Gong' 'Xianwen Fang' 'Ping Wu']" ]
null
null
2404.10220
null
null
http://arxiv.org/pdf/2404.10220v1
2024-04-16T02:01:56Z
2024-04-16T02:01:56Z
Closed-Loop Open-Vocabulary Mobile Manipulation with GPT-4V
Autonomous robot navigation and manipulation in open environments require reasoning and replanning with closed-loop feedback. We present COME-robot, the first closed-loop framework utilizing the GPT-4V vision-language foundation model for open-ended reasoning and adaptive planning in real-world scenarios. We meticulously construct a library of action primitives for robot exploration, navigation, and manipulation, serving as callable execution modules for GPT-4V in task planning. On top of these modules, GPT-4V serves as the brain that can accomplish multimodal reasoning, generate action policy with code, verify the task progress, and provide feedback for replanning. Such design enables COME-robot to (i) actively perceive the environments, (ii) perform situated reasoning, and (iii) recover from failures. Through comprehensive experiments involving 8 challenging real-world tabletop and manipulation tasks, COME-robot demonstrates a significant improvement in task success rate (~25%) compared to state-of-the-art baseline methods. We further conduct comprehensive analyses to elucidate how COME-robot's design facilitates failure recovery, free-form instruction following, and long-horizon task planning.
[ "['Peiyuan Zhi' 'Zhiyuan Zhang' 'Muzhi Han' 'Zeyu Zhang' 'Zhitian Li'\n 'Ziyuan Jiao' 'Baoxiong Jia' 'Siyuan Huang']" ]
null
null
2404.10226
null
null
http://arxiv.org/pdf/2404.10226v1
2024-04-16T02:11:46Z
2024-04-16T02:11:46Z
Find The Gap: Knowledge Base Reasoning For Visual Question Answering
We analyze knowledge-based visual question answering, in which, given a question, a model needs to ground it in the visual modality and retrieve the relevant knowledge from a given large knowledge base (KB) in order to answer. Our analysis is twofold: one part based on designing neural architectures and training them from scratch, and another based on large pre-trained language models (LLMs). Our research questions are: 1) Can we effectively augment models by explicit supervised retrieval of the relevant KB information to solve the KB-VQA problem? 2) How do task-specific and LLM-based models perform in the integration of visual and external knowledge, and in multi-hop reasoning over both sources of information? 3) Is the implicit knowledge of LLMs sufficient for KB-VQA, and to what extent can it replace the explicit KB? Our results demonstrate the positive impact of empowering task-specific and LLM models with supervised external and visual knowledge retrieval models. Our findings show that though LLMs are stronger in 1-hop reasoning, they suffer in 2-hop reasoning in comparison with our fine-tuned NN model, even when the relevant information from both modalities is available to the model. Moreover, we observed that LLM models outperform the NN model for KB-related questions, which confirms the effectiveness of implicit knowledge in LLMs; however, they do not alleviate the need for an external KB.
[ "['Elham J. Barezi' 'Parisa Kordjamshidi']" ]
null
null
2404.10228
null
null
http://arxiv.org/pdf/2404.10228v2
2024-05-17T14:07:24Z
2024-04-16T02:18:30Z
Two-Stage Stance Labeling: User-Hashtag Heuristics with Graph Neural Networks
The high volume and rapid evolution of content on social media present major challenges for studying the stance of social media users. In this work, we develop a two-stage stance labeling method that utilizes the user-hashtag bipartite graph and the user-user interaction graph. In the first stage, a simple and efficient heuristic for stance labeling uses the user-hashtag bipartite graph to iteratively update the stance association of user and hashtag nodes via a label propagation mechanism. This set of soft labels is then integrated with the user-user interaction graph to train a graph neural network (GNN) model using semi-supervised learning. We evaluate this method on two large-scale datasets containing tweets related to climate change from June 2021 to June 2022 and gun control from January 2022 to January 2023. Our experiments demonstrate that enriching text-based embeddings of users with network information from the user interaction graph using our semi-supervised GNN method outperforms both classifiers trained on user textual embeddings and zero-shot classification using LLMs such as GPT-4. We discuss the need for integrating nuanced understanding from social science with the scalability of computational methods to better understand how polarization on social media occurs for divisive issues such as climate change and gun control.
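The first-stage heuristic is essentially label propagation over a bipartite graph. A minimal NumPy sketch of that idea follows, with an assumed incidence matrix `A` (users x hashtags) and seed stances in [-1, 1]; the paper's exact update and stopping rules may differ.

```python
import numpy as np

def propagate_stance(A: np.ndarray, user_seed: np.ndarray, n_iters: int = 20):
    """Alternate averaging over a user-hashtag bipartite graph: a hashtag's
    stance is the mean of its users' stances and vice versa, with seed users
    clamped each round. A sketch of the first-stage heuristic only."""
    user = user_seed.astype(float).copy()
    hashtag = np.zeros(A.shape[1])
    for _ in range(n_iters):
        hashtag = (A.T @ user) / np.maximum(A.sum(axis=0), 1)   # hashtag <- mean of its users
        user = (A @ hashtag) / np.maximum(A.sum(axis=1), 1)     # user <- mean of their hashtags
        user[user_seed != 0] = user_seed[user_seed != 0]        # keep seed labels fixed
    return user, hashtag
```

The resulting soft user labels would then serve as weak supervision for the second-stage GNN.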
[ "['Joshua Melton' 'Shannon Reid' 'Gabriel Terejanu' 'Siddharth Krishnan']" ]
null
null
2404.10242
null
null
http://arxiv.org/pdf/2404.10242v1
2024-04-16T02:42:06Z
2024-04-16T02:42:06Z
Masked Autoencoders for Microscopy are Scalable Learners of Cellular Biology
Featurizing microscopy images for use in biological research remains a significant challenge, especially for large-scale experiments spanning millions of images. This work explores the scaling properties of weakly supervised classifiers and self-supervised masked autoencoders (MAEs) when training with increasingly larger model backbones and microscopy datasets. Our results show that ViT-based MAEs outperform weakly supervised classifiers on a variety of tasks, achieving as much as an 11.5% relative improvement when recalling known biological relationships curated from public databases. Additionally, we develop a new channel-agnostic MAE architecture (CA-MAE) that allows for inputting images of different numbers and orders of channels at inference time. We demonstrate that CA-MAEs effectively generalize by inferring and evaluating on a microscopy image dataset (JUMP-CP) generated under different experimental conditions with a different channel structure than our pretraining data (RPI-93M). Our findings motivate continued research into scaling self-supervised learning on microscopy data in order to create powerful foundation models of cellular biology that have the potential to catalyze advancements in drug discovery and beyond.
[ "['Oren Kraus' 'Kian Kenyon-Dean' 'Saber Saberian' 'Maryam Fallah'\n 'Peter McLean' 'Jess Leung' 'Vasudev Sharma' 'Ayla Khan'\n 'Jia Balakrishnan' 'Safiye Celik' 'Dominique Beaini' 'Maciej Sypetkowski'\n 'Chi Vicky Cheng' 'Kristen Morse' 'Maureen Makes' 'Ben Mabey'\n 'Berton Earnshaw']" ]
null
null
2404.10255
null
null
http://arxiv.org/pdf/2404.10255v2
2024-04-27T12:39:28Z
2024-04-16T03:18:27Z
Privacy-Enhanced Training-as-a-Service for On-Device Intelligence: Concept, Architectural Scheme, and Open Problems
On-device intelligence (ODI) enables artificial intelligence (AI) applications to run on end devices, providing real-time and customized AI inference without relying on remote servers. However, training models for on-device deployment faces significant challenges due to the decentralized and privacy-sensitive nature of users' data, along with end-side constraints related to network connectivity, computation efficiency, etc. Existing training paradigms, such as cloud-based training, federated learning, and transfer learning, fail to sufficiently address these practical constraints that are prevalent for devices. To overcome these challenges, we propose Privacy-Enhanced Training-as-a-Service (PTaaS), a novel service computing paradigm that provides privacy-friendly, customized AI model training for end devices. PTaaS outsources the core training process to remote and powerful cloud or edge servers, efficiently developing customized on-device models based on uploaded anonymous queries, enhancing data privacy while reducing the computation load on individual devices. We explore the definition, goals, and design principles of PTaaS, alongside emerging technologies that support the PTaaS paradigm. An architectural scheme for PTaaS is also presented, followed by a series of open problems that set the stage for future research directions in the field of PTaaS.
[ "['Zhiyuan Wu' 'Sheng Sun' 'Yuwei Wang' 'Min Liu' 'Bo Gao' 'Tianliu He'\n 'Wen Wang']" ]
null
null
2404.10259
null
null
http://arxiv.org/pdf/2404.10259v2
2024-07-15T13:00:46Z
2024-04-16T03:26:43Z
Uncovering Latent Arguments in Social Media Messaging by Employing LLMs-in-the-Loop Strategy
The widespread use of social media has led to a surge in popularity for automated methods of analyzing public opinion. Supervised methods are adept at text categorization, yet the dynamic nature of social media discussions poses a continual challenge for these techniques due to the constant shifting of the focus. On the other hand, traditional unsupervised methods for extracting themes from public discourse, such as topic modeling, often reveal overarching patterns that might not capture specific nuances. Consequently, a significant portion of research into social media discourse still depends on labor-intensive manual coding techniques and a human-in-the-loop approach, which are both time-consuming and costly. In this work, we study the problem of discovering arguments associated with a specific theme. We propose a generic LLMs-in-the-Loop strategy that leverages the advanced capabilities of Large Language Models (LLMs) to extract latent arguments from social media messaging. To demonstrate our approach, we apply our framework to contentious topics. We use two publicly available datasets: (1) the climate campaigns dataset of 14k Facebook ads with 25 themes and (2) the COVID-19 vaccine campaigns dataset of 9k Facebook ads with 14 themes. Additionally, we design a downstream task as stance prediction by leveraging talking points in climate debates. Furthermore, we analyze demographic targeting and the adaptation of messaging based on real-world events.
[ "['Tunazzina Islam' 'Dan Goldwasser']" ]
null
null
2404.10261
null
null
http://arxiv.org/pdf/2404.10261v2
2024-04-21T15:47:28Z
2024-04-16T03:31:28Z
Lighter, Better, Faster Multi-Source Domain Adaptation with Gaussian Mixture Models and Optimal Transport
In this paper, we tackle Multi-Source Domain Adaptation (MSDA), a task in transfer learning where one adapts multiple heterogeneous, labeled source probability measures towards a different, unlabeled target measure. We propose a novel framework for MSDA, based on Optimal Transport (OT) and Gaussian Mixture Models (GMMs). Our framework has two key advantages. First, OT between GMMs can be solved efficiently via linear programming. Second, it provides a convenient model for supervised learning, especially classification, as components in the GMM can be associated with existing classes. Based on the GMM-OT problem, we propose a novel technique for calculating barycenters of GMMs. Based on this novel algorithm, we propose two new strategies for MSDA: GMM-WBT and GMM-DaDiL. We empirically evaluate our proposed methods on four benchmarks in image classification and fault diagnosis, showing that we improve over the prior art while being faster and involving fewer parameters.
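The claim that OT between GMMs reduces to a linear program can be made concrete: given the mixing weights of two GMMs and a pairwise cost matrix between their Gaussian components (e.g., Wasserstein-2 distances), the transport plan solves a standard discrete OT problem. A sketch with SciPy follows; the cost construction and barycenter machinery from the paper are not shown.

```python
import numpy as np
from scipy.optimize import linprog

def gmm_ot_plan(w1: np.ndarray, w2: np.ndarray, cost: np.ndarray) -> np.ndarray:
    """Discrete OT between two GMMs' mixing weights w1 (k1,) and w2 (k2,),
    given a k1 x k2 component-to-component cost matrix. Solved as a linear
    program; a generic sketch of the GMM-OT idea, not the paper's full method."""
    k1, k2 = cost.shape
    A_eq, b_eq = [], []
    for i in range(k1):                       # row marginals: plan rows sum to w1
        row = np.zeros(k1 * k2); row[i * k2:(i + 1) * k2] = 1.0
        A_eq.append(row); b_eq.append(w1[i])
    for j in range(k2):                       # column marginals: plan cols sum to w2
        col = np.zeros(k1 * k2); col[j::k2] = 1.0
        A_eq.append(col); b_eq.append(w2[j])
    res = linprog(cost.ravel(), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=[(0, None)] * (k1 * k2), method="highs")
    return res.x.reshape(k1, k2)
```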
[ "['Eduardo Fernandes Montesuma' 'Fred Ngolè Mboula' 'Antoine Souloumiac']" ]
null
null
2404.10271
null
null
http://arxiv.org/pdf/2404.10271v2
2024-06-04T14:34:38Z
2024-04-16T03:59:33Z
Social Choice Should Guide AI Alignment in Dealing with Diverse Human Feedback
Foundation models such as GPT-4 are fine-tuned to avoid unsafe or otherwise problematic behavior, such as helping to commit crimes or producing racist text. One approach to fine-tuning, called reinforcement learning from human feedback, learns from humans' expressed preferences over multiple outputs. Another approach is constitutional AI, in which the input from humans is a list of high-level principles. But how do we deal with potentially diverging input from humans? How can we aggregate the input into consistent data about "collective" preferences or otherwise use it to make collective choices about model behavior? In this paper, we argue that the field of social choice is well positioned to address these questions, and we discuss ways forward for this agenda, drawing on discussions in a recent workshop on Social Choice for AI Ethics and Safety held in Berkeley, CA, USA in December 2023.
[ "['Vincent Conitzer' 'Rachel Freedman' 'Jobst Heitzig' 'Wesley H. Holliday'\n 'Bob M. Jacobs' 'Nathan Lambert' 'Milan Mossé' 'Eric Pacuit'\n 'Stuart Russell' 'Hailey Schoelkopf' 'Emanuel Tewolde'\n 'William S. Zwicker']" ]
null
null
2404.10274
null
null
http://arxiv.org/pdf/2404.10274v1
2024-04-16T04:17:17Z
2024-04-16T04:17:17Z
Sparse Attention Regression Network Based Soil Fertility Prediction With Ummaso
The challenge of imbalanced soil nutrient datasets significantly hampers accurate predictions of soil fertility. To tackle this, a new method is suggested in this research, combining Uniform Manifold Approximation and Projection (UMAP) with Least Absolute Shrinkage and Selection Operator (LASSO). The main aim is to counter the impact of uneven data distribution and improve soil fertility models' predictive precision. The model introduced uses Sparse Attention Regression, effectively incorporating pertinent features from the imbalanced dataset. UMAP is utilized initially to reduce data complexity, unveiling hidden structures and important patterns. Following this, LASSO is applied to refine features and enhance the model's interpretability. The experimental outcomes highlight the effectiveness of the UMAP and LASSO hybrid approach. The proposed model achieves outstanding performance metrics, reaching a predictive accuracy of 98%, demonstrating its capability in accurate soil fertility predictions. Additionally, it showcases a Precision of 91.25%, indicating its adeptness in identifying fertile soil instances accurately. The Recall metric stands at 90.90%, emphasizing the model's ability to capture true positive cases effectively.
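The two preprocessing stages described above compose naturally as a pipeline. A hedged sketch with the `umap-learn` and scikit-learn libraries follows, using randomly generated stand-in data; the Sparse Attention Regression model itself is not reproduced here.

```python
import numpy as np
from umap import UMAP                      # umap-learn package
from sklearn.linear_model import Lasso

# Hypothetical soil data: 200 samples x 30 nutrient features, fertility target y.
rng = np.random.default_rng(0)
X, y = rng.random((200, 30)), rng.random(200)

embedding = UMAP(n_components=10, random_state=42).fit_transform(X)  # reduce complexity
lasso = Lasso(alpha=0.01).fit(embedding, y)                          # sparse refinement
print("retained UMAP components:", np.flatnonzero(lasso.coef_))
```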
[ "['R V Raghavendra Rao' 'U Srinivasulu Reddy']" ]
null
null
2404.10275
null
null
http://arxiv.org/pdf/2404.10275v1
2024-04-16T04:21:59Z
2024-04-16T04:21:59Z
OptiGrad: A Fair and more Efficient Price Elasticity Optimization via a Gradient Based Learning
This paper presents a novel approach to optimizing profit margins in non-life insurance markets through a gradient descent-based method, targeting three key objectives: 1) maximizing profit margins, 2) ensuring conversion rates, and 3) enforcing fairness criteria such as demographic parity (DP). Traditional pricing optimization, which leans heavily on linear and semidefinite programming, encounters challenges in balancing profitability and fairness. These challenges become especially pronounced in situations that necessitate continuous rate adjustments and the incorporation of fairness criteria. Specifically, indirect Ratebook optimization, a widely used method for new-business price setting, relies on predictor models such as XGBoost or GLMs/GAMs to estimate downstream individually optimized prices. However, this strategy is prone to sequential errors and struggles to effectively manage optimization in continuous rate scenarios. In practice, to save time, actuaries frequently opt for optimization within discrete intervals (e.g., a range of [-20%, +20%] with fixed increments), leading to approximate estimations. Moreover, to circumvent infeasible solutions they often use relaxed constraints, leading to suboptimal pricing strategies. The reverse-engineered nature of traditional models complicates the enforcement of fairness and can lead to biased outcomes. Our method addresses these challenges by employing a direct optimization strategy in the continuous space of rates and by embedding fairness through an adversarial predictor model. This innovation not only reduces sequential errors and simplifies the complexities found in traditional models but also directly integrates fairness measures into the commercial premium calculation. We demonstrate improved margin performance and stronger enforcement of fairness, highlighting the critical need to evolve existing pricing strategies.
[ "['Vincent Grari' 'Marcin Detyniecki']" ]
null
null
2404.10282
null
null
http://arxiv.org/pdf/2404.10282v2
2024-05-24T20:52:02Z
2024-04-16T04:52:41Z
Tripod: Three Complementary Inductive Biases for Disentangled Representation Learning
Inductive biases are crucial in disentangled representation learning for narrowing down an underspecified solution set. In this work, we consider endowing a neural network autoencoder with three select inductive biases from the literature: data compression into a grid-like latent space via quantization, collective independence amongst latents, and minimal functional influence of any latent on how other latents determine data generation. In principle, these inductive biases are deeply complementary: they most directly specify properties of the latent space, encoder, and decoder, respectively. In practice, however, naively combining existing techniques instantiating these inductive biases fails to yield significant benefits. To address this, we propose adaptations to the three techniques that simplify the learning problem, equip key regularization terms with stabilizing invariances, and quash degenerate incentives. The resulting model, Tripod, achieves state-of-the-art results on a suite of four image disentanglement benchmarks. We also verify that Tripod significantly improves upon its naive incarnation and that all three of its "legs" are necessary for best performance.
[ "['Kyle Hsu' 'Jubayer Ibn Hamid' 'Kaylee Burns' 'Chelsea Finn' 'Jiajun Wu']" ]
null
null
2404.10296
null
null
http://arxiv.org/pdf/2404.10296v2
2024-04-22T09:21:33Z
2024-04-16T05:40:30Z
Engineering software 2.0 by interpolating neural networks: unifying training, solving, and calibration
The evolution of artificial intelligence (AI) and neural network theories has revolutionized the way software is programmed, shifting from a hard-coded series of codes to a vast neural network. However, this transition in engineering software has faced challenges such as data scarcity, multi-modality of data, low model accuracy, and slow inference. Here, we propose a new network based on interpolation theories and tensor decomposition, the interpolating neural network (INN). Instead of interpolating training data, a common notion in computer science, INN interpolates interpolation points in the physical space whose coordinates and values are trainable. It can also extrapolate if the interpolation points reside outside of the range of training data and the interpolation functions have a larger support domain. INN features orders of magnitude fewer trainable parameters, faster training, a smaller memory footprint, and higher model accuracy compared to feed-forward neural networks (FFNN) or physics-informed neural networks (PINN). INN is poised to usher in Engineering Software 2.0, a unified neural network that spans various domains of space, time, parameters, and initial/boundary conditions. This has previously been computationally prohibitive due to the exponentially growing number of trainable parameters, easily exceeding the parameter size of ChatGPT, which is over 1 trillion. INN addresses this challenge by leveraging tensor decomposition and tensor product, with adaptable network architecture.
[ "['Chanwook Park' 'Sourav Saha' 'Jiachen Guo' 'Xiaoyu Xie'\n 'Satyajit Mojumder' 'Miguel A. Bessa' 'Dong Qian' 'Wei Chen'\n 'Gregory J. Wagner' 'Jian Cao' 'Wing Kam Liu']" ]
null
null
2404.10299
null
null
http://arxiv.org/pdf/2404.10299v1
2024-04-16T05:56:41Z
2024-04-16T05:56:41Z
Clustering and Data Augmentation to Improve Accuracy of Sleep Assessment and Sleep Individuality Analysis
Recently, with growing health awareness, novel methods have allowed individuals to monitor sleep at home. Utilizing sleep sounds offers advantages over conventional methods like smartwatches: it is non-intrusive and capable of detecting various physiological activities. This study aims to construct a machine learning-based sleep assessment model that provides evidence-based assessments, such as poor sleep due to frequent movement during sleep onset. Extracting sleep sound events, deriving latent representations using a VAE, clustering with a GMM, and training an LSTM for subjective sleep assessment achieved a high accuracy of 94.8% in distinguishing sleep satisfaction. Moreover, TimeSHAP revealed differences in impactful sound event types and timings for different individuals.
[ "['Shintaro Tamai' 'Masayuki Numao' 'Ken-ichi Fukui']" ]
null
null
2404.10301
null
null
http://arxiv.org/pdf/2404.10301v1
2024-04-16T06:09:33Z
2024-04-16T06:09:33Z
Long-form music generation with latent diffusion
Audio-based generative models for music have seen great strides recently, but so far have not managed to produce full-length music tracks with coherent musical structure. We show that by training a generative model on long temporal contexts it is possible to produce long-form music of up to 4m45s. Our model consists of a diffusion-transformer operating on a highly downsampled continuous latent representation (latent rate of 21.5Hz). It obtains state-of-the-art generations according to metrics on audio quality and prompt alignment, and subjective tests reveal that it produces full-length music with coherent structure.
[ "['Zach Evans' 'Julian D. Parker' 'CJ Carr' 'Zack Zukowski' 'Josiah Taylor'\n 'Jordi Pons']" ]
null
null
2404.10304
null
null
http://arxiv.org/pdf/2404.10304v1
2024-04-16T06:20:06Z
2024-04-16T06:20:06Z
LLM-Powered Test Case Generation for Detecting Tricky Bugs
Conventional automated test generation tools struggle to generate test oracles and tricky bug-revealing test inputs. Large Language Models (LLMs) can be prompted to produce test inputs and oracles for a program directly, but the precision of the tests can be very low for complex scenarios (only 6.3% based on our experiments). To fill this gap, this paper proposes AID, which combines LLMs with differential testing to generate fault-revealing test inputs and oracles targeting plausibly correct programs (i.e., programs that have passed all the existing tests). In particular, AID selects test inputs that yield diverse outputs on a set of program variants generated by LLMs, then constructs the test oracle based on the outputs. We evaluate AID on two large-scale datasets with tricky bugs: TrickyBugs and EvalPlus, and compare it with three state-of-the-art baselines. The evaluation results show that the recall, precision, and F1 score of AID outperform the state-of-the-art by up to 1.80x, 2.65x, and 1.66x, respectively.
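The input-selection idea — prefer inputs on which LLM-generated program variants disagree — is easy to sketch. The helper below is a simplified illustration with hypothetical callables; AID's actual oracle construction and differential-testing loop are more involved.

```python
def select_diverse_inputs(inputs, variants, top_k=10):
    """Rank candidate test inputs by output diversity across program variants:
    an input that makes variants disagree is more likely to reveal a fault.
    Exceptions are bucketed by type. A sketch, not AID's full pipeline."""
    def run(program, x):
        try:
            return repr(program(x))
        except Exception as exc:
            return type(exc).__name__
    scored = [(len({run(p, x) for p in variants}), x) for x in inputs]
    scored.sort(key=lambda pair: -pair[0])
    return [x for _, x in scored[:top_k]]
```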
[ "['Kaibo Liu' 'Yiyang Liu' 'Zhenpeng Chen' 'Jie M. Zhang' 'Yudong Han'\n 'Yun Ma' 'Ge Li' 'Gang Huang']" ]
null
null
2404.10308
null
null
http://arxiv.org/pdf/2404.10308v1
2024-04-16T06:34:08Z
2024-04-16T06:34:08Z
Hierarchical Context Merging: Better Long Context Understanding for Pre-trained LLMs
Large language models (LLMs) have shown remarkable performance in various natural language processing tasks. However, a primary constraint they face is the context limit, i.e., the maximum number of tokens they can process. Previous works have explored architectural changes and modifications in positional encoding to relax the constraint, but they often require expensive training or do not address the computational demands of self-attention. In this paper, we present Hierarchical cOntext MERging (HOMER), a new training-free scheme designed to overcome these limitations. HOMER uses a divide-and-conquer algorithm, dividing long inputs into manageable chunks. Each chunk is then processed collectively, employing a hierarchical strategy that merges adjacent chunks at progressive transformer layers. A token reduction technique precedes each merging, ensuring efficient memory usage. We also propose an optimized computational order that reduces the memory requirement to scale logarithmically with input length, making it especially favorable for environments with tight memory restrictions. Our experiments demonstrate the proposed method's superior performance and memory efficiency, enabling the broader use of LLMs in settings that require long contexts. Code is available at https://github.com/alinlab/HOMER.
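At its core, the divide-and-conquer scheme splits a long input into chunks and merges adjacent chunk representations pairwise, applying token reduction before each merge. The sketch below abstracts HOMER's layer-wise schedule into two assumed callables, `reduce_fn` and `merge_fn`; it conveys the control flow only.

```python
def hierarchical_merge(chunks, merge_fn, reduce_fn):
    """Pairwise hierarchical merging of chunk representations: reduce tokens,
    then merge adjacent chunks, halving the list each round until one remains.
    `merge_fn`/`reduce_fn` are hypothetical stand-ins for HOMER's transformer-
    layer merging and token-reduction steps."""
    while len(chunks) > 1:
        chunks = [reduce_fn(c) for c in chunks]          # token reduction first
        chunks = [merge_fn(chunks[i], chunks[i + 1]) if i + 1 < len(chunks)
                  else chunks[i]
                  for i in range(0, len(chunks), 2)]     # merge adjacent pairs
    return chunks[0]
```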
[ "['Woomin Song' 'Seunghyuk Oh' 'Sangwoo Mo' 'Jaehyung Kim' 'Sukmin Yun'\n 'Jung-Woo Ha' 'Jinwoo Shin']" ]
null
null
2404.10310
null
null
http://arxiv.org/pdf/2404.10310v1
2024-04-16T06:37:19Z
2024-04-16T06:37:19Z
Wireless Earphone-based Real-Time Monitoring of Breathing Exercises: A Deep Learning Approach
Several therapy routines require deep breathing exercises as a key component and patients undergoing such therapies must perform these exercises regularly. Assessing the outcome of a therapy and tailoring its course necessitates monitoring a patient's compliance with the therapy. While therapy compliance monitoring is routine in a clinical environment, it is challenging to do in an at-home setting. This is so because a home setting lacks access to specialized equipment and skilled professionals needed to effectively monitor the performance of a therapy routine by a patient. For some types of therapies, these challenges can be addressed with the use of consumer-grade hardware, such as earphones and smartphones, as practical solutions. To accurately monitor breathing exercises using wireless earphones, this paper proposes a framework that has the potential for assessing a patient's compliance with an at-home therapy. The proposed system performs real-time detection of breathing phases and channels with high accuracy by processing a $\mathbf{500}$ ms audio signal through two convolutional neural networks. The first network, called a channel classifier, distinguishes between nasal and oral breathing, and a pause. The second network, called a phase classifier, determines whether the audio segment is from inhalation or exhalation. According to $k$-fold cross-validation, the channel and phase classifiers achieved a maximum F1 score of $\mathbf{97.99\%}$ and $\mathbf{89.46\%}$, respectively. The results demonstrate the potential of using commodity earphones for real-time breathing channel and phase detection for breathing therapy compliance monitoring.
[ "['Hassam Khan Wazir' 'Zaid Waghoo' 'Vikram Kapila']" ]
null
null
2404.10314
null
null
http://arxiv.org/pdf/2404.10314v1
2024-04-16T06:40:51Z
2024-04-16T06:40:51Z
Awareness of uncertainty in classification using a multivariate model and multi-views
One of the ways to make artificial intelligence more natural is to give it some room for doubt. Two main questions should be resolved in that regard. First, how do we train a model to estimate the uncertainties of its own predictions? And then, what do we do with uncertain predictions when they appear? First, we proposed an uncertainty-aware negative log-likelihood loss for the case of an N-dimensional multivariate normal distribution with a spherical variance matrix, for the solution of N-class classification tasks. The loss is similar to the heteroscedastic regression loss. The proposed model regularizes uncertain predictions and is trained to calculate both the predictions and their uncertainty estimations. The model fits well with the label smoothing technique. Second, we expanded the limits of data augmentation at the training and test stages, and made the trained model give multiple predictions for a given number of augmented versions of each test sample. Given the multi-view predictions together with their uncertainties and confidences, we proposed several methods to calculate the final predictions, including mode values and bin counts with soft and hard weights. For the latter method, we formalized the model tuning task as multimodal optimization with the non-differentiable criterion of maximum accuracy, and applied particle swarm optimization to solve it. The proposed methodology was tested on the CIFAR-10 dataset with clean and noisy labels and demonstrated good results in comparison with other uncertainty estimation methods related to sample selection, co-teaching, and label smoothing.
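The loss described above has a closed form for a spherical Gaussian. A minimal PyTorch sketch follows, predicting a per-sample log-variance for numerical stability; the paper's exact regularisation of uncertain predictions may differ.

```python
import math
import torch

def spherical_gaussian_nll(mu: torch.Tensor, log_var: torch.Tensor,
                           target: torch.Tensor) -> torch.Tensor:
    """NLL of an N-dim Gaussian with spherical covariance sigma^2 * I:
    N/2 * log(2*pi*sigma^2) + ||t - mu||^2 / (2*sigma^2), averaged over the
    batch. `log_var` holds one scalar per sample. A sketch of the loss family,
    not the authors' exact implementation."""
    n = mu.shape[-1]
    sq_err = ((target - mu) ** 2).sum(dim=-1)
    nll = 0.5 * (n * (log_var + math.log(2.0 * math.pi)) + sq_err * torch.exp(-log_var))
    return nll.mean()
```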
[ "['Alexey Kornaev' 'Elena Kornaeva' 'Oleg Ivanov' 'Ilya Pershin'\n 'Danis Alukaev']" ]
null
null
2404.10319
null
null
http://arxiv.org/pdf/2404.10319v1
2024-04-16T06:59:26Z
2024-04-16T06:59:26Z
Application of Deep Learning Methods to Processing of Noisy Medical Video Data
Cell counting becomes a challenging problem when the cells move in a continuous stream and their boundaries are difficult to detect visually. To resolve this problem, we modified the training and decision-making processes using curriculum learning and multi-view prediction techniques, respectively.
[ "['Danil Afonchikov' 'Elena Kornaeva' 'Irina Makovik' 'Alexey Kornaev']" ]
null
null
2404.10320
null
null
http://arxiv.org/pdf/2404.10320v2
2024-04-18T05:56:21Z
2024-04-16T07:02:40Z
CARE to Compare: A real-world dataset for anomaly detection in wind turbine data
Anomaly detection plays a crucial role in the field of predictive maintenance for wind turbines, yet comparing different algorithms is a difficult task because domain-specific public datasets are scarce. Many comparisons of different approaches either use benchmarks composed of data from many different domains, inaccessible data, or one of the few publicly available datasets that lack detailed fault information. Moreover, many publications highlight a couple of case studies where fault detection was successful. With this paper we publish a high-quality dataset that contains data from 36 wind turbines across 3 different wind farms as well as the most detailed fault information of any public wind turbine dataset as far as we know. The new dataset contains 89 years' worth of real-world operating data of wind turbines, distributed across 44 labeled time frames for anomalies that led up to faults, as well as 51 time series representing normal behavior. Additionally, the quality of training data is ensured by turbine-status-based labels for each data point. Furthermore, we propose a new scoring method, called CARE (Coverage, Accuracy, Reliability and Earliness), which takes advantage of the information depth present in the dataset to identify a good all-around anomaly detection model. This score considers the anomaly detection performance, the ability to recognize normal behavior properly, and the capability to raise as few false alarms as possible while simultaneously detecting anomalies early.
[ "['Christian Gück' 'Cyriana M. A. Roelofs' 'Stefan Faulstich']" ]
null
null
2404.10324
null
null
http://arxiv.org/pdf/2404.10324v1
2024-04-16T07:08:04Z
2024-04-16T07:08:04Z
Graph neural network-based surrogate modelling for real-time hydraulic prediction of urban drainage networks
Physics-based models are computationally time-consuming and infeasible for real-time scenarios of urban drainage networks, and a surrogate model is needed to accelerate the online predictive modelling. Fully-connected neural networks (NNs) are potential surrogate models, but may suffer from low interpretability and efficiency in fitting complex targets. Owing to the state-of-the-art modelling power of graph neural networks (GNNs) and their match with urban drainage networks in the graph structure, this work proposes a GNN-based surrogate of the flow routing model for the hydraulic prediction problem of drainage networks, which regards recent hydraulic states as initial conditions, and future runoff and control policy as boundary conditions. To incorporate hydraulic constraints and physical relationships into drainage modelling, physics-guided mechanisms are designed on top of the surrogate model to restrict the prediction variables with flow balance and flooding occurrence constraints. According to case results in a stormwater network, the GNN-based model is more cost-effective with better hydraulic prediction accuracy than the NN-based model after equal training epochs, and the designed mechanisms further limit prediction errors with interpretable domain knowledge. As the model structure adheres to the flow routing mechanisms and hydraulic constraints in urban drainage networks, it provides an interpretable and effective solution for data-driven surrogate modelling. Simultaneously, the surrogate model accelerates the predictive modelling of urban drainage networks for real-time use compared with the physics-based model.
[ "['Zhiyu Zhang' 'Chenkaixiang Lu' 'Wenchong Tian' 'Zhenliang Liao'\n 'Zhiguo Yuan']" ]
null
null
2404.10341
null
null
http://arxiv.org/abs/2404.10341v1
2024-04-16T07:24:54Z
2024-04-16T07:24:54Z
Asset management, condition monitoring and Digital Twins: damage detection and virtual inspection on a reinforced concrete bridge
In April 2021, Stava bridge, a main bridge on the E6 in Norway, was abruptly closed to traffic. A structural defect had seriously compromised the bridge's structural integrity. The Norwegian Public Roads Administration (NPRA) closed it, implemented a temporary solution, and reopened it with severe traffic restrictions. The incident was alerted through what constitutes the bridge's Digital Twin, which processes data from Internet of Things sensors. The solution was crucial in online and offline diagnostics, the case demonstrating the value of such technologies for tackling emerging dangerous situations as well as acting preventively. A critical and rapidly developing damage was detected in time to stop its development, but not in time to avoid the incident altogether. The paper puts risk in a broader perspective for an organization responsible for highway infrastructure. It positions online monitoring and Digital Twins in the context of Risk- and Condition-Based Maintenance. The situation that arose at Stava bridge, and how it was detected, analyzed, and diagnosed during virtual inspection, is described. The case demonstrates how combining physics-based methods with Machine Learning can facilitate damage detection and diagnostics. A summary of lessons learnt, from both technical and organizational perspectives, as well as plans for future work, is presented.
[ "['Arnulf Hagen' 'Trond Michael Andersen']" ]
null
null
2404.10351
null
null
http://arxiv.org/pdf/2404.10351v1
2024-04-16T07:39:54Z
2024-04-16T07:39:54Z
On the Use of Relative Validity Indices for Comparing Clustering Approaches
Relative Validity Indices (RVIs) such as the Silhouette Width Criterion and the Calinski-Harabasz and Davies-Bouldin indices are the most popular tools for evaluating and optimising applications of clustering. Their ability to rank collections of candidate partitions has been used to guide the selection of the number of clusters, and to compare partitions from different clustering algorithms. Beyond these more conventional tasks, many examples can be found in the literature where RVIs have been used to compare and select other aspects of clustering approaches, such as data normalisation procedures, data representation methods, and distance measures. The authors are not aware of any studies that have attempted to establish the suitability of RVIs for such comparisons. Moreover, given the impact of these aspects on pairwise similarities, it is not even immediately obvious how RVIs should be implemented when comparing them. In this study, we conducted experiments with seven common RVIs on over 2.7 million clustering partitions for both synthetic and real-world datasets, encompassing feature-vector and time-series data. Our findings suggest that RVIs are not well-suited to these unconventional tasks, and that conclusions drawn from such applications may be misleading. We recommend that normalisation procedures, representation methods, and distance measures instead be selected using external validation on high-quality labelled datasets or carefully designed outcome-oriented objective criteria, both of which should be informed by relevant domain knowledge and clustering aims.
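For readers unfamiliar with the conventional RVI usage that the study contrasts against, the snippet below shows the standard pattern: rank candidate partitions (here, over the number of clusters) by an internal index such as the silhouette. This is the well-established usage; the study's point concerns extending it to normalisation procedures, representations, and distance measures.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

X, _ = make_blobs(n_samples=300, centers=4, random_state=0)

# Conventional RVI usage: score each candidate partition and pick the best k.
for k in range(2, 7):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    print(k, round(silhouette_score(X, labels), 3))
```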
[ "['Luke W. Yerbury' 'Ricardo J. G. B. Campello' 'G. C. Livingston Jr'\n 'Mark Goldsworthy' \"Lachlan O'Neil\"]" ]
null
null
2404.10353
null
null
http://arxiv.org/pdf/2404.10353v1
2024-04-16T07:41:29Z
2024-04-16T07:41:29Z
Rethinking the Graph Polynomial Filter via Positive and Negative Coupling Analysis
Recently, the optimization of polynomial filters within Spectral Graph Neural Networks (GNNs) has emerged as a prominent research focus. Existing spectral GNNs mainly emphasize polynomial properties in filter design, introducing computational overhead and neglecting the integration of crucial graph structure information. We argue that incorporating graph information into basis construction can enhance understanding of polynomial basis, and further facilitate simplified polynomial filter design. Motivated by this, we first propose a Positive and Negative Coupling Analysis (PNCA) framework, where the concepts of positive and negative activation are defined and their respective and mixed effects are analysed. Then, we explore PNCA from the message propagation perspective, revealing the subtle information hidden in the activation process. Subsequently, PNCA is used to analyze the mainstream polynomial filters, and a novel simple basis that decouples the positive and negative activation and fully utilizes graph structure information is designed. Finally, a simple GNN (called GSCNet) is proposed based on the new basis. Experimental results on the benchmark datasets for node classification verify that our GSCNet obtains better or comparable results compared with existing state-of-the-art GNNs while demanding relatively less computational time.
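As background for the filter-design discussion, a generic polynomial graph filter applies $y = \sum_k \theta_k \hat{A}^k x$ with a normalised adjacency $\hat{A}$. The NumPy sketch below uses the plain monomial basis; it illustrates the building block being analysed, not GSCNet's PNCA-derived basis.

```python
import numpy as np

def poly_graph_filter(adj: np.ndarray, x: np.ndarray, theta) -> np.ndarray:
    """Monomial-basis polynomial filter y = sum_k theta[k] * A_hat^k @ x, with
    A_hat the symmetrically normalised adjacency plus self-loops. A generic
    spectral-GNN building block, not GSCNet's specific simple basis."""
    a = adj + np.eye(adj.shape[0])                  # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(a.sum(axis=1))
    a_hat = a * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    out, z = theta[0] * x, x
    for k in range(1, len(theta)):
        z = a_hat @ z                               # next power applied to features
        out = out + theta[k] * z
    return out
```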
[ "['Haodong Wen' 'Bodong Du' 'Ruixun Liu' 'Deyu Meng' 'Xiangyong Cao']" ]
null
null
2404.10354
null
null
http://arxiv.org/pdf/2404.10354v1
2024-04-16T07:42:55Z
2024-04-16T07:42:55Z
Physical formula enhanced multi-task learning for pharmacokinetics prediction
Artificial intelligence (AI) technology has demonstrated remarkable potential in drug discovery, where pharmacokinetics plays a crucial role in determining the dosage, safety, and efficacy of new drugs. A major challenge for AI-driven drug discovery (AIDD) is the scarcity of high-quality data, which often requires extensive wet-lab work. A typical example of this is pharmacokinetic experiments. In this work, we develop a physical formula enhanced multi-task learning (PEMAL) method that predicts four key parameters of pharmacokinetics simultaneously. By incorporating physical formulas into the multi-task framework, PEMAL facilitates effective knowledge sharing and target alignment among the pharmacokinetic parameters, thereby enhancing the accuracy of prediction. Our experiments reveal that PEMAL significantly lowers the data demand, compared to typical Graph Neural Networks. Moreover, we demonstrate that PEMAL enhances the robustness to noise, an advantage that conventional Neural Networks do not possess. Another advantage of PEMAL is its high flexibility, which can potentially be applied to other multi-task machine learning scenarios. Overall, our work illustrates the benefits and potential of using PEMAL in AIDD and other scenarios with data scarcity and noise.
[ "['Ruifeng Li' 'Dongzhan Zhou' 'Ancheng Shen' 'Ao Zhang' 'Mao Su'\n 'Mingqian Li' 'Hongyang Chen' 'Gang Chen' 'Yin Zhang' 'Shufei Zhang'\n 'Yuqiang Li' 'Wanli Ouyang']" ]
null
null
2404.10356
null
null
http://arxiv.org/pdf/2404.10356v1
2024-04-16T07:44:08Z
2024-04-16T07:44:08Z
Generating Counterfactual Trajectories with Latent Diffusion Models for Concept Discovery
Trustworthiness is a major prerequisite for the safe application of opaque deep learning models in high-stakes domains like medicine. Understanding the decision-making process not only contributes to fostering trust but might also reveal previously unknown decision criteria of complex models that could advance the state of medical research. The discovery of decision-relevant concepts from black box models is a particularly challenging task. This study proposes Concept Discovery through Latent Diffusion-based Counterfactual Trajectories (CDCT), a novel three-step framework for concept discovery leveraging the superior image synthesis capabilities of diffusion models. In the first step, CDCT uses a Latent Diffusion Model (LDM) to generate a counterfactual trajectory dataset. This dataset is used to derive a disentangled representation of classification-relevant concepts using a Variational Autoencoder (VAE). Finally, a search algorithm is applied to identify relevant concepts in the disentangled latent space. The application of CDCT to a classifier trained on the largest public skin lesion dataset revealed not only the presence of several biases but also meaningful biomarkers. Moreover, the counterfactuals generated within CDCT show better FID scores than those produced by a previously established state-of-the-art method, while being 12 times more resource-efficient. Unsupervised concept discovery holds great potential for the application of trustworthy AI and the further development of human knowledge in various domains. CDCT represents a further step in this direction.
[ "['Payal Varshney' 'Adriano Lucieri' 'Christoph Balada' 'Andreas Dengel'\n 'Sheraz Ahmed']" ]
null
null
2404.10363
null
null
http://arxiv.org/pdf/2404.10363v1
2024-04-16T07:53:54Z
2024-04-16T07:53:54Z
A Survey on Data-Driven Fault Diagnostic Techniques for Marine Diesel Engines
Fault diagnosis in marine diesel engines is vital for maritime safety and operational efficiency. These engines are integral to marine vessels, and their reliable performance is crucial for safe navigation. Swift identification and resolution of faults are essential to prevent breakdowns, enhance safety, and reduce the risk of catastrophic failures at sea. Proactive fault diagnosis facilitates timely maintenance, minimizes downtime, and ensures the overall reliability and longevity of marine diesel engines. This paper explores the importance of fault diagnosis, emphasizing subsystems, common faults, and recent advancements in data-driven approaches for effective marine diesel engine maintenance.
[ "['Ayah Youssef' 'Hassan Noura' 'Abderrahim El Amrani' 'El Mostafa El Adel'\n 'Mustapha Ouladsine']" ]
null
null
2404.10365
null
null
http://arxiv.org/pdf/2404.10365v1
2024-04-16T07:55:34Z
2024-04-16T07:55:34Z
Learning Wireless Data Knowledge Graph for Green Intelligent Communications: Methodology and Experiments
Intelligent communications have played a pivotal role in shaping the evolution of 6G networks. Native artificial intelligence (AI) within green communication systems must meet stringent real-time requirements. To achieve this, deploying lightweight and resource-efficient AI models is necessary. However, as wireless networks generate a multitude of data fields and indicators during operation, only a fraction of them imposes significant impact on the network AI models. Therefore, real-time intelligence of communication systems heavily relies on a small but critical set of the data that profoundly influences the performance of network AI models. These challenges underscore the need for innovative architectures and solutions. In this paper, we propose a solution, termed the pervasive multi-level (PML) native AI architecture, which integrates the concept of knowledge graph (KG) into the intelligent operational manipulations of mobile networks, resulting in the establishment of a wireless data KG. Leveraging the wireless data KG, we characterize the massive and complex data collected from wireless communication networks and analyze the relationships among various data fields. The obtained graph of data field relations enables the on-demand generation of minimal and effective datasets, referred to as feature datasets, tailored to specific application requirements. Consequently, this architecture not only enhances AI training, inference, and validation processes but also significantly reduces resource wastage and overhead for communication networks. To implement this architecture, we have developed a specific solution comprising a spatio-temporal heterogeneous graph attention neural network model (STREAM) as well as a feature dataset generation algorithm. Experiments are conducted to validate the effectiveness of the proposed architecture.
[ "['Yongming Huang' 'Xiaohu You' 'Hang Zhan' 'Shiwen He' 'Ningning Fu'\n 'Wei Xu']" ]
null
null
2404.10370
null
null
http://arxiv.org/pdf/2404.10370v1
2024-04-16T08:08:47Z
2024-04-16T08:08:47Z
Know Yourself Better: Diverse Discriminative Feature Learning Improves Open Set Recognition
Open set recognition (OSR) is a critical aspect of machine learning, addressing the challenge of detecting novel classes during inference. Within the realm of deep learning, neural classifiers trained on a closed set of data typically struggle to identify novel classes, leading to erroneous predictions. To address this issue, various heuristic methods have been proposed, allowing models to express uncertainty by stating "I don't know." However, a gap in the literature remains, as there has been limited exploration of the underlying mechanisms of these methods. In this paper, we conduct an analysis of open set recognition methods, focusing on the aspect of feature diversity. Our research reveals a significant correlation between learning diverse discriminative features and enhancing OSR performance. Building on this insight, we propose a novel OSR approach that leverages the advantages of feature diversity. The efficacy of our method is substantiated through rigorous evaluation on a standard OSR testbench, demonstrating a substantial improvement over state-of-the-art methods.
[ "['Jiawen Xu']" ]
null
null
2404.10378
null
null
http://arxiv.org/pdf/2404.10378v1
2024-04-16T08:15:10Z
2024-04-16T08:15:10Z
Second Edition FRCSyn Challenge at CVPR 2024: Face Recognition Challenge in the Era of Synthetic Data
Synthetic data is gaining increasing relevance for training machine learning models. This is mainly motivated by several factors, such as the lack of real data and intra-class variability, time and errors produced in manual labeling, and in some cases privacy concerns, among others. This paper presents an overview of the 2nd edition of the Face Recognition Challenge in the Era of Synthetic Data (FRCSyn) organized at CVPR 2024. FRCSyn aims to investigate the use of synthetic data in face recognition to address current technological limitations, including data privacy concerns, demographic biases, generalization to novel scenarios, and performance constraints in challenging situations such as aging, pose variations, and occlusions. Unlike the 1st edition, in which synthetic data from the DCFace and GANDiffFace methods was only allowed to train face recognition systems, in this 2nd edition we propose new sub-tasks that allow participants to explore novel face generative methods. The outcomes of the 2nd FRCSyn Challenge, along with the proposed experimental protocol and benchmarking, contribute significantly to the application of synthetic data to face recognition.
[ "['Ivan DeAndres-Tame' 'Ruben Tolosana' 'Pietro Melzi'\n 'Ruben Vera-Rodriguez' 'Minchul Kim' 'Christian Rathgeb' 'Xiaoming Liu'\n 'Aythami Morales' 'Julian Fierrez' 'Javier Ortega-Garcia' 'Zhizhou Zhong'\n 'Yuge Huang' 'Yuxi Mi' 'Shouhong Ding' 'Shuigeng Zhou' 'Shuai He'\n 'Lingzhi Fu' 'Heng Cong' 'Rongyu Zhang' 'Zhihong Xiao' 'Evgeny Smirnov'\n 'Anton Pimenov' 'Aleksei Grigorev' 'Denis Timoshenko'\n 'Kaleb Mesfin Asfaw' 'Cheng Yaw Low' 'Hao Liu' 'Chuyi Wang' 'Qing Zuo'\n 'Zhixiang He' 'Hatef Otroshi Shahreza' 'Anjith George'\n 'Alexander Unnervik' 'Parsa Rahimi' 'Sébastien Marcel' 'Pedro C. Neto'\n 'Marco Huber' 'Jan Niklas Kolf' 'Naser Damer' 'Fadi Boutros'\n 'Jaime S. Cardoso' 'Ana F. Sequeira' 'Andrea Atzori' 'Gianni Fenu'\n 'Mirko Marras' 'Vitomir Štruc' 'Jiang Yu' 'Zhangjie Li' 'Jichun Li'\n 'Weisong Zhao' 'Zhen Lei' 'Xiangyu Zhu' 'Xiao-Yu Zhang'\n 'Bernardo Biesseck' 'Pedro Vidal' 'Luiz Coelho' 'Roger Granada'\n 'David Menotti']" ]
null
null
2404.10386
null
null
http://arxiv.org/pdf/2404.10386v1
2024-04-16T08:37:36Z
2024-04-16T08:37:36Z
I/O in Machine Learning Applications on HPC Systems: A 360-degree Survey
High-Performance Computing (HPC) systems excel in managing distributed workloads, and the growing interest in Artificial Intelligence (AI) has resulted in a surge in demand for faster methods of Machine Learning (ML) model training and inference. In the past, research on HPC I/O focused on optimizing the underlying storage system for modeling and simulation applications and checkpointing the results, causing writes to be the dominant I/O operation. These applications typically access large portions of the data written by simulations or experiments. ML workloads, in contrast, perform small I/O reads spread across a large number of random files. This shift of I/O access patterns poses several challenges to HPC storage systems. In this paper, we survey I/O in ML applications on HPC systems, and target literature within a 6-year time window from 2019 to 2024. We provide an overview of the common phases of ML, review available profilers and benchmarks, examine the I/O patterns encountered during ML training, explore I/O optimizations utilized in modern ML frameworks and proposed in recent literature, and lastly, present gaps requiring further R&D. We seek to summarize the common practices used in accessing data by ML applications and expose research gaps that could spawn further R&D.
[ "['Noah Lewis' 'Jean Luca Bez' 'Suren Byna']" ]
null
null
2404.10393
null
null
http://arxiv.org/pdf/2404.10393v1
2024-04-16T08:48:46Z
2024-04-16T08:48:46Z
Offline Trajectory Generalization for Offline Reinforcement Learning
Offline reinforcement learning (RL) aims to learn policies from static datasets of previously collected trajectories. Existing methods for offline RL either constrain the learned policy to the support of offline data or utilize model-based virtual environments to generate simulated rollouts. However, these methods suffer from (i) poor generalization to unseen states; and (ii) trivial improvement from low-quality rollout simulation. In this paper, we propose offline trajectory generalization through world transformers for offline reinforcement learning (OTTO). Specifically, we use causal Transformers, a.k.a. World Transformers, to predict state dynamics and the immediate reward. Then we propose four strategies to use World Transformers to generate high-rewarded trajectory simulation by perturbing the offline data. Finally, we jointly use offline data with simulated data to train an offline RL algorithm. OTTO serves as a plug-in module and can be integrated with existing offline RL methods to enhance them with better generalization capability of transformers and high-rewarded data augmentation. Conducting extensive experiments on D4RL benchmark datasets, we verify that OTTO significantly outperforms state-of-the-art offline RL methods.
[ "['Ziqi Zhao' 'Zhaochun Ren' 'Liu Yang' 'Fajie Yuan' 'Pengjie Ren'\n 'Zhumin Chen' 'jun Ma' 'Xin Xin']" ]
null
null
2404.10401
null
null
http://arxiv.org/abs/2404.10401v2
2024-05-17T11:38:56Z
2024-04-16T09:03:13Z
A Phone-based Distributed Ambient Temperature Measurement System with An Efficient Label-free Automated Training Strategy
Enhancing the energy efficiency of buildings significantly relies on monitoring indoor ambient temperature. The potential limitations of conventional temperature measurement techniques, together with the omnipresence of smartphones, have redirected researchers' attention towards the exploration of phone-based ambient temperature estimation methods. However, existing phone-based methods face challenges such as insufficient privacy protection, difficulty in adapting models to various phones, and hurdles in obtaining enough labeled training data. In this study, we propose a distributed phone-based ambient temperature estimation system which enables collaboration among multiple phones to accurately measure the ambient temperature in different areas of an indoor space. This system also provides an efficient, cost-effective approach with a few-shot meta-learning module and an automated label generation module. It shows that with just 5 new training data points, the temperature estimation model can adapt to a new phone and reach a good performance. Moreover, the system uses crowdsourcing to generate accurate labels for all newly collected training data, significantly reducing costs. Additionally, we highlight the potential of incorporating federated learning into our system to enhance privacy protection. We believe this study can advance the practical application of phone-based ambient temperature measurement, facilitating energy-saving efforts in buildings.
[ "['Dayin Chen' 'Xiaodan Shi' 'Haoran Zhang' 'Xuan Song' 'Dongxiao Zhang'\n 'Yuntian Chen' 'Jinyue Yan']" ]
null
null
2404.10405
null
null
http://arxiv.org/pdf/2404.10405v1
2024-04-16T09:12:16Z
2024-04-16T09:12:16Z
Integration of Self-Supervised BYOL in Semi-Supervised Medical Image Recognition
Image recognition techniques heavily rely on abundant labeled data, particularly in medical contexts. Addressing the challenges associated with obtaining labeled data has led to the prominence of self-supervised learning and semi-supervised learning, especially in scenarios with limited annotated data. In this paper, we propose an innovative approach that integrates self-supervised learning into semi-supervised models to enhance medical image recognition. Our methodology commences with pre-training on unlabeled data utilizing the BYOL method. Subsequently, we merge pseudo-labeled and labeled datasets to construct a neural network classifier, refining it through iterative fine-tuning. Experimental results on three different datasets demonstrate that our approach optimally leverages unlabeled data, outperforming existing methods in terms of accuracy for medical image recognition.
[ "['Hao Feng' 'Yuanzhe Jia' 'Ruijia Xu' 'Mukesh Prasad' 'Ali Anaissi'\n 'Ali Braytee']" ]
null
null
2404.10413
null
null
http://arxiv.org/pdf/2404.10413v1
2024-04-16T09:31:19Z
2024-04-16T09:31:19Z
VDTuner: Automated Performance Tuning for Vector Data Management Systems
Vector data management systems (VDMSs) have become an indispensable cornerstone in large-scale information retrieval and machine learning systems like large language models. To enhance the efficiency and flexibility of similarity search, VDMS exposes many tunable index parameters and system parameters for users to specify. However, due to the inherent characteristics of VDMS, automatic performance tuning for VDMS faces several critical challenges, which cannot be well addressed by the existing auto-tuning methods. In this paper, we introduce VDTuner, a learning-based automatic performance tuning framework for VDMS, leveraging multi-objective Bayesian optimization. VDTuner overcomes the challenges associated with VDMS by efficiently exploring a complex multi-dimensional parameter space without requiring any prior knowledge. Moreover, it is able to achieve a good balance between search speed and recall rate, delivering an optimal configuration. Extensive evaluations demonstrate that VDTuner can markedly improve VDMS performance (14.12% in search speed and 186.38% in recall rate) compared with the default setting, and is more efficient compared with state-of-the-art baselines (up to 3.57 times faster in terms of tuning time). In addition, VDTuner is scalable to specific user preferences and cost-aware optimization objectives. VDTuner is available online at https://github.com/tiannuo-yang/VDTuner.
[ "['Tiannuo Yang' 'Wen Hu' 'Wangqi Peng' 'Yusen Li' 'Jianguo Li' 'Gang Wang'\n 'Xiaoguang Liu']" ]
null
null
2404.10420
null
null
http://arxiv.org/pdf/2404.10420v2
2024-05-29T14:09:17Z
2024-04-16T09:37:41Z
AudioProtoPNet: An interpretable deep learning model for bird sound classification
Recently, scientists have proposed several deep learning models to monitor the diversity of bird species. These models can detect bird species with high accuracy by analyzing acoustic signals. However, traditional deep learning algorithms are black-box models that provide no insight into their decision-making process. For domain experts, such as ornithologists, it is crucial that these models are not only efficient, but also interpretable in order to be used as assistive tools. In this study, we present an adaptation of the Prototypical Part Network (ProtoPNet) for audio classification that provides inherent interpretability through its model architecture. Our approach is based on a ConvNeXt backbone architecture for feature extraction and learns prototypical patterns for each bird species using spectrograms of the training data. Classification of new data is done by comparison with these prototypes in latent space, which simultaneously serve as easily understandable explanations for the model's decisions. We evaluated the performance of our model on seven different datasets representing bird species from different geographical regions. In our experiments, the model showed excellent results, achieving an average AUROC of 0.82 and an average cmAP of 0.37 across the seven datasets, making it comparable to state-of-the-art black-box models for bird sound classification. Thus, this work demonstrates that even for the challenging task of bioacoustic bird classification, powerful yet interpretable deep learning models can be developed to provide valuable insights to domain experts.
[ "['René Heinrich' 'Bernhard Sick' 'Christoph Scholz']" ]
null
null
2404.10433
null
null
http://arxiv.org/pdf/2404.10433v1
2024-04-16T09:56:08Z
2024-04-16T09:56:08Z
Explainable concept mappings of MRI: Revealing the mechanisms underlying deep learning-based brain disease classification
Motivation. While recent studies show high accuracy in the classification of Alzheimer's disease using deep neural networks, the underlying learned concepts have not been investigated. Goals. To systematically identify changes in brain regions through concepts learned by the deep neural network for model validation. Approach. Using quantitative R2* maps, we separated Alzheimer's patients (n=117) from normal controls (n=219) with a convolutional neural network and systematically investigated the learned concepts using Concept Relevance Propagation, comparing these results to a conventional region of interest-based analysis. Results. In line with established histological findings and the region of interest-based analyses, highly relevant concepts were primarily found in and adjacent to the basal ganglia. Impact. The identification of concepts learned by deep neural networks for disease classification enables validation of the models and could potentially improve reliability.
[ "['Christian Tinauer' 'Anna Damulina' 'Maximilian Sackl' 'Martin Soellradl'\n 'Reduan Achtibat' 'Maximilian Dreyer' 'Frederik Pahde'\n 'Sebastian Lapuschkin' 'Reinhold Schmidt' 'Stefan Ropele'\n 'Wojciech Samek' 'Christian Langkammer']" ]
null
null
2404.10436
null
null
http://arxiv.org/pdf/2404.10436v1
2024-04-16T10:02:36Z
2024-04-16T10:02:36Z
Tree Bandits for Generative Bayes
In generative models with obscured likelihood, Approximate Bayesian Computation (ABC) is often the tool of last resort for inference. However, ABC demands many prior parameter trials to keep only a small fraction that passes an acceptance test. To accelerate ABC rejection sampling, this paper develops a self-aware framework that learns from past trials and errors. We apply recursive partitioning classifiers on the ABC lookup table to sequentially refine high-likelihood regions into boxes. Each box is regarded as an arm in a binary bandit problem treating ABC acceptance as a reward. Each arm has a proclivity for being chosen for the next ABC evaluation, depending on the prior distribution and past rejections. The method places more splits in those areas where the likelihood resides, shying away from low-probability regions destined for ABC rejections. We provide two versions: (1) ABC-Tree for posterior sampling, and (2) ABC-MAP for maximum a posteriori estimation. We demonstrate accurate ABC approximability at much lower simulation cost. We justify the use of our tree-based bandit algorithms with nearly optimal regret bounds. Finally, we successfully apply our approach to the problem of masked image classification using deep generative models.
[ "[\"Sean O'Hagan\" 'Jungeum Kim' 'Veronika Rockova']" ]
null
null
2404.10443
null
null
http://arxiv.org/pdf/2404.10443v1
2024-04-16T10:30:48Z
2024-04-16T10:30:48Z
AGHINT: Attribute-Guided Representation Learning on Heterogeneous Information Networks with Transformer
Recently, heterogeneous graph neural networks (HGNNs) have achieved impressive success in representation learning by capturing long-range dependencies and heterogeneity at the node level. However, few existing studies have delved into the utilization of node attributes in heterogeneous information networks (HINs). In this paper, we investigate the impact of inter-node attribute disparities on HGNN performance on the benchmark task, i.e., node classification, and empirically find that typical models exhibit significant performance decline when classifying nodes whose attributes markedly differ from their neighbors. To alleviate this issue, we propose a novel Attribute-Guided heterogeneous Information Networks representation learning model with Transformer (AGHINT), which allows a more effective aggregation of neighbor node information under the guidance of attributes. Specifically, AGHINT transcends the constraints of the original graph structure by directly integrating higher-order similar neighbor features into the learning process and modifies the message-passing mechanism between nodes based on their attribute disparities. Extensive experimental results on three real-world heterogeneous graph benchmarks with target node attributes demonstrate that AGHINT outperforms the state-of-the-art.
[ "['Jinhui Yuan' 'Shan Lu' 'Peibo Duan' 'Jieyue He']" ]
null
null
2404.10444
null
null
http://arxiv.org/pdf/2404.10444v1
2024-04-16T10:30:52Z
2024-04-16T10:30:52Z
Semi-supervised Fréchet Regression
This paper explores the field of semi-supervised Fréchet regression, driven by the significant costs associated with obtaining non-Euclidean labels. Methodologically, we propose two novel methods: semi-supervised NW Fréchet regression and semi-supervised kNN Fréchet regression, both based on graph distance acquired from all feature instances. These methods extend the scope of existing semi-supervised Euclidean regression methods. We establish their convergence rates with limited labeled data and large amounts of unlabeled data, taking into account the low-dimensional manifold structure of the feature space. Through comprehensive simulations across diverse settings and applications to real data, we demonstrate the superior performance of our methods over their supervised counterparts. This study addresses existing research gaps and paves the way for further exploration and advancements in the field of semi-supervised Fréchet regression.
[ "['Rui Qiu' 'Zhou Yu' 'Zhenhua Lin']" ]
null
null
2404.10445
null
null
http://arxiv.org/pdf/2404.10445v2
2024-05-31T02:56:14Z
2024-04-16T10:31:06Z
SparseDM: Toward Sparse Efficient Diffusion Models
Diffusion models have been extensively used in data generation tasks and are recognized as one of the best generative models. However, their time-consuming deployment, long inference time, and large memory requirements limit their application on mobile devices. In this paper, we propose a method based on the improved Straight-Through Estimator to improve the deployment efficiency of diffusion models. Specifically, we add sparse masks to the Convolution and Linear layers in a pre-trained diffusion model, then use progressive sparsity for model training in the fine-tuning stage, and switch the inference mask on and off, which supports a flexible choice of sparsity during inference according to the FID and MACs requirements. Experiments on four datasets conducted on a state-of-the-art Transformer-based diffusion model demonstrate that our method reduces MACs by 50% while increasing FID by only 1.5 on average. Under other MACs conditions, the FID is also 1 to 137 lower than that of other methods.
[ "['Kafeng Wang' 'Jianfei Chen' 'He Li' 'Zhenpeng Mi' 'Jun Zhu']" ]
null
null
2404.10450
null
null
http://arxiv.org/pdf/2404.10450v1
2024-04-16T10:39:25Z
2024-04-16T10:39:25Z
Graph Neural Networks for Protein-Protein Interactions -- A Short Survey
Protein-protein interactions (PPIs) play key roles in a broad range of biological processes. Numerous strategies have been proposed for predicting PPIs, and among them, graph-based methods have demonstrated promising outcomes owing to the inherent graph structure of PPI networks. This paper reviews various graph-based methodologies, and discusses their applications in PPI prediction. We classify these approaches into two primary groups based on their model structures. The first category employs Graph Neural Networks (GNN) or Graph Convolutional Networks (GCN), while the second category utilizes Graph Attention Networks (GAT), Graph Auto-Encoders and Graph-BERT. We highlight the distinctive methodologies of each approach in managing the graph-structured data inherent in PPI networks and anticipate future research directions in this domain.
[ "['Mingda Xu' 'Peisheng Qian' 'Ziyuan Zhao' 'Zeng Zeng' 'Jianguo Chen'\n 'Weide Liu' 'Xulei Yang']" ]
null
null
2404.10457
null
null
http://arxiv.org/pdf/2404.10457v1
2024-04-16T10:54:48Z
2024-04-16T10:54:48Z
Revealing data leakage in protein interaction benchmarks
In recent years, there has been remarkable progress in machine learning for protein-protein interactions. However, prior work has predominantly focused on improving learning algorithms, with less attention paid to evaluation strategies and data preparation. Here, we demonstrate that further development of machine learning methods may be hindered by the quality of existing train-test splits. Specifically, we find that commonly used splitting strategies for protein complexes, based on protein sequence or metadata similarity, introduce major data leakage. This may result in overoptimistic evaluation of generalization, as well as unfair benchmarking of the models, biased towards assessing their overfitting capacity rather than practical utility. To overcome the data leakage, we recommend constructing data splits based on 3D structural similarity of protein-protein interfaces and suggest corresponding algorithms. We believe that addressing the data leakage problem is critical for further progress in this research area.
[ "['Anton Bushuiev' 'Roman Bushuiev' 'Jiri Sedlar' 'Tomas Pluskal'\n 'Jiri Damborsky' 'Stanislav Mazurenko' 'Josef Sivic']" ]
null
null
2404.10458
null
null
http://arxiv.org/pdf/2404.10458v1
2024-04-16T10:56:33Z
2024-04-16T10:56:33Z
Advancing Long-Term Multi-Energy Load Forecasting with Patchformer: A Patch and Transformer-Based Approach
In the context of increasing demands for long-term multi-energy load forecasting in real-world applications, this paper introduces Patchformer, a novel model that integrates patch embedding with encoder-decoder Transformer-based architectures. To address the limitation of existing Transformer-based models, which struggle with intricate temporal patterns in long-term forecasting, Patchformer employs patch embedding, which predicts multivariate time-series data by separating it into multiple univariate series and segmenting each of them into multiple patches. This method effectively enhances the model's ability to capture local and global semantic dependencies. The numerical analysis shows that Patchformer achieves overall better prediction accuracy in both multivariate and univariate long-term forecasting on the novel Multi-Energy dataset and other benchmark datasets. In addition, we find that the interdependence among energy-related products has a positive effect on long-term time-series forecasting performance for Patchformer and the compared models, and we demonstrate Patchformer's superiority over these models, a significant advancement in handling the interdependence and complexities of long-term multi-energy forecasting. Lastly, Patchformer is the only model whose performance improves with the length of the past sequence, indicating its ability to capture long-range past local semantic information.
[ "['Qiuyi Hong' 'Fanlin Meng' 'Felipe Maldonado']" ]
null
null
2404.10472
null
null
http://arxiv.org/pdf/2404.10472v1
2024-04-16T11:25:00Z
2024-04-16T11:25:00Z
Machine Learning Based Optimization Workflow for Tuning Numerical Settings of Differential Equation Solvers for Boundary Value Problems
Several numerical differential equation solvers have been employed effectively over the years as an alternative to analytical solvers to quickly and conveniently solve differential equations. One category of these is boundary value solvers, which are used to solve real-world problems formulated as differential equations with boundary conditions. These solvers require certain numerical settings to solve the differential equations that affect their solvability and performance. A systematic fine-tuning of these settings is required to obtain the desired solution and performance. Currently, these settings are either selected by trial and error or require domain expertise. In this paper, we propose a machine learning-based optimization workflow for fine-tuning the numerical settings to reduce the time and domain expertise required in the process. In the evaluation section, we discuss the scalability, stability, and reliability of the proposed workflow. We demonstrate our workflow on a numerical boundary value problem solver.
[ "['Viny Saajan Victor' 'Manuel Ettmüller' 'Andre Schmeißer' 'Heike Leitte'\n 'Simone Gramsch']" ]
null
null
2404.10474
null
null
http://arxiv.org/abs/2404.10474v1
2024-04-16T11:29:43Z
2024-04-16T11:29:43Z
Toward a Realistic Benchmark for Out-of-Distribution Detection
Deep neural networks are increasingly used in a wide range of technologies and services, but remain highly susceptible to out-of-distribution (OOD) samples, that is, drawn from a different distribution than the original training set. A common approach to address this issue is to endow deep neural networks with the ability to detect OOD samples. Several benchmarks have been proposed to design and validate OOD detection techniques. However, many of them are based on far-OOD samples drawn from very different distributions, and thus lack the complexity needed to capture the nuances of real-world scenarios. In this work, we introduce a comprehensive benchmark for OOD detection, based on ImageNet and Places365, that assigns individual classes as in-distribution or out-of-distribution depending on the semantic similarity with the training set. Several techniques can be used to determine which classes should be considered in-distribution, yielding benchmarks with varying properties. Experimental results on different OOD detection techniques show how their measured efficacy depends on the selected benchmark and how confidence-based techniques may outperform classifier-based ones on near-OOD samples.
[ "['Pietro Recalcati' 'Fabio Garcea' 'Luca Piano' 'Fabrizio Lamberti'\n 'Lia Morra']" ]
null
null
2404.10481
null
null
http://arxiv.org/pdf/2404.10481v1
2024-04-16T11:42:06Z
2024-04-16T11:42:06Z
BayesJudge: Bayesian Kernel Language Modelling with Confidence Uncertainty in Legal Judgment Prediction
Predicting legal judgments with reliable confidence is paramount for responsible legal AI applications. While transformer-based deep neural networks (DNNs) like BERT have demonstrated promise in legal tasks, accurately assessing their prediction confidence remains crucial. We present a novel Bayesian approach called BayesJudge that harnesses the synergy between deep learning and deep Gaussian Processes to quantify uncertainty through Bayesian kernel Monte Carlo dropout. Our method leverages informative priors and flexible data modelling via kernels, surpassing existing methods in both predictive accuracy and confidence estimation as indicated by the Brier score. Extensive evaluations on public legal datasets showcase our model's superior performance across diverse tasks. We also introduce an optimal solution to automate the scrutiny of unreliable predictions, resulting in a significant increase in the accuracy of the model's predictions by up to 27%. By empowering judges and legal professionals with more reliable information, our work paves the way for trustworthy and transparent legal AI applications that facilitate informed decisions grounded in both knowledge and quantified uncertainty.
[ "['Ubaid Azam' 'Imran Razzak' 'Shelly Vishwakarma' 'Hakim Hacid'\n 'Dell Zhang' 'Shoaib Jameel']" ]
null
null
2404.10483
null
null
http://arxiv.org/pdf/2404.10483v1
2024-04-16T11:43:26Z
2024-04-16T11:43:26Z
Would You Trust an AI Doctor? Building Reliable Medical Predictions with Kernel Dropout Uncertainty
The growing capabilities of AI raise questions about their trustworthiness in healthcare, particularly due to opaque decision-making and limited data availability. This paper proposes a novel approach to address these challenges, introducing a Bayesian Monte Carlo Dropout model with kernel modelling. Our model is designed to enhance reliability on small medical datasets, a crucial barrier to the wider adoption of AI in healthcare. This model leverages existing language models for improved effectiveness and seamlessly integrates with current workflows. We demonstrate significant improvements in reliability, even with limited data, offering a promising step towards building trust in AI-driven medical predictions and unlocking its potential to improve patient care.
[ "['Ubaid Azam' 'Imran Razzak' 'Shelly Vishwakarma' 'Hakim Hacid'\n 'Dell Zhang' 'Shoaib Jameel']" ]
null
null
2404.10494
null
null
http://arxiv.org/pdf/2404.10494v1
2024-04-16T12:03:38Z
2024-04-16T12:03:38Z
BDAN: Mitigating Temporal Difference Across Electrodes in Cross-Subject Motor Imagery Classification via Generative Bridging Domain
Because of "the non-repeatability of the experiment settings and conditions" and "the variability of brain patterns among subjects", the data distributions across sessions and electrodes are different in cross-subject motor imagery (MI) studies, eventually reducing the performance of the classification model. Systematically summarised based on the existing studies, a novel temporal-electrode data distribution problem is investigated under both intra-subject and inter-subject scenarios in this paper. Based on the presented issue, a novel bridging domain adaptation network (BDAN) is proposed, aiming to minimise the data distribution difference across sessions in the aspect of the electrode, thus improving and enhancing model performance. In the proposed BDAN, deep features of all the EEG data are extracted via a specially designed spatial feature extractor. With the obtained spatio-temporal features, a special generative bridging domain is established, bridging the data from all the subjects across sessions. The difference across sessions and electrodes is then minimized using the customized bridging loss functions, and the known knowledge is automatically transferred through the constructed bridging domain. To show the effectiveness of the proposed BDAN, comparison experiments and ablation studies are conducted on a public EEG dataset. The overall comparison results demonstrate the superior performance of the proposed BDAN compared with the other advanced deep learning and domain adaptation methods.
[ "['Zhige Chen' 'Rui Yang' 'Mengjie Huang' 'Chengxuan Qin' 'Zidong Wang']" ]
null
null
2404.10501
null
null
http://arxiv.org/pdf/2404.10501v1
2024-04-16T12:19:54Z
2024-04-16T12:19:54Z
Self-Supervised Visual Preference Alignment
This paper makes the first attempt towards unsupervised preference alignment in Vision-Language Models (VLMs). We generate chosen and rejected responses with regard to the original and augmented image pairs, and conduct preference alignment with direct preference optimization. It is based on a core idea: properly designed augmentation to the image input will induce the VLM to generate false but hard negative responses, which helps the model to learn from and produce more robust and powerful answers. The whole pipeline no longer hinges on supervision from GPT4 or human involvement during alignment, and is highly efficient with few lines of code. With only 8k randomly sampled unsupervised data, it achieves 90% relative score to GPT-4 on complex reasoning in LLaVA-Bench, and improves LLaVA-7B/13B by 6.7%/5.6% score on the complex multi-modal benchmark MM-Vet. Visualizations show its improved ability to align with user intentions. A series of ablations is conducted to reveal the latent mechanism of the approach, which also indicates its potential towards further scaling. Code will be available.
[ "['Ke Zhu' 'Liang Zhao' 'Zheng Ge' 'Xiangyu Zhang']" ]
null
null
2404.10512
null
null
http://arxiv.org/pdf/2404.10512v1
2024-04-16T12:33:44Z
2024-04-16T12:33:44Z
Four-hour thunderstorm nowcasting using deep diffusion models of satellite
Convection (thunderstorm) develops rapidly within hours and is highly destructive, posing a significant challenge for nowcasting and resulting in substantial losses to nature and society. After the emergence of artificial intelligence (AI)-based methods, convection nowcasting has experienced rapid advancements, with its performance surpassing that of physics-based numerical weather prediction and other conventional approaches. However, its lead time and coverage still leave much to be desired and hardly meet the needs of disaster emergency response. Here, we propose a deep diffusion model of satellite (DDMS) to establish an AI-based convection nowcasting system. On one hand, it employs diffusion processes to effectively simulate complicated spatiotemporal evolution patterns of convective clouds, significantly improving the forecast lead time. On the other hand, it utilizes geostationary satellite brightness temperature data, thereby achieving planetary-scale forecast coverage. During long-term tests and objective validation based on the FengYun-4A satellite, our system achieves, for the first time, effective convection nowcasting up to 4 hours, with broad coverage (about 20,000,000 km²), remarkable accuracy, and high resolution (15 minutes; 4 km). Its performance reaches a new height in convection nowcasting compared to the existing models. In terms of application, our system operates efficiently (forecasting 4 hours of convection in 8 minutes), and is highly transferable with the potential to collaborate with multiple satellites for global convection nowcasting. Furthermore, our results highlight the remarkable capabilities of diffusion models in convective clouds forecasting, as well as the significant value of geostationary satellite data when empowered by AI technologies.
[ "['Kuai Dai' 'Xutao Li' 'Junying Fang' 'Yunming Ye' 'Demin Yu' 'Di Xian'\n 'Danyu Qin']" ]
null
null
2404.10513
null
null
http://arxiv.org/pdf/2404.10513v1
2024-04-16T12:37:10Z
2024-04-16T12:37:10Z
CoTAR: Chain-of-Thought Attribution Reasoning with Multi-level Granularity
State-of-the-art performance in QA tasks is currently achieved by systems employing Large Language Models (LLMs); however, these models tend to hallucinate information in their responses. One approach focuses on enhancing the generation process by incorporating attribution from the given input to the output. However, the challenge of identifying appropriate attributions and verifying their accuracy against a source is a complex task that requires significant improvements in assessing such systems. We introduce an attribution-oriented Chain-of-Thought reasoning method to enhance the accuracy of attributions. This approach focuses the reasoning process on generating an attribution-centric output. Evaluations on two context-enhanced question-answering datasets using GPT-4 demonstrate improved accuracy and correctness of attributions. In addition, the combination of our method with finetuning enhances the response and attribution accuracy of two smaller LLMs, showing their potential to outperform GPT-4 in some cases.
[ "['Moshe Berchansky' 'Daniel Fleischer' 'Moshe Wasserblat' 'Peter Izsak']" ]
null
null
2404.10540
null
null
http://arxiv.org/pdf/2404.10540v2
2024-04-19T20:15:45Z
2024-04-12T20:40:12Z
SEVD: Synthetic Event-based Vision Dataset for Ego and Fixed Traffic Perception
Recently, event-based vision sensors have gained attention for autonomous driving applications, as conventional RGB cameras face limitations in handling challenging dynamic conditions. However, the availability of real-world and synthetic event-based vision datasets remains limited. In response to this gap, we present SEVD, a first-of-its-kind multi-view ego and fixed perception synthetic event-based dataset created using multiple dynamic vision sensors within the CARLA simulator. Data sequences are recorded across diverse lighting (noon, nighttime, twilight) and weather conditions (clear, cloudy, wet, rainy, foggy) with domain shifts (discrete and continuous). SEVD spans urban, suburban, rural, and highway scenes featuring various classes of objects (car, truck, van, bicycle, motorcycle, and pedestrian). Alongside event data, SEVD includes RGB imagery, depth maps, optical flow, semantic, and instance segmentation, facilitating a comprehensive understanding of the scene. Furthermore, we evaluate the dataset using state-of-the-art event-based (RED, RVT) and frame-based (YOLOv8) methods for traffic participant detection tasks and provide baseline benchmarks for assessment. Additionally, we conduct experiments to assess the synthetic event-based dataset's generalization capabilities. The dataset is available at https://eventbasedvision.github.io/SEVD
[ "['Manideep Reddy Aliminati' 'Bharatesh Chakravarthi' 'Aayush Atul Verma'\n 'Arpitsinh Vaghela' 'Hua Wei' 'Xuesong Zhou' 'Yezhou Yang']" ]
null
null
2404.10546
null
null
http://arxiv.org/pdf/2404.10546v1
2024-04-16T13:16:19Z
2024-04-16T13:16:19Z
Warm-Start Variational Quantum Policy Iteration
Reinforcement learning is a powerful framework aiming to determine optimal behavior in highly complex decision-making scenarios. This objective can be achieved using policy iteration, which requires to solve a typically large linear system of equations. We propose the variational quantum policy iteration (VarQPI) algorithm, realizing this step with a NISQ-compatible quantum-enhanced subroutine. Its scalability is supported by an analysis of the structure of generic reinforcement learning environments, laying the foundation for potential quantum advantage with utility-scale quantum computers. Furthermore, we introduce the warm-start initialization variant (WS-VarQPI) that significantly reduces resource overhead. The algorithm solves a large FrozenLake environment with an underlying 256x256-dimensional linear system, indicating its practical robustness.
[ "['Nico Meyer' 'Jakob Murauer' 'Alexander Popov' 'Christian Ufrecht'\n 'Axel Plinge' 'Christopher Mutschler' 'Daniel D. Scherer']" ]
null
null
2404.10547
null
null
http://arxiv.org/pdf/2404.10547v1
2024-04-16T13:16:41Z
2024-04-16T13:16:41Z
A/B testing under Interference with Partial Network Information
A/B tests often need to be conducted on subjects that have social connections. Examples include experiments on social media, and medical or social interventions to control the spread of an epidemic. In such settings, the SUTVA assumption for randomized controlled trials is violated due to network interference, or spill-over effects, as treatments to group A can also affect the control group B. When the underlying social network is known exactly, prior works have demonstrated how to conduct A/B tests adequately to estimate the global average treatment effect (GATE). However, in practice, it is often impossible to obtain knowledge about the exact underlying network. In this paper, we present UNITE: a novel estimator that relaxes this assumption and can identify GATE while relying only on knowledge of a superset of neighbors for any subject in the graph. Through theoretical analysis and extensive experiments, we show that the proposed approach performs better in comparison to standard estimators.
[ "['Shiv Shankar' 'Ritwik Sinha' 'Yash Chandak' 'Saayan Mitra'\n 'Madalina Fiterau']" ]