| categories (string) | doi (string) | id (string) | year (float64) | venue (string) | link (string) | updated (string) | published (string) | title (string) | abstract (string) | authors (list) |
|---|---|---|---|---|---|---|---|---|---|---|
| null | null | 2402.14389 | null | null | http://arxiv.org/pdf/2402.14389v1 | 2024-02-22T09:01:42Z | 2024-02-22T09:01:42Z | Securing Transactions: A Hybrid Dependable Ensemble Machine Learning Model using IHT-LR and Grid Search | Financial institutions and businesses face an ongoing challenge from fraudulent transactions, prompting the need for effective detection methods. Detecting credit card fraud is crucial for identifying and preventing unauthorized transactions. Timely detection of fraud enables investigators to take swift actions to mitigate further losses. However, the investigation process is often time-consuming, limiting the number of alerts that can be thoroughly examined each day. Therefore, the primary objective of a fraud detection model is to provide accurate alerts while minimizing false alarms and missed fraud cases. In this paper, we introduce a state-of-the-art hybrid ensemble (ENS) dependable machine learning (ML) model that intelligently combines multiple algorithms with proper weighted optimization using Grid search, including Decision Tree (DT), Random Forest (RF), K-Nearest Neighbor (KNN), and Multilayer Perceptron (MLP), to enhance fraud identification. To address the data imbalance issue, we employ the Instance Hardness Threshold (IHT) technique in conjunction with Logistic Regression (LR), surpassing conventional approaches. Our experiments are conducted on a publicly available credit card dataset comprising 284,807 transactions. The proposed model achieves impressive accuracy rates of 99.66%, 99.73%, 98.56%, and 99.79% for the DT, RF, KNN, and MLP models, respectively, and a perfect 100% for the ENS model. The hybrid ensemble model outperforms existing works, establishing a new benchmark for detecting fraudulent transactions in high-frequency scenarios. The results highlight the effectiveness and reliability of our approach, demonstrating superior performance metrics and showcasing its exceptional potential for real-world fraud detection applications. | ['Md. Alamin Talukder', 'Rakib Hossen', 'Md Ashraf Uddin', 'Mohammed Nasir Uddin', 'Uzzal Kumar Acharjee'] |
| null | null | 2402.14391 | null | null | http://arxiv.org/pdf/2402.14391v1 | 2024-02-22T09:04:41Z | 2024-02-22T09:04:41Z | MAPE-PPI: Towards Effective and Efficient Protein-Protein Interaction Prediction via Microenvironment-Aware Protein Embedding | Protein-Protein Interactions (PPIs) are fundamental in various biological processes and play a key role in life activities. The growing demand and cost of experimental PPI assays require computational methods for efficient PPI prediction. While existing methods rely heavily on protein sequence for PPI prediction, it is the protein structure that is the key to determining the interactions. To take both protein modalities into account, we define the microenvironment of an amino acid residue by its sequence and structural contexts, which describe the surrounding chemical properties and geometric features. In addition, microenvironments defined in previous work are largely based on experimentally assayed physicochemical properties, for which the "vocabulary" is usually extremely small. This makes it difficult to cover the diversity and complexity of microenvironments. In this paper, we propose Microenvironment-Aware Protein Embedding for PPI prediction (MAPE-PPI), which encodes microenvironments into chemically meaningful discrete codes via a sufficiently large microenvironment "vocabulary" (i.e., codebook). Moreover, we propose a novel pre-training strategy, namely Masked Codebook Modeling (MCM), to capture the dependencies between different microenvironments by randomly masking the codebook and reconstructing the input. With the learned microenvironment codebook, we can reuse it as an off-the-shelf tool to efficiently and effectively encode proteins of different sizes and functions for large-scale PPI prediction. Extensive experiments show that MAPE-PPI can scale to PPI prediction with millions of PPIs with superior trade-offs between effectiveness and computational efficiency compared to state-of-the-art competitors. | ['Lirong Wu', 'Yijun Tian', 'Yufei Huang', 'Siyuan Li', 'Haitao Lin', 'Nitesh V Chawla', 'Stan Z. Li'] |
| null | null | 2402.14393 | null | null | http://arxiv.org/pdf/2402.14393v1 | 2024-02-22T09:08:36Z | 2024-02-22T09:08:36Z | Graph Parsing Networks | Graph pooling compresses graph information into a compact representation. State-of-the-art graph pooling methods follow a hierarchical approach, which reduces the graph size step-by-step. These methods must balance memory efficiency with preserving node information, depending on whether they use node dropping or node clustering. Additionally, fixed pooling ratios or numbers of pooling layers are predefined for all graphs, which prevents personalized pooling structures from being captured for each individual graph. In this work, inspired by bottom-up grammar induction, we propose an efficient graph parsing algorithm to infer the pooling structure, which then drives graph pooling. The resulting Graph Parsing Network (GPN) adaptively learns personalized pooling structure for each individual graph. GPN benefits from the discrete assignments generated by the graph parsing algorithm, allowing good memory efficiency while preserving node information intact. Experimental results on standard benchmarks demonstrate that GPN outperforms state-of-the-art graph pooling methods in graph classification tasks while being able to achieve competitive performance in node classification tasks. We also conduct a graph reconstruction task to show GPN's ability to preserve node information and measure both memory and time efficiency through relevant tests. | ['Yunchong Song', 'Siyuan Huang', 'Xinbing Wang', 'Chenghu Zhou', 'Zhouhan Lin'] |
| null | null | 2402.14396 | null | null | http://arxiv.org/pdf/2402.14396v2 | 2024-03-05T13:39:58Z | 2024-02-22T09:20:54Z | Quantum Circuit Optimization with AlphaTensor | A key challenge in realizing fault-tolerant quantum computers is circuit optimization. Focusing on the most expensive gates in fault-tolerant quantum computation (namely, the T gates), we address the problem of T-count optimization, i.e., minimizing the number of T gates that are needed to implement a given circuit. To achieve this, we develop AlphaTensor-Quantum, a method based on deep reinforcement learning that exploits the relationship between optimizing T-count and tensor decomposition. Unlike existing methods for T-count optimization, AlphaTensor-Quantum can incorporate domain-specific knowledge about quantum computation and leverage gadgets, which significantly reduces the T-count of the optimized circuits. AlphaTensor-Quantum outperforms the existing methods for T-count optimization on a set of arithmetic benchmarks (even when compared without making use of gadgets). Remarkably, it discovers an efficient algorithm akin to Karatsuba's method for multiplication in finite fields. AlphaTensor-Quantum also finds the best human-designed solutions for relevant arithmetic computations used in Shor's algorithm and for quantum chemistry simulation, thus demonstrating it can save hundreds of hours of research by optimizing relevant quantum circuits in a fully automated way. | ['Francisco J. R. Ruiz', 'Tuomas Laakkonen', 'Johannes Bausch', 'Matej Balog', 'Mohammadamin Barekatain', 'Francisco J. H. Heras', 'Alexander Novikov', 'Nathan Fitzpatrick', 'Bernardino Romera-Paredes', 'John van de Wetering', 'Alhussein Fawzi', 'Konstantinos Meichanetzidis', 'Pushmeet Kohli'] |
| null | null | 2402.14397 | null | null | http://arxiv.org/pdf/2402.14397v1 | 2024-02-22T09:26:16Z | 2024-02-22T09:26:16Z | Closed-Form Bounds for DP-SGD against Record-level Inference | Machine learning models trained with differentially-private (DP) algorithms such as DP-SGD enjoy resilience against a wide range of privacy attacks. Although it is possible to derive bounds for some attacks based solely on an $(\varepsilon,\delta)$-DP guarantee, meaningful bounds require a small enough privacy budget (i.e., injecting a large amount of noise), which results in a large loss in utility. This paper presents a new approach to evaluate the privacy of machine learning models against specific record-level threats, such as membership and attribute inference, without the indirection through DP. We focus on the popular DP-SGD algorithm, and derive simple closed-form bounds. Our proofs model DP-SGD as an information theoretic channel whose inputs are the secrets that an attacker wants to infer (e.g., membership of a data record) and whose outputs are the intermediate model parameters produced by iterative optimization. We obtain bounds for membership inference that match state-of-the-art techniques, whilst being orders of magnitude faster to compute. Additionally, we present a novel data-dependent bound against attribute inference. Our results provide a direct, interpretable, and practical way to evaluate the privacy of trained models against specific inference threats without sacrificing utility. | ['Giovanni Cherubin', 'Boris Köpf', 'Andrew Paverd', 'Shruti Tople', 'Lukas Wutschitz', 'Santiago Zanella-Béguelin'] |
| null | null | 2402.14400 | null | null | http://arxiv.org/pdf/2402.14400v2 | 2024-06-20T06:34:06Z | 2024-02-22T09:34:48Z | Modeling 3D Infant Kinetics Using Adaptive Graph Convolutional Networks | Reliable methods for the neurodevelopmental assessment of infants are essential for early detection of medical issues that may need prompt interventions. Spontaneous motor activity, or 'kinetics', is shown to provide a powerful surrogate measure of upcoming neurodevelopment. However, its assessment is by and large qualitative and subjective, focusing on visually identified, age-specific gestures. Here, we follow an alternative approach, predicting infants' neurodevelopmental maturation based on data-driven evaluation of individual motor patterns. We utilize 3D video recordings of infants processed with pose-estimation to extract spatio-temporal series of anatomical landmarks, and apply adaptive graph convolutional networks to predict the actual age. We show that our data-driven approach achieves improvement over traditional machine learning baselines based on manually engineered features. | ['Daniel Holmberg', 'Manu Airaksinen', 'Viviana Marchi', 'Andrea Guzzetta', 'Anna Kivi', 'Leena Haataja', 'Sampsa Vanhatalo', 'Teemu Roos'] |
| null | null | 2402.14401 | null | null | http://arxiv.org/pdf/2402.14401v1 | 2024-02-22T09:39:46Z | 2024-02-22T09:39:46Z | Diffusion Model Based Visual Compensation Guidance and Visual Difference Analysis for No-Reference Image Quality Assessment | Existing free-energy guided No-Reference Image Quality Assessment (NR-IQA) methods still struggle to balance learning pixel-level feature information with capturing high-level feature information, and the efficient utilization of the obtained high-level features remains a challenge. As a novel class of state-of-the-art (SOTA) generative models, the diffusion model exhibits the capability to model intricate relationships, enabling a comprehensive understanding of images and a better learning of both high-level and low-level visual features. In view of this, we pioneer the exploration of the diffusion model in the domain of NR-IQA. Firstly, we devise a new diffusion restoration network that leverages the produced enhanced image and noise-containing images, incorporating nonlinear features obtained during the denoising process of the diffusion model as high-level visual information. Secondly, two visual evaluation branches are designed to comprehensively analyze the obtained high-level feature information. These include the visual compensation guidance branch, grounded in the transformer architecture and noise embedding strategy, and the visual difference analysis branch, built on the ResNet architecture and the residual transposed attention block. Extensive experiments are conducted on seven public NR-IQA datasets, and the results demonstrate that the proposed model outperforms SOTA methods for NR-IQA. | ['Zhaoyang Wang', 'Bo Hu', 'Mingyang Zhang', 'Jie Li', 'Leida Li', 'Maoguo Gong', 'Xinbo Gao'] |
| null | null | 2402.14402 | null | null | http://arxiv.org/pdf/2402.14402v2 | 2024-04-15T16:57:36Z | 2024-02-22T09:43:25Z | Global Safe Sequential Learning via Efficient Knowledge Transfer | Sequential learning methods such as active learning and Bayesian optimization select the most informative data to learn about a task. In many medical or engineering applications, the data selection is constrained by a priori unknown safety conditions. A promising line of safe learning methods utilizes Gaussian processes (GPs) to model the safety probability and performs data selection in areas with high safety confidence. However, accurate safety modeling requires prior knowledge or consumes data. In addition, the safety confidence centers around the given observations, which leads to local exploration. As transferable source knowledge is often available in safety-critical experiments, we propose to consider transfer safe sequential learning to accelerate the learning of safety. We further consider a pre-computation of source components to reduce the additional computational load that is introduced by incorporating source data. In this paper, we theoretically analyze the maximum explorable safe regions of conventional safe learning methods. Furthermore, we empirically demonstrate that our approach 1) learns a task with lower data consumption, 2) globally explores multiple disjoint safe regions under guidance of the source knowledge, and 3) operates with computation comparable to conventional safe learning methods. | ['Cen-You Li', 'Olaf Duennbier', 'Marc Toussaint', 'Barbara Rakitsch', 'Christoph Zimmer'] |
| null | null | 2402.14407 | null | null | http://arxiv.org/pdf/2402.14407v1 | 2024-02-22T09:48:47Z | 2024-02-22T09:48:47Z | Large-Scale Actionless Video Pre-Training via Discrete Diffusion for Efficient Policy Learning | Learning a generalist embodied agent capable of completing multiple tasks poses challenges, primarily stemming from the scarcity of action-labeled robotic datasets. In contrast, a vast amount of human videos exist, capturing intricate tasks and interactions with the physical world. Promising prospects arise for utilizing actionless human videos for pre-training and transferring the knowledge to facilitate robot policy learning through limited robot demonstrations. In this paper, we introduce a novel framework that leverages a unified discrete diffusion to combine generative pre-training on human videos and policy fine-tuning on a small number of action-labeled robot videos. We start by compressing both human and robot videos into unified video tokens. In the pre-training stage, we employ a discrete diffusion model with a mask-and-replace diffusion strategy to predict future video tokens in the latent space. In the fine-tuning stage, we harness the imagined future videos to guide low-level action learning trained on a limited set of robot data. Experiments demonstrate that our method generates high-fidelity future videos for planning and enhances the fine-tuned policies compared to previous state-of-the-art approaches with superior generalization ability. Our project website is available at https://video-diff.github.io/. | ['Haoran He', 'Chenjia Bai', 'Ling Pan', 'Weinan Zhang', 'Bin Zhao', 'Xuelong Li'] |
| null | null | 2402.14427 | null | null | http://arxiv.org/pdf/2402.14427v1 | 2024-02-22T10:14:59Z | 2024-02-22T10:14:59Z | Text me the data: Generating Ground Pressure Sequence from Textual Descriptions for HAR | In human activity recognition (HAR), the availability of substantial ground truth is necessary for training efficient models. However, acquiring ground pressure data through physical sensors can be cost-prohibitive and time-consuming. To address this critical need, we introduce Text-to-Pressure (T2P), a framework designed to generate extensive ground pressure sequences from textual descriptions of human activities using deep learning techniques. We show that combining vector quantization of sensor data with a simple text-conditioned autoregressive strategy allows us to obtain high-quality generated pressure sequences from textual descriptions, with the help of the discrete latent correlation between text and pressure maps. We achieved comparable performance on the consistency between text and generated motion, with an R squared value of 0.722, a Masked R squared value of 0.892, and an FID score of 1.83. Additionally, we trained a HAR model with the synthesized data and evaluated it on pressure dynamics collected by a real pressure sensor; it is on par with a model trained on only real data. Combining both real and synthesized training data increases the overall macro F1 score by 5.9 percent. | ['Lala Shakti Swarup Ray', 'Bo Zhou', 'Sungho Suh', 'Lars Krupp', 'Vitor Fortes Rey', 'Paul Lukowicz'] |
| null | null | 2402.14430 | null | null | http://arxiv.org/pdf/2402.14430v1 | 2024-02-22T10:19:34Z | 2024-02-22T10:19:34Z | Robust Training of Federated Models with Extremely Label Deficiency | Federated semi-supervised learning (FSSL) has emerged as a powerful paradigm for collaboratively training machine learning models using distributed data with label deficiency. Advanced FSSL methods predominantly focus on training a single model on each client. However, this approach could lead to a discrepancy between the objective functions of labeled and unlabeled data, resulting in gradient conflicts. To alleviate gradient conflict, we propose a novel twin-model paradigm, called Twin-sight, designed to enhance mutual guidance by providing insights from different perspectives of labeled and unlabeled data. In particular, Twin-sight concurrently trains a supervised model with a supervised objective function while training an unsupervised model using an unsupervised objective function. To enhance the synergy between these two models, Twin-sight introduces a neighbourhood-preserving constraint, which encourages the preservation of the neighbourhood relationship among data features extracted by both models. Our comprehensive experiments on four benchmark datasets provide substantial evidence that Twin-sight can significantly outperform state-of-the-art methods across various experimental settings, demonstrating the efficacy of the proposed Twin-sight. | ['Yonggang Zhang', 'Zhiqin Yang', 'Xinmei Tian', 'Nannan Wang', 'Tongliang Liu', 'Bo Han'] |
| null | null | 2402.14434 | null | null | http://arxiv.org/pdf/2402.14434v2 | 2024-02-23T05:14:06Z | 2024-02-22T10:26:46Z | Parallelized Midpoint Randomization for Langevin Monte Carlo | We explore the sampling problem within the framework where parallel evaluations of the gradient of the log-density are feasible. Our investigation focuses on target distributions characterized by smooth and strongly log-concave densities. We revisit the parallelized randomized midpoint method and employ proof techniques recently developed for analyzing its purely sequential version. Leveraging these techniques, we derive upper bounds on the Wasserstein distance between the sampling and target densities. These bounds quantify the runtime improvement achieved by utilizing parallel processing units, which can be considerable. | ['Lu Yu', 'Arnak Dalalyan'] |
| null | null | 2402.14446 | null | null | http://arxiv.org/pdf/2402.14446v1 | 2024-02-22T11:06:07Z | 2024-02-22T11:06:07Z | Model-Based Reinforcement Learning Control of Reaction-Diffusion Problems | Mathematical and computational tools have proven to be reliable in decision-making processes. In recent times, in particular, machine learning-based methods are becoming increasingly popular as advanced support tools. When dealing with control problems, reinforcement learning has been applied to decision-making in several applications, most notably in games. The success of these methods in finding solutions to complex problems motivates the exploration of new areas where they can be employed to overcome current difficulties. In this paper, we explore the use of automatic control strategies for initial boundary value problems in thermal and disease transport. Specifically, in this work, we adapt an existing reinforcement learning algorithm using a stochastic policy gradient method, and we introduce two novel reward functions to drive the flow of the transported field. The new model-based framework exploits the interactions between a reaction-diffusion model and the modified agent. The results show that certain controls can be implemented successfully in these applications, although model simplifications had to be assumed. | ['Christina Schenk', 'Aditya Vasudevan', 'Maciej Haranczyk', 'Ignacio Romero'] |
| null | null | 2402.14459 | null | null | http://arxiv.org/pdf/2402.14459v1 | 2024-02-22T11:35:52Z | 2024-02-22T11:35:52Z | Machine Learning Reveals Large-scale Impact of Posidonia Oceanica on Mediterranean Sea Water | Posidonia oceanica is a protected endemic seagrass of the Mediterranean Sea that fosters biodiversity, stores carbon, releases oxygen, and provides habitat to numerous sea organisms. Leveraging augmented research, we collected a comprehensive dataset of 174 features compiled from diverse data sources. Through machine learning analysis, we discovered a robust correlation between the exact location of P. oceanica and water biogeochemical properties. The model's feature importance showed that carbon-related variables, such as net biomass production and the downward surface mass flux of carbon dioxide, have their values altered in areas with P. oceanica, which in turn can be used for indirect location of P. oceanica meadows. The study provides evidence of the plant's ability to exert a global impact on the environment and underscores the crucial role of this plant in sea ecosystems, emphasizing the need for its conservation and management. | ['Celio Trois', 'Luciana Didonet Del Fabro', 'Vladimir A. Baulin'] |
| null | null | 2402.14469 | null | null | http://arxiv.org/pdf/2402.14469v1 | 2024-02-22T11:56:44Z | 2024-02-22T11:56:44Z | Reimagining Anomalies: What If Anomalies Were Normal? | Deep learning-based methods have achieved a breakthrough in image anomaly detection, but their complexity introduces a considerable challenge to understanding why an instance is predicted to be anomalous. We introduce a novel explanation method that generates multiple counterfactual examples for each anomaly, capturing diverse concepts of anomalousness. A counterfactual example is a modification of the anomaly that is perceived as normal by the anomaly detector. The method provides a high-level semantic explanation of the mechanism that triggered the anomaly detector, allowing users to explore "what-if scenarios." Qualitative and quantitative analyses across various image datasets show that the method applied to state-of-the-art anomaly detectors can achieve high-quality semantic explanations of detectors. | ['Philipp Liznerski', 'Saurabh Varshneya', 'Ece Calikus', 'Sophie Fellenz', 'Marius Kloft'] |
| null | null | 2402.14474 | null | null | http://arxiv.org/pdf/2402.14474v1 | 2024-02-22T12:04:15Z | 2024-02-22T12:04:15Z | Data Science with LLMs and Interpretable Models | Recent years have seen important advances in the building of interpretable models, machine learning models that are designed to be easily understood by humans. In this work, we show that large language models (LLMs) are remarkably good at working with interpretable models, too. In particular, we show that LLMs can describe, interpret, and debug Generalized Additive Models (GAMs). Combining the flexibility of LLMs with the breadth of statistical patterns accurately described by GAMs enables dataset summarization, question answering, and model critique. LLMs can also improve the interaction between domain experts and interpretable models, and generate hypotheses about the underlying phenomenon. We release https://github.com/interpretml/TalkToEBM as an open-source LLM-GAM interface. | ['Sebastian Bordt', 'Ben Lengerich', 'Harsha Nori', 'Rich Caruana'] |
| null | null | 2402.14475 | null | null | http://arxiv.org/pdf/2402.14475v2 | 2024-06-20T03:04:10Z | 2024-02-22T12:09:52Z | DynGMA: a robust approach for learning stochastic differential equations from data | Learning unknown stochastic differential equations (SDEs) from observed data is a significant and challenging task with applications in various fields. Current approaches often use neural networks to represent drift and diffusion functions, and construct likelihood-based loss by approximating the transition density to train these networks. However, these methods often rely on one-step stochastic numerical schemes, necessitating data with sufficiently high time resolution. In this paper, we introduce novel approximations to the transition density of the parameterized SDE: a Gaussian density approximation inspired by the random perturbation theory of dynamical systems, and its extension, the dynamical Gaussian mixture approximation (DynGMA). Benefiting from the robust density approximation, our method exhibits superior accuracy compared to baseline methods in learning the fully unknown drift and diffusion functions and computing the invariant distribution from trajectory data. It is also capable of handling trajectory data with low time resolution and variable, even uncontrollable, time step sizes, such as data generated from Gillespie's stochastic simulations. We then conduct several experiments across various scenarios to verify the advantages and robustness of the proposed method. | ['Aiqing Zhu', 'Qianxiao Li'] |
| null | null | 2402.14481 | null | null | http://arxiv.org/pdf/2402.14481v1 | 2024-02-22T12:13:58Z | 2024-02-22T12:13:58Z | Towards Automated Causal Discovery: a case study on 5G telecommunication data | We introduce the concept of Automated Causal Discovery (AutoCD), defined as any system that aims to fully automate the application of causal discovery and causal reasoning methods. AutoCD's goal is to deliver all causal information that an expert human analyst would and answer a user's causal queries. We describe the architecture of such a platform, and illustrate its performance on synthetic data sets. As a case study, we apply it on temporal telecommunication data. The system is general and can be applied to a plethora of causal discovery problems. | ['Konstantina Biza', 'Antonios Ntroumpogiannis', 'Sofia Triantafillou', 'Ioannis Tsamardinos'] |
| null | null | 2402.14482 | null | null | http://arxiv.org/pdf/2402.14482v2 | 2024-03-05T12:02:46Z | 2024-02-22T12:15:05Z | SpanSeq: Similarity-based sequence data splitting method for improved development and assessment of deep learning projects | The use of deep learning models in computational biology has increased massively in recent years, and is expected to do so further with the current advances in fields like Natural Language Processing. These models, although able to draw complex relations between input and target, are also largely inclined to learn noisy deviations from the pool of data used during their development. In order to assess their performance on unseen data (their capacity to generalize), it is common to randomly split the available data into development (train/validation) and test sets. This procedure, although standard, has lately been shown to produce dubious assessments of generalization due to the existing similarity between samples in the databases used. In this work, we present SpanSeq, a database partition method for machine learning that can scale to most biological sequences (genes, proteins and genomes) in order to avoid data leakage between sets. We also explore the effect of not restraining similarity between sets by reproducing the development of the state-of-the-art model DeepLoc, not only confirming the consequences of randomly splitting databases on the model assessment, but also extending those repercussions to the model development. SpanSeq is available for download and installation at https://github.com/genomicepidemiology/SpanSeq. | ['Alfred Ferrer Florensa', 'Jose Juan Almagro Armenteros', 'Henrik Nielsen', 'Frank Møller Aarestrup', 'Philip Thomas Lanken Conradsen Clausen'] |
| null | null | 2402.14486 | null | null | http://arxiv.org/pdf/2402.14486v1 | 2024-02-22T12:19:19Z | 2024-02-22T12:19:19Z | Are Bounded Contracts Learnable and Approximately Optimal? | This paper considers the hidden-action model of the principal-agent problem, in which a principal incentivizes an agent to work on a project using a contract. We investigate whether contracts with bounded payments are learnable and approximately optimal. Our main results are two learning algorithms that can find a nearly optimal bounded contract using a polynomial number of queries, under two standard assumptions in the literature: a costlier action for the agent leads to a better outcome distribution for the principal, and the agent's cost/effort has diminishing returns. Our polynomial query complexity upper bound shows that standard assumptions are sufficient for achieving an exponential improvement upon the known lower bound for general instances. Unlike the existing algorithms, which relied on discretizing the contract space, our algorithms directly learn the underlying outcome distributions. As for the approximate optimality of bounded contracts, we find that they could be far from optimal in terms of multiplicative or additive approximation, but satisfy a notion of mixed approximation. | ['Yurong Chen', 'Zhaohua Chen', 'Xiaotie Deng', 'Zhiyi Huang'] |
| null | null | 2402.14489 | null | null | http://arxiv.org/pdf/2402.14489v1 | 2024-02-22T12:27:35Z | 2024-02-22T12:27:35Z | A Class of Topological Pseudodistances for Fast Comparison of Persistence Diagrams | Persistence diagrams (PDs) play a central role in topological data analysis, and are used in an ever-increasing variety of applications. The comparison of PD data requires computing comparison metrics among large sets of PDs, with metrics that are accurate, theoretically sound, and fast to compute. Especially for denser multi-dimensional PDs, such comparison metrics are lacking. While on the one hand, Wasserstein-type distances have high accuracy and theoretical guarantees, they incur high computational cost. On the other hand, distances between vectorizations such as Persistence Statistics (PSs) have lower computational cost, but lack accuracy guarantees and in general are not guaranteed to distinguish PDs (i.e., the two PS vectors of different PDs may be equal). In this work we introduce a class of pseudodistances called Extended Topological Pseudodistances (ETDs), which have tunable complexity, and can approximate Sliced and classical Wasserstein distances at the high-complexity extreme, while being computationally lighter and close to Persistence Statistics at the lower-complexity extreme, thus allowing users to interpolate between the two metrics. We build theoretical comparisons to show how to fit our new distances at an intermediate level between persistence vectorizations and Wasserstein distances. We also experimentally verify that ETDs outperform PSs in terms of accuracy and outperform Wasserstein and Sliced Wasserstein distances in terms of computational complexity. | ['Rolando Kindelan Nuñez', 'Mircea Petrache', 'Mauricio Cerda', 'Nancy Hitschfeld'] |
| null | null | 2402.14490 | null | null | http://arxiv.org/pdf/2402.14490v3 | 2024-06-06T15:51:21Z | 2024-02-22T12:27:38Z | Imbalanced Data Clustering using Equilibrium K-Means | Centroid-based clustering algorithms, such as hard K-means (HKM) and fuzzy K-means (FKM), have suffered from learning bias towards large clusters. Their centroids tend to be crowded in large clusters, compromising performance when the true underlying data groups vary in size (i.e., imbalanced data). To address this, we propose a new clustering objective function based on the Boltzmann operator, which introduces a novel centroid repulsion mechanism, where data points surrounding the centroids repel other centroids. Larger clusters repel more, effectively mitigating the issue of large cluster learning bias. The proposed new algorithm, called equilibrium K-means (EKM), is simple, alternating between two steps; resource-saving, with the same time and space complexity as FKM; and scalable to large datasets via batch learning. We extensively evaluate the performance of EKM on synthetic and real-world datasets. The results show that EKM performs competitively on balanced data and significantly outperforms benchmark algorithms on imbalanced data. Deep clustering experiments demonstrate that EKM is a better alternative to HKM and FKM on imbalanced data, as more discriminative representations can be obtained. Additionally, we reformulate HKM, FKM, and EKM in a general form of gradient descent and demonstrate how this general form facilitates a uniform study of K-means algorithms. | ['Yudong He'] |
| null | null | 2402.14515 | null | null | http://arxiv.org/pdf/2402.14515v2 | 2024-03-11T15:40:18Z | 2024-02-22T13:04:50Z | Spectral invariance and maximality properties of the frequency spectrum of quantum neural networks | Quantum Neural Networks (QNNs) are a popular approach in Quantum Machine Learning due to their close connection to Variational Quantum Circuits, making them a promising candidate for practical applications on Noisy Intermediate-Scale Quantum (NISQ) devices. A QNN can be expressed as a finite Fourier series, where the set of frequencies is called the frequency spectrum. We analyse this frequency spectrum and prove, for a large class of models, various maximality results. Furthermore, we prove that under some mild conditions there exists a bijection between classes of models with the same area $A = RL$ that preserves the frequency spectrum, where $R$ denotes the number of qubits and $L$ the number of layers, which we consequently call spectral invariance under area-preserving transformations. With this we explain the symmetry in $R$ and $L$ in the results often observed in the literature and show that the maximal frequency spectrum depends only on the area $A = RL$ and not on the individual values of $R$ and $L$. Moreover, we extend existing results and specify the maximum possible frequency spectrum of a QNN with arbitrarily many layers as a function of the spectrum of its generators. If the generators of the QNN can be further decomposed into 2-dimensional sub-generators, then this specification follows from elementary number-theoretical considerations. In the case of arbitrary dimensional generators, we extend existing results based on the so-called Golomb ruler and introduce a second novel approach based on a variation of the turnpike problem, which we call the relaxed turnpike problem. | ['Patrick Holzer', 'Ivica Turkalj'] |
| null | null | 2402.14522 | null | null | http://arxiv.org/pdf/2402.14522v2 | 2024-07-12T10:39:28Z | 2024-02-22T13:13:31Z | Towards Unified Task Embeddings Across Multiple Models: Bridging the Gap for Prompt-Based Large Language Models and Beyond | Task embedding, a meta-learning technique that captures task-specific information, has gained popularity, especially in areas such as multi-task learning, model editing, and interpretability. However, it faces challenges with the emergence of prompt-guided Large Language Models (LLMs) operating in a gradient-free manner. Existing task embedding methods rely on fine-tuned, task-specific language models, which hinders the adaptability of task embeddings across diverse models, especially prompt-based LLMs. To harness the potential of task embeddings in the era of LLMs, we propose a framework for unified task embeddings (FUTE), harmonizing task embeddings from various models, including smaller language models and LLMs with varied prompts, within a single vector space. Such uniformity enables comparison and analysis of similarities amongst different models, broadening the scope and utility of existing task embedding methods in multi-model scenarios, while maintaining their performance comparable to architecture-specific methods. | ['Xinyu Wang', 'Hainiu Xu', 'Lin Gui', 'Yulan He'] |
| null | null | 2402.14525 | null | null | http://arxiv.org/abs/2402.14525v1 | 2024-02-22T13:19:02Z | 2024-02-22T13:19:02Z | Kinematically Constrained Human-like Bimanual Robot-to-Human Handovers | Bimanual handovers are crucial for transferring large, deformable or delicate objects. This paper proposes a framework for generating kinematically constrained human-like bimanual robot motions to ensure seamless and natural robot-to-human object handovers. We use a Hidden Semi-Markov Model (HSMM) to reactively generate suitable response trajectories for a robot based on the observed human partner's motion. The trajectories are adapted with task space constraints to ensure accurate handovers. Results from a pilot study show that our approach is perceived as more human-like compared to a baseline Inverse Kinematics approach. | ['Yasemin Göksu', 'Antonio De Almeida Correia', 'Vignesh Prasad', 'Alap Kshirsagar', 'Dorothea Koert', 'Jan Peters', 'Georgia Chalvatzaki'] |
| null | null | 2402.14527 | null | null | http://arxiv.org/pdf/2402.14527v1 | 2024-02-22T13:21:26Z | 2024-02-22T13:21:26Z | Federated Learning on Transcriptomic Data: Model Quality and Performance Trade-Offs | Machine learning on large-scale genomic or transcriptomic data is important for many novel health applications. For example, precision medicine tailors medical treatments to patients on the basis of individual biomarkers, cellular and molecular states, etc. However, the data required is sensitive, voluminous, heterogeneous, and typically distributed across locations where dedicated machine learning hardware is not available. Due to privacy and regulatory reasons, it is also problematic to aggregate all data at a trusted third party. Federated learning is a promising solution to this dilemma, because it enables decentralized, collaborative machine learning without exchanging raw data. In this paper, we perform comparative experiments with the federated learning frameworks TensorFlow Federated and Flower. Our test case is the training of disease prognosis and cell type classification models. We train the models with distributed transcriptomic data, considering both data heterogeneity and architectural heterogeneity. We measure model quality, robustness against privacy-enhancing noise, computational performance and resource overhead. Each of the federated learning frameworks has different strengths. However, our experiments confirm that both frameworks can readily build models on transcriptomic data, without transferring personal raw data to a third party with abundant computational resources. | ['Anika Hannemann', 'Jan Ewald', 'Leo Seeger', 'Erik Buchmann'] |
| null | null | 2402.14528 | null | null | http://arxiv.org/pdf/2402.14528v3 | 2024-05-22T04:01:46Z | 2024-02-22T13:22:06Z | ACE: Off-Policy Actor-Critic with Causality-Aware Entropy Regularization | The varying significance of distinct primitive behaviors during the policy learning process has been overlooked by prior model-free RL algorithms. Leveraging this insight, we explore the causal relationship between different action dimensions and rewards to evaluate the significance of various primitive behaviors during training. We introduce a causality-aware entropy term that effectively identifies and prioritizes actions with high potential impacts for efficient exploration. Furthermore, to prevent excessive focus on specific primitive behaviors, we analyze the gradient dormancy phenomenon and introduce a dormancy-guided reset mechanism to further enhance the efficacy of our method. Our proposed algorithm, ACE: Off-policy Actor-critic with Causality-aware Entropy regularization, demonstrates a substantial performance advantage across 29 diverse continuous control tasks spanning 7 domains compared to model-free RL baselines, which underscores the effectiveness, versatility, and sample efficiency of our approach. Benchmark results and videos are available at https://ace-rl.github.io/. | ['Tianying Ji', 'Yongyuan Liang', 'Yan Zeng', 'Yu Luo', 'Guowei Xu', 'Jiawei Guo', 'Ruijie Zheng', 'Furong Huang', 'Fuchun Sun', 'Huazhe Xu'] |
| null | null | 2402.14532 | null | null | http://arxiv.org/pdf/2402.14532v1 | 2024-02-22T13:24:43Z | 2024-02-22T13:24:43Z | A Framework for Variational Inference of Lightweight Bayesian Neural Networks with Heteroscedastic Uncertainties | Obtaining heteroscedastic predictive uncertainties from a Bayesian Neural Network (BNN) is vital to many applications. Often, heteroscedastic aleatoric uncertainties are learned as outputs of the BNN in addition to the predictive means; however, doing so may necessitate adding more learnable parameters to the network. In this work, we demonstrate that both the heteroscedastic aleatoric and epistemic variance can be embedded into the variances of learned BNN parameters, improving predictive performance for lightweight networks. By complementing this approach with a moment propagation approach to inference, we introduce a relatively simple framework for sampling-free variational inference suitable for lightweight BNNs. | ['David J. Schodt', 'Ryan Brown', 'Michael Merritt', 'Samuel Park', 'Delsin Menolascino', 'Mark A. Peot'] |
| null | null | 2402.14547 | null | null | http://arxiv.org/pdf/2402.14547v3 | 2024-03-04T16:54:25Z | 2024-02-22T13:36:53Z | OmniPred: Language Models as Universal Regressors | Over the broad landscape of experimental design, regression has been a powerful tool to accurately predict the outcome metrics of a system or model given a set of parameters, but has been traditionally restricted to methods which are only applicable to a specific task. In this paper, we propose OmniPred, a framework for training language models as universal end-to-end regressors over $(x,y)$ evaluation data from diverse real world experiments. Using data sourced from Google Vizier, one of the largest blackbox optimization databases in the world, our extensive experiments demonstrate that through only textual representations of mathematical parameters and values, language models are capable of very precise numerical regression, and if given the opportunity to train over multiple tasks, can significantly outperform traditional regression models. | ['Xingyou Song', 'Oscar Li', 'Chansoo Lee', 'Bangding Yang', 'Daiyi Peng', 'Sagi Perel', 'Yutian Chen'] |
| null | null | 2402.14551 | null | null | http://arxiv.org/pdf/2402.14551v1 | 2024-02-22T13:45:01Z | 2024-02-22T13:45:01Z | CLCE: An Approach to Refining Cross-Entropy and Contrastive Learning for Optimized Learning Fusion | State-of-the-art pre-trained image models predominantly adopt a two-stage approach: initial unsupervised pre-training on large-scale datasets followed by task-specific fine-tuning using Cross-Entropy loss (CE). However, it has been demonstrated that CE can compromise model generalization and stability. While recent works employing contrastive learning address some of these limitations by enhancing the quality of embeddings and producing better decision boundaries, they often overlook the importance of hard negative mining and rely on resource-intensive and slow training using large sample batches. To counter these issues, we introduce a novel approach named CLCE, which integrates Label-Aware Contrastive Learning with CE. Our approach not only maintains the strengths of both loss functions but also leverages hard negative mining in a synergistic way to enhance performance. Experimental results demonstrate that CLCE significantly outperforms CE in Top-1 accuracy across twelve benchmarks, achieving gains of up to 3.52% in few-shot learning scenarios and 3.41% in transfer learning settings with the BEiT-3 model. Importantly, our proposed CLCE approach effectively mitigates the dependency of contrastive learning on large batch sizes, such as 4096 samples per batch, a limitation that has previously constrained the application of contrastive learning in budget-limited hardware environments. | ['Zijun Long', 'George Killick', 'Lipeng Zhuang', 'Gerardo Aragon-Camarasa', 'Zaiqiao Meng', 'Richard Mccreadie'] |
| null | null | 2402.14576 | null | null | http://arxiv.org/pdf/2402.14576v2 | 2024-03-01T00:21:38Z | 2024-02-08T17:17:46Z | Edge Caching Based on Deep Reinforcement Learning and Transfer Learning | This paper addresses the escalating challenge of redundant data transmission in networks. The surge in traffic has strained backhaul links and backbone networks, prompting the exploration of caching solutions at the edge router. Existing work primarily relies on Markov Decision Processes (MDP) for caching issues, assuming fixed-time interval decisions; however, real-world scenarios involve random request arrivals, and despite the critical role of various file characteristics in determining an optimal caching policy, none of the related existing work considers all these file characteristics in forming a caching policy. In this paper, first, we formulate the caching problem using a semi-Markov Decision Process (SMDP) to accommodate the continuous-time nature of real-world scenarios allowing for caching decisions at random times upon file requests. Then, we propose a double deep Q-learning-based caching approach that comprehensively accounts for file features such as lifetime, size, and importance. Simulation results demonstrate the superior performance of our approach compared to a recent Deep Reinforcement Learning-based method. Furthermore, we extend our work to include a Transfer Learning (TL) approach to account for changes in file request rates in the SMDP framework. The proposed TL approach exhibits fast convergence, even in scenarios with increased differences in request rates between source and target domains, presenting a promising solution to the dynamic challenges of caching in real-world environments. | ['Farnaz Niknia', 'Ping Wang', 'Zixu Wang', 'Aakash Agarwal', 'Adib S. Rezaei'] |
| null | null | 2402.14578 | null | null | http://arxiv.org/pdf/2402.14578v1 | 2024-02-22T14:33:54Z | 2024-02-22T14:33:54Z | Multivariate Online Linear Regression for Hierarchical Forecasting | In this paper, we consider a deterministic online linear regression model where we allow the responses to be multivariate. To address this problem, we introduce MultiVAW, a method that extends the well-known Vovk-Azoury-Warmuth algorithm to the multivariate setting, and show that it also enjoys logarithmic regret in time. We apply our results to the online hierarchical forecasting problem and recover an algorithm from this literature as a special case, allowing us to relax the hypotheses usually made for its analysis. | ['Massil Hihat', 'Guillaume Garrigos', 'Adeline Fermanian', 'Simon Bussy'] |
| null | null | 2402.14579 | null | null | http://arxiv.org/pdf/2402.14579v1 | 2024-02-08T13:21:44Z | 2024-02-08T13:21:44Z | Text Role Classification in Scientific Charts Using Multimodal Transformers | Text role classification involves classifying the semantic role of textual elements within scientific charts. For this task, we propose to finetune two pretrained multimodal document layout analysis models, LayoutLMv3 and UDOP, on chart datasets. The transformers utilize the three modalities of text, image, and layout as input. We further investigate whether data augmentation and balancing methods help the performance of the models. The models are evaluated on various chart datasets, and results show that LayoutLMv3 outperforms UDOP in all experiments. LayoutLMv3 achieves the highest F1-macro score of 82.87 on the ICPR22 test dataset, beating the best-performing model from the ICPR22 CHART-Infographics challenge. Moreover, the robustness of the models is tested on a synthetic noisy dataset ICPR22-N. Finally, the generalizability of the models is evaluated on three chart datasets, CHIME-R, DeGruyter, and EconBiz, for which we added labels for the text roles. Findings indicate that even in cases where there is limited training data, transformers can be used with the help of data augmentation and balancing methods. The source code and datasets are available on GitHub under https://github.com/hjkimk/text-role-classification | ['Hye Jin Kim', 'Nicolas Lell', 'Ansgar Scherp'] |
| null | null | 2402.14582 | null | null | http://arxiv.org/pdf/2402.14582v1 | 2024-02-08T13:51:13Z | 2024-02-08T13:51:13Z | Enhancement of High-definition Map Update Service Through Coverage-aware and Reinforcement Learning | High-definition (HD) Map systems will play a pivotal role in advancing autonomous driving to a higher level, thanks to the significant improvement over traditional two-dimensional (2D) maps. Creating an HD Map requires a huge amount of on-road and off-road data. Typically, these raw datasets are collected and uploaded to cloud-based HD map service providers through vehicular networks. Nevertheless, there are challenges in transmitting the raw data over vehicular wireless channels due to the dynamic topology. As the number of vehicles increases, there is a detrimental impact on service quality, which acts as a barrier to a real-time HD Map system for collaborative driving in Autonomous Vehicles (AV). In this paper, to overcome network congestion, a Q-learning coverage-time-awareness algorithm is presented to optimize the quality of service for vehicular networks and HD map updates. The algorithm is evaluated in an environment that imitates a dynamic scenario where vehicles enter and leave. Results showed an improvement in latency for HD map data of 75%, 73%, and 10% compared with IEEE802.11p without Quality of Service (QoS), IEEE802.11 with QoS, and IEEE802.11p with a new access category (AC) for HD map, respectively. | ['Jeffrey Redondo', 'Zhenhui Yuan', 'Nauman Aslam'] |
| null | null | 2402.14585 | null | null | http://arxiv.org/pdf/2402.14585v1 | 2024-02-22T14:38:52Z | 2024-02-22T14:38:52Z | Bandits with Abstention under Expert Advice | We study the classic problem of prediction with expert advice under bandit feedback. Our model assumes that one action, corresponding to the learner's abstention from play, has no reward or loss on every trial. We propose the CBA algorithm, which exploits this assumption to obtain reward bounds that can significantly improve those of the classical Exp4 algorithm. We can view our problem as the aggregation of confidence-rated predictors when the learner has the option of abstention from play. Importantly, we are the first to achieve bounds on the expected cumulative reward for general confidence-rated predictors. In the special case of specialists we achieve a novel reward bound, significantly improving previous bounds of SpecialistExp (treating abstention as another action). As an example application, we discuss learning unions of balls in a finite metric space. In this contextual setting, we devise an efficient implementation of CBA, reducing the runtime from quadratic to almost linear in the number of contexts. Preliminary experiments show that CBA improves over existing bandit algorithms. | ['Stephen Pasteris', 'Alberto Rumi', 'Maximilian Thiessen', 'Shota Saito', 'Atsushi Miyauchi', 'Fabio Vitale', 'Mark Herbster'] |
| null | null | 2402.14589 | null | null | http://arxiv.org/pdf/2402.14589v1 | 2024-02-05T11:36:19Z | 2024-02-05T11:36:19Z | Avoiding an AI-imposed Taylor's Version of all music history | As future musical AIs adhere closely to human music, they may form their own attachments to particular human artists in their databases, and these biases may in the worst case lead to potential existential threats to all musical history. AI super fans may act to corrupt the historical record and extant recordings in favour of their own preferences, and preservation of the diversity of world music culture may become even more of a pressing issue than the imposition of 12 tone equal temperament or other Western homogenisations. We discuss the technical capability of AI cover software and produce Taylor's Versions of famous tracks from Western pop history as provocative examples; the quality of these productions does not affect the overall argument (which might even see a future AI try to impose the sound of paperclips onto all existing audio files, let alone Taylor Swift). We discuss some potential defenses against the danger of future musical monopolies, whilst analysing the feasibility of a maximal 'Taylor Swiftication' of the complete musical record. | ['Nick Collins', 'Mick Grierson'] |
| null | null | 2402.14590 | null | null | http://arxiv.org/abs/2402.14590v1 | 2024-02-07T23:47:02Z | 2024-02-07T23:47:02Z | Scaling Up LLM Reviews for Google Ads Content Moderation | Large language models (LLMs) are powerful tools for content moderation, but their inference costs and latency make them prohibitive for casual use on large datasets, such as the Google Ads repository. This study proposes a method for scaling up LLM reviews for content moderation in Google Ads. First, we use heuristics to select candidates via filtering and duplicate removal, and create clusters of ads for which we select one representative ad per cluster. We then use LLMs to review only the representative ads. Finally, we propagate the LLM decisions for the representative ads back to their clusters. This method reduces the number of reviews by more than 3 orders of magnitude while achieving a 2x recall compared to a baseline non-LLM model. The success of this approach is a strong function of the representations used in clustering and label propagation; we found that cross-modal similarity representations yield better results than uni-modal representations. | ['Wei Qiao', 'Tushar Dogra', 'Otilia Stretcu', 'Yu-Han Lyu', 'Tiantian Fang', 'Dongjin Kwon', 'Chun-Ta Lu', 'Enming Luo', 'Yuan Wang', 'Chih-Chun Chia', 'Ariel Fuxman', 'Fangzhou Wang', 'Ranjay Krishna', 'Mehmet Tek'] |
| null | null | 2402.14597 | null | null | http://arxiv.org/abs/2402.14597v1 | 2024-02-04T11:56:49Z | 2024-02-04T11:56:49Z | Learning Style Identification Using Semi-Supervised Self-Taught Labeling | Education is a dynamic field that must be adaptable to sudden changes and disruptions caused by events like pandemics, war, and natural disasters related to climate change. When these events occur, traditional classrooms with traditional or blended delivery can shift to fully online learning, which requires an efficient learning environment that meets students' needs. While learning management systems support teachers' productivity and creativity, they typically provide the same content to all learners in a course, ignoring their unique learning styles. To address this issue, we propose a semi-supervised machine learning approach that detects students' learning styles using a data mining technique. We use the commonly used Felder Silverman learning style model and demonstrate that our semi-supervised method can produce reliable classification models with few labeled data. We evaluate our approach on two different courses and achieve an accuracy of 88.83% and 77.35%, respectively. Our work shows that educational data mining and semi-supervised machine learning techniques can identify different learning styles and create a personalized learning environment. | ['Hani Y. Ayyoub', 'Omar S. Al-Kadi'] |
| null | null | 2402.14598 | null | null | http://arxiv.org/pdf/2402.14598v1 | 2024-02-04T09:58:17Z | 2024-02-04T09:58:17Z | Brain-inspired Distributed Memorization Learning for Efficient Feature-free Unsupervised Domain Adaptation | Compared with gradient based artificial neural networks, biological neural networks usually show a more powerful generalization ability to quickly adapt to unknown environments without using any gradient back-propagation procedure. Inspired by the distributed memory mechanism of human brains, we propose a novel gradient-free Distributed Memorization Learning mechanism, namely DML, to support quick domain adaptation of transferred models. In particular, DML adopts randomly connected neurons to memorize the association of input signals, which are propagated as impulses, and makes the final decision by associating the distributed memories based on their confidence. More importantly, DML is able to perform reinforced memorization based on unlabeled data to quickly adapt to a new domain without heavy fine-tuning of deep features, which makes it very suitable for deploying on edge devices. Experiments based on four cross-domain real-world datasets show that DML can achieve superior real-time domain adaptation performance compared with traditional gradient based MLPs, with more than a 10% improvement in accuracy while reducing the optimization time cost by 87%. | ['Jianming Lv', 'Depin Liang', 'Zequan Liang', 'Yaobin Zhang', 'Sijun Xia'] |
null | null |
2402.14601
| null | null |
http://arxiv.org/pdf/2402.14601v3
|
2024-06-28T23:43:07Z
|
2024-02-02T23:54:51Z
|
Bringing Generative AI to Adaptive Learning in Education
|
The recent surge in generative AI technologies, such as large language models and diffusion models, has boosted the development of AI applications in various domains, including science, finance, and education. Concurrently, adaptive learning, a concept that has gained substantial interest in the educational sphere, has proven its efficacy in enhancing students' learning efficiency. In this position paper, we aim to shed light on the intersectional studies of these two methods, which combine generative AI with adaptive learning concepts. By presenting discussions about the benefits, challenges, and potentials in this field, we argue that this union will contribute significantly to the development of the next-stage learning format in education.
|
[
"['Hang Li' 'Tianlong Xu' 'Chaoli Zhang' 'Eason Chen' 'Jing Liang'\n 'Xing Fan' 'Haoyang Li' 'Jiliang Tang' 'Qingsong Wen']"
] |
null | null |
2402.14603
| null | null |
http://arxiv.org/pdf/2402.14603v1
|
2024-02-02T12:57:21Z
|
2024-02-02T12:57:21Z
|
Balanced Resonate-and-Fire Neurons
|
The resonate-and-fire (RF) neuron, introduced over two decades ago, is a simple, efficient, yet biologically plausible spiking neuron model, which can extract frequency patterns within the time domain due to its resonating membrane dynamics. However, previous RF formulations suffer from intrinsic shortcomings that limit effective learning and prevent exploiting the principled advantage of RF neurons. Here, we introduce the balanced RF (BRF) neuron, which alleviates some of the intrinsic limitations of vanilla RF neurons, and demonstrate its effectiveness within recurrent spiking neural networks (RSNNs) on various sequence learning tasks. We show that networks of BRF neurons achieve overall higher task performance, produce only a fraction of the spikes, and require significantly fewer parameters as compared to modern RSNNs. Moreover, BRF-RSNNs consistently provide much faster and more stable training convergence, even when bridging many hundreds of time steps during backpropagation through time (BPTT). These results underscore that our BRF-RSNN is a strong candidate for future large-scale RSNN architectures, further lines of research in SNN methodology, and more efficient hardware implementations.
|
[
"['Saya Higuchi' 'Sebastian Kairat' 'Sander M. Bohte' 'Sebastian Otte']"
] |
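For orientation, the sketch below simulates the classic (unbalanced) resonate-and-fire dynamics from Izhikevich (2001): a complex membrane state that oscillates at a preferred frequency and spikes when its imaginary part crosses a threshold. The balanced refinements that define BRF are not reproduced here.

```python
# Vanilla resonate-and-fire dynamics (Izhikevich, 2001), Euler-discretized.
# The balanced (BRF) modifications from the paper are intentionally omitted.
import numpy as np

def rf_neuron(inputs, b=-0.1, omega=6.0, dt=0.01, threshold=1.0):
    z = 0.0 + 0.0j                                   # complex membrane state
    spikes = []
    for I in inputs:
        z = z + dt * ((b + 1j * omega) * z + I)      # damped resonance + drive
        spikes.append(1.0 if z.imag > threshold else 0.0)
    return np.array(spikes)

t = np.arange(0, 10, 0.01)
print(rf_neuron(np.sin(6.0 * t)).sum(), "spikes at the resonant input frequency")
```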
null | null |
2402.14609
| null | null |
http://arxiv.org/pdf/2402.14609v2
|
2024-02-26T02:15:24Z
|
2024-02-22T14:57:44Z
|
FedCQA: Answering Complex Queries on Multi-Source Knowledge Graphs via
Federated Learning
|
Complex logical query answering is a challenging task in knowledge graphs (KGs) that has been widely studied. The ability to perform complex logical reasoning is essential and supports various graph reasoning-based downstream tasks, such as search engines. Recent approaches represent KG entities and logical queries as embedding vectors and find answers to logical queries from the KGs. However, existing methods mainly focus on querying a single KG and cannot be applied to multiple graphs. In addition, directly sharing KGs with sensitive information may incur privacy risks, making it impractical to share and construct an aggregated KG for reasoning and answer retrieval. Thus, it remains unknown how to answer queries on multi-source KGs. An entity can appear in multiple knowledge graphs, so reasoning over multiple KGs and answering complex queries on multi-source KGs is important for discovering knowledge across graphs. Fortunately, federated learning has been utilized in knowledge graphs to collaboratively learn representations while preserving privacy. Federated knowledge graph embeddings enrich the relations in knowledge graphs to improve representation quality. However, these methods only focus on one-hop relations and cannot perform complex reasoning tasks. In this paper, we apply federated learning to complex query-answering tasks to reason over multi-source knowledge graphs while preserving privacy. We propose a Federated Complex Query Answering framework (FedCQA) that reasons over multi-source KGs while avoiding transmission of sensitive raw data to protect privacy. We conduct extensive experiments on three real-world datasets and evaluate retrieval performance on various types of complex queries.
|
[
"['Qi Hu' 'Weifeng Jiang' 'Haoran Li' 'Zihao Wang' 'Jiaxin Bai'\n 'Qianren Mao' 'Yangqiu Song' 'Lixin Fan' 'Jianxin Li']"
] |
null | null |
2402.14621
| null | null |
http://arxiv.org/pdf/2402.14621v1
|
2024-02-22T15:09:13Z
|
2024-02-22T15:09:13Z
|
latrend: A Framework for Clustering Longitudinal Data
|
Clustering of longitudinal data is used to explore common trends among subjects over time for a numeric measurement of interest. Various R packages have been introduced throughout the years for identifying clusters of longitudinal patterns, summarizing the variability in trajectories between subjects in terms of one or more trends. We introduce the R package "latrend" as a framework for the unified application of methods for longitudinal clustering, enabling comparisons between methods with minimal coding. The package also serves as an interface to commonly used packages for clustering longitudinal data, including "dtwclust", "flexmix", "kml", "lcmm", "mclust", "mixAK", and "mixtools". This enables researchers to easily compare different approaches, implementations, and method specifications. Furthermore, researchers can build upon the standard tools provided by the framework to quickly implement new cluster methods, enabling rapid prototyping. We demonstrate the functionality and application of the latrend package on a synthetic dataset based on the therapy adherence patterns of patients with sleep apnea.
|
[
"['Niek Den Teuling' 'Steffen Pauws' 'Edwin van den Heuvel']"
] |
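latrend itself is an R package, so the snippet below only illustrates the underlying idea in Python on synthetic adherence-like data: treat each subject's trajectory as a vector and cluster trajectories by shape, roughly what kml-style methods wrapped by the package do.

```python
# Longitudinal clustering sketch: k-means on raw subject-by-time trajectories.
# Synthetic data mimicking a declining-adherence group and a stable group.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 20)
decliners = 6 - 4 * t + rng.normal(0, 0.3, size=(30, 20))   # downward trend
stable = 6 + 0 * t + rng.normal(0, 0.3, size=(30, 20))      # flat trend
trajectories = np.vstack([decliners, stable])               # subjects x time

labels = KMeans(n_clusters=2, n_init=10).fit_predict(trajectories)
print(np.bincount(labels))                                   # two trend groups
```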
null | null |
2402.14645
| null | null |
http://arxiv.org/pdf/2402.14645v1
|
2024-02-22T15:45:27Z
|
2024-02-22T15:45:27Z
|
Sparse Linear Regression and Lattice Problems
|
Sparse linear regression (SLR) is a well-studied problem in statistics where one is given a design matrix $X\in\mathbb{R}^{m\times n}$ and a response vector $y=X\theta^*+w$ for a $k$-sparse vector $\theta^*$ (that is, $\|\theta^*\|_0\leq k$) and small, arbitrary noise $w$, and the goal is to find a $k$-sparse $\widehat{\theta} \in \mathbb{R}^n$ that minimizes the mean squared prediction error $\frac{1}{m}\|X\widehat{\theta}-X\theta^*\|^2_2$. While $\ell_1$-relaxation methods such as basis pursuit, Lasso, and the Dantzig selector solve SLR when the design matrix is well-conditioned, no general algorithm is known, nor is there any formal evidence of hardness in an average-case setting with respect to all efficient algorithms. We give evidence of average-case hardness of SLR w.r.t. all efficient algorithms assuming the worst-case hardness of lattice problems. Specifically, we give an instance-by-instance reduction from a variant of the bounded distance decoding (BDD) problem on lattices to SLR, where the condition number of the lattice basis that defines the BDD instance is directly related to the restricted eigenvalue condition of the design matrix, which characterizes some of the classical statistical-computational gaps for sparse linear regression. Also, by appealing to worst-case to average-case reductions from the world of lattices, this shows hardness for a distribution of SLR instances; while the design matrices are ill-conditioned, the resulting SLR instances are in the identifiable regime. Furthermore, for well-conditioned (essentially) isotropic Gaussian design matrices, where Lasso is known to behave well in the identifiable regime, we show hardness of outputting any good solution in the unidentifiable regime where there are many solutions, assuming the worst-case hardness of standard and well-studied lattice problems.
|
[
"['Aparna Gupte' 'Neekon Vafa' 'Vinod Vaikuntanathan']"
] |
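The abstract's contrast between easy and hard instances can be seen on the easy side with a few lines of code: for a random (hence, with high probability, well-conditioned) Gaussian design, the $\ell_1$-relaxation recovers the sparse signal, exactly the regime where the paper's hardness results do not bite.

```python
# Lasso on a well-conditioned random design: the easy regime for SLR.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
m, n, k = 200, 500, 5
X = rng.normal(size=(m, n))
theta = np.zeros(n)
theta[:k] = 1.0                                      # k-sparse ground truth
y = X @ theta + 0.01 * rng.normal(size=m)

theta_hat = Lasso(alpha=0.01).fit(X, y).coef_
mspe = np.mean((X @ (theta_hat - theta)) ** 2)       # (1/m)||X(theta_hat - theta*)||_2^2
print(mspe)                                          # small: recovery succeeds
```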
null | null |
2402.14646
| null | null |
http://arxiv.org/pdf/2402.14646v1
|
2024-02-22T15:45:31Z
|
2024-02-22T15:45:31Z
|
CoLoRA: Continuous low-rank adaptation for reduced implicit neural
modeling of parameterized partial differential equations
|
This work introduces reduced models based on Continuous Low Rank Adaptation (CoLoRA) that pre-train neural networks for a given partial differential equation and then continuously adapt low-rank weights in time to rapidly predict the evolution of solution fields at new physics parameters and new initial conditions. The adaptation can be either purely data-driven or via an equation-driven variational approach that provides Galerkin-optimal approximations. Because CoLoRA approximates solution fields locally in time, the rank of the weights can be kept small, which means that only a few training trajectories are required offline, so CoLoRA is well suited for data-scarce regimes. Predictions with CoLoRA are orders of magnitude faster than with classical methods, and their accuracy and parameter efficiency are higher compared to other neural network approaches.
|
[
"['Jules Berman' 'Benjamin Peherstorfer']"
] |
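As a schematic reading of the abstract, a CoLoRA-style layer keeps a frozen pre-trained weight and adds a low-rank correction whose scalar coefficients are the only quantities adapted continuously in time. The exact parameterization below is an assumption for illustration, not the paper's formulation.

```python
# Schematic CoLoRA-style layer: W(t) = W0 + B diag(alpha(t)) A, with only
# alpha adapted online. Shapes and the nonlinearity are illustrative choices.
import numpy as np

class CoLoRALayer:
    def __init__(self, d_in, d_out, rank=2, seed=0):
        rng = np.random.default_rng(seed)
        self.W0 = rng.normal(size=(d_out, d_in))     # frozen pre-trained weight
        self.A = 0.01 * rng.normal(size=(rank, d_in))
        self.B = 0.01 * rng.normal(size=(d_out, rank))

    def __call__(self, x, alpha):
        W = self.W0 + self.B @ (alpha[:, None] * self.A)   # low-rank, time-varying
        return np.tanh(W @ x)

layer = CoLoRALayer(d_in=8, d_out=8)
print(layer(np.ones(8), alpha=np.array([0.5, -0.2])).shape)
```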
null | null |
2402.14648
| null | null |
http://arxiv.org/pdf/2402.14648v2
|
2024-05-29T02:30:40Z
|
2024-02-22T15:53:46Z
|
Rethinking Invariance Regularization in Adversarial Training to Improve
Robustness-Accuracy Trade-off
|
Although adversarial training has been the state-of-the-art approach to defend against adversarial examples (AEs), it suffers from a robustness-accuracy trade-off, where high robustness is achieved at the cost of clean accuracy. In this work, we leverage invariance regularization on latent representations to learn discriminative yet adversarially invariant representations, aiming to mitigate this trade-off. We analyze two key issues in representation learning with invariance regularization: (1) a "gradient conflict" between invariance loss and classification objectives, leading to suboptimal convergence, and (2) the mixture distribution problem arising from diverged distributions of clean and adversarial inputs. To address these issues, we propose Asymmetrically Representation-regularized Adversarial Training (AR-AT), which incorporates asymmetric invariance loss with stop-gradient operation and a predictor to improve the convergence, and a split-BatchNorm (BN) structure to resolve the mixture distribution problem. Our method significantly improves the robustness-accuracy trade-off by learning adversarially invariant representations without sacrificing discriminative ability. Furthermore, we discuss the relevance of our findings to knowledge-distillation-based defense methods, contributing to a deeper understanding of their relative successes.
|
[
"['Futa Waseda' 'Ching-Chun Chang' 'Isao Echizen']"
] |
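The asymmetric invariance loss described above translates almost directly into code: stop the gradient on the clean branch and pass the adversarial branch through a predictor head. The sketch below uses linear stand-ins for the network and omits the split-BN component.

```python
# AR-AT-style loss sketch: classification on adversarial features plus an
# asymmetric invariance term (predictor on adv side, stop-grad on clean side).
import torch
import torch.nn.functional as F

def ar_at_loss(features, head, predictor, x_clean, x_adv, y, lam=1.0):
    z_clean = features(x_clean)
    z_adv = features(x_adv)
    cls = F.cross_entropy(head(z_adv), y)                  # robust classification
    inv = F.mse_loss(predictor(z_adv), z_clean.detach())   # stop-gradient on clean
    return cls + lam * inv

features = torch.nn.Linear(10, 16)                         # toy stand-ins
head = torch.nn.Linear(16, 3)
predictor = torch.nn.Linear(16, 16)
x = torch.randn(4, 10)
x_adv = x + 0.1 * torch.randn_like(x)                      # placeholder perturbation
y = torch.randint(0, 3, (4,))
print(ar_at_loss(features, head, predictor, x, x_adv, y))
```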
null | null |
2402.14664
| null | null |
http://arxiv.org/pdf/2402.14664v1
|
2024-02-22T16:09:45Z
|
2024-02-22T16:09:45Z
|
Bayesian Off-Policy Evaluation and Learning for Large Action Spaces
|
In interactive systems, actions are often correlated, presenting an opportunity for more sample-efficient off-policy evaluation (OPE) and learning (OPL) in large action spaces. We introduce a unified Bayesian framework to capture these correlations through structured and informative priors. In this framework, we propose sDM, a generic Bayesian approach designed for OPE and OPL, grounded in both algorithmic and theoretical foundations. Notably, sDM leverages action correlations without compromising computational efficiency. Moreover, inspired by online Bayesian bandits, we introduce Bayesian metrics that assess the average performance of algorithms across multiple problem instances, deviating from the conventional worst-case assessments. We analyze sDM in OPE and OPL, highlighting the benefits of leveraging action correlations. Empirical evidence showcases the strong performance of sDM.
|
[
"['Imad Aouali' 'Victor-Emmanuel Brunel' 'David Rohde' 'Anna Korba']"
] |
null | null |
2402.14683
| null | null |
http://arxiv.org/pdf/2402.14683v2
|
2024-06-16T18:43:50Z
|
2024-02-22T16:40:33Z
|
Visual Hallucinations of Multi-modal Large Language Models
|
Visual hallucination (VH) means that a multi-modal LLM (MLLM) imagines incorrect details about an image in visual question answering. Existing studies find VH instances only in existing image datasets, which results in a biased understanding of MLLMs' performance under VH due to the limited diversity of such VH instances. In this work, we propose a tool called VHTest to generate a diverse set of VH instances. Specifically, VHTest finds some initial VH instances in existing image datasets (e.g., COCO), generates a text description for each VH mode, and uses a text-to-image generative model (e.g., DALL-E-3) to generate VH images based on the text descriptions. We collect a benchmark dataset with 1,200 VH instances in 8 VH modes using VHTest. We find that existing MLLMs such as GPT-4V, LLaVA-1.5, and MiniGPT-v2 hallucinate for a large fraction of the instances in our benchmark. Moreover, we find that fine-tuning an MLLM using our benchmark dataset reduces its likelihood of hallucinating without sacrificing its performance on other benchmarks. Our benchmarks are publicly available: https://github.com/wenhuang2000/VHTest.
|
[
"['Wen Huang' 'Hongbin Liu' 'Minxin Guo' 'Neil Zhenqiang Gong']"
] |
null | null |
2402.14684
| null | null |
http://arxiv.org/pdf/2402.14684v1
|
2024-02-22T16:40:55Z
|
2024-02-22T16:40:55Z
|
Adaptive time series forecasting with Markovian variance switching
|
Adaptive time series forecasting is essential for prediction under regime changes. Several classical methods assume a linear Gaussian state-space model (LGSSM) with variances constant in time. However, there are many real-world processes that cannot be captured by such models. We consider a state-space model with Markov switching variances. Such dynamical systems are usually intractable because their computational complexity increases exponentially with time; Variational Bayes (VB) techniques have been applied to this problem. In this paper, we propose a new way of estimating variances based on online learning theory: we adapt expert aggregation methods to learn the variances over time. We apply the proposed method to synthetic data and to the problem of electricity load forecasting. We show that this method is robust to misspecification and outperforms traditional expert aggregation.
|
[
"['Baptiste Abélès' 'Joseph de Vilmarest' 'Olivier Wintemberger']"
] |
null | null |
2402.14688
| null | null |
http://arxiv.org/pdf/2402.14688v2
|
2024-06-02T15:05:59Z
|
2024-02-22T16:43:16Z
|
Q-Probe: A Lightweight Approach to Reward Maximization for Language
Models
|
We present an approach called Q-probing to adapt a pre-trained language model to maximize a task-specific reward function. At a high level, Q-probing sits between heavier approaches such as finetuning and lighter approaches such as few-shot prompting, but can also be combined with either. The idea is to learn a simple linear function on a model's embedding space that can be used to reweight candidate completions. We theoretically show that this sampling procedure is equivalent to a KL-constrained maximization of the Q-probe as the number of samples increases. To train the Q-probes, we consider either reward modeling or a class of novel direct policy learning objectives based on importance weighted policy gradients. With this technique, we see gains in domains with ground-truth rewards (code generation) as well as implicit rewards defined by preference data, even outperforming finetuning in data-limited regimes. Moreover, a Q-probe can be trained on top of an API since it only assumes access to sampling and embeddings. Code: https://github.com/likenneth/q_probe .
|
[
"['Kenneth Li' 'Samy Jelassi' 'Hugh Zhang' 'Sham Kakade'\n 'Martin Wattenberg' 'David Brandfonbrener']"
] |
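Since the abstract spells out the inference procedure (sample candidates, score them with a linear probe on embeddings, reweight), a compact sketch is possible; the softmax temperature and the `sample`/`embed` callables are assumptions standing in for the paper's exact choices.

```python
# Q-probe-style generation sketch: softmax-reweight k sampled completions by
# a linear probe on their embeddings. `sample` and `embed` are assumed callables.
import numpy as np

def q_probe_generate(prompt, sample, embed, w, k=8, temp=0.1):
    candidates = [sample(prompt) for _ in range(k)]        # base-model samples
    q = np.array([embed(c) @ w for c in candidates])       # linear probe scores
    p = np.exp((q - q.max()) / temp)
    p /= p.sum()                                           # softmax reweighting
    return candidates[np.random.choice(k, p=p)]

rng = np.random.default_rng(0)
pick = q_probe_generate("2+2=", sample=lambda p: rng.normal(size=4),
                        embed=lambda c: c, w=np.ones(4))   # toy stand-ins
print(pick)
```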
null | null |
2402.14692
| null | null |
http://arxiv.org/pdf/2402.14692v1
|
2024-02-22T16:47:15Z
|
2024-02-22T16:47:15Z
|
PeriodGrad: Towards Pitch-Controllable Neural Vocoder Based on a
Diffusion Probabilistic Model
|
This paper presents a neural vocoder based on a denoising diffusion probabilistic model (DDPM) incorporating explicit periodic signals as auxiliary conditioning signals. Recently, DDPM-based neural vocoders have gained prominence as non-autoregressive models that can generate high-quality waveforms. The neural vocoders based on DDPM have the advantage of training with a simple time-domain loss. In practical applications, such as singing voice synthesis, there is a demand for neural vocoders to generate high-fidelity speech waveforms with flexible pitch control. However, conventional DDPM-based neural vocoders struggle to generate speech waveforms under such conditions. Our proposed model aims to accurately capture the periodic structure of speech waveforms by incorporating explicit periodic signals. Experimental results show that our model improves sound quality and provides better pitch control than conventional DDPM-based neural vocoders.
|
[
"['Yukiya Hono' 'Kei Hashimoto' 'Yoshihiko Nankaku' 'Keiichi Tokuda']"
] |
null | null |
2402.14694
| null | null |
http://arxiv.org/pdf/2402.14694v1
|
2024-02-22T16:48:17Z
|
2024-02-22T16:48:17Z
|
A Quick Introduction to Quantum Machine Learning for Non-Practitioners
|
This paper provides an introduction to quantum machine learning, exploring the potential benefits of using quantum computing principles and algorithms that may improve upon classical machine learning approaches. Quantum computing utilizes particles governed by quantum mechanics for computational purposes, leveraging properties like superposition and entanglement for information representation and manipulation. Quantum machine learning applies these principles to enhance classical machine learning models, potentially reducing network size and training time on quantum hardware. The paper covers basic quantum mechanics principles, including superposition, phase space, and entanglement, and introduces the concept of quantum gates that exploit these properties. It also reviews classical deep learning concepts, such as artificial neural networks, gradient descent, and backpropagation, before delving into trainable quantum circuits as neural networks. An example problem demonstrates the potential advantages of quantum neural networks, and the appendices provide detailed derivations. The paper aims to help researchers new to quantum mechanics and machine learning develop their expertise more efficiently.
|
[
"['Ethan N. Evans' 'Dominic Byrne' 'Matthew G. Cook']"
] |
null | null |
2402.14698
| null | null |
http://arxiv.org/pdf/2402.14698v3
|
2024-04-04T11:41:04Z
|
2024-02-22T16:50:32Z
|
Using construction waste hauling trucks' GPS data to classify
earthwork-related locations: A Chengdu case study
|
Earthwork-related locations (ERLs), such as construction sites, earth dumping grounds, and concrete mixing stations, are major sources of urban dust pollution (particulate matter). The effective management of ERLs is crucial and requires timely and efficient tracking of these locations throughout the city. This work aims to identify and classify urban ERLs using GPS trajectory data of over 16,000 construction waste hauling trucks (CWHTs), as well as 58 urban features encompassing geographic, land cover, POI, and transport dimensions. We compare several machine learning models and examine the impact of various spatial-temporal features on classification performance using real-world data in Chengdu, China. The results demonstrate that 77.8% classification accuracy can be achieved with a limited number of features. This classification framework was implemented in the Alpha MAPS system in Chengdu, which successfully identified 724 construction sites/earth dumping grounds, 48 concrete mixing stations, and 80 truck parking locations in the city during December 2023, enabling local authorities to effectively manage urban dust pollution at low personnel cost.
|
[
"['Lei Yu' 'Ke Han']"
] |
null | null |
2402.14701
| null | null |
http://arxiv.org/pdf/2402.14701v1
|
2024-02-22T16:56:44Z
|
2024-02-22T16:56:44Z
|
COMPASS: Computational Mapping of Patient-Therapist Alliance Strategies
with Language Modeling
|
The therapeutic working alliance is a critical factor in predicting the success of psychotherapy treatment. Traditionally, working alliance assessment relies on questionnaires completed by both therapists and patients. In this paper, we present COMPASS, a novel framework to directly infer the therapeutic working alliance from the natural language used in psychotherapy sessions. Our approach utilizes advanced large language models to analyze transcripts of psychotherapy sessions and compare them with distributed representations of statements in the working alliance inventory. Analyzing a dataset of over 950 sessions covering diverse psychiatric conditions, we demonstrate the effectiveness of our method in microscopically mapping patient-therapist alignment trajectories, providing interpretability for clinical psychiatry, and identifying emerging patterns related to the condition being treated. By employing various neural topic modeling techniques in combination with generative language prompting, we analyze the topical characteristics of different psychiatric conditions and incorporate temporal modeling to capture the evolution of topics at a turn-level resolution. This combined framework enhances the understanding of therapeutic interactions, enabling timely feedback for therapists regarding conversation quality and providing interpretable insights to improve the effectiveness of psychotherapy.
|
[
"['Baihan Lin' 'Djallel Bouneffouf' 'Yulia Landa' 'Rachel Jespersen'\n 'Cheryl Corcoran' 'Guillermo Cecchi']"
] |
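The comparison step described above (session language versus inventory statements in a shared representation space) reduces to a similarity computation; the sketch below uses plain cosine similarity with a placeholder `embed`, whereas COMPASS itself relies on large language model representations.

```python
# Turn-by-turn alliance scoring sketch: cosine similarity between session
# turns and working alliance inventory items. `embed` is a placeholder encoder.
import numpy as np

def alliance_scores(turns, inventory_items, embed):
    T = np.stack([embed(t) for t in turns])
    I = np.stack([embed(q) for q in inventory_items])
    T /= np.linalg.norm(T, axis=1, keepdims=True)
    I /= np.linalg.norm(I, axis=1, keepdims=True)
    return T @ I.T                          # (turns x items) alignment trajectory

toy_embed = lambda s: np.array([len(s), s.count("I"), s.count("feel")], dtype=float)
print(alliance_scores(["I feel heard."], ["I feel understood."], toy_embed))
```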
null | null |
2402.14703
| null | null |
http://arxiv.org/pdf/2402.14703v1
|
2024-02-22T17:00:50Z
|
2024-02-22T17:00:50Z
|
On the Curses of Future and History in Future-dependent Value Functions
for Off-policy Evaluation
|
We study off-policy evaluation (OPE) in partially observable environments with complex observations, with the goal of developing estimators whose guarantee avoids exponential dependence on the horizon. While such estimators exist for MDPs, and POMDPs can be converted to history-based MDPs, the estimation errors depend on the state-density ratio for MDPs, which becomes a history ratio, an exponentially large object, after conversion. Recently, Uehara et al. (2022) proposed future-dependent value functions as a promising framework to address this issue, where the guarantee for memoryless policies depends on the density ratio over the latent state space. However, it also depends on the boundedness of the future-dependent value function and other related quantities, which we show can be exponential in the horizon, thus erasing the advantage of the method. In this paper, we discover novel coverage assumptions tailored to the structure of POMDPs, such as outcome coverage and belief coverage. These assumptions not only enable polynomial bounds on the aforementioned quantities, but also lead to the discovery of new algorithms with complementary properties.
|
[
"['Yuheng Zhang' 'Nan Jiang']"
] |
null | null |
2402.14708
| null | null |
http://arxiv.org/pdf/2402.14708v1
|
2024-02-22T17:08:09Z
|
2024-02-22T17:08:09Z
|
CaT-GNN: Enhancing Credit Card Fraud Detection via Causal Temporal Graph
Neural Networks
|
Credit card fraud poses a significant threat to the economy. While Graph Neural Network (GNN)-based fraud detection methods perform well, they often overlook the causal effect of a node's local structure on predictions. This paper introduces a novel method for credit card fraud detection, the Causal Temporal Graph Neural Network (CaT-GNN), which leverages causal invariant learning to reveal inherent correlations within transaction data. By decomposing the problem into discovery and intervention phases, CaT-GNN identifies causal nodes within the transaction graph and applies a causal mixup strategy to enhance the model's robustness and interpretability. CaT-GNN consists of two key components: Causal-Inspector and Causal-Intervener. The Causal-Inspector utilizes attention weights in the temporal attention mechanism to identify causal and environment nodes without introducing additional parameters. Subsequently, the Causal-Intervener performs a causal mixup enhancement on environment nodes based on the set of nodes. Evaluated on three datasets, including a private financial dataset and two public datasets, CaT-GNN demonstrates superior performance over existing state-of-the-art methods. Our findings highlight the potential of integrating causal reasoning with graph neural networks to improve fraud detection capabilities in financial transactions.
|
[
"['Yifan Duan' 'Guibin Zhang' 'Shilong Wang' 'Xiaojiang Peng' 'Wang Ziqi'\n 'Junyuan Mao' 'Hao Wu' 'Xinke Jiang' 'Kun Wang']"
] |
null | null |
2402.14710
| null | null |
http://arxiv.org/pdf/2402.14710v3
|
2024-05-26T15:54:41Z
|
2024-02-22T17:11:38Z
|
IEPile: Unearthing Large-Scale Schema-Based Information Extraction
Corpus
|
Large Language Models (LLMs) demonstrate remarkable potential across various domains; however, they exhibit a significant performance gap in Information Extraction (IE). Note that high-quality instruction data is the vital key to enhancing the specific capabilities of LLMs, while current IE datasets tend to be small in scale, fragmented, and lacking a standardized schema. To this end, we introduce IEPile, a comprehensive bilingual (English and Chinese) IE instruction corpus, which contains approximately 0.32B tokens. We construct IEPile by collecting and cleaning 33 existing IE datasets, and introduce schema-based instruction generation to unearth a large-scale corpus. Experimentally, IEPile enhances the performance of LLMs for IE, with notable improvements in zero-shot generalization. We open-source the resource and pre-trained models, hoping to provide valuable support to the NLP community.
|
[
"['Honghao Gui' 'Lin Yuan' 'Hongbin Ye' 'Ningyu Zhang' 'Mengshu Sun'\n 'Lei Liang' 'Huajun Chen']"
] |
null | null |
2402.14726
| null | null |
http://arxiv.org/pdf/2402.14726v1
|
2024-02-22T17:33:49Z
|
2024-02-22T17:33:49Z
|
Incorporating Expert Rules into Neural Networks in the Framework of
Concept-Based Learning
|
The paper formulates the problem of incorporating expert rules into machine learning models to extend concept-based learning. We propose how to combine logical rules with neural networks that predict concept probabilities. The first idea behind the combination is to form constraints on the joint probability distribution over all combinations of concept values so that the expert rules are satisfied. The second idea is to represent the feasible set of probability distributions as a convex polytope and to use its vertices or faces. We provide several approaches for solving the stated problem and for training neural networks that guarantee the output concept probabilities do not violate the expert rules. The solution can be viewed as a way of combining inductive and deductive learning. Expert rules are used in a broad sense: any logical function that connects concepts and class labels, or concepts with each other, can be regarded as a rule. This significantly broadens the scope of the proposed results. Numerical examples illustrate the approaches. The code of the proposed algorithms is publicly available.
|
[
"['Andrei V. Konstantinov' 'Lev V. Utkin']"
] |
null | null |
2402.14730
| null | null |
http://arxiv.org/pdf/2402.14730v3
|
2024-07-06T16:10:29Z
|
2024-02-22T17:42:15Z
|
Clifford-Steerable Convolutional Neural Networks
|
We present Clifford-Steerable Convolutional Neural Networks (CS-CNNs), a novel class of $\mathrm{E}(p, q)$-equivariant CNNs. CS-CNNs process multivector fields on pseudo-Euclidean spaces $\mathbb{R}^{p,q}$. They cover, for instance, $\mathrm{E}(3)$-equivariance on $\mathbb{R}^3$ and Poincaré-equivariance on Minkowski spacetime $\mathbb{R}^{1,3}$. Our approach is based on an implicit parametrization of $\mathrm{O}(p,q)$-steerable kernels via Clifford group equivariant neural networks. We significantly and consistently outperform baseline methods on fluid dynamics as well as relativistic electrodynamics forecasting tasks.
|
[
"['Maksim Zhdanov' 'David Ruhe' 'Maurice Weiler' 'Ana Lucic'\n 'Johannes Brandstetter' 'Patrick Forré']"
] |
null | null |
2402.14735
| null | null |
http://arxiv.org/pdf/2402.14735v1
|
2024-02-22T17:47:03Z
|
2024-02-22T17:47:03Z
|
How Transformers Learn Causal Structure with Gradient Descent
|
The incredible success of transformers on sequence modeling tasks can be largely attributed to the self-attention mechanism, which allows information to be transferred between different parts of a sequence. Self-attention allows transformers to encode causal structure which makes them particularly suitable for sequence modeling. However, the process by which transformers learn such causal structure via gradient-based training algorithms remains poorly understood. To better understand this process, we introduce an in-context learning task that requires learning latent causal structure. We prove that gradient descent on a simplified two-layer transformer learns to solve this task by encoding the latent causal graph in the first attention layer. The key insight of our proof is that the gradient of the attention matrix encodes the mutual information between tokens. As a consequence of the data processing inequality, the largest entries of this gradient correspond to edges in the latent causal graph. As a special case, when the sequences are generated from in-context Markov chains, we prove that transformers learn an induction head (Olsson et al., 2022). We confirm our theoretical findings by showing that transformers trained on our in-context learning task are able to recover a wide variety of causal structures.
|
[
"['Eshaan Nichani' 'Alex Damian' 'Jason D. Lee']"
] |
null | null |
2402.14740
| null | null |
http://arxiv.org/pdf/2402.14740v2
|
2024-02-26T18:26:25Z
|
2024-02-22T17:52:34Z
|
Back to Basics: Revisiting REINFORCE Style Optimization for Learning
from Human Feedback in LLMs
|
AI alignment in the shape of Reinforcement Learning from Human Feedback (RLHF) is increasingly treated as a crucial ingredient for high-performance large language models. Proximal Policy Optimization (PPO) has been positioned by recent literature as the canonical method for the RL part of RLHF. However, it involves both high computational cost and sensitive hyperparameter tuning. We posit that most of the motivational principles that led to the development of PPO are less of a practical concern in RLHF and advocate for a less computationally expensive method that preserves and even increases performance. We revisit the formulation of alignment from human preferences in the context of RL. Keeping simplicity as a guiding principle, we show that many components of PPO are unnecessary in an RLHF context and that far simpler REINFORCE-style optimization variants outperform both PPO and newly proposed "RL-free" methods such as DPO and RAFT. Our work suggests that careful adaptation to LLMs' alignment characteristics enables benefiting from online RL optimization at low cost.
|
[
"['Arash Ahmadian' 'Chris Cremer' 'Matthias Gallé' 'Marzieh Fadaee'\n 'Julia Kreutzer' 'Olivier Pietquin' 'Ahmet Üstün' 'Sara Hooker']"
] |
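The REINFORCE-style objective the paper advocates is simple enough to state in a few lines: weight the completion's log-likelihood by a baseline-subtracted reward. Everything below is a toy stand-in (random logits, scalar reward) rather than the paper's actual training setup.

```python
# Minimal REINFORCE-style update sketch for sequence-level rewards.
import torch

policy_logits = torch.randn(12, 50, requires_grad=True)     # toy per-token logits
tokens = torch.zeros(12, dtype=torch.long)                   # toy sampled completion
logprobs = torch.log_softmax(policy_logits, dim=-1)[torch.arange(12), tokens]

reward, baseline = 0.8, 0.5                                  # reward score, baseline
loss = -(reward - baseline) * logprobs.sum()                 # REINFORCE objective
loss.backward()
print(policy_logits.grad.shape)                              # gradients reach the policy
```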
null | null |
2402.14744
| null | null |
http://arxiv.org/pdf/2402.14744v2
|
2024-05-23T06:30:23Z
|
2024-02-22T18:03:14Z
|
Large Language Models as Urban Residents: An LLM Agent Framework for
Personal Mobility Generation
|
This paper introduces a novel approach using Large Language Models (LLMs) integrated into an agent framework for flexible and effective personal mobility generation. LLMs overcome the limitations of previous models by effectively processing semantic data and offering versatility in modeling various tasks. Our approach addresses three research questions: aligning LLMs with real-world urban mobility data, developing reliable activity generation strategies, and exploring LLM applications in urban mobility. The key technical contribution is a novel LLM agent framework that accounts for individual activity patterns and motivations, including a self-consistency approach to align LLMs with real-world activity data and a retrieval-augmented strategy for interpretable activity generation. We evaluate our LLM agent framework and compare it with state-of-the-art personal mobility generation approaches, demonstrating the effectiveness of our approach and its potential applications in urban mobility. Overall, this study marks the pioneering work of designing an LLM agent framework for activity generation based on real-world human activity data, offering a promising tool for urban mobility analysis.
|
[
"['Jiawei Wang' 'Renhe Jiang' 'Chuang Yang' 'Zengqing Wu' 'Makoto Onizuka'\n 'Ryosuke Shibasaki' 'Noboru Koshizuka' 'Chuan Xiao']"
] |
null | null |
2402.14746
| null | null |
http://arxiv.org/pdf/2402.14746v1
|
2024-02-22T18:06:19Z
|
2024-02-22T18:06:19Z
|
Scaling Efficient LLMs
|
Trained LLMs are typically sparse in that most of the parameters are zero, raising questions about efficiency. In response, we inquire into efficient LLMs, i.e. those with the fewest parameters that achieve the desired accuracy on a training corpus. Specifically, we compare theoretical and empirical estimates for training loss at current scale to obtain upper and lower bounds on the number of unique sequences in a natural training corpus as a function of its size. Our result implies that (1) to double the number of skills represented in a training corpus, the corpus must scale roughly three to five fold; (2) for efficient LLMs, the number of parameters $N$ and the size $D$ of a natural training corpus scale as $N \sim D^{0.58}$; and (3) if the number of parameters of an LLM is smaller than the number of unique sequences in the training corpus, scaling up can uncover emergent skills.
|
[
"['B. N. Kausik']"
] |
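The stated relation $N \sim D^{0.58}$ makes only ratios meaningful (the constant is unknown), but those ratios are easy to compute:

```python
# Implication of N ~ D^0.58: parameter growth implied by corpus growth.
corpus_growth = 4.0                        # e.g. quadruple the corpus size D
param_growth = corpus_growth ** 0.58       # implied growth in parameters N
print(f"{param_growth:.2f}x parameters")   # about 2.24x for a 4x larger corpus
```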
null | null |
2402.14753
| null | null |
http://arxiv.org/pdf/2402.14753v1
|
2024-02-22T18:12:48Z
|
2024-02-22T18:12:48Z
|
Prompting a Pretrained Transformer Can Be a Universal Approximator
|
Despite the widespread adoption of prompting, prompt tuning, and prefix-tuning of transformer models, our theoretical understanding of these fine-tuning methods remains limited. A key question is whether one can arbitrarily modify the behavior of a pretrained model by prompting or prefix-tuning it. Formally, can prompting and prefix-tuning a pretrained model universally approximate sequence-to-sequence functions? This paper answers in the affirmative and demonstrates that much smaller pretrained models than previously thought can be universal approximators when prefixed. In fact, the attention mechanism is uniquely suited for universal approximation, with prefix-tuning a single attention head being sufficient to approximate any continuous function. Moreover, any sequence-to-sequence function can be approximated by prefixing a transformer with depth linear in the sequence length. Beyond these density-type results, we also offer Jackson-type bounds on the length of the prefix needed to approximate a function to a desired precision.
|
[
"['Aleksandar Petrov' 'Philip H. S. Torr' 'Adel Bibi']"
] |
null | null |
2402.14758
| null | null |
http://arxiv.org/pdf/2402.14758v2
|
2024-06-12T16:53:22Z
|
2024-02-22T18:20:22Z
|
Batch and match: black-box variational inference with a score-based
divergence
|
Most leading implementations of black-box variational inference (BBVI) are based on optimizing a stochastic evidence lower bound (ELBO). But such approaches to BBVI often converge slowly due to the high variance of their gradient estimates and their sensitivity to hyperparameters. In this work, we propose batch and match (BaM), an alternative approach to BBVI based on a score-based divergence. Notably, this score-based divergence can be optimized by a closed-form proximal update for Gaussian variational families with full covariance matrices. We analyze the convergence of BaM when the target distribution is Gaussian, and we prove that in the limit of infinite batch size the variational parameter updates converge exponentially quickly to the target mean and covariance. We also evaluate the performance of BaM on Gaussian and non-Gaussian target distributions that arise from posterior inference in hierarchical and deep generative models. In these experiments, we find that BaM typically converges in fewer (and sometimes significantly fewer) gradient evaluations than leading implementations of BBVI based on ELBO maximization.
|
[
"['Diana Cai' 'Chirag Modi' 'Loucas Pillaud-Vivien' 'Charles C. Margossian'\n 'Robert M. Gower' 'David M. Blei' 'Lawrence K. Saul']"
] |
null | null |
2402.14759
| null | null |
http://arxiv.org/pdf/2402.14759v1
|
2024-02-22T18:20:25Z
|
2024-02-22T18:20:25Z
|
Generalising realisability in statistical learning theory under
epistemic uncertainty
|
The purpose of this paper is to look into how central notions in statistical learning theory, such as realisability, generalise under the assumption that the train and test distributions are drawn from the same credal set, i.e., a convex set of probability distributions. This can be considered as a first step towards a more general treatment of statistical learning under epistemic uncertainty.
|
[
"['Fabio Cuzzolin']"
] |
null | null |
2402.14760
| null | null |
http://arxiv.org/pdf/2402.14760v2
|
2024-06-08T16:10:45Z
|
2024-02-22T18:20:33Z
|
Generalizing Reward Modeling for Out-of-Distribution Preference Learning
|
Preference learning (PL) with large language models (LLMs) aims to align the LLMs' generations with human preferences. Previous work on reinforcement learning from human feedback (RLHF) has demonstrated promising results in in-distribution PL. However, due to the difficulty of obtaining human feedback, discretely training reward models for every encountered distribution is challenging. Thus, out-of-distribution (OOD) PL is practically useful for enhancing the generalization ability of LLMs with limited preference feedback. This work addresses OOD PL by optimizing a general reward model through a meta-learning approach. During meta-training, a bilevel optimization algorithm is utilized to learn a reward model capable of guiding policy learning to align with human preferences across various distributions. When encountering a test distribution, the meta-test procedure conducts regularized policy optimization using the learned reward model for PL. We theoretically demonstrate the convergence rate of the bilevel optimization algorithm under reasonable assumptions. Additionally, we conduct experiments on two text generation tasks across 20 held-out domains and outperform a variety of strong baselines across various evaluation metrics.
|
[
"['Chen Jia']"
] |
null | null |
2402.14776
| null | null |
http://arxiv.org/pdf/2402.14776v2
|
2024-05-21T07:36:14Z
|
2024-02-22T18:35:05Z
|
ESE: Espresso Sentence Embeddings
|
High-quality sentence embeddings are fundamental in many natural language processing (NLP) tasks, such as semantic textual similarity (STS) and retrieval-augmented generation (RAG). Nevertheless, most existing methods leverage fixed-length embeddings from full-layer language models, which lack the scalability to accommodate the diverse available resources across various applications. In view of this gap, we propose a novel sentence embedding model, Espresso Sentence Embeddings (ESE), with two learning processes. First, the learn-to-express process encodes more salient representations in lower layers. Second, the learn-to-compress process compacts essential features into the initial dimensions using Principal Component Analysis (PCA). This way, ESE can scale model depth via the former process and embedding size via the latter. Extensive experiments on STS and RAG suggest that ESE can effectively produce high-quality embeddings with less model depth and embedding size, enhancing embedding inference efficiency.
|
[
"['Xianming Li' 'Zongxi Li' 'Jing Li' 'Haoran Xie' 'Qing Li']"
] |
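The learn-to-compress idea (essential features packed into the initial dimensions, so embeddings can simply be truncated) can be mimicked post hoc with plain PCA, as below; the actual ESE training procedure is more involved than this rotation.

```python
# PCA-style illustration of truncation-friendly embeddings: after rotating
# onto principal components, the first d dimensions keep most of the variance.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
latent = rng.normal(size=(1000, 32))                   # low intrinsic dimension
emb = latent @ rng.normal(size=(32, 256)) + 0.01 * rng.normal(size=(1000, 256))

pca = PCA().fit(emb)
small = pca.transform(emb)[:, :64]                     # scaled-down embedding size
print(pca.explained_variance_ratio_[:64].sum())        # close to 1.0 here
```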
null | null |
2402.14777
| null | null |
http://arxiv.org/pdf/2402.14777v1
|
2024-02-22T18:37:33Z
|
2024-02-22T18:37:33Z
|
Causal Imputation for Counterfactual SCMs: Bridging Graphs and Latent
Factor Models
|
We consider the task of causal imputation, where we aim to predict the outcomes of some set of actions across a wide range of possible contexts. As a running example, we consider predicting how different drugs affect cells from different cell types. We study the index-only setting, where the actions and contexts are categorical variables with a finite number of possible values. Even in this simple setting, a practical challenge arises, since often only a small subset of possible action-context pairs have been studied. Thus, models must extrapolate to novel action-context pairs, which can be framed as a form of matrix completion with rows indexed by actions, columns indexed by contexts, and matrix entries corresponding to outcomes. We introduce a novel SCM-based model class, where the outcome is expressed as a counterfactual, actions are expressed as interventions on an instrumental variable, and contexts are defined based on the initial state of the system. We show that, under a linearity assumption, this setup induces a latent factor model over the matrix of outcomes, with an additional fixed effect term. To perform causal prediction based on this model class, we introduce a simple extension of the Synthetic Interventions estimator (Agarwal et al., 2020). We evaluate several matrix completion approaches on the PRISM drug repurposing dataset, showing that our method outperforms all other considered matrix completion approaches.
|
[
"['Alvaro Ribot' 'Chandler Squires' 'Caroline Uhler']"
] |
null | null |
2402.14781
| null | null |
http://arxiv.org/pdf/2402.14781v1
|
2024-02-22T18:39:24Z
|
2024-02-22T18:39:24Z
|
Rao-Blackwellising Bayesian Causal Inference
|
Bayesian causal inference, i.e., inferring a posterior over causal models for the use in downstream causal reasoning tasks, poses a hard computational inference problem that is little explored in the literature. In this work, we combine techniques from order-based MCMC structure learning with recent advances in gradient-based graph learning into an effective Bayesian causal inference framework. Specifically, we decompose the problem of inferring the causal structure into (i) inferring a topological order over variables and (ii) inferring the parent sets for each variable. When limiting the number of parents per variable, we can exactly marginalise over the parent sets in polynomial time. We further use Gaussian processes to model the unknown causal mechanisms, which also allows their exact marginalisation. This introduces a Rao-Blackwellization scheme, where all components are eliminated from the model, except for the causal order, for which we learn a distribution via gradient-based optimisation. The combination of Rao-Blackwellization with our sequential inference procedure for causal orders yields state-of-the-art performance on linear and non-linear additive noise benchmarks with scale-free and Erdős-Rényi graph structures.
|
[
"['Christian Toth' 'Christian Knoll' 'Franz Pernkopf' 'Robert Peharz']"
] |
null | null |
2402.14789
| null | null |
http://arxiv.org/pdf/2402.14789v1
|
2024-02-22T18:46:22Z
|
2024-02-22T18:46:22Z
|
Self-Guided Masked Autoencoders for Domain-Agnostic Self-Supervised
Learning
|
Self-supervised learning excels in learning representations from large amounts of unlabeled data, demonstrating success across multiple data modalities. Yet, extending self-supervised learning to new modalities is non-trivial because the specifics of existing methods are tailored to each domain, such as domain-specific augmentations which reflect the invariances in the target task. While masked modeling is promising as a domain-agnostic framework for self-supervised learning because it does not rely on input augmentations, its mask sampling procedure remains domain-specific. We present Self-guided Masked Autoencoders (SMA), a fully domain-agnostic masked modeling method. SMA trains an attention based model using a masked modeling objective, by learning masks to sample without any domain-specific assumptions. We evaluate SMA on three self-supervised learning benchmarks in protein biology, chemical property prediction, and particle physics. We find SMA is capable of learning representations without domain-specific knowledge and achieves state-of-the-art performance on these three benchmarks.
|
[
"['Johnathan Xie' 'Yoonho Lee' 'Annie S. Chen' 'Chelsea Finn']"
] |
null | null |
2402.14792
| null | null |
http://arxiv.org/pdf/2402.14792v1
|
2024-02-22T18:50:18Z
|
2024-02-22T18:50:18Z
|
Consolidating Attention Features for Multi-view Image Editing
|
Large-scale text-to-image models enable a wide range of image editing techniques, using text prompts or even spatial controls. However, applying these editing methods to multi-view images depicting a single scene leads to 3D-inconsistent results. In this work, we focus on spatial control-based geometric manipulations and introduce a method to consolidate the editing process across various views. We build on two insights: (1) maintaining consistent features throughout the generative process helps attain consistency in multi-view editing, and (2) the queries in self-attention layers significantly influence the image structure. Hence, we propose to improve the geometric consistency of the edited images by enforcing the consistency of the queries. To do so, we introduce QNeRF, a neural radiance field trained on the internal query features of the edited images. Once trained, QNeRF can render 3D-consistent queries, which are then softly injected back into the self-attention layers during generation, greatly improving multi-view consistency. We refine the process through a progressive, iterative method that better consolidates queries across the diffusion timesteps. We compare our method to a range of existing techniques and demonstrate that it can achieve better multi-view consistency and higher fidelity to the input scene. These advantages allow us to train NeRFs with fewer visual artifacts that are better aligned with the target geometry.
|
[
"['Or Patashnik' 'Rinon Gal' 'Daniel Cohen-Or' 'Jun-Yan Zhu'\n 'Fernando De la Torre']"
] |
null | null |
2402.14800
| null | null |
http://arxiv.org/pdf/2402.14800v2
|
2024-05-30T16:24:16Z
|
2024-02-22T18:56:07Z
|
Not All Experts are Equal: Efficient Expert Pruning and Skipping for
Mixture-of-Experts Large Language Models
|
A pivotal advancement in the progress of large language models (LLMs) is the emergence of the Mixture-of-Experts (MoE) LLMs. Compared to traditional LLMs, MoE LLMs can achieve higher performance with fewer parameters, but it is still hard to deploy them due to their immense parameter sizes. Different from previous weight pruning methods that rely on specifically designed hardware, this paper mainly aims to enhance the deployment efficiency of MoE LLMs by introducing plug-and-play expert-level sparsification techniques. Specifically, we propose, for the first time to our best knowledge, post-training approaches for task-agnostic and task-specific expert pruning and skipping of MoE LLMs, tailored to improve deployment efficiency while maintaining model performance across a wide range of tasks. Extensive experiments show that our proposed methods can simultaneously reduce model sizes and increase the inference speed, while maintaining satisfactory performance. Data and code will be available at https://github.com/Lucky-Lance/Expert_Sparsity.
|
[
"['Xudong Lu' 'Qi Liu' 'Yuhui Xu' 'Aojun Zhou' 'Siyuan Huang' 'Bo Zhang'\n 'Junchi Yan' 'Hongsheng Li']"
] |
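As a generic illustration of expert-level sparsification (not the paper's actual criterion), one can rank each layer's experts by how often the router selects them on calibration data and drop the least-used ones:

```python
# Toy expert pruning: keep the experts most used by top-1 routing decisions.
import numpy as np

def prune_experts(routing_logits, keep=6):
    # routing_logits: (tokens, experts) router scores on a calibration set
    chosen = routing_logits.argmax(axis=1)              # top-1 routing decisions
    counts = np.bincount(chosen, minlength=routing_logits.shape[1])
    return np.sort(np.argsort(counts)[-keep:])          # expert indices to keep

rng = np.random.default_rng(0)
print(prune_experts(rng.normal(size=(10000, 8)), keep=6))
```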
null | null |
2402.14802
| null | null |
http://arxiv.org/pdf/2402.14802v1
|
2024-02-22T18:56:31Z
|
2024-02-22T18:56:31Z
|
Link Prediction under Heterophily: A Physics-Inspired Graph Neural
Network Approach
|
In the past years, Graph Neural Networks (GNNs) have become the de facto standard in various deep learning domains, thanks to their flexibility in modeling real-world phenomena represented as graphs. However, the message-passing mechanism of GNNs faces challenges in learnability and expressivity, hindering high performance on heterophilic graphs, where adjacent nodes frequently have different labels. Most existing solutions addressing these challenges are primarily confined to specific benchmarks focused on node classification tasks. This narrow focus restricts the potential impact that link prediction under heterophily could offer in several applications, including recommender systems. For example, in social networks, two users may be connected for some latent reason, making it challenging to predict such connections in advance. Physics-inspired GNNs such as GRAFF have contributed significantly to enhancing node classification performance under heterophily, thanks to the adoption of physics biases in the message-passing. Drawing inspiration from these findings, we advocate that the methodology employed by GRAFF can improve link prediction performance as well. To further explore this hypothesis, we introduce GRAFF-LP, an extension of GRAFF to link prediction. We evaluate its efficacy within a recent collection of heterophilic graphs, establishing a new benchmark for link prediction under heterophily. Our approach surpasses previous methods on most of the datasets, showcasing strong flexibility across contexts and achieving relative AUROC improvements of up to 26.7%.
|
[
"['Andrea Giuseppe Di Francesco' 'Francesco Caso' 'Maria Sofia Bucarelli'\n 'Fabrizio Silvestri']"
] |
null | null |
2402.14804
| null | null |
http://arxiv.org/pdf/2402.14804v1
|
2024-02-22T18:56:38Z
|
2024-02-22T18:56:38Z
|
Measuring Multimodal Mathematical Reasoning with MATH-Vision Dataset
|
Recent advancements in Large Multimodal Models (LMMs) have shown promising results in mathematical reasoning within visual contexts, with models approaching human-level performance on existing benchmarks such as MathVista. However, we observe significant limitations in the diversity of questions and breadth of subjects covered by these benchmarks. To address this issue, we present the MATH-Vision (MATH-V) dataset, a meticulously curated collection of 3,040 high-quality mathematical problems with visual contexts sourced from real math competitions. Spanning 16 distinct mathematical disciplines and graded across 5 levels of difficulty, our dataset provides a comprehensive and diverse set of challenges for evaluating the mathematical reasoning abilities of LMMs. Through extensive experimentation, we unveil a notable performance gap between current LMMs and human performance on MATH-V, underscoring the imperative for further advancements in LMMs. Moreover, our detailed categorization allows for a thorough error analysis of LMMs, offering valuable insights to guide future research and development. The project is available at https://mathvision-cuhk.github.io
|
[
"['Ke Wang' 'Junting Pan' 'Weikang Shi' 'Zimu Lu' 'Mingjie Zhan'\n 'Hongsheng Li']"
] |
null | null |
2402.14806
| null | null |
http://arxiv.org/pdf/2402.14806v1
|
2024-02-22T18:58:05Z
|
2024-02-22T18:58:05Z
|
Difference Learning for Air Quality Forecasting Transport Emulation
|
Human health is negatively impacted by poor air quality including increased risk for respiratory and cardiovascular disease. Due to a recent increase in extreme air quality events, both globally and locally in the United States, finer resolution air quality forecasting guidance is needed to effectively adapt to these events. The National Oceanic and Atmospheric Administration provides air quality forecasting guidance for the Continental United States. Their air quality forecasting model is based on a 15 km spatial resolution; however, the goal is to reach a three km spatial resolution. This is currently not feasible due in part to prohibitive computational requirements for modeling the transport of chemical species. In this work, we describe a deep learning transport emulator that is able to reduce computations while maintaining skill comparable with the existing numerical model. We show how this method maintains skill in the presence of extreme air quality events, making it a potential candidate for operational use. We also explore evaluating how well this model maintains the physical properties of the modeled transport for a given set of species.
|
[
"['Reed River Chen' 'Christopher Ribaudo' 'Jennifer Sleeman'\n 'Chace Ashcraft' 'Collin Kofroth' 'Marisa Hughes' 'Ivanka Stajner'\n 'Kevin Viner' 'Kai Wang']"
] |
null | null |
2402.14807
| null | null |
http://arxiv.org/pdf/2402.14807v3
|
2024-05-26T22:46:45Z
|
2024-02-22T18:58:27Z
|
A Decision-Language Model (DLM) for Dynamic Restless Multi-Armed Bandit
Tasks in Public Health
|
Restless multi-armed bandits (RMAB) have demonstrated success in optimizing resource allocation for large beneficiary populations in public health settings. Unfortunately, RMAB models lack flexibility to adapt to evolving public health policy priorities. Concurrently, Large Language Models (LLMs) have emerged as adept automated planners across domains of robotic control and navigation. In this paper, we propose a Decision Language Model (DLM) for RMABs, enabling dynamic fine-tuning of RMAB policies in public health settings using human-language commands. We propose using LLMs as automated planners to (1) interpret human policy preference prompts, (2) propose reward functions as code for a multi-agent RMAB environment, and (3) iterate on the generated reward functions using feedback from grounded RMAB simulations. We illustrate the application of DLM in collaboration with ARMMAN, an India-based non-profit promoting preventative care for pregnant mothers, that currently relies on RMAB policies to optimally allocate health worker calls to low-resource populations. We conduct a technology demonstration in simulation using the Gemini Pro model, showing DLM can dynamically shape policy outcomes using only human prompts as input.
|
[
"['Nikhil Behari' 'Edwin Zhang' 'Yunfan Zhao' 'Aparna Taneja'\n 'Dheeraj Nagaraj' 'Milind Tambe']"
] |
null | null |
2402.14809
| null | null |
http://arxiv.org/pdf/2402.14809v4
|
2024-06-01T07:46:28Z
|
2024-02-22T18:59:02Z
|
CriticBench: Benchmarking LLMs for Critique-Correct Reasoning
|
The ability of Large Language Models (LLMs) to critique and refine their reasoning is crucial for their application in evaluation, feedback provision, and self-improvement. This paper introduces CriticBench, a comprehensive benchmark designed to assess LLMs' abilities to critique and rectify their reasoning across a variety of tasks. CriticBench encompasses five reasoning domains: mathematical, commonsense, symbolic, coding, and algorithmic. It compiles 15 datasets and incorporates responses from three LLM families. Utilizing CriticBench, we evaluate and dissect the performance of 17 LLMs in generation, critique, and correction reasoning, i.e., GQC reasoning. Our findings reveal: (1) a linear relationship in GQC capabilities, with critique-focused training markedly enhancing performance; (2) a task-dependent variation in correction effectiveness, with logic-oriented tasks being more amenable to correction; (3) GQC knowledge inconsistencies that decrease as model size increases; and (4) an intriguing inter-model critiquing dynamic, where stronger models are better at critiquing weaker ones, while weaker models can surprisingly surpass stronger ones in their self-critique. We hope these insights into the nuanced critique-correct reasoning of LLMs will foster further research in LLM critique and self-improvement.
|
[
"['Zicheng Lin' 'Zhibin Gou' 'Tian Liang' 'Ruilin Luo' 'Haowei Liu'\n 'Yujiu Yang']"
] |
null | null |
2402.14810
| null | null |
http://arxiv.org/pdf/2402.14810v1
|
2024-02-22T18:59:21Z
|
2024-02-22T18:59:21Z
|
GeneOH Diffusion: Towards Generalizable Hand-Object Interaction
Denoising via Denoising Diffusion
|
In this work, we tackle the challenging problem of denoising hand-object interactions (HOI). Given an erroneous interaction sequence, the objective is to refine the incorrect hand trajectory to remove interaction artifacts for a perceptually realistic sequence. This challenge involves intricate interaction noise, including unnatural hand poses and incorrect hand-object relations, alongside the necessity for robust generalization to new interactions and diverse noise patterns. We tackle those challenges through a novel approach, GeneOH Diffusion, incorporating two key designs: an innovative contact-centric HOI representation named GeneOH and a new domain-generalizable denoising scheme. The contact-centric representation GeneOH informatively parameterizes the HOI process, facilitating enhanced generalization across various HOI scenarios. The new denoising scheme consists of a canonical denoising model trained to project noisy data samples from a whitened noise space to a clean data manifold and a "denoising via diffusion" strategy which can handle input trajectories with various noise patterns by first diffusing them to align with the whitened noise space and cleaning via the canonical denoiser. Extensive experiments on four benchmarks with significant domain variations demonstrate the superior effectiveness of our method. GeneOH Diffusion also shows promise for various downstream applications. Project website: https://meowuu7.github.io/GeneOH-Diffusion/.
|
[
"['Xueyi Liu' 'Li Yi']"
] |
null | null |
2402.14811
| null | null |
http://arxiv.org/pdf/2402.14811v1
|
2024-02-22T18:59:24Z
|
2024-02-22T18:59:24Z
|
Fine-Tuning Enhances Existing Mechanisms: A Case Study on Entity
Tracking
|
Fine-tuning on generalized tasks such as instruction following, code generation, and mathematics has been shown to enhance language models' performance on a range of tasks. Nevertheless, explanations of how such fine-tuning influences the internal computations in these models remain elusive. We study how fine-tuning affects the internal mechanisms implemented in language models. As a case study, we explore the property of entity tracking, a crucial facet of language comprehension, where models fine-tuned on mathematics have substantial performance gains. We identify the mechanism that enables entity tracking and show that (i) the same circuit primarily implements entity tracking in both the original model and its fine-tuned versions. In fact, the entity tracking circuit of the original model, when evaluated on the fine-tuned versions, performs better than the full original model. (ii) The circuits of all the models implement roughly the same functionality: entity tracking is performed by tracking the position of the correct entity in both the original model and its fine-tuned versions. (iii) The performance boost in the fine-tuned models is primarily attributable to their improved ability to handle the augmented positional information. To uncover these findings, we employ Path Patching; DCM, which automatically detects model components responsible for specific semantics; and CMAP, a new approach for patching activations across models to reveal improved mechanisms. Our findings suggest that fine-tuning enhances, rather than fundamentally alters, the mechanistic operation of the model.
|
[
"['Nikhil Prakash' 'Tamar Rott Shaham' 'Tal Haklay' 'Yonatan Belinkov'\n 'David Bau']"
] |
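Of the techniques named above, the activation-patching idea behind CMAP can be illustrated with PyTorch forward hooks: capture an activation in one model and substitute it into another at the matching site. This is a generic sketch of cross-model patching on toy models, not the paper's exact CMAP procedure.

```python
import torch
import torch.nn as nn

def capture_activation(model: nn.Module, layer: nn.Module,
                       inputs: torch.Tensor) -> torch.Tensor:
    """Run `model` once and record the output of `layer`."""
    store = {}
    handle = layer.register_forward_hook(
        lambda _m, _i, out: store.__setitem__("act", out.detach()))
    with torch.no_grad():
        model(inputs)
    handle.remove()
    return store["act"]

def run_with_patch(model: nn.Module, layer: nn.Module,
                   patched: torch.Tensor, inputs: torch.Tensor) -> torch.Tensor:
    """Run `model` with `layer`'s output replaced by an activation
    captured from another model (the cross-model patch)."""
    handle = layer.register_forward_hook(lambda _m, _i, _out: patched)
    with torch.no_grad():
        out = model(inputs)
    handle.remove()
    return out

# Toy usage: two small MLPs stand in for the base / fine-tuned LMs.
base = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))
tuned = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))
x = torch.randn(1, 8)
act = capture_activation(tuned, tuned[0], x)    # from the fine-tuned model
logits = run_with_patch(base, base[0], act, x)  # patched into the base model
```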
null | null |
2402.14815
| null | null |
http://arxiv.org/pdf/2402.14815v1
|
2024-02-22T18:59:53Z
|
2024-02-22T18:59:53Z
|
Demographic Bias of Expert-Level Vision-Language Foundation Models in
Medical Imaging
|
Advances in artificial intelligence (AI) have achieved expert-level performance in medical imaging applications. Notably, self-supervised vision-language foundation models can detect a broad spectrum of pathologies without relying on explicit training annotations. However, it is crucial to ensure that these AI models do not mirror or amplify human biases, thereby disadvantaging historically marginalized groups such as females or Black patients. The manifestation of such biases could systematically delay essential medical care for certain patient subgroups. In this study, we investigate the algorithmic fairness of state-of-the-art vision-language foundation models in chest X-ray diagnosis across five globally-sourced datasets. Our findings reveal that compared to board-certified radiologists, these foundation models consistently underdiagnose marginalized groups, with even higher rates seen in intersectional subgroups, such as Black female patients. Such demographic biases present over a wide range of pathologies and demographic attributes. Further analysis of the model embedding uncovers its significant encoding of demographic information. Deploying AI systems with these biases in medical imaging can intensify pre-existing care disparities, posing potential challenges to equitable healthcare access and raising ethical questions about their clinical application.
|
[
"['Yuzhe Yang' 'Yujia Liu' 'Xin Liu' 'Avanti Gulhane'\n 'Domenico Mastrodicasa' 'Wei Wu' 'Edward J Wang' 'Dushyant W Sahani'\n 'Shwetak Patel']"
] |
null | null |
2402.14817
| null | null |
http://arxiv.org/pdf/2402.14817v3
|
2024-04-04T16:27:06Z
|
2024-02-22T18:59:56Z
|
Cameras as Rays: Pose Estimation via Ray Diffusion
|
Estimating camera poses is a fundamental task for 3D reconstruction and remains challenging given sparsely sampled views (<10). In contrast to existing approaches that pursue top-down prediction of global parametrizations of camera extrinsics, we propose a distributed representation of camera pose that treats a camera as a bundle of rays. This representation allows for a tight coupling with spatial image features improving pose precision. We observe that this representation is naturally suited for set-level transformers and develop a regression-based approach that maps image patches to corresponding rays. To capture the inherent uncertainties in sparse-view pose inference, we adapt this approach to learn a denoising diffusion model which allows us to sample plausible modes while improving performance. Our proposed methods, both regression- and diffusion-based, demonstrate state-of-the-art performance on camera pose estimation on CO3D while generalizing to unseen object categories and in-the-wild captures.
|
[
"['Jason Y. Zhang' 'Amy Lin' 'Moneish Kumar' 'Tzu-Hsuan Yang'\n 'Deva Ramanan' 'Shubham Tulsiani']"
] |
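The ray-bundle representation in the entry above can be made concrete with Plücker coordinates, a standard (direction, moment) parameterization of rays; that this matches the paper's exact parameterization is an assumption of this sketch. Given pinhole intrinsics K and a world-to-camera pose (R, t), each pixel maps to one ray:

```python
import numpy as np

def camera_to_plucker_rays(K: np.ndarray, R: np.ndarray, t: np.ndarray,
                           pixels: np.ndarray) -> np.ndarray:
    """Map (N, 2) pixel coordinates to (N, 6) Plücker rays [d | m].
    K: 3x3 intrinsics; R, t: world-to-camera rotation and translation."""
    homog = np.hstack([pixels, np.ones((pixels.shape[0], 1))])
    cam_dirs = (np.linalg.inv(K) @ homog.T).T       # rays in camera frame
    dirs = (R.T @ cam_dirs.T).T                     # rotate to world frame
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    origin = -R.T @ t                               # camera center (world)
    moments = np.cross(np.broadcast_to(origin, dirs.shape), dirs)
    return np.hstack([dirs, moments])

K = np.array([[500., 0., 32.], [0., 500., 32.], [0., 0., 1.]])
rays = camera_to_plucker_rays(K, np.eye(3), np.zeros(3),
                              np.array([[32., 32.], [0., 0.]]))
```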
null | null |
2402.14825
| null | null |
http://arxiv.org/pdf/2402.14825v1
|
2024-02-08T11:04:34Z
|
2024-02-08T11:04:34Z
|
Deepfake Detection and the Impact of Limited Computing Capabilities
|
The rapid development of technologies and artificial intelligence makes deepfakes an increasingly sophisticated and challenging-to-identify technique. To ensure the accuracy of information and control misinformation and mass manipulation, it is of paramount importance to discover and develop artificial intelligence models that enable the generic detection of forged videos. This work aims to address the detection of deepfakes across various existing datasets in a scenario with limited computing resources. The goal is to analyze the applicability of different deep learning techniques under these restrictions and explore possible approaches to enhance their efficiency.
|
[
"['Paloma Cantero-Arjona' 'Alfonso Sánchez-Macián']"
] |
null | null |
2402.14827
| null | null |
http://arxiv.org/pdf/2402.14827v1
|
2024-02-10T17:59:12Z
|
2024-02-10T17:59:12Z
|
Optimizing Uterine Synchronization Analysis in Pregnancy and Labor
through Window Selection and Node Optimization
|
Preterm labor (PL) has globally become the leading cause of death in children under the age of 5 years. To address this problem, this paper provides a new approach based on analyzing EHG signals, which are recorded on the mother's abdomen during labor and pregnancy. The EHG signal reflects the electrical activity that induces the mechanical contraction of the myometrium. Because EHGs are known to be non-stationary signals, and because we anticipate connectivity to change during contractions, we applied a windowing approach to real signals to help us identify the best windows and the best nodes with the most significant data to be used for classification. The suggested pipeline includes: i) dividing the 16 EHG signals recorded from the abdomen of pregnant women into N windows; ii) computing connectivity matrices for each window; iii) applying graph theory-based measures to the connectivity matrices of each window; iv) applying the consensus matrix to each window in order to retrieve the best windows and the best nodes. Following that, several neural network and machine learning methods are applied to the best windows and best nodes to categorize pregnancy and labor contractions, based on the different input parameters (connectivity method alone, connectivity method plus graph parameters, best nodes, all nodes, best windows, all windows). Results showed that the best nodes are nodes 8, 9, 10, 11, and 12, while the best windows are 2, 4, and 5. The classification results obtained by using only these best nodes are better than those obtained using all nodes. The results are always better when using the full burst, whatever the chosen nodes. Thus, the windowing approach proved to be an innovative technique that can improve the differentiation between labor and pregnancy EHG signals.
|
[
"['Kamil Bader El Dine' 'Noujoud Nader' 'Mohamad Khalil' 'Catherine Marque']"
] |
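A minimal version of steps i)-iv) of the pipeline above, using absolute Pearson correlation as the (assumed) connectivity estimator and node strength as the graph measure:

```python
import numpy as np

def windowed_connectivity(ehg: np.ndarray, n_windows: int):
    """ehg: (16, T) multichannel EHG burst. Splits it into windows,
    computes one connectivity matrix per window (here: absolute
    Pearson correlation), and derives node strength per window."""
    step = ehg.shape[1] // n_windows
    conn, strength = [], []
    for w in range(n_windows):
        seg = ehg[:, w * step:(w + 1) * step]
        C = np.abs(np.corrcoef(seg))          # 16x16 connectivity matrix
        np.fill_diagonal(C, 0.0)
        conn.append(C)
        strength.append(C.sum(axis=1))        # graph measure per node
    return np.stack(conn), np.stack(strength)

# Consensus across windows: average node strength, keep the top nodes.
ehg = np.random.randn(16, 6000)               # synthetic stand-in signal
_, strength = windowed_connectivity(ehg, n_windows=5)
best_nodes = np.argsort(strength.mean(axis=0))[-5:]
```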
null | null |
2402.14833
| null | null |
http://arxiv.org/pdf/2402.14833v1
|
2024-02-17T22:37:17Z
|
2024-02-17T22:37:17Z
|
CliqueParcel: An Approach For Batching LLM Prompts That Jointly
Optimizes Efficiency And Faithfulness
|
Large language models (LLMs) have become pivotal in recent research. However, during the inference process, LLMs still require substantial resources. In this paper, we propose CliqueParcel, a method designed to improve the efficiency of LLMs via prompt batching. Existing strategies to optimize inference efficiency often compromise on output quality, leading to a discounted output problem. This issue might result in reduced accuracy or outputs that are less detailed. CliqueParcel is our answer to this challenge. While ensuring accuracy and minimizing deviations from the original outputs (i.e., faithfulness), our method significantly improves efficiency during inference. To lay the groundwork, we first redefine efficiency measurements by excluding the reduction in running time due to shorter lengths. Then, we provide a comprehensive trade-off between efficiency and faithfulness to clarify the nature of the 'discounted output' problem. Within the CliqueParcel framework, we suggest multiple batching sub-methods and discuss the specific scenarios in which they can be applied. During evaluation, CliqueParcel is tested on eight widely recognized datasets, which can be classified into three types: reading comprehension, open-source question-answering, and reasoning. Our experiments explore the performance of CliqueParcel, including efficiency, faithfulness, and the trade-off between them. This work provides novel insights into inference efficiency and demonstrates promising performance.
|
[
"['Jiayi Liu' 'Tinghan Yang' 'Jennifer Neville']"
] |
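The redefined efficiency measurement in the CliqueParcel entry above — discounting speedups that come merely from shorter outputs — and the faithfulness side of the trade-off can be formalized in many ways; the sketch below is one assumed instantiation, not the paper's exact metrics.

```python
from difflib import SequenceMatcher

def length_adjusted_speedup(t_separate: float, t_batched: float,
                            tokens_separate: int, tokens_batched: int) -> float:
    """Speedup from batching, discounting any gain that comes purely
    from the batched run emitting fewer output tokens."""
    raw_speedup = t_separate / t_batched
    length_ratio = tokens_separate / max(tokens_batched, 1)
    return raw_speedup / length_ratio

def faithfulness(original: str, batched: str) -> float:
    """Crude proxy: similarity of the batched answer to the answer the
    same prompt receives when sent on its own."""
    return SequenceMatcher(None, original, batched).ratio()

# Usage: a batched run that is 3x faster but emits half the tokens
# only earns a 1.5x length-adjusted speedup.
print(length_adjusted_speedup(30.0, 10.0, 1000, 500))
```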
null | null |
2402.14835
| null | null |
http://arxiv.org/pdf/2402.14835v1
|
2024-02-18T07:15:03Z
|
2024-02-18T07:15:03Z
|
MIKE: A New Benchmark for Fine-grained Multimodal Entity Knowledge
Editing
|
Multimodal knowledge editing represents a critical advancement in enhancing the capabilities of Multimodal Large Language Models (MLLMs). Despite its potential, current benchmarks predominantly focus on coarse-grained knowledge, leaving the intricacies of fine-grained (FG) multimodal entity knowledge largely unexplored. This gap presents a notable challenge, as FG entity recognition is pivotal for the practical deployment and effectiveness of MLLMs in diverse real-world scenarios. To bridge this gap, we introduce MIKE, a comprehensive benchmark and dataset specifically designed for the FG multimodal entity knowledge editing. MIKE encompasses a suite of tasks tailored to assess different perspectives, including Vanilla Name Answering, Entity-Level Caption, and Complex-Scenario Recognition. In addition, a new form of knowledge editing, Multi-step Editing, is introduced to evaluate the editing efficiency. Through our extensive evaluations, we demonstrate that the current state-of-the-art methods face significant challenges in tackling our proposed benchmark, underscoring the complexity of FG knowledge editing in MLLMs. Our findings spotlight the urgent need for novel approaches in this domain, setting a clear agenda for future research and development efforts within the community.
|
[
"['Jiaqi Li' 'Miaozeng Du' 'Chuanyi Zhang' 'Yongrui Chen' 'Nan Hu'\n 'Guilin Qi' 'Haiyun Jiang' 'Siyuan Cheng' 'Bozhong Tian']"
] |
null | null |
2402.14837
| null | null |
http://arxiv.org/pdf/2402.14837v1
|
2024-02-18T23:03:56Z
|
2024-02-18T23:03:56Z
|
An Empirical Categorization of Prompting Techniques for Large Language
Models: A Practitioner's Guide
|
Due to rapid advancements in the development of Large Language Models (LLMs), programming these models with prompts has recently gained significant attention. However, the sheer number of available prompt engineering techniques creates an overwhelming landscape for practitioners looking to utilize these tools. For the most efficient and effective use of LLMs, it is important to compile a comprehensive list of prompting techniques and establish a standardized, interdisciplinary categorization framework. In this survey, we examine some of the most well-known prompting techniques from both academic and practical viewpoints and classify them into seven distinct categories. We present an overview of each category, aiming to clarify their unique contributions and showcase their practical applications in real-world examples in order to equip fellow practitioners with a structured framework for understanding and categorizing prompting techniques tailored to their specific domains. We believe that this approach will help simplify the complex landscape of prompt engineering and enable more effective utilization of LLMs in various applications. By providing practitioners with a systematic approach to prompt categorization, we aim to assist in navigating the intricacies of effective prompt design for conversational pre-trained LLMs and inspire new possibilities in their respective fields.
|
[
"['Oluwole Fagbohun' 'Rachel M. Harrison' 'Anton Dereventsov']"
] |
null | null |
2402.14838
| null | null |
http://arxiv.org/pdf/2402.14838v1
|
2024-02-19T00:40:17Z
|
2024-02-19T00:40:17Z
|
RFBES at SemEval-2024 Task 8: Investigating Syntactic and Semantic
Features for Distinguishing AI-Generated and Human-Written Texts
|
Nowadays, the usage of Large Language Models (LLMs) has increased, and LLMs have been used to generate texts in different languages and for different tasks. Additionally, due to the participation of remarkable companies such as Google and OpenAI, LLMs are now more accessible, and people can easily use them. However, an important issue is how we can detect AI-generated texts from human-written ones. In this article, we have investigated the problem of AI-generated text detection from two different aspects: semantics and syntax. Finally, we present an AI model that can distinguish AI-generated texts from human-written ones with high accuracy on both multilingual and monolingual tasks using the M4 dataset. According to our results, using a semantic approach is more helpful for detection. However, the syntactic approach leaves considerable room for improvement and remains a promising direction for future work.
|
[
"['Mohammad Heydari Rad' 'Farhan Farsi' 'Shayan Bali' 'Romina Etezadi'\n 'Mehrnoush Shamsfard']"
] |
null | null |
2402.14843
| null | null |
http://arxiv.org/pdf/2402.14843v1
|
2024-02-19T09:24:02Z
|
2024-02-19T09:24:02Z
|
Text Diffusion with Reinforced Conditioning
|
Diffusion models have demonstrated exceptional capability in generating high-quality images, videos, and audio. Due to their adaptiveness in iterative refinement, they provide a strong potential for achieving better non-autoregressive sequence generation. However, existing text diffusion models still fall short in their performance due to a challenge in handling the discreteness of language. This paper thoroughly analyzes text diffusion models and uncovers two significant limitations: degradation of self-conditioning during training and misalignment between training and sampling. Motivated by our findings, we propose a novel Text Diffusion model called TREC, which mitigates the degradation with Reinforced Conditioning and the misalignment by Time-Aware Variance Scaling. Our extensive experiments demonstrate the competitiveness of TREC against autoregressive, non-autoregressive, and diffusion baselines. Moreover, qualitative analysis shows its advanced ability to fully utilize the diffusion process in refining samples.
|
[
"['Yuxuan Liu' 'Tianchi Yang' 'Shaohan Huang' 'Zihan Zhang' 'Haizhen Huang'\n 'Furu Wei' 'Weiwei Deng' 'Feng Sun' 'Qi Zhang']"
] |
null | null |
2402.14844
| null | null |
http://arxiv.org/pdf/2402.14844v1
|
2024-02-19T12:48:02Z
|
2024-02-19T12:48:02Z
|
The New Era of Dynamic Pricing: Synergizing Supervised Learning and
Quadratic Programming
|
In this paper, we explore a novel combination of supervised learning and quadratic programming to refine dynamic pricing models in the car rental industry. We utilize dynamic modeling of price elasticity, informed by ordinary least squares (OLS) metrics such as p-values, homoscedasticity, and error normality. These metrics, when their underlying assumptions hold, are integral in guiding a quadratic programming agent. The program is tasked with optimizing margin over a given finite target set.
|
[
"['Gustavo Bramao' 'Ilia Tarygin']"
] |
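A compact sketch of the pipeline above on synthetic data: fit a linear demand curve by OLS with statsmodels, inspect the diagnostics, and, if they pass, hand the fitted elasticity to a one-dimensional quadratic margin-optimization step. The linear demand form and all numbers are illustrative assumptions, not the paper's setup.

```python
import numpy as np
import statsmodels.api as sm
from scipy.optimize import minimize_scalar

# Fit a linear demand curve q = a + b * p by OLS on synthetic data.
rng = np.random.default_rng(0)
prices = rng.uniform(40, 120, size=200)
demand = 300 - 1.8 * prices + rng.normal(0, 10, size=200)
fit = sm.OLS(demand, sm.add_constant(prices)).fit()
a, b = fit.params                     # intercept and price slope
print(fit.pvalues)                    # diagnostics gate the next step

# If the OLS assumptions hold, maximize margin (p - c) * (a + b * p):
# a one-dimensional quadratic program, solved here numerically.
cost = 35.0
res = minimize_scalar(lambda p: -(p - cost) * (a + b * p),
                      bounds=(cost, 200.0), method="bounded")
print("optimal price:", res.x)
```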
null | null |
2402.14845
| null | null |
http://arxiv.org/pdf/2402.14845v1
|
2024-02-19T14:00:39Z
|
2024-02-19T14:00:39Z
|
Purifying Large Language Models by Ensembling a Small Language Model
|
The emerging success of large language models (LLMs) heavily relies on collecting abundant training data from external (untrusted) sources. Despite substantial efforts devoted to data cleaning and curation, well-constructed LLMs have been reported to suffer from copyright infringement, data poisoning, and/or privacy violations, which would impede practical deployment of LLMs. In this study, we propose a simple and easily implementable method for purifying LLMs from the negative effects caused by uncurated data, namely, through ensembling LLMs with benign and small language models (SLMs). Aside from theoretical guarantees, we perform comprehensive experiments to empirically confirm the efficacy of ensembling LLMs with SLMs, which can effectively preserve the performance of LLMs while mitigating issues such as copyright infringement, data poisoning, and privacy violations.
|
[
"['Tianlin Li' 'Qian Liu' 'Tianyu Pang' 'Chao Du' 'Qing Guo' 'Yang Liu'\n 'Min Lin']"
] |
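One natural way to ensemble a large LM with a small benign LM, as in the entry above, is to mix their next-token distributions at each decoding step; the sketch below assumes this simple linear mixture, which may differ from the paper's exact scheme.

```python
import torch

def ensemble_next_token(llm_logits: torch.Tensor,
                        slm_logits: torch.Tensor,
                        alpha: float = 0.3) -> torch.Tensor:
    """Mix the next-token distributions of a large LM and a small,
    benign LM; alpha weights the small model. The exact mixing rule
    in the paper may differ -- this is one natural instantiation."""
    p_llm = torch.softmax(llm_logits, dim=-1)
    p_slm = torch.softmax(slm_logits, dim=-1)
    return (1.0 - alpha) * p_llm + alpha * p_slm

# Usage: at each decoding step, pick/sample from the mixed distribution.
vocab_size = 32000
mixed = ensemble_next_token(torch.randn(vocab_size), torch.randn(vocab_size))
next_token = torch.argmax(mixed)
```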
null | null |
2402.14846
| null | null |
http://arxiv.org/pdf/2402.14846v3
|
2024-04-30T07:09:22Z
|
2024-02-19T14:53:01Z
|
Stick to Your Role! Context-dependence and Stability of Personal Value
Expression in Large Language Models
|
The standard way to study Large Language Models (LLMs) with benchmarks or psychology questionnaires is to provide many different queries from similar minimal contexts (e.g. multiple choice questions). However, due to LLMs' highly context-dependent nature, conclusions from such minimal-context evaluations may reveal little about the model's behavior in deployment (where it will be exposed to many new contexts). We argue that context-dependence (specifically, value stability) should be studied as a specific property of LLMs and used as another dimension of LLM comparison (alongside others such as cognitive abilities, knowledge, or model size). We present a case study on the stability of value expression over different contexts (simulated conversations on different topics) as measured using a standard psychology questionnaire (PVQ) and on behavioral downstream tasks. Reusing methods from psychology, we study Rank-order stability on the population (interpersonal) level, and Ipsative stability on the individual (intrapersonal) level. We consider two settings (with and without instructing LLMs to simulate particular personas), two simulated populations, and three downstream tasks. We observe consistent trends in the stability of models and model families - Mixtral, Mistral, GPT-3.5 and Qwen families are more stable than LLaMa-2 and Phi. The consistency of these trends implies that some models exhibit higher value-stability than others, and that value stability can be estimated with the set of introduced methodological tools. When instructed to simulate particular personas, LLMs exhibit low Rank-Order stability, which further diminishes with conversation length. This highlights the need for future research on LLMs that coherently simulate different personas. This paper provides a foundational step in that direction, and, to our knowledge, it is the first study of value stability in LLMs.
|
[
"['Grgur Kovač' 'Rémy Portelas' 'Masataka Sawayama' 'Peter Ford Dominey'\n 'Pierre-Yves Oudeyer']"
] |
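The two psychology-derived measures named above are commonly operationalized with rank correlations; the sketch below assumes Spearman's rho for both Rank-order stability (across a population of personas) and Ipsative stability (within one persona's value profile).

```python
import numpy as np
from scipy.stats import spearmanr

def rank_order_stability(scores_ctx1: np.ndarray,
                         scores_ctx2: np.ndarray) -> float:
    """Population-level stability: does the ordering of personas on a
    value survive a change of context?"""
    rho, _ = spearmanr(scores_ctx1, scores_ctx2)
    return rho

def ipsative_stability(profile_ctx1: np.ndarray,
                       profile_ctx2: np.ndarray) -> float:
    """Individual-level stability: does one persona's profile over the
    ten PVQ values survive a change of context?"""
    rho, _ = spearmanr(profile_ctx1, profile_ctx2)
    return rho

# 20 personas scored on one value in two simulated conversation topics.
ctx1 = np.random.rand(20)
ctx2 = ctx1 + 0.1 * np.random.randn(20)
print(rank_order_stability(ctx1, ctx2))
```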
null | null |
2402.14847
| null | null |
http://arxiv.org/abs/2402.14847v1
|
2024-02-19T15:34:09Z
|
2024-02-19T15:34:09Z
|
Deep learning-driven scheduling algorithm for a single machine problem
minimizing the total tardiness
|
In this paper, we investigate the use of the deep learning method for solving a well-known NP-hard single machine scheduling problem with the objective of minimizing the total tardiness. We propose a deep neural network that acts as a polynomial-time estimator of the criterion value used in a single-pass scheduling algorithm based on Lawler's decomposition and symmetric decomposition proposed by Della Croce et al. Essentially, the neural network guides the algorithm by estimating the best splitting of the problem into subproblems. The paper also describes a new method for generating the training data set, which speeds up the training dataset generation and reduces the average optimality gap of solutions. The experimental results show that our machine learning-driven approach can efficiently generalize information from the training phase to significantly larger instances. Even though the instances used in the training phase have from 75 to 100 jobs, the average optimality gap on instances with up to 800 jobs is 0.26%, which is almost five times less than the gap of the state-of-the-art heuristic.
|
[
"['Michal Bouška' 'Přemysl Šůcha' 'Antonín Novák' 'Zdeněk Hanzálek']"
] |
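The decomposition step that the neural estimator guides in the entry above can be sketched as follows: place the longest job after each prefix of the remaining jobs and score the resulting subproblems with the estimator. Both the trivial estimator and the omission of the due-date adjustments of the full Lawler decomposition are simplifying assumptions.

```python
import numpy as np

def estimate_tardiness(jobs: np.ndarray) -> float:
    """Stand-in for the paper's deep-network criterion estimator; here a
    trivial EDD (earliest-due-date) evaluation for illustration."""
    if len(jobs) == 0:
        return 0.0
    order = jobs[np.argsort(jobs[:, 1])]           # sort by due date
    completion = np.cumsum(order[:, 0])
    return float(np.maximum(completion - order[:, 1], 0).sum())

def best_split(jobs: np.ndarray) -> float:
    """One decomposition step: place the longest job after each prefix
    of the remaining jobs and let the estimator score both subproblems."""
    k = int(np.argmax(jobs[:, 0]))                 # longest job (pivot)
    p_k, d_k = jobs[k]
    rest = np.delete(jobs, k, axis=0)
    best = float("inf")
    for pos in range(len(rest) + 1):
        left, right = rest[:pos], rest[pos:]
        finish_k = left[:, 0].sum() + p_k          # pivot completion time
        tard_k = max(finish_k - d_k, 0.0)
        score = estimate_tardiness(left) + tard_k + estimate_tardiness(right)
        best = min(best, score)
    return best

jobs = np.array([[4., 10.], [2., 6.], [7., 15.], [3., 8.]])  # (p_j, d_j)
print(best_split(jobs))
```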
null | null |
2402.14849
| null | null |
http://arxiv.org/pdf/2402.14849v1
|
2024-02-19T19:48:02Z
|
2024-02-19T19:48:02Z
|
Asynchronous and Segmented Bidirectional Encoding for NMT
|
With the rapid advancement of Neural Machine Translation (NMT), enhancing translation efficiency and quality has become a focal point of research. Despite the commendable performance of general models such as the Transformer in various aspects, they still fall short in processing long sentences and fully leveraging bidirectional contextual information. This paper introduces an improved model based on the Transformer, implementing an asynchronous and segmented bidirectional decoding strategy aimed at elevating translation efficiency and accuracy. Compared to traditional unidirectional translations from left-to-right or right-to-left, our method demonstrates heightened efficiency and improved translation quality, particularly in handling long sentences. Experimental results on the IWSLT2017 dataset confirm the effectiveness of our approach in accelerating translation and increasing accuracy, especially surpassing traditional unidirectional strategies in long sentence translation. Furthermore, this study analyzes the impact of sentence length on decoding outcomes and explores the model's performance in various scenarios. The findings of this research not only provide an effective encoding strategy for the NMT field but also pave new avenues and directions for future studies.
|
[
"['Jingpu Yang' 'Zehua Han' 'Mengyu Xiang' 'Helin Wang' 'Yuxiao Huang'\n 'Miao Fang']"
] |
null | null |
2402.14852
| null | null |
http://arxiv.org/pdf/2402.14852v1
|
2024-02-20T04:17:21Z
|
2024-02-20T04:17:21Z
|
HumanEval on Latest GPT Models -- 2024
|
In 2023, we are using the latest models of GPT-4 to advance program synthesis. The large language models have significantly improved the state-of-the-art for this purpose. To make these advancements more accessible, we have created a repository that connects these models to HumanEval. This dataset was initially developed to be used with a language model called CODEGEN on natural and programming language data. The utility of these trained models is showcased by demonstrating their competitive performance in zero-shot Python code generation on HumanEval tasks compared to previous state-of-the-art solutions. Additionally, this paves the way for developing more multi-step synthesis paradigms. This benchmark features 160 diverse problem sets factorized into multistep prompts that our analysis shows significantly improve program synthesis over single-turn inputs. All code is open source at https://github.com/daniel442li/gpt-human-eval .
|
[
"['Daniel Li' 'Lincoln Murr']"
] |
null | null |
2402.14859
| null | null |
http://arxiv.org/pdf/2402.14859v2
|
2024-06-03T03:29:07Z
|
2024-02-20T23:08:21Z
|
The Wolf Within: Covert Injection of Malice into MLLM Societies via an
MLLM Operative
|
Due to their unprecedented ability to process and respond to various types of data, Multimodal Large Language Models (MLLMs) are constantly defining the new boundary of Artificial General Intelligence (AGI). As these advanced generative models increasingly form collaborative networks for complex tasks, the integrity and security of these systems are crucial. Our paper, ``The Wolf Within'', explores a novel vulnerability in MLLM societies - the indirect propagation of malicious content. Unlike direct harmful output generation for MLLMs, our research demonstrates how a single MLLM agent can be subtly influenced to generate prompts that, in turn, induce other MLLM agents in the society to output malicious content. Our findings reveal that an MLLM agent, when manipulated to produce specific prompts or instructions, can effectively ``infect'' other agents within a society of MLLMs. This infection leads to the generation and circulation of harmful outputs, such as dangerous instructions or misinformation, across the society. We also show the transferability of these indirectly generated prompts, highlighting their potential to propagate malice through inter-agent communication. This research provides a critical insight into a new dimension of threat posed by MLLMs, where a single agent can act as a catalyst for widespread malevolent influence. Our work underscores the urgent need for developing robust mechanisms to detect and mitigate such covert manipulations within MLLM societies, ensuring their safe and ethical utilization in societal applications.
|
[
"['Zhen Tan' 'Chengshuai Zhao' 'Raha Moraffah' 'Yifan Li' 'Yu Kong'\n 'Tianlong Chen' 'Huan Liu']"
] |
null | null |
2402.14860
| null | null |
http://arxiv.org/pdf/2402.14860v4
|
2024-06-10T16:25:30Z
|
2024-02-21T00:49:43Z
|
Ranking Large Language Models without Ground Truth
|
Evaluation and ranking of large language models (LLMs) has become an important problem with the proliferation of these models and their impact. Evaluation methods either require human responses, which are expensive to acquire, or use pairs of LLMs to evaluate each other, which can be unreliable. In this paper, we provide a novel perspective where, given a dataset of prompts (viz. questions, instructions, etc.) and a set of LLMs, we rank them without access to any ground truth or reference responses. Inspired by real life, where both an expert and a knowledgeable person can identify a novice, our main idea is to consider triplets of models, where each one of them evaluates the other two, correctly identifying the worst model in the triplet with high probability. We also analyze our idea and provide sufficient conditions for it to succeed. Applying this idea repeatedly, we propose two methods to rank LLMs. In experiments on different generative tasks (summarization, multiple-choice, and dialog), our methods reliably recover close to true rankings without reference data. This points to a viable low-resource mechanism for practical use.
|
[
"['Amit Dhurandhar' 'Rahul Nair' 'Moninder Singh' 'Elizabeth Daly'\n 'Karthikeyan Natesan Ramamurthy']"
] |
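The triplet idea above — each model in a triplet judges the other two, and the consensually worst one is demoted — can be sketched as a demerit count over all triplets. `judge` is a hypothetical LLM-as-judge call, and the aggregation rule is an assumption, not necessarily one of the paper's two methods.

```python
from itertools import combinations
import random

def judge(judge_model: str, resp_a: str, resp_b: str) -> str:
    """Hypothetical LLM-as-judge call: returns the worse response."""
    return random.choice([resp_a, resp_b])

def triplet_rank(responses: dict) -> list:
    """responses: model name -> response to the same prompt. In every
    triplet, each model judges the other two; the response most often
    deemed worse earns its model a demerit. Fewest demerits ranks first."""
    demerits = {m: 0 for m in responses}
    for a, b, c in combinations(responses, 3):
        for judge_m, (x, y) in [(a, (b, c)), (b, (a, c)), (c, (a, b))]:
            worse = judge(judge_m, responses[x], responses[y])
            demerits[x if worse == responses[x] else y] += 1
    return sorted(responses, key=lambda m: demerits[m])

print(triplet_rank({"m1": "r1", "m2": "r2", "m3": "r3", "m4": "r4"}))
```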
null | null |
2402.14861
| null | null |
http://arxiv.org/pdf/2402.14861v1
|
2024-02-21T01:29:17Z
|
2024-02-21T01:29:17Z
|
CloudNine: Analyzing Meteorological Observation Impact on Weather
Prediction Using Explainable Graph Neural Networks
|
The impact of meteorological observations on weather forecasting varies with sensor type, location, time, and other environmental factors. Thus, quantitative analysis of observation impacts is crucial for effective and efficient development of weather forecasting systems. However, existing impact analysis methods are difficult to apply widely due to their strong dependence on specific forecasting systems. Also, they can provide only the global impacts of observation types, not observation impacts at multiple spatio-temporal scales. To address these issues, we present a novel system called ``CloudNine,'' which allows analysis of individual observations' impacts on specific predictions based on explainable graph neural networks (XGNNs). Combining an XGNN-based atmospheric state estimation model with a numerical weather prediction model, we provide a web application to search for observations in the 3D space of the Earth system and to visualize the impact of individual observations on predictions in specific spatial regions and time periods.
|
[
"['Hyeon-Ju Jeon' 'Jeon-Ho Kang' 'In-Hyuk Kwon' 'O-Joun Lee']"
] |
null | null |
2402.14862
| null | null |
http://arxiv.org/pdf/2402.14862v1
|
2024-02-21T03:31:40Z
|
2024-02-21T03:31:40Z
|
SISSA: Real-time Monitoring of Hardware Functional Safety and
Cybersecurity with In-vehicle SOME/IP Ethernet Traffic
|
Scalable service-Oriented Middleware over IP (SOME/IP) is an Ethernet communication standard protocol in the Automotive Open System Architecture (AUTOSAR), promoting ECU-to-ECU communication over the IP stack. However, SOME/IP lacks a robust security architecture, making it susceptible to potential attacks. Moreover, random hardware failures of ECUs can disrupt SOME/IP communication. In this paper, we propose SISSA, a SOME/IP communication traffic-based approach for modeling and analyzing in-vehicle functional safety and cyber security. Specifically, SISSA models hardware failures with the Weibull distribution and addresses five potential attacks on SOME/IP communication, including Distributed Denial-of-Service, Man-in-the-Middle, and abnormal communication processes, assuming a malicious user accesses the in-vehicle network. Subsequently, SISSA designs a series of deep learning models with various backbones to extract features from SOME/IP sessions among ECUs. We adopt residual self-attention to accelerate the model's convergence and enhance detection accuracy, determining whether an ECU is under attack, facing functional failure, or operating normally. Additionally, we have created and annotated a dataset encompassing various classes, including indicators of attack, functionality, and normalcy. This contribution is noteworthy due to the scarcity of publicly accessible datasets with such characteristics. Extensive experimental results show the effectiveness and efficiency of SISSA.
|
[
"['Qi Liu' 'Xingyu Li' 'Ke Sun' 'Yufeng Li' 'Yanchen Liu']"
] |
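The hardware-failure side of SISSA rests on Weibull-distributed times to failure. A minimal sampling sketch follows, with placeholder shape/scale parameters (the paper's actual values are not given in the abstract):

```python
import numpy as np

def sample_ecu_failure_times(shape: float, scale_hours: float,
                             n: int, seed: int = 0) -> np.ndarray:
    """Draw ECU times-to-failure from a Weibull distribution.
    shape < 1 models infant mortality, shape > 1 wear-out failures.
    Shape and scale here are placeholders, not the paper's values."""
    rng = np.random.default_rng(seed)
    return scale_hours * rng.weibull(shape, size=n)

failures = sample_ecu_failure_times(shape=1.5, scale_hours=8000.0, n=1000)
print("mean time to failure (h):", failures.mean())
```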
null | null |
2402.14865
| null | null |
http://arxiv.org/pdf/2402.14865v2
|
2024-06-07T09:19:45Z
|
2024-02-21T06:46:34Z
|
Dynamic Evaluation of Large Language Models by Meta Probing Agents
|
Evaluation of large language models (LLMs) has raised great concerns in the community due to the issue of data contamination. Existing work designed evaluation protocols using well-defined algorithms for specific tasks, which cannot be easily extended to diverse scenarios. Moreover, current evaluation benchmarks can only provide the overall benchmark results and cannot support a fine-grained and multifaceted analysis of LLMs' abilities. In this paper, we propose meta probing agents (MPA), a general dynamic evaluation protocol inspired by psychometrics to evaluate LLMs. MPA is the key component of DyVal 2, which naturally extends the previous DyVal (Zhu et al., 2023). MPA designs the probing and judging agents to automatically transform an original evaluation problem into a new one following psychometric theory on three basic cognitive abilities: language understanding, problem solving, and domain knowledge. These basic abilities are also dynamically configurable, allowing multifaceted analysis. We conducted extensive evaluations using MPA and found that most LLMs achieve poorer performance, indicating room for improvement. Our multifaceted analysis demonstrated the strong correlation between the basic abilities and an implicit Matthew effect on model size, i.e., larger models possess stronger correlations of the abilities. MPA can also be used as a data augmentation approach to enhance LLMs. Code is available at: https://github.com/microsoft/promptbench.
|
[
"['Kaijie Zhu' 'Jindong Wang' 'Qinlin Zhao' 'Ruochen Xu' 'Xing Xie']"
] |
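The probing/judging agent interplay above can be sketched as two chained model calls with a consistency check. `llm` is a hypothetical client, and the rewrite/verify prompts are illustrative assumptions rather than the paper's actual agent prompts.

```python
def llm(prompt: str) -> str:
    """Hypothetical LLM client; replace with a real API call."""
    return "stub"

def meta_probe(question: str) -> str:
    """One MPA-style round: a probing agent rewrites the problem and a
    judging agent verifies the rewrite preserves the original answer."""
    probe = llm("Rewrite this problem without changing its answer:\n"
                + question)
    verdict = llm("Do these two problems have the same answer? yes/no\n"
                  f"A: {question}\nB: {probe}")
    if "yes" not in verdict.lower():
        probe = question            # reject unfaithful rewrites
    return probe

# The evaluated model then answers the dynamically generated probe.
new_item = meta_probe("What is 17 * 23?")
answer = llm(new_item)
```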