categories (string) | doi (string) | id (string) | year (float64) | venue (string) | link (string) | updated (string) | published (string) | title (string) | abstract (string) | authors (list) |
---|---|---|---|---|---|---|---|---|---|---|
null | null | 2402.15180 | null | null | http://arxiv.org/pdf/2402.15180v2 | 2024-02-27T01:39:20Z | 2024-02-23T08:22:24Z | Break the Breakout: Reinventing LM Defense Against Jailbreak Attacks with Self-Refinement | Caution: This paper includes offensive words that could potentially cause unpleasantness. Language models (LMs) are vulnerable to exploitation for adversarial misuse. Training LMs for safety alignment is extensive and makes it hard to respond to fast-developing attacks immediately, such as jailbreaks. We propose self-refine with formatting that achieves outstanding safety even in non-safety-aligned LMs and evaluate our method alongside several defense baselines, demonstrating that it is the safest training-free method against jailbreak attacks. Additionally, we propose a formatting method that improves the efficiency of the self-refine process while reducing attack success rates in fewer iterations. We have also observed that non-safety-aligned LMs outperform safety-aligned LMs in safety tasks by giving more helpful and safe responses. In conclusion, our findings show that lower safety risk can be achieved with fewer computational costs, allowing non-safety-aligned LMs to be easily utilized in real-world services. | ['Heegyu Kim', 'Sehyun Yuk', 'Hyunsouk Cho'] |
null | null | 2402.15183 | null | null | http://arxiv.org/pdf/2402.15183v4 | 2024-03-05T05:22:00Z | 2024-02-23T08:29:42Z | GraphEdit: Large Language Models for Graph Structure Learning | Graph Structure Learning (GSL) focuses on capturing intrinsic dependencies and interactions among nodes in graph-structured data by generating novel graph structures. Graph Neural Networks (GNNs) have emerged as promising GSL solutions, utilizing recursive message passing to encode node-wise inter-dependencies. However, many existing GSL methods heavily depend on explicit graph structural information as supervision signals, leaving them susceptible to challenges such as data noise and sparsity. In this work, we propose GraphEdit, an approach that leverages large language models (LLMs) to learn complex node relationships in graph-structured data. By enhancing the reasoning capabilities of LLMs through instruction-tuning over graph structures, we aim to overcome the limitations associated with explicit graph structural information and enhance the reliability of graph structure learning. Our approach not only effectively denoises noisy connections but also identifies node-wise dependencies from a global perspective, providing a comprehensive understanding of the graph structure. We conduct extensive experiments on multiple benchmark datasets to demonstrate the effectiveness and robustness of GraphEdit across various settings. We have made our model implementation available at: https://github.com/HKUDS/GraphEdit. | ['Zirui Guo', 'Lianghao Xia', 'Yanhua Yu', 'Yuling Wang', 'Zixuan Yang', 'Wei Wei', 'Liang Pang', 'Tat-Seng Chua', 'Chao Huang'] |
null | null | 2402.15188 | null | null | http://arxiv.org/pdf/2402.15188v1 | 2024-02-23T08:36:28Z | 2024-02-23T08:36:28Z | Parameter-Free Algorithms for Performative Regret Minimization under Decision-Dependent Distributions | This paper studies performative risk minimization, a formulation of stochastic optimization under decision-dependent distributions. We consider the general case where the performative risk can be non-convex, for which we develop efficient parameter-free optimistic optimization-based methods. Our algorithms significantly improve upon the existing Lipschitz bandit-based method in many aspects. In particular, our framework does not require knowledge about the sensitivity parameter of the distribution map and the Lipschitz constant of the loss function. This makes our framework practically favorable, together with the efficient optimistic optimization-based tree-search mechanism. We provide experimental results that demonstrate the numerical superiority of our algorithms over the existing method and other black-box optimistic optimization methods. | ['Sungwoo Park', 'Junyeop Kwon', 'Byeongnoh Kim', 'Suhyun Chae', 'Jeeyong Lee', 'Dabeen Lee'] |
null | null | 2402.15189 | null | null | http://arxiv.org/pdf/2402.15189v2 | 2024-05-17T09:11:44Z | 2024-02-23T08:40:38Z | Biomedical Entity Linking as Multiple Choice Question Answering | Although biomedical entity linking (BioEL) has made significant progress with pre-trained language models, challenges still exist for fine-grained and long-tailed entities. To address these challenges, we present BioELQA, a novel model that treats Biomedical Entity Linking as Multiple Choice Question Answering. BioELQA first obtains candidate entities with a fast retriever, jointly presents the mention and candidate entities to a generator, and then outputs the predicted symbol associated with its chosen entity. This formulation enables explicit comparison of different candidate entities, thus capturing fine-grained interactions between mentions and entities, as well as among entities themselves. To improve generalization for long-tailed entities, we retrieve similar labeled training instances as clues and concatenate the input with retrieved instances for the generator. Extensive experimental results show that BioELQA outperforms state-of-the-art baselines on several datasets. | ['Zhenxi Lin', 'Ziheng Zhang', 'Xian Wu', 'Yefeng Zheng'] |
null | null | 2402.15194 | null | null | http://arxiv.org/pdf/2402.15194v2 | 2024-02-28T09:21:46Z | 2024-02-23T08:54:42Z | Fine-Tuning of Continuous-Time Diffusion Models as Entropy-Regularized Control | Diffusion models excel at capturing complex data distributions, such as those of natural images and proteins. While diffusion models are trained to represent the distribution in the training dataset, we often are more concerned with other properties, such as the aesthetic quality of the generated images or the functional properties of generated proteins. Diffusion models can be finetuned in a goal-directed way by maximizing the value of some reward function (e.g., the aesthetic quality of an image). However, these approaches may lead to reduced sample diversity, significant deviations from the training data distribution, and even poor sample quality due to the exploitation of an imperfect reward function. The last issue often occurs when the reward function is a learned model meant to approximate a ground-truth "genuine" reward, as is the case in many practical applications. These challenges, collectively termed "reward collapse," pose a substantial obstacle. To address this reward collapse, we frame the finetuning problem as entropy-regularized control against the pretrained diffusion model, i.e., directly optimizing entropy-enhanced rewards with neural SDEs. We present theoretical and empirical evidence that demonstrates our framework is capable of efficiently generating diverse samples with high genuine rewards, mitigating the overoptimization of imperfect reward models. | ['Masatoshi Uehara', 'Yulai Zhao', 'Kevin Black', 'Ehsan Hajiramezanali', 'Gabriele Scalia', 'Nathaniel Lee Diamant', 'Alex M Tseng', 'Tommaso Biancalani', 'Sergey Levine'] |
null | null | 2402.15195 | null | null | http://arxiv.org/pdf/2402.15195v1 | 2024-02-23T08:55:47Z | 2024-02-23T08:55:47Z | The AffectToolbox: Affect Analysis for Everyone | In the field of affective computing, where research continually advances at a rapid pace, the demand for user-friendly tools has become increasingly apparent. In this paper, we present the AffectToolbox, a novel software system that aims to support researchers in developing affect-sensitive studies and prototypes. The proposed system addresses the challenges posed by existing frameworks, which often require profound programming knowledge and cater primarily to power-users or skilled developers. Aiming to facilitate ease of use, the AffectToolbox requires no programming knowledge and offers its functionality to reliably analyze the affective state of users through an accessible graphical user interface. The architecture encompasses a variety of models for emotion recognition on multiple affective channels and modalities, as well as an elaborate fusion system to merge multi-modal assessments into a unified result. The entire system is open-sourced and will be publicly available to ensure easy integration into more complex applications through a well-structured, Python-based code base - therefore marking a substantial contribution toward advancing affective computing research and fostering a more collaborative and inclusive environment within this interdisciplinary field. | ['Silvan Mertes', 'Dominik Schiller', 'Michael Dietz', 'Elisabeth André', 'Florian Lingenfelser'] |
null | null | 2402.15197 | null | null | http://arxiv.org/pdf/2402.15197v1 | 2024-02-23T08:58:38Z | 2024-02-23T08:58:38Z | Safety Optimized Reinforcement Learning via Multi-Objective Policy Optimization | Safe reinforcement learning (Safe RL) refers to a class of techniques that aim to prevent RL algorithms from violating constraints in the process of decision-making and exploration during trial and error. In this paper, a novel model-free Safe RL algorithm, formulated based on the multi-objective policy optimization framework, is introduced, where the policy is optimized towards optimality and safety simultaneously. The optimality is achieved by the environment reward function that is subsequently shaped using a safety critic. The advantage of the Safety Optimized RL (SORL) algorithm compared to the traditional Safe RL algorithms is that it omits the need to constrain the policy search space. This allows SORL to find a natural tradeoff between safety and optimality without compromising the performance in terms of either safety or optimality due to strict search space constraints. Through our theoretical analysis of SORL, we propose a condition for SORL's converged policy to guarantee safety and then use it to introduce an aggressiveness parameter that allows for fine-tuning the mentioned tradeoff. The experimental results obtained in seven different robotic environments indicate a considerable reduction in the number of safety violations along with higher, or competitive, policy returns, in comparison to six different state-of-the-art Safe RL methods. The results demonstrate the significant superiority of the proposed SORL algorithm in safety-critical applications. | ['Homayoun Honari', 'Mehran Ghafarian Tamizi', 'Homayoun Najjaran'] |
null | null | 2402.15198 | null | null | http://arxiv.org/pdf/2402.15198v2 | 2024-07-07T03:48:33Z | 2024-02-23T08:59:04Z | Bidirectional Uncertainty-Based Active Learning for Open Set Annotation | Active learning (AL) in open set scenarios presents a novel challenge of identifying the most valuable examples in an unlabeled data pool that comprises data from both known and unknown classes. Traditional methods prioritize selecting informative examples with low confidence, with the risk of mistakenly selecting unknown-class examples with similarly low confidence. Recent methods favor the most probable known-class examples, with the risk of picking simple already mastered examples. In this paper, we attempt to query examples that are both likely from known classes and highly informative, and propose a Bidirectional Uncertainty-based Active Learning (BUAL) framework. Specifically, we achieve this by first pushing the unknown class examples toward regions with high-confidence predictions, i.e., the proposed Random Label Negative Learning method. Then, we propose a Bidirectional Uncertainty sampling strategy by jointly estimating uncertainty posed by both positive and negative learning to perform consistent and stable sampling. BUAL successfully extends existing uncertainty-based AL methods to complex open-set scenarios. Extensive experiments on multiple datasets with varying openness demonstrate that BUAL achieves state-of-the-art performance. The code is available at https://github.com/chenchenzong/BUAL. | ['Chen-Chen Zong', 'Ye-Wen Wang', 'Kun-Peng Ning', 'Hai-Bo Ye', 'Sheng-Jun Huang'] |
null | null | 2402.15213 | null | null | http://arxiv.org/pdf/2402.15213v2 | 2024-03-22T07:26:48Z | 2024-02-23T09:19:26Z | Statistical Agnostic Regression: a machine learning method to validate regression models | Regression analysis is a central topic in statistical modeling, aiming to estimate the relationships between a dependent variable, commonly referred to as the response variable, and one or more independent variables, i.e., explanatory variables. Linear regression is by far the most popular method for performing this task in several fields of research, such as prediction, forecasting, or causal inference. Beyond various classical methods to solve linear regression problems, such as Ordinary Least Squares, Ridge, or Lasso regressions - which are often the foundation for more advanced machine learning (ML) techniques - the latter have been successfully applied in this scenario without a formal definition of statistical significance. At most, permutation or classical analyses based on empirical measures (e.g., residuals or accuracy) have been conducted to reflect the greater ability of ML estimations for detection. In this paper, we introduce a method, named Statistical Agnostic Regression (SAR), for evaluating the statistical significance of an ML-based linear regression based on concentration inequalities of the actual risk using the analysis of the worst case. To achieve this goal, similar to the classification problem, we define a threshold to establish that there is sufficient evidence with a probability of at least $1-\eta$ to conclude that there is a linear relationship in the population between the explanatory (feature) and the response (label) variables. Simulations in only two dimensions demonstrate the ability of the proposed agnostic test to provide a similar analysis of variance given by the classical $F$ test for the slope parameter. | ['Juan M Gorriz', 'J. Ramirez', 'F. Segovia', 'F. J. Martinez-Murcia', 'C. Jiménez-Mesa', 'J. Suckling'] |
null | null | 2402.15220 | null | null | http://arxiv.org/pdf/2402.15220v3 | 2024-07-13T02:53:06Z | 2024-02-23T09:29:19Z | ChunkAttention: Efficient Self-Attention with Prefix-Aware KV Cache and Two-Phase Partition | Self-attention is an essential component of large language models (LLM) but a significant source of inference latency for long sequences. In multi-tenant LLM serving scenarios, the compute and memory operation cost of self-attention can be optimized by using the probability that multiple LLM requests have shared system prompts in prefixes. In this paper, we introduce ChunkAttention, a prefix-aware self-attention module that can detect matching prompt prefixes across multiple requests and share their key/value tensors in memory at runtime to improve the memory utilization of KV cache. This is achieved by breaking monolithic key/value tensors into smaller chunks and structuring them into the auxiliary prefix tree. Consequently, on top of the prefix-tree based KV cache, we design an efficient self-attention kernel, where a two-phase partition algorithm is implemented to improve the data locality during self-attention computation in the presence of shared system prompts. Experiments show that ChunkAttention can speed up the self-attention kernel by 3.2-4.8$\times$ compared to the state-of-the-art implementation, with the length of the system prompt ranging from 1024 to 4096. | ['Lu Ye', 'Ze Tao', 'Yong Huang', 'Yang Li'] |
null | null | 2402.15227 | null | null | http://arxiv.org/pdf/2402.15227v1 | 2024-02-23T09:43:58Z | 2024-02-23T09:43:58Z | Fixed Random Classifier Rearrangement for Continual Learning | With the explosive growth of data, continual learning capability is increasingly important for neural networks. Due to catastrophic forgetting, neural networks inevitably forget the knowledge of old tasks after learning new ones. In the visual classification scenario, a common practice for alleviating the forgetting is to constrain the backbone. However, the impact of classifiers is underestimated. In this paper, we analyze the variation of model predictions in sequential binary classification tasks and find that the norm of the equivalent one-class classifiers significantly affects the forgetting level. Based on this conclusion, we propose a two-stage continual learning algorithm named Fixed Random Classifier Rearrangement (FRCR). In the first stage, FRCR replaces the learnable classifiers with fixed random classifiers, constraining the norm of the equivalent one-class classifiers without affecting the performance of the network. In the second stage, FRCR rearranges the entries of new classifiers to implicitly reduce the drift of old latent representations. The experimental results on multiple datasets show that FRCR significantly mitigates the model forgetting; subsequent experimental analyses further validate the effectiveness of the algorithm. | ['Shengyang Huang', 'Jianwen Mo'] |
null | null | 2402.15231 | null | null | http://arxiv.org/pdf/2402.15231v1 | 2024-02-23T09:47:27Z | 2024-02-23T09:47:27Z | Which Model to Transfer? A Survey on Transferability Estimation | Transfer learning methods endeavor to leverage relevant knowledge from existing source pre-trained models or datasets to solve downstream target tasks. With the increase in the scale and quantity of available pre-trained models nowadays, it becomes critical to assess in advance whether they are suitable for a specific target task. Model transferability estimation is an emerging and growing area of interest, aiming to propose a metric to quantify this suitability without training them individually, which is computationally prohibitive. Despite extensive recent advances already devoted to this area, they have custom terminological definitions and experimental settings. In this survey, we present the first review of existing advances in this area and categorize them into two separate realms: source-free model transferability estimation and source-dependent model transferability estimation. Each category is systematically defined, accompanied by a comprehensive taxonomy. Besides, we address challenges and outline future research directions, intending to provide a comprehensive guide to aid researchers and practitioners. | ['Yuhe Ding', 'Bo Jiang', 'Aijing Yu', 'Aihua Zheng', 'Jian Liang'] |
null | null | 2402.15232 | null | null | http://arxiv.org/pdf/2402.15232v1 | 2024-02-23T09:47:42Z | 2024-02-23T09:47:42Z | Classification of compact radio sources in the Galactic plane with supervised machine learning | Generation of science-ready data from processed data products is one of the major challenges in next-generation radio continuum surveys with the Square Kilometre Array (SKA) and its precursors, due to the expected data volume and the need to achieve a high degree of automated processing. Source extraction, characterization, and classification are the major stages involved in this process. In this work we focus on the classification of compact radio sources in the Galactic plane using both radio and infrared images as inputs. To this aim, we produced a curated dataset of ~20,000 images of compact sources of different astronomical classes, obtained from past radio and infrared surveys, and novel radio data from pilot surveys carried out with the Australian SKA Pathfinder (ASKAP). Radio spectral index information was also obtained for a subset of the data. We then trained two different classifiers on the produced dataset. The first model uses gradient-boosted decision trees and is trained on a set of pre-computed features derived from the data, which include radio-infrared colour indices and the radio spectral index. The second model is trained directly on multi-channel images, employing convolutional neural networks. Using a completely supervised procedure, we obtained a high classification accuracy (F1-score>90%) for separating Galactic objects from the extragalactic background. Individual class discrimination performances, ranging from 60% to 75%, increased by 10% when adding far-infrared and spectral index information, with extragalactic objects, PNe and HII regions identified with higher accuracies. The implemented tools and trained models were publicly released, and made available to the radioastronomical community for future application on new radio data. | ['S. Riggi', 'G. Umana', 'C. Trigilio', 'C. Bordiu', 'F. Bufano', 'A. Ingallinera', 'F. Cavallaro', 'Y. Gordon', 'R. P. Norris', 'G. Gürkan', 'P. Leto', 'C. Buemi', 'S. Loru', 'A. M. Hopkins', 'M. D. Filipović', 'T. Cecconello'] |
null | null | 2402.15237 | null | null | http://arxiv.org/pdf/2402.15237v1 | 2024-02-23T10:01:22Z | 2024-02-23T10:01:22Z | Unsupervised Domain Adaptation for Brain Vessel Segmentation through Transwarp Contrastive Learning | Unsupervised domain adaptation (UDA) aims to align the labelled source distribution with the unlabelled target distribution to obtain domain-invariant predictive models. Since cross-modality medical data exhibit significant intra- and inter-domain shifts and most are unlabelled, UDA is more important, yet challenging, in medical image analysis. This paper proposes a simple yet potent contrastive learning framework for UDA to narrow the inter-domain gap between labelled source and unlabelled target distribution. Our method is validated on cerebral vessel datasets. Experimental results show that our approach can learn latent features from labelled 3DRA modality data and improve vessel segmentation performance in unlabelled MRA modality data. | ['Fengming Lin', 'Yan Xia', 'Michael MacRaild', 'Yash Deo', 'Haoran Dou', 'Qiongyao Liu', 'Kun Wu', 'Nishant Ravikumar', 'Alejandro F. Frangi'] |
null | null | 2402.15239 | null | null | http://arxiv.org/pdf/2402.15239v1 | 2024-02-23T10:02:15Z | 2024-02-23T10:02:15Z | GS-EMA: Integrating Gradient Surgery Exponential Moving Average with Boundary-Aware Contrastive Learning for Enhanced Domain Generalization in Aneurysm Segmentation | The automated segmentation of cerebral aneurysms is pivotal for accurate diagnosis and treatment planning. Confronted with significant domain shifts and class imbalance in 3D Rotational Angiography (3DRA) data from various medical institutions, the task becomes challenging. These shifts include differences in image appearance, intensity distribution, resolution, and aneurysm size, all of which complicate the segmentation process. To tackle these issues, we propose a novel domain generalization strategy that employs gradient surgery exponential moving average (GS-EMA) optimization technique coupled with boundary-aware contrastive learning (BACL). Our approach is distinct in its ability to adapt to new, unseen domains by learning domain-invariant features, thereby improving the robustness and accuracy of aneurysm segmentation across diverse clinical datasets. The results demonstrate that our proposed approach can extract more domain-invariant features, minimizing over-segmentation and capturing more complete aneurysm structures. | ['Fengming Lin', 'Yan Xia', 'Michael MacRaild', 'Yash Deo', 'Haoran Dou', 'Qiongyao Liu', 'Nina Cheng', 'Nishant Ravikumar', 'Alejandro F. Frangi'] |
null | null | 2402.15247 | null | null | http://arxiv.org/pdf/2402.15247v1 | 2024-02-23T10:21:07Z | 2024-02-23T10:21:07Z | A Bargaining-based Approach for Feature Trading in Vertical Federated Learning | Vertical Federated Learning (VFL) has emerged as a popular machine learning paradigm, enabling model training across the data and the task parties with different features about the same user set while preserving data privacy. In production environments, VFL usually involves one task party and one data party. Fair and economically efficient feature trading is crucial to the commercialization of VFL, where the task party is considered as the data consumer who buys the data party's features. However, current VFL feature trading practices often price the data party's data as a whole and assume transactions occur prior to performing VFL. Neglecting the performance gains resulting from traded features may lead to underpayment and overpayment issues. In this study, we propose a bargaining-based feature trading approach in VFL to encourage economically efficient transactions. Our model incorporates performance gain-based pricing, taking into account the revenue-based optimization objectives of both parties. We analyze the proposed bargaining model under perfect and imperfect performance information settings, proving the existence of an equilibrium that optimizes the parties' objectives. Moreover, we develop performance gain estimation-based bargaining strategies for imperfect performance information scenarios and discuss potential security issues and solutions. Experiments on three real-world datasets demonstrate the effectiveness of the proposed bargaining model. | ['Yue Cui', 'Liuyi Yao', 'Zitao Li', 'Yaliang Li', 'Bolin Ding', 'Xiaofang Zhou'] |
null | null | 2402.15255 | null | null | http://arxiv.org/pdf/2402.15255v2 | 2024-06-01T10:57:01Z | 2024-02-23T10:49:04Z | Optimal Transport for Structure Learning Under Missing Data | Causal discovery in the presence of missing data introduces a chicken-and-egg dilemma. While the goal is to recover the true causal structure, robust imputation requires considering the dependencies or, preferably, causal relations among variables. Merely filling in missing values with existing imputation methods and subsequently applying structure learning on the complete data is empirically shown to be sub-optimal. To address this problem, we propose a score-based algorithm for learning causal structures from missing data based on optimal transport. This optimal transport viewpoint diverges from existing score-based approaches that are dominantly based on expectation maximization. We formulate structure learning as a density fitting problem, where the goal is to find the causal model that induces a distribution of minimum Wasserstein distance with the observed data distribution. Our framework is shown to recover the true causal graphs more effectively than competing methods in most simulations and real-data settings. Empirical evidence also shows the superior scalability of our approach, along with the flexibility to incorporate any off-the-shelf causal discovery methods for complete data. | ['Vy Vo', 'He Zhao', 'Trung Le', 'Edwin V. Bonilla', 'Dinh Phung'] |
null | null | 2402.15258 | null | null | http://arxiv.org/pdf/2402.15258v1 | 2024-02-23T10:56:47Z | 2024-02-23T10:56:47Z | High Resolution Guitar Transcription via Domain Adaptation | Automatic music transcription (AMT) has achieved high accuracy for piano due to the availability of large, high-quality datasets such as MAESTRO and MAPS, but comparable datasets are not yet available for other instruments. In recent work, however, it has been demonstrated that aligning scores to transcription model activations can produce high quality AMT training data for instruments other than piano. Focusing on the guitar, we refine this approach to training on score data using a dataset of commercially available score-audio pairs. We propose the use of a high-resolution piano transcription model to train a new guitar transcription model. The resulting model obtains state-of-the-art transcription results on GuitarSet in a zero-shot context, improving on previously published methods. | ['Xavier Riley', 'Drew Edwards', 'Simon Dixon'] |
null | null | 2402.15259 | null | null | http://arxiv.org/pdf/2402.15259v5 | 2024-07-07T12:43:35Z | 2024-02-23T11:04:33Z | Open Ad Hoc Teamwork with Cooperative Game Theory | Ad hoc teamwork poses a challenging problem, requiring the design of an agent to collaborate with teammates without prior coordination or joint training. Open ad hoc teamwork (OAHT) further complicates this challenge by considering environments with a changing number of teammates, referred to as open teams. One promising solution in practice to this problem is leveraging the generalizability of graph neural networks to handle an unrestricted number of agents with various agent-types, named graph-based policy learning (GPL). However, its joint Q-value representation over a coordination graph lacks convincing explanations. In this paper, we establish a new theory to understand the representation of the joint Q-value for OAHT and its learning paradigm, through the lens of cooperative game theory. Building on our theory, we propose a novel algorithm named CIAO, based on GPL's framework, with additional provable implementation tricks that can facilitate learning. The demos of experimental results are available on https://sites.google.com/view/ciao2024, and the code of experiments is published on https://github.com/hsvgbkhgbv/CIAO. | ['Jianhong Wang', 'Yang Li', 'Yuan Zhang', 'Wei Pan', 'Samuel Kaski'] |
null | null | 2402.15262 | null | null | http://arxiv.org/pdf/2402.15262v1 | 2024-02-23T11:19:02Z | 2024-02-23T11:19:02Z | Dynamic Memory Based Adaptive Optimization | Define an optimizer as having memory $k$ if it stores $k$ dynamically changing vectors in the parameter space. Classical SGD has memory $0$, momentum SGD optimizer has $1$ and Adam optimizer has $2$. We address the following questions: How can optimizers make use of more memory units? What information should be stored in them? How to use them for the learning steps? As an approach to the last question, we introduce a general method called "Retrospective Learning Law Correction", or RLLC for short. This method is designed to calculate a dynamically varying linear combination (called learning law) of memory units, which themselves may evolve arbitrarily. We demonstrate RLLC on optimizers whose memory units have linear update rules and small memory ($\leq 4$ memory units). Our experiments show that in a variety of standard problems, these optimizers outperform the above-mentioned three classical optimizers. We conclude that RLLC is a promising framework for boosting the performance of known optimizers by adding more memory units and by making them more adaptive. | ['Balázs Szegedy', 'Domonkos Czifra', 'Péter Kőrösi-Szabó'] |
null | null | 2402.15266 | null | null | http://arxiv.org/abs/2402.15266v2 | 2024-03-20T10:12:49Z | 2024-02-23T11:27:10Z | Calibration of Deep Learning Classification Models in fNIRS | Functional near-infrared spectroscopy (fNIRS) is a valuable non-invasive tool for monitoring brain activity. The classification of fNIRS data in relation to conscious activity holds significance for advancing our understanding of the brain and facilitating the development of brain-computer interfaces (BCI). Many researchers have turned to deep learning to tackle the classification challenges inherent in fNIRS data due to its strong generalization and robustness. In the application of fNIRS, reliability is particularly important, and one mathematical formulation of the reliability of confidence is calibration. However, many researchers overlook the important issue of calibration. To address this gap, we propose integrating calibration into the fNIRS field and assess the reliability of existing models. Surprisingly, our results indicate poor calibration performance in many proposed models. To advance calibration development in the fNIRS field, we summarize three practical tips. Through this letter, we hope to emphasize the critical role of calibration in fNIRS research and argue for enhancing the reliability of deep learning-based predictions in fNIRS classification tasks. All data from our experimental process are openly available on GitHub. | ['Zhihao Cao', 'Zizhou Luo'] |
null | null | 2402.15268 | null | null | http://arxiv.org/pdf/2402.15268v1 | 2024-02-23T11:30:39Z | 2024-02-23T11:30:39Z | MemoryPrompt: A Light Wrapper to Improve Context Tracking in Pre-trained Language Models | Transformer-based language models (LMs) track contextual information through large, hard-coded input windows. We introduce MemoryPrompt, a leaner approach in which the LM is complemented by a small auxiliary recurrent network that passes information to the LM by prefixing its regular input with a sequence of vectors, akin to soft prompts, without requiring LM finetuning. Tested on a task designed to probe an LM's ability to keep track of multiple fact updates, a MemoryPrompt-augmented LM outperforms much larger LMs that have access to the full input history. We also test MemoryPrompt on a long-distance dialogue dataset, where its performance is comparable to that of a model conditioned on the entire conversation history. In both experiments we also observe that, unlike full-finetuning approaches, MemoryPrompt does not suffer from catastrophic forgetting when adapted to new tasks, thus not disrupting the generalist capabilities of the underlying LM. | ['Nathanaël Carraz Rakotonirina', 'Marco Baroni'] |
null | null | 2402.15270 | null | null | http://arxiv.org/pdf/2402.15270v1 | 2024-02-23T11:32:46Z | 2024-02-23T11:32:46Z | Smoothed Graph Contrastive Learning via Seamless Proximity Integration | Graph contrastive learning (GCL) aligns node representations by classifying node pairs into positives and negatives using a selection process that typically relies on establishing correspondences within two augmented graphs. The conventional GCL approaches incorporate negative samples uniformly in the contrastive loss, resulting in the equal treatment of negative nodes, regardless of their proximity to the true positive. In this paper, we present a Smoothed Graph Contrastive Learning model (SGCL), which leverages the geometric structure of augmented graphs to inject proximity information associated with positive/negative pairs in the contrastive loss, thus significantly regularizing the learning process. The proposed SGCL adjusts the penalties associated with node pairs in the contrastive loss by incorporating three distinct smoothing techniques that result in proximity-aware positives and negatives. To enhance scalability for large-scale graphs, the proposed framework incorporates a graph batch-generating strategy that partitions the given graphs into multiple subgraphs, facilitating efficient training in separate batches. Through extensive experimentation in the unsupervised setting on various benchmarks, particularly those of large scale, we demonstrate the superiority of our proposed framework against recent baselines. | ['Maysam Behmanesh', 'Maks Ovsjanikov'] |
null | null | 2402.15273 | null | null | http://arxiv.org/pdf/2402.15273v1 | 2024-02-23T11:35:57Z | 2024-02-23T11:35:57Z | Optimized Deployment of Deep Neural Networks for Visual Pose Estimation on Nano-drones | Miniaturized autonomous unmanned aerial vehicles (UAVs) are gaining popularity due to their small size, enabling new tasks such as indoor navigation or people monitoring. Nonetheless, their size and simple electronics pose severe challenges in implementing advanced onboard intelligence. This work proposes a new automatic optimization pipeline for visual pose estimation tasks using Deep Neural Networks (DNNs). The pipeline leverages two different Neural Architecture Search (NAS) algorithms to pursue a vast complexity-driven exploration in the DNNs' architectural space. The obtained networks are then deployed on an off-the-shelf nano-drone equipped with a parallel ultra-low power System-on-Chip leveraging a set of novel software kernels for the efficient fused execution of critical DNN layer sequences. Our results improve the state-of-the-art, reducing inference latency by up to 3.22x at iso-error. | ['Matteo Risso', 'Francesco Daghero', 'Beatrice Alessandra Motetti', 'Daniele Jahier Pagliari', 'Enrico Macii', 'Massimo Poncino', 'Alessio Burrello'] |
null | null | 2402.15274 | null | null | http://arxiv.org/pdf/2402.15274v2 | 2024-06-23T10:10:00Z | 2024-02-23T11:37:56Z | Classification Under Strategic Self-Selection | When users stand to gain from certain predictions, they are prone to act strategically to obtain favorable predictive outcomes. Whereas most works on strategic classification consider user actions that manifest as feature modifications, we study a novel setting in which users decide -- in response to the learned classifier -- whether to at all participate (or not). For learning approaches of increasing strategic awareness, we study the effects of self-selection on learning, and the implications of learning on the composition of the self-selected population. We then propose a differentiable framework for learning under self-selective behavior, which can be optimized effectively. We conclude with experiments on real data and simulated behavior that both complement our analysis and demonstrate the utility of our approach. | ['Guy Horowitz', 'Yonatan Sommer', 'Moran Koren', 'Nir Rosenfeld'] |
null | null | 2402.15281 | null | null | http://arxiv.org/pdf/2402.15281v3 | 2024-03-13T08:34:47Z | 2024-02-23T12:06:48Z | Neural Implicit Swept Volume Models for Fast Collision Detection | Collision detection is one of the most time-consuming operations during motion planning. Thus, there is an increasing interest in exploring machine learning techniques to speed up collision detection and sampling-based motion planning. A recent line of research focuses on utilizing neural signed distance functions of either the robot geometry or the swept volume of the robot motion. Building on this, we present a novel neural implicit swept volume model to continuously represent arbitrary motions parameterized by their start and goal configurations. This allows signed distances to the robot motion to be computed quickly for any point in the task space. Further, we present an algorithm combining the speed of the deep learning-based signed distance computations with the strong accuracy guarantees of geometric collision checkers. We validate our approach in simulated and real-world robotic experiments, and demonstrate that it is able to speed up a commercial bin picking application. | ['Dominik Joho', 'Jonas Schwinn', 'Kirill Safronov'] |
null | null | 2402.15283 | null | null | http://arxiv.org/pdf/2402.15283v1 | 2024-02-23T12:27:48Z | 2024-02-23T12:27:48Z | When in Doubt, Think Slow: Iterative Reasoning with Latent Imagination | In an unfamiliar setting, a model-based reinforcement learning agent can be limited by the accuracy of its world model. In this work, we present a novel, training-free approach to improving the performance of such agents separately from planning and learning. We do so by applying iterative inference at decision-time, to fine-tune the inferred agent states based on the coherence of future state representations. Our approach achieves a consistent improvement in both reconstruction accuracy and task performance when applied to visual 3D navigation tasks. We go on to show that considering more future states further improves the performance of the agent in partially-observable environments, but not in a fully-observable one. Finally, we demonstrate that agents with less training pre-evaluation benefit most from our approach. | ['Martin Benfeghoul', 'Umais Zahid', 'Qinghai Guo', 'Zafeirios Fountas'] |
null | null | 2402.15284 | null | null | http://arxiv.org/pdf/2402.15284v1 | 2024-02-23T12:28:31Z | 2024-02-23T12:28:31Z | Spatiotemporal Observer Design for Predictive Learning of High-Dimensional Data | Although deep learning-based methods have shown great success in spatiotemporal predictive learning, the framework of those models is designed mainly by intuition. How to make spatiotemporal forecasting with theoretical guarantees is still a challenging issue. In this work, we tackle this problem by applying domain knowledge from the dynamical system to the framework design of deep learning models. An observer theory-guided deep learning architecture, called Spatiotemporal Observer, is designed for predictive learning of high dimensional data. The characteristics of the proposed framework are twofold: firstly, it provides the generalization error bound and convergence guarantee for spatiotemporal prediction; secondly, dynamical regularization is introduced to enable the model to learn system dynamics better during training. Further experimental results show that this framework could capture the spatiotemporal dynamics and make accurate predictions in both one-step-ahead and multi-step-ahead forecasting scenarios. | ['Tongyi Liang', 'Han-Xiong Li'] |
null | null | 2402.15285 | null | null | http://arxiv.org/pdf/2402.15285v1 | 2024-02-23T12:30:20Z | 2024-02-23T12:30:20Z | Generative Modelling with Tensor Train approximations of Hamilton--Jacobi--Bellman equations | Sampling from probability densities is a common challenge in fields such as Uncertainty Quantification (UQ) and Generative Modelling (GM). In GM in particular, the use of reverse-time diffusion processes depending on the log-densities of Ornstein-Uhlenbeck forward processes is a popular sampling tool. In Berner et al. [2022] the authors point out that these log-densities can be obtained by solution of a Hamilton-Jacobi-Bellman (HJB) equation known from stochastic optimal control. While this HJB equation is usually treated with indirect methods such as policy iteration and unsupervised training of black-box architectures like Neural Networks, we propose instead to solve the HJB equation by direct time integration, using compressed polynomials represented in the Tensor Train (TT) format for spatial discretization. Crucially, this method is sample-free, agnostic to normalization constants and can avoid the curse of dimensionality due to the TT compression. We provide a complete derivation of the HJB equation's action on Tensor Train polynomials and demonstrate the performance of the proposed time-step-, rank- and degree-adaptive integration method on a nonlinear sampling task in 20 dimensions. | ['David Sommer', 'Robert Gruhlke', 'Max Kirstein', 'Martin Eigel', 'Claudia Schillings'] |
null | null | 2402.15288 | null | null | http://arxiv.org/pdf/2402.15288v1 | 2024-02-23T12:33:27Z | 2024-02-23T12:33:27Z | Real-Time FPGA Demonstrator of ANN-Based Equalization for Optical Communications | In this work, we present a high-throughput field programmable gate array (FPGA) demonstrator of an artificial neural network (ANN)-based equalizer. The equalization is performed and illustrated in real-time for a 30 GBd, two-level pulse amplitude modulation (PAM2) optical communication system. | ['Jonas Ney', 'Patrick Matalla', 'Vincent Lauinger', 'Laurent Schmalen', 'Sebastian Randel', 'Norbert Wehn'] |
null | null | 2402.15289 | null | null | http://arxiv.org/pdf/2402.15289v1 | 2024-02-23T12:35:43Z | 2024-02-23T12:35:43Z | Let's Rectify Step by Step: Improving Aspect-based Sentiment Analysis with Diffusion Models | Aspect-Based Sentiment Analysis (ABSA) stands as a crucial task in predicting the sentiment polarity associated with identified aspects within text. However, a notable challenge in ABSA lies in precisely determining the aspects' boundaries (start and end indices), especially for long ones, due to users' colloquial expressions. We propose DiffusionABSA, a novel diffusion model tailored for ABSA, which extracts the aspects progressively step by step. Particularly, DiffusionABSA gradually adds noise to the aspect terms in the training process, subsequently learning a denoising process that progressively restores these terms in a reverse manner. To estimate the boundaries, we design a denoising neural network enhanced by a syntax-aware temporal attention mechanism to chronologically capture the interplay between aspects and surrounding text. Empirical evaluations conducted on eight benchmark datasets underscore the compelling advantages offered by DiffusionABSA when compared against robust baseline models. Our code is publicly available at https://github.com/Qlb6x/DiffusionABSA. | ['Shunyu Liu', 'Jie Zhou', 'Qunxi Zhu', 'Qin Chen', 'Qingchun Bai', 'Jun Xiao', 'Liang He'] |
null | null | 2402.15290 | null | null | http://arxiv.org/pdf/2402.15290v2 | 2024-06-02T05:59:38Z | 2024-02-23T12:36:31Z | Appendix for Linear Dynamics-embedded Neural Network for Long-Sequence Modeling | This appendix provides all necessary materials for the paper 'Linear Dynamics-embedded Neural Network for Long-Sequence Modeling', including model details, experimental configurations, and PyTorch implementation. | ['Tongyi Liang', 'Han-Xiong Li'] |
null | null | 2402.15294 | null | null | http://arxiv.org/pdf/2402.15294v1 | 2024-02-23T12:41:44Z | 2024-02-23T12:41:44Z | A Survey of Music Generation in the Context of Interaction | In recent years, machine learning, and in particular generative adversarial neural networks (GANs) and attention-based neural networks (transformers), have been successfully used to compose and generate music, both melodies and polyphonic pieces. Current research focuses foremost on style replication (e.g., generating a Bach-style chorale) or style transfer (e.g., classical to jazz) based on large amounts of recorded or transcribed music, which in turn also allows for fairly straightforward "performance" evaluation. However, most of these models are not suitable for human-machine co-creation through live interaction, nor is it clear how such models and resulting creations would be evaluated. This article presents a thorough review of music representation, feature analysis, heuristic algorithms, statistical and parametric modelling, and human and automatic evaluation measures, along with a discussion of which approaches and models seem most suitable for live interaction. | ['Ismael Agchar', 'Ilja Baumann', 'Franziska Braun', 'Paula Andrea Perez-Toro', 'Korbinian Riedhammer', 'Sebastian Trump', 'Martin Ullrich'] |
null | null | 2402.15297 | null | null | http://arxiv.org/pdf/2402.15297v1 | 2024-02-23T12:48:02Z | 2024-02-23T12:48:02Z | Semi-supervised Counting via Pixel-by-pixel Density Distribution Modelling | This paper focuses on semi-supervised crowd counting, where only a small portion of the training data are labeled. We formulate the pixel-wise density value to regress as a probability distribution, instead of a single deterministic value. On this basis, we propose a semi-supervised crowd-counting model. Firstly, we design a pixel-wise distribution matching loss to measure the differences in the pixel-wise density distributions between the prediction and the ground truth; Secondly, we enhance the transformer decoder by using density tokens to specialize the forwards of decoders w.r.t. different density intervals; Thirdly, we design the interleaving consistency self-supervised learning mechanism to learn from unlabeled data efficiently. Extensive experiments on four datasets are performed to show that our method clearly outperforms the competitors by a large margin under various labeled ratio settings. Code will be released at https://github.com/LoraLinH/Semi-supervised-Counting-via-Pixel-by-pixel-Density-Distribution-Modelling. | ['Hui Lin', 'Zhiheng Ma', 'Rongrong Ji', 'Yaowei Wang', 'Zhou Su', 'Xiaopeng Hong', 'Deyu Meng'] |
null | null | 2402.15300 | null | null | http://arxiv.org/pdf/2402.15300v2 | 2024-04-23T09:32:25Z | 2024-02-23T12:57:16Z | Seeing is Believing: Mitigating Hallucination in Large Vision-Language Models via CLIP-Guided Decoding | Large Vision-Language Models (LVLMs) are susceptible to object hallucinations, an issue in which their generated text contains non-existent objects, greatly limiting their reliability and practicality. Current approaches often rely on the model's token likelihoods or other internal information, instruction tuning on additional datasets, or incorporating complex external tools. We first perform empirical analysis on sentence-level LVLM hallucination, finding that CLIP similarity to the image acts as a stronger and more robust indicator of hallucination compared to token likelihoods. Motivated by this, we introduce our CLIP-Guided Decoding (CGD) approach, a straightforward but effective training-free approach to reduce object hallucination at decoding time. CGD uses CLIP to guide the model's decoding process by enhancing visual grounding of generated text with the image. Experiments demonstrate that CGD effectively mitigates object hallucination across multiple LVLM families while preserving the utility of text generation. Codes are available at https://github.com/d-ailin/CLIP-Guided-Decoding. | ['Ailin Deng', 'Zhirui Chen', 'Bryan Hooi'] |
null | null | 2402.15301 | null | null | http://arxiv.org/pdf/2402.15301v2 | 2024-06-18T05:51:50Z | 2024-02-23T13:02:10Z | Causal Graph Discovery with Retrieval-Augmented Generation based Large Language Models | Causal graph recovery is traditionally done using statistical estimation-based methods or based on individuals' knowledge about variables of interest. They often suffer from data collection biases and limitations of individuals' knowledge. The advance of large language models (LLMs) provides opportunities to address these problems. We propose a novel method that leverages LLMs to deduce causal relationships in general causal graph recovery tasks. This method leverages knowledge compressed in LLMs and knowledge LLMs extracted from scientific publication database as well as experiment data about factors of interest to achieve this goal. Our method gives a prompting strategy to extract associational relationships among those factors and a mechanism to perform causality verification for these associations. Compared to other LLM-based methods that directly instruct LLMs to do the highly complex causal reasoning, our method shows a clear advantage in causal graph quality on benchmark datasets. More importantly, as causality among some factors may change as new research results emerge, our method shows sensitivity to new evidence in the literature and can provide useful information for updating causal graphs accordingly. | ['Yuzhe Zhang', 'Yipeng Zhang', 'Yidong Gan', 'Lina Yao', 'Chen Wang'] |
null | null | 2402.15307 | null | null | http://arxiv.org/pdf/2402.15307v1 | 2024-02-23T13:11:10Z | 2024-02-23T13:11:10Z | Representing Online Handwriting for Recognition in Large Vision-Language Models | The adoption of tablets with touchscreens and styluses is increasing, and a key feature is converting handwriting to text, enabling search, indexing, and AI assistance. Meanwhile, vision-language models (VLMs) are now the go-to solution for image understanding, thanks to both their state-of-the-art performance across a variety of tasks and the simplicity of a unified approach to training, fine-tuning, and inference. While VLMs obtain high performance on image-based tasks, they perform poorly on handwriting recognition when applied naively, i.e., by rendering handwriting as an image and performing optical character recognition (OCR). In this paper, we study online handwriting recognition with VLMs, going beyond naive OCR. We propose a novel tokenized representation of digital ink (online handwriting) that includes both a time-ordered sequence of strokes as text, and as image. We show that this representation yields results comparable to or better than state-of-the-art online handwriting recognizers. Wide applicability is shown through results with two different VLM families, on multiple public datasets. Our approach can be applied to off-the-shelf VLMs, does not require any changes in their architecture, and can be used in both fine-tuning and parameter-efficient tuning. We perform a detailed ablation study to identify the key elements of the proposed representation. | ['Anastasiia Fadeeva', 'Philippe Schlattner', 'Andrii Maksai', 'Mark Collier', 'Efi Kokiopoulou', 'Jesse Berent', 'Claudiu Musat'] |
null | null | 2402.15309 | null | null | http://arxiv.org/pdf/2402.15309v1 | 2024-02-23T13:24:19Z | 2024-02-23T13:24:19Z | Counterfactual Generation with Identifiability Guarantees | Counterfactual generation lies at the core of various machine learning tasks, including image translation and controllable text generation. This generation process usually requires the identification of the disentangled latent representations, such as content and style, that underlie the observed data. However, it becomes more challenging when faced with a scarcity of paired data and labeling information. Existing disentangled methods crucially rely on oversimplified assumptions, such as assuming independent content and style variables, to identify the latent variables, even though such assumptions may not hold for complex data distributions. For instance, food reviews tend to involve words like tasty, whereas movie reviews commonly contain words such as thrilling for the same positive sentiment. This problem is exacerbated when data are sampled from multiple domains since the dependence between content and style may vary significantly over domains. In this work, we tackle the domain-varying dependence between the content and the style variables inherent in the counterfactual generation task. We provide identification guarantees for such latent-variable models by leveraging the relative sparsity of the influences from different latent variables. Our theoretical insights enable the development of a doMain AdapTive counTerfactual gEneration model (MATTE). Our theoretically grounded framework achieves state-of-the-art performance in unsupervised style transfer tasks, where neither paired data nor style labels are utilized, across four large-scale datasets. Code is available at https://github.com/hanqi-qi/Matte.git | ['Hanqi Yan', 'Lingjing Kong', 'Lin Gui', 'Yuejie Chi', 'Eric Xing', 'Yulan He', 'Kun Zhang'] |
null | null | 2402.15313 | null | null | http://arxiv.org/pdf/2402.15313v2 | 2024-02-26T09:54:47Z | 2024-02-23T13:32:47Z | ArabianGPT: Native Arabic GPT-based Large Language Model | The predominance of English and Latin-based large language models (LLMs) has led to a notable deficit in native Arabic LLMs. This discrepancy is accentuated by the prevalent inclusion of English tokens in existing Arabic models, detracting from their efficacy in processing native Arabic's intricate morphology and syntax. Consequently, there is a theoretical and practical imperative for developing LLMs predominantly focused on Arabic linguistic elements. To address this gap, this paper proposes ArabianGPT, a series of transformer-based models within the ArabianLLM suite designed explicitly for Arabic. These models, including ArabianGPT-0.1B and ArabianGPT-0.3B, vary in size and complexity, aligning with the nuanced linguistic characteristics of Arabic. The AraNizer tokenizer, integral to these models, addresses the unique morphological aspects of Arabic script, ensuring more accurate text processing. Empirical results from fine-tuning the models on tasks like sentiment analysis and summarization demonstrate significant improvements. For sentiment analysis, the fine-tuned ArabianGPT-0.1B model achieved a remarkable accuracy of 95%, a substantial increase from the base model's 56%. Similarly, in summarization tasks, fine-tuned models showed enhanced F1 scores, indicating improved precision and recall in generating concise summaries. Comparative analysis of fine-tuned ArabianGPT models against their base versions across various benchmarks reveals nuanced differences in performance, with fine-tuning positively impacting specific tasks like question answering and summarization. These findings underscore the efficacy of fine-tuning in aligning ArabianGPT models more closely with specific NLP tasks, highlighting the potential of tailored transformer architectures in advancing Arabic NLP. | ['Anis Koubaa', 'Adel Ammar', 'Lahouari Ghouti', 'Omar Najar', 'Serry Sibaee'] |
null | null |
2402.15315
| null | null |
http://arxiv.org/pdf/2402.15315v2
|
2024-06-07T10:30:42Z
|
2024-02-23T13:34:03Z
|
On Minimal Depth in Neural Networks
|
A characterization of the representability of neural networks is relevant to comprehend their success in artificial intelligence. This study investigate two topics on ReLU neural network expressivity and their connection with a conjecture related to the minimum depth required for representing any continuous piecewise linear (CPWL) function. The topics are the minimal depth representation of the sum and max operations, as well as the exploration of polytope neural networks. For the sum operation, we establish a sufficient condition on the minimal depth of the operands to find the minimal depth of the operation. In contrast, regarding the max operation, a comprehensive set of examples is presented, demonstrating that no sufficient conditions, depending solely on the depth of the operands, would imply a minimal depth for the operation. The study also examine the minimal depth relationship between convex CPWL functions. On polytope neural networks, we investigate basic depth properties from Minkowski sums, convex hulls, number of vertices, faces, affine transformations, and indecomposable polytopes. More significant findings include depth characterization of polygons; identification of polytopes with an increasing number of vertices, exhibiting small depth and others with arbitrary large depth; and most notably, the minimal depth of simplices, which is strictly related to the minimal depth conjecture in ReLU networks.
|
[
"['Juan L. Valerdi']"
] |
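The abstract above concerns the minimal ReLU depth needed to represent operations such as max. As a concrete illustration of the kind of object studied there (background only, not a result from the paper), the sketch below numerically checks the classical identity max(a, b) = b + ReLU(a - b), which realizes the max of two inputs with a single hidden ReLU layer; all names are illustrative.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def max_two(a, b):
    # Classical identity: max(a, b) = b + ReLU(a - b), i.e. the max of two
    # inputs is exactly representable by a one-hidden-layer ReLU network.
    return b + relu(a - b)

rng = np.random.default_rng(0)
a, b = rng.normal(size=1000), rng.normal(size=1000)
assert np.allclose(max_two(a, b), np.maximum(a, b))
print("max of two inputs realized by a depth-2 ReLU network")
```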
null | null |
2402.15319
| null | null |
http://arxiv.org/pdf/2402.15319v1
|
2024-02-23T13:39:16Z
|
2024-02-23T13:39:16Z
|
GPTVQ: The Blessing of Dimensionality for LLM Quantization
|
In this work we show that the size versus accuracy trade-off of neural network quantization can be significantly improved by increasing the quantization dimensionality. We propose the GPTVQ method, a new fast method for post-training vector quantization (VQ) that scales well to Large Language Models (LLMs). Our method interleaves quantization of one or more columns with updates to the remaining unquantized weights, using information from the Hessian of the per-layer output reconstruction MSE. Quantization codebooks are initialized using an efficient data-aware version of the EM algorithm. The codebooks are then updated, and further compressed by using integer quantization and SVD-based compression. GPTVQ establishes a new state of the art in the size-versus-accuracy trade-off on a wide range of LLMs such as Llama-v2 and Mistral. Furthermore, our method is efficient: on a single H100 it takes between 3 and 11 hours to process a Llama-v2-70B model, depending on the quantization setting. Lastly, with on-device timings for VQ decompression on a mobile CPU we show that VQ leads to improved latency compared to using a 4-bit integer format.
|
[
"['Mart van Baalen' 'Andrey Kuzmin' 'Markus Nagel' 'Peter Couperus'\n 'Cedric Bastoul' 'Eric Mahurin' 'Tijmen Blankevoort' 'Paul Whatmough']"
] |
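To make the vector-quantization idea in the abstract above concrete, here is a minimal sketch of plain VQ on a weight matrix: entries are grouped into short vectors and replaced by the nearest codeword from a small k-means codebook. This is only the generic building block; it omits GPTVQ's Hessian-guided column updates, EM-based codebook initialization, and codebook compression, and all sizes below are arbitrary choices for illustration.

```python
import numpy as np

def vector_quantize(W, dim=2, codebook_size=16, iters=25, seed=0):
    """Quantize a weight matrix by grouping entries into `dim`-sized vectors
    and building a codebook with plain k-means (illustration only).
    Assumes W.size is divisible by `dim`."""
    rng = np.random.default_rng(seed)
    flat = W.reshape(-1, dim)                       # group weights into vectors
    codebook = flat[rng.choice(len(flat), codebook_size, replace=False)].copy()
    for _ in range(iters):
        # assign each vector to its nearest codeword
        d2 = ((flat[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        assign = d2.argmin(1)
        # update each codeword as the mean of its assigned vectors
        for k in range(codebook_size):
            members = flat[assign == k]
            if len(members):
                codebook[k] = members.mean(0)
    W_hat = codebook[assign].reshape(W.shape)       # dequantized weights
    return W_hat, codebook, assign

rng = np.random.default_rng(1)
W = rng.normal(size=(64, 64)).astype(np.float32)
W_hat, codebook, idx = vector_quantize(W)
print("reconstruction MSE:", float(((W - W_hat) ** 2).mean()))
```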
null | null |
2402.15321
| null | null |
http://arxiv.org/pdf/2402.15321v2
|
2024-03-17T08:41:49Z
|
2024-02-23T13:39:59Z
|
OpenSUN3D: 1st Workshop Challenge on Open-Vocabulary 3D Scene
Understanding
|
This report provides an overview of the challenge hosted at the OpenSUN3D Workshop on Open-Vocabulary 3D Scene Understanding held in conjunction with ICCV 2023. The goal of this workshop series is to provide a platform for exploration and discussion of open-vocabulary 3D scene understanding tasks, including but not limited to segmentation, detection and mapping. We provide an overview of the challenge hosted at the workshop, present the challenge dataset, the evaluation methodology, and brief descriptions of the winning methods. For additional details, please see https://opensun3d.github.io/index_iccv23.html.
|
[
"['Francis Engelmann' 'Ayca Takmaz' 'Jonas Schult' 'Elisabetta Fedele'\n 'Johanna Wald' 'Songyou Peng' 'Xi Wang' 'Or Litany' 'Siyu Tang'\n 'Federico Tombari' 'Marc Pollefeys' 'Leonidas Guibas' 'Hongbo Tian'\n 'Chunjie Wang' 'Xiaosheng Yan' 'Bingwen Wang' 'Xuanyang Zhang' 'Xiao Liu'\n 'Phuc Nguyen' 'Khoi Nguyen' 'Anh Tran' 'Cuong Pham' 'Zhening Huang'\n 'Xiaoyang Wu' 'Xi Chen' 'Hengshuang Zhao' 'Lei Zhu' 'Joan Lasenby']"
] |
null | null |
2402.15324
| null | null |
http://arxiv.org/abs/2402.15324v1
|
2024-02-23T13:43:15Z
|
2024-02-23T13:43:15Z
|
Shapley Value Based Multi-Agent Reinforcement Learning: Theory, Method
and Its Application to Energy Network
|
Multi-agent reinforcement learning is an area of rapid advancement in artificial intelligence and machine learning. One of the important questions to be answered is how to conduct credit assignment in a multi-agent system. There have been many schemes designed to conduct credit assignment in multi-agent reinforcement learning algorithms. Although these credit assignment schemes have proven useful in improving the performance of multi-agent reinforcement learning, most of them are designed heuristically without a rigorous theoretical basis, which makes it difficult to understand how agents cooperate. In this thesis, we aim to investigate the foundation of credit assignment in multi-agent reinforcement learning via cooperative game theory. We first extend a game model called the convex game and a payoff distribution scheme called the Shapley value from cooperative game theory to the Markov decision process, named the Markov convex game and the Markov Shapley value, respectively. We represent a global reward game as a Markov convex game under the grand coalition. As a result, the Markov Shapley value can be reasonably used as a credit assignment scheme in the global reward game. The Markov Shapley value possesses the following virtues: (i) efficiency; (ii) identifiability of dummy agents; (iii) reflecting the contribution; and (iv) symmetry, which together form a fair credit assignment. Based on the Markov Shapley value, we propose three multi-agent reinforcement learning algorithms called SHAQ, SQDDPG and SMFPPO. Furthermore, we extend the Markov convex game to partial observability to deal with partially observable problems, yielding the partially observable Markov convex game. In application, we evaluate SQDDPG and SMFPPO on the real-world problem in energy networks.
|
[
"['Jianhong Wang']"
] |
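The thesis abstract above builds on the Shapley value from cooperative game theory. As background only (not the Markov Shapley value or the SHAQ/SQDDPG/SMFPPO algorithms), the sketch below computes exact Shapley values for a tiny characteristic-function game with invented coalition values, illustrating the efficiency property: the shares sum to the grand-coalition value.

```python
import itertools
import math

def shapley_values(players, value):
    """Exact Shapley values for a characteristic-function game `value(coalition)`.
    Enumerates all coalitions, so only suitable for a handful of players."""
    n = len(players)
    phi = {p: 0.0 for p in players}
    for p in players:
        others = [q for q in players if q != p]
        for r in range(len(others) + 1):
            for coalition in itertools.combinations(others, r):
                weight = math.factorial(r) * math.factorial(n - r - 1) / math.factorial(n)
                marginal = value(frozenset(coalition) | {p}) - value(frozenset(coalition))
                phi[p] += weight * marginal
    return phi

# Toy 3-player game; the coalition values are hypothetical, for illustration only.
v = {frozenset(): 0, frozenset('A'): 1, frozenset('B'): 1, frozenset('C'): 2,
     frozenset('AB'): 3, frozenset('AC'): 4, frozenset('BC'): 4, frozenset('ABC'): 6}
phi = shapley_values(['A', 'B', 'C'], lambda s: v[frozenset(s)])
print(phi)                      # credit assigned to each player
print(sum(phi.values()))        # efficiency: shares sum to v(grand coalition) = 6
```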
null | null |
2402.15326
| null | null |
http://arxiv.org/pdf/2402.15326v1
|
2024-02-23T13:44:57Z
|
2024-02-23T13:44:57Z
|
Understanding Oversmoothing in Diffusion-Based GNNs From the Perspective
of Operator Semigroup Theory
|
This paper presents a novel study of the oversmoothing issue in diffusion-based Graph Neural Networks (GNNs). Diverging from extant approaches grounded in random walk analysis or particle systems, we approach this problem through operator semigroup theory. This theoretical framework allows us to rigorously prove that oversmoothing is intrinsically linked to the ergodicity of the diffusion operator. This finding further yields a general and mild ergodicity-breaking condition, encompassing the various specific solutions previously offered, thereby presenting a more universal and theoretically grounded approach to mitigating oversmoothing in diffusion-based GNNs. Additionally, we offer a probabilistic interpretation of our theory, forging a link with prior works and broadening the theoretical horizon. Our experimental results reveal that this ergodicity-breaking term effectively mitigates oversmoothing measured by Dirichlet energy, and simultaneously enhances performance in node classification tasks.
|
[
"['Weichen Zhao' 'Chenguang Wang' 'Xinyan Wang' 'Congying Han' 'Tiande Guo'\n 'Tianshu Yu']"
] |
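The abstract above measures oversmoothing via Dirichlet energy. The sketch below computes the standard graph Dirichlet energy and shows it typically shrinking under repeated neighbour averaging, a crude stand-in for a diffusion-style GNN layer; it does not implement the paper's operator-semigroup analysis or its ergodicity-breaking term, and the random graph is purely illustrative.

```python
import numpy as np

def dirichlet_energy(X, A):
    """E(X) = 0.5 * sum_{i,j} A_ij * ||x_i - x_j||^2 (graph Dirichlet energy)."""
    diff = X[:, None, :] - X[None, :, :]
    return 0.5 * float((A[:, :, None] * diff ** 2).sum())

rng = np.random.default_rng(0)
n, d = 20, 4
A = (rng.random((n, n)) < 0.2).astype(float)
A = np.maximum(A, A.T)
np.fill_diagonal(A, 0)                      # symmetric adjacency, no self-loops
deg = np.maximum(A.sum(axis=1), 1.0)
X = rng.normal(size=(n, d))
for layer in range(6):
    print(f"layer {layer}: Dirichlet energy = {dirichlet_energy(X, A):.4f}")
    X = (A @ X) / deg[:, None]              # plain neighbour averaging per layer
```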
null | null |
2402.15328
| null | null |
http://arxiv.org/pdf/2402.15328v1
|
2024-02-23T13:51:20Z
|
2024-02-23T13:51:20Z
|
Towards Principled Task Grouping for Multi-Task Learning
|
This paper presents a novel approach to task grouping in Multitask Learning (MTL), advancing beyond existing methods by addressing key theoretical and practical limitations. Unlike prior studies, our approach offers a more theoretically grounded method that does not rely on restrictive assumptions for constructing transfer gains. We also propose a flexible mathematical programming formulation which can accommodate a wide spectrum of resource constraints, thus enhancing its versatility. Experimental results across diverse domains, including computer vision datasets, combinatorial optimization benchmarks and time series tasks, demonstrate the superiority of our method over extensive baselines, validating its effectiveness and general applicability in MTL.
|
[
"['Chenguang Wang' 'Xuanhao Pan' 'Tianshu Yu']"
] |
null | null |
2402.15332
| null | null |
http://arxiv.org/pdf/2402.15332v2
|
2024-06-06T00:58:55Z
|
2024-02-23T14:01:53Z
|
Position: Categorical Deep Learning is an Algebraic Theory of All
Architectures
|
We present our position on the elusive quest for a general-purpose framework for specifying and studying deep learning architectures. Our opinion is that the key attempts made so far lack a coherent bridge between specifying constraints which models must satisfy and specifying their implementations. Focusing on building such a bridge, we propose to apply category theory -- precisely, the universal algebra of monads valued in a 2-category of parametric maps -- as a single theory elegantly subsuming both of these flavours of neural network design. To defend our position, we show how this theory recovers constraints induced by geometric deep learning, as well as implementations of many architectures drawn from the diverse landscape of neural networks, such as RNNs. We also illustrate how the theory naturally encodes many standard constructs in computer science and automata theory.
|
[
"['Bruno Gavranović' 'Paul Lessard' 'Andrew Dudzik' 'Tamara von Glehn'\n 'João G. M. Araújo' 'Petar Veličković']"
] |
null | null |
2402.15335
| null | null |
http://arxiv.org/pdf/2402.15335v1
|
2024-02-23T14:15:58Z
|
2024-02-23T14:15:58Z
|
Low-Rank Representations Meets Deep Unfolding: A Generalized and
Interpretable Network for Hyperspectral Anomaly Detection
|
Current hyperspectral anomaly detection (HAD) benchmark datasets suffer from low resolution, simple backgrounds, and the small size of the detection data. These factors also limit the performance of the well-known low-rank representation (LRR) models, both in terms of robustness in separating background and target features and in their reliance on manual parameter selection. To this end, we build a new set of HAD benchmark datasets for improving the robustness of HAD algorithms in complex scenarios, AIR-HAD for short. Accordingly, we propose a generalized and interpretable HAD network by deeply unfolding a dictionary-learnable LRR model, named LRR-Net$^+$, which is capable of spectrally decoupling the background structure and object properties in a more generalized fashion while eliminating the bias introduced by vital interference targets. In addition, LRR-Net$^+$ integrates the solution process of the Alternating Direction Method of Multipliers (ADMM) optimizer with the deep network, guiding its search process and imparting a level of interpretability to parameter optimization. This integration of a physical model with deep learning also eliminates the need for manual parameter tuning: the manually tuned parameters are seamlessly transformed into trainable parameters of the deep network, facilitating a more efficient and automated optimization process. Extensive experiments conducted on the AIR-HAD dataset show the superiority of our LRR-Net$^+$ in terms of detection performance and generalization ability, compared to top-performing rivals. Furthermore, the code and our AIR-HAD benchmark datasets will be made freely and openly available at https://sites.google.com/view/danfeng-hong.
|
[
"['Chenyu Li' 'Bing Zhang' 'Danfeng Hong' 'Jing Yao' 'Jocelyn Chanussot']"
] |
null | null |
2402.15337
| null | null |
http://arxiv.org/pdf/2402.15337v2
|
2024-06-05T10:42:40Z
|
2024-02-23T14:17:01Z
|
Ranking Entities along Conceptual Space Dimensions with LLMs: An
Analysis of Fine-Tuning Strategies
|
Conceptual spaces represent entities in terms of their primitive semantic features. Such representations are highly valuable but they are notoriously difficult to learn, especially when it comes to modelling perceptual and subjective features. Distilling conceptual spaces from Large Language Models (LLMs) has recently emerged as a promising strategy, but existing work has been limited to probing pre-trained LLMs using relatively simple zero-shot strategies. We focus in particular on the task of ranking entities according to a given conceptual space dimension. Unfortunately, we cannot directly fine-tune LLMs on this task, because ground truth rankings for conceptual space dimensions are rare. We therefore use more readily available features as training data and analyse whether the ranking capabilities of the resulting models transfer to perceptual and subjective features. We find that this is indeed the case, to some extent, but having at least some perceptual and subjective features in the training data seems essential for achieving the best results.
|
[
"['Nitesh Kumar' 'Usashi Chatterjee' 'Steven Schockaert']"
] |
null | null |
2402.15343
| null | null |
http://arxiv.org/pdf/2402.15343v1
|
2024-02-23T14:23:51Z
|
2024-02-23T14:23:51Z
|
NuNER: Entity Recognition Encoder Pre-training via LLM-Annotated Data
|
Large Language Models (LLMs) have shown impressive abilities in data annotation, opening the way for new approaches to solve classic NLP problems. In this paper, we show how to use LLMs to create NuNER, a compact language representation model specialized in the Named Entity Recognition (NER) task. NuNER can be fine-tuned to solve downstream NER problems in a data-efficient way, outperforming similar-sized foundation models in the few-shot regime and competing with much larger LLMs. We find that the size and entity-type diversity of the pre-training dataset are key to achieving good performance. We view NuNER as a member of the broader family of task-specific foundation models, recently unlocked by LLMs.
|
[
"['Sergei Bogdanov' 'Alexandre Constantin' 'Timothée Bernard'\n 'Benoit Crabbé' 'Etienne Bernard']"
] |
null | null |
2402.15344
| null | null |
http://arxiv.org/pdf/2402.15344v1
|
2024-02-23T14:24:45Z
|
2024-02-23T14:24:45Z
|
Iteration and Stochastic First-order Oracle Complexities of Stochastic
Gradient Descent using Constant and Decaying Learning Rates
|
The performance of stochastic gradient descent (SGD), which is the simplest first-order optimizer for training deep neural networks, depends not only on the learning rate but also on the batch size. Both affect the number of iterations and the stochastic first-order oracle (SFO) complexity needed for training. In particular, previous numerical results indicated that, for SGD using a constant learning rate, the number of iterations needed for training decreases as the batch size increases, while the SFO complexity needed for training is minimized at a critical batch size and increases once the batch size exceeds that size. Here, we study the relationship between the batch size and the iteration and SFO complexities needed for nonconvex optimization in deep learning with SGD using constant or decaying learning rates, and show that SGD using the critical batch size minimizes the SFO complexity. We also provide numerical comparisons of SGD with existing first-order optimizers and show the usefulness of SGD using a critical batch size. Moreover, we show that measured critical batch sizes are close to the sizes estimated from our theoretical results.
|
[
"['Kento Imaizumi' 'Hideaki Iiduka']"
] |
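The abstract above studies the stochastic first-order oracle (SFO) complexity, i.e. iterations multiplied by batch size. The sketch below shows how an empirical critical batch size would be read off from measured (batch size, iterations-to-target) pairs; the numbers are invented solely for illustration and are not results from the paper.

```python
# Hypothetical measurements (batch size -> iterations needed to reach a target
# training loss); the numbers below are invented purely for illustration.
measured_iters = {16: 20000, 32: 9000, 64: 4200, 128: 2200,
                  256: 1500, 512: 1200, 1024: 1100}

# SFO complexity = number of gradient evaluations = iterations * batch size.
sfo = {b: k * b for b, k in measured_iters.items()}
for b in sorted(sfo):
    print(f"batch {b:5d}: iterations {measured_iters[b]:6d}, SFO {sfo[b]:9d}")

critical_b = min(sfo, key=sfo.get)
print("empirical critical batch size:", critical_b)
```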
null | null |
2402.15345
| null | null |
http://arxiv.org/pdf/2402.15345v1
|
2024-02-23T14:26:12Z
|
2024-02-23T14:26:12Z
|
Fourier Basis Density Model
|
We introduce a lightweight, flexible and end-to-end trainable probability density model parameterized by a constrained Fourier basis. We assess its performance at approximating a range of multi-modal 1D densities, which are generally difficult to fit. In comparison to the deep factorized model introduced in [1], our model achieves a lower cross entropy at a similar computational budget. In addition, we also evaluate our method on a toy compression task, demonstrating its utility in learned compression.
|
[
"['Alfredo De la Fuente' 'Saurabh Singh' 'Johannes Ballé']"
] |
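One simple way to obtain a non-negative, normalized 1D density from a truncated Fourier basis, sketched below, is to square the truncated series and normalize numerically. This only conveys the flavour of a Fourier-parameterized density model; it is not claimed to be the paper's exact parameterization, constraint, or training objective, and the coefficients are arbitrary.

```python
import numpy as np

def fourier_density(coeffs_cos, coeffs_sin, grid_size=2048):
    """Toy 1D density on [0, 1]: square a truncated Fourier series so it is
    non-negative, then normalize numerically (illustration only)."""
    x = np.linspace(0.0, 1.0, grid_size)
    f = np.full_like(x, float(coeffs_cos[0]))
    for k in range(1, len(coeffs_cos)):
        f += coeffs_cos[k] * np.cos(2 * np.pi * k * x)
    for k in range(1, len(coeffs_sin)):
        f += coeffs_sin[k] * np.sin(2 * np.pi * k * x)
    p = f ** 2
    p /= p.sum() * (x[1] - x[0])   # Riemann-sum normalization so p integrates to ~1
    return x, p

# Arbitrary hand-picked coefficients giving an oscillatory, multi-peaked shape.
x, p = fourier_density(coeffs_cos=[0.2, 1.0, 0.0, 0.6], coeffs_sin=[0.0, 0.0, 0.8, 0.0])
dx = x[1] - x[0]
print("integral ~", round(float(p.sum() * dx), 4))
print("highest peak near x =", round(float(x[np.argmax(p)]), 3))
```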
null | null |
2402.15347
| null | null |
http://arxiv.org/pdf/2402.15347v2
|
2024-05-10T10:47:25Z
|
2024-02-23T14:31:10Z
|
Information-Theoretic Safe Bayesian Optimization
|
We consider a sequential decision making task, where the goal is to optimize an unknown function without evaluating parameters that violate an a priori unknown (safety) constraint. A common approach is to place a Gaussian process prior on the unknown functions and allow evaluations only in regions that are safe with high probability. Most current methods rely on a discretization of the domain and cannot be directly extended to the continuous case. Moreover, the way in which they exploit regularity assumptions about the constraint introduces an additional critical hyperparameter. In this paper, we propose an information-theoretic safe exploration criterion that directly exploits the GP posterior to identify the most informative safe parameters to evaluate. The combination of this exploration criterion with a well known Bayesian optimization acquisition function yields a novel safe Bayesian optimization selection criterion. Our approach is naturally applicable to continuous domains and does not require additional explicit hyperparameters. We theoretically analyze the method and show that we do not violate the safety constraint with high probability and that we learn about the value of the safe optimum up to arbitrary precision. Empirical evaluations demonstrate improved data-efficiency and scalability.
|
[
"['Alessandro G. Bottero' 'Carlos E. Luis' 'Julia Vinogradska'\n 'Felix Berkenkamp' 'Jan Peters']"
] |
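A common ingredient of safe Bayesian optimization, used here as background only, is a high-probability safe set derived from a GP posterior: points whose lower confidence bound on the constraint stays above the safety threshold. The sketch below builds such a set with a hand-rolled GP; it does not implement the paper's information-theoretic exploration criterion, and the constraint g, the seed points, and the confidence multiplier beta are assumptions chosen for illustration.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=0.2, variance=1.0):
    d2 = (A[:, None] - B[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior(x_train, y_train, x_query, noise=1e-3):
    """Posterior mean and std of a GP with an RBF kernel at the query points."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_query, x_train)
    Kss = rbf_kernel(x_query, x_query)
    alpha = np.linalg.solve(K, y_train)
    mu = Ks @ alpha
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mu, np.sqrt(np.clip(np.diag(cov), 0.0, None))

# Hypothetical safety constraint g(x) >= 0, observed at a few known-safe seeds.
g = lambda x: np.sin(3 * x) + 0.5
x_obs = np.array([0.1, 0.2, 0.3])
y_obs = g(x_obs)

x_grid = np.linspace(0.0, 1.0, 200)
mu, sigma = gp_posterior(x_obs, y_obs, x_grid)
beta = 2.0                                  # confidence multiplier (assumption)
safe = mu - beta * sigma >= 0.0             # pessimistic, high-probability safe set
print(f"fraction of the domain currently certified safe: {safe.mean():.2f}")
```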
null | null |
2402.15350
| null | null |
http://arxiv.org/abs/2402.15350v2
|
2024-07-02T06:12:05Z
|
2024-02-23T14:38:05Z
|
Farsight: Fostering Responsible AI Awareness During AI Application
Prototyping
|
Prompt-based interfaces for Large Language Models (LLMs) have made prototyping and building AI-powered applications easier than ever before. However, identifying potential harms that may arise from AI applications remains a challenge, particularly during prompt-based prototyping. To address this, we present Farsight, a novel in situ interactive tool that helps people identify potential harms from the AI applications they are prototyping. Based on a user's prompt, Farsight highlights news articles about relevant AI incidents and allows users to explore and edit LLM-generated use cases, stakeholders, and harms. We report design insights from a co-design study with 10 AI prototypers and findings from a user study with 42 AI prototypers. After using Farsight, AI prototypers in our user study are better able to independently identify potential harms associated with a prompt and find our tool more useful and usable than existing resources. Their qualitative feedback also highlights that Farsight encourages them to focus on end-users and think beyond immediate harms. We discuss these findings and reflect on their implications for designing AI prototyping experiences that meaningfully engage with AI harms. Farsight is publicly accessible at: https://PAIR-code.github.io/farsight.
|
[
"['Zijie J. Wang' 'Chinmay Kulkarni' 'Lauren Wilcox' 'Michael Terry'\n 'Michael Madaio']"
] |
null | null |
2402.15351
| null | null |
http://arxiv.org/pdf/2402.15351v1
|
2024-02-23T14:38:19Z
|
2024-02-23T14:38:19Z
|
AutoMMLab: Automatically Generating Deployable Models from Language
Instructions for Computer Vision Tasks
|
Automated machine learning (AutoML) is a collection of techniques designed to automate the machine learning development process. While traditional AutoML approaches have been successfully applied in several critical steps of model development (e.g. hyperparameter optimization), there is no AutoML system that automates the entire end-to-end model production workflow. To fill this gap, we present AutoMMLab, a general-purpose LLM-empowered AutoML system that follows users' language instructions to automate the whole model production workflow for computer vision tasks. The proposed AutoMMLab system effectively employs LLMs as the bridge to connect AutoML and the OpenMMLab community, empowering non-expert individuals to easily build task-specific models via a user-friendly language interface. Specifically, we propose RU-LLaMA to understand users' requests and schedule the whole pipeline, and propose a novel LLM-based hyperparameter optimizer called HPO-LLaMA to effectively search for the optimal hyperparameters. Experiments show that our AutoMMLab system is versatile and covers a wide range of mainstream tasks, including classification, detection, segmentation and keypoint estimation. We further develop a new benchmark, called LAMP, for studying key components in the end-to-end prompt-based model training pipeline. Code, model, and data will be released.
|
[
"['Zekang Yang' 'Wang Zeng' 'Sheng Jin' 'Chen Qian' 'Ping Luo' 'Wentao Liu']"
] |
null | null |
2402.15352
| null | null |
http://arxiv.org/pdf/2402.15352v1
|
2024-02-23T14:39:12Z
|
2024-02-23T14:39:12Z
|
On normalization-equivariance properties of supervised and unsupervised
denoising methods: a survey
|
Image denoising is probably the oldest and still one of the most active research topics in image processing. Many methodological concepts have been introduced in the past decades and have improved performance significantly in recent years, especially with the emergence of convolutional neural networks and supervised deep learning. In this paper, we propose a survey, in the form of a guided tour, of supervised and unsupervised learning methods for image denoising, classifying the main principles elaborated during this evolution, with particular attention given to recent developments in supervised learning. It is conceived as a tutorial that organizes current approaches within a comprehensive framework. We give insights into the rationales and limitations of the most performant methods in the literature, and we highlight the common features shared by many of them. Finally, we focus on the normalization-equivariance properties that are surprisingly not guaranteed by most supervised methods. It is of paramount importance that intensity shifting or scaling applied to the input image results in a corresponding change in the denoiser output.
|
[
"['Sébastien Herbreteau' 'Charles Kervrann']"
] |
null | null |
2402.15359
| null | null |
http://arxiv.org/pdf/2402.15359v1
|
2024-02-23T14:52:05Z
|
2024-02-23T14:52:05Z
|
Streaming Gaussian Dirichlet Random Fields for Spatial Predictions of
High Dimensional Categorical Observations
|
We present the Streaming Gaussian Dirichlet Random Field (S-GDRF) model, a novel approach for modeling a stream of spatiotemporally distributed, sparse, high-dimensional categorical observations. The proposed approach efficiently learns global and local patterns in spatiotemporal data, allowing for fast inference and querying with a bounded time complexity. Using a high-resolution data series of plankton images classified with a neural network, we demonstrate the ability of the approach to make more accurate predictions compared to a Variational Gaussian Process (VGP), and to learn a predictive distribution of observations from streaming categorical data. S-GDRFs open the door to enabling efficient informative path planning over high-dimensional categorical observations, which until now has not been feasible.
|
[
"['J. E. San Soucie' 'H. M. Sosik' 'Y. Girdhar']"
] |
null | null |
2402.15360
| null | null |
http://arxiv.org/pdf/2402.15360v1
|
2024-02-23T14:52:44Z
|
2024-02-23T14:52:44Z
|
All Thresholds Barred: Direct Estimation of Call Density in Bioacoustic
Data
|
Passive acoustic monitoring (PAM) studies generate thousands of hours of audio, which may be used to monitor specific animal populations, conduct broad biodiversity surveys, detect threats such as poachers, and more. Machine learning classifiers for species identification are increasingly being used to process the vast amount of audio generated by bioacoustic surveys, expediting analysis and increasing the utility of PAM as a management tool. In common practice, a threshold is applied to classifier output scores, and scores above the threshold are aggregated into a detection count. The choice of threshold produces biased counts of vocalizations, which are subject to false positive/negative rates that may vary across subsets of the dataset. In this work, we advocate for directly estimating call density: The proportion of detection windows containing the target vocalization, regardless of classifier score. Our approach targets a desirable ecological estimator and provides a more rigorous grounding for identifying the core problems caused by distribution shifts -- when the defining characteristics of the data distribution change -- and designing strategies to mitigate them. We propose a validation scheme for estimating call density in a body of data and obtain, through Bayesian reasoning, probability distributions of confidence scores for both the positive and negative classes. We use these distributions to predict site-level densities, which may be subject to distribution shifts. We test our proposed methods on a real-world study of Hawaiian birds and provide simulation results leveraging existing fully annotated datasets, demonstrating robustness to variations in call density and classifier model quality.
|
[
"['Amanda K. Navine' 'Tom Denton' 'Matthew J. Weldy' 'Patrick J. Hart']"
] |
null | null |
2402.15365
| null | null |
http://arxiv.org/pdf/2402.15365v1
|
2024-02-23T14:55:58Z
|
2024-02-23T14:55:58Z
|
Efficient semi-supervised inference for logistic regression under
case-control studies
|
Semi-supervised learning has received increasing attention in statistics and machine learning. In semi-supervised learning settings, a labeled data set with both outcomes and covariates and an unlabeled data set with covariates only are collected. We consider an inference problem in semi-supervised settings where the outcome in the labeled data is binary and the labeled data are collected by case-control sampling. Case-control sampling is an effective sampling scheme for alleviating the imbalance structure in binary data. Under the logistic model assumption, case-control data can still provide a consistent estimator for the slope parameter of the regression model. However, the intercept parameter is not identifiable. Consequently, the marginal case proportion cannot be estimated from case-control data. We find that, with the availability of the unlabeled data, the intercept parameter can be identified in the semi-supervised learning setting. We construct the likelihood function of the observed labeled and unlabeled data and obtain the maximum likelihood estimator via an iterative algorithm. The proposed estimator is shown to be consistent, asymptotically normal, and semiparametrically efficient. Extensive simulation studies are conducted to show the finite sample performance of the proposed method. The results imply that the unlabeled data not only help to identify the intercept but also improve the estimation efficiency of the slope parameter. Meanwhile, the marginal case proportion can be estimated accurately by the proposed method.
|
[
"['Zhuojun Quan' 'Yuanyuan Lin' 'Kani Chen' 'Wen Yu']"
] |
null | null |
2402.15370
| null | null |
http://arxiv.org/pdf/2402.15370v1
|
2024-02-23T15:07:13Z
|
2024-02-23T15:07:13Z
|
Dual Encoder: Exploiting the Potential of Syntactic and Semantic for
Aspect Sentiment Triplet Extraction
|
Aspect Sentiment Triple Extraction (ASTE) is an emerging task in fine-grained sentiment analysis. Recent studies have employed Graph Neural Networks (GNN) to model the syntax-semantic relationships inherent in triplet elements. However, they have yet to fully tap into the vast potential of syntactic and semantic information within the ASTE task. In this work, we propose a Dual Encoder: Exploiting the potential of Syntactic and Semantic model (D2E2S), which maximizes the syntactic and semantic relationships among words. Specifically, our model utilizes a dual-channel encoder with a BERT channel to capture semantic information, and an enhanced LSTM channel for comprehensive syntactic information capture. Subsequently, we introduce the heterogeneous feature interaction module to capture intricate interactions between dependency syntax and attention semantics, and to dynamically select vital nodes. We leverage the synergy of these modules to harness the significant potential of syntactic and semantic information in ASTE tasks. Testing on public benchmarks, our D2E2S model surpasses the current state-of-the-art (SOTA), demonstrating its effectiveness.
|
[
"['Xiaowei Zhao' 'Yong Zhou' 'Xiujuan Xu']"
] |
null | null |
2402.15374
| null | null |
http://arxiv.org/pdf/2402.15374v2
|
2024-06-10T14:10:38Z
|
2024-02-23T15:19:37Z
|
Outlier detection by ensembling uncertainty with negative objectness
|
Outlier detection is an essential capability in safety-critical applications of supervised visual recognition. Most of the existing methods deliver the best results by encouraging standard closed-set models to produce low-confidence predictions in negative training data. However, that approach conflates prediction uncertainty with recognition of the negative class. We therefore reconsider direct prediction of K+1 logits that correspond to K groundtruth classes and one outlier class. This setup allows us to formulate a novel anomaly score as an ensemble of in-distribution uncertainty and the posterior of the outlier class, which we term negative objectness. Now outliers can be independently detected due to i) high prediction uncertainty or ii) similarity with negative data. We embed our method into a dense prediction architecture with mask-level recognition over K+2 classes. The training procedure encourages the novel K+2-th class to learn negative objectness at pasted negative instances. Our models outperform the current state of the art on standard benchmarks for image-wide and pixel-level outlier detection with and without training on real negative data.
|
[
"['Anja Delić' 'Matej Grcić' 'Siniša Šegvić']"
] |
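To illustrate the kind of score described in the abstract above, the sketch below combines in-distribution uncertainty (entropy over the K known-class probabilities) with the posterior of an explicit outlier logit. It is a generic K+1-logit illustration with synthetic logits, not the paper's exact anomaly score or its K+2 mask-level architecture; the equal ensemble weights are an assumption.

```python
import numpy as np

def anomaly_score(logits):
    """Ensemble of in-distribution uncertainty and outlier-class posterior.
    `logits` has shape (N, K+1); the last logit belongs to the outlier class."""
    z = logits - logits.max(axis=1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    p_known, p_outlier = probs[:, :-1], probs[:, -1]
    p_known = p_known / p_known.sum(axis=1, keepdims=True)
    # Normalized entropy over the K known classes as the uncertainty signal.
    entropy = -(p_known * np.log(p_known + 1e-12)).sum(axis=1) / np.log(p_known.shape[1])
    return 0.5 * entropy + 0.5 * p_outlier      # equal-weight ensemble (illustrative)

rng = np.random.default_rng(0)
inliers = rng.normal(0, 1, (5, 11))
inliers[:, 0] += 6.0                            # confident known class
outliers = rng.normal(0, 1, (5, 11))
outliers[:, -1] += 2.0                          # elevated outlier logit
print("inlier scores :", np.round(anomaly_score(inliers), 3))
print("outlier scores:", np.round(anomaly_score(outliers), 3))
```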
null | null |
2402.15390
| null | null |
http://arxiv.org/pdf/2402.15390v2
|
2024-05-26T20:47:41Z
|
2024-02-23T15:42:12Z
|
Explorations of Self-Repair in Language Models
|
Prior interpretability research studying narrow distributions has preliminarily identified self-repair, a phenomenon in which, if components in large language models are ablated, later components change their behavior to compensate. Our work builds on this past literature, demonstrating that self-repair exists across a variety of model families and sizes when ablating individual attention heads on the full training distribution. We further show that on the full training distribution self-repair is imperfect, as the original direct effect of the head is not fully restored, and noisy, since the degree of self-repair varies significantly across different prompts (sometimes overcorrecting beyond the original effect). We highlight two different mechanisms that contribute to self-repair, including changes in the final LayerNorm scaling factor and sparse sets of neurons implementing Anti-Erasure. We additionally discuss the implications of these results for interpretability practitioners and close with a more speculative discussion on the mystery of why self-repair occurs in these models at all, highlighting evidence for the Iterative Inference hypothesis in language models, a framework that predicts self-repair.
|
[
"['Cody Rushing' 'Neel Nanda']"
] |
null | null |
2402.15391
| null | null |
http://arxiv.org/pdf/2402.15391v1
|
2024-02-23T15:47:26Z
|
2024-02-23T15:47:26Z
|
Genie: Generative Interactive Environments
|
We introduce Genie, the first generative interactive environment trained in an unsupervised manner from unlabelled Internet videos. The model can be prompted to generate an endless variety of action-controllable virtual worlds described through text, synthetic images, photographs, and even sketches. At 11B parameters, Genie can be considered a foundation world model. It comprises a spatiotemporal video tokenizer, an autoregressive dynamics model, and a simple and scalable latent action model. Genie enables users to act in the generated environments on a frame-by-frame basis despite training without any ground-truth action labels or other domain-specific requirements typically found in the world model literature. Further, the resulting learned latent action space facilitates training agents to imitate behaviors from unseen videos, opening the path for training generalist agents of the future.
|
[
"['Jake Bruce' 'Michael Dennis' 'Ashley Edwards' 'Jack Parker-Holder'\n 'Yuge Shi' 'Edward Hughes' 'Matthew Lai' 'Aditi Mavalankar'\n 'Richie Steigerwald' 'Chris Apps' 'Yusuf Aytar' 'Sarah Bechtle'\n 'Feryal Behbahani' 'Stephanie Chan' 'Nicolas Heess' 'Lucy Gonzalez'\n 'Simon Osindero' 'Sherjil Ozair' 'Scott Reed' 'Jingwei Zhang'\n 'Konrad Zolna' 'Jeff Clune' 'Nando de Freitas' 'Satinder Singh'\n 'Tim Rocktäschel']"
] |
null | null |
2402.15392
| null | null |
http://arxiv.org/pdf/2402.15392v2
|
2024-06-06T06:49:52Z
|
2024-02-23T15:49:46Z
|
Offline Inverse RL: New Solution Concepts and Provably Efficient
Algorithms
|
Inverse reinforcement learning (IRL) aims to recover the reward function of an expert agent from demonstrations of behavior. It is well-known that the IRL problem is fundamentally ill-posed, i.e., many reward functions can explain the demonstrations. For this reason, IRL has been recently reframed in terms of estimating the feasible reward set (Metelli et al., 2021), thus postponing the selection of a single reward. However, so far, the available formulations and algorithmic solutions have been proposed and analyzed mainly for the online setting, where the learner can interact with the environment and query the expert at will. This is clearly unrealistic in most practical applications, where the availability of an offline dataset is a much more common scenario. In this paper, we introduce a novel notion of feasible reward set capturing the opportunities and limitations of the offline setting and we analyze the complexity of its estimation. This requires the introduction of an original learning framework that copes with the intrinsic difficulty of the setting, for which the data coverage is not under control. Then, we propose two computationally and statistically efficient algorithms, IRLO and PIRLO, for addressing the problem. In particular, the latter adopts a specific form of pessimism to enforce the novel desirable property of inclusion monotonicity of the delivered feasible set. With this work, we aim to provide a panorama of the challenges of the offline IRL problem and how they can be fruitfully addressed.
|
[
"['Filippo Lazzati' 'Mirco Mutti' 'Alberto Maria Metelli']"
] |
null | null |
2402.15393
| null | null |
http://arxiv.org/pdf/2402.15393v2
|
2024-06-07T17:10:09Z
|
2024-02-23T15:51:45Z
|
NeuralThink: Learning Algorithms For Consistent and Efficient
Extrapolation Across General Tasks
|
We propose NeuralThink, a novel deep thinking architecture that can efficiently and consistently extrapolate, i.e., learn algorithms from smaller problems (in terms of observation size) and execute those algorithms in large problems. Contrary to previous deep thinking architectures, NeuralThink can be naturally applied in both same-size problems, where the input and output sizes are the same, and in different-size problems, where the size of the input and output differ. To allow for this versatility, we design NeuralThink with three main components: a recurrent module that iteratively processes input information at different scales, a processing module responsible for aggregating the previously processed information, and a curriculum-based training scheme that improves the extrapolation performance of the method. To evaluate our method we introduce a set of novel different-size tasks and we show that NeuralThink consistently outperforms the prior state-of-the-art deep thinking approaches in extrapolating to larger problems, considering smaller training problems and requiring fewer parameters than other approaches.
|
[
"['Bernardo Esteves' 'Miguel Vasco' 'Francisco S. Melo']"
] |
null | null |
2402.15398
| null | null |
http://arxiv.org/pdf/2402.15398v1
|
2024-02-23T16:00:04Z
|
2024-02-23T16:00:04Z
|
TransFlower: An Explainable Transformer-Based Model with Flow-to-Flow
Attention for Commuting Flow Prediction
|
Understanding the link between urban planning and commuting flows is crucial for guiding urban development and policymaking. This research, bridging computer science and urban studies, addresses the challenge of integrating these fields with their distinct focuses. Traditional urban studies methods, like the gravity and radiation models, often underperform in complex scenarios due to their limited handling of multiple variables and reliance on overly simplistic and unrealistic assumptions, such as spatial isotropy. While deep learning models offer improved accuracy, their black-box nature poses a trade-off between performance and explainability -- both vital for analyzing complex societal phenomena like commuting flows. To address this, we introduce TransFlower, an explainable, transformer-based model employing flow-to-flow attention to predict urban commuting patterns. It features a geospatial encoder with an anisotropy-aware relative location encoder for nuanced flow representation. Following this, the transformer-based flow predictor enhances this by leveraging attention mechanisms to efficiently capture flow interactions. Our model outperforms existing methods by up to 30.8% in terms of the Common Part of Commuters metric, offering insights into mobility dynamics crucial for urban planning and policy decisions.
|
[
"['Yan Luo' 'Zhuoyue Wan' 'Yuzhong Chen' 'Gengchen Mai' 'Fu-lai Chung'\n 'Kent Larson']"
] |
null | null |
2402.15399
| null | null |
http://arxiv.org/pdf/2402.15399v1
|
2024-02-23T16:01:44Z
|
2024-02-23T16:01:44Z
|
Distributionally Robust Off-Dynamics Reinforcement Learning: Provable
Efficiency with Linear Function Approximation
|
We study off-dynamics Reinforcement Learning (RL), where the policy is trained on a source domain and deployed to a distinct target domain. We aim to solve this problem via online distributionally robust Markov decision processes (DRMDPs), where the learning algorithm actively interacts with the source domain while seeking the optimal performance under the worst possible dynamics that is within an uncertainty set of the source domain's transition kernel. We provide the first study on online DRMDPs with function approximation for off-dynamics RL. We find that DRMDPs' dual formulation can induce nonlinearity, even when the nominal transition kernel is linear, leading to error propagation. By designing a $d$-rectangular uncertainty set using the total variation distance, we remove this additional nonlinearity and bypass the error propagation. We then introduce DR-LSVI-UCB, the first provably efficient online DRMDP algorithm for off-dynamics RL with function approximation, and establish a polynomial suboptimality bound that is independent of the state and action space sizes. Our work makes the first step towards a deeper understanding of the provable efficiency of online DRMDPs with linear function approximation. Finally, we substantiate the performance and robustness of DR-LSVI-UCB through different numerical experiments.
|
[
"['Zhishuai Liu' 'Pan Xu']"
] |
null | null |
2402.15402
| null | null |
http://arxiv.org/pdf/2402.15402v1
|
2024-02-23T16:05:51Z
|
2024-02-23T16:05:51Z
|
Grasp, See and Place: Efficient Unknown Object Rearrangement with Policy
Structure Prior
|
We focus on the task of unknown object rearrangement, where a robot is supposed to re-configure the objects into a desired goal configuration specified by an RGB-D image. Recent works explore unknown object rearrangement systems by incorporating learning-based perception modules. However, they are sensitive to perception error, and pay less attention to task-level performance. In this paper, we aim to develop an effective system for unknown object rearrangement amidst perception noise. We theoretically reveal that noisy perception impacts grasping and placing in a decoupled way, and show that such a decoupled structure is non-trivial to exploit for improving task optimality. We propose GSP, a dual-loop system with the decoupled structure as prior. For the inner loop, we learn an active seeing policy for self-confident object matching to improve the perception of place. For the outer loop, we learn a grasp policy aware of object matching and grasp capability guided by task-level rewards. We leverage the foundation model CLIP for object matching, policy learning and self-termination. A series of experiments indicate that GSP can conduct unknown object rearrangement with a higher completion rate and fewer steps.
|
[
"['Kechun Xu' 'Zhongxiang Zhou' 'Jun Wu' 'Haojian Lu' 'Rong Xiong'\n 'Yue Wang']"
] |
null | null |
2402.15404
| null | null |
http://arxiv.org/pdf/2402.15404v1
|
2024-02-23T16:06:38Z
|
2024-02-23T16:06:38Z
|
United We Pretrain, Divided We Fail! Representation Learning for Time
Series by Pretraining on 75 Datasets at Once
|
In natural language processing and vision, pretraining is utilized to learn effective representations. Unfortunately, the success of pretraining does not easily carry over to time series due to potential mismatch between sources and target. Actually, common belief is that multi-dataset pretraining does not work for time series! Au contraire, we introduce a new self-supervised contrastive pretraining approach to learn one encoding from many unlabeled and diverse time series datasets, so that the single learned representation can then be reused in several target domains for, say, classification. Specifically, we propose the XD-MixUp interpolation method and the Soft Interpolation Contextual Contrasting (SICC) loss. Empirically, this outperforms both supervised training and other self-supervised pretraining methods when finetuning on low-data regimes. This disproves the common belief: We can actually learn from multiple time series datasets, even from 75 at once.
|
[
"['Maurice Kraus' 'Felix Divo' 'David Steinmann' 'Devendra Singh Dhami'\n 'Kristian Kersting']"
] |
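As a minimal illustration of the interpolation building block mentioned in the abstract above, the sketch below mixes two time series with a Beta-distributed coefficient, mixup-style. The full XD-MixUp scheme and the SICC contrastive loss are not reproduced here, and the two example series and the alpha value are synthetic placeholders.

```python
import numpy as np

def mixup_time_series(x1, x2, alpha=0.2, rng=None):
    """Interpolate two (length, channels) time series with a Beta-distributed
    coefficient, mixup-style. A building block only, not the full XD-MixUp method."""
    if rng is None:
        rng = np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1.0 - lam) * x2, lam

rng = np.random.default_rng(0)
# Two hypothetical unlabeled series of equal length from different datasets.
t = np.linspace(0, 1, 128)
x_a = np.sin(2 * np.pi * 5 * t)[:, None]
x_b = np.sign(np.sin(2 * np.pi * 2 * t))[:, None]
x_mix, lam = mixup_time_series(x_a, x_b, rng=rng)
print("mixing coefficient lambda =", round(float(lam), 3), "mixed shape:", x_mix.shape)
```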
null | null |
2402.15406
| null | null |
http://arxiv.org/pdf/2402.15406v1
|
2024-02-23T16:07:39Z
|
2024-02-23T16:07:39Z
|
Conformalized-DeepONet: A Distribution-Free Framework for Uncertainty
Quantification in Deep Operator Networks
|
In this paper, we adopt conformal prediction, a distribution-free uncertainty quantification (UQ) framework, to obtain confidence prediction intervals with coverage guarantees for Deep Operator Network (DeepONet) regression. Initially, we enhance the uncertainty quantification frameworks (B-DeepONet and Prob-DeepONet) previously proposed by the authors by using split conformal prediction. By combining conformal prediction with our Prob- and B-DeepONets, we effectively quantify uncertainty by generating rigorous confidence intervals for DeepONet prediction. Additionally, we design a novel Quantile-DeepONet that allows for a more natural use of split conformal prediction. We refer to this distribution-free effective uncertainty quantification framework as split conformal Quantile-DeepONet regression. Finally, we demonstrate the effectiveness of the proposed methods using various ordinary and partial differential equation numerical examples, as well as multi-fidelity learning.
|
[
"['Christian Moya' 'Amirhossein Mollaali' 'Zecheng Zhang' 'Lu Lu'\n 'Guang Lin']"
] |
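The abstract above relies on split conformal prediction. The sketch below shows the standard recipe for regression: calibration residuals, a finite-sample-corrected quantile, and symmetric prediction intervals, with a trivial stand-in predictor in place of a trained DeepONet. It does not reproduce the paper's Quantile-DeepONet variant, and the synthetic data are for illustration only.

```python
import numpy as np

def split_conformal_interval(residuals_cal, y_pred_test, alpha=0.1):
    """Standard split conformal prediction for regression: use absolute residuals
    on a held-out calibration set to build (1 - alpha) prediction intervals."""
    n = len(residuals_cal)
    # Finite-sample-corrected quantile level of the calibration scores.
    q_level = np.ceil((n + 1) * (1 - alpha)) / n
    q = np.quantile(np.abs(residuals_cal), min(q_level, 1.0))
    return y_pred_test - q, y_pred_test + q

rng = np.random.default_rng(0)
# Hypothetical data: y = 2x plus Gaussian noise; any trained model could stand in.
x_cal, x_test = rng.uniform(0, 1, 500), rng.uniform(0, 1, 1000)
y_cal = 2 * x_cal + rng.normal(0, 0.1, 500)
y_test = 2 * x_test + rng.normal(0, 0.1, 1000)
predict = lambda x: 2 * x                      # stand-in for a trained operator network
lo, hi = split_conformal_interval(y_cal - predict(x_cal), predict(x_test))
coverage = np.mean((y_test >= lo) & (y_test <= hi))
print(f"empirical coverage at nominal 90%: {coverage:.3f}")
```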
null | null |
2402.15409
| null | null |
http://arxiv.org/pdf/2402.15409v1
|
2024-02-23T16:16:38Z
|
2024-02-23T16:16:38Z
|
Lasso with Latents: Efficient Estimation, Covariate Rescaling, and
Computational-Statistical Gaps
|
It is well-known that the statistical performance of Lasso can suffer significantly when the covariates of interest have strong correlations. In particular, the prediction error of Lasso becomes much worse than computationally inefficient alternatives like Best Subset Selection. Due to a large conjectured computational-statistical tradeoff in the problem of sparse linear regression, it may be impossible to close this gap in general. In this work, we propose a natural sparse linear regression setting where strong correlations between covariates arise from unobserved latent variables. In this setting, we analyze the problem caused by strong correlations and design a surprisingly simple fix. While Lasso with standard normalization of covariates fails, there exists a heterogeneous scaling of the covariates with which Lasso will suddenly obtain strong provable guarantees for estimation. Moreover, we design a simple, efficient procedure for computing such a "smart scaling." The sample complexity of the resulting "rescaled Lasso" algorithm incurs (in the worst case) quadratic dependence on the sparsity of the underlying signal. While this dependence is not information-theoretically necessary, we give evidence that it is optimal among the class of polynomial-time algorithms, via the method of low-degree polynomials. This argument reveals a new connection between sparse linear regression and a special version of sparse PCA with a near-critical negative spike. The latter problem can be thought of as a real-valued analogue of learning a sparse parity. Using it, we also establish the first computational-statistical gap for the closely related problem of learning a Gaussian Graphical Model.
|
[
"['Jonathan Kelner' 'Frederic Koehler' 'Raghu Meka' 'Dhruv Rohatgi']"
] |
null | null |
2402.15411
| null | null |
http://arxiv.org/pdf/2402.15411v2
|
2024-06-27T16:15:39Z
|
2024-02-23T16:19:32Z
|
Optimistic Information Directed Sampling
|
We study the problem of online learning in contextual bandit problems where the loss function is assumed to belong to a known parametric function class. We propose a new analytic framework for this setting that bridges the Bayesian theory of information-directed sampling due to Russo and Van Roy (2018) and the worst-case theory of Foster, Kakade, Qian, and Rakhlin (2021) based on the decision-estimation coefficient. Drawing from both lines of work, we propose an algorithmic template called Optimistic Information-Directed Sampling and show that it can achieve instance-dependent regret guarantees similar to the ones achievable by the classic Bayesian IDS method, but with the major advantage of not requiring any Bayesian assumptions. The key technical innovation of our analysis is introducing an optimistic surrogate model for the regret and using it to define a frequentist version of the Information Ratio of Russo and Van Roy (2018), and a less conservative version of the Decision Estimation Coefficient of Foster et al. (2021). Keywords: Contextual bandits, information-directed sampling, decision estimation coefficient, first-order regret bounds.
|
[
"['Gergely Neu' 'Matteo Papini' 'Ludovic Schwartz']"
] |
null | null |
2402.15413
| null | null |
http://arxiv.org/pdf/2402.15413v1
|
2024-02-23T16:19:49Z
|
2024-02-23T16:19:49Z
|
G-RepsNet: A Fast and General Construction of Equivariant Networks for
Arbitrary Matrix Groups
|
Group equivariance is a strong inductive bias useful in a wide range of deep learning tasks. However, constructing efficient equivariant networks for general groups and domains is difficult. Recent work by Finzi et al. (2021) directly solves the equivariance constraint for arbitrary matrix groups to obtain equivariant MLPs (EMLPs). But this method does not scale well and scaling is crucial in deep learning. Here, we introduce Group Representation Networks (G-RepsNets), a lightweight equivariant network for arbitrary matrix groups with features represented using tensor polynomials. The key intuition for our design is that using tensor representations in the hidden layers of a neural network along with simple inexpensive tensor operations can lead to expressive universal equivariant networks. We find G-RepsNet to be competitive with EMLP on several tasks with group symmetries such as O(5), O(1, 3), and O(3) with scalars, vectors, and second-order tensors as data types. On image classification tasks, we find that G-RepsNet using second-order representations is competitive and often even outperforms sophisticated state-of-the-art equivariant models such as GCNNs (Cohen & Welling, 2016a) and E(2)-CNNs (Weiler & Cesa, 2019). To further illustrate the generality of our approach, we show that G-RepsNet is competitive with G-FNO (Helwig et al., 2023) and EGNN (Satorras et al., 2021) on N-body predictions and solving PDEs, respectively, while being efficient.
|
[
"['Sourya Basu' 'Suhas Lohit' 'Matthew Brand']"
] |
null | null |
2402.15414
| null | null |
http://arxiv.org/pdf/2402.15414v1
|
2024-02-23T16:20:29Z
|
2024-02-23T16:20:29Z
|
Does Combining Parameter-efficient Modules Improve Few-shot Transfer
Accuracy?
|
Parameter-efficient fine-tuning stands as the standard for efficiently fine-tuning large language and vision models on downstream tasks. Specifically, the efficiency of low-rank adaptation has facilitated the creation and sharing of hundreds of custom LoRA modules, each trained on distinct data from various downstream tasks. In this paper, we explore the composability of LoRA modules, examining if combining these pre-trained modules enhances generalization to unseen downstream tasks. Our investigation involves evaluating two approaches: (a) uniform composition, involving averaging upstream LoRA modules with equal weights, and (b) learned composition, where we learn the weights for each upstream module and perform weighted averaging. Our experimental results on both vision and language models reveal that in few-shot settings, where only a limited number of samples are available for the downstream task, both uniform and learned composition methods result in better transfer accuracy, outperforming full fine-tuning and training a LoRA from scratch. Moreover, in full-shot settings, learned composition performs comparably to regular LoRA training with a significantly smaller number of trainable parameters. Our research unveils the potential of uniform composition for enhancing transferability in low-shot settings, without introducing additional learnable parameters.
|
[
"['Nader Asadi' 'Mahdi Beitollahi' 'Yasser Khalil' 'Yinchuan Li'\n 'Guojun Zhang' 'Xi Chen']"
] |
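To make the composition setting in the abstract above concrete, the sketch below averages the low-rank updates Delta W = B A of several LoRA modules with uniform weights; a learned composition would instead optimize those weights on downstream data. The shapes and random modules are placeholders, and this is not claimed to match the paper's exact implementation.

```python
import numpy as np

def compose_lora_updates(lora_modules, weights=None):
    """Combine several LoRA modules by weighted-averaging their low-rank updates
    Delta W = B @ A. With `weights=None` this is the uniform composition;
    a learned composition would instead fit `weights` on downstream data."""
    n = len(lora_modules)
    weights = np.full(n, 1.0 / n) if weights is None else np.asarray(weights)
    return sum(w * (B @ A) for w, (A, B) in zip(weights, lora_modules))

rng = np.random.default_rng(0)
d_out, d_in, rank = 32, 16, 4
# Three hypothetical upstream LoRA modules (A: rank x d_in, B: d_out x rank).
modules = [(rng.normal(size=(rank, d_in)), rng.normal(size=(d_out, rank)))
           for _ in range(3)]
W_base = rng.normal(size=(d_out, d_in))
W_adapted = W_base + compose_lora_updates(modules)      # uniform composition
print("adapted weight shape:", W_adapted.shape)
```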
null | null |
2402.15415
| null | null |
http://arxiv.org/pdf/2402.15415v1
|
2024-02-23T16:26:01Z
|
2024-02-23T16:26:01Z
|
The Impact of LoRA on the Emergence of Clusters in Transformers
|
In this paper, we employ the mathematical framework on Transformers developed by Sander et al. (2022) and Geshkovski et al. (2023) to explore how variations in attention parameters and initial token values impact the structural dynamics of token clusters. Our analysis demonstrates that while the clusters within a modified attention matrix dynamics can exhibit significant divergence from the original over extended periods, they maintain close similarities over shorter intervals, depending on the parameter differences. This work contributes to the fine-tuning field through practical applications to the LoRA algorithm (Hu et al., 2021) and the PEFT library, enhancing our understanding of the behavior of LoRA-enhanced Transformer models.
|
[
"['Hugo Koubbi' 'Matthieu Boussard' 'Louis Hernandez']"
] |
null | null |
2402.15420
| null | null |
http://arxiv.org/abs/2402.15420v1
|
2024-02-23T16:30:05Z
|
2024-02-23T16:30:05Z
|
PREDILECT: Preferences Delineated with Zero-Shot Language-based
Reasoning in Reinforcement Learning
|
Preference-based reinforcement learning (RL) has emerged as a new field in robot learning, where humans play a pivotal role in shaping robot behavior by expressing preferences on different sequences of state-action pairs. However, formulating realistic policies for robots demands responses from humans to an extensive array of queries. In this work, we approach the sample-efficiency challenge by expanding the information collected per query to contain both preferences and optional text prompting. To accomplish this, we leverage the zero-shot capabilities of a large language model (LLM) to reason from the text provided by humans. To accommodate the additional query information, we reformulate the reward learning objectives to contain flexible highlights -- state-action pairs that contain relatively high information and are related to the features processed in a zero-shot fashion from a pretrained LLM. In both a simulated scenario and a user study, we reveal the effectiveness of our work by analyzing the feedback and its implications. Additionally, the collective feedback collected serves to train a robot on socially compliant trajectories in a simulated social navigation landscape. We provide video examples of the trained policies at https://sites.google.com/view/rl-predilect
|
[
"['Simon Holk' 'Daniel Marta' 'Iolanda Leite']"
] |
null | null |
2402.15422
| null | null |
http://arxiv.org/pdf/2402.15422v2
|
2024-06-25T17:02:10Z
|
2024-02-23T16:32:28Z
|
A Data-Centric Approach To Generate Faithful and High Quality Patient
Summaries with Large Language Models
|
Patients often face difficulties in understanding their hospitalizations, while healthcare workers have limited resources to provide explanations. In this work, we investigate the potential of large language models to generate patient summaries based on doctors' notes and study the effect of training data on the faithfulness and quality of the generated summaries. To this end, we release (i) a rigorous labeling protocol for errors in medical texts and (ii) a publicly available dataset of annotated hallucinations in 100 doctor-written and 100 generated summaries. We show that fine-tuning on hallucination-free data effectively reduces hallucinations from 2.60 to 1.55 per summary for Llama 2, while preserving relevant information. We observe a similar effect on GPT-4 (0.70 to 0.40), when the few-shot examples are hallucination-free. We also conduct a qualitative evaluation using hallucination-free and improved training data. We find that common quantitative metrics do not correlate well with faithfulness and quality. Finally, we test GPT-4 for automatic hallucination detection, which clearly outperforms common baselines.
|
[
"['Stefan Hegselmann' 'Shannon Zejiang Shen' 'Florian Gierse'\n 'Monica Agrawal' 'David Sontag' 'Xiaoyi Jiang']"
] |
null | null |
2402.15429
| null | null |
http://arxiv.org/pdf/2402.15429v2
|
2024-07-12T21:25:42Z
|
2024-02-23T16:48:56Z
|
ProTIP: Probabilistic Robustness Verification on Text-to-Image Diffusion
Models against Stochastic Perturbation
|
Text-to-Image (T2I) Diffusion Models (DMs) have shown impressive abilities in generating high-quality images based on simple text descriptions. However, as is common with many Deep Learning (DL) models, DMs are subject to a lack of robustness. While there are attempts to evaluate the robustness of T2I DMs as a binary or worst-case problem, they cannot answer how robust in general the model is whenever an adversarial example (AE) can be found. In this study, we first introduce a probabilistic notion of T2I DMs' robustness; and then establish an efficient framework, ProTIP, to evaluate it with statistical guarantees. The main challenges stem from: i) the high computational cost of the generation process; and ii) determining if a perturbed input is an AE involves comparing two output distributions, which is fundamentally harder compared to other DL tasks like classification where an AE is identified upon misprediction of labels. To tackle the challenges, we employ sequential analysis with efficacy and futility early stopping rules in the statistical testing for identifying AEs, and adaptive concentration inequalities to dynamically determine the "just-right" number of stochastic perturbations whenever the verification target is met. Empirical experiments validate the effectiveness and efficiency of ProTIP over common T2I DMs. Finally, we demonstrate an application of ProTIP to rank commonly used defence methods.
|
[
"['Yi Zhang' 'Yun Tang' 'Wenjie Ruan' 'Xiaowei Huang' 'Siddartha Khastgir'\n 'Paul Jennings' 'Xingyu Zhao']"
] |
null | null |
2402.15430
| null | null |
http://arxiv.org/pdf/2402.15430v2
|
2024-04-11T06:40:12Z
|
2024-02-23T16:50:07Z
|
Hierarchical Invariance for Robust and Interpretable Vision Tasks at
Larger Scales
|
Developing robust and interpretable vision systems is a crucial step towards trustworthy artificial intelligence. In this regard, a promising paradigm considers embedding task-required invariant structures, e.g., geometric invariance, in the fundamental image representation. However, such invariant representations typically exhibit limited discriminability, limiting their applications in larger-scale trustworthy vision tasks. For this open problem, we conduct a systematic investigation of hierarchical invariance, exploring this topic from theoretical, practical, and application perspectives. At the theoretical level, we show how to construct over-complete invariants with a Convolutional Neural Networks (CNN)-like hierarchical architecture yet in a fully interpretable manner. The general blueprint, specific definitions, invariant properties, and numerical implementations are provided. At the practical level, we discuss how to customize this theoretical framework into a given task. With the over-completeness, discriminative features w.r.t. the task can be adaptively formed in a Neural Architecture Search (NAS)-like manner. We demonstrate the above arguments with accuracy, invariance, and efficiency results on texture, digit, and parasite classification experiments. Furthermore, at the application level, our representations are explored in real-world forensics tasks on adversarial perturbations and Artificial Intelligence Generated Content (AIGC). Such applications reveal that the proposed strategy not only realizes the theoretically promised invariance, but also exhibits competitive discriminability even in the era of deep learning. For robust and interpretable vision tasks at larger scales, hierarchical invariant representation can be considered as an effective alternative to traditional CNN and invariants.
|
[
"['Shuren Qi' 'Yushu Zhang' 'Chao Wang' 'Zhihua Xia' 'Xiaochun Cao'\n 'Jian Weng']"
] |
null | null |
2402.15432
| null | null |
http://arxiv.org/pdf/2402.15432v1
|
2024-02-23T16:51:17Z
|
2024-02-23T16:51:17Z
|
Universal Lower Bounds and Optimal Rates: Achieving Minimax Clustering
Error in Sub-Exponential Mixture Models
|
Clustering is a pivotal challenge in unsupervised machine learning and is often investigated through the lens of mixture models. The optimal error rate for recovering cluster labels in Gaussian and sub-Gaussian mixture models involves ad hoc signal-to-noise ratios. Simple iterative algorithms, such as Lloyd's algorithm, attain this optimal error rate. In this paper, we first establish a universal lower bound for the error rate in clustering any mixture model, expressed through a Chernoff divergence, a more versatile measure of model information than signal-to-noise ratios. We then demonstrate that iterative algorithms attain this lower bound in mixture models with sub-exponential tails, notably emphasizing location-scale mixtures featuring Laplace-distributed errors. Additionally, for datasets better modelled by Poisson or Negative Binomial mixtures, we study mixture models whose distributions belong to an exponential family. In such mixtures, we establish that Bregman hard clustering, a variant of Lloyd's algorithm employing a Bregman divergence, is rate optimal.
|
[
"['Maximilien Dreveton' 'Alperen Gözeten' 'Matthias Grossglauser'\n 'Patrick Thiran']"
] |
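As a concrete illustration of the Bregman hard clustering variant of Lloyd's algorithm mentioned above, the sketch below alternates nearest-center assignment under a pluggable Bregman divergence with mean updates (the cluster mean minimizes any Bregman divergence). The generalized KL divergence and the synthetic Poisson data are illustrative choices, not the paper's experimental setup.

```python
import numpy as np

def generalized_kl(x, c, eps=1e-12):
    # Bregman divergence generated by sum(x*log(x) - x); a natural fit for Poisson data.
    x, c = np.maximum(x, eps), np.maximum(c, eps)
    return np.sum(x * np.log(x / c) - x + c, axis=-1)

def bregman_hard_cluster(X, k, divergence=generalized_kl, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(n_iter):
        # Assignment step: nearest center under the chosen Bregman divergence.
        dists = np.stack([divergence(X, c) for c in centers], axis=1)   # (n, k)
        labels = dists.argmin(axis=1)
        # Update step: the cluster mean is the Bregman-optimal center.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

rng = np.random.default_rng(1)
X = np.vstack([rng.poisson([3.0, 10.0], size=(100, 2)),
               rng.poisson([12.0, 2.0], size=(100, 2))]).astype(float)
labels, centers = bregman_hard_cluster(X, k=2)
print(np.round(centers, 2))   # roughly recovers the two Poisson means
```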
null | null |
2402.15441
| null | null |
http://arxiv.org/pdf/2402.15441v4
|
2024-06-21T08:48:18Z
|
2024-02-13T09:19:05Z
|
Active Few-Shot Fine-Tuning
|
We study the question: How can we select the right data for fine-tuning to a specific task? We call this data selection problem active fine-tuning and show that it is an instance of transductive active learning, a novel generalization of classical active learning. We propose ITL, short for information-based transductive learning, an approach which samples adaptively to maximize information gained about the specified task. We are the first to show, under general regularity assumptions, that such decision rules converge uniformly to the smallest possible uncertainty obtainable from the accessible data. We apply ITL to the few-shot fine-tuning of large neural networks and show that fine-tuning with ITL learns the task with significantly fewer examples than the state-of-the-art.
|
[
"['Jonas Hübotter' 'Bhavya Sukhija' 'Lenart Treven' 'Yarden As'\n 'Andreas Krause']"
] |
null | null |
2402.15444
| null | null |
http://arxiv.org/pdf/2402.15444v1
|
2024-02-22T05:48:03Z
|
2024-02-22T05:48:03Z
|
Unleashing the Power of Imbalanced Modality Information for Multi-modal
Knowledge Graph Completion
|
Multi-modal knowledge graph completion (MMKGC) aims to predict the missing triples in the multi-modal knowledge graphs by incorporating structural, visual, and textual information of entities into the discriminant models. The information from different modalities will work together to measure the triple plausibility. Existing MMKGC methods overlook the imbalance problem of modality information among entities, resulting in inadequate modal fusion and inefficient utilization of the raw modality information. To address the mentioned problems, we propose Adaptive Multi-modal Fusion and Modality Adversarial Training (AdaMF-MAT) to unleash the power of imbalanced modality information for MMKGC. AdaMF-MAT achieves multi-modal fusion with adaptive modality weights and further generates adversarial samples by modality-adversarial training to enhance the imbalanced modality information. Our approach is a co-design of the MMKGC model and training strategy which can outperform 19 recent MMKGC methods and achieve new state-of-the-art results on three public MMKGC benchmarks. Our code and data have been released at https://github.com/zjukg/AdaMF-MAT.
|
[
"['Yichi Zhang' 'Zhuo Chen' 'Lei Liang' 'Huajun Chen' 'Wen Zhang']"
] |
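The fusion step described above, combining structural, visual, and textual entity embeddings with adaptive modality weights, can be illustrated with a small gating module. This is a generic sketch, not the AdaMF-MAT architecture, and it omits the modality-adversarial training entirely; the dimensions and the gating form are assumptions.

```python
import torch
import torch.nn as nn

class AdaptiveFusion(nn.Module):
    """Fuse per-modality entity embeddings with input-dependent weights."""
    def __init__(self, dim: int, n_modalities: int = 3):
        super().__init__()
        self.gate = nn.Linear(dim * n_modalities, n_modalities)

    def forward(self, modal_embs):                  # list of (batch, dim) tensors
        stacked = torch.stack(modal_embs, dim=1)    # (batch, M, dim)
        weights = torch.softmax(self.gate(torch.cat(modal_embs, dim=-1)), dim=-1)
        return (weights.unsqueeze(-1) * stacked).sum(dim=1)   # (batch, dim)

fusion = AdaptiveFusion(dim=64)
structural, visual, textual = (torch.randn(8, 64) for _ in range(3))
fused = fusion([structural, visual, textual])
print(fused.shape)   # torch.Size([8, 64])
```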
null | null |
2402.15449
| null | null |
http://arxiv.org/pdf/2402.15449v1
|
2024-02-23T17:25:10Z
|
2024-02-23T17:25:10Z
|
Repetition Improves Language Model Embeddings
|
Recent approaches to improving the extraction of text embeddings from autoregressive large language models (LLMs) have largely focused on improvements to data, backbone pretrained language models, or improving task-differentiation via instructions. In this work, we address an architectural limitation of autoregressive models: token embeddings cannot contain information from tokens that appear later in the input. To address this limitation, we propose a simple approach, "echo embeddings," in which we repeat the input twice in context and extract embeddings from the second occurrence. We show that echo embeddings of early tokens can encode information about later tokens, allowing us to maximally leverage high-quality LLMs for embeddings. On the MTEB leaderboard, echo embeddings improve over classical embeddings by over 9% zero-shot and by around 0.7% when fine-tuned. Echo embeddings with a Mistral-7B model achieve state-of-the-art compared to prior open source models that do not leverage synthetic fine-tuning data.
|
[
"['Jacob Mitchell Springer' 'Suhas Kotha' 'Daniel Fried' 'Graham Neubig'\n 'Aditi Raghunathan']"
] |
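A minimal sketch of the repeat-and-pool idea behind echo embeddings as described above: feed the text twice so tokens in the second copy can attend to the full input, then pool the hidden states of the second occurrence. The model choice (gpt2), prompt wording, and mean pooling are illustrative assumptions rather than the paper's exact recipe.

```python
import torch
from transformers import AutoTokenizer, AutoModel

model_name = "gpt2"                      # placeholder decoder-only LM
tok = AutoTokenizer.from_pretrained(model_name)
lm = AutoModel.from_pretrained(model_name)

def echo_embed(text: str) -> torch.Tensor:
    # Repeat the text so tokens in the second copy can attend to the whole first copy.
    prompt = f"Rewrite the sentence: {text}\nThe sentence again: {text}"
    enc = tok(prompt, return_tensors="pt")
    with torch.no_grad():
        hidden = lm(**enc).last_hidden_state[0]            # (seq_len, dim)
    # Mean-pool (approximately) over the tokens of the second occurrence of `text`.
    n_second = len(tok(text, add_special_tokens=False)["input_ids"])
    return hidden[-n_second:].mean(dim=0)                  # (dim,)

vec = echo_embed("Transformers process tokens autoregressively.")
print(vec.shape)
```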
null | null |
2402.15472
| null | null |
http://arxiv.org/pdf/2402.15472v2
|
2024-07-04T07:06:22Z
|
2024-02-23T18:04:54Z
|
FAIR: Filtering of Automatically Induced Rules
|
The availability of large annotated data can be a critical bottleneck in training machine learning algorithms successfully, especially when applied to diverse domains. Weak supervision offers a promising alternative by accelerating the creation of labeled training data using domain-specific rules. However, it requires users to write a diverse set of high-quality rules to assign labels to the unlabeled data. Automatic Rule Induction (ARI) approaches circumvent this problem by automatically creating rules from features on a small labeled set and filtering a final set of rules from them. In the ARI approach, the crucial step is to filter out a high-quality, useful subset of rules from the large set of automatically created rules. In this paper, we propose FAIR (Filtering of Automatically Induced Rules), an algorithm that filters rules from a large number of automatically induced rules using submodular objective functions that account for the collective precision, coverage, and conflicts of the rule set. We experiment with three ARI approaches and five text classification datasets to validate the superior performance of our algorithm with respect to several semi-supervised label aggregation approaches. Further, we show that FAIR achieves statistically significant results in comparison to existing rule-filtering approaches.
|
[
"['Divya Jyoti Bajpai' 'Ayush Maheshwari' 'Manjesh Kumar Hanawal'\n 'Ganesh Ramakrishnan']"
] |
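The rule-filtering step described above can be illustrated with a toy greedy maximization of a coverage-style submodular objective penalized by rule conflicts. The paper's actual objective (collective precision, coverage, and conflicts) and its integration with ARI pipelines are not reproduced here; the data structures below are hypothetical.

```python
from typing import Dict, List, Set

def greedy_filter(rule_cover: Dict[str, Set[int]],
                  rule_conflicts: Dict[str, float],
                  budget: int,
                  penalty: float = 0.5) -> List[str]:
    """Greedily pick rules with the best marginal coverage minus a conflict penalty."""
    selected, covered = [], set()
    for _ in range(budget):
        best_rule, best_gain = None, 0.0
        for rule, cov in rule_cover.items():
            if rule in selected:
                continue
            gain = len(cov - covered) - penalty * rule_conflicts.get(rule, 0.0)
            if gain > best_gain:
                best_rule, best_gain = rule, gain
        if best_rule is None:          # no remaining rule adds positive value
            break
        selected.append(best_rule)
        covered |= rule_cover[best_rule]
    return selected

# Hypothetical rules with the unlabeled examples they cover and a conflict score each.
rules = {"r1": {0, 1, 2}, "r2": {2, 3}, "r3": {4, 5, 6, 7}}
conflicts = {"r1": 1.0, "r2": 0.0, "r3": 2.0}
print(greedy_filter(rules, conflicts, budget=2))   # e.g. ['r3', 'r1']
```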
null | null |
2402.15473
| null | null |
http://arxiv.org/pdf/2402.15473v2
|
2024-04-18T06:38:22Z
|
2024-02-23T18:05:06Z
|
Leveraging Domain Knowledge for Efficient Reward Modelling in RLHF: A
Case-Study in E-Commerce Opinion Summarization
|
Reinforcement Learning from Human Feedback (RLHF) has become a dominating strategy in aligning Language Models (LMs) with human values/goals. The key to the strategy is learning a reward model ($\varphi$), which can reflect the latent reward model of humans. While this strategy has proven effective, the training methodology requires a lot of human preference annotation (usually in the order of tens of thousands) to train $\varphi$. Such a large-scale annotation is justifiable when it's a one-time effort, and the reward model is universally applicable. However, human goals are subjective and depend on the task, requiring task-specific preference annotations, which can be impractical to fulfill. To address this challenge, we propose a novel approach to infuse domain knowledge into $\varphi$, which reduces the amount of preference annotation required ($21\times$), omits Alignment Tax, and provides some interpretability. We validate our approach in E-Commerce Opinion Summarization, with a significant reduction in dataset size (to just $940$ samples) while advancing the SOTA ($\sim 4$ point ROUGE-L improvement, $68\%$ of times preferred by humans over SOTA). Our contributions include a novel Reward Modeling technique and two new datasets: PromptOpinSumm (supervised data for Opinion Summarization) and OpinPref (a gold-standard human preference dataset). The proposed methodology opens up avenues for efficient RLHF, making it more adaptable to applications with varying human values. We release the artifacts (Code: github.com/efficient-rlhf. PromptOpinSumm: hf.co/prompt-opin-summ. OpinPref: hf.co/opin-pref) for usage under MIT License.
|
[
"['Swaroop Nath' 'Tejpalsingh Siledar' 'Sankara Sri Raghava Ravindra Muddu'\n 'Rupasai Rangaraju' 'Harshad Khadilkar' 'Pushpak Bhattacharyya'\n 'Suman Banerjee' 'Amey Patil' 'Sudhanshu Shekhar Singh'\n 'Muthusamy Chelliah' 'Nikesh Garera']"
] |
null | null |
2402.15477
| null | null |
http://arxiv.org/pdf/2402.15477v1
|
2024-02-23T18:11:32Z
|
2024-02-23T18:11:32Z
|
Debiasing Machine Learning Models by Using Weakly Supervised Learning
|
We tackle the problem of bias mitigation of algorithmic decisions in a setting where both the output of the algorithm and the sensitive variable are continuous. Most prior work deals with discrete sensitive variables, meaning that the biases are measured for subgroups of persons defined by a label, leaving out important algorithmic bias cases where the sensitive variable is continuous. Typical examples are unfair decisions made with respect to age or financial status. In our work, we propose a bias mitigation strategy for continuous sensitive variables, based on the notion of endogeneity from the field of econometrics. In addition to solving this new problem, our bias mitigation strategy is a weakly supervised learning method which requires that a small portion of the data can be measured in a fair manner. It is model agnostic, in the sense that it does not make any assumption about the prediction model. It also makes use of a reasonably large amount of input observations and their corresponding predictions. Only a small fraction of the true output predictions needs to be known. This therefore limits the need for expert interventions. Results obtained on synthetic data show the effectiveness of our approach for examples as close as possible to real-life applications in econometrics.
|
[
"['Renan D. B. Brotto' 'Jean-Michel Loubes' 'Laurent Risser'\n 'Jean-Pierre Florens' 'Kenji Nose-Filho' 'João M. T. Romano']"
] |
null | null |
2402.15478
| null | null |
http://arxiv.org/pdf/2402.15478v2
|
2024-06-07T13:06:56Z
|
2024-02-23T18:12:53Z
|
Transformers are Expressive, But Are They Expressive Enough for
Regression?
|
Transformers have become pivotal in Natural Language Processing, demonstrating remarkable success in applications like Machine Translation and Summarization. Given their widespread adoption, several works have attempted to analyze the expressivity of Transformers. Expressivity of a neural network is the class of functions it can approximate. A neural network is fully expressive if it can act as a universal function approximator. We attempt to analyze the same for Transformers. Contrary to existing claims, our findings reveal that Transformers struggle to reliably approximate smooth functions, relying on piecewise constant approximations with sizable intervals. The central question emerges as: "Are Transformers truly Universal Function Approximators?" To address this, we conduct a thorough investigation, providing theoretical insights and supporting evidence through experiments. Theoretically, we prove that Transformer Encoders cannot approximate smooth functions. Experimentally, we complement our theory and show that the full Transformer architecture cannot approximate smooth functions. By shedding light on these challenges, we advocate a refined understanding of Transformers' capabilities.
|
[
"['Swaroop Nath' 'Harshad Khadilkar' 'Pushpak Bhattacharyya']"
] |
null | null |
2402.15487
| null | null |
http://arxiv.org/pdf/2402.15487v1
|
2024-02-23T18:27:17Z
|
2024-02-23T18:27:17Z
|
RoboEXP: Action-Conditioned Scene Graph via Interactive Exploration for
Robotic Manipulation
|
Robots need to explore their surroundings to adapt to and tackle tasks in unknown environments. Prior work has proposed building scene graphs of the environment but typically assumes that the environment is static, omitting regions that require active interactions. This severely limits their ability to handle more complex tasks in household and office environments: before setting up a table, robots must explore drawers and cabinets to locate all utensils and condiments. In this work, we introduce the novel task of interactive scene exploration, wherein robots autonomously explore environments and produce an action-conditioned scene graph (ACSG) that captures the structure of the underlying environment. The ACSG accounts for both low-level information, such as geometry and semantics, and high-level information, such as the action-conditioned relationships between different entities in the scene. To this end, we present the Robotic Exploration (RoboEXP) system, which incorporates the Large Multimodal Model (LMM) and an explicit memory design to enhance our system's capabilities. The robot reasons about what and how to explore an object, accumulating new information through the interaction process and incrementally constructing the ACSG. We apply our system across various real-world settings in a zero-shot manner, demonstrating its effectiveness in exploring and modeling environments it has never seen before. Leveraging the constructed ACSG, we illustrate the effectiveness and efficiency of our RoboEXP system in facilitating a wide range of real-world manipulation tasks involving rigid, articulated objects, nested objects like Matryoshka dolls, and deformable objects like cloth.
|
[
"['Hanxiao Jiang' 'Binghao Huang' 'Ruihai Wu' 'Zhuoran Li' 'Shubham Garg'\n 'Hooshang Nayyeri' 'Shenlong Wang' 'Yunzhu Li']"
] |
null | null |
2402.15490
| null | null |
http://arxiv.org/pdf/2402.15490v2
|
2024-02-28T08:51:35Z
|
2024-02-23T18:28:57Z
|
A Comprehensive Survey of Convolutions in Deep Learning: Applications,
Challenges, and Future Trends
|
In today's digital age, Convolutional Neural Networks (CNNs), a subset of Deep Learning (DL), are widely used for various computer vision tasks such as image classification, object detection, and image segmentation. There are numerous types of CNNs designed to meet specific needs and requirements, including 1D, 2D, and 3D CNNs, as well as dilated, grouped, attention, depthwise convolutions, and NAS, among others. Each type of CNN has its unique structure and characteristics, making it suitable for specific tasks. It's crucial to gain a thorough understanding and perform a comparative analysis of these different CNN types to understand their strengths and weaknesses. Furthermore, studying the performance, limitations, and practical applications of each type of CNN can aid in the development of new and improved architectures in the future. We also dive into the platforms and frameworks that researchers utilize for their research or development from various perspectives. Additionally, we explore the main research fields of CNN like 6D vision, generative models, and meta-learning. This survey paper provides a comprehensive examination and comparison of various CNN architectures, highlighting their architectural differences and emphasizing their respective advantages, disadvantages, applications, challenges, and future trends.
|
[
"['Abolfazl Younesi' 'Mohsen Ansari' 'MohammadAmin Fazli' 'Alireza Ejlali'\n 'Muhammad Shafique' 'Jörg Henkel']"
] |
null | null |
2402.15492
| null | null |
http://arxiv.org/pdf/2402.15492v1
|
2024-02-23T18:31:02Z
|
2024-02-23T18:31:02Z
|
Mechanics-Informed Autoencoder Enables Automated Detection and
Localization of Unforeseen Structural Damage
|
Structural health monitoring (SHM) is vital for ensuring the safety and longevity of structures like buildings and bridges. As the volume and scale of structures and the impact of their failure continue to grow, there is a dire need for SHM techniques that are scalable, inexpensive, operate passively without human intervention, and customized for each mechanical structure without the need for complex baseline models. We present a novel "deploy-and-forget" approach for automated detection and localization of damages in structures. It is based on a synergistic combination of fully passive measurements from inexpensive sensors and a mechanics-informed autoencoder. Once deployed, our solution continuously learns and adapts a bespoke baseline model for each structure, learning from its undamaged state's response characteristics. After learning from just 3 hours of data, it can autonomously detect and localize different types of unforeseen damage. Results from numerical simulations and experiments indicate that incorporating the mechanical characteristics into the variational autoencoder allows for up to 35% earlier detection and localization of damage over a standard autoencoder. Our approach holds substantial promise for a significant reduction in human intervention and inspection costs and enables proactive and preventive maintenance strategies, thus extending the lifespan, reliability, and sustainability of civil infrastructures.
|
[
"['Xuyang Li' 'Hamed Bolandi' 'Mahdi Masmoudi' 'Talal Salem' 'Nizar Lajnef'\n 'Vishnu Naresh Boddeti']"
] |
null | null |
2402.15505
| null | null |
http://arxiv.org/pdf/2402.15505v1
|
2024-02-23T18:56:11Z
|
2024-02-23T18:56:11Z
|
Co-Supervised Learning: Improving Weak-to-Strong Generalization with
Hierarchical Mixture of Experts
|
Steering the behavior of a strong model pre-trained on internet-scale data can be difficult due to the scarcity of competent supervisors. Recent studies reveal that, despite supervisory noises, a strong student model may surpass its weak teacher when fine-tuned on specific objectives. Yet, the effectiveness of such weak-to-strong generalization remains limited, especially in the presence of large capability gaps. In this paper, we propose to address this challenge by harnessing a diverse set of specialized teachers, instead of a single generalist one, that collectively supervises the strong student. Our approach resembles the classical hierarchical mixture of experts, with two components tailored for co-supervision: (i) we progressively alternate student training and teacher assignment, leveraging the growth of the strong student to identify plausible supervisions; (ii) we conservatively enforce teacher-student and local-global consistency, leveraging their dependencies to reject potential annotation noises. We validate the proposed method through visual recognition tasks on the OpenAI weak-to-strong benchmark and additional multi-domain datasets. Our code is available at \url{https://github.com/yuejiangliu/csl}.
|
[
"['Yuejiang Liu' 'Alexandre Alahi']"
] |
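A toy rendition of "several specialized weak teachers collectively supervise one student": each unlabeled example is pseudo-labeled by the most confident teacher, and the student is trained on those labels. The paper's progressive alternation and consistency-based noise rejection are omitted; this only sketches the teacher-assignment idea with scikit-learn stand-ins.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=600, n_features=10, random_state=0)
X_teach, y_teach = X[:300], y[:300]            # pool used to build weak teachers
X_stud, y_stud = X[300:], y[300:]              # "unlabeled" pool for the student

# Two weak "specialist" teachers trained on disjoint slices of the supervised pool.
teachers = [LogisticRegression(max_iter=200).fit(X_teach[i::2], y_teach[i::2])
            for i in range(2)]

# Teacher assignment: label each example with the prediction of its most confident teacher.
conf = np.stack([t.predict_proba(X_stud).max(axis=1) for t in teachers])   # (T, N)
preds = np.stack([t.predict(X_stud) for t in teachers])                    # (T, N)
pseudo = preds[conf.argmax(axis=0), np.arange(len(X_stud))]

student = LogisticRegression(max_iter=200).fit(X_stud, pseudo)
print("student accuracy vs. true labels:", (student.predict(X_stud) == y_stud).mean())
```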
null | null |
2402.15506
| null | null |
http://arxiv.org/pdf/2402.15506v3
|
2024-03-20T06:00:14Z
|
2024-02-23T18:56:26Z
|
AgentOhana: Design Unified Data and Training Pipeline for Effective
Agent Learning
|
Autonomous agents powered by large language models (LLMs) have garnered significant research attention. However, fully harnessing the potential of LLMs for agent-based tasks presents inherent challenges due to the heterogeneous nature of diverse data sources featuring multi-turn trajectories. In this paper, we introduce \textbf{AgentOhana} as a comprehensive solution to address these challenges. \textit{AgentOhana} aggregates agent trajectories from distinct environments, spanning a wide array of scenarios. It meticulously standardizes and unifies these trajectories into a consistent format, streamlining the creation of a generic data loader optimized for agent training. Leveraging the data unification, our training pipeline maintains equilibrium across different data sources and preserves independent randomness across devices during dataset partitioning and model training. Additionally, we present \textbf{xLAM-v0.1}, a large action model tailored for AI agents, which demonstrates exceptional performance across various benchmarks. Begin the exploration at \url{https://github.com/SalesforceAIResearch/xLAM}.
|
[
"['Jianguo Zhang' 'Tian Lan' 'Rithesh Murthy' 'Zhiwei Liu' 'Weiran Yao'\n 'Juntao Tan' 'Thai Hoang' 'Liangwei Yang' 'Yihao Feng' 'Zuxin Liu'\n 'Tulika Awalgaonkar' 'Juan Carlos Niebles' 'Silvio Savarese'\n 'Shelby Heinecke' 'Huan Wang' 'Caiming Xiong']"
] |
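Standardizing heterogeneous multi-turn trajectories into one schema, as described above, can be sketched with a small set of dataclasses plus one converter per source environment. The field names and the `from_webshop` converter are assumptions for illustration; the actual AgentOhana format may differ.

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List

@dataclass
class Turn:
    role: str            # e.g. "user", "assistant", "tool"
    content: str

@dataclass
class UnifiedTrajectory:
    env: str
    task: str
    turns: List[Turn] = field(default_factory=list)
    meta: Dict[str, Any] = field(default_factory=dict)

def from_webshop(raw: Dict[str, Any]) -> UnifiedTrajectory:
    # One converter per source environment maps its native format into the shared schema.
    turns = [Turn(role="assistant", content=step) for step in raw["steps"]]
    return UnifiedTrajectory(env="webshop", task=raw["goal"], turns=turns)

traj = from_webshop({"goal": "buy a red mug under $15",
                     "steps": ["search[red mug]", "click[item_3]", "click[buy now]"]})
print(traj.env, traj.task, len(traj.turns))
```

A generic data loader can then iterate over `UnifiedTrajectory` records regardless of which environment produced them.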
null | null |
2402.15513
| null | null |
http://arxiv.org/abs/2402.15513v1
|
2024-01-23T16:49:54Z
|
2024-01-23T16:49:54Z
|
Investigating the Generalizability of Physiological Characteristics of
Anxiety
|
Recent works have demonstrated the effectiveness of machine learning (ML) techniques in detecting anxiety and stress using physiological signals, but it is unclear whether ML models are learning physiological features specific to stress. To address this ambiguity, we evaluated the generalizability of physiological features that have been shown to be correlated with anxiety and stress to high-arousal emotions. Specifically, we examine features extracted from electrocardiogram (ECG) and electrodermal (EDA) signals from the following three datasets: Anxiety Phases Dataset (APD), Wearable Stress and Affect Detection (WESAD), and the Continuously Annotated Signals of Emotion (CASE) dataset. We aim to understand whether these features are specific to anxiety or general to other high-arousal emotions through a statistical regression analysis, in addition to a within-corpus, cross-corpus, and leave-one-corpus-out cross-validation across instances of stress and arousal. We used the following classifiers: Support Vector Machines, LightGBM, Random Forest, XGBoost, and an ensemble of the aforementioned models. We found that models trained on an arousal dataset perform relatively well on a previously unseen stress dataset, and vice versa. Our experimental results suggest that the evaluated models may be identifying emotional arousal instead of stress. This work is the first cross-corpus evaluation across stress and arousal from ECG and EDA signals, contributing new findings about the generalizability of stress detection.
|
[
"['Emily Zhou' 'Mohammad Soleymani' 'Maja J. Matarić']"
] |
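The leave-one-corpus-out protocol referenced above follows a standard pattern: hold out one corpus, train on the rest, and evaluate transfer. The sketch below uses synthetic placeholder features and labels in place of real ECG/EDA features from APD, WESAD, and CASE.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
# Placeholder corpora: each maps a name to (features, binary labels).
corpora = {name: (rng.normal(size=(120, 8)), rng.integers(0, 2, size=120))
           for name in ["APD", "WESAD", "CASE"]}

for held_out, (X_te, y_te) in corpora.items():
    # Train on every corpus except the held-out one, then test cross-corpus transfer.
    X_tr = np.vstack([X for n, (X, _) in corpora.items() if n != held_out])
    y_tr = np.concatenate([y for n, (_, y) in corpora.items() if n != held_out])
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
    print(f"held out {held_out}: F1 = {f1_score(y_te, clf.predict(X_te)):.3f}")
```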
null | null |
2402.15516
| null | null |
http://arxiv.org/pdf/2402.15516v1
|
2024-02-09T12:12:52Z
|
2024-02-09T12:12:52Z
|
GLA-Grad: A Griffin-Lim Extended Waveform Generation Diffusion Model
|
Diffusion models are receiving a growing interest for a variety of signal generation tasks such as speech or music synthesis. WaveGrad, for example, is a successful diffusion model that conditionally uses the mel spectrogram to guide a diffusion process for the generation of high-fidelity audio. However, such models face important challenges concerning the noise diffusion process for training and inference, and they have difficulty generating high-quality speech for speakers that were not seen during training. With the aim of minimizing the conditioning error and increasing the efficiency of the noise diffusion process, we propose in this paper a new scheme called GLA-Grad, which consists in introducing a phase recovery algorithm such as the Griffin-Lim algorithm (GLA) at each step of the regular diffusion process. Furthermore, it can be directly applied to an already-trained waveform generation model, without additional training or fine-tuning. We show that our algorithm outperforms state-of-the-art diffusion models for speech generation, especially when generating speech for a previously unseen target speaker.
|
[
"['Haocheng Liu' 'Teysir Baoueb' 'Mathieu Fontaine' 'Jonathan Le Roux'\n 'Gael Richard']"
] |
null | null |
2402.15521
| null | null |
http://arxiv.org/pdf/2402.15521v1
|
2024-02-15T18:13:41Z
|
2024-02-15T18:13:41Z
|
HKD-SHO: A hybrid smart home system based on knowledge-based and
data-driven services
|
A smart home is realized by setting up various services. Several methods have been proposed to create smart home services, which can be divided into knowledge-based and data-driven approaches. However, knowledge-based approaches usually require manual input from the inhabitant, which can be complicated if the physical phenomena of the concerned environment states are complex and the inhabitant does not know how to adjust the related actuators to achieve the target values of the states monitored by services. Moreover, the machine learning-based data-driven approaches that we are interested in behave like black boxes and cannot show the inhabitant in which situations certain services proposed certain actuator states. To solve these problems, we propose a hybrid system called HKD-SHO (Hybrid Knowledge-based and Data-driven services based Smart HOme system), where knowledge-based and machine learning-based data-driven services are profitably integrated. The principal advantage is that it inherits the explicability of knowledge-based services and the dynamism of data-driven services. We compare HKD-SHO with several systems for creating dynamic smart home services, and the results show that HKD-SHO performs better.
|
[
"['Mingming Qiu' 'Elie Najm' 'Rémi Sharrock' 'Bruno Traverson']"
] |
null | null |
2402.15524
| null | null |
http://arxiv.org/pdf/2402.15524v1
|
2024-02-19T20:03:45Z
|
2024-02-19T20:03:45Z
|
Graph Pruning for Enumeration of Minimal Unsatisfiable Subsets
|
Finding Minimal Unsatisfiable Subsets (MUSes) of binary constraints is a common problem in infeasibility analysis of over-constrained systems. However, because of the exponential search space of the problem, enumerating MUSes is extremely time-consuming in real applications. In this work, we propose to prune formulas using a learned model to speed up MUS enumeration. We represent formulas as graphs and then develop a graph-based learning model to predict which part of the formula should be pruned. Importantly, our algorithm does not require data labeling, as it only needs to check the satisfiability of pruned formulas. It does not even require training data from the target application, because it extrapolates to data with different distributions. In our experiments, we combine our algorithm with existing MUS enumerators and validate its effectiveness in multiple benchmarks, including a set of real-world problems outside our training distribution. The experimental results show that our method significantly accelerates MUS enumeration on average on these benchmark problems.
|
[
"['Panagiotis Lymperopoulos' 'Liping Liu']"
] |
null | null |
2402.15526
| null | null |
http://arxiv.org/pdf/2402.15526v1
|
2024-02-20T08:03:05Z
|
2024-02-20T08:03:05Z
|
Chain-of-Specificity: An Iteratively Refining Method for Eliciting
Knowledge from Large Language Models
|
Large Language Models (LLMs) exhibit remarkable generative capabilities, enabling the generation of valuable information. Despite these advancements, previous research found that LLMs sometimes struggle with adhering to specific constraints (e.g., in a specific place or at a specific time), at times even overlooking them, which leads to responses that are either too generic or not fully satisfactory. Existing approaches attempted to address this issue by decomposing or rewriting input instructions, yet they fall short in adequately emphasizing specific constraints and in unlocking the underlying knowledge (e.g., programming within the context of software development). In response, this paper proposes a simple yet effective method named Chain-of-Specificity (CoS). Specifically, CoS iteratively emphasizes the specific constraints in the input instructions, unlocks knowledge within LLMs, and refines responses. Experiments conducted on publicly available and self-built complex datasets demonstrate that CoS outperforms existing methods in enhancing generated content, especially in terms of specificity. Besides, as the number of specific constraints increases, other baselines falter, while CoS still performs well. Moreover, we show that distilling responses generated by CoS effectively enhances the ability of smaller models to follow the constrained instructions. Resources of this paper will be released for further research.
|
[
"['Kaiwen Wei' 'Jingyuan Zhang' 'Hongzhi Zhang' 'Fuzheng Zhang' 'Di Zhang'\n 'Li Jin' 'Yue Yu']"
] |
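The iterative "emphasize the constraints, then refine" loop can be sketched generically as below. The `generate` callable stands in for any LLM call, and the prompt templates are illustrative assumptions, not the paper's.

```python
from typing import Callable, List

def chain_of_specificity(instruction: str, constraints: List[str],
                         generate: Callable[[str], str], rounds: int = 2) -> str:
    response = generate(instruction)
    for _ in range(rounds):
        prompt = (f"Instruction: {instruction}\n"
                  f"Pay special attention to these constraints: {'; '.join(constraints)}\n"
                  f"Previous answer: {response}\n"
                  f"Revise the answer so that every constraint is satisfied.")
        response = generate(prompt)      # each round re-emphasizes the constraints
    return response

# Stub generator for demonstration; replace with a real LLM call.
print(chain_of_specificity("Suggest a team-building activity",
                           ["indoors", "under 30 minutes"],
                           generate=lambda p: f"[model output for: {p[:40]}...]"))
```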
null | null |
2402.15534
| null | null |
http://arxiv.org/pdf/2402.15534v1
|
2024-02-22T20:51:37Z
|
2024-02-22T20:51:37Z
|
DiCoM -- Diverse Concept Modeling towards Enhancing Generalizability in
Chest X-Ray Studies
|
Chest X-Ray (CXR) is a widely used clinical imaging modality and has a pivotal role in the diagnosis and prognosis of various lung and heart related conditions. Conventional automated clinical diagnostic tool design strategies relying on radiology reads and supervised learning entail the cumbersome requirement of high-quality annotated training data. To address this challenge, self-supervised pre-training has proven to outperform supervised pre-training in numerous downstream vision tasks, representing a significant breakthrough in the field. However, medical imaging pre-training significantly differs from pre-training with natural images (e.g., ImageNet) due to the unique attributes of clinical images. In this context, we introduce Diverse Concept Modeling (DiCoM), a novel self-supervised training paradigm that leverages a student-teacher framework for learning diverse concepts and hence an effective representation of the CXR data. Rather than merely modeling a single primary label within an image, DiCoM effectively harnesses the information from all the concepts inherent in the CXR. The pre-trained model is subsequently fine-tuned to address diverse domain-specific tasks. Our proposed paradigm consistently demonstrates robust performance across multiple downstream tasks on multiple datasets, highlighting the success and generalizability of the pre-training strategy. To establish the efficacy of our methods, we analyze both the power of the learned representations and the speed of convergence (SoC) of our models. For diverse data and tasks, DiCoM in most cases achieves better results compared to other state-of-the-art pre-training strategies. This, when combined with the higher SoC and generalization capabilities, positions DiCoM to be established as a foundation model for CXRs, a widely used imaging modality.
|
[
"['Abhieet Parida' 'Daniel Capellan-Martin' 'Sara Atito' 'Muhammad Awais'\n 'Maria J. Ledesma-Carbayo' 'Marius G. Linguraru' 'Syed Muhammad Anwar']"
] |
null | null |
2402.15537
| null | null |
http://arxiv.org/pdf/2402.15537v2
|
2024-06-19T14:49:09Z
|
2024-02-23T04:52:08Z
|
Evaluating the Performance of ChatGPT for Spam Email Detection
|
Email continues to be a pivotal and extensively utilized communication medium within professional and commercial domains. Nonetheless, the prevalence of spam emails poses a significant challenge for users, disrupting their daily routines and diminishing productivity. Consequently, accurately identifying and filtering spam based on content has become crucial for cybersecurity. Recent advancements in natural language processing, particularly with large language models like ChatGPT, have shown remarkable performance in tasks such as question answering and text generation. However, ChatGPT's potential in spam identification remains underexplored. To fill this gap, this study attempts to evaluate ChatGPT's capabilities for spam identification on both English and Chinese email datasets. We employ ChatGPT for spam email detection using in-context learning, which requires a prompt instruction and a few demonstrations. We also investigate how the number of demonstrations in the prompt affects the performance of ChatGPT. For comparison, we also implement five popular benchmark methods, including naive Bayes, support vector machines (SVM), logistic regression (LR), feedforward dense neural networks (DNN), and BERT classifiers. Through extensive experiments, we find that ChatGPT's performance is significantly worse than that of deep supervised learning methods on the large English dataset, while it presents superior performance on the low-resourced Chinese dataset.
|
[
"['Shijing Si' 'Yuwei Wu' 'Le Tang' 'Yugui Zhang' 'Jedrek Wosik']"
] |
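In-context learning for spam detection, as described above, amounts to packing a short instruction and a few labeled demonstrations into the prompt. The sketch below uses the OpenAI Python client with an assumed model name and made-up demonstrations; it is not the paper's evaluation protocol.

```python
from openai import OpenAI

client = OpenAI()   # expects OPENAI_API_KEY in the environment

# A couple of made-up demonstrations for in-context learning.
demos = [("Congratulations! You won a free cruise, click here now!", "spam"),
         ("Hi team, the quarterly report is attached for review.", "ham")]

def classify_email(body: str, model: str = "gpt-3.5-turbo") -> str:
    prompt = "Classify each email as 'spam' or 'ham'.\n\n"
    for text, label in demos:
        prompt += f"Email: {text}\nLabel: {label}\n\n"
    prompt += f"Email: {body}\nLabel:"
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        max_tokens=2,
        temperature=0,
    )
    return resp.choices[0].message.content.strip().lower()

print(classify_email("Limited offer: claim your prize by replying with your bank details."))
```

Varying the number of entries in `demos` reproduces the kind of demonstration-count study mentioned in the abstract.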
null | null |
2402.15542
| null | null |
http://arxiv.org/abs/2402.15542v1
|
2024-02-23T10:36:22Z
|
2024-02-23T10:36:22Z
|
Streaming IoT Data and the Quantum Edge: A Classic/Quantum Machine
Learning Use Case
|
With the advent of the Post-Moore era, the scientific community is faced with the challenge of addressing the demands of current data-intensive machine learning applications, which are the cornerstone of urgent analytics in distributed computing. Quantum machine learning could be a solution for the increasing demand of urgent analytics, providing potential theoretical speedups and increased space efficiency. However, challenges such as (1) the encoding of data from the classical to the quantum domain, (2) hyperparameter tuning, and (3) the integration of quantum hardware into a distributed computing continuum limit the adoption of quantum machine learning for urgent analytics. In this work, we investigate the use of Edge computing for the integration of quantum machine learning into a distributed computing continuum, identifying the main challenges and possible solutions. Furthermore, exploring the data encoding and hyperparameter tuning challenges, we present preliminary results for quantum machine learning analytics on an IoT scenario.
|
[
"['Sabrina Herbst' 'Vincenzo De Maio' 'Ivona Brandic']"
] |
null | null |
2402.15546
| null | null |
http://arxiv.org/pdf/2402.15546v1
|
2024-02-23T13:01:13Z
|
2024-02-23T13:01:13Z
|
HiMAP: Learning Heuristics-Informed Policies for Large-Scale Multi-Agent
Pathfinding
|
Large-scale multi-agent pathfinding (MAPF) presents significant challenges in several areas. As systems grow in complexity with a multitude of autonomous agents operating simultaneously, efficient and collision-free coordination becomes paramount. Traditional algorithms often fall short in scalability, especially in intricate scenarios. Reinforcement Learning (RL) has shown potential to address the intricacies of MAPF; however, it has also been shown to struggle with scalability, demanding intricate implementation, lengthy training, and often exhibiting unstable convergence, limiting its practical application. In this paper, we introduce Heuristics-Informed Multi-Agent Pathfinding (HiMAP), a novel scalable approach that employs imitation learning with heuristic guidance in a decentralized manner. We train on small-scale instances using a heuristic policy as a teacher that maps each single agent's observation to an action probability distribution. During pathfinding, we adopt several inference techniques to improve performance. With a simple training scheme and implementation, HiMAP demonstrates competitive results in terms of success rate and scalability in the field of imitation-learning-only MAPF, showing the potential of imitation-learning-only MAPF equipped with inference techniques.
|
[
"['Huijie Tang' 'Federico Berto' 'Zihan Ma' 'Chuanbo Hua' 'Kyuree Ahn'\n 'Jinkyoo Park']"
] |