arxiv_id (string, length 10) | published (string, length 20) | titles (string, length 9–243) | authors (list, length 1–389) | abstract (string, length 96–3.09k) | categories (list, length 1–10) | selected (bool, 2 classes)
---|---|---|---|---|---|---
2305.11980
|
2023-05-19T19:59:52Z
|
AutoCoreset: An Automatic Practical Coreset Construction Framework
|
[
"Alaa Maalouf",
"Murad Tukan",
"Vladimir Braverman",
"Daniela Rus"
] |
A coreset is a tiny weighted subset of an input set that closely approximates
the loss function with respect to a certain set of queries. Coresets have become
prevalent in machine learning, as they have been shown to be advantageous for many
applications. Although coreset research is an active area, coresets are
unfortunately constructed in a problem-dependent manner, where for each problem
a new coreset construction algorithm is usually suggested, a process that may
take time or may be hard for new researchers in the field. Even the generic
frameworks require additional (problem-dependent) computations or proofs to be
done by the user. Moreover, many problems do not have (provable) small coresets,
limiting their applicability. To this end, we suggest an automatic practical
framework for constructing coresets, which requires (only) the input data and
the desired cost function from the user, without the need for any other
task-related computation to be done by the user. To do so, we reduce the
problem of approximating a loss function to an instance of vector summation
approximation, where the vectors we aim to sum are loss vectors of a specific
subset of the queries, such that we aim to approximate the image of the
function on this subset. We show that while this set is limited, the coreset is
quite general. An extensive experimental study on various machine learning
applications is also conducted. Finally, we provide a ``plug and play'' style
implementation, proposing a user-friendly system that can be easily used to
apply coresets for many problems. Full open source code can be found at
\href{https://github.com/alaamaalouf/AutoCoreset}{\text{https://github.com/alaamaalouf/AutoCoreset}}.
We believe that these contributions enable future research and easier use and
applications of coresets.
|
[
"cs.LG"
] | false |
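The vector-summation reduction described in the AutoCoreset abstract lends itself to a short illustration. The sketch below is a generic importance-sampling construction under assumed simplifications (a finite query set, row norms as a crude sensitivity proxy, and the helper name `coreset_by_vector_sum` are all ours); it is not the authors' algorithm:

```python
# A minimal, hypothetical sketch: approximate the per-query column sums of a
# loss matrix by importance-sampling rows. The sensitivity proxy (row norms)
# is an assumption for illustration only.
import numpy as np

def coreset_by_vector_sum(X, queries, loss, m, rng=np.random.default_rng(0)):
    """Return indices and weights so the weighted loss-vector sum
    approximates the full sum over X for every query in `queries`."""
    # Loss matrix: V[i, j] = loss of point i under query j.
    V = np.array([[loss(x, q) for q in queries] for x in X])
    # Importance-sampling distribution from row norms.
    p = np.linalg.norm(V, axis=1)
    p = p / p.sum()
    idx = rng.choice(len(X), size=m, replace=True, p=p)
    w = 1.0 / (m * p[idx])  # unbiased: E[sum_i w_i V_{idx_i}] = V.sum(axis=0)
    return idx, w

# Toy check with a squared-distance loss (1-mean cost).
X = np.random.default_rng(1).normal(size=(1000, 2))
queries = [np.zeros(2), np.ones(2)]
loss = lambda x, q: float(np.sum((x - q) ** 2))
idx, w = coreset_by_vector_sum(X, queries, loss, m=100)
full = sum(loss(x, queries[0]) for x in X)
approx = sum(wi * loss(X[i], queries[0]) for i, wi in zip(idx, w))
print(full, approx)  # the two sums should be close
```

The weighted sampled rows give an unbiased estimate of each query's total loss, which is the sense in which a coreset "closely approximates the loss function."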
2305.11358
|
2023-05-19T00:31:26Z
|
Understanding the World to Solve Social Dilemmas Using Multi-Agent
Reinforcement Learning
|
[
"Manuel Rios",
"Nicanor Quijano",
"Luis Felipe Giraldo"
] |
Social dilemmas are situations where groups of individuals can benefit from
mutual cooperation but conflicting interests impede them from doing so. This
type of situation resembles many of humanity's most critical challenges, and
discovering mechanisms that facilitate the emergence of cooperative behaviors
is still an open problem. In this paper, we study the behavior of
self-interested rational agents that learn world models in a multi-agent
reinforcement learning (RL) setting and that coexist in environments where
social dilemmas can arise. Our simulation results show that groups of agents
endowed with world models outperform all the other tested ones when dealing
with scenarios where social dilemmas can arise. We exploit the world model
architecture to qualitatively assess the learnt dynamics and confirm that each
agent's world model is capable of encoding information about the behavior of the
changing environment and the other agents' actions. This is the first work that
shows that world models facilitate the emergence of complex coordinated
behaviors that enable interacting agents to ``understand'' both environmental
and social dynamics.
|
[
"cs.LG",
"cs.MA"
] | false |
2305.11379
|
2023-05-19T01:53:10Z
|
Generalized Precision Matrix for Scalable Estimation of Nonparametric
Markov Networks
|
[
"Yujia Zheng",
"Ignavier Ng",
"Yewen Fan",
"Kun Zhang"
] |
A Markov network characterizes the conditional independence structure, or
Markov property, among a set of random variables. Existing work focuses on
specific families of distributions (e.g., exponential families) and/or certain
structures of graphs, and most of them can only handle variables of a single
data type (continuous or discrete). In this work, we characterize the
conditional independence structure in general distributions for all data types
(i.e., continuous, discrete, and mixed-type) with a Generalized Precision
Matrix (GPM). Moreover, we allow general functional relations among
variables, thus giving rise to a Markov network structure learning algorithm in
one of the most general settings. To deal with the computational challenge of
the problem, especially for large graphs, we unify all cases under the same
umbrella of a regularized score matching framework. We validate the theoretical
results and demonstrate the scalability empirically in various settings.
|
[
"cs.LG",
"stat.ML"
] | false |
2305.11386
|
2023-05-19T02:03:49Z
|
Improving Fairness in AI Models on Electronic Health Records: The Case
for Federated Learning Methods
|
[
"Raphael Poulain",
"Mirza Farhan Bin Tarek",
"Rahmatollah Beheshti"
] |
Developing AI tools that preserve fairness is of critical importance,
specifically in high-stakes applications such as those in healthcare. However,
health AI models' overall prediction performance is often prioritized over the
possible biases such models could have. In this study, we show one possible
approach to mitigate bias concerns by having healthcare institutions
collaborate through a federated learning paradigm (FL; which is a popular
choice in healthcare settings). While FL methods with an emphasis on fairness
have been previously proposed, their underlying model and local implementation
techniques, as well as their possible applications to the healthcare domain
remain widely underinvestigated. Therefore, we propose a comprehensive FL
approach with adversarial debiasing and a fair aggregation method, suitable for
various fairness metrics, in the healthcare domain where electronic health
records are used. Not only does our approach explicitly mitigate bias as part of
the optimization process, but the FL-based paradigm also implicitly helps
with addressing data imbalance and increasing the data size, offering a
practical solution for healthcare applications. We empirically demonstrate our
method's superior performance on multiple experiments simulating large-scale
real-world scenarios and compare it to several baselines. Our method has
achieved promising fairness performance with the lowest impact on overall
discrimination performance (accuracy).
|
[
"cs.LG",
"cs.CY"
] | false |
2305.11387
|
2023-05-19T02:07:22Z
|
Justices for Information Bottleneck Theory
|
[
"Faxian Cao",
"Yongqiang Cheng",
"Adil Mehmood Khan",
"Zhijing Yang"
] |
This study comes as a timely response to mounting criticism of the
information bottleneck (IB) theory, injecting fresh perspectives to rectify
misconceptions and reaffirm its validity. Firstly, we introduce an auxiliary
function to reinterpret the maximal coding rate reduction method as a special
yet local optimal case of IB theory. Through this auxiliary function, we
clarify the paradox of decreasing mutual information during the application of
ReLU activation in deep learning (DL) networks. Secondly, we challenge the
doubts about IB theory's applicability by demonstrating its capacity to explain
the absence of a compression phase with linear activation functions in hidden
layers, when viewed through the lens of the auxiliary function. Lastly, by
taking a novel theoretical stance, we provide a new way to interpret the inner
organizations of DL networks by using IB theory, aligning them with recent
experimental evidence. Thus, this paper serves as an act of justice for IB
theory, potentially reinvigorating its standing and application in DL and other
fields such as communications and biomedical research.
|
[
"cs.LG",
"cs.AI"
] | false |
2305.11390
|
2023-05-19T02:35:39Z
|
ALT: An Automatic System for Long Tail Scenario Modeling
|
[
"Ya-Lin Zhang",
"Jun Zhou",
"Yankun Ren",
"Yue Zhang",
"Xinxing Yang",
"Meng Li",
"Qitao Shi",
"Longfei Li"
] |
In this paper, we consider the problem of long tail scenario modeling with
budget limitation, i.e., insufficient human resources for the model training
stage and limited time and computing resources for the model inference stage.
This problem is widely encountered in various applications, yet it has received
insufficient attention so far. We present an automatic system named ALT to deal
with this problem. Several efforts are made to improve the algorithms used in
our system, such as employing various automated machine learning (AutoML)
techniques, adopting the meta-learning philosophy, and proposing an essential
budget-limited neural architecture search method, etc. Moreover, to build the
system, many optimizations are performed from a systems perspective, and
essential modules are put in place, making the system more practical and efficient. We
perform abundant experiments to validate the effectiveness of our system and
demonstrate the usefulness of the critical modules in our system. Moreover,
online results are provided, which fully verify the efficacy of our system.
|
[
"cs.LG",
"cs.AI"
] | false |
2305.11437
|
2023-05-19T05:39:40Z
|
PS-FedGAN: An Efficient Federated Learning Framework Based on Partially
Shared Generative Adversarial Networks For Data Privacy
|
[
"Achintha Wijesinghe",
"Songyang Zhang",
"Zhi Ding"
] |
Federated Learning (FL) has emerged as an effective learning paradigm for
distributed computation owing to its strong potential in capturing underlying
data statistics while preserving data privacy. However, in cases of practical
data heterogeneity among FL clients, existing FL frameworks still fall short
in capturing the overall feature properties of local client data
that exhibit disparate distributions. In response, generative adversarial
networks (GANs) have recently been exploited in FL to address data
heterogeneity since GANs can be integrated for data regeneration without
exposing original raw data. Despite some successes, existing GAN-related FL
frameworks often incur heavy communication cost and also elicit other privacy
concerns, which limit their applications in real scenarios. To this end, this
work proposes a novel FL framework that requires only partial GAN model
sharing. Named as PS-FedGAN, this new framework enhances the GAN releasing and
training mechanism to address heterogeneous data distributions across clients
and to strengthen privacy preservation at reduced communication cost,
especially over wireless networks. Our analysis demonstrates the convergence
and privacy benefits of the proposed PS-FedGAN framework. Through experimental
results based on several well-known benchmark datasets, our proposed PS-FedGAN
shows great promise to tackle FL under non-IID client data distributions, while
securing data privacy and lowering communication overhead.
|
[
"cs.LG",
"cs.AI"
] | false |
2305.11489
|
2023-05-19T07:39:24Z
|
Incomplete Multi-view Clustering via Diffusion Completion
|
[
"Sifan Fang"
] |
Incomplete multi-view clustering is a challenging and non-trivial task that
aims to provide effective data analysis for large amounts of unlabeled
real-world data. All incomplete multi-view clustering methods need to address
the problem of how to reduce the impact of missing views. To address this
issue, we propose diffusion completion, integrated into an incomplete
multi-view clustering framework, to recover the missing views. Based on the
observable view information, the
diffusion model is used to recover the missing views, and then the consistency
information of the multi-view data is learned by contrastive learning to
improve the performance of multi-view clustering. To the best of our knowledge,
this may be the first work to incorporate diffusion models into an incomplete
multi-view clustering framework. Experimental results show that the proposed
method performs well in recovering the missing views while achieving superior
clustering performance compared to state-of-the-art methods.
|
[
"cs.LG",
"cs.AI"
] | false |
2305.11567
|
2023-05-19T10:11:21Z
|
TSGM: A Flexible Framework for Generative Modeling of Synthetic Time
Series
|
[
"Alexander Nikitin",
"Letizia Iannucci",
"Samuel Kaski"
] |
Temporally indexed data are essential in a wide range of fields and of
interest to machine learning researchers. Time series data, however, are often
scarce or highly sensitive, which precludes the sharing of data between
researchers and industrial organizations and the application of existing and
new data-intensive ML methods. A possible solution to this bottleneck is to
generate synthetic data. In this work, we introduce Time Series Generative
Modeling (TSGM), an open-source framework for the generative modeling of
synthetic time series. TSGM includes a broad repertoire of machine learning
methods: generative models, probabilistic, and simulator-based approaches. The
framework enables users to evaluate the quality of the produced data from
different angles: similarity, downstream effectiveness, predictive consistency,
diversity, and privacy. The framework is extensible, which allows researchers
to rapidly implement their own methods and compare them in a shareable
environment. TSGM was tested on open datasets and in production and proved to
be beneficial in both cases. In addition to the library, the project provides
command-line interfaces for synthetic data generation, which lowers the entry
threshold for those without a programming background.
|
[
"cs.LG",
"stat.ML"
] | false |
2305.11575
|
2023-05-19T10:22:55Z
|
The Deep Promotion Time Cure Model
|
[
"Victor Medina-Olivares",
"Stefan Lessmann",
"Nadja Klein"
] |
We propose a novel method for predicting time-to-event in the presence of
cure fractions based on flexible survival models integrated into a deep neural
network framework. Our approach allows for non-linear relationships and
high-dimensional interactions between covariates and survival and is suitable
for large-scale applications. Furthermore, we allow the method to incorporate
an identified predictor formed of an additive decomposition of interpretable
linear and non-linear effects and add an orthogonalization layer to capture
potential higher dimensional interactions. We demonstrate the usefulness and
computational efficiency of our method via simulations and apply it to a large
portfolio of US mortgage loans. Here, we find not only a better predictive
performance of our framework but also a more realistic picture of covariate
effects.
|
[
"stat.ML",
"cs.LG"
] | false |
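For context, the promotion time cure model the title refers to has a standard form; per the abstract, the paper's contribution is parameterizing it flexibly with a deep network, and that parameterization is not reproduced here:

```latex
% Standard promotion time cure model: F is a proper cdf of promotion times
% and \theta(x) \ge 0 is a covariate-dependent rate (learned via a deep
% network in the paper; architecture details are not shown here).
S(t \mid x) = \exp\{-\theta(x)\,F(t)\}, \qquad
P(\text{cured} \mid x) = \lim_{t \to \infty} S(t \mid x) = \exp\{-\theta(x)\}.
```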
2305.11602
|
2023-05-19T11:29:13Z
|
Latent Imitator: Generating Natural Individual Discriminatory Instances
for Black-Box Fairness Testing
|
[
"Yisong Xiao",
"Aishan Liu",
"Tianlin Li",
"Xianglong Liu"
] |
Machine learning (ML) systems have achieved remarkable performance across a
wide range of applications. However, they frequently exhibit unfair behaviors in
sensitive application domains, raising severe fairness concerns. To evaluate
and test fairness, engineers often generate individual discriminatory instances
to expose unfair behaviors before model deployment. However, existing baselines
ignore the naturalness of generation and produce instances that deviate from
the real data distribution, which may fail to reveal the actual model fairness
since these unnatural discriminatory instances are unlikely to appear in
practice. To address the problem, this paper proposes a framework named Latent
Imitator (LIMI) to generate more natural individual discriminatory instances
with the help of a generative adversarial network (GAN), where we imitate the
decision boundary of the target model in the semantic latent space of the GAN
and further sample latent instances on it. Specifically, we first derive a
surrogate linear boundary to coarsely approximate the decision boundary of the
target model, which reflects the nature of the original data distribution.
Subsequently, to obtain more natural instances, we manipulate random latent
vectors to the surrogate boundary with a one-step movement, and further conduct
vector calculation to probe two potential discriminatory candidates that may
lie closer to the real decision boundary. Extensive experiments on
various datasets demonstrate that our LIMI outperforms other baselines largely
in effectiveness ($\times$9.42 instances), efficiency ($\times$8.71 speeds),
and naturalness (+19.65%) on average. In addition, we empirically demonstrate
that retraining on test samples generated by our approach can lead to
improvements in both individual fairness (45.67% on $IF_r$ and 32.81% on
$IF_o$) and group fairness (9.86% on $SPD$ and 28.38% on $AOD$).
|
[
"cs.SE",
"cs.LG"
] | false |
2305.11633
|
2023-05-19T12:20:37Z
|
Goal-Oriented Communications in Federated Learning via Feedback on
Risk-Averse Participation
|
[
"Shashi Raj Pandey",
"Van Phuc Bui",
"Petar Popovski"
] |
We treat the problem of client selection in a Federated Learning (FL) setup,
where the learning objective and the local incentives of the participants are
used to formulate a goal-oriented communication problem. Specifically, we
incorporate the risk-averse nature of participants and obtain a
communication-efficient on-device performance, while relying on feedback from
the Parameter Server (\texttt{PS}). A client has to decide its transmission
plan, i.e., when not to participate in FL. This decision is based on its intrinsic
incentive, which is the value of the trained global model upon participation by
this client. Poor updates not only degrade the performance of the global model
while adding communication cost, but also propagate the loss in performance to
other participating devices. We cast the relevance of local updates as
\emph{semantic information} for developing local transmission strategies, i.e.,
making a decision on when to ``not transmit''. The devices use feedback about
the state of the PS and evaluate their contributions in training the learning
model in each aggregation period, which eventually lowers the number of
occupied connections. Simulation results validate the efficacy of our proposed
approach, with up to a $1.4\times$ gain in communication link utilization as
compared with the baselines.
|
[
"cs.DC",
"cs.LG"
] | false |
2305.11638
|
2023-05-19T12:37:08Z
|
A Path to Holistic Privacy in Stream Processing Systems
|
[
"Mikhail Fomichev"
] |
The massive streams of Internet of Things (IoT) data require a timely
analysis to retain data usefulness. Stream processing systems (SPSs) enable
this task, deriving knowledge from the IoT data in real-time. Such real-time
analytics benefits many applications but can also be used to violate user
privacy, as the IoT data collected from users or their vicinity is inherently
sensitive. In this paper, we present our systematic look into privacy issues
arising from the intersection of SPSs and IoT, identifying key research
challenges towards achieving holistic privacy protection in SPSs and proposing
corresponding solutions.
|
[
"cs.CR",
"cs.LG"
] | false |
2305.11681
|
2023-05-19T13:57:04Z
|
Probabilistic Lexicase Selection
|
[
"Li Ding",
"Edward Pantridge",
"Lee Spector"
] |
Lexicase selection is a widely used parent selection algorithm in genetic
programming, known for its success in various task domains such as program
synthesis, symbolic regression, and machine learning. Due to its non-parametric
and recursive nature, calculating the probability of each individual being
selected by lexicase selection has been proven to be an NP-hard problem, which
discourages deeper theoretical understanding and practical improvements to the
algorithm. In this work, we introduce probabilistic lexicase selection
(plexicase selection), a novel parent selection algorithm that efficiently
approximates the probability distribution of lexicase selection. Our method not
only demonstrates superior problem-solving capabilities as a semantic-aware
selection method, but also benefits from having a probabilistic representation
of the selection process for enhanced efficiency and flexibility. Experiments
are conducted in two prevalent domains in genetic programming: program
synthesis and symbolic regression, using standard benchmarks including PSB and
SRBench. The empirical results show that plexicase selection achieves
state-of-the-art problem-solving performance that is competitive with
lexicase selection, and significantly outperforms lexicase selection in
computational efficiency.
|
[
"cs.NE",
"cs.LG"
] | false |
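For readers unfamiliar with the base algorithm that plexicase approximates, here is standard lexicase selection in minimal form; the paper's probabilistic approximation itself is not reproduced, and the `errors` layout is an assumption:

```python
# Standard lexicase selection, the algorithm whose selection distribution
# plexicase approximates (shown for context only).
# errors[i][t] is individual i's error on training case t.
import random

def lexicase_select(errors, rng=random.Random(0)):
    pool = list(range(len(errors)))
    cases = list(range(len(errors[0])))
    rng.shuffle(cases)                  # random case ordering per selection
    for t in cases:
        best = min(errors[i][t] for i in pool)
        pool = [i for i in pool if errors[i][t] == best]  # keep case-t elites
        if len(pool) == 1:
            break
    return rng.choice(pool)

errors = [[0, 1, 2], [1, 0, 0], [0, 0, 3]]
print(lexicase_select(errors))          # index of the selected parent
```

Because selection hinges on a recursive filtering over random case orders, the probability of selecting a given individual has no simple closed form, which is the NP-hardness the abstract alludes to.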
2305.11832
|
2023-05-19T17:15:34Z
|
Improving Multimodal Joint Variational Autoencoders through Normalizing
Flows and Correlation Analysis
|
[
"Agathe Senellart",
"Clément Chadebec",
"Stéphanie Allassonnière"
] |
We propose a new multimodal variational autoencoder that enables generation
from the joint distribution as well as conditional generation given any number
of complex modalities. The unimodal posteriors are conditioned on the Deep Canonical
Correlation Analysis embeddings which preserve the shared information across
modalities leading to more coherent cross-modal generations. Furthermore, we
use Normalizing Flows to enrich the unimodal posteriors and achieve more
diverse data generation. Finally, we propose to use a Product of Experts for
inferring one modality from several others which makes the model scalable to
any number of modalities. We demonstrate that our method improves likelihood
estimates, diversity of the generations and in particular coherence metrics in
the conditional generations on several datasets.
|
[
"stat.ML",
"cs.LG"
] | false |
2305.11928
|
2023-05-19T15:11:18Z
|
Energy-frugal and Interpretable AI Hardware Design using Learning
Automata
|
[
"Rishad Shafik",
"Tousif Rahman",
"Adrian Wheeldon",
"Ole-Christoffer Granmo",
"Alex Yakovlev"
] |
Energy efficiency is a crucial requirement for enabling powerful artificial
intelligence applications at the microedge. Hardware acceleration with frugal
architectural allocation is an effective method for reducing energy. Many
emerging applications also require the systems design to incorporate
interpretable decision models to establish responsibility and transparency. The
design needs to provision for additional resources to provide reachable states
in real-world data scenarios, defining conflicting design tradeoffs between
energy efficiency and interpretability, which is challenging to resolve.
Recently, a new machine learning algorithm, called the Tsetlin machine, has
been proposed. The algorithm is fundamentally based on the principles of
finite-state automata and benefits from natural logic underpinning rather than
arithmetic. In this paper, we investigate methods of energy-frugal artificial
intelligence hardware design by suitably tuning the hyperparameters, while
maintaining high learning efficacy. To demonstrate interpretability, we use
reachability and game-theoretic analysis in two simulation environments: a
SystemC model to study the bounded state transitions in the presence of
hardware faults and Nash equilibrium between states to analyze the learning
convergence. Our analyses provide the first insights into conflicting design
tradeoffs involved in energy-efficient and interpretable decision models for
this new artificial intelligence hardware architecture. We show that frugal
resource allocation coupled with systematic prodigality between randomized
reinforcements can provide decisive energy reduction while also achieving
robust and interpretable learning.
|
[
"cs.AI",
"cs.LG"
] | false |
2305.11942
|
2023-05-19T18:01:07Z
|
OPTWIN: Drift identification with optimal sub-windows
|
[
"Mauro Dalle Lucca Tosi",
"Martin Theobald"
] |
Online Learning (OL) is a field of research that is increasingly gaining
attention both in academia and industry. One of the main challenges of OL is
the inherent presence of concept drifts, which are commonly defined as
unforeseeable changes in the statistical properties of an incoming data stream
over time. The detection of concept drifts typically involves analyzing the
error rates produced by an underlying OL algorithm in order to identify if a
concept drift occurred or not, such that the OL algorithm can adapt
accordingly. Current concept-drift detectors perform very well, i.e., with low
false negative rates, but they still tend to exhibit high false positive rates
in concept-drift detection. This may impact the performance of the learner
and result in an undue amount of computational resources spent on retraining a
model that actually still performs within its expected range. In this paper, we
propose OPTWIN, our "OPTimal WINdow" concept drift detector. OPTWIN uses a
sliding window of events over an incoming data stream to track the errors of an
OL algorithm. The novelty of OPTWIN is to consider both the means and the
variances of the error rates produced by a learner in order to split the
sliding window into two provably optimal sub-windows, such that the split
occurs at the earliest event at which a statistically significant difference
according to either the $t$-test or the $f$-test occurred. We assessed OPTWIN over
the MOA framework, using ADWIN, DDM, EDDM, STEPD and ECDD as baselines over 7
synthetic and real-world datasets, and in the presence of both sudden and
gradual concept drifts. In our experiments, we show that OPTWIN surpasses the
F1-score of the baselines in a statistically significant manner while
maintaining a lower detection delay and saving up to 21% of time spent on
retraining the models.
|
[
"cs.LG",
"cs.DS"
] | false |
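The window-splitting idea is easy to illustrate. The sketch below scans for the earliest split where the two sub-windows differ significantly in mean via Welch's t-test; OPTWIN's provably optimal split additionally uses the variance ($f$-test) criterion and optimizations that are omitted here, so treat this as a simplified, assumption-laden illustration rather than the paper's algorithm:

```python
# Simplified drift check: scan a sliding window of errors and flag a drift at
# the earliest split where the two sub-windows differ significantly in mean.
import numpy as np
from scipy import stats

def detect_drift(errors, min_size=30, alpha=0.01):
    errors = np.asarray(errors, dtype=float)
    for split in range(min_size, len(errors) - min_size):
        old, new = errors[:split], errors[split:]
        p_value = stats.ttest_ind(old, new, equal_var=False).pvalue
        if p_value < alpha:
            return split        # earliest statistically significant split
    return None

rng = np.random.default_rng(0)
stream = np.concatenate([rng.binomial(1, 0.1, 500),   # stable error rate
                         rng.binomial(1, 0.4, 200)])  # drift begins here
print(detect_drift(stream))    # flags the shift injected at index 500
```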
2305.15428
|
2023-05-19T07:38:36Z
|
Online Influence Maximization under Decreasing Cascade Model
|
[
"Fang Kong",
"Jize Xie",
"Baoxiang Wang",
"Tao Yao",
"Shuai Li"
] |
We study online influence maximization (OIM) under a new model of decreasing
cascade (DC). This model is a generalization of the independent cascade (IC)
model by considering the common phenomenon of market saturation. In DC, the
chance of an influence attempt being successful decreases with previous failures.
This effect is neglected by previous OIM works under the IC and linear threshold
models. We propose the DC-UCB algorithm to solve this problem, which achieves a
regret bound of the same order as the state-of-the-art works on the IC model.
Extensive experiments on both synthetic and real datasets show the
effectiveness of our algorithm.
|
[
"cs.SI",
"cs.LG"
] | false |
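A toy simulation clarifies the decreasing cascade model: each failed influence attempt on a node shrinks the success probability of later attempts. The geometric decay `gamma ** failures` is our illustrative assumption (the paper defines DC more generally), and the DC-UCB bandit algorithm itself is not reproduced:

```python
# Toy decreasing-cascade (DC) spread: failed attempts on a node reduce the
# success probability of subsequent attempts, modeling market saturation.
import random

def dc_spread(graph, p, seeds, gamma=0.5, rng=random.Random(0)):
    """graph: dict node -> list of neighbors; p: base activation probability."""
    active, frontier = set(seeds), list(seeds)
    failures = {}                      # node -> number of failed attempts so far
    while frontier:
        u = frontier.pop()
        for v in graph.get(u, []):
            if v in active:
                continue
            prob = p * (gamma ** failures.get(v, 0))  # saturation effect
            if rng.random() < prob:
                active.add(v)
                frontier.append(v)
            else:
                failures[v] = failures.get(v, 0) + 1
    return active

graph = {0: [1, 2], 1: [2, 3], 2: [3], 3: []}
print(dc_spread(graph, p=0.6, seeds=[0]))   # set of activated nodes
```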
2305.18592
|
2023-05-19T14:49:04Z
|
Deep Neural Networks Generalization and Fine-Tuning for 12-lead ECG
Classification
|
[
"Aram Avetisyan",
"Shahane Tigranyan",
"Ariana Asatryan",
"Olga Mashkova",
"Sergey Skorik",
"Vladislav Ananev",
"Yury Markin"
] |
Numerous studies are aimed at diagnosing heart diseases based on 12-lead
electrocardiographic (ECG) records using deep learning methods. These studies
usually use specific datasets that differ in size and parameters, such as
patient metadata, number of doctors annotating ECGs, types of devices for ECG
recording, data preprocessing techniques, etc. It is well-known that
high-quality deep neural networks trained on one ECG dataset do not necessarily
perform well on another dataset or in other clinical settings. In this paper, we propose
a methodology to improve the quality of heart disease prediction regardless of
the dataset by training neural networks on a variety of datasets with further
fine-tuning for the specific dataset. To show its applicability, we train
different neural networks on a large private dataset TIS containing various ECG
records from multiple hospitals and on a relatively small public dataset
PTB-XL. We demonstrate that training the networks on a large dataset and
fine-tuning them on a small dataset from another source outperforms networks
trained only on one small dataset. We also show how the ability of deep
neural networks to generalize makes it possible to improve the classification
quality for more diseases.
|
[
"eess.SP",
"cs.LG"
] | false |
2305.11353
|
2023-05-19T00:07:38Z
|
Meta-learning for heterogeneous treatment effect estimation with
closed-form solvers
|
[
"Tomoharu Iwata",
"Yoichi Chikahara"
] |
This article proposes a meta-learning method for estimating the conditional
average treatment effect (CATE) from only a few observational data points. The proposed
method learns how to estimate CATEs from multiple tasks and uses the knowledge
for unseen tasks. In the proposed method, based on the meta-learner framework,
we decompose the CATE estimation problem into sub-problems. For each
sub-problem, we formulate our estimation models using neural networks with
task-shared and task-specific parameters. With our formulation, we can obtain
optimal task-specific parameters in a closed form that are differentiable with
respect to task-shared parameters, making it possible to perform effective
meta-learning. The task-shared parameters are trained such that the expected
CATE estimation performance in few-shot settings is improved by minimizing the
difference between a CATE estimated with a large amount of data and one
estimated with just a few data points. Our experimental results demonstrate that our
method outperforms the existing meta-learning approaches and CATE estimation
methods.
|
[
"stat.ML",
"cs.AI",
"cs.LG"
] | false |
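The closed-form idea in the abstract above can be sketched compactly: if the task-specific head is linear on top of shared features, its optimal ridge solution is available in closed form and is differentiable in the shared parameters. The shapes, ridge penalty, and the stand-in random feature map below are our assumptions; this is not the paper's full CATE decomposition:

```python
# Closed-form task-specific head on top of shared features: during
# meta-training, gradients can flow through Phi into the shared network.
import numpy as np

def task_head_closed_form(Phi, y, lam=1e-2):
    """Optimal linear head w* = (Phi^T Phi + lam*I)^{-1} Phi^T y."""
    d = Phi.shape[1]
    return np.linalg.solve(Phi.T @ Phi + lam * np.eye(d), Phi.T @ y)

rng = np.random.default_rng(0)
W_shared = rng.normal(size=(5, 8))      # stands in for the task-shared network
X, y = rng.normal(size=(40, 5)), rng.normal(size=40)
Phi = np.tanh(X @ W_shared)             # shared feature map
w_star = task_head_closed_form(Phi, y)  # task-specific parameters, closed form
print((Phi @ w_star)[:5])               # predictions from few samples
```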
2305.11360
|
2023-05-19T00:36:43Z
|
Differentially Private Adapters for Parameter Efficient Acoustic
Modeling
|
[
"Chun-Wei Ho",
"Chao-Han Huck Yang",
"Sabato Marco Siniscalchi"
] |
In this work, we devise a parameter-efficient solution to bring differential
privacy (DP) guarantees into adaptation of a cross-lingual speech classifier.
We investigate a new frozen pre-trained adaptation framework for DP-preserving
speech modeling without full model fine-tuning. First, we introduce a noisy
teacher-student ensemble into a conventional adaptation scheme leveraging a
frozen pre-trained acoustic model and attain superior performance compared to
DP-based stochastic gradient descent (DPSGD). Next, we insert residual adapters (RAs)
between layers of the frozen pre-trained acoustic model. The RAs reduce
training cost and time significantly with a negligible performance drop.
Evaluated on the open-access Multilingual Spoken Words (MLSW) dataset, our
solution reduces the number of trainable parameters by 97.5% using the RAs with
only a 4% performance drop with respect to fine-tuning the cross-lingual speech
classifier while preserving DP guarantees.
|
[
"cs.SD",
"cs.CR",
"cs.LG",
"eess.AS"
] | false |
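A minimal residual adapter of the kind described above, sketched in PyTorch: a small trainable bottleneck with a skip connection inserted after a frozen layer. The bottleneck width and zero-initialization are illustrative assumptions; the paper's exact placement and DP training recipe are not reproduced:

```python
# Residual adapter (RA): down-project, nonlinearity, up-project, plus a skip
# connection, so only the small adapter is trained while the backbone stays
# frozen.
import torch
import torch.nn as nn

class ResidualAdapter(nn.Module):
    def __init__(self, dim, bottleneck=32):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)
        nn.init.zeros_(self.up.weight)   # adapter starts as an identity map
        nn.init.zeros_(self.up.bias)

    def forward(self, x):
        return x + self.up(torch.relu(self.down(x)))  # skip connection

frozen = nn.Linear(256, 256)
for p in frozen.parameters():
    p.requires_grad = False              # backbone parameters stay frozen
adapter = ResidualAdapter(256)           # only these parameters are trained
x = torch.randn(4, 256)
print(adapter(frozen(x)).shape)          # torch.Size([4, 256])
```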
2305.11605
|
2023-05-19T11:31:33Z
|
MIDI-Draw: Sketching to Control Melody Generation
|
[
"Tashi Namgyal",
"Peter Flach",
"Raul Santos-Rodriguez"
] |
We describe a proof-of-principle implementation of a system for drawing
melodies that abstracts away from a note-level input representation via melodic
contours. The aim is to allow users to express their musical intentions without
requiring prior knowledge of how notes fit together melodiously. Current
approaches to controllable melody generation often require users to choose
parameters that are static across a whole sequence, via buttons or sliders. In
contrast, our method allows users to quickly specify how parameters should
change over time by drawing a contour.
|
[
"cs.SD",
"cs.AI",
"cs.LG",
"eess.AS"
] | false |
2305.11665
|
2023-05-19T13:30:34Z
|
A Generic Performance Model for Deep Learning in a Distributed
Environment
|
[
"Tulasi Kavarakuntla",
"Liangxiu Han",
"Huw Lloyd",
"Annabel Latham",
"Anthony Kleerekoper",
"Samson B. Akintoye"
] |
Performance modelling of a deep learning application is essential to improve
and quantify the efficiency of the model framework. However, existing
performance models are mostly case-specific, with limited capability for new
deep learning frameworks/applications. In this paper, we propose a generic
performance model of an application in a distributed environment with a generic
expression of the application execution time that considers the influence of
both intrinsic factors/operations (e.g. algorithmic parameters/internal
operations) and extrinsic scaling factors (e.g. the number of processors, data
chunks and batch size). We formulate it as a global optimization problem and
solve it using a regularized cost function and a differential evolution
algorithm to find the best-fit values of the constants in the generic
expression to match the experimentally determined computation time. We have
evaluated the proposed model on three deep learning frameworks (i.e.,
TensorFlow, MXNet, and PyTorch). The experimental results show that the
proposed model can provide accurate performance predictions and
interpretability. In addition, the proposed work can be applied to any
distributed deep neural network without instrumenting the code and provides
insight into the factors affecting performance and scalability.
|
[
"cs.DC",
"cs.LG",
"cs.PF"
] | false |
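The fitting step described above can be illustrated with SciPy's differential evolution optimizer. The execution-time expression used below (compute + communication + overhead) is a simple stand-in for the paper's generic model, and the measurements are synthetic:

```python
# Fit the constants of a toy execution-time model T = a*work/p + b*p + c to
# measured runtimes with differential evolution (global optimization).
import numpy as np
from scipy.optimize import differential_evolution

procs = np.array([1, 2, 4, 8, 16], dtype=float)
work = 1e4
measured = 50.0 * work / procs + 0.3 * procs + 2.0    # synthetic measurements
measured += np.random.default_rng(0).normal(0, 1.0, procs.size)

def cost(theta):
    a, b, c = theta
    pred = a * work / procs + b * procs + c
    return np.mean((pred - measured) ** 2)            # least-squares fit

res = differential_evolution(cost, bounds=[(0, 100), (0, 10), (0, 10)], seed=0)
print(res.x)    # best-fit constants (a, b, c)
```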
2305.11752
|
2023-05-19T15:39:55Z
|
Marginalized Beam Search Algorithms for Hierarchical HMMs
|
[
"Xuechun Xu",
"Joakim Jaldén"
] |
Inferring a state sequence from a sequence of measurements is a fundamental
problem in bioinformatics and natural language processing. The Viterbi and the
Beam Search (BS) algorithms are popular inference methods, but they have
limitations when applied to Hierarchical Hidden Markov Models (HHMMs), where
the interest lies in the outer state sequence. The Viterbi algorithm cannot
infer outer states without inner states, while the BS algorithm requires
marginalization over prohibitively large state spaces. We propose two new
algorithms to overcome these limitations: the greedy marginalized BS algorithm
and the local focus BS algorithm. We show that they approximate the most likely
outer state sequence with higher performance than the Viterbi algorithm, and we
evaluate the performance of these algorithms on an explicit duration HMM with
simulation and nanopore base calling data.
|
[
"cs.LG",
"eess.SP",
"q-bio.QM"
] | false |
2305.11765
|
2023-05-19T15:52:06Z
|
Tester-Learners for Halfspaces: Universal Algorithms
|
[
"Aravind Gollakota",
"Adam R. Klivans",
"Konstantinos Stavropoulos",
"Arsen Vasilyan"
] |
We give the first tester-learner for halfspaces that succeeds universally
over a wide class of structured distributions. Our universal tester-learner
runs in fully polynomial time and has the following guarantee: the learner
achieves error $O(\mathrm{opt}) + \epsilon$ on any labeled distribution that
the tester accepts, and moreover, the tester accepts whenever the marginal is
any distribution that satisfies a Poincar\'e inequality. In contrast to prior
work on testable learning, our tester is not tailored to any single target
distribution but rather succeeds for an entire target class of distributions.
The class of Poincar\'e distributions includes all strongly log-concave
distributions, and, assuming the Kannan--Lov\'{a}sz--Simonovits (KLS)
conjecture, includes all log-concave distributions. In the special case where
the label noise is known to be Massart, our tester-learner achieves error
$\mathrm{opt} + \epsilon$ while accepting all log-concave distributions
unconditionally (without assuming KLS). Our tests rely on checking
hypercontractivity of the unknown distribution using a sum-of-squares (SOS)
program, and crucially make use of the fact that Poincar\'e distributions are
certifiably hypercontractive in the SOS framework.
|
[
"cs.LG",
"cs.DS",
"stat.ML"
] | false |
2305.11798
|
2023-05-19T16:33:05Z
|
The probability flow ODE is provably fast
|
[
"Sitan Chen",
"Sinho Chewi",
"Holden Lee",
"Yuanzhi Li",
"Jianfeng Lu",
"Adil Salim"
] |
We provide the first polynomial-time convergence guarantees for the
probability flow ODE implementation (together with a corrector step) of
score-based generative modeling. Our analysis is carried out in the wake of
recent results obtaining such guarantees for the SDE-based implementation
(i.e., denoising diffusion probabilistic modeling or DDPM), but requires the
development of novel techniques for studying deterministic dynamics without
contractivity. Through the use of a specially chosen corrector step based on
the underdamped Langevin diffusion, we obtain better dimension dependence than
prior works on DDPM ($O(\sqrt{d})$ vs. $O(d)$, assuming smoothness of the data
distribution), highlighting potential advantages of the ODE framework.
|
[
"cs.LG",
"math.ST",
"stat.ML",
"stat.TH"
] | false |
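For reference, the probability flow ODE the abstract refers to has the following standard form (the paper's corrector step and convergence analysis are not reproduced here):

```latex
% Given a forward diffusion dx_t = f(x_t, t)\,dt + g(t)\,dW_t with marginals
% p_t, the probability flow ODE sharing those marginals is
\frac{dx_t}{dt} = f(x_t, t) - \tfrac{1}{2}\, g(t)^2\, \nabla_x \log p_t(x_t),
% and sampling replaces the unknown score \nabla_x \log p_t with a learned
% estimate s_\theta(x_t, t).
```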
2305.11805
|
2023-05-19T16:41:59Z
|
PANNA 2.0: Efficient neural network interatomic potentials and new
architectures
|
[
"Franco Pellegrini",
"Ruggero Lot",
"Yusuf Shaidu",
"Emine Küçükbenli"
] |
We present the latest release of PANNA 2.0 (Properties from Artificial Neural
Network Architectures), a code for the generation of neural network interatomic
potentials based on local atomic descriptors and multilayer perceptrons. Built
on a new back end, this new release of PANNA features improved tools for
customizing and monitoring network training, better GPU support including a
fast descriptor calculator, new plugins for external codes and a new
architecture for the inclusion of long-range electrostatic interactions through
a variational charge equilibration scheme. We present an overview of the main
features of the new code, and several benchmarks comparing the accuracy of
PANNA models to the state of the art, on commonly used benchmarks as well as
richer datasets.
|
[
"physics.comp-ph",
"cond-mat.mtrl-sci",
"cs.LG",
"physics.chem-ph"
] | false |
2305.11807
|
2023-05-19T16:43:53Z
|
On the Fairness Impacts of Private Ensembles Models
|
[
"Cuong Tran",
"Ferdinando Fioretto"
] |
The Private Aggregation of Teacher Ensembles (PATE) is a machine learning
framework that enables the creation of private models through the combination
of multiple "teacher" models and a "student" model. The student model learns to
predict an output based on the voting of the teachers, and the resulting model
satisfies differential privacy. PATE has been shown to be effective in creating
private models in semi-supervised settings or when protecting data labels is a
priority. This paper explores whether the use of PATE can result in unfairness,
and demonstrates that it can lead to accuracy disparities among groups of
individuals. The paper also analyzes the algorithmic and data properties that
contribute to these disproportionate impacts, why these aspects are affecting
different groups disproportionately, and offers recommendations for mitigating
these effects.
|
[
"cs.LG",
"cs.AI",
"cs.CY"
] | false |
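The PATE mechanism summarized above is short enough to sketch: the student's label for a query is the noisy argmax of teacher votes. The Laplace scale below is an illustrative assumption; the fairness effects the paper studies arise downstream of this aggregation:

```python
# PATE-style noisy aggregation: the student's label for a query is the noisy
# argmax over teacher votes, which is what provides differential privacy.
import numpy as np

def pate_label(teacher_preds, n_classes, gamma=0.05,
               rng=np.random.default_rng(0)):
    """teacher_preds: one predicted class per teacher for a single query."""
    votes = np.bincount(teacher_preds, minlength=n_classes).astype(float)
    votes += rng.laplace(scale=1.0 / gamma, size=n_classes)  # DP noise
    return int(np.argmax(votes))

teacher_preds = np.array([0, 0, 1, 0, 2, 0, 1, 0, 0, 1])
print(pate_label(teacher_preds, n_classes=3))
```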
2305.11833
|
2023-05-19T17:17:00Z
|
Complexity of Neural Network Training and ETR: Extensions with
Effectively Continuous Functions
|
[
"Teemu Hankala",
"Miika Hannula",
"Juha Kontinen",
"Jonni Virtema"
] |
We study the complexity of the problem of training neural networks defined
via various activation functions. The training problem is known to be
$\exists\mathbb{R}$-complete with respect to linear activation functions and the ReLU
activation function. We consider the complexity of the problem with respect to
the sigmoid activation function and other effectively continuous functions. We
show that these training problems are polynomial-time many-one bireducible to
the existential theory of the reals extended with the corresponding activation
functions. In particular, we establish that the sigmoid activation function
leads to the existential theory of the reals with the exponential function. It
is thus open, and equivalent with the decidability of the existential theory of
the reals with the exponential function, whether training neural networks using
the sigmoid activation function is algorithmically solvable. In contrast, we
obtain that the training problem is undecidable if sinusoidal activation
functions are considered. Finally, we obtain general upper bounds for the
complexity of the training problem in the form of low levels of the
arithmetical hierarchy.
|
[
"cs.LO",
"cs.AI",
"cs.CC",
"cs.LG"
] | false |
2305.11844
|
2023-05-19T17:35:20Z
|
AI's Regimes of Representation: A Community-centered Study of
Text-to-Image Models in South Asia
|
[
"Rida Qadri",
"Renee Shelby",
"Cynthia L. Bennett",
"Remi Denton"
] |
This paper presents a community-centered study of cultural limitations of
text-to-image (T2I) models in the South Asian context. We theorize these
failures using scholarship on dominant media regimes of representations and
locate them within participants' reporting of their existing social
marginalizations. We thus show how generative AI can reproduce an outsider's
gaze for viewing South Asian cultures, shaped by global and regional power
inequities. By centering communities as experts and soliciting their
perspectives on T2I limitations, our study adds rich nuance into existing
evaluative frameworks and deepens our understanding of the culturally-specific
ways AI technologies can fail in non-Western and Global South settings. We
distill lessons for responsible development of T2I models, recommending
concrete pathways forward that can allow for recognition of structural
inequalities.
|
[
"cs.CY",
"cs.AI",
"cs.HC",
"cs.LG"
] | false |
2305.11921
|
2023-05-19T08:58:55Z
|
An Approach to Multiple Comparison Benchmark Evaluations that is Stable
Under Manipulation of the Comparate Set
|
[
"Ali Ismail-Fawaz",
"Angus Dempster",
"Chang Wei Tan",
"Matthieu Herrmann",
"Lynn Miller",
"Daniel F. Schmidt",
"Stefano Berretti",
"Jonathan Weber",
"Maxime Devanne",
"Germain Forestier",
"Geoffrey I. Webb"
] |
The measurement of progress using benchmark evaluations is ubiquitous in
computer science and machine learning. However, common approaches to analyzing
and presenting the results of benchmark comparisons of multiple algorithms over
multiple datasets, such as the critical difference diagram introduced by
Dem\v{s}ar (2006), have important shortcomings and, we show, are open to both
inadvertent and intentional manipulation. To address these issues, we propose a
new approach to presenting the results of benchmark comparisons, the Multiple
Comparison Matrix (MCM), that prioritizes pairwise comparisons and precludes
the means of manipulating experimental results in existing approaches. MCM can
be used to show the results of an all-pairs comparison, or to show the results
of a comparison between one or more selected algorithms and the state of the
art. MCM is implemented in Python and is publicly available.
|
[
"stat.ME",
"cs.AI",
"cs.LG",
"cs.PF"
] | false |
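The pairwise philosophy behind MCM can be illustrated with a small sketch: a matrix of per-pair win/tie/loss counts across datasets, which is unchanged by adding or removing other comparates. This is our illustration, not the published MCM implementation:

```python
# Pairwise comparison matrix: each cell depends only on the two algorithms
# involved, so the matrix is stable under manipulation of the comparate set.
import numpy as np

def pairwise_matrix(scores):
    """scores: dict algorithm -> array of per-dataset scores (higher=better)."""
    names = list(scores)
    cell = {}
    for a in names:
        for b in names:
            if a == b:
                continue
            wins = int(np.sum(scores[a] > scores[b]))
            ties = int(np.sum(scores[a] == scores[b]))
            losses = len(scores[a]) - wins - ties
            cell[(a, b)] = f"{wins}/{ties}/{losses}"  # a's win/tie/loss vs. b
    return cell

scores = {"algo1": np.array([0.9, 0.8, 0.7]),
          "algo2": np.array([0.85, 0.8, 0.75])}
print(pairwise_matrix(scores))
```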
2305.11957
|
2023-05-19T18:41:17Z
|
Towards understanding neural collapse in supervised contrastive learning
with the information bottleneck method
|
[
"Siwei Wang",
"Stephanie E Palmer"
] |
Neural collapse describes the geometry of activation in the final layer of a
deep neural network when it is trained beyond performance plateaus. Open
questions include whether neural collapse leads to better generalization and,
if so, why and how training beyond the plateau helps. We model neural collapse
as an information bottleneck (IB) problem in order to investigate whether such
a compact representation exists and discover its connection to generalization.
We demonstrate that neural collapse leads to good generalization specifically
when it approaches an optimal IB solution of the classification problem. Recent
research has shown that two deep neural networks independently trained with the
same contrastive loss objective are linearly identifiable, meaning that the
resulting representations are equivalent up to a matrix transformation. We
leverage linear identifiability to approximate an analytical solution of the IB
problem. This approximation demonstrates that when class means exhibit
$K$-simplex Equiangular Tight Frame (ETF) behavior (e.g., $K$=10 for CIFAR10
and $K$=100 for CIFAR100), they coincide with the critical phase transitions of
the corresponding IB problem. The performance plateau occurs once the optimal
solution for the IB problem includes all of these phase transitions. We also
show that the resulting $K$-simplex ETF can be packed into a $K$-dimensional
Gaussian distribution using supervised contrastive learning with a ResNet50
backbone. This geometry suggests that the $K$-simplex ETF learned by supervised
contrastive learning approximates the optimal features for source coding.
Hence, there is a direct correspondence between optimal IB solutions and
generalization in contrastive learning.
|
[
"cs.LG",
"cs.IT",
"math.IT"
] | false |
2305.11965
|
2023-05-19T19:25:56Z
|
Not All Semantics are Created Equal: Contrastive Self-supervised
Learning with Automatic Temperature Individualization
|
[
"Zi-Hao Qiu",
"Quanqi Hu",
"Zhuoning Yuan",
"Denny Zhou",
"Lijun Zhang",
"Tianbao Yang"
] |
In this paper, we aim to optimize a contrastive loss with individualized
temperatures in a principled and systematic manner for self-supervised
learning. The common practice of using a global temperature parameter $\tau$
ignores the fact that ``not all semantics are created equal'', meaning that
different anchor data may have different numbers of samples with similar
semantics, especially when data exhibits long-tails. First, we propose a new
robust contrastive loss inspired by distributionally robust optimization (DRO),
providing us with an intuition about the effect of $\tau$ and a mechanism for
automatic temperature individualization. Then, we propose an efficient
stochastic algorithm for optimizing the robust contrastive loss with a provable
convergence guarantee without using large mini-batch sizes. Theoretical and
experimental results show that our algorithm automatically learns a suitable
$\tau$ for each sample. Specifically, samples with frequent semantics use large
temperatures to keep local semantic structures, while samples with rare
semantics use small temperatures to induce more separable features. Our method
not only outperforms prior strong baselines (e.g., SimCLR, CLIP) on unimodal
and bimodal datasets with larger improvements on imbalanced data but also is
less sensitive to hyper-parameters. To our best knowledge, this is the first
methodical approach to optimizing a contrastive loss with individualized
temperatures.
|
[
"cs.LG",
"cs.AI",
"math.OC",
"stat.ML"
] | false |
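The per-anchor temperature idea can be illustrated by the temperature-scaled log-sum-exp that a KL-constrained DRO formulation induces. The loss shape below is a simplified sketch under our assumptions; the paper's constraint set and stochastic optimizer are not reproduced:

```python
# Per-anchor robust contrastive loss: a temperature-scaled log-sum-exp over
# negative similarities, where tau_i can differ per anchor. Smaller tau_i
# concentrates on the hardest negatives; larger tau_i averages over all.
import numpy as np

def robust_contrastive_loss(sim_pos, sim_negs, tau_i):
    """Per-anchor loss: -sim_pos + tau_i * log mean exp(sim_neg / tau_i)."""
    z = sim_negs / tau_i
    lse = tau_i * (np.log(np.mean(np.exp(z - z.max()))) + z.max())  # stable
    return -sim_pos + lse

sim_negs = np.array([0.2, 0.1, 0.8, -0.3])
for tau in (0.1, 0.5, 1.0):   # smaller tau -> focus on the hardest negatives
    print(tau, robust_contrastive_loss(0.9, sim_negs, tau))
```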
2305.12010
|
2023-05-19T21:37:37Z
|
Chemellia: An Ecosystem for Atomistic Scientific Machine Learning
|
[
"Anant Thazhemadam",
"Dhairya Gandhi",
"Venkatasubramanian Viswanathan",
"Rachel C. Kurchin"
] |
Chemellia is an open-source framework for atomistic machine learning in the
Julia programming language. The framework takes advantage of Julia's high speed
as well as the ability to share and reuse code and interfaces through the
paradigm of multiple dispatch. Chemellia is designed to make use of existing
interfaces and avoid ``reinventing the wheel'' wherever possible. A key aspect
of the Chemellia ecosystem is the ChemistryFeaturization interface for defining
and encoding features -- it is designed to maximize interoperability between
featurization schemes and elements thereof, to maintain provenance of encoded
features, and to ensure easy decodability and reconfigurability to enable
feature engineering experiments. This embodies the overall design principles of
the Chemellia ecosystem: separation of concerns, interoperability, and
transparency. We illustrate these principles by discussing the implementation
of crystal graph convolutional neural networks for material property
prediction.
|
[
"cs.CE",
"cond-mat.mtrl-sci",
"cs.LG"
] | false |
2305.12030
|
2023-05-19T23:00:54Z
|
Learning Continually on a Sequence of Graphs -- The Dynamical System Way
|
[
"Krishnan Raghavan",
"Prasanna Balaprakash"
] |
Continual learning~(CL) is a field concerned with learning a series of
inter-related tasks, with the tasks typically defined in the sense of either
regression or classification. In recent years, CL has been studied extensively
when these tasks are defined using Euclidean data -- data, such as images, that
can be described by a set of vectors in an n-dimensional real space. However,
the literature is quite sparse when the data corresponding to a CL task is
non-Euclidean -- data, such as graphs, point clouds, or manifolds, where the
notion of similarity in the sense of a Euclidean metric does not hold. For
instance, a graph is described by a tuple of vertices and edges, and the
similarity between two graphs is not well defined through a Euclidean metric.
Due to this fundamental nature of the data, developing CL for non-Euclidean data
presents several theoretical and methodological challenges. In particular, CL
for graphs requires explicit modelling of the nonstationary behavior of vertices
and edges and their effects on the learning problem. Therefore, in this work,
we develop an adaptive dynamic programming viewpoint for CL with graphs and
formulate a two-player sequential game between the act of learning new
tasks~(generalization) and remembering previously learned tasks~(forgetting).
We prove mathematically the existence of a solution to the game and demonstrate
convergence to the solution of the game. Finally, we demonstrate the efficacy
of our method on a number of graph benchmarks with a comprehensive ablation
study while establishing state-of-the-art performance.
|
[
"cs.LG",
"cs.AI",
"math.OC"
] | false |
2305.13332
|
2023-05-19T15:46:31Z
|
Conditional Online Learning for Keyword Spotting
|
[
"Michel Meneses",
"Bruno Iwami"
] |
Modern approaches for keyword spotting rely on training deep neural networks
on large static datasets with i.i.d. distributions. However, the resulting
models tend to underperform when presented with changing data regimes in
real-life applications. This work investigates a simple but effective online
continual learning method that updates a keyword spotter on-device via SGD as
new data becomes available. Contrary to previous research, this work focuses on
learning the same KWS task, which covers most commercial applications. During
experiments with dynamic audio streams in different scenarios, this method
improves the performance of a pre-trained small-footprint model by 34%.
Moreover, experiments demonstrate that, compared to a naive online learning
implementation, conditional model updates based on its performance in a small
hold-out set drawn from the training distribution mitigate catastrophic
forgetting.
|
[
"eess.AS",
"cs.LG",
"cs.SD"
] | false |
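The conditional update rule described above reduces to a few lines. The sketch below commits an SGD step only if accuracy on a small hold-out set does not degrade; the callables `sgd_step` and `accuracy` and the tolerance are hypothetical names, not the paper's API:

```python
# Conditional online update: take an SGD step on each incoming example, but
# commit it only if hold-out accuracy (on data drawn from the training
# distribution) does not drop, mitigating catastrophic forgetting.
import copy

def conditional_update(model, example, holdout, sgd_step, accuracy, tol=0.0):
    candidate = copy.deepcopy(model)
    sgd_step(candidate, example)        # online SGD on the new sample
    if accuracy(candidate, holdout) >= accuracy(model, holdout) - tol:
        return candidate                # commit: no accuracy regression
    return model                        # reject: keep the previous weights
```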
2305.14370
|
2023-05-19T08:21:02Z
|
A Survey on the Role of Artificial Intelligence in the Prediction and
Diagnosis of Schizophrenia
|
[
"Narges Ramesh",
"Yasmin Ghodsi",
"Hamidreza Bolhasani"
] |
Machine learning is employed in healthcare to draw approximate conclusions
regarding human diseases and mental health problems. Compared to older
traditional methods, it can help to analyze data more efficiently and produce
better and more dependable results. Millions of people are affected by
schizophrenia, which is a chronic mental disorder that can significantly impact
their lives. Many machine learning algorithms have been developed to predict
and prevent this disease, and they can potentially be implemented in the
diagnosis of individuals who have it. This survey aims to review papers that
have focused on the use of deep learning to detect and predict schizophrenia
using EEG signals, functional magnetic resonance imaging (fMRI), and diffusion
magnetic resonance imaging (dMRI). With our chosen search strategy, we assessed
ten publications from 2019 to 2022. All studies achieved prediction accuracies
of more than 80%. This review provides summaries of the studies and compares
their notable aspects. In the field of artificial intelligence (AI) and machine
learning (ML) for schizophrenia, significant advances have been made due to the
availability of ML tools, and we are optimistic that this field will continue
to grow.
|
[
"q-bio.NC",
"cs.AI",
"cs.LG",
"cs.NE"
] | false |
2305.14373
|
2023-05-19T20:20:44Z
|
An Ensemble Semi-Supervised Adaptive Resonance Theory Model with
Explanation Capability for Pattern Classification
|
[
"Farhad Pourpanah",
"Chee Peng Lim",
"Ali Etemad",
"Q. M. Jonathan Wu"
] |
Most semi-supervised learning (SSL) models entail complex structures and
iterative training processes as well as face difficulties in interpreting their
predictions to users. To address these issues, this paper proposes a new
interpretable SSL model using the supervised and unsupervised Adaptive
Resonance Theory (ART) family of networks, which is denoted as SSL-ART.
Firstly, SSL-ART adopts an unsupervised fuzzy ART network to create a number of
prototype nodes using unlabeled samples. Then, it leverages a supervised fuzzy
ARTMAP structure to map the established prototype nodes to the target classes
using labeled samples. Specifically, a one-to-many (OtM) mapping scheme is
devised to associate a prototype node with more than one class label. The main
advantages of SSL-ART include the capability of: (i) performing online
learning, (ii) reducing the number of redundant prototype nodes through the OtM
mapping scheme and minimizing the effects of noisy samples, and (iii) providing
an explanation facility for users to interpret the predicted outcomes. In
addition, a weighted voting strategy is introduced to form an ensemble SSL-ART
model, which is denoted as WESSL-ART. Every ensemble member, i.e., SSL-ART,
assigns a different weight to each class based on its
performance pertaining to the corresponding class. The aim is to mitigate the
effects of training data sequences on all SSL-ART members and improve the
overall performance of WESSL-ART. The experimental results on eighteen
benchmark data sets, three artificially generated data sets, and a real-world
case study indicate the benefits of the proposed SSL-ART and WESSL-ART models
for tackling pattern classification problems.
|
[
"cs.NE",
"cs.AI",
"cs.LG"
] | false |
2306.03885
|
2023-05-19T06:38:29Z
|
Three-way Imbalanced Learning based on Fuzzy Twin SVM
|
[
"Wanting Cai",
"Mingjie Cai",
"Qingguo Li",
"Qiong Liu"
] |
Three-way decision (3WD) is a powerful tool for granular computing to deal
with uncertain data, commonly used in information systems, decision-making, and
medical care. Three-way decision has received much research attention in
traditional rough set models. However, it is rarely combined with the currently
popular field of machine learning. In this paper,
three-way decision is connected with SVM, a standard binary classification
model in machine learning, to address imbalanced classification problems on
which standard SVM needs improvement. A new three-way fuzzy membership function and a new fuzzy
twin support vector machine with three-way membership (TWFTSVM) are proposed.
The new three-way fuzzy membership function is defined to increase the
certainty of uncertain data in both input space and feature space, which
assigns higher fuzzy membership to minority samples compared with majority
samples. To evaluate the effectiveness of the proposed model, comparative
experiments are designed for forty-seven different datasets with varying
imbalance ratios. In addition, datasets with different imbalance ratios are
derived from the same dataset to further assess the proposed model's
performance. The results show that the proposed model significantly outperforms
other traditional SVM-based methods.
|
[
"cs.LG",
"cs.IT",
"math.IT"
] | false |
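An illustrative fuzzy-membership assignment in the spirit of the abstract: minority-class samples receive higher membership, attenuated by distance to the class center. This particular formula is our assumption for illustration, not the paper's three-way membership function defined over both input and feature space:

```python
# Illustrative fuzzy membership for imbalanced data: class weight boosts the
# minority class; distance to the class center down-weights outliers.
import numpy as np

def fuzzy_membership(X, y, eps=1e-6):
    m = np.empty(len(y), dtype=float)
    n_min = min(np.sum(y == c) for c in np.unique(y))
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        center = X[idx].mean(axis=0)
        d = np.linalg.norm(X[idx] - center, axis=1)
        closeness = 1.0 - d / (d.max() + eps)   # in [0, 1]
        class_weight = n_min / len(idx)         # equals 1 for the minority
        m[idx] = class_weight * (0.5 + 0.5 * closeness)
    return m

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (90, 2)), rng.normal(3, 1, (10, 2))])
y = np.array([0] * 90 + [1] * 10)   # 9:1 imbalance
m = fuzzy_membership(X, y)
print(m[:3], m[-3:])                # minority samples get higher membership
```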
2305.11381
|
2023-05-19T01:58:13Z
|
Online Learning in a Creator Economy
|
[
"Banghua Zhu",
"Sai Praneeth Karimireddy",
"Jiantao Jiao",
"Michael I. Jordan"
] |
The creator economy has revolutionized the way individuals can profit through
online platforms. In this paper, we initiate the study of online learning in
the creator economy by modeling the creator economy as a three-party game
between the users, platform, and content creators, with the platform
interacting with the content creator under a principal-agent model through
contracts to encourage better content. Additionally, the platform interacts
with the users to recommend new content, receive an evaluation, and ultimately
profit from the content, which can be modeled as a recommender system.
Our study aims to explore how the platform can jointly optimize the contract
and recommender system to maximize the utility in an online learning fashion.
We primarily analyze and compare two families of contracts: return-based
contracts and feature-based contracts. Return-based contracts pay the content
creator a fraction of the reward the platform gains. In contrast, feature-based
contracts pay the content creator based on the quality or features of the
content, regardless of the reward the platform receives. We show that under
smoothness assumptions, the joint optimization of return-based contracts and
the recommendation policy achieves a regret of $\Theta(T^{2/3})$. For the
feature-based contract, we introduce a definition of intrinsic dimension $d$ to
characterize the hardness of learning the contract and provide an upper bound
on the regret $\mathcal{O}(T^{(d+1)/(d+2)})$. The upper bound is tight for the
linear family.
|
[
"cs.GT",
"cs.CY",
"cs.IR",
"cs.LG",
"econ.TH"
] | false |
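As a minimal sketch of the two contract families contrasted above: a return-based contract pays the creator a fraction of the platform's realised reward, while a feature-based contract pays as a function of content features alone. The linear pricing rule, function names, and all numbers below are illustrative stand-ins, not the paper's definitions.

```python
import numpy as np

def return_based_payment(reward: float, alpha: float) -> float:
    """Pay the creator a fraction alpha of the platform's realised reward."""
    return alpha * reward

def feature_based_payment(features: np.ndarray, price: np.ndarray) -> float:
    """Pay according to content features, independent of realised reward.
    A linear pricing rule is an illustrative choice, not the paper's."""
    return float(price @ features)

# One round of the interaction, with made-up numbers.
features = np.array([0.8, 0.3])   # e.g. quality scores of the content
reward = 10.0                     # platform's reward after recommendation
print(return_based_payment(reward, alpha=0.2))                 # 2.0
print(feature_based_payment(features, np.array([1.5, 1.0])))   # 1.5
```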
2305.12236
|
2023-05-20T17:01:52Z
|
Embracing Compact and Robust Architectures for Multi-Exposure Image
Fusion
|
[
"Zhu Liu",
"Jinyuan Liu",
"Guanyao Wu",
"Xin Fan",
"Risheng Liu"
] |
In recent years, deep learning-based methods have achieved remarkable
progress in multi-exposure image fusion. However, existing methods rely on
aligned image pairs, inevitably generating artifacts when faced with device
shaking in real-world scenarios. Moreover, these learning-based methods are
built on handcrafted architectures and operations by increasing network depth
or width, neglecting different exposure characteristics. As a result, these
directly cascaded architectures with redundant parameters fail to achieve
efficient inference and incur massive computation. To alleviate these
issues, in this paper, we propose a search-based paradigm, involving
self-alignment and detail repletion modules for robust multi-exposure image
fusion. By utilizing scene relighting and deformable convolutions, the
self-alignment module can accurately align images despite camera movement.
Furthermore, by imposing a hardware-sensitive constraint, we introduce neural
architecture search to discover compact and efficient networks, investigating
effective feature representation for fusion. We realize the state-of-the-art
performance in comparison to various competitive schemes, yielding a 4.02% and
29.34% improvement in PSNR for general and misaligned scenarios, respectively,
while reducing inference time by 68.1%. The source code will be available at
https://github.com/LiuZhu-CV/CRMEF.
|
[
"cs.CV"
] | false |
2305.12252
|
2023-05-20T17:59:23Z
|
Boosting Human-Object Interaction Detection with Text-to-Image Diffusion
Model
|
[
"Jie Yang",
"Bingliang Li",
"Fengyu Yang",
"Ailing Zeng",
"Lei Zhang",
"Ruimao Zhang"
] |
This paper investigates the problem of the current HOI detection methods and
introduces DiffHOI, a novel HOI detection scheme grounded on a pre-trained
text-image diffusion model, which enhances the detector's performance via
improved data diversity and HOI representation. We demonstrate that the
internal representation space of a frozen text-to-image diffusion model is
highly relevant to verb concepts and their corresponding context. Accordingly,
we propose an adapter-style tuning method to extract the various semantically
associated representations from a frozen diffusion model and CLIP model to
enhance the human and object representations from the pre-trained detector,
further reducing the ambiguity in interaction prediction. Moreover, to fill in
the gaps of HOI datasets, we propose SynHOI, a class-balanced, large-scale, and
high-diversity synthetic dataset containing over 140K HOI images with full
triplet annotations. It is built using an automatic and scalable pipeline
designed to scale up the generation of diverse and high-precision HOI-annotated
data. SynHOI could effectively relieve the long-tail issue in existing datasets
and facilitate learning interaction representations. Extensive experiments
demonstrate that DiffHOI significantly outperforms the state-of-the-art in
regular detection (i.e., 41.50 mAP) and zero-shot detection. Furthermore,
SynHOI can improve the performance of model-agnostic and backbone-agnostic HOI
detection, particularly exhibiting an outstanding 11.55% mAP improvement in
rare classes.
|
[
"cs.CV"
] | false |
2305.12254
|
2023-05-20T18:01:47Z
|
A request for clarity over the End of Sequence token in the
Self-Critical Sequence Training
|
[
"Jia Cheng Hu",
"Roberto Cavicchioli",
"Alessandro Capotondi"
] |
The Image Captioning research field is currently compromised by the lack of
transparency and awareness over the End-of-Sequence token (<Eos>) in the
Self-Critical Sequence Training. If the <Eos> token is omitted, a model can
boost its performance up to +4.1 CIDEr-D using trivial sentence fragments.
While this phenomenon poses an obstacle to a fair evaluation and comparison of
established works, people involved in new projects are given the arduous choice
between lower scores and unsatisfactory descriptions due to the competitive
nature of the research. This work proposes to solve the problem by spreading
awareness of the issue itself. In particular, we invite future works to share a
simple and informative signature with the help of a library called SacreEOS.
Code available at
\emph{\href{https://github.com/jchenghu/sacreeos}{https://github.com/jchenghu/sacreeos}}
|
[
"cs.CV"
] | false |
2305.12070
|
2023-05-20T03:12:23Z
|
Instrumental Variable Learning for Chest X-ray Classification
|
[
"Weizhi Nie",
"Chen Zhang",
"Dan song",
"Yunpeng Bai",
"Keliang Xie",
"Anan Liu"
] |
The chest X-ray (CXR) is commonly employed to diagnose thoracic illnesses,
but the challenge of achieving accurate automatic diagnosis through this method
persists due to the complex relationships among pathologies. In recent years,
various deep learning-based approaches have been suggested to tackle this
problem but confounding factors such as image resolution or noise problems
often damage model performance. In this paper, we focus on the chest X-ray
classification task and propose an interpretable instrumental variable (IV)
learning framework, to eliminate the spurious association and obtain accurate
causal representation. Specifically, we first construct a structural causal
model (SCM) for our task and learn the confounders and the preliminary
representations of the IV. We then leverage electronic health records (EHR) as
auxiliary information and fuse the above features with our transformer-based
semantic fusion module, so that the IV carries medical semantics. Meanwhile, the
reliability of IV is further guaranteed via the constraints of mutual
information between related causal variables. Finally, our approach's
performance is demonstrated using the MIMIC-CXR, NIH ChestX-ray 14, and
CheXpert datasets, and we achieve competitive results.
|
[
"eess.IV",
"cs.CV"
] | false |
2305.12072
|
2023-05-20T03:17:44Z
|
Chest X-ray Image Classification: A Causal Perspective
|
[
"Weizhi Nie",
"Chen Zhang",
"Dan Song",
"Lina Zhao",
"Yunpeng Bai",
"Keliang Xie",
"Anan Liu"
] |
The chest X-ray (CXR) is one of the most common and easy-to-get medical tests
used to diagnose common diseases of the chest. Recently, many deep
learning-based methods have been proposed that are capable of effectively
classifying CXRs. Even though these techniques have worked quite well, it is
difficult to establish whether what these algorithms actually learn is the
cause-and-effect link between diseases and their causes or just how to map
labels to photos. In this paper, we propose a causal approach to address the CXR
classification problem, which constructs a structural causal model (SCM) and
uses the backdoor adjustment to select effective visual information for CXR
classification. Specifically, we design different probability optimization
functions to eliminate the influence of confounders on the learning of real
causality. Experimental results on the open-source NIH ChestX-ray14 dataset
demonstrate that our proposed method achieves superior classification
performance.
|
[
"eess.IV",
"cs.CV"
] | false |
2305.12106
|
2023-05-20T05:58:35Z
|
Human labeling errors and their impact on ConvNets for satellite image
scene classification
|
[
"Longkang Peng",
"Tao Wei",
"Xuehong Chen",
"Xiaobei Chen",
"Rui Sun",
"Luoma Wan",
"Xiaolin Zhu"
] |
Convolutional neural networks (ConvNets) have been successfully applied to
satellite image scene classification. Human-labeled training datasets are
essential for ConvNets to perform accurate classification. Errors in
human-labeled training datasets are unavoidable due to the complexity of
satellite images. However, the distribution of human labeling errors on
satellite images and their impact on ConvNets have not been investigated. To
fill this research gap, this study, for the first time, collected real-world
labels from 32 participants and explored how their errors affect three ConvNets
(VGG16, GoogLeNet and ResNet-50) for high-resolution satellite image scene
classification. We found that: (1) human labeling errors have significant class
and instance dependence, which is fundamentally different from the simulation
noise in previous studies; (2) regarding the overall accuracy of all classes,
when human labeling errors in training data increase by one unit, the overall
accuracy of ConvNets classification decreases by approximately half a unit; (3)
regarding the accuracy of each class, the impact of human labeling errors on
ConvNets shows large heterogeneity across classes. To uncover the mechanism
underlying the impact of human labeling errors on ConvNets, we further compared
it with two types of simulated labeling noise: uniform noise (errors
independent of both classes and instances) and class-dependent noise (errors
independent of instances but not classes). Our results show that the impact of
human labeling errors on ConvNets is similar to that of the simulated
class-dependent noise but not to that of the simulated uniform noise,
suggesting that the impact of human labeling errors on ConvNets is mainly due
to class-dependent errors rather than instance-dependent errors.
|
[
"cs.CV",
"cs.AI"
] | false |
2305.12144
|
2023-05-20T09:02:10Z
|
DiffCap: Exploring Continuous Diffusion on Image Captioning
|
[
"Yufeng He",
"Zefan Cai",
"Xu Gan",
"Baobao Chang"
] |
Current image captioning works usually focus on generating descriptions in an
autoregressive manner. However, there are limited works that focus on
generating descriptions non-autoregressively, which brings more decoding
diversity. Inspired by the success of diffusion models on generating
natural-looking images, we propose a novel method DiffCap to apply continuous
diffusions on image captioning. Unlike image generation where the output is
fixed-size and continuous, image description length varies with discrete
tokens. Our method transforms discrete tokens in a natural way and applies
continuous diffusion on them to successfully fuse extracted image features for
diffusion caption generation. Our experiments on COCO dataset demonstrate that
our method uses a much simpler structure to achieve comparable results to the
previous non-autoregressive works. Apart from quality, an intriguing property
of DiffCap is its high diversity during generation, which is missing from many
autoregressive models. We believe our method of fusing multimodal features in
diffusion-based language generation will inspire more research on multimodal
language generation tasks, given its simplicity and decoding flexibility.
|
[
"cs.CV",
"cs.AI"
] | false |
2305.12170
|
2023-05-20T11:18:38Z
|
Dual-Diffusion: Dual Conditional Denoising Diffusion Probabilistic
Models for Blind Super-Resolution Reconstruction in RSIs
|
[
"Mengze Xu",
"Jie Ma",
"Yuanyuan Zhu"
] |
Previous super-resolution reconstruction (SR) works are always designed on
the assumption that the degradation operation is fixed, such as bicubic
downsampling. However, as for remote sensing images, some unexpected factors
can cause the blurred visual performance, like weather factors, orbit altitude,
etc. Blind SR methods are proposed to deal with various degradations. There are
two main challenges of blind SR in RSIs: 1) the accurate estimation of
degradation kernels; 2) realistic image generation for this ill-posed problem.
To rise to the challenge, we propose a novel blind SR framework based on dual
conditional denoising diffusion probabilistic models (DDSR). In our work, we
introduce conditional denoising diffusion probabilistic models (DDPM) from two
aspects: the kernel estimation process and the reconstruction process, termed
the dual-diffusion. For the kernel estimation process, conditioned on
low-resolution (LR) images, a new DDPM-based kernel predictor is constructed by
studying the invertible mapping between the kernel distribution and the latent
distribution. For the reconstruction process, regarding the predicted
degradation kernels and LR images as conditional information, we construct a
DDPM-based reconstructor to learn the mapping from LR images to HR images.
Comprehensive experiments show the superiority of our proposal compared with
SOTA blind SR methods. Source Code is available at
https://github.com/Lincoln20030413/DDSR
|
[
"eess.IV",
"cs.CV"
] | false |
2305.12242
|
2023-05-20T17:24:06Z
|
Comparative Analysis of Deep Learning Models for Brand Logo
Classification in Real-World Scenarios
|
[
"Qimao Yang",
"Huili Chen",
"Qiwei Dong"
] |
This report presents a comprehensive study on deep learning models for brand
logo classification in real-world scenarios. The dataset contains 3,717 labeled
images of logos from ten prominent brands. Two types of models, Convolutional
Neural Networks (CNN) and Vision Transformer (ViT), were evaluated for their
performance. The ViT model, DaViT small, achieved the highest accuracy of
99.60%, while the DenseNet29 achieved the fastest inference speed of 366.62
FPS. The findings suggest that the DaViT model is a suitable choice for offline
applications due to its superior accuracy. This study demonstrates the
practical application of deep learning in brand logo classification tasks.
|
[
"cs.CV",
"cs.LG"
] | false |
2305.12248
|
2023-05-20T17:38:44Z
|
Brain encoding models based on multimodal transformers can transfer
across language and vision
|
[
"Jerry Tang",
"Meng Du",
"Vy A. Vo",
"Vasudev Lal",
"Alexander G. Huth"
] |
Encoding models have been used to assess how the human brain represents
concepts in language and vision. While language and vision rely on similar
concept representations, current encoding models are typically trained and
tested on brain responses to each modality in isolation. Recent advances in
multimodal pretraining have produced transformers that can extract aligned
representations of concepts in language and vision. In this work, we used
representations from multimodal transformers to train encoding models that can
transfer across fMRI responses to stories and movies. We found that encoding
models trained on brain responses to one modality can successfully predict
brain responses to the other modality, particularly in cortical regions that
represent conceptual meaning. Further analysis of these encoding models
revealed shared semantic dimensions that underlie concept representations in
language and vision. Comparing encoding models trained using representations
from multimodal and unimodal transformers, we found that multimodal
transformers learn more aligned representations of concepts in language and
vision. Our results demonstrate how multimodal transformers can provide
insights into the brain's capacity for multimodal processing.
|
[
"cs.CL",
"cs.CV"
] | false |
2305.12068
|
2023-05-20T03:08:42Z
|
Technical outlier detection via convolutional variational autoencoder
for the ADMANI breast mammogram dataset
|
[
"Hui Li",
"Carlos A. Pena Solorzano",
"Susan Wei",
"Davis J. McCarthy"
] |
The ADMANI datasets (annotated digital mammograms and associated non-image
datasets) from the Transforming Breast Cancer Screening with AI programme
(BRAIx) run by BreastScreen Victoria in Australia are multi-centre, large
scale, clinically curated, real-world databases. The datasets are expected to
aid in the development of clinically relevant Artificial Intelligence (AI)
algorithms for breast cancer detection, early diagnosis, and other
applications. To ensure high data quality, technical outliers must be removed
before any downstream algorithm development. As a first step, we randomly
select 30,000 individual mammograms and use Convolutional Variational
Autoencoder (CVAE), a deep generative neural network, to detect outliers. CVAE
is expected to detect all sorts of outliers, although its detection performance
differs among different types of outliers. Traditional image processing
techniques such as erosion and pectoral muscle analysis can compensate for the
poor performance of CVAE in certain outlier types. We identify seven types of
technical outliers: implant, pacemaker, cardiac loop recorder, improper
radiography, atypical lesion/calcification, incorrect exposure parameter and
improper placement. The outlier recall rate on the test set is 61% when CVAE,
erosion, and pectoral muscle analysis each select the top 1% of images, ranked
in ascending or descending order by image outlier score under each detection
method, and 83% when each selects the top 5% of images. This study offers
an overview of technical outliers in the ADMANI dataset and suggests future
directions to improve outlier detection effectiveness.
|
[
"eess.IV",
"cs.AI",
"cs.CV"
] | false |
2305.12218
|
2023-05-20T15:48:47Z
|
Text-Video Retrieval with Disentangled Conceptualization and Set-to-Set
Alignment
|
[
"Peng Jin",
"Hao Li",
"Zesen Cheng",
"Jinfa Huang",
"Zhennan Wang",
"Li Yuan",
"Chang Liu",
"Jie Chen"
] |
Text-video retrieval is a challenging cross-modal task, which aims to align
visual entities with natural language descriptions. Current methods either fail
to leverage the local details or are computationally expensive. What's worse,
they fail to leverage the heterogeneous concepts in data. In this paper, we
propose the Disentangled Conceptualization and Set-to-set Alignment (DiCoSA) to
simulate the conceptualizing and reasoning process of human beings. For
disentangled conceptualization, we divide the coarse feature into multiple
latent factors related to semantic concepts. For set-to-set alignment, where a
set of visual concepts correspond to a set of textual concepts, we propose an
adaptive pooling method to aggregate semantic concepts to address the partial
matching. In particular, since we encode concepts independently in only a few
dimensions, DiCoSA is superior in efficiency and granularity, ensuring
fine-grained interactions using a similar computational complexity as
coarse-grained alignment. Extensive experiments on five datasets, including
MSR-VTT, LSMDC, MSVD, ActivityNet, and DiDeMo, demonstrate that our method
outperforms the existing state-of-the-art methods.
|
[
"cs.CV",
"cs.AI",
"cs.IR"
] | false |
2305.12231
|
2023-05-20T16:50:45Z
|
Bi-VLGM : Bi-Level Class-Severity-Aware Vision-Language Graph Matching
for Text Guided Medical Image Segmentation
|
[
"Chen Wenting",
"Liu Jie",
"Yuan Yixuan"
] |
Medical reports with substantial information can be naturally complementary
to medical images for computer vision tasks, and the modality gap between
vision and language can be solved by vision-language matching (VLM). However,
current vision-language models distort the intra-modal relation and mainly
include class information in prompt learning, which is insufficient for the
segmentation task. In this paper, we introduce a Bi-level class-severity-aware
Vision-Language Graph Matching (Bi-VLGM) for text guided medical image
segmentation, composed of a word-level VLGM module and a sentence-level VLGM
module, to exploit the class-severity-aware relation among visual-textual
features. In word-level VLGM, to mitigate the distorted intra-modal relation
during VLM, we reformulate VLM as a graph matching problem and introduce a
vision-language graph matching (VLGM) to exploit the high-order relation among
visual-textual features. Then, we perform VLGM between the local features for
each class region and class-aware prompts to bridge their gap. In
sentence-level VLGM, to provide disease severity information for segmentation
task, we introduce a severity-aware prompting to quantify the severity level of
retinal lesions, and perform VLGM between the global features and the
severity-aware prompts. By exploiting the relation between the local (global)
and class (severity) features, the segmentation model can selectively learn the
class-aware and severity-aware information to promote performance. Extensive
experiments prove the effectiveness of our method and its superiority to
existing methods. Source code is to be released.
|
[
"eess.IV",
"cs.CV",
"cs.LG"
] | false |
2305.12092
|
2023-05-20T04:50:20Z
|
ESCOXLM-R: Multilingual Taxonomy-driven Pre-training for the Job Market
Domain
|
[
"Mike Zhang",
"Rob van der Goot",
"Barbara Plank"
] |
The increasing number of benchmarks for Natural Language Processing (NLP)
tasks in the computational job market domain highlights the demand for methods
that can handle job-related tasks such as skill extraction, skill
classification, job title classification, and de-identification. While some
approaches have been developed that are specific to the job market domain,
there is a lack of generalized, multilingual models and benchmarks for these
tasks. In this study, we introduce a language model called ESCOXLM-R, based on
XLM-R, which uses domain-adaptive pre-training on the European Skills,
Competences, Qualifications and Occupations (ESCO) taxonomy, covering 27
languages. The pre-training objectives for ESCOXLM-R include dynamic masked
language modeling and a novel additional objective for inducing multilingual
taxonomical ESCO relations. We comprehensively evaluate the performance of
ESCOXLM-R on 6 sequence labeling and 3 classification tasks in 4 languages and
find that it achieves state-of-the-art results on 6 out of 9 datasets. Our
analysis reveals that ESCOXLM-R performs better on short spans and outperforms
XLM-R on entity-level and surface-level span-F1, likely due to ESCO containing
short skill and occupation titles, and encoding information on the
entity-level.
|
[
"cs.CL"
] | false |
2305.12096
|
2023-05-20T05:20:37Z
|
Can NLP Models Correctly Reason Over Contexts that Break the Common
Assumptions?
|
[
"Neeraj Varshney",
"Mihir Parmar",
"Nisarg Patel",
"Divij Handa",
"Sayantan Sarkar",
"Man Luo",
"Chitta Baral"
] |
Pre-training on large corpora of text enables the language models to acquire
a vast amount of factual and commonsense knowledge which allows them to achieve
remarkable performance on a variety of language understanding tasks. They
typically acquire this knowledge by learning from the pre-training text and
capturing certain patterns from it. However, real-world settings often present
scenarios that do not abide by these patterns i.e. scenarios that break the
common assumptions. Can state-of-the-art NLP models correctly reason over the
contexts of such scenarios?
Addressing the above question, in this paper, we investigate the ability of
models to correctly reason over contexts that break the common assumptions. To
this end, we first systematically create evaluation data in which each data
instance consists of (a) a common assumption, (b) a context that follows the
assumption, (c) a context that breaks the assumption, and (d) questions based
on the contexts. Then, through evaluations on multiple models including GPT-3
and Flan T5, we show that while doing fairly well on contexts that follow the
common assumptions, the models struggle to correctly reason over contexts that
break those assumptions. Specifically, the performance gap is as high as 20%
absolute points. Furthermore, we thoroughly analyze these results revealing
several interesting findings. We believe our work and findings will encourage
and facilitate further research in developing more robust models that can also
reliably reason over contexts that break the common assumptions. Data is
available at \url{https://github.com/nrjvarshney/break_the_common_assumptions}.
|
[
"cs.CL"
] | false |
2305.12123
|
2023-05-20T07:02:27Z
|
Modeling the Q-Diversity in a Min-max Play Game for Robust Optimization
|
[
"Ting Wu",
"Rui Zheng",
"Tao Gui",
"Qi Zhang",
"Xuanjing Huang"
] |
Models trained with empirical risk minimization (ERM) are revealed to easily
rely on spurious correlations, resulting in poor generalization. Group
distributionally robust optimization (group DRO) can alleviate this problem by
minimizing the worst-case loss over pre-defined groups. While promising, in
practice factors like expensive annotations and privacy preclude the
availability of group labels. More crucially, when taking a closer look at the
failure modes of out-of-distribution generalization, the typical procedure of
reweighting in group DRO loses efficiency. Motivated by these limitations, in this
work, we reformulate the group DRO framework by proposing Q-Diversity.
Characterized by an interactive training mode, Q-Diversity relaxes the group
identification from annotation into direct parameterization. Furthermore, a
novel mixing strategy across groups is presented to diversify the
under-represented groups. In a series of experiments on both synthetic and
real-world text classification tasks, results demonstrate that Q-Diversity can
consistently improve worst-case accuracy under different distributional shifts,
outperforming state-of-the-art alternatives.
|
[
"cs.CL"
] | false |
2305.12199
|
2023-05-20T14:13:08Z
|
VNHSGE: VietNamese High School Graduation Examination Dataset for Large
Language Models
|
[
"Xuan-Quy Dao",
"Ngoc-Bich Le",
"The-Duy Vo",
"Xuan-Dung Phan",
"Bac-Bien Ngo",
"Van-Tien Nguyen",
"Thi-My-Thanh Nguyen",
"Hong-Phuoc Nguyen"
] |
The VNHSGE (VietNamese High School Graduation Examination) dataset, developed
exclusively for evaluating large language models (LLMs), is introduced in this
article. The dataset, which covers nine subjects, was generated from the
Vietnamese National High School Graduation Examination and comparable tests.
It includes 300 literary essays and over 19,000 multiple-choice questions on a
range of topics. The dataset assesses LLMs in
multitasking situations such as question answering, text generation, reading
comprehension, visual question answering, and more by including both textual
data and accompanying images. Using ChatGPT and BingChat, we evaluated LLMs on
the VNHSGE dataset and contrasted their performance with that of Vietnamese
students to see how well they performed. The results show that ChatGPT and
BingChat both perform at a human level in a number of areas, including
literature, English, history, geography, and civics education. However, they
still have room to grow, especially in the areas of mathematics, physics,
chemistry, and biology. The VNHSGE dataset seeks to provide an adequate
benchmark for assessing the abilities of LLMs with its wide-ranging coverage
and variety of activities. We intend to promote future developments in the
creation of LLMs by making this dataset available to the scientific community,
especially in resolving LLMs' limits in disciplines involving mathematics and
the natural sciences.
|
[
"cs.CL"
] | false |
2305.12209
|
2023-05-20T15:12:25Z
|
Self-Distillation with Meta Learning for Knowledge Graph Completion
|
[
"Yunshui Li",
"Junhao Liu",
"Chengming Li",
"Min Yang"
] |
In this paper, we propose a self-distillation framework with meta
learning (MetaSD) for knowledge graph completion with dynamic pruning, which
aims to learn compressed graph embeddings and tackle the long-tail samples.
Specifically, we first propose a dynamic pruning technique to obtain a small
pruned model from a large source model, where the pruning mask of the pruned
model could be updated adaptively per epoch after the model weights are
updated. The pruned model is supposed to be more sensitive to
difficult-to-memorize samples (e.g., long-tail samples) than the source model.
Then, we propose a one-step meta self-distillation method for distilling
comprehensive knowledge from the source model to the pruned model, where the
two models co-evolve in a dynamic manner during training. In particular, we
exploit the performance of the pruned model, which is trained alongside the
source model in one iteration, to improve the source model's knowledge transfer
ability for the next iteration via meta learning. Extensive experiments show
that MetaSD achieves competitive performance compared to strong baselines,
while being 10x smaller than baselines.
|
[
"cs.CL"
] | false |
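The dynamic pruning step described above, a mask recomputed each epoch after the weight update, can be sketched in spirit as magnitude pruning with a refreshed mask. The magnitude criterion, the 50% sparsity level, and the stand-in gradient update below are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def magnitude_mask(weights, sparsity=0.5):
    """Keep the largest-magnitude weights; zero out the rest."""
    threshold = np.quantile(np.abs(weights).ravel(), sparsity)
    return (np.abs(weights) >= threshold).astype(weights.dtype)

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))
for epoch in range(3):
    W -= 0.1 * rng.normal(size=W.shape)     # stand-in for a gradient update
    mask = magnitude_mask(W, sparsity=0.5)  # mask refreshed after the update
    W_pruned = W * mask                     # pruned "student" weights
    print(epoch, int(mask.sum()), "weights kept")
```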
2305.12217
|
2023-05-20T15:47:59Z
|
PromptNER: A Prompting Method for Few-shot Named Entity Recognition via
k Nearest Neighbor Search
|
[
"Mozhi Zhang",
"Hang Yan",
"Yaqian Zhou",
"Xipeng Qiu"
] |
Few-shot Named Entity Recognition (NER) is a task aiming to identify named
entities via limited annotated samples. Recently, prototypical networks have
shown promising performance in few-shot NER. Most prototypical networks
utilize the entities from the support set to construct label prototypes and use
the query set to compute span-level similarities and optimize these label
prototype representations. However, these methods are usually unsuitable for
fine-tuning in the target domain, where only the support set is available. In
this paper, we propose PromptNER: a novel prompting method for few-shot NER via
k nearest neighbor search. We use prompts that contain entity category
information to construct label prototypes, which enables our model to fine-tune
with only the support set. Our approach achieves excellent transfer learning
ability, and extensive experiments on the Few-NERD and CrossNER datasets
demonstrate that our model achieves superior performance over state-of-the-art
methods.
|
[
"cs.CL"
] | false |
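A bare-bones rendering of the prototype-plus-nearest-neighbour step described above. The embeddings and labels are synthetic, and the real method derives prototypes from prompts containing entity-category information rather than from raw support embeddings alone.

```python
import numpy as np

def build_prototypes(span_embs, labels):
    """Average the support-set span embeddings per entity label."""
    return {c: span_embs[labels == c].mean(axis=0) for c in np.unique(labels)}

def knn_label(query, prototypes):
    """Assign the label of the nearest prototype (cosine similarity)."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return max(prototypes, key=lambda c: cos(query, prototypes[c]))

rng = np.random.default_rng(0)
support = np.vstack([rng.normal(0, 1, (5, 8)), rng.normal(4, 1, (5, 8))])
labels = np.array(["PER"] * 5 + ["LOC"] * 5)
protos = build_prototypes(support, labels)
print(knn_label(rng.normal(4, 1, 8), protos))  # most likely "LOC"
```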
2305.12228
|
2023-05-20T16:41:48Z
|
Dynamic Transformers Provide a False Sense of Efficiency
|
[
"Yiming Chen",
"Simin Chen",
"Zexin Li",
"Wei Yang",
"Cong Liu",
"Robby T. Tan",
"Haizhou Li"
] |
Despite much success in natural language processing (NLP), pre-trained
language models typically lead to a high computational cost during inference.
Multi-exit is a mainstream approach to address this issue by making a trade-off
between efficiency and accuracy, where the saving of computation comes from an
early exit. However, whether such saving from early-exiting is robust remains
unknown. Motivated by this, we first show that directly adapting existing
adversarial attack approaches targeting model accuracy cannot significantly
reduce inference efficiency. To this end, we propose SAME, a simple yet
effective slowdown attack framework specially tailored to reduce the efficiency
of multi-exit
models. By leveraging the multi-exit models' design characteristics, we utilize
all internal predictions to guide the adversarial sample generation instead of
merely considering the final prediction. Experiments on the GLUE benchmark show
that SAME can effectively diminish the efficiency gain of various multi-exit
models by 80% on average, convincingly validating its effectiveness and
generalization ability.
|
[
"cs.CL"
] | false |
2305.12233
|
2023-05-20T16:52:30Z
|
A Measure of Explanatory Effectiveness
|
[
"Dylan Cope",
"Peter McBurney"
] |
In most conversations about explanation and AI, the recipient of the
explanation (the explainee) is suspiciously absent, despite the problem being
ultimately communicative in nature. We pose the problem `explaining AI systems'
in terms of a two-player cooperative game in which each agent seeks to maximise
our proposed measure of explanatory effectiveness. This measure serves as a
foundation for the automated assessment of explanations, in terms of the
effects that any given action in the game has on the internal state of the
explainee.
|
[
"cs.CL"
] | false |
2305.12276
|
2023-05-20T20:16:57Z
|
Analogy in Contact: Modeling Maltese Plural Inflection
|
[
"Sara Court",
"Andrea D. Sims",
"Micha Elsner"
] |
Maltese is often described as having a hybrid morphological system resulting
from extensive contact between Semitic and Romance language varieties. Such a
designation reflects an etymological divide as much as it does a larger
tradition in the literature to consider concatenative and non-concatenative
morphological patterns as distinct in the language architecture. Using a
combination of computational modeling and information theoretic methods, we
quantify the extent to which the phonology and etymology of a Maltese singular
noun may predict the morphological process (affixal vs. templatic) as well as
the specific plural allomorph (affix or template) relating a singular noun to
its associated plural form(s) in the lexicon. The results indicate phonological
pressures shape the organization of the Maltese lexicon with predictive power
that extends beyond that of a word's etymology, in line with analogical
theories of language change in contact.
|
[
"cs.CL"
] | false |
2305.12289
|
2023-05-20T21:52:24Z
|
Revisiting the Architectures like Pointer Networks to Efficiently
Improve the Next Word Distribution, Summarization Factuality, and Beyond
|
[
"Haw-Shiuan Chang",
"Zonghai Yao",
"Alolika Gon",
"Hong Yu",
"Andrew McCallum"
] |
Is the output softmax layer, which is adopted by most language models (LMs),
always the best way to compute the next word probability? Given so many
attention layers in a modern transformer-based LM, are the pointer networks
redundant nowadays? In this study, we discover that the answers to both
questions are no. This is because the softmax bottleneck sometimes prevents the
LMs from predicting the desired distribution and the pointer networks can be
used to break the bottleneck efficiently. Based on the finding, we propose
several softmax alternatives by simplifying the pointer networks and
accelerating the word-by-word rerankers. In GPT-2, our proposals are
significantly better and more efficient than mixture of softmax, a
state-of-the-art softmax alternative. In summarization experiments, without
significantly decreasing its training/testing speed, our best method based on
T5-Small improves the factCC score by 2 points on the CNN/DM and XSUM datasets,
and improves MAUVE scores by 30% on the paragraph-level BookSum dataset.
|
[
"cs.CL"
] | false |
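One standard way a pointer mechanism can break the softmax bottleneck, as argued above, is to mix the vocabulary softmax with a copy distribution over context tokens. The sketch below is the generic pointer-generator recipe with an illustrative fixed gate, not the paper's specific softmax alternatives.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def pointer_mixture(vocab_logits, attn_scores, context_ids, vocab_size, gate):
    """Mix the softmax over the vocabulary with a copy distribution
    that points back at tokens already seen in the context."""
    p_vocab = softmax(vocab_logits)
    p_attn = softmax(attn_scores)
    p_copy = np.zeros(vocab_size)
    np.add.at(p_copy, context_ids, p_attn)  # scatter attention onto vocab ids
    return gate * p_vocab + (1.0 - gate) * p_copy

rng = np.random.default_rng(0)
V = 10
dist = pointer_mixture(rng.normal(size=V), rng.normal(size=4),
                       np.array([2, 5, 5, 7]), V, gate=0.7)
print(dist.sum())  # 1.0: still a valid next-token distribution
```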
2305.12129
|
2023-05-20T07:30:55Z
|
Lifting the Curse of Capacity Gap in Distilling Language Models
|
[
"Chen Zhang",
"Yang Yang",
"Jiahao Liu",
"Jingang Wang",
"Yunsen Xian",
"Benyou Wang",
"Dawei Song"
] |
Pretrained language models (LMs) have shown compelling performance on various
downstream tasks, but unfortunately they require a tremendous amount of
inference compute. Knowledge distillation finds a path to compress LMs to small
ones with a teacher-student paradigm. However, when the capacity gap between
the teacher and the student is large, a curse of capacity gap appears, invoking
a deficiency in distilling LMs. While a few studies have been carried out to
fill the gap, the curse is not yet well tackled. In this paper, we aim at
lifting the curse of capacity gap via enlarging the capacity of the student
without notably increasing the inference compute. Largely motivated by sparse
activation regime of mixture of experts (MoE), we propose a mixture of minimal
experts (MiniMoE), which imposes extra parameters to the student but introduces
almost no additional inference compute. Experimental results on GLUE and CoNLL
demonstrate the curse of capacity gap is lifted by the magic of MiniMoE to a
large extent. MiniMoE also achieves the state-of-the-art performance at small
FLOPs compared with a range of competitive baselines. With a compression rate
as much as $\sim$50$\times$, MiniMoE preserves $\sim$95\% GLUE score of the
teacher.
|
[
"cs.CL",
"cs.LG"
] | false |
2305.12190
|
2023-05-20T13:28:22Z
|
Paragraph-level Citation Recommendation based on Topic Sentences as
Queries
|
[
"Zoran Medić",
"Jan Šnajder"
] |
Citation recommendation (CR) models may help authors find relevant articles
at various stages of the paper writing process. Most research has dealt with
either global CR, which produces general recommendations suitable for the
initial writing stage, or local CR, which produces specific recommendations
more fitting for the final writing stages. We propose the task of
paragraph-level CR as a middle ground between the two approaches, where the
paragraph's topic sentence is taken as input and recommendations for citing
within the paragraph are produced at the output. We propose a model for this
task, fine-tune it using the quadruplet loss on the dataset of ACL papers, and
show improvements over the baselines.
|
[
"cs.IR",
"cs.CL"
] | false |
2305.12281
|
2023-05-20T21:15:19Z
|
Lifelong Language Pretraining with Distribution-Specialized Experts
|
[
"Wuyang Chen",
"Yanqi Zhou",
"Nan Du",
"Yanping Huang",
"James Laudon",
"Zhifeng Chen",
"Claire Cu"
] |
Pretraining on a large-scale corpus has become a standard method to build
general language models (LMs). Adapting a model to new data distributions
targeting different downstream tasks poses significant challenges. Naive
fine-tuning may incur catastrophic forgetting when the over-parameterized LMs
overfit the new data but fail to preserve the pretrained features. Lifelong
learning (LLL) aims to enable information systems to learn from a continuous
data stream across time. However, most prior work modifies the training recipe
assuming a static fixed network architecture. We find that additional model
capacity and proper regularization are key elements to achieving strong LLL
performance. Thus, we propose Lifelong-MoE, an extensible MoE
(Mixture-of-Experts) architecture that dynamically adds model capacity by
adding experts with regularized pretraining. Our results show that by only
introducing a limited number of extra experts while keeping the computation
cost constant, our model can steadily adapt to data distribution shifts while
preserving the previous knowledge. Compared to existing lifelong learning
approaches, Lifelong-MoE achieves better few-shot performance on 19 downstream
NLP tasks.
|
[
"cs.CL",
"cs.LG"
] | false |
2305.12090
|
2023-05-20T04:32:59Z
|
UP5: Unbiased Foundation Model for Fairness-aware Recommendation
|
[
"Wenyue Hua",
"Yingqiang Ge",
"Shuyuan Xu",
"Jianchao Ji",
"Yongfeng Zhang"
] |
Recent advancements in foundation models such as large language models (LLM)
have propelled them to the forefront of recommender systems (RS). Moreover,
fairness in RS is critical since many users apply it for decision-making and
demand fulfillment. However, at present, there is a lack of understanding
regarding the level of fairness exhibited by recommendation foundation models
and the appropriate methods for equitably treating different groups of users in
foundation models. In this paper, we focus on the user-side unfairness problem
and show through a thorough examination that there is unfairness involved in
LLMs that leads to unfair recommendation results. To eliminate bias from LLMs
for
fairness-aware recommendation, we introduce a novel Unbiased P5 (UP5)
foundation model based on Counterfactually-Fair-Prompting (CFP) techniques. CFP
includes two sub-modules: a personalized prefix prompt that enhances fairness
with respect to individual sensitive attributes, and a Prompt Mixture that
integrates multiple counterfactually-fair prompts for a set of sensitive
attributes. Experiments are conducted on two real-world datasets, MovieLens-1M
and Insurance, and results are compared with both matching-based and
sequential-based fairness-aware recommendation models. The results show that
UP5 achieves better recommendation performance and meanwhile exhibits a high
level of fairness.
|
[
"cs.IR",
"cs.AI",
"cs.CL",
"cs.LG"
] | false |
2305.12196
|
2023-05-20T14:00:08Z
|
Experimental results from applying GPT-4 to an unpublished formal
language
|
[
"Gregor vom Scheidt"
] |
Can large language models be used to complete mathematical tasks that are
traditionally performed either manually or with the aid of theorem provers? To
answer this question, a state-of-the-art system, GPT-4, was provided with a
concise natural language specification for a previously unpublished formal
system and asked to complete a number of tasks, from stating function and type
definitions to proving simple theorems and verifying user-supplied proofs. The
system completed all tasks successfully, showed extensive domain knowledge,
invented helpful new syntax and semantics, and exhibited generalization and
inference abilities. So the answer seems to be: yes.
|
[
"cs.CL",
"cs.LO",
"math.LO",
"03B35",
"F.4.1; I.2.3; F.4.3"
] | false |
2305.12257
|
2023-05-20T18:20:39Z
|
SEntFiN 1.0: Entity-Aware Sentiment Analysis for Financial News
|
[
"Ankur Sinha",
"Satishwar Kedas",
"Rishu Kumar",
"Pekka Malo"
] |
Fine-grained financial sentiment analysis on news headlines is a challenging
task requiring human-annotated datasets to achieve high performance. Limited
studies have tried to address the sentiment extraction task in a setting where
multiple entities are present in a news headline. In an effort to further
research in this area, we make publicly available SEntFiN 1.0, a
human-annotated dataset of 10,753 news headlines with entity-sentiment
annotations, of which 2,847 headlines contain multiple entities, often with
conflicting sentiments. We augment our dataset with a database of over 1,000
financial entities and their various representations in news media amounting to
over 5,000 phrases. We propose a framework that enables the extraction of
entity-relevant sentiments using a feature-based approach rather than an
expression-based approach. For sentiment extraction, we utilize 12 different
learning schemes utilizing lexicon-based and pre-trained sentence
representations and five classification approaches. Our experiments indicate
that lexicon-based n-gram ensembles are on par with, or better than, pre-trained word
embedding schemes such as GloVe. Overall, RoBERTa and finBERT (domain-specific
BERT) achieve the highest average accuracy of 94.29% and F1-score of 93.27%.
Further, using over 210,000 entity-sentiment predictions, we validate the
economic effect of sentiments on aggregate market movements over a long
duration.
|
[
"cs.CL",
"cs.AI",
"cs.LG",
"I.2.7"
] | false |
2305.12268
|
2023-05-20T19:17:10Z
|
Patton: Language Model Pretraining on Text-Rich Networks
|
[
"Bowen Jin",
"Wentao Zhang",
"Yu Zhang",
"Yu Meng",
"Xinyang Zhang",
"Qi Zhu",
"Jiawei Han"
] |
A real-world text corpus sometimes comprises not only text documents but also
semantic links between them (e.g., academic papers in a bibliographic network
are linked by citations and co-authorships). Text documents and semantic
connections form a text-rich network, which empowers a wide range of downstream
tasks such as classification and retrieval. However, pretraining methods for
such structures are still lacking, making it difficult to build one generic
model that can be adapted to various tasks on text-rich networks. Current
pretraining objectives, such as masked language modeling, purely model texts
and do not take inter-document structure information into consideration. To
this end, we propose our PretrAining on TexT-Rich NetwOrk framework Patton.
Patton includes two pretraining strategies: network-contextualized masked
language modeling and masked node prediction, to capture the inherent
dependency between textual attributes and network structure. We conduct
experiments on four downstream tasks in five datasets from both academic and
e-commerce domains, where Patton outperforms baselines significantly and
consistently.
|
[
"cs.CL",
"cs.AI",
"cs.LG"
] | false |
2305.12272
|
2023-05-20T19:29:47Z
|
Autoregressive Modeling with Lookahead Attention
|
[
"Li Du",
"Hongyuan Mei",
"Jason Eisner"
] |
To predict the next token, autoregressive models ordinarily examine the past.
Could they also benefit from examining hypothetical futures? We consider a
novel Transformer-based autoregressive architecture that estimates the
next-token distribution by extrapolating multiple continuations of the past,
according to some proposal distribution, and attending to these extended
strings. This architecture draws insights from classical AI systems such as
board game players: when making a local decision, a policy may benefit from
exploring possible future trajectories and analyzing them. On multiple tasks
including morphological inflection and Boolean satisfiability, our lookahead
model is able to outperform the ordinary Transformer model of comparable size.
However, on some tasks, it appears to be benefiting from the extra computation
without actually using the lookahead information. We discuss possible variant
architectures as well as future speedups.
|
[
"cs.CL",
"cs.AI",
"cs.LG"
] | false |
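A crude external rendering of the lookahead idea above: sample several continuations of the prefix from a proposal distribution and form the next-token distribution from the first token of each rollout, weighted by a score of the whole extended string. The proposal, scorer, and weighting below are invented stand-ins; the paper's architecture attends to the extended strings inside the Transformer rather than reweighting samples externally.

```python
import numpy as np

def lookahead_next_token(prefix, proposal, scorer, vocab_size,
                         n_rollouts=16, depth=3, rng=None):
    """Estimate p(next token | prefix) by scoring sampled futures."""
    rng = rng or np.random.default_rng(0)
    counts = np.zeros(vocab_size)
    for _ in range(n_rollouts):
        rollout = [proposal(prefix, rng) for _ in range(depth)]
        counts[rollout[0]] += np.exp(scorer(prefix + rollout))  # weight rollout
    return counts / counts.sum()

# Toy proposal and scorer over a 5-token vocabulary.
proposal = lambda seq, rng: int(rng.integers(5))
scorer = lambda seq: -0.5 * len(set(seq))   # arbitrary stand-in score
print(lookahead_next_token([1, 2], proposal, scorer, vocab_size=5))
```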
2305.12301
|
2023-05-20T23:55:55Z
|
Sentence Embedder Guided Utterance Encoder (SEGUE) for Spoken Language
Understanding
|
[
"Yi Xuan Tan",
"Navonil Majumder",
"Soujanya Poria"
] |
The pre-trained speech encoder wav2vec 2.0 performs very well on various
spoken language understanding (SLU) tasks. However, on many tasks, it trails
behind text encoders with textual input. To improve the understanding
capability of SLU encoders, various studies have used knowledge distillation to
transfer knowledge from natural language understanding (NLU) encoders. We use a
very simple method of distilling from a textual sentence embedder directly into
wav2vec 2.0 as pre-training, utilizing paired audio-text datasets. We observed
that this method is indeed capable of improving SLU task performance in
fine-tuned settings, as well as full-data and few-shot transfer on a frozen
encoder. However, the model performs worse on certain tasks, highlighting the
strengths and weaknesses of our approach.
|
[
"cs.CL",
"cs.AI",
"cs.SD",
"eess.AS"
] | false |
2305.18315
|
2023-05-20T00:48:52Z
|
CDJUR-BR -- A Golden Collection of Legal Document from Brazilian Justice
with Fine-Grained Named Entities
|
[
"Antonio Mauricio",
"Vladia Pinheiro",
"Vasco Furtado",
"João Araújo Monteiro Neto",
"Francisco das Chagas Jucá Bomfim",
"André Câmara Ferreira da Costa",
"Raquel Silveira",
"Nilsiton Aragão"
] |
A basic task for most Legal Artificial Intelligence (Legal AI) applications
is Named Entity Recognition (NER). However, texts produced in the context of
legal practice make references to entities that are not trivially recognized by
the currently available NERs. There is a lack of categorization of legislation,
jurisprudence, evidence, penalties, the roles of people in a legal process
(judge, lawyer, victim, defendant, witness), types of locations (crime
location, defendant's address), etc. In this sense, there is still a need for a
robust golden collection, annotated with fine-grained entities of the legal
domain, and which covers various documents of a legal process, such as
petitions, inquiries, complaints, decisions and sentences. In this article, we
describe the development of the Golden Collection of the Brazilian Judiciary
(CDJUR-BR) contemplating a set of fine-grained named entities that have been
annotated by experts in legal documents. The creation of CDJUR-BR followed its
own methodology that aimed to attribute a character of comprehensiveness and
robustness. Together with the CDJUR-BR repository we provided a NER based on
the BERT model and trained with the CDJUR-BR, whose results indicated the
prevalence of the CDJUR-BR.
|
[
"cs.CL",
"cs.AI",
"cs.IR",
"cs.LG"
] | false |
2305.12087
|
2023-05-20T04:11:00Z
|
Semi-Supervised Graph Imbalanced Regression
|
[
"Gang Liu",
"Tong Zhao",
"Eric Inae",
"Tengfei Luo",
"Meng Jiang"
] |
Data imbalance is easily found in annotated data when the observations of
certain continuous label values are difficult to collect for regression tasks.
When it comes to molecule and polymer property predictions, the annotated
graph datasets are often small because labeling them requires expensive
equipment and effort. To address the lack of examples of rare label values in
graph regression tasks, we propose a semi-supervised framework to progressively
balance training data and reduce model bias via self-training. The training
data balance is achieved by (1) pseudo-labeling more graphs for
under-represented labels with a novel regression confidence measurement and (2)
augmenting graph examples in latent space for remaining rare labels after data
balancing with pseudo-labels. The former is to identify quality examples from
unlabeled data whose labels are confidently predicted and sample a subset of
them with a reverse distribution from the imbalanced annotated data. The latter
collaborates with the former to target a perfect balance using a novel
label-anchored mixup algorithm. We perform experiments in seven regression
tasks on graph datasets. Results demonstrate that the proposed framework
significantly reduces the error of predicted graph properties, especially in
under-represented label areas.
|
[
"cs.LG"
] | false |
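The label-anchored mixup step above can be illustrated schematically: interpolate a rare-label example with an anchor example in latent space and mix the regression labels with the same coefficient. The Beta parameters, anchor choice, and all names here are illustrative, not the paper's algorithm.

```python
import numpy as np

def label_anchored_mixup(z_rare, y_rare, z_anchor, y_anchor, rng):
    """Interpolate a rare-label latent vector with an anchor example,
    mixing the regression labels with the same coefficient."""
    lam = rng.beta(2.0, 2.0)
    z_new = lam * z_rare + (1.0 - lam) * z_anchor
    y_new = lam * y_rare + (1.0 - lam) * y_anchor
    return z_new, y_new

rng = np.random.default_rng(0)
z_new, y_new = label_anchored_mixup(rng.normal(size=16), 0.95,
                                    rng.normal(size=16), 0.60, rng)
print(round(y_new, 3))  # a synthetic label between 0.60 and 0.95
```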
2305.12109
|
2023-05-20T06:06:44Z
|
Meta Neural Coordination
|
[
"Yuwei Sun"
] |
Meta-learning aims to develop algorithms that can learn from other learning
algorithms to adapt to new and changing environments. This requires a model of
how other learning algorithms operate and perform in different contexts, which
is similar to representing and reasoning about mental states in the theory of
mind. Furthermore, the problem of uncertainty in the predictions of
conventional deep neural networks highlights the partial predictability of the
world, requiring the representation of multiple predictions simultaneously.
This is facilitated by coordination among neural modules, where different
modules' beliefs and desires are attributed to others. The neural coordination
among modular and decentralized neural networks is a fundamental prerequisite
for building autonomous intelligence machines that can interact flexibly and
adaptively. In this work, several pieces of evidence demonstrate a new avenue
for tackling the problems above, termed Meta Neural Coordination. We discuss
the potential advancements required to build biologically-inspired machine
intelligence, drawing from both machine learning and cognitive science
communities.
|
[
"cs.LG"
] | false |
2305.12133
|
2023-05-20T07:57:15Z
|
Loss Spike in Training Neural Networks
|
[
"Zhongwang Zhang",
"Zhi-Qin John Xu"
] |
In this work, we study the mechanism underlying loss spikes observed during
neural network training. When the training enters a region that has a
smaller-loss-as-sharper (SLAS) structure, training becomes unstable and the
loss increases exponentially once the region is too sharp, i.e., the rapid
ascent of the loss spike. The training becomes stable when it finds a flat
region. The deviation in the first eigen direction (the one with the maximum
eigenvalue of the loss Hessian, $\lambda_{\mathrm{max}}$) is found to be
dominated by low-frequency.
Since low-frequency is captured very fast (frequency principle), the rapid
descent is then observed. Inspired by our analysis of loss spikes, we revisit
the link between $\lambda_{\mathrm{max}}$ flatness and generalization. For real
datasets, low-frequency is often dominant and well-captured by both the
training data and the test data. Then, a solution with good generalization and
a solution with bad generalization can both learn low-frequency well, thus,
they have little difference in the sharpest direction. Therefore, although
$\lambda_{\mathrm{max}}$ can indicate the sharpness of the loss landscape,
deviation in its corresponding eigen direction is not responsible for the
generalization difference. We also find that loss spikes can facilitate
condensation, i.e., input weights evolve towards the same direction, which may be the
underlying mechanism for why the loss spike improves generalization, rather
than simply controlling the value of $\lambda_{\mathrm{max}}$.
|
[
"cs.LG"
] | false |
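The sharpness measure $\lambda_{\mathrm{max}}$ used above can be tracked with plain power iteration on the loss Hessian; for a real network the matrix-vector product would come from autodiff Hessian-vector products, but the small explicit-matrix sketch below shows the estimator itself.

```python
import numpy as np

def lambda_max(hessian, n_iters=100, rng=None):
    """Estimate the largest Hessian eigenvalue by power iteration."""
    rng = rng or np.random.default_rng(0)
    v = rng.normal(size=hessian.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(n_iters):
        hv = hessian @ v        # with autodiff: a Hessian-vector product
        v = hv / np.linalg.norm(hv)
    return float(v @ hessian @ v)  # Rayleigh quotient at convergence

H = np.array([[3.0, 1.0], [1.0, 2.0]])
print(lambda_max(H))  # close to the true top eigenvalue (~3.618)
```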
2305.12148
|
2023-05-20T09:27:34Z
|
Probabilistic Modeling: Proving the Lottery Ticket Hypothesis in Spiking
Neural Network
|
[
"Man Yao",
"Yuhong Chou",
"Guangshe Zhao",
"Xiawu Zheng",
"Yonghong Tian",
"Bo Xu",
"Guoqi Li"
] |
The Lottery Ticket Hypothesis (LTH) states that a randomly-initialized large
neural network contains a small sub-network (i.e., winning tickets) which, when
trained in isolation, can achieve comparable performance to the large network.
LTH opens up a new path for network pruning. Existing proofs of LTH in
Artificial Neural Networks (ANNs) are based on continuous activation functions,
such as ReLU, which satisfy the Lipschitz condition. However, these
theoretical methods are not applicable to Spiking Neural Networks (SNNs) due to
the discontinuity of the spiking function. We argue that it is possible to
extend the scope of LTH by eliminating the Lipschitz condition. Specifically,
we propose a
novel probabilistic modeling approach for spiking neurons with complicated
spatio-temporal dynamics. Then we theoretically and experimentally prove that
LTH holds in SNNs. According to our theorem, we conclude that pruning directly
in accordance with the weight size in existing SNNs is clearly not optimal. We
further design a new criterion for pruning based on our theory, which achieves
better pruning results than baseline.
|
[
"cs.LG"
] | false |
2305.12235
|
2023-05-20T16:59:27Z
|
Joining the Conversation: Towards Language Acquisition for Ad Hoc Team
Play
|
[
"Dylan Cope",
"Peter McBurney"
] |
In this paper, we propose and consider the problem of cooperative language
acquisition as a particular form of the ad hoc team play problem. We then
present a probabilistic model for inferring a speaker's intentions and a
listener's semantics from observing communications between a team of
language-users. This model builds on the assumptions that speakers are engaged
in positive signalling and listeners are exhibiting positive listening, which
is to say the messages convey hidden information to the listener, which then
causes them to change their behaviour. Further, it accounts for potential
sub-optimality in the speaker's ability to convey the right information
(according to the given task). Finally, we discuss further work for testing and
developing this framework.
|
[
"cs.LG"
] | false |
2305.12238
|
2023-05-20T17:09:44Z
|
Low-Entropy Latent Variables Hurt Out-of-Distribution Performance
|
[
"Nandi Schoots",
"Dylan Cope"
] |
We study the relationship between the entropy of intermediate representations
and a model's robustness to distributional shift. We train models consisting of
two feed-forward networks end-to-end separated by a discrete $n$-bit channel on
an unsupervised contrastive learning task. Different masking strategies are
applied after training that remove a proportion of low-entropy bits,
high-entropy bits, or randomly selected bits, and the effects on performance
are compared to the baseline accuracy with no mask. We hypothesize that the
entropy of a bit serves as a guide to its usefulness out-of-distribution (OOD).
Through experiment on three OOD datasets we demonstrate that the removal of
low-entropy bits can notably benefit OOD performance. Conversely, we find that
top-entropy masking disproportionately harms performance both in-distribution
(InD) and OOD.
|
[
"cs.LG"
] | false |
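The masking experiment described above is easy to reproduce in miniature: estimate each bit position's empirical entropy over a dataset of n-bit codes, then zero out the lowest-entropy positions. The bit statistics below are synthetic; in the paper the codes come from a trained discrete channel.

```python
import numpy as np

def bit_entropy(bits):
    """Empirical entropy of each bit position across the dataset."""
    p = bits.mean(axis=0).clip(1e-12, 1 - 1e-12)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def mask_low_entropy(bits, k):
    """Zero out the k lowest-entropy bit positions."""
    out = bits.copy()
    out[:, np.argsort(bit_entropy(bits))[:k]] = 0
    return out

rng = np.random.default_rng(0)
codes = (rng.random((1000, 8)) < np.linspace(0.05, 0.5, 8)).astype(int)
print(bit_entropy(codes).round(2))      # entropy rises across positions
masked = mask_low_entropy(codes, k=2)   # drop the two most predictable bits
print(masked[:, :2].sum())              # 0: those channels are now constant
```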
2305.12270
|
2023-05-20T19:22:40Z
|
Mitigating Catastrophic Forgetting in Task-Incremental Continual
Learning with Adaptive Classification Criterion
|
[
"Yun Luo",
"Xiaotian Lin",
"Zhen Yang",
"Fandong Meng",
"Jie Zhou",
"Yue Zhang"
] |
Task-incremental continual learning refers to continually training a model in
a sequence of tasks while overcoming the problem of catastrophic forgetting
(CF). The issue arises because the learned representations are forgotten when
learning new tasks, and the decision boundary is disrupted. Previous studies
mostly consider how to recover the representations of learned tasks, and seldom
consider adapting the decision boundary to the new representations. In this
paper, we propose a Supervised Contrastive learning framework with an adaptive
classification criterion for Continual Learning (SCCL). In our method, a
contrastive loss is used to directly learn representations for
different tasks and a limited number of data samples are saved as the
classification criterion. During inference, the saved data samples are fed into
the current model to obtain updated representations, and a k Nearest Neighbour
module is used for classification. In this way, the extensible model can solve
the learned tasks with the adaptive criterion given by the saved samples. To mitigate CF, we
further use an instance-wise relation distillation regularization term and a
memory replay module to maintain the information of previous tasks. Experiments
show that SCCL achieves state-of-the-art performance and has a stronger ability
to overcome CF compared with the classification baselines.
|
[
"cs.LG"
] | false |
2305.12060
|
2023-05-20T02:18:39Z
|
Mechanical Property Design of Bio-compatible Mg alloys using
Machine-Learning Algorithms
|
[
"Parham Valipoorsalimi",
"Yuksel Asli Sari",
"Mihriban Pekguleryuz"
] |
Magnesium alloys are attractive options for temporary bio-implants because of
their biocompatibility, controlled corrosion rate, and similarity to natural
bone in terms of stiffness and density. Nevertheless, their low mechanical
strength hinders their use as cardiovascular stents and bone substitutes. While
it is possible to engineer alloys with the desired mechanical strength,
optimizing the mechanical properties of biocompatible magnesium alloys using
conventional experimental methods is time-consuming and expensive. Therefore,
Artificial Intelligence (AI) can be leveraged to streamline the alloy design
process and reduce the required time. In this study, a machine learning model
was developed to predict the yield strength (YS) of biocompatible magnesium
alloys with an $R^2$ accuracy of 91\%. The predictive model was then validated
using the CALPHAD technique and thermodynamics calculations. Next, the
predictive model was employed as the fitness function of a genetic algorithm to
optimize the alloy composition for high-strength biocompatible magnesium
implants. As a result, two alloys were proposed and synthesized, exhibiting YS
values of 108 and 113 MPa, respectively. These values were substantially higher
than those of conventional magnesium biocompatible alloys and closer to the YS
and compressive strength of natural bone. Finally, the synthesized alloys were
subjected to microstructure analysis and mechanical property testing to
validate and evaluate the performance of the proposed AI-based alloy design
approach for creating alloys with specific properties suitable for diverse
applications.
|
[
"cond-mat.mtrl-sci",
"cs.LG"
] | false |
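The design loop sketched in the abstract, a trained property predictor serving as the fitness function of a genetic algorithm, can be illustrated generically. The surrogate `predict_ys`, the composition encoding, and all hyperparameters below are placeholders, not the paper's pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict_ys(comp):
    """Placeholder surrogate for yield strength; a trained ML model goes here."""
    return -np.sum((comp - 0.3) ** 2, axis=-1)

def genetic_search(n_elems=5, pop=40, gens=100, mut=0.05):
    # Population of candidate alloy compositions (fractions in [0, 1]).
    x = rng.random((pop, n_elems))
    for _ in range(gens):
        fit = predict_ys(x)
        parents = x[np.argsort(fit)[-pop // 2:]]   # keep the fitter half
        # Uniform crossover between random parent pairs.
        a, b = (parents[rng.integers(0, len(parents), pop)] for _ in range(2))
        mask = rng.random((pop, n_elems)) < 0.5
        x = np.where(mask, a, b)
        x += rng.normal(scale=mut, size=x.shape)   # Gaussian mutation
        x = x.clip(0, 1)
    return x[np.argmax(predict_ys(x))]

best = genetic_search()
```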
2305.12063
|
2023-05-20T02:52:02Z
|
Efficient Multimodal Neural Networks for Trigger-less Voice Assistants
|
[
"Sai Srujana Buddi",
"Utkarsh Oggy Sarawgi",
"Tashweena Heeramun",
"Karan Sawnhey",
"Ed Yanosik",
"Saravana Rathinam",
"Saurabh Adya"
] |
The adoption of multimodal interactions by Voice Assistants (VAs) is growing
rapidly to enhance human-computer interactions. Smartwatches have now
incorporated trigger-less methods of invoking VAs, such as Raise To Speak
(RTS), where the user raises their watch and speaks to VAs without an explicit
trigger. Current state-of-the-art RTS systems rely on heuristics and engineered
Finite State Machines to fuse gesture and audio data for multimodal
decision-making. However, these methods have limitations, including limited
adaptability, scalability, and induced human biases. In this work, we propose a
neural network based audio-gesture multimodal fusion system that (1) better
understands the temporal correlation between audio and gesture data, leading to
precise invocations; (2) generalizes to a wide range of environments and
scenarios; (3) is lightweight and deployable on low-power devices, such as
smartwatches, with quick launch times; and (4) improves productivity in asset
development processes.
|
[
"cs.LG",
"cs.HC"
] | false |
2305.12125
|
2023-05-20T07:18:06Z
|
A Framework for Provably Stable and Consistent Training of Deep
Feedforward Networks
|
[
"Arunselvan Ramaswamy",
"Shalabh Bhatnagar",
"Naman Saxena"
] |
We present a novel algorithm for training deep neural networks in supervised
(classification and regression) and unsupervised (reinforcement learning)
scenarios. This algorithm combines the standard stochastic gradient descent and
the gradient clipping method: the output layer is updated using clipped
gradients, while the rest of the network is updated using standard gradients.
Updating the output layer with clipped gradients stabilizes it. We show that
the remaining layers are automatically stabilized provided the neural network
is only composed of squashing (compact range) activations. We also present a
novel squashing activation function - it is obtained by modifying a Gaussian
Error Linear Unit (GELU) to have compact range - we call it Truncated GELU
(tGELU). Unlike other squashing activations, such as sigmoid, the range of
tGELU can be explicitly specified. As a consequence, the problem of vanishing
gradients that arise due to a small range, e.g., in the case of a sigmoid
activation, is eliminated. We prove that an NN composed of squashing activations
(tGELU, sigmoid, etc.), when updated using the algorithm presented herein, is
numerically stable and has consistent performance (low variance). The theory is
supported by extensive experiments. Within reinforcement learning, as a
consequence of our study, we show that target networks in Deep Q-Learning can
be omitted, greatly speeding up learning and alleviating memory requirements.
Cross-entropy based classification algorithms that suffer from high variance
issues are more consistent when trained using our framework. One symptom of
numerical instability in training is the high variance of the neural network
update values. We show, in theory and through experiments, that our algorithm
updates have low variance, and the training loss reduces in a smooth manner.
|
[
"cs.LG",
"cs.AI",
"90B05, 90C40, 90C90"
] | false |
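Two ingredients of the abstract are easy to sketch: a compact-range GELU variant and norm clipping applied only to the output layer's gradient. The truncation below (saturating GELU outside a chosen interval) is one plausible reading of tGELU; the paper's exact definition may differ.

```python
import numpy as np

def gelu(x):
    """Gaussian Error Linear Unit (tanh approximation)."""
    return 0.5 * x * (1 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x**3)))

def tgelu(x, lo=-2.0, hi=2.0):
    """Truncated GELU: behaves like GELU on [lo, hi] and saturates outside,
    giving a compact, explicitly specifiable range. This is a plausible
    reading of the abstract, not necessarily the paper's exact definition."""
    return gelu(np.clip(x, lo, hi))

def clip_by_norm(g, max_norm=1.0):
    """Norm clipping, applied only to the output layer's gradient in the
    training scheme the abstract describes."""
    n = np.linalg.norm(g)
    return g if n <= max_norm else g * (max_norm / n)
```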
2305.12149
|
2023-05-20T09:31:35Z
|
Normalizing flow sampling with Langevin dynamics in the latent space
|
[
"Florentin Coeurdoux",
"Nicolas Dobigeon",
"Pierre Chainais"
] |
Normalizing flows (NF) use a continuous generator to map a simple latent
(e.g. Gaussian) distribution towards an empirical target distribution
associated with a training data set. Once trained by minimizing a variational
objective, the learnt map provides an approximate generative model of the
target distribution. Since standard NF implement differentiable maps, they may
suffer from pathological behaviors when targeting complex distributions. For
instance, such problems may appear for distributions on multi-component
topologies or characterized by multiple modes with high probability regions
separated by very unlikely areas. A typical symptom is the explosion of the
Jacobian norm of the transformation in very low probability areas. This paper
proposes to overcome this issue thanks to a new Markov chain Monte Carlo
algorithm to sample from the target distribution in the latent domain before
transporting it back to the target domain. The approach relies on a Metropolis
adjusted Langevin algorithm (MALA) whose dynamics explicitly exploits the
Jacobian of the transformation. Contrary to alternative approaches, the
proposed strategy preserves the tractability of the likelihood and does not
require any specific training. Notably, it can be straightforwardly used with any
pre-trained NF network, regardless of the architecture. Experiments conducted
on synthetic and high-dimensional real data sets illustrate the efficiency of
the method.
|
[
"stat.ML",
"cs.LG"
] | false |
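The sampler's core is a standard Metropolis-adjusted Langevin step run in the latent space. The sketch below is generic MALA, not the paper's algorithm: the latent target density, which in the paper involves the flow's Jacobian, is abstracted into the assumed callables `log_p` and `grad_log_p`.

```python
import numpy as np

rng = np.random.default_rng(0)

def mala_step(z, log_p, grad_log_p, eps=0.1):
    """One Metropolis-adjusted Langevin step targeting exp(log_p)."""
    prop = z + 0.5 * eps * grad_log_p(z) + np.sqrt(eps) * rng.normal(size=z.shape)
    # log q(a | b) for the Gaussian Langevin proposal (up to constants).
    def log_q(a, b):
        return -np.sum((a - b - 0.5 * eps * grad_log_p(b)) ** 2) / (2 * eps)
    log_alpha = log_p(prop) + log_q(z, prop) - log_p(z) - log_q(prop, z)
    return prop if np.log(rng.random()) < log_alpha else z

# Illustrative target: a standard Gaussian latent prior.
log_p = lambda z: -0.5 * np.sum(z**2)
grad_log_p = lambda z: -z
z, samples = np.zeros(2), []
for _ in range(1000):
    z = mala_step(z, log_p, grad_log_p)
    samples.append(z)
```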
2305.12157
|
2023-05-20T10:05:06Z
|
(Machine) Learning to Be Like Thee? For Algorithm Education, Not
Training
|
[
"Susana Perez Blazquez",
"Inas Hipolito"
] |
This paper argues that Machine Learning (ML) algorithms must be educated.
The moral decisions of ML-trained algorithms are ubiquitous in human society,
sometimes reverting societal advances that governments, NGOs, and civil society
have achieved with great effort in recent decades, or that are still on the
path to being achieved. While their decisions have an incommensurable impact on
human societies, these algorithms are among the least educated agents known
(their data are incomplete, un-inclusive, or biased).
from our human idiosyncrasy but an enactment of our most implicit prejudices
and biases. Some research is devoted to responsibility assignment as a strategy
to tackle immoral AI behaviour. Yet this paper argues that the solution for AI
ethical decision-making resides in algorithm education (as opposed to the
training) of ML. Drawing from an analogy between ML and child education for
social responsibility, the paper offers clear directions for responsible and
sustainable AI design, specifically with respect to how to educate algorithms
to decide ethically.
|
[
"cs.LG",
"q-bio.NC"
] | false |
2305.12185
|
2023-05-20T12:41:47Z
|
Do We Need an Encoder-Decoder to Model Dynamical Systems on Networks?
|
[
"Bing Liu",
"Wei Luo",
"Gang Li",
"Jing Huang",
"Bo Yang"
] |
As deep learning gains popularity in modelling dynamical systems, we expose
an underappreciated misunderstanding relevant to modelling dynamics on
networks. Strongly influenced by graph neural networks, latent vertex
embeddings are naturally adopted in many neural dynamical network models.
However, we show that embeddings tend to induce a model that fits observations
well but simultaneously has incorrect dynamical behaviours. Recognising that
previous studies narrowly focus on short-term predictions during the transient
phase of a flow, we propose three tests for correct long-term behaviour, and
illustrate how an embedding-based dynamical model fails these tests, and
analyse the causes, particularly through the lens of topological conjugacy. In
doing so, we show that the difficulties can be avoided by not using embedding.
We propose a simple embedding-free alternative based on parametrising two
additive vector-field components. Through extensive experiments, we verify that
the proposed model can reliably recover a broad class of dynamics on different
network topologies from time series data.
|
[
"cs.LG",
"cs.AI"
] | false |
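The proposed embedding-free model's "two additive vector-field components" can be read as a per-vertex self-dynamics term plus an aggregated pairwise interaction term. A minimal sketch under that reading, with untrained placeholder MLPs standing in for the learned components:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(sizes):
    """Tiny random-weight MLP returning a callable (placeholder for training)."""
    Ws = [rng.normal(scale=0.5, size=(m, n)) for m, n in zip(sizes, sizes[1:])]
    def f(x):
        for W in Ws[:-1]:
            x = np.tanh(x @ W)
        return x @ Ws[-1]
    return f

f_self = mlp([1, 16, 1])   # self-dynamics of each vertex state
f_pair = mlp([2, 16, 1])   # pairwise interaction term

def vector_field(x, A):
    """dx/dt for vertex states x of shape (n, 1) on a graph with adjacency
    A of shape (n, n): an additive split into self and interaction parts."""
    n = len(x)
    pair = np.concatenate([np.repeat(x, n, 0), np.tile(x, (n, 1))], axis=1)
    interact = (A[..., None] * f_pair(pair).reshape(n, n, 1)).sum(axis=1)
    return f_self(x) + interact

A = (rng.random((5, 5)) < 0.4).astype(float)
x = rng.normal(size=(5, 1))
dx = vector_field(x, A)
```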
2305.12213
|
2023-05-20T15:33:06Z
|
Taming Resource Heterogeneity In Distributed ML Training With Dynamic
Batching
|
[
"Sahil Tyagi",
"Prateek Sharma"
] |
Current techniques and systems for distributed model training mostly assume
that clusters are comprised of homogeneous servers with a constant resource
availability. However, cluster heterogeneity is pervasive in computing
infrastructure, and is a fundamental characteristic of low-cost transient
resources (such as EC2 spot instances). In this paper, we develop a dynamic
batching technique for distributed data-parallel training that adjusts the
mini-batch sizes on each worker based on its resource availability and
throughput. Our mini-batch controller seeks to equalize iteration times on all
workers, and facilitates training on clusters comprised of servers with
different amounts of CPU and GPU resources. This variable mini-batch technique
uses proportional control and ideas from PID controllers to find stable
mini-batch sizes. Our empirical evaluation shows that dynamic batching can
reduce model training times by more than 4x on heterogeneous clusters.
|
[
"cs.LG",
"cs.DC"
] | false |
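The control idea is compact enough to sketch: each worker's mini-batch size is nudged in proportion to how far its iteration time sits from the cluster mean. This is an illustration of proportional control for batch sizing, not the paper's controller; the gain `kp` and the rebalancing step are assumptions.

```python
def adjust_batches(batch_sizes, iter_times, kp=0.5, total=None):
    """One proportional-control update of per-worker mini-batch sizes.

    Workers slower than the mean get smaller batches; faster ones get larger,
    nudging all iteration times toward equality.
    """
    mean_t = sum(iter_times) / len(iter_times)
    new = [max(1, round(b * (1 + kp * (mean_t - t) / mean_t)))
           for b, t in zip(batch_sizes, iter_times)]
    if total is not None:  # optionally preserve the global batch size
        scale = total / sum(new)
        new = [max(1, round(b * scale)) for b in new]
    return new

# Worker 2 is slow, so its share of the global batch shrinks.
print(adjust_batches([64, 64, 64], [1.0, 1.1, 2.0], total=192))
```

Running the example shifts the slow worker's share of the global batch from 64 down to roughly 49 while the total stays at 192.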
2305.12266
|
2023-05-20T18:48:41Z
|
LightESD: Fully-Automated and Lightweight Anomaly Detection Framework
for Edge Computing
|
[
"Ronit Das",
"Tie Luo"
] |
Anomaly detection is widely used in a broad range of domains from
cybersecurity to manufacturing, finance, and so on. Deep learning based anomaly
detection has recently drawn much attention because of its superior capability
of recognizing complex data patterns and identifying outliers accurately.
However, deep learning models are typically iteratively optimized in a central
server with input data gathered from edge devices, and such data transfer
between edge devices and the central server imposes substantial overhead on the
network and incurs additional latency and energy consumption. To overcome this
problem, we propose a fully-automated, lightweight, statistical learning based
anomaly detection framework called LightESD. It is an on-device learning method
without the need for data transfer between edge and server, and is so
lightweight that most low-end edge devices can easily afford it, with negligible
delay, CPU/memory utilization, and power consumption. Yet, it achieves highly
competitive detection accuracy. Another salient feature is that it can
auto-adapt to probably any dataset without manually setting or configuring
model parameters or hyperparameters, which is a drawback of most existing
methods. We focus on time series data due to its pervasiveness in edge
applications such as IoT. Our evaluation demonstrates that LightESD outperforms
other SOTA methods on detection accuracy, efficiency, and resource consumption.
Additionally, its fully automated feature gives it another competitive
advantage in terms of practical usability and generalizability.
|
[
"cs.LG",
"cs.CR"
] | false |
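The framework's name suggests it builds on the generalized Extreme Studentized Deviate (ESD) test; the textbook version of that building block is sketched below. This is Rosner's standard test, not LightESD itself, which adds the automation and edge-specific machinery described above.

```python
import numpy as np
from scipy import stats

def generalized_esd(x, max_outliers=10, alpha=0.05):
    """Rosner's generalized ESD test for up to `max_outliers` outliers.

    Returns indices (into the original array) of detected outliers.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    idx = np.arange(n)
    removed, n_out = [], 0
    for i in range(1, max_outliers + 1):
        j = np.argmax(np.abs(x - x.mean()))
        R = np.abs(x[j] - x.mean()) / x.std(ddof=1)
        # Critical value lambda_i from the t-distribution.
        p = 1 - alpha / (2 * (n - i + 1))
        t = stats.t.ppf(p, n - i - 1)
        lam = (n - i) * t / np.sqrt((n - i - 1 + t**2) * (n - i + 1))
        removed.append(idx[j])
        x, idx = np.delete(x, j), np.delete(idx, j)
        if R > lam:
            n_out = i          # largest i whose statistic exceeds lambda_i
    return removed[:n_out]

data = np.concatenate([np.random.default_rng(0).normal(size=200), [9.0, -8.5]])
print(generalized_esd(data))   # finds the two planted outliers
```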
2305.14376
|
2023-05-20T21:07:47Z
|
PTGB: Pre-Train Graph Neural Networks for Brain Network Analysis
|
[
"Yi Yang",
"Hejie Cui",
"Carl Yang"
] |
The human brain is the central hub of the neurobiological system, controlling
behavior and cognition in complex ways. Recent advances in neuroscience and
neuroimaging analysis have shown a growing interest in the interactions between
brain regions of interest (ROIs) and their impact on neural development and
disorder diagnosis. As a powerful deep model for analyzing graph-structured
data, Graph Neural Networks (GNNs) have been applied for brain network
analysis. However, training deep models requires large amounts of labeled data,
which is often scarce in brain network datasets due to the complexities of data
acquisition and sharing restrictions. To make the most out of available
training data, we propose PTGB, a GNN pre-training framework that captures
intrinsic brain network structures, regardless of clinical outcomes, and is
easily adaptable to various downstream tasks. PTGB comprises two key
components: (1) an unsupervised pre-training technique designed specifically
for brain networks, which enables learning from large-scale datasets without
task-specific labels; (2) a data-driven parcellation atlas mapping pipeline
that facilitates knowledge transfer across datasets with different ROI systems.
Extensive evaluations using various GNN models have demonstrated the robust and
superior performance of PTGB compared to baseline methods.
|
[
"q-bio.NC",
"cs.LG"
] | false |
2305.12058
|
2023-05-20T01:56:29Z
|
DADIN: Domain Adversarial Deep Interest Network for Cross Domain
Recommender Systems
|
[
"Menglin Kong",
"Muzhou Hou",
"Shaojie Zhao",
"Feng Liu",
"Ri Su",
"Yinghao Chen"
] |
Click-Through Rate (CTR) prediction, which estimates how likely a user is to
click on different items, is one of the main tasks of a recommender system and
underpins its recommendation results. Cross-domain CTR prediction models have been
proposed to overcome problems of data sparsity, long tail distribution of
user-item interactions, and cold start of items or users. In order to make
knowledge transfer from source domain to target domain more smoothly, an
innovative deep learning cross-domain CTR prediction model, Domain Adversarial
Deep Interest Network (DADIN) is proposed to convert the cross-domain
recommendation task into a domain adaptation problem. The joint distribution
alignment of two domains is innovatively realized by introducing domain
agnostic layers and a specially designed loss, optimized together with the CTR
prediction loss through adversarial training. It is found that the Area
Under Curve (AUC) of DADIN is 0.08% higher than the most competitive baseline
on the Huawei dataset and 0.71% higher than its competitors on the Amazon
dataset, achieving state-of-the-art results in evaluations on two real-world
datasets. The ablation study shows that introducing the adversarial method
leads to AUC improvements of 2.34% on the Huawei dataset and 16.67% on the
Amazon dataset.
|
[
"cs.IR",
"cs.AI",
"cs.LG"
] | false |
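Domain-adversarial objectives of the kind DADIN's abstract describes are commonly implemented with a gradient reversal layer. A minimal DANN-style PyTorch sketch follows; the layer sizes and the two heads are illustrative, not the DADIN architecture.

```python
import torch
from torch import nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negates (and scales) the gradient on the
    way back, so the feature extractor learns to fool the domain classifier."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad):
        return -ctx.lam * grad, None

features = nn.Sequential(nn.Linear(16, 32), nn.ReLU())
ctr_head = nn.Linear(32, 1)      # click-through-rate prediction head
dom_head = nn.Linear(32, 2)      # source-vs-target domain classifier

x = torch.randn(8, 16)
h = features(x)
ctr_logits = ctr_head(h)
dom_logits = dom_head(GradReverse.apply(h, 1.0))  # gradients reversed here
```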
2305.12088
|
2023-05-20T04:13:35Z
|
Game-Theoretical Analysis of Reviewer Rewards in Peer-Review Journal
Systems: Analysis and Experimental Evaluation using Deep Reinforcement
Learning
|
[
"Minhyeok Lee"
] |
In this paper, we navigate the intricate domain of reviewer rewards in
open-access academic publishing, leveraging the precision of mathematics and
the strategic acumen of game theory. We conceptualize the prevailing
voucher-based reviewer reward system as a two-player game, subsequently
identifying potential shortcomings that may incline reviewers towards binary
decisions. To address this issue, we propose and mathematically formalize an
alternative reward system with the objective of mitigating this bias and
promoting more comprehensive reviews. We engage in a detailed investigation of
the properties and outcomes of both systems, employing rigorous
game-theoretical analysis and deep reinforcement learning simulations. Our
results underscore a noteworthy divergence between the two systems, with our
proposed system demonstrating a more balanced decision distribution and
enhanced stability. This research not only augments the mathematical
understanding of reviewer reward systems, but it also provides valuable
insights for the formulation of policies within journal review systems. Our
contribution to the mathematical community lies in providing a game-theoretical
perspective to a real-world problem and in the application of deep
reinforcement learning to simulate and understand this complex system.
|
[
"cs.AI",
"cs.GT",
"cs.LG",
"econ.TH"
] | false |
2305.12121
|
2023-05-20T06:56:00Z
|
ACA-Net: Towards Lightweight Speaker Verification using Asymmetric Cross
Attention
|
[
"Jia Qi Yip",
"Tuan Truong",
"Dianwen Ng",
"Chong Zhang",
"Yukun Ma",
"Trung Hieu Nguyen",
"Chongjia Ni",
"Shengkui Zhao",
"Eng Siong Chng",
"Bin Ma"
] |
In this paper, we propose ACA-Net, a lightweight, global context-aware
speaker embedding extractor for Speaker Verification (SV) that improves upon
existing work by using Asymmetric Cross Attention (ACA) to replace temporal
pooling. ACA is able to distill large, variable-length sequences into small,
fixed-sized latents by attending a small query to large key and value matrices.
In ACA-Net, we build a Multi-Layer Aggregation (MLA) block using ACA to
generate fixed-sized identity vectors from variable-length inputs. Through
global attention, ACA-Net acts as an efficient global feature extractor that
adapts to temporal variability unlike existing SV models that apply a fixed
function for pooling over the temporal dimension, which may obscure information
about the signal's non-stationary temporal variability. Our experiments on the
WSJ0-1talker show ACA-Net outperforms a strong baseline by 5\% relative
improvement in EER using only 1/5 of the parameters.
|
[
"cs.SD",
"cs.LG",
"eess.AS"
] | false |
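The core ACA operation, a small set of learned queries attending over an arbitrarily long key/value sequence to produce a fixed-size latent, is easy to sketch. The projections, dimensions, and single-head form below are illustrative assumptions, not the ACA-Net architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def asymmetric_cross_attention(query, seq, Wk, Wv):
    """A small fixed set of learned queries attends over a variable-length
    sequence, distilling it into a fixed-size latent."""
    K, V = seq @ Wk, seq @ Wv                   # (T, d) keys and values
    scores = query @ K.T / np.sqrt(K.shape[1])  # (m, T) attention logits
    return softmax(scores) @ V                  # (m, d) fixed-size output

d, m = 64, 4                       # feature dim, number of latent queries
query = rng.normal(size=(m, d))    # learned latent queries (random here)
Wk = rng.normal(size=(d, d)) / np.sqrt(d)
Wv = rng.normal(size=(d, d)) / np.sqrt(d)

for T in (50, 173):                # any sequence length maps to (4, 64)
    out = asymmetric_cross_attention(query, rng.normal(size=(T, d)), Wk, Wv)
    print(out.shape)
```

Both sequence lengths print `(4, 64)`, which is the fixed-size distillation property the abstract relies on.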
2305.12158
|
2023-05-20T10:11:09Z
|
Model-based adaptation for sample efficient transfer in reinforcement
learning control of parameter-varying systems
|
[
"Ibrahim Ahmed",
"Marcos Quinones-Grueiro",
"Gautam Biswas"
] |
In this paper, we leverage ideas from model-based control to address the
sample efficiency problem of reinforcement learning (RL) algorithms.
Accelerating learning is an active field of RL highly relevant in the context
of time-varying systems. Traditional transfer learning methods propose to use
prior knowledge of the system behavior to devise a gradual or immediate
data-driven transformation of the control policy obtained through RL. Such
transformation is usually computed by estimating the performance of previous
control policies based on measurements recently collected from the system.
However, such retrospective measures have debatable utility with no guarantees
of positive transfer in most cases. Instead, we propose a model-based
transformation, such that when actions from a control policy are applied to the
target system, a positive transfer is achieved. The transformation can be used
as an initialization for the reinforcement learning process to converge to a
new optimum. We validate the performance of our approach through four benchmark
examples. We demonstrate that our approach is more sample-efficient than
fine-tuning with reinforcement learning alone, and that it achieves performance
comparable to linear-quadratic regulators and model-predictive control in the
three cases where an accurate linear model is known. If an accurate model is not
known, we empirically show that the proposed approach still guarantees positive
transfer with jump-start improvement.
|
[
"eess.SY",
"cs.LG",
"cs.SY"
] | false |
2305.12205
|
2023-05-20T14:50:34Z
|
Vocabulary for Universal Approximation: A Linguistic Perspective of
Mapping Compositions
|
[
"Yongqiang Cai"
] |
In recent years, deep learning-based sequence models, such as language
models, have received much attention and achieved great success, which pushes researchers to
explore the possibility of transforming non-sequential problems into a
sequential form. Following this thought, deep neural networks can be
represented as composite functions of a sequence of mappings, linear or
nonlinear, where each composition can be viewed as a \emph{word}. However, the
weights of linear mappings are undetermined and hence require an infinite
number of words. In this article, we investigate the finite case and
constructively prove the existence of a finite \emph{vocabulary} $V=\{\phi_i:
\mathbb{R}^d \to \mathbb{R}^d | i=1,...,n\}$ with $n=O(d^2)$ for the universal
approximation. That is, for any continuous mapping $f: \mathbb{R}^d \to
\mathbb{R}^d$, compact domain $\Omega$ and $\varepsilon>0$, there is a sequence
of mappings $\phi_{i_1}, ..., \phi_{i_m} \in V, m \in \mathbb{Z}_+$, such that
the composition $\phi_{i_m} \circ ... \circ \phi_{i_1} $ approximates $f$ on
$\Omega$ with an error less than $\varepsilon$. Our results provide a
linguistic perspective of composite mappings and suggest a cross-disciplinary
study between linguistics and approximation theory.
|
[
"cs.LG",
"cs.NA",
"math.DS",
"math.NA"
] | false |
2305.12220
|
2023-05-20T15:59:33Z
|
A Novel Framework for Improving the Breakdown Point of Robust Regression
Algorithms
|
[
"Zheyi Fan",
"Szu Hui Ng",
"Qingpei Hu"
] |
We present an effective framework for improving the breakdown point of robust
regression algorithms. Robust regression has attracted widespread attention due
to the ubiquity of outliers, which significantly affect the estimation results.
However, many existing robust least-squares regression algorithms suffer from a
low breakdown point, as they become stuck around local optima when facing
severe attacks. Expanding on previous work, we propose a novel framework
that enhances the breakdown point of these algorithms by inserting a prior
distribution in each iteration step, and adjusting the prior distribution
according to historical information. We apply this framework to a specific
algorithm and derive the consistent robust regression algorithm with iterative
local search (CORALS). The relationship between CORALS and momentum gradient
descent is described, and a detailed proof of the theoretical convergence of
CORALS is presented. Finally, we demonstrate that the breakdown point of CORALS
is indeed higher than that of the algorithm from which it is derived. We apply
the proposed framework to other robust algorithms, and show that the improved
algorithms achieve better results than the original algorithms, indicating the
effectiveness of the proposed framework.
|
[
"cs.LG",
"math.ST",
"stat.TH",
"62J05"
] | false |
2305.12287
|
2023-05-20T21:44:11Z
|
Contrastive inverse regression for dimension reduction
|
[
"Sam Hawke",
"Hengrui Luo",
"Didong Li"
] |
Supervised dimension reduction (SDR) has been a topic of growing interest in
data science, as it enables the reduction of high-dimensional covariates while
preserving the functional relation with certain response variables of interest.
However, existing SDR methods are not suitable for analyzing datasets collected
from case-control studies. In this setting, the goal is to learn and exploit
the low-dimensional structure unique to or enriched by the case group, also
known as the foreground group. While some unsupervised techniques such as the
contrastive latent variable model and its variants have been developed for this
purpose, they fail to preserve the functional relationship between the
dimension-reduced covariates and the response variable. In this paper, we
propose a supervised dimension reduction method called contrastive inverse
regression (CIR) specifically designed for the contrastive setting. CIR
introduces an optimization problem defined on the Stiefel manifold with a
non-standard loss function. We prove the convergence of CIR to a local optimum
using a gradient descent-based algorithm, and our numerical study empirically
demonstrates the improved performance over competing methods for
high-dimensional data.
|
[
"stat.ML",
"cs.LG",
"stat.AP",
"stat.ME"
] | false |
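The optimization the abstract mentions, gradient descent on the Stiefel manifold, follows a standard project-step-retract pattern. The sketch below uses a QR retraction and a toy trace objective in place of CIR's non-standard loss; it illustrates the manifold machinery only.

```python
import numpy as np

rng = np.random.default_rng(0)

def retract_qr(X):
    """QR retraction: map a matrix back onto the Stiefel manifold
    (orthonormal columns), fixing signs so the factor is unique."""
    Q, R = np.linalg.qr(X)
    return Q * np.sign(np.diag(R))

def stiefel_descent(grad_f, X, lr=0.1, steps=200):
    """Riemannian gradient descent on St(p, d): project the Euclidean
    gradient onto the tangent space, take a step, then retract."""
    for _ in range(steps):
        G = grad_f(X)
        sym = (X.T @ G + G.T @ X) / 2      # tangent projection: G - X sym(X^T G)
        X = retract_qr(X - lr * (G - X @ sym))
    return X

# Illustrative objective (not CIR's loss): maximize trace(X^T A X) for PSD A,
# whose optimum spans A's top eigenvectors.
B = rng.normal(size=(6, 6)); A = B @ B.T
grad_f = lambda X: -2 * A @ X
X = stiefel_descent(grad_f, retract_qr(rng.normal(size=(6, 2))))
```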
2305.14374
|
2023-05-20T08:42:29Z
|
Inferring Attracting Basins of Power System with Machine Learning
|
[
"Yao Du",
"Qing Li",
"Huawei Fan",
"Meng Zhan",
"Jinghua Xiao",
"Xingang Wang"
] |
Power systems dominated by renewable energy frequently encounter large,
random disturbances, and a critical challenge faced in power-system management
is how to anticipate accurately whether the perturbed systems will return to
the functional state after the transient or collapse. Whereas model-based
studies show that the key to addressing the challenge lies in the attracting
basins of the functional and dysfunctional states in the phase space, the
finding of the attracting basins for realistic power systems remains a
challenge, as accurate models describing the system dynamics are generally
unavailable. Here we propose a new machine learning technique, namely balanced
reservoir computing, to infer the attracting basins of a typical power system
based on measured data. Specifically, trained by the time series of a handful
of perturbation events, we demonstrate that the trained machine can predict
accurately whether the system will return to the functional state in response
to a large, random perturbation, thereby reconstructing the attracting basin of
the functional state. The working mechanism of the new machine is analyzed, and
it is revealed that the success of the new machine is attributed to the good
balance between the echo and fading properties of the reservoir network; the
effect of noisy signals on the prediction performance is also investigated, and
a stochastic-resonance-like phenomenon is observed. Finally, we demonstrate
that the new technique can also be utilized to infer the attracting basins of
coexisting attractors in typical chaotic systems.
|
[
"cs.LG",
"cs.SY",
"eess.SY"
] | false |
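The measurement-driven predictor here is reservoir-computing based; a plain leaky echo state network with a ridge-regression readout captures the general recipe. The "balanced" aspect, tuning the echo/fading balance the abstract credits for the method's success, is not reproduced in this generic sketch, and the data and labels are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

class Reservoir:
    """Leaky echo state network; the readout is trained by ridge regression."""
    def __init__(self, d_in, n_res=200, rho=0.9, leak=0.3):
        self.Win = rng.uniform(-0.5, 0.5, size=(n_res, d_in))
        W = rng.normal(size=(n_res, n_res))
        self.W = W * rho / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius
        self.leak = leak

    def final_state(self, u):
        x = np.zeros(len(self.W))
        for u_t in u:                       # drive with the measured time series
            x = (1 - self.leak) * x + self.leak * np.tanh(
                self.W @ x + self.Win @ u_t)
        return x

res = Reservoir(d_in=3)
series = [rng.normal(size=(100, 3)) for _ in range(20)]
X = np.stack([res.final_state(s) for s in series])
y = rng.integers(0, 2, 20).astype(float)    # placeholder return/collapse labels
w = np.linalg.solve(X.T @ X + 1e-2 * np.eye(X.shape[1]), X.T @ y)
pred = (X @ w > 0.5).astype(int)            # threshold the linear readout
```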
2305.15153
|
2023-05-20T09:08:44Z
|
MotifRetro: Exploring the Combinability-Consistency Trade-offs in
retrosynthesis via Dynamic Motif Editing
|
[
"Zhangyang Gao",
"Xingran Chen",
"Cheng Tan",
"Stan Z. Li"
] |
Is there a unified framework for graph-based retrosynthesis prediction?
Through analysis of full-, semi-, and non-template retrosynthesis methods, we
discovered that they strive to strike an optimal balance between combinability
and consistency: \textit{Should atoms be combined as motifs to simplify the
molecular editing process, or should motifs be broken down into atoms to reduce
the vocabulary and improve predictive consistency?}
Recent works have studied several specific cases, while none of them explores
different combinability-consistency trade-offs. Therefore, we propose
MotifRetro, a dynamic motif editing framework for retrosynthesis prediction
that can explore the entire trade-off space and unify graph-based models.
MotifRetro comprises two components: RetroBPE, which controls the
combinability-consistency trade-off, and a motif editing model, where we
introduce a novel LG-EGAT module to dynamically add motifs to the molecule. We
conduct extensive experiments on USPTO-50K to explore how the trade-off affects
the model performance and finally achieve state-of-the-art performance.
|
[
"q-bio.BM",
"cs.AI",
"cs.LG"
] | false |
2306.09991
|
2023-05-20T22:26:44Z
|
Evolutionary Algorithms in the Light of SGD: Limit Equivalence, Minima
Flatness, and Transfer Learning
|
[
"Andrei Kucharavy",
"Rachid Guerraoui",
"Ljiljana Dolamic"
] |
Whenever applicable, the Stochastic Gradient Descent (SGD) has shown itself
to be unreasonably effective. Instead of underperforming and getting trapped in
local minima due to the batch noise, SGD leverages it to learn to generalize
better and find minima that are good enough for the entire dataset. This led to
numerous theoretical and experimental investigations, especially in the context
of Artificial Neural Networks (ANNs), leading to better machine learning
algorithms. However, SGD is not applicable in a non-differentiable setting,
leaving all that prior research off the table.
In this paper, we show that a class of evolutionary algorithms (EAs) inspired
by the Gillespie-Orr Mutational Landscapes model for natural evolution is
formally equivalent to SGD in certain settings and, in practice, is well
adapted to large ANNs. We refer to such EAs as Gillespie-Orr EA class (GO-EAs)
and empirically show how an insight transfer from SGD can work for them. We
then show that for ANNs trained to near-optimality or in the transfer learning
setting, the equivalence also allows transferring the insights from the
Mutational Landscapes model to SGD.
We then leverage this equivalence to experimentally show how SGD and GO-EAs
can provide mutual insight through examples of minima flatness, transfer
learning, and mixing of individuals in EAs applied to large models.
|
[
"cs.NE",
"cs.LG",
"q-bio.PE",
"I.2.8; G.1.6"
] | false |
2305.12114
|
2023-05-20T06:27:31Z
|
GFDC: A Granule Fusion Density-Based Clustering with Evidential
Reasoning
|
[
"Mingjie Cai",
"Zhishan Wu",
"Qingguo Li",
"Feng Xu",
"Jie Zhou"
] |
Currently, density-based clustering algorithms are widely applied because
they can detect clusters with arbitrary shapes. However, they perform poorly in
measuring global density, determining reasonable cluster centers or structures,
assigning samples accurately and handling data with large density differences
among clusters. To overcome their drawbacks, this paper proposes a granule
fusion density-based clustering with evidential reasoning (GFDC). Both local
and global densities of samples are measured by a sparse degree metric first.
Then information granules are generated in high-density and low-density
regions, assisting in processing clusters with significant density differences.
Further, three novel granule fusion strategies are utilized to combine granules
into stable cluster structures, helping to detect clusters with arbitrary
shapes. Finally, by an assignment method developed from Dempster-Shafer theory,
unstable samples are assigned. After using GFDC, a reasonable clustering result
and some identified outliers can be obtained. The experimental results on
extensive datasets demonstrate the effectiveness of GFDC.
|
[
"cs.LG",
"cs.AI",
"cs.DC",
"cs.IT",
"math.IT"
] | false |