arxiv_id (string, 10 chars) | published (string, 20 chars) | titles (string, 9-243 chars) | authors (list, 1-389 items) | abstract (string, 96-3.09k chars) | categories (list, 1-10 items) | selected (bool, 2 classes)
---|---|---|---|---|---|---|
2306.00675
|
2023-06-01T13:49:55Z
|
RHFedMTL: Resource-Aware Hierarchical Federated Multi-Task Learning
|
[
"Xingfu Yi",
"Rongpeng Li",
"Chenghui Peng",
"Fei Wang",
"Jianjun Wu",
"Zhifeng Zhao"
] |
The rapid development of artificial intelligence (AI) across massive
applications, including the Internet of Things on cellular networks, raises
concerns about technical challenges such as privacy, heterogeneity, and
resource efficiency. Federated learning is an effective way to enable AI over
massive distributed nodes while preserving security. However, conventional
works mostly focus on learning a single global model for a unique task across
the network, and are generally less capable of handling multi-task learning
(MTL) scenarios with stragglers at acceptable computation and communication
cost. Meanwhile, it is challenging to ensure privacy while maintaining coupled
multi-task learning across multiple base stations (BSs) and terminals. In this
paper, inspired by the natural cloud-BS-terminal hierarchy of cellular
networks, we provide a viable resource-aware hierarchical federated MTL
(RHFedMTL) solution to address task heterogeneity, by solving different tasks
within the BSs and aggregating the multi-task results in the cloud without
compromising privacy. Specifically, a primal-dual method is leveraged to
transform the coupled MTL problem into local optimization sub-problems within
BSs. Furthermore, unlike existing methods that reduce resource cost by simply
changing the aggregation frequency, we dive into the intricate relationship
between resource consumption and learning accuracy, and develop a
resource-aware learning strategy for local terminals and BSs to meet the
resource budget. Extensive simulation results demonstrate the effectiveness
and superiority of RHFedMTL in terms of improving learning accuracy and
boosting the convergence rate.
|
[
"cs.NI",
"cs.LG"
] | false |
2306.00687
|
2023-06-01T13:59:32Z
|
Adversarial Robustness in Unsupervised Machine Learning: A Systematic
Review
|
[
"Mathias Lundteigen Mohus",
"Jinyue Li"
] |
As the adoption of machine learning models increases, ensuring that models are
robust against adversarial attacks is increasingly important. With unsupervised
machine learning gaining more attention, ensuring it is robust against attacks
is vital. This paper conducts a systematic literature review on the robustness
of unsupervised learning, collecting 86 papers. Our results show that most
research focuses on privacy attacks, which have effective defenses; however,
many attacks lack effective and general defensive measures. Based on these
results, we formulate a model of the properties of attacks on unsupervised
learning, providing a model for future research to use.
|
[
"cs.LG",
"cs.CR",
"I.2.0"
] | false |
2306.00700
|
2023-06-01T14:09:52Z
|
Spreads in Effective Learning Rates: The Perils of Batch Normalization
During Early Training
|
[
"Christian H. X. Ali Mehmeti-Göpel",
"Michael Wand"
] |
Excursions in gradient magnitude pose a persistent challenge when training
deep networks. In this paper, we study the early training phases of deep
normalized ReLU networks, accounting for the induced scale invariance by
examining effective learning rates (LRs). Starting with the well-known fact
that batch normalization (BN) leads to exponentially exploding gradients at
initialization, we develop an ODE-based model to describe early training
dynamics. Our model predicts that in the gradient flow, effective LRs will
eventually equalize, aligning with empirical findings on warm-up training.
Using large LRs is analogous to applying an explicit solver to a stiff
non-linear ODE, causing overshooting and vanishing gradients in lower layers
after the first step. Achieving overall balance demands careful tuning of LRs,
depth, and (optionally) momentum. Our model predicts the formation of spreads
in effective LRs, consistent with empirical measurements. Moreover, we observe
that large spreads in effective LRs result in training issues concerning
accuracy, indicating the importance of controlling these dynamics. To further
support a causal relationship, we implement a simple scheduling scheme
prescribing uniform effective LRs across layers and confirm accuracy benefits.
|
[
"cs.LG",
"cs.AI"
] | false |
2306.00707
|
2023-06-01T14:16:43Z
|
Renormalized Graph Neural Networks
|
[
"Francesco Caso",
"Giovanni Trappolini",
"Andrea Bacciu",
"Pietro Liò",
"Fabrizio Silvestri"
] |
Graph Neural Networks (GNNs) have become essential for studying complex data,
particularly when represented as graphs. Their value is underpinned by their
ability to reflect the intricacies of numerous areas, ranging from social to
biological networks. GNNs can grapple with non-linear behaviors, emerging
patterns, and complex connections; these are also typical characteristics of
complex systems. The renormalization group (RG) theory has emerged as the
language for studying complex systems. It is recognized as the preferred lens
through which to study complex systems, offering a framework that can untangle
their intricate dynamics. Despite the clear benefits of integrating RG theory
with GNNs, no existing methods have ventured into this promising territory.
This paper proposes a new approach that applies RG theory to devise a novel
graph rewiring to improve GNNs' performance on graph-related tasks. We support
our proposal with extensive experiments on standard benchmarks and baselines.
The results demonstrate the effectiveness of our method and its potential to
remedy the current limitations of GNNs. Finally, this paper marks the beginning
of a new research direction. This path combines the theoretical foundations of
RG, the magnifying glass of complex systems, with the structural capabilities
of GNNs. By doing so, we aim to enhance the potential of GNNs in modeling and
unraveling the complexities inherent in diverse systems.
|
[
"cs.LG",
"physics.data-an"
] | false |
2306.00760
|
2023-06-01T14:54:42Z
|
Efficient Failure Pattern Identification of Predictive Algorithms
|
[
"Bao Nguyen",
"Viet Anh Nguyen"
] |
Given a (machine learning) classifier and a collection of unlabeled data, how
can we efficiently identify misclassification patterns presented in this
dataset? To address this problem, we propose a human-machine collaborative
framework that consists of a team of human annotators and a sequential
recommendation algorithm. The recommendation algorithm is conceptualized as a
stochastic sampler that, in each round, queries the annotators with a subset of
samples for their true labels and obtains feedback on whether
the samples are misclassified. The sampling mechanism needs to balance between
discovering new patterns of misclassification (exploration) and confirming
potential patterns of misclassification (exploitation). We construct a
determinantal point process, whose intensity balances the
exploration-exploitation trade-off through the weighted update of the posterior
at each round to form the generator of the stochastic sampler. The numerical
results empirically demonstrate the competitive performance of our framework on
multiple datasets at various signal-to-noise ratios.
|
[
"cs.LG",
"cs.AI"
] | false |
2306.00778
|
2023-06-01T15:08:22Z
|
An End-to-End Time Series Model for Simultaneous Imputation and Forecast
|
[
"Trang H. Tran",
"Lam M. Nguyen",
"Kyongmin Yeo",
"Nam Nguyen",
"Dzung Phan",
"Roman Vaculin",
"Jayant Kalagnanam"
] |
Time series forecasting using historical data has been an interesting and
challenging topic, especially when the data is corrupted by missing values. In
many industrial problems, it is important to learn the inference function
between the auxiliary observations and target variables, as it provides
additional knowledge when the data is not fully observed. We develop an
end-to-end time series model that aims to learn this inference relation and
make a multiple-step-ahead forecast. Our framework jointly trains two neural
networks, one to learn the feature-wise correlations and the other to model
temporal behaviors. Our model is capable of simultaneously imputing the
missing entries and making a multiple-step-ahead prediction. The
experiments show good overall performance of our framework over existing
methods in both imputation and forecasting tasks.
|
[
"cs.LG",
"stat.ML"
] | false |
2306.00785
|
2023-06-01T15:16:36Z
|
Data Interpolants -- That's What Discriminators in Higher-order
Gradient-regularized GANs Are
|
[
"Siddarth Asokan",
"Chandra Sekhar Seelamantula"
] |
We consider the problem of optimizing the discriminator in generative
adversarial networks (GANs) subject to higher-order gradient regularization. We
show analytically, via the least-squares (LSGAN) and Wasserstein (WGAN) GAN
variants, that the discriminator optimization problem is one of interpolation
in $n$-dimensions. The optimal discriminator, derived using variational
calculus, turns out to be the solution to a partial differential equation
involving the iterated Laplacian or the polyharmonic operator. The solution is
implementable in closed-form via polyharmonic radial basis function (RBF)
interpolation. In view of the polyharmonic connection, we refer to the
corresponding GANs as Poly-LSGAN and Poly-WGAN. Through experimental validation
on multivariate Gaussians, we show that implementing the optimal RBF
discriminator in closed-form, with penalty orders $m \approx\lceil \frac{n}{2}
\rceil $, results in superior performance compared to training GANs with
arbitrarily chosen discriminator architectures. We employ the Poly-WGAN
discriminator to model the latent space distribution of the data with
encoder-decoder-based GAN flavors such as Wasserstein autoencoders.
|
[
"stat.ML",
"cs.LG"
] | false |
2306.00861
|
2023-06-01T16:19:37Z
|
Non-stationary Reinforcement Learning under General Function
Approximation
|
[
"Songtao Feng",
"Ming Yin",
"Ruiquan Huang",
"Yu-Xiang Wang",
"Jing Yang",
"Yingbin Liang"
] |
General function approximation is a powerful tool to handle large state and
action spaces in a broad range of reinforcement learning (RL) scenarios.
However, theoretical understanding of non-stationary MDPs with general function
approximation is still limited. In this paper, we make the first such attempt.
We first propose a new complexity metric called the dynamic Bellman Eluder
(DBE) dimension for non-stationary MDPs, which subsumes the majority of existing
tractable RL problems in static MDPs as well as non-stationary MDPs. Based on
the proposed complexity metric, we propose a novel confidence-set based
model-free algorithm called SW-OPEA, which features a sliding window mechanism
and a new confidence set design for non-stationary MDPs. We then establish an
upper bound on the dynamic regret for the proposed algorithm, and show that
SW-OPEA is provably efficient as long as the variation budget is not
significantly large. We further demonstrate via examples of non-stationary
linear and tabular MDPs that our algorithm performs better than existing
UCB-type algorithms in the small-variation-budget scenario. To the best of our
knowledge, this is the first dynamic regret analysis in non-stationary MDPs
with general function approximation.
|
[
"cs.LG",
"stat.ML"
] | false |
2306.00867
|
2023-06-01T16:24:40Z
|
IQL-TD-MPC: Implicit Q-Learning for Hierarchical Model Predictive
Control
|
[
"Rohan Chitnis",
"Yingchen Xu",
"Bobak Hashemi",
"Lucas Lehnert",
"Urun Dogan",
"Zheqing Zhu",
"Olivier Delalleau"
] |
Model-based reinforcement learning (RL) has shown great promise due to its
sample efficiency, but still struggles with long-horizon sparse-reward tasks,
especially in offline settings where the agent learns from a fixed dataset. We
hypothesize that model-based RL agents struggle in these environments due to a
lack of long-term planning capabilities, and that planning in a temporally
abstract model of the environment can alleviate this issue. In this paper, we
make two key contributions: 1) we introduce an offline model-based RL
algorithm, IQL-TD-MPC, that extends the state-of-the-art Temporal Difference
Learning for Model Predictive Control (TD-MPC) with Implicit Q-Learning (IQL);
2) we propose to use IQL-TD-MPC as a Manager in a hierarchical setting with any
off-the-shelf offline RL algorithm as a Worker. More specifically, we pre-train
a temporally abstract IQL-TD-MPC Manager to predict "intent embeddings", which
roughly correspond to subgoals, via planning. We empirically show that
augmenting state representations with intent embeddings generated by an
IQL-TD-MPC manager significantly improves off-the-shelf offline RL agents'
performance on some of the most challenging D4RL benchmark tasks. For instance,
the offline RL algorithms AWAC, TD3-BC, DT, and CQL all get zero or near-zero
normalized evaluation scores on the medium and large antmaze tasks, while our
modification gives an average score over 40.
|
[
"cs.LG",
"cs.AI"
] | false |
2306.00879
|
2023-06-01T16:39:50Z
|
Domain Generalization for Domain-Linked Classes
|
[
"Kimathi Kaai",
"Saad Hossain",
"Sirisha Rambhatla"
] |
Domain generalization (DG) focuses on transferring domain-invariant knowledge
from multiple source domains (available at train time) to a priori unseen
target domain(s). This requires a class to be expressed in multiple domains for
the learning algorithm to break the spurious correlations between domain and
class. However, in the real world, classes may often be domain-linked, i.e.,
expressed only in a specific domain, which leads to extremely poor
generalization performance for these classes. In this work, we aim to learn
generalizable representations for these domain-linked classes by transferring
domain-invariant knowledge from classes expressed in multiple source domains
(domain-shared classes). To this end, we introduce this task to the community
and propose a Fair and cONtrastive feature-space regularization algorithm for
Domain-linked DG, FOND. Rigorous and reproducible experiments with baselines
across popular DG tasks demonstrate our method and its variants' ability to
accomplish state-of-the-art DG results for domain-linked classes. We also
provide practical insights on data conditions that increase domain-linked class
generalizability to tackle real-world data scarcity.
|
[
"cs.LG",
"cs.AI"
] | false |
2306.00920
|
2023-06-01T17:21:10Z
|
Better Private Linear Regression Through Better Private Feature
Selection
|
[
"Travis Dick",
"Jennifer Gillenwater",
"Matthew Joseph"
] |
Existing work on differentially private linear regression typically assumes
that end users can precisely set data bounds or algorithmic hyperparameters.
End users often struggle to meet these requirements without directly examining
the data (and violating privacy). Recent work has attempted to develop
solutions that shift these burdens from users to algorithms, but they struggle
to provide utility as the feature dimension grows. This work extends these
algorithms to higher-dimensional problems by introducing a differentially
private feature selection method based on Kendall rank correlation. We prove a
utility guarantee for the setting where features are normally distributed and
conduct experiments across 25 datasets. We find that adding this private
feature selection step before regression significantly broadens the
applicability of ``plug-and-play'' private linear regression algorithms at
little additional cost to privacy, computation, or decision-making by the end
user.
|
[
"cs.LG",
"cs.CR"
] | false |
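The feature-selection step described in the abstract above can be illustrated with a short sketch: score each feature by its Kendall rank correlation with the target and keep the top-k. The synthetic data, the Laplace perturbation, and its scale are assumptions for illustration only; this is not the paper's calibrated differentially private mechanism.

```python
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(0)
n, d, k = 500, 20, 5
X = rng.normal(size=(n, d))
# only the first three features actually drive the response
y = X[:, :3] @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.1, size=n)

# absolute Kendall rank correlation between each feature and the target
scores = np.array([abs(kendalltau(X[:, j], y)[0]) for j in range(d)])
# illustrative perturbation only -- NOT a calibrated differentially private mechanism
noisy_scores = scores + rng.laplace(scale=0.05, size=d)
selected = np.argsort(noisy_scores)[-k:]
print("selected features:", sorted(selected.tolist()))  # expect features 0, 1, 2 among them
```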
2306.01029
|
2023-06-01T14:40:56Z
|
SPINEX: Similarity-based Predictions and Explainable Neighbors
Exploration for Regression and Classification Tasks in Machine Learning
|
[
"M. Z. Naser",
"M. K. albashiti",
"A. Z. Naser"
] |
The field of machine learning (ML) has witnessed significant advancements in
recent years. However, many existing algorithms lack interpretability and
struggle with high-dimensional and imbalanced data. This paper proposes SPINEX,
a novel similarity-based interpretable neighbor exploration algorithm designed
to address these limitations. This algorithm combines ensemble learning and
feature interaction analysis to achieve accurate predictions and meaningful
insights by quantifying each feature's contribution to predictions and
identifying interactions between features, thereby enhancing the
interpretability of the algorithm. To evaluate the performance of SPINEX,
extensive experiments on 59 synthetic and real datasets were conducted for both
regression and classification tasks. The results demonstrate that SPINEX
achieves comparable performance and, in some scenarios, may outperform
commonly adopted ML algorithms. These findings demonstrate the effectiveness
and competitiveness of SPINEX, making it a promising approach for various
real-world applications.
|
[
"cs.LG",
"stat.AP"
] | false |
2306.01032
|
2023-06-01T15:57:11Z
|
Chaos persists in large-scale multi-agent learning despite adaptive
learning rates
|
[
"Emmanouil-Vasileios Vlatakis-Gkaragkounis",
"Lampros Flokas",
"Georgios Piliouras"
] |
Multi-agent learning is intrinsically harder, more unstable and unpredictable
than single agent optimization. For this reason, numerous specialized
heuristics and techniques have been designed towards the goal of achieving
convergence to equilibria in self-play. One such celebrated approach is the use
of dynamically adaptive learning rates. Although such techniques are known to
allow for improved convergence guarantees in small games, it has been much
harder to analyze them in more relevant settings with large populations of
agents. These settings are particularly hard as recent work has established
that learning with fixed rates will become chaotic given large enough
populations. In this work, we show that chaos persists in large-population
congestion games despite the use of adaptive learning rates, even for the
ubiquitous Multiplicative Weight Updates algorithm and even in the presence of
only two strategies. At a technical level, due to the non-autonomous nature of
the system, our approach goes beyond conventional period-three (Li-Yorke)
techniques by studying fundamental properties of the dynamics, including invariant sets,
volume expansion and turbulent sets. We complement our theoretical insights
with experiments showcasing that slight variations to system parameters lead to
a wide variety of unpredictable behaviors.
|
[
"cs.LG",
"math.OC"
] | false |
2306.01108
|
2023-06-01T19:49:43Z
|
Towards Learning Discrete Representations via Self-Supervision for
Wearables-Based Human Activity Recognition
|
[
"Harish Haresamudram",
"Irfan Essa",
"Thomas Ploetz"
] |
Human activity recognition (HAR) in wearable computing is typically based on
direct processing of sensor data. Sensor readings are translated into
representations, either derived through dedicated preprocessing, or integrated
into end-to-end learning. Independent of their origin, for the vast majority of
contemporary HAR, those representations are typically continuous in nature.
That has not always been the case. In the early days of HAR, discretization
approaches have been explored - primarily motivated by the desire to minimize
computational requirements, but also with a view on applications beyond mere
recognition, such as, activity discovery, fingerprinting, or large-scale
search. Those traditional discretization approaches, however, suffer from
substantial loss in precision and resolution in the resulting representations
with detrimental effects on downstream tasks. Times have changed and in this
paper we propose a return to discretized representations. We adopt and apply
recent advancements in Vector Quantization (VQ) to wearables applications,
which enables us to directly learn a mapping between short spans of sensor data
and a codebook of vectors, resulting in recognition performance that is
generally on par with their contemporary, continuous counterparts - sometimes
surpassing them. Therefore, this work presents a proof-of-concept for
demonstrating how effective discrete representations can be derived, not only
enabling applications beyond mere activity classification but also opening up the field
to advanced tools for the analysis of symbolic sequences, as they are known,
for example, from domains such as natural language processing. Based on an
extensive experimental evaluation on a suite of wearables-based benchmark HAR
tasks, we demonstrate the potential of our learned discretization scheme and
discuss how discretized sensor data analysis can lead to substantial changes in
HAR.
|
[
"cs.LG",
"eess.SP"
] | false |
2306.01123
|
2023-06-01T20:19:41Z
|
A Neural RDE-based model for solving path-dependent PDEs
|
[
"Bowen Fang",
"Hao Ni",
"Yue Wu"
] |
The concept of the path-dependent partial differential equation (PPDE) was
first introduced in the context of path-dependent derivatives in financial
markets. Its semilinear form was later identified as a non-Markovian backward
stochastic differential equation (BSDE). Compared to the classical PDE, the
solution of a PPDE involves an infinite-dimensional spatial variable, making it
challenging to approximate, if not impossible. In this paper, we propose a
neural rough differential equation (NRDE)-based model to learn PPDEs, which
effectively encodes the path information through the log-signature feature
while capturing the fundamental dynamics. The proposed continuous-time model
for the PPDE solution offers the benefits of efficient memory usage and the
ability to scale with dimensionality. Several numerical experiments are
provided to validate the performance of the proposed model against strong
baselines in the literature and to demonstrate its effectiveness.
|
[
"cs.LG",
"math.PR",
"68T07, 60L90, 60H30"
] | false |
2306.01157
|
2023-06-01T21:27:22Z
|
Delphic Offline Reinforcement Learning under Nonidentifiable Hidden
Confounding
|
[
"Alizée Pace",
"Hugo Yèche",
"Bernhard Schölkopf",
"Gunnar Rätsch",
"Guy Tennenholtz"
] |
A prominent challenge of offline reinforcement learning (RL) is the issue of
hidden confounding: unobserved variables may influence both the actions taken
by the agent and the observed outcomes. Hidden confounding can compromise the
validity of any causal conclusion drawn from data and presents a major obstacle
to effective offline RL. In the present paper, we tackle the problem of hidden
confounding in the nonidentifiable setting. We propose a definition of
uncertainty due to hidden confounding bias, termed delphic uncertainty, which
uses variation over world models compatible with the observations, and
differentiate it from the well-known epistemic and aleatoric uncertainties. We
derive a practical method for estimating the three types of uncertainties, and
construct a pessimistic offline RL algorithm to account for them. Our method
does not assume identifiability of the unobserved confounders, and attempts to
reduce the amount of confounding bias. We demonstrate through extensive
experiments and ablations the efficacy of our approach on a sepsis management
benchmark, as well as on electronic health records. Our results suggest that
nonidentifiable hidden confounding bias can be mitigated to improve offline RL
solutions in practice.
|
[
"cs.LG",
"cs.AI"
] | false |
2306.01191
|
2023-06-01T23:10:15Z
|
Conformal Prediction with Partially Labeled Data
|
[
"Alireza Javanmardi",
"Yusuf Sale",
"Paul Hofman",
"Eyke Hüllermeier"
] |
While the predictions produced by conformal prediction are set-valued, the
data used for training and calibration is supposed to be precise. In the
setting of superset learning or learning from partial labels, a variant of
weakly supervised learning, it is exactly the other way around: training data
is possibly imprecise (set-valued), but the model induced from this data yields
precise predictions. In this paper, we combine the two settings by making
conformal prediction amenable to set-valued training data. We propose a
generalization of the conformal prediction procedure that can be applied to
set-valued training and calibration data. We prove the validity of the proposed
method and present experimental studies in which it compares favorably to
natural baselines.
|
[
"cs.LG",
"stat.ML"
] | false |
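As background for the abstract above, here is a minimal sketch of standard split conformal prediction with precise (non-set-valued) labels, the baseline setting the paper generalizes. The synthetic data, logistic model, and miscoverage level alpha = 0.1 are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 900
X = rng.normal(size=(n, 2))
y = (X[:, 0] + 0.5 * rng.normal(size=n) > 0).astype(int)

fit, cal, test = slice(0, 300), slice(300, 600), slice(600, 900)
clf = LogisticRegression().fit(X[fit], y[fit])

# nonconformity score on the calibration set: 1 - predicted probability of the true label
cal_prob = clf.predict_proba(X[cal])
scores = 1.0 - cal_prob[np.arange(300), y[cal]]
alpha = 0.1                                        # target miscoverage
rank = min(int(np.ceil((300 + 1) * (1 - alpha))), 300)
q = np.sort(scores)[rank - 1]                      # conformal quantile

# prediction set for each test point: all labels whose score is below the threshold
test_prob = clf.predict_proba(X[test])
pred_sets = [np.where(1.0 - p <= q)[0] for p in test_prob]
coverage = np.mean([y[test][i] in pred_sets[i] for i in range(300)])
print("empirical coverage:", round(float(coverage), 3))  # should be close to 0.9
```

With exchangeable data, the resulting sets contain the true label with probability at least 1 - alpha; the paper's contribution is extending this kind of guarantee to set-valued training and calibration data.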
2306.04646
|
2023-06-01T19:35:13Z
|
Improve State-Level Wheat Yield Forecasts in Kazakhstan on GEOGLAM's EO
Data by Leveraging A Simple Spatial-Aware Technique
|
[
"Anh Nhat Nhu",
"Ritvik Sahajpal",
"Christina Justice",
"Inbal Becker-Reshef"
] |
Accurate yield forecasting is essential for making informed policies and
long-term decisions for food security. Earth Observation (EO) data and machine
learning algorithms play a key role in providing a comprehensive and timely
view of crop conditions from field to national scales. However, machine
learning algorithms' prediction accuracy is often harmed by spatial
heterogeneity caused by exogenous factors not reflected in remote sensing data,
such as differences in crop management strategies. In this paper, we propose
and investigate a simple technique called state-wise additive bias to
explicitly address the cross-region yield heterogeneity in Kazakhstan. Compared
to baseline machine learning models (Random Forest, CatBoost, XGBoost), our
method reduces the overall RMSE by 8.9\% and the highest state-wise RMSE by
28.37\%. The effectiveness of state-wise additive bias indicates machine
learning's performance can be significantly improved by explicitly addressing
the spatial heterogeneity, motivating future work on spatial-aware machine
learning algorithms for yield forecasts as well as for general geospatial
forecasting problems.
|
[
"cs.LG",
"cs.CY"
] | false |
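One plausible reading of the "state-wise additive bias" mentioned above is a per-region correction added to a base regressor's predictions, estimated from training residuals. The sketch below uses synthetic data and a random forest as stand-ins; the actual features, models, and bias-fitting procedure in the paper may differ.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n, n_states = 600, 5
state = rng.integers(n_states, size=n)
x = rng.normal(size=(n, 4))
offset = np.linspace(-1.0, 1.0, n_states)           # unobserved regional heterogeneity
y = x @ np.array([1.0, 0.5, -0.3, 0.0]) + offset[state] + rng.normal(scale=0.1, size=n)

train, test = np.arange(n) < 450, np.arange(n) >= 450
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(x[train], y[train])

# per-state additive bias = mean training residual within each state
resid = y[train] - model.predict(x[train])
bias = np.array([resid[state[train] == s].mean() for s in range(n_states)])

def rmse(pred):
    return float(np.sqrt(np.mean((y[test] - pred) ** 2)))

plain = model.predict(x[test])
corrected = plain + bias[state[test]]
print("RMSE without bias:", round(rmse(plain), 3), "| with bias:", round(rmse(corrected), 3))
```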
2306.16951
|
2023-06-01T12:23:14Z
|
Applying language models to algebraic topology: generating simplicial
cycles using multi-labeling in Wu's formula
|
[
"Kirill Brilliantov",
"Fedor Pavutnitskiy",
"Dmitry Pasechnyuk",
"German Magai"
] |
Computing homotopy groups of spheres has long been a fundamental objective in
algebraic topology. Various theoretical and algorithmic approaches have been
developed to tackle this problem. In this paper we take a step towards the goal
of comprehending the group-theoretic structure of the generators of these
homotopy groups by leveraging the power of machine learning. Specifically, in
the simplicial group setting of Wu's formula, we reformulate the problem of
generating simplicial cycles as a problem of sampling from the intersection of
algorithmic datasets related to Dyck languages. We present and evaluate
language modelling approaches that employ multi-label information for input
sequences, along with the necessary group-theoretic toolkit and non-neural
baselines.
|
[
"math.AT",
"cs.LG"
] | false |
2307.08649
|
2023-06-01T01:36:51Z
|
Joint Latent Topic Discovery and Expectation Modeling for Financial
Markets
|
[
"Lili Wang",
"Chenghan Huang",
"Chongyang Gao",
"Weicheng Ma",
"Soroush Vosoughi"
] |
In the pursuit of accurate and scalable quantitative methods for financial
market analysis, the focus has shifted from individual stock models to those
capturing interrelations between companies and their stocks. However, current
relational stock methods are limited by their reliance on predefined stock
relationships and the exclusive consideration of immediate effects. To address
these limitations, we present a groundbreaking framework for financial market
analysis. This approach, to our knowledge, is the first to jointly model
investor expectations and automatically mine latent stock relationships.
Comprehensive experiments conducted on China's CSI 300, one of the world's
largest markets, demonstrate that our model consistently achieves an annual
return exceeding 10%. This performance surpasses existing benchmarks, setting a
new state-of-the-art standard in stock return prediction and multiyear trading
simulations (i.e., backtesting).
|
[
"q-fin.ST",
"cs.LG"
] | false |
2306.00258
|
2023-06-01T00:32:59Z
|
Towards Foundation Models for Scientific Machine Learning:
Characterizing Scaling and Transfer Behavior
|
[
"Shashank Subramanian",
"Peter Harrington",
"Kurt Keutzer",
"Wahid Bhimji",
"Dmitriy Morozov",
"Michael Mahoney",
"Amir Gholami"
] |
Pre-trained machine learning (ML) models have shown great performance for a
wide range of applications, in particular in natural language processing (NLP)
and computer vision (CV). Here, we study how pre-training could be used for
scientific machine learning (SciML) applications, specifically in the context
of transfer learning. We study the transfer behavior of these models as (i) the
pre-trained model size is scaled, (ii) the downstream training dataset size is
scaled, (iii) the physics parameters are systematically pushed out of
distribution, and (iv) how a single model pre-trained on a mixture of different
physics problems can be adapted to various downstream applications. We find
that, when fine-tuned appropriately, transfer learning can help reach desired
accuracy levels with orders of magnitude fewer downstream examples (across
different tasks that can even be out-of-distribution) than training from
scratch, with consistent behavior across a wide range of downstream examples.
We also find that fine-tuning these models yields more performance gains as
model size increases, compared to training from scratch on new downstream
tasks. These results hold for a broad range of PDE learning tasks. All in all,
our results demonstrate the potential of the "pre-train and fine-tune" paradigm
for SciML problems, demonstrating a path towards building SciML foundation
models. We open-source our code for reproducibility.
|
[
"cs.LG",
"cs.NA",
"math.NA"
] | false |
2306.00280
|
2023-06-01T01:52:03Z
|
Towards Bias Correction of FedAvg over Nonuniform and Time-Varying
Communications
|
[
"Ming Xiang",
"Stratis Ioannidis",
"Edmund Yeh",
"Carlee Joe-Wong",
"Lili Su"
] |
Federated learning (FL) is a decentralized learning framework wherein a
parameter server (PS) and a collection of clients collaboratively train a model
via minimizing a global objective. Communication bandwidth is a scarce
resource; in each round, the PS aggregates the updates from a subset of clients
only. In this paper, we focus on non-convex minimization that is vulnerable to
non-uniform and time-varying communication failures between the PS and the
clients. Specifically, in each round $t$, the link between the PS and client
$i$ is active with probability $p_i^t$, which is $\textit{unknown}$ to both the
PS and the clients. This arises when the channel conditions are heterogeneous
across clients and are changing over time.
We show that when the $p_i^t$'s are not uniform, $\textit{Federated Average}$
(FedAvg) -- the most widely adopted FL algorithm -- fails to minimize the
global objective. Observing this, we propose $\textit{Federated Postponed
Broadcast}$ (FedPBC) which is a simple variant of FedAvg. It differs from
FedAvg in that the PS postpones broadcasting the global model till the end of
each round. We show that FedPBC converges to a stationary point of the original
objective. The introduced staleness is mild and there is no noticeable
slowdown. Both theoretical analysis and numerical results are provided. On the
technical front, postponing the global model broadcasts enables implicit
gossiping among the clients with active links at round $t$. Although the
$p_i^t$'s are time-varying, we are able to bound the perturbation of the global
model dynamics via techniques for controlling the gossip-type information
mixing errors.
|
[
"cs.LG",
"cs.DC",
"stat.ML"
] | false |
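A toy simulation of the participation bias described in the abstract above, assuming scalar quadratic client losses $(w - t_i)^2/2$ and made-up link probabilities $p_i$; it only illustrates why averaging updates from the active clients drifts toward high-$p_i$ clients, and is not the paper's FedPBC implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, rounds, lr = 10, 4000, 0.05
t = np.linspace(-1.0, 1.0, n)      # client optima; the uniform-average optimum is 0
p = np.linspace(0.1, 0.9, n)       # non-uniform link probabilities (correlated with t here)

w, tail = 0.0, []
for r in range(rounds):
    active = rng.random(n) < p     # which client-PS links are active this round
    if not active.any():
        continue
    # each active client starts from the broadcast model and takes one local GD step
    w_local = w - lr * (w - t[active])
    w = float(w_local.mean())      # PS averages updates from active clients only
    if r >= rounds // 2:
        tail.append(w)

print("long-run FedAvg-style average:", round(float(np.mean(tail)), 3))  # drifts above 0
print("uniform global optimum       :", round(float(t.mean()), 3))
```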
2306.00281
|
2023-06-01T01:53:10Z
|
Transfer Learning for Underrepresented Music Generation
|
[
"Anahita Doosti",
"Matthew Guzdial"
] |
This paper investigates a combinational creativity approach to transfer
learning to improve the performance of deep neural network-based models for
music generation on out-of-distribution (OOD) genres. We identify Iranian folk
music as an example of such an OOD genre for MusicVAE, a large generative music
model. We find that a combinational creativity transfer learning approach can
efficiently adapt MusicVAE to an Iranian folk music dataset, indicating
potential for generating underrepresented music genres in the future.
|
[
"cs.LG",
"cs.SD",
"eess.AS"
] | false |
2306.00284
|
2023-06-01T02:04:09Z
|
Case Study-Based Approach of Quantum Machine Learning in Cybersecurity:
Quantum Support Vector Machine for Malware Classification and Protection
|
[
"Mst Shapna Akter",
"Hossain Shahriar",
"Sheikh Iqbal Ahamed",
"Kishor Datta Gupta",
"Muhammad Rahman",
"Atef Mohamed",
"Mohammad Rahman",
"Akond Rahman",
"Fan Wu"
] |
Quantum machine learning (QML) is an emerging field of research that
leverages quantum computing to improve the classical machine learning approach
to solve complex real-world problems. QML has the potential to address
cybersecurity-related challenges. Considering the novelty and complex
architecture of QML, resources that can help cybersecurity learners build
effective knowledge of this emerging technology are not yet readily available.
In this research, we design and develop ten QML-based learning modules covering
various cybersecurity topics by adopting a student-centered, case-study-based
learning approach. We apply one subtopic of QML to a cybersecurity topic
comprising pre-lab, lab, and post-lab activities to provide learners with
hands-on QML experience in solving real-world security problems. In order to
engage and motivate students in a learning environment that encourages all
students to learn, the pre-lab offers a brief introduction to both the QML
subtopic and the cybersecurity problem. In this paper, we utilize a quantum
support vector machine (QSVM) for malware classification and protection,
where we use the open-source PennyLane QML framework on the drebin215 dataset. We
demonstrate our QSVM model and achieve an accuracy of 95% in malware
classification and protection. We will develop all the modules and introduce
them to the cybersecurity community in the coming days.
|
[
"cs.CR",
"cs.LG",
"quant-ph"
] | false |
2306.00342
|
2023-06-01T04:47:17Z
|
Combining Explicit and Implicit Regularization for Efficient Learning in
Deep Networks
|
[
"Dan Zhao"
] |
Works on implicit regularization have studied gradient trajectories during
the optimization process to explain why deep networks favor certain kinds of
solutions over others. In deep linear networks, it has been shown that gradient
descent implicitly regularizes toward low-rank solutions on matrix
completion/factorization tasks. Adding depth not only improves performance on
these tasks but also acts as an accelerative pre-conditioning that further
enhances this bias towards low-rankedness. Inspired by this, we propose an
explicit penalty to mirror this implicit bias which only takes effect with
certain adaptive gradient optimizers (e.g. Adam). This combination can enable a
degenerate single-layer network to achieve low-rank approximations with
generalization error comparable to deep linear networks, making depth no longer
necessary for learning. The single-layer network also performs competitively with
or outperforms various approaches for matrix completion over a range of parameter
and data regimes despite its simplicity. Together with an optimizer's inductive
bias, our findings suggest that explicit regularization can play a role in
designing different, desirable forms of regularization and that a more nuanced
understanding of this interplay may be necessary.
|
[
"cs.LG",
"cs.AI",
"cs.NE",
"stat.ML"
] | false |
2306.00356
|
2023-06-01T05:33:41Z
|
Regularizing Towards Soft Equivariance Under Mixed Symmetries
|
[
"Hyunsu Kim",
"Hyungi Lee",
"Hongseok Yang",
"Juho Lee"
] |
Datasets often have their intrinsic symmetries, and particular deep-learning
models called equivariant or invariant models have been developed to exploit
these symmetries. However, if some or all of these symmetries are only
approximate, which frequently happens in practice, these models may be
suboptimal due to the architectural restrictions imposed on them. We tackle
this issue of approximate symmetries in a setup where symmetries are mixed,
i.e., they are symmetries not of a single type but of multiple different types,
and the degree of approximation varies across these types. Instead of proposing a new
architectural restriction as in most of the previous approaches, we present a
regularizer-based method for building a model for a dataset with mixed
approximate symmetries. The key component of our method is what we call
equivariance regularizer for a given type of symmetries, which measures how
much a model is equivariant with respect to the symmetries of the type. Our
method is trained with these regularizers, one per symmetry type, and the
strength of the regularizers is automatically tuned during training, leading to
the discovery of the approximation levels of some candidate symmetry types
without explicit supervision. Using synthetic function approximation and motion
forecasting tasks, we demonstrate that our method achieves better accuracy than
prior approaches while discovering the approximate symmetry levels correctly.
|
[
"cs.LG",
"cs.AI",
"stat.ML"
] | false |
2306.00367
|
2023-06-01T05:57:40Z
|
On the Equivalence of Consistency-Type Models: Consistency Models,
Consistent Diffusion Models, and Fokker-Planck Regularization
|
[
"Chieh-Hsin Lai",
"Yuhta Takida",
"Toshimitsu Uesaka",
"Naoki Murata",
"Yuki Mitsufuji",
"Stefano Ermon"
] |
The emergence of various notions of ``consistency'' in diffusion models has
garnered considerable attention and helped achieve improved sample quality,
likelihood estimation, and accelerated sampling. Although similar concepts have
been proposed in the literature, the precise relationships among them remain
unclear. In this study, we establish theoretical connections between three
recent ``consistency'' notions designed to enhance diffusion models for
distinct objectives. Our insights offer the potential for a more comprehensive
and encompassing framework for consistency-type models.
|
[
"cs.LG",
"cs.AI",
"math.ST",
"stat.TH"
] | false |
2306.00406
|
2023-06-01T07:12:00Z
|
Faster Robust Tensor Power Method for Arbitrary Order
|
[
"Yichuan Deng",
"Zhao Song",
"Junze Yin"
] |
Tensor decomposition is a fundamental method used in various areas to deal
with high-dimensional data. \emph{Tensor power method} (TPM) is one of the
widely-used techniques in the decomposition of tensors. This paper presents a
novel tensor power method for decomposing arbitrary order tensors, which
overcomes limitations of existing approaches that are often restricted to
lower-order (less than $3$) tensors or require strong assumptions about the
underlying data structure. We apply a sketching method and are able to
achieve a running time of $\widetilde{O}(n^{p-1})$ for an order-$p$,
dimension-$n$ tensor. We provide a detailed analysis for any $p$-th order
tensor, which has not been given in previous works.
|
[
"cs.LG",
"cs.NA",
"math.NA"
] | false |
2306.00485
|
2023-06-01T09:37:43Z
|
Causal Estimation of User Learning in Personalized Systems
|
[
"Evan Munro",
"David Jones",
"Jennifer Brennan",
"Roland Nelet",
"Vahab Mirrokni",
"Jean Pouget-Abadie"
] |
In online platforms, the impact of a treatment on an observed outcome may
change over time as 1) users learn about the intervention, and 2) the system
personalization, such as individualized recommendations, changes over time. We
introduce a non-parametric causal model of user actions in a personalized
system. We show that the Cookie-Cookie-Day (CCD) experiment, designed for the
measurement of the user learning effect, is biased when there is
personalization. We derive new experimental designs that intervene in the
personalization system to generate the variation necessary to separately
identify the causal effect mediated through user learning and personalization.
Making parametric assumptions allows for the estimation of long-term causal
effects based on medium-term experiments. In simulations, we show that our new
designs successfully recover the dynamic causal effects of interest.
|
[
"stat.ME",
"cs.LG",
"econ.EM"
] | false |
2306.00528
|
2023-06-01T10:28:49Z
|
Neuronal Cell Type Classification using Deep Learning
|
[
"Ofek Ophir",
"Orit Shefi",
"Ofir Lindenbaum"
] |
The brain is likely the most complex organ, given the variety of functions it
controls, the number of cells it comprises, and their corresponding diversity.
Studying and identifying neurons, the brain's primary building blocks, is a
crucial milestone and essential for understanding brain function in health and
disease. Recent developments in machine learning have provided advanced
abilities for classifying neurons. However, these methods remain black boxes
with no explainability and reasoning. This paper aims to provide a robust and
explainable deep-learning framework to classify neurons based on their
electrophysiological activity. Our analysis is performed on data provided by
the Allen Cell Types database containing a survey of biological features
derived from single-cell recordings of mice and humans. First, we classify
neuronal cell types of mice data to identify excitatory and inhibitory neurons.
Then, neurons are categorized into their broad types in humans using domain
adaptation from mice data. Lastly, neurons are classified into sub-types based
on transgenic mouse lines using deep neural networks in an explainable fashion.
We show state-of-the-art results in a dendrite-type classification of
excitatory vs. inhibitory neurons and transgenic mouse lines classification.
The model is also inherently interpretable, revealing the correlations between
neuronal types and their electrophysiological properties.
|
[
"cs.LG",
"eess.SP",
"q-bio.NC"
] | false |
2306.00563
|
2023-06-01T11:22:04Z
|
Machine Learning and Kalman Filtering for Nanomechanical Mass
Spectrometry
|
[
"Mete Erdogan",
"Nuri Berke Baytekin",
"Serhat Emre Coban",
"Alper Demir"
] |
Nanomechanical resonant sensors are used in mass spectrometry via detection
of resonance frequency jumps. There is a fundamental trade-off between
detection speed and accuracy. Temporal and size resolution are limited by the
resonator characteristics and noise. A Kalman filtering technique, augmented
with maximum-likelihood estimation, was recently proposed as a Pareto optimal
solution. We present enhancements and robust realizations for this technique,
including a confidence boosted thresholding approach as well as machine
learning for event detection. We describe learning techniques that are based on
neural networks and boosted decision trees for temporal location and event size
estimation. In the pure learning based approach that discards the Kalman
filter, the raw data from the sensor are used in training a model for both
location and size prediction. In the alternative approach that augments a
Kalman filter, the event likelihood history is used in a binary classifier for
event occurrence. Locations and sizes are predicted using maximum-likelihood,
followed by a Kalman filter that continually improves the size estimate. We
present detailed comparisons of the learning based schemes and the confidence
boosted thresholding approach, and demonstrate robust performance for a
practical realization.
|
[
"physics.ins-det",
"cs.LG",
"physics.app-ph"
] | false |
2306.00578
|
2023-06-01T11:49:43Z
|
Does Black-box Attribute Inference Attacks on Graph Neural Networks
Constitute Privacy Risk?
|
[
"Iyiola E. Olatunji",
"Anmar Hizber",
"Oliver Sihlovec",
"Megha Khosla"
] |
Graph neural networks (GNNs) have shown promising results on real-life
datasets and applications, including healthcare, finance, and education.
However, recent studies have shown that GNNs are highly vulnerable to attacks
such as membership inference attack and link reconstruction attack.
Surprisingly, attribute inference attacks have received little attention. In
this paper, we initiate the first investigation into attribute inference attacks,
where an attacker aims to infer sensitive user attributes based on her
public or non-sensitive attributes. We ask whether black-box
attribute inference attacks constitute a significant privacy risk for
graph-structured data and their corresponding GNN model. We take a systematic
approach to launch the attacks by varying the adversarial knowledge and
assumptions. Our findings reveal that when an attacker has black-box access to
the target model, GNNs generally do not reveal significantly more information
compared to missing value estimation techniques. Code is available.
|
[
"cs.LG",
"cs.AI",
"cs.CR"
] | false |
2306.00624
|
2023-06-01T12:46:06Z
|
From Temporal to Contemporaneous Iterative Causal Discovery in the
Presence of Latent Confounders
|
[
"Raanan Y. Rohekar",
"Shami Nisimov",
"Yaniv Gurwicz",
"Gal Novik"
] |
We present a constraint-based algorithm for learning causal structures from
observational time-series data, in the presence of latent confounders. We
assume a discrete-time, stationary structural vector autoregressive process,
with both temporal and contemporaneous causal relations. One may ask if
temporal and contemporaneous relations should be treated differently. The
presented algorithm gradually refines a causal graph by learning long-term
temporal relations before short-term ones, where contemporaneous relations are
learned last. This ordering of causal relations to be learnt leads to a
reduction in the required number of statistical tests. We validate this
reduction empirically and demonstrate that it leads to higher accuracy for
synthetic data and more plausible causal graphs for real-world data compared to
state-of-the-art algorithms.
|
[
"cs.AI",
"cs.LG",
"stat.ML"
] | false |
2306.00636
|
2023-06-01T13:00:13Z
|
Unfair Utilities and First Steps Towards Improving Them
|
[
"Frederik Hytting Jørgensen",
"Sebastian Weichwald",
"Jonas Peters"
] |
Many fairness criteria constrain the policy or choice of predictors. In this
work, we propose a different framework for thinking about fairness: Instead of
constraining the policy or choice of predictors, we consider which utility a
policy is optimizing for. We define value of information fairness and propose
to not use utilities that do not satisfy this criterion. We describe how to
modify a utility to satisfy this fairness criterion and discuss the
consequences this might have on the corresponding optimal policies.
|
[
"stat.ML",
"cs.CY",
"cs.LG"
] | false |
2306.00638
|
2023-06-01T13:01:13Z
|
Byzantine-Robust Clustered Federated Learning
|
[
"Zhixu Tao",
"Kun Yang",
"Sanjeev R. Kulkarni"
] |
This paper focuses on the problem of adversarial attacks from Byzantine
machines in a Federated Learning setting where non-Byzantine machines can be
partitioned into disjoint clusters. In this setting, non-Byzantine machines in
the same cluster have the same underlying data distribution, and different
clusters of non-Byzantine machines have different learning tasks. Byzantine
machines can adversarially attack any cluster and disturb the training process
on clusters they attack. In the presence of Byzantine machines, the goal of our
work is to identify cluster membership of non-Byzantine machines and optimize
the models learned by each cluster. We adopt the Iterative Federated Clustering
Algorithm (IFCA) framework of Ghosh et al. (2020) to alternately estimate
cluster membership and optimize models. In order to make this framework robust
against adversarial attacks from Byzantine machines, we use coordinate-wise
trimmed mean and coordinate-wise median aggregation methods used by Yin et al.
(2018). Specifically, we propose a new Byzantine-Robust Iterative Federated
Clustering Algorithm to improve on the results in Ghosh et al. (2019). We prove
a convergence rate for this algorithm for strongly convex loss functions. We
compare our convergence rate with the convergence rate of an existing
algorithm, and we demonstrate the performance of our algorithm on simulated
data.
|
[
"stat.ML",
"cs.DC",
"cs.LG"
] | false |
2306.00651
|
2023-06-01T13:17:29Z
|
Learning Prescriptive ReLU Networks
|
[
"Wei Sun",
"Asterios Tsiourvas"
] |
We study the problem of learning optimal policy from a set of discrete
treatment options using observational data. We propose a piecewise linear
neural network model that can balance strong prescriptive performance and
interpretability, which we refer to as the prescriptive ReLU network, or
P-ReLU. We show analytically that this model (i) partitions the input space
into disjoint polyhedra, where all instances that belong to the same partition
receive the same treatment, and (ii) can be converted into an equivalent
prescriptive tree with hyperplane splits for interpretability. We demonstrate
the flexibility of the P-ReLU network as constraints can be easily incorporated
with minor modifications to the architecture. Through experiments, we validate
the superior prescriptive accuracy of P-ReLU against competing benchmarks.
Lastly, we present examples of interpretable prescriptive trees extracted from
trained P-ReLUs using a real-world dataset, for both the unconstrained and
constrained scenarios.
|
[
"cs.LG",
"stat.AP",
"stat.ML"
] | false |
2306.00689
|
2023-06-01T14:00:47Z
|
Stuttering Detection Using Speaker Representations and Self-supervised
Contextual Embeddings
|
[
"Shakeel A. Sheikh",
"Md Sahidullah",
"Fabrice Hirsch",
"Slim Ouni"
] |
The adoption of advanced deep learning architectures in stuttering detection
(SD) tasks is challenging due to the limited size of the available datasets. To
this end, this work introduces the application of speech embeddings extracted
from pre-trained deep learning models trained on large audio datasets for
different tasks. In particular, we explore audio representations obtained using
emphasized channel attention, propagation, and aggregation time delay neural
network (ECAPA-TDNN) and Wav2Vec2.0 models trained on VoxCeleb and LibriSpeech
datasets respectively. After extracting the embeddings, we benchmark with
several traditional classifiers, such as the K-nearest neighbour (KNN),
Gaussian naive Bayes, and neural network, for the SD tasks. In comparison to
the standard SD systems trained only on the limited SEP-28k dataset, we obtain
a relative improvement of 12.08%, 28.71%, 37.9% in terms of unweighted average
recall (UAR) over the baselines. Finally, we have shown that combining two
embeddings and concatenating multiple layers of Wav2Vec2.0 can further improve
the UAR by up to 2.60% and 6.32% respectively.
|
[
"cs.SD",
"cs.LG",
"eess.AS"
] | false |
2306.00750
|
2023-06-01T14:45:28Z
|
End-to-End Document Classification and Key Information Extraction using
Assignment Optimization
|
[
"Ciaran Cooney",
"Joana Cavadas",
"Liam Madigan",
"Bradley Savage",
"Rachel Heyburn",
"Mairead O'Cuinn"
] |
We propose end-to-end document classification and key information extraction
(KIE) for automating document processing in forms. Through accurate document
classification we harness known information from templates to enhance KIE from
forms. We use text and layout encoding with a cosine similarity measure to
classify visually-similar documents. We then demonstrate a novel application of
mixed integer programming by using assignment optimization to extract key
information from documents. Our approach is validated on an in-house dataset of
noisy scanned forms. The best performing document classification approach
achieved 0.97 f1 score. A mean f1 score of 0.94 for the KIE task suggests there
is significant potential in applying optimization techniques. Ablation results
show that the method relies on document preprocessing techniques to mitigate
Type II errors and achieve optimal performance.
|
[
"cs.IR",
"cs.AI",
"cs.LG"
] | false |
2306.00794
|
2023-06-01T15:25:14Z
|
SlothSpeech: Denial-of-service Attack Against Speech Recognition Models
|
[
"Mirazul Haque",
"Rutvij Shah",
"Simin Chen",
"Berrak Şişman",
"Cong Liu",
"Wei Yang"
] |
Deep Learning (DL) models are widely used nowadays to execute different
speech-related tasks, including automatic speech recognition (ASR). As ASR is
being used in different real-time scenarios, it is important that the ASR model
remains efficient against minor perturbations to the input. Hence, evaluating
the efficiency robustness of ASR models is of pressing importance. We show that
popular ASR models like Speech2Text model and Whisper model have dynamic
computation based on different inputs, causing dynamic efficiency. In this
work, we propose SlothSpeech, a denial-of-service attack against ASR models,
which exploits the dynamic behaviour of the model. SlothSpeech uses the
probability distribution of the output text tokens to generate perturbations to
the audio such that efficiency of the ASR model is decreased. We find that
SlothSpeech-generated inputs can increase the latency up to 40X relative to the
latency induced by benign input.
|
[
"cs.SD",
"cs.CR",
"cs.LG",
"eess.AS"
] | false |
2306.00810
|
2023-06-01T15:37:47Z
|
Deep Operator Learning-based Surrogate Models with Uncertainty
Quantification for Optimizing Internal Cooling Channel Rib Profiles
|
[
"Izzet Sahin",
"Christian Moya",
"Amirhossein Mollaali",
"Guang Lin",
"Guillermo Paniagua"
] |
This paper designs surrogate models with uncertainty quantification
capabilities to improve the thermal performance of rib-turbulated internal
cooling channels effectively. To construct the surrogate, we use the deep
operator network (DeepONet) framework, a novel class of neural networks
designed to approximate mappings between infinite-dimensional spaces using
relatively small datasets. The proposed DeepONet takes an arbitrary continuous
rib geometry with control points as input and outputs continuous detailed
information about the distribution of pressure and heat transfer around the
profiled ribs. The datasets needed to train and test the proposed DeepONet
framework were obtained by simulating a 2D rib-roughened internal cooling
channel. To accomplish this, we continuously modified the input rib geometry by
adjusting the control points according to a simple random distribution with
constraints, rather than following a predefined path or sampling method. The
studied channel has a hydraulic diameter, Dh, of 66.7 mm, and a
length-to-hydraulic diameter ratio, L/Dh, of 10. The ratio of rib center height
to hydraulic diameter (e/Dh), which was not changed during the rib profile
update, was maintained at a constant value of 0.048. The ribs were placed in
the channel with a pitch-to-height ratio (P/e) of 10. In addition, we provide
the proposed surrogates with effective uncertainty quantification capabilities.
This is achieved by converting the DeepONet framework into a Bayesian DeepONet
(B-DeepONet). B-DeepONet samples from the posterior distribution of DeepONet
parameters using the novel framework of stochastic gradient replica-exchange
MCMC.
|
[
"physics.flu-dyn",
"cs.LG",
"cs.NA",
"math.NA"
] | false |
2306.00872
|
2023-06-01T16:32:53Z
|
Is novelty predictable?
|
[
"Clara Fannjiang",
"Jennifer Listgarten"
] |
Machine learning-based design has gained traction in the sciences, most
notably in the design of small molecules, materials, and proteins, with
societal implications spanning drug development and manufacturing, plastic
degradation, and carbon sequestration. When designing objects to achieve novel
property values with machine learning, one faces a fundamental challenge: how
to push past the frontier of current knowledge, distilled from the training
data into the model, in a manner that rationally controls the risk of failure.
If one trusts learned models too much in extrapolation, one is likely to design
rubbish. In contrast, if one does not extrapolate, one cannot find novelty.
Herein, we ponder how one might strike a useful balance between these two
extremes. We focus in particular on designing proteins with novel property
values, although much of our discussion addresses machine learning-based design
more broadly.
|
[
"cs.LG",
"q-bio.BM",
"q-bio.QM"
] | false |
2306.00958
|
2023-06-01T17:52:23Z
|
LIV: Language-Image Representations and Rewards for Robotic Control
|
[
"Yecheng Jason Ma",
"William Liang",
"Vaidehi Som",
"Vikash Kumar",
"Amy Zhang",
"Osbert Bastani",
"Dinesh Jayaraman"
] |
We present Language-Image Value learning (LIV), a unified objective for
vision-language representation and reward learning from action-free videos with
text annotations. Exploiting a novel connection between dual reinforcement
learning and mutual information contrastive learning, the LIV objective trains
a multi-modal representation that implicitly encodes a universal value function
for tasks specified as language or image goals. We use LIV to pre-train the
first control-centric vision-language representation from large human video
datasets such as EpicKitchen. Given only a language or image goal, the
pre-trained LIV model can assign dense rewards to each frame in videos of
unseen robots or humans attempting that task in unseen environments. Further,
when some target domain-specific data is available, the same objective can be
used to fine-tune and improve LIV and even other pre-trained representations
for robotic control and reward specification in that domain. In our experiments
on several simulated and real-world robot environments, LIV models consistently
outperform the best prior input state representations for imitation learning,
as well as reward specification methods for policy synthesis. Our results
validate the advantages of joint vision-language representation and reward
learning within the unified, compact LIV framework.
|
[
"cs.RO",
"cs.AI",
"cs.LG"
] | false |
2306.01012
|
2023-06-01T01:50:37Z
|
Graph-Level Embedding for Time-Evolving Graphs
|
[
"Lili Wang",
"Chenghan Huang",
"Weicheng Ma",
"Xinyuan Cao",
"Soroush Vosoughi"
] |
Graph representation learning (also known as network embedding) has been
extensively researched with varying levels of granularity, ranging from nodes
to graphs. While most prior work in this area focuses on node-level
representation, limited research has been conducted on graph-level embedding,
particularly for dynamic or temporal networks. However, learning
low-dimensional graph-level representations for dynamic networks is critical
for various downstream graph retrieval tasks such as temporal graph similarity
ranking, temporal graph isomorphism, and anomaly detection. In this paper, we
present a novel method for temporal graph-level embedding that addresses this
gap. Our approach involves constructing a multilayer graph and using a modified
random walk with temporal backtracking to generate temporal contexts for the
graph's nodes. We then train a "document-level" language model on these
contexts to generate graph-level embeddings. We evaluate our proposed model on
five publicly available datasets for the task of temporal graph similarity
ranking, and our model outperforms baseline methods. Our experimental results
demonstrate the effectiveness of our method in generating graph-level
embeddings for dynamic networks.
|
[
"cs.LG",
"cs.AI",
"cs.SI"
] | false |
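The "document-level" language model step above can be illustrated with gensim's Doc2Vec, treating the temporal walk contexts of each graph snapshot as one document tagged with the graph's id. The walks below are hypothetical placeholders; the paper generates them with a modified random walk with temporal backtracking over a multilayer graph, which is not reproduced here.

from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# hypothetical temporal-walk contexts, one list of walks per graph snapshot
walks_per_graph = {
    "g0": [["a", "b", "c"], ["b", "a", "d"]],
    "g1": [["a", "c", "c"], ["d", "b", "a"]],
}
docs = [
    TaggedDocument(words=[tok for walk in walks for tok in walk], tags=[gid])
    for gid, walks in walks_per_graph.items()
]

model = Doc2Vec(docs, vector_size=16, window=3, min_count=1, epochs=50)
emb_g0 = model.dv["g0"]   # graph-level embedding of snapshot g0
# embeddings of different snapshots can then be compared for similarity ranking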
2306.01027
|
2023-06-01T13:33:26Z
|
An FPGA Architecture for Online Learning using the Tsetlin Machine
|
[
"Samuel Prescott",
"Adrian Wheeldon",
"Rishad Shafik",
"Tousif Rahman",
"Alex Yakovlev",
"Ole-Christoffer Granmo"
] |
There is a need for machine learning models to evolve in unsupervised
circumstances. New classifications may be introduced, unexpected faults may
occur, or the initial dataset may be small compared to the data-points
presented to the system during normal operation. Implementing such a system
using neural networks involves significant mathematical complexity, which is a
major issue in power-critical edge applications.
This paper proposes a novel field-programmable gate-array infrastructure for
online learning, implementing a low-complexity machine learning algorithm
called the Tsetlin Machine. This infrastructure features a custom-designed
architecture for run-time learning management, providing on-chip offline and
online learning. Using this architecture, training can be carried out on-demand
on the FPGA with pre-classified data before inference takes place.
Additionally, our architecture provisions online learning, where training can
be interleaved with inference during operation. Tsetlin Machine (TM) training
naturally descends to an optimum, with training also linked to a threshold
hyper-parameter which is used to reduce the probability of issuing feedback as
the TM becomes trained further. The proposed architecture is modular, allowing
the data input source to be easily changed, whilst inbuilt cross-validation
infrastructure allows for reliable and representative results during system
testing. We present use cases for online learning using the proposed
infrastructure and demonstrate the energy/performance/accuracy trade-offs.
|
[
"cs.LG",
"cs.AI",
"cs.AR"
] | false |
2306.01089
|
2023-06-01T19:02:50Z
|
Semi-supervised Community Detection via Structural Similarity Metrics
|
[
"Yicong Jiang",
"Tracy Ke"
] |
Motivated by social network analysis and network-based recommendation
systems, we study a semi-supervised community detection problem in which the
objective is to estimate the community label of a new node using the network
topology and partially observed community labels of existing nodes. The network
is modeled using a degree-corrected stochastic block model, which allows for
severe degree heterogeneity and potentially non-assortative communities. We
propose an algorithm that computes a `structural similarity metric' between the
new node and each of the $K$ communities by aggregating labeled and unlabeled
data. The estimated label of the new node corresponds to the value of $k$ that
maximizes this similarity metric. Our method is fast and numerically
outperforms existing semi-supervised algorithms. Theoretically, we derive
explicit bounds for the misclassification error and show the efficiency of our
method by comparing it with an ideal classifier. Our findings highlight, to the
best of our knowledge, the first semi-supervised community detection algorithm
that offers theoretical guarantees.
|
[
"cs.SI",
"cs.LG",
"stat.ME",
"stat.ML"
] | false |
2306.01100
|
2023-06-01T19:23:38Z
|
ALO-VC: Any-to-any Low-latency One-shot Voice Conversion
|
[
"Bohan Wang",
"Damien Ronssin",
"Milos Cernak"
] |
This paper presents ALO-VC, a non-parallel, low-latency, one-shot voice conversion method based on phonetic posteriorgrams (PPGs). ALO-VC enables any-to-any
voice conversion using only one utterance from the target speaker, with only
47.5 ms future look-ahead. The proposed hybrid signal processing and machine
learning pipeline combines a pre-trained speaker encoder, a pitch predictor to
predict the converted speech's prosody, and positional encoding to convey the
phoneme's location information. We introduce two system versions: ALO-VC-R,
which uses a pre-trained d-vector speaker encoder, and ALO-VC-E, which improves
performance using the ECAPA-TDNN speaker encoder. The experimental results
demonstrate both ALO-VC-R and ALO-VC-E can achieve comparable performance to
non-causal baseline systems on the VCTK dataset and two out-of-domain datasets.
Furthermore, both proposed systems can be deployed on a single CPU core with 55
ms latency and 0.78 real-time factor. Our demo is available online.
|
[
"eess.AS",
"cs.LG",
"cs.SD"
] | false |
2306.01122
|
2023-06-01T20:19:30Z
|
On the Convergence of Coordinate Ascent Variational Inference
|
[
"Anirban Bhattacharya",
"Debdeep Pati",
"Yun Yang"
] |
As a computational alternative to Markov chain Monte Carlo approaches,
variational inference (VI) is becoming more and more popular for approximating
intractable posterior distributions in large-scale Bayesian models due to its
comparable efficacy and superior efficiency. Several recent works provide
theoretical justifications of VI by proving its statistical optimality for
parameter estimation under various settings; meanwhile, formal analysis on the
algorithmic convergence aspects of VI is still largely lacking. In this paper,
we consider the common coordinate ascent variational inference (CAVI) algorithm
for implementing the mean-field (MF) VI towards optimizing a Kullback--Leibler
divergence objective functional over the space of all factorized distributions.
Focusing on the two-block case, we analyze the convergence of CAVI by
leveraging the extensive toolbox from functional analysis and optimization. We
provide general conditions for certifying global or local exponential
convergence of CAVI. Specifically, a new notion of generalized correlation for
characterizing the interaction between the constituting blocks in influencing
the VI objective functional is introduced, which according to the theory,
quantifies the algorithmic contraction rate of two-block CAVI. As
illustrations, we apply the developed theory to a number of examples, and
derive explicit problem-dependent upper bounds on the algorithmic contraction
rate.
|
[
"stat.ML",
"cs.LG",
"math.ST",
"stat.TH"
] | false |
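For reference, the two-block CAVI iteration analyzed above alternates the standard mean-field updates, each of which maximizes the ELBO over one factor while the other is held fixed (written here for a factorization $q(z_1,z_2)=q_1(z_1)\,q_2(z_2)$ and data $x$):

\[
q_1^{(t+1)}(z_1) \propto \exp\!\Big(\mathbb{E}_{z_2 \sim q_2^{(t)}}\big[\log p(z_1, z_2, x)\big]\Big),
\qquad
q_2^{(t+1)}(z_2) \propto \exp\!\Big(\mathbb{E}_{z_1 \sim q_1^{(t+1)}}\big[\log p(z_1, z_2, x)\big]\Big).
\]

The paper's generalized correlation between the two blocks quantifies how fast these alternating updates contract.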
2306.01143
|
2023-06-01T20:56:02Z
|
Federated Graph Learning for Low Probability of Detection in Wireless
Ad-Hoc Networks
|
[
"Sivaram Krishnan",
"Jihong Park",
"Subhash Sagar",
"Gregory Sherman",
"Benjamin Campbell",
"Jinho Choi"
] |
Low probability of detection (LPD) has recently emerged as a means to enhance
the privacy and security of wireless networks. Unlike existing wireless
security techniques, LPD measures aim to conceal the entire existence of
wireless communication instead of safeguarding the information transmitted from
users. Motivated by LPD communication, in this paper, we study a
privacy-preserving and distributed framework based on graph neural networks to
minimise the detectability of a wireless ad-hoc network as a whole and predict
an optimal communication region for each node in the wireless network, allowing
them to communicate while remaining undetected from external actors. We also
demonstrate the effectiveness of the proposed method in terms of two
performance measures, i.e., mean absolute error and median absolute error.
|
[
"cs.LG",
"cs.CR",
"cs.NI"
] | false |
2306.01174
|
2023-06-01T22:16:28Z
|
Neural Ideal Large Eddy Simulation: Modeling Turbulence with Neural
Stochastic Differential Equations
|
[
"Anudhyan Boral",
"Zhong Yi Wan",
"Leonardo Zepeda-Núñez",
"James Lottes",
"Qing Wang",
"Yi-fan Chen",
"John Roberts Anderson",
"Fei Sha"
] |
We introduce a data-driven learning framework that assimilates two powerful
ideas: ideal large eddy simulation (LES) from turbulence closure modeling and
neural stochastic differential equations (SDE) for stochastic modeling. The
ideal LES models the LES flow by treating each full-order trajectory as a
random realization of the underlying dynamics; as such, the effect of the small scales is marginalized to obtain the deterministic evolution of the LES
state. However, ideal LES is analytically intractable. In our work, we use a
latent neural SDE to model the evolution of the stochastic process and an
encoder-decoder pair for transforming between the latent space and the desired
ideal flow field. This stands in sharp contrast to other types of neural
parameterization of closure models where each trajectory is treated as a
deterministic realization of the dynamics. We show the effectiveness of our
approach (niLES - neural ideal LES) on a challenging chaotic dynamical system:
Kolmogorov flow at a Reynolds number of 20,000. Compared to competing methods,
our method can handle non-uniform geometries using unstructured meshes
seamlessly. In particular, niLES leads to trajectories with more accurate
statistics and enhances stability, particularly for long-horizon rollouts.
|
[
"cs.LG",
"cs.NA",
"math.NA"
] | false |
2306.01196
|
2023-06-01T23:22:46Z
|
An Effective Meaningful Way to Evaluate Survival Models
|
[
"Shi-ang Qi",
"Neeraj Kumar",
"Mahtab Farrokh",
"Weijie Sun",
"Li-Hao Kuan",
"Rajesh Ranganath",
"Ricardo Henao",
"Russell Greiner"
] |
One straightforward metric to evaluate a survival prediction model is based
on the Mean Absolute Error (MAE) -- the average of the absolute difference
between the time predicted by the model and the true event time, over all
subjects. Unfortunately, this is challenging because, in practice, the test set
includes (right) censored individuals, meaning we do not know when a censored
individual actually experienced the event. In this paper, we explore various
metrics to estimate MAE for survival datasets that include (many) censored
individuals. Moreover, we introduce a novel and effective approach for
generating realistic semi-synthetic survival datasets to facilitate the
evaluation of metrics. Our findings, based on the analysis of the
semi-synthetic datasets, reveal that our proposed metric (MAE using
pseudo-observations) is able to rank models accurately based on their
performance, and often closely matches the true MAE -- in particular, is better
than several alternative methods.
|
[
"cs.LG",
"cs.AI",
"stat.ML"
] | false |
2306.01799
|
2023-06-01T15:42:50Z
|
Pairwise Ranking Losses of Click-Through Rates Prediction for Welfare
Maximization in Ad Auctions
|
[
"Boxiang Lyu",
"Zhe Feng",
"Zachary Robertson",
"Sanmi Koyejo"
] |
We study the design of loss functions for click-through rates (CTR) to
optimize (social) welfare in advertising auctions. Existing works either only
focus on CTR predictions without consideration of business objectives (e.g.,
welfare) in auctions or assume that the distribution over the participants'
expected cost-per-impression (eCPM) is known a priori, then use various
additional assumptions on the parametric form of the distribution to derive
loss functions for predicting CTRs. In this work, we bring back the welfare
objectives of ad auctions into CTR predictions and propose a novel weighted
rankloss to train the CTR model. Compared to existing literature, our approach
provides a provable guarantee on welfare but without assumptions on the eCPMs'
distribution while also avoiding the intractability of naively applying
existing learning-to-rank methods. Further, we propose a theoretically
justifiable technique for calibrating the losses using labels generated from a
teacher network, only assuming that the teacher network has bounded $\ell_2$
generalization error. Finally, we demonstrate the advantages of the proposed
loss on synthetic and real-world data.
|
[
"cs.GT",
"cs.IR",
"cs.LG"
] | false |
2306.01802
|
2023-06-01T16:31:36Z
|
Linear Time GPs for Inferring Latent Trajectories from Neural Spike
Trains
|
[
"Matthew Dowling",
"Yuan Zhao",
"Il Memming Park"
] |
Latent Gaussian process (GP) models are widely used in neuroscience to
uncover hidden state evolutions from sequential observations, mainly in neural
activity recordings. While latent GP models provide a principled and powerful
solution in theory, the intractable posterior in non-conjugate settings
necessitates approximate inference schemes, which may lack scalability. In this
work, we propose cvHM, a general inference framework for latent GP models
leveraging Hida-Mat\'ern kernels and conjugate computation variational
inference (CVI). With cvHM, we are able to perform variational inference of
latent neural trajectories with linear time complexity for arbitrary
likelihoods. The reparameterization of stationary kernels using Hida-Mat\'ern
GPs helps us connect the latent variable models that encode prior assumptions
through dynamical systems to those that encode trajectory assumptions through
GPs. In contrast to previous work, we use bidirectional information filtering,
leading to a more concise implementation. Furthermore, we employ the Whittle
approximate likelihood to achieve highly efficient hyperparameter learning.
|
[
"q-bio.NC",
"cs.LG",
"stat.AP",
"stat.ML"
] | false |
2306.03941
|
2023-06-01T18:26:37Z
|
A scientometric analysis of the effect of COVID-19 on the spread of
research outputs
|
[
"Gianpaolo Zammarchi",
"Andrea Carta",
"Silvia Columbu",
"Luca Frigau",
"Monica Musio"
] |
The spread of the SARS-CoV-2 pandemic in 2020 had a huge impact on the life
course of all of us. This rapid spread has also caused an increase in the
research production in topics related to COVID-19 with regard to different
aspects. Italy has, unfortunately, been one of the first countries to be
massively involved in the outbreak of the disease. In this paper we present an
extensive scientometric analysis of the research production both at global
(entire literature produced in the first 2 years after the beginning of the
pandemic) and local level (COVID-19 literature produced by authors with an
Italian affiliation). Our results showed that the US and China are the most active
countries in terms of number of publications and that the number of
collaborations between institutions varies according to geographical distance.
Moreover, we identified the medical-biological fields as those with the greatest growth in terms of literature production. Furthermore, we explored in more depth the relationship between the number of citations and variables obtained from the data set (e.g. the number of authors per article). Using multiple
correspondence analysis and quantile regression we shed light on the role of
journal topics and impact factor, the type of article, the field of study and
how these elements affect citations.
|
[
"cs.DL",
"cs.LG",
"physics.soc-ph"
] | false |
2306.07981
|
2023-06-01T01:44:49Z
|
Feature Engineering-Based Detection of Buffer Overflow Vulnerability in
Source Code Using Neural Networks
|
[
"Mst Shapna Akter",
"Hossain Shahriar",
"Juan Rodriguez Cardenas",
"Sheikh Iqbal Ahamed",
"Alfredo Cuzzocrea"
] |
One of the most significant challenges in the field of software code auditing
is the presence of vulnerabilities in software source code. Every year, more
and more software flaws are discovered, either internally in proprietary code
or publicly disclosed. These flaws are highly likely to be exploited and can
lead to system compromise, data leakage, or denial of service. To create a
large-scale machine learning system for function level vulnerability
identification, we utilized a sizable dataset of C and C++ open-source code
containing millions of functions with potential buffer overflow exploits. We
have developed an efficient and scalable vulnerability detection method based
on neural network models that learn features extracted from the source codes.
The source code is first converted into an intermediate representation to
remove unnecessary components and shorten dependencies. We maintain the
semantic and syntactic information using state of the art word embedding
algorithms such as GloVe and fastText. The embedded vectors are subsequently
fed into neural networks such as LSTM, BiLSTM, LSTM Autoencoder, word2vec,
BERT, and GPT2 to classify the possible vulnerabilities. Furthermore, we have
proposed a neural network model that can overcome issues associated with
traditional neural networks. We have used evaluation metrics such as F1 score,
precision, recall, accuracy, and total execution time to measure the
performance. We have conducted a comparative analysis between results derived
from features containing a minimal text representation and semantic and
syntactic information.
|
[
"cs.CR",
"cs.LG",
"cs.SE"
] | false |
2306.17169
|
2023-06-01T04:11:22Z
|
Enterprise Disk Drive Scrubbing Based on Mondrian Conformal Predictors
|
[
"Rahul Vishwakarma",
"Jinha Hwang",
"Soundouss Messoudi",
"Ava Hedayatipour"
] |
Disk scrubbing is a process aimed at resolving read errors on disks by
reading data from the disk. However, scrubbing the entire storage array at once
can adversely impact system performance, particularly during periods of high
input/output operations. Additionally, the continuous reading of data from
disks when scrubbing can result in wear and tear, especially on larger capacity
disks, due to the significant time and energy consumption involved. To address
these issues, we propose a selective disk scrubbing method that enhances the
overall reliability and power efficiency in data centers. Our method employs a
Machine Learning model based on Mondrian Conformal prediction to identify
specific disks for scrubbing, by proactively predicting the health status of
each disk in the storage pool, forecasting n-days in advance, and using an
open-source dataset. For disks predicted as non-healthy, we mark them for
replacement without further action. For healthy drives, we create a set and
quantify their relative health across the entire storage pool based on the
predictor's confidence. This enables us to prioritize selective scrubbing for
drives with established scrubbing frequency based on the scrub cycle. The
method we propose provides an efficient and dependable solution for managing
enterprise disk drives. By scrubbing just 22.7% of the total storage disks, we
can achieve optimized energy consumption and reduce the carbon footprint of the
data center.
|
[
"cs.DC",
"cs.AI",
"cs.LG"
] | false |
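As a rough illustration of the Mondrian (class-conditional) conformal step described above, the snippet below computes per-class conformal p-values for a disk from calibration nonconformity scores; the scores and the healthy/failing labels are made-up toy values, and the mapping from p-values to the scrubbing decision is only an assumption.

import numpy as np

def mondrian_p_value(cal_scores, cal_labels, test_score, label):
    """Class-conditional conformal p-value: the test score is compared
    only against calibration disks carrying the same label."""
    scores = cal_scores[cal_labels == label]
    return (np.sum(scores >= test_score) + 1) / (len(scores) + 1)

# toy nonconformity scores for healthy (0) and failing (1) calibration disks
cal_scores = np.array([0.10, 0.30, 0.20, 0.80, 0.90, 0.70])
cal_labels = np.array([0, 0, 0, 1, 1, 1])

p_healthy = mondrian_p_value(cal_scores, cal_labels, test_score=0.25, label=0)
p_failing = mondrian_p_value(cal_scores, cal_labels, test_score=0.25, label=1)
# a high p_healthy with a low p_failing would keep the drive in the pool of
# candidates whose selective scrubbing is then prioritized by confidence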
2306.00352
|
2023-06-01T05:15:34Z
|
Improving Energy Conserving Descent for Machine Learning: Theory and
Practice
|
[
"G. Bruno De Luca",
"Alice Gatti",
"Eva Silverstein"
] |
We develop the theory of Energy Conserving Descent (ECD) and introduce
ECDSep, a gradient-based optimization algorithm able to tackle convex and
non-convex optimization problems. The method is based on the novel ECD
framework of optimization as physical evolution of a suitable chaotic
energy-conserving dynamical system, enabling analytic control of the
distribution of results - dominated at low loss - even for generic
high-dimensional problems with no symmetries. Compared to previous realizations
of this idea, we exploit the theoretical control to improve both the dynamics
and chaos-inducing elements, enhancing performance while simplifying the
hyper-parameter tuning of the optimization algorithm targeted to different
classes of problems. We empirically compare with popular optimization methods
such as SGD, Adam and AdamW on a wide range of machine learning problems,
finding competitive or improved performance compared to the best among them on
each task. We identify limitations in our analysis pointing to possibilities
for additional improvements.
|
[
"cs.LG",
"astro-ph.IM",
"hep-th",
"math.OC",
"stat.ML"
] | false |
2306.00361
|
2023-06-01T05:41:31Z
|
Sharded Bayesian Additive Regression Trees
|
[
"Hengrui Luo",
"Matthew T. Pratola"
] |
In this paper we develop the randomized Sharded Bayesian Additive Regression
Trees (SBT) model. We introduce a randomization auxiliary variable and a
sharding tree to decide partitioning of data, and fit each partition component
to a sub-model using Bayesian Additive Regression Tree (BART). By observing
that the optimal design of a sharding tree can determine optimal sharding for
sub-models on a product space, we introduce an intersection tree structure to
completely specify both the sharding and modeling using only tree structures.
In addition to experiments, we also derive the theoretical optimal weights for
minimizing posterior contractions and prove the worst-case complexity of SBT.
|
[
"stat.ML",
"cs.LG",
"math.ST",
"stat.ME",
"stat.TH",
"62F15, 62G08",
"G.3"
] | false |
2306.00614
|
2023-06-01T12:38:11Z
|
Adaptation and Optimization of Automatic Speech Recognition (ASR) for
the Maritime Domain in the Field of VHF Communication
|
[
"Emin Cagatay Nakilcioglu",
"Maximilian Reimann",
"Ole John"
] |
This paper introduces a multilingual automatic speech recognizer (ASR) for
maritime radio communication that automatically converts received VHF radio signals into text. The challenges of maritime radio communication are described first, and the deep learning architecture of marFM, consisting of audio processing techniques and machine learning algorithms, is presented.
Subsequently, maritime radio data of interest is analyzed and then used to
evaluate the transcription performance of our ASR model for various maritime
radio data.
|
[
"cs.SD",
"cs.AI",
"cs.HC",
"cs.LG",
"eess.AS"
] | false |
2306.00629
|
2023-06-01T12:52:34Z
|
Identifiability and Generalizability in Constrained Inverse
Reinforcement Learning
|
[
"Andreas Schlaginhaufen",
"Maryam Kamgarpour"
] |
Two main challenges in Reinforcement Learning (RL) are designing appropriate
reward functions and ensuring the safety of the learned policy. To address
these challenges, we present a theoretical framework for Inverse Reinforcement
Learning (IRL) in constrained Markov decision processes. From a convex-analytic
perspective, we extend prior results on reward identifiability and
generalizability to both the constrained setting and a more general class of
regularizations. In particular, we show that identifiability up to potential
shaping (Cao et al., 2021) is a consequence of entropy regularization and may
generally no longer hold for other regularizations or in the presence of safety
constraints. We also show that to ensure generalizability to new transition
laws and constraints, the true reward must be identified up to a constant.
Additionally, we derive a finite sample guarantee for the suboptimality of the
learned rewards, and validate our results in a gridworld environment.
|
[
"cs.LG",
"cs.AI",
"cs.SY",
"eess.SY",
"math.OC"
] | false |
2306.00357
|
2023-06-01T05:36:22Z
|
Efficient and Robust Bayesian Selection of Hyperparameters in Dimension
Reduction for Visualization
|
[
"Yin-Ting Liao",
"Hengrui Luo",
"Anna Ma"
] |
We introduce an efficient and robust auto-tuning framework for hyperparameter
selection in dimension reduction (DR) algorithms, focusing on large-scale
datasets and arbitrary performance metrics. By leveraging Bayesian optimization
(BO) with a surrogate model, our approach enables efficient hyperparameter
selection with multi-objective trade-offs and allows us to perform data-driven
sensitivity analysis. By incorporating normalization and subsampling, the
proposed framework demonstrates versatility and efficiency, as shown in
applications to visualization techniques such as t-SNE and UMAP. We evaluate
our results on various synthetic and real-world datasets using multiple quality
metrics, providing a robust and efficient solution for hyperparameter selection
in DR algorithms.
|
[
"stat.ML",
"cs.HC",
"cs.LG",
"math.PR",
"math.ST",
"stat.TH",
"62F15, 68T09, 94A16"
] | false |
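A minimal version of the auto-tuning loop described above can be sketched with scikit-optimize as a generic stand-in for the Bayesian optimization machinery, tuning t-SNE's perplexity against the trustworthiness metric; the search range, the metric, and the subsample size are illustrative choices rather than the paper's setup.

import numpy as np
from skopt import gp_minimize
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE, trustworthiness

X, _ = load_digits(return_X_y=True)
X = X[:300]   # subsample for speed

def objective(params):
    """Negative embedding quality (trustworthiness) for a given perplexity."""
    perplexity = params[0]
    emb = TSNE(n_components=2, perplexity=perplexity,
               init="random", random_state=0).fit_transform(X)
    return -trustworthiness(X, emb, n_neighbors=10)

result = gp_minimize(objective, dimensions=[(5.0, 50.0)],
                     n_calls=12, random_state=0)
print("best perplexity:", result.x[0], "trustworthiness:", -result.fun)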
2306.00833
|
2023-06-01T15:55:46Z
|
When Does Bottom-up Beat Top-down in Hierarchical Community Detection?
|
[
"Maximilien Dreveton",
"Daichi Kuroda",
"Matthias Grossglauser",
"Patrick Thiran"
] |
Hierarchical clustering of networks consists in finding a tree of
communities, such that lower levels of the hierarchy reveal finer-grained
community structures. There are two main classes of algorithms tackling this
problem. Divisive ($\textit{top-down}$) algorithms recursively partition the
nodes into two communities, until a stopping rule indicates that no further
split is needed. In contrast, agglomerative ($\textit{bottom-up}$) algorithms
first identify the smallest community structure and then repeatedly merge the
communities using a $\textit{linkage}$ method. In this article, we establish
theoretical guarantees for the recovery of the hierarchical tree and community
structure of a Hierarchical Stochastic Block Model by a bottom-up algorithm. We
also establish that this bottom-up algorithm attains the information-theoretic
threshold for exact recovery at intermediate levels of the hierarchy. Notably,
these recovery conditions are less restrictive compared to those existing for
top-down algorithms. This shows that bottom-up algorithms extend the feasible
region for achieving exact recovery at intermediate levels. Numerical
experiments on both synthetic and real data sets confirm the superiority of
bottom-up algorithms over top-down algorithms. We also observe that top-down
algorithms can produce dendrograms with inversions. These findings contribute
to a better understanding of hierarchical clustering techniques and their
applications in network analysis.
|
[
"cs.SI",
"cs.LG",
"math.ST",
"stat.ME",
"stat.ML",
"stat.TH"
] | false |
2306.01229
|
2023-06-02T01:40:08Z
|
Exploring the Boundaries of Semi-Supervised Facial Expression
Recognition: Learning from In-Distribution, Out-of-Distribution, and
Unconstrained Data
|
[
"Shuvendu Roy",
"Ali Etemad"
] |
Deep learning-based methods have been the key driving force behind much of
the recent success of facial expression recognition (FER) systems. However, the
need for large amounts of labelled data remains a challenge. Semi-supervised
learning offers a way to overcome this limitation, allowing models to learn
from a small amount of labelled data along with a large unlabelled dataset.
While semi-supervised learning has shown promise in FER, most current methods
from general computer vision literature have not been explored in the context
of FER. In this work, we present a comprehensive study on 11 of the most recent
semi-supervised methods in the context of FER, namely Pi-model, Pseudo-label, Mean Teacher, VAT, UDA, MixMatch, ReMixMatch, FlexMatch, FixMatch, CoMatch, and CCSSL.
Our investigation covers semi-supervised learning from in-distribution,
out-of-distribution, unconstrained, and very small unlabelled data. Our
evaluation includes five FER datasets plus one large face dataset for
unconstrained learning. Our results demonstrate that FixMatch consistently
achieves better performance on in-distribution unlabelled data, while
ReMixMatch stands out among all methods for out-of-distribution, unconstrained,
and scarce unlabelled data scenarios. Another significant observation is that
semi-supervised learning produces a reasonable improvement over supervised
learning, regardless of whether in-distribution, out-of-distribution, or
unconstrained data is utilized as the unlabelled set. We also conduct
sensitivity analyses on critical hyper-parameters for the two best methods of
each setting.
|
[
"cs.CV"
] | false |
2306.01343
|
2023-06-02T08:16:21Z
|
Bilevel Fast Scene Adaptation for Low-Light Image Enhancement
|
[
"Long Ma",
"Dian Jin",
"Nan An",
"Jinyuan Liu",
"Xin Fan",
"Risheng Liu"
] |
Enhancing images in low-light scenes is a challenging and widely studied task in computer vision. Mainstream learning-based methods acquire the enhancement model by learning the data distribution of specific scenes, which causes poor adaptability (or even failure) when they encounter real-world scenarios never seen before. The main obstacle is modeling the distribution discrepancy across different scenes.
To remedy this, we first explore relationships between diverse low-light scenes
based on statistical analysis, i.e., the network parameters of the encoder
trained in different data distributions are close. We introduce the bilevel
paradigm to model the above latent correspondence from the perspective of
hyperparameter optimization. A bilevel learning framework is constructed to
endow the scene-irrelevant generality of the encoder towards diverse scenes
(i.e., freezing the encoder in the adaptation and testing phases). Further, we
define a reinforced bilevel learning framework to provide a meta-initialization
for scene-specific decoder to further ameliorate visual quality. Moreover, to
improve the practicability, we establish a Retinex-induced architecture with
adaptive denoising and apply our built learning framework to acquire its
parameters by using two training losses including supervised and unsupervised
forms. Extensive experimental evaluations on multiple datasets verify our
adaptability and competitive performance against existing state-of-the-art
works. The code and datasets will be available at
https://github.com/vis-opt-group/BL.
|
[
"cs.CV"
] | false |
2306.01363
|
2023-06-02T08:37:38Z
|
Quantifying Sample Anonymity in Score-Based Generative Models with
Adversarial Fingerprinting
|
[
"Mischa Dombrowski",
"Bernhard Kainz"
] |
Recent advances in score-based generative models have led to a huge spike in
the development of downstream applications using generative models ranging from
data augmentation over image and video generation to anomaly detection. Despite
publicly available trained models, their potential to be used for privacy
preserving data sharing has not been fully explored yet. Training diffusion
models on private data and disseminating the models and weights rather than the
raw dataset paves the way for innovative large-scale data-sharing strategies,
particularly in healthcare, where safeguarding patients' personal health
information is paramount. However, publishing such models without individual
consent of, e.g., the patients from whom the data was acquired, necessitates
guarantees that identifiable training samples will never be reproduced, thus
protecting personal health data and satisfying the requirements of policymakers
and regulatory bodies. This paper introduces a method for estimating the upper
bound of the probability of reproducing identifiable training images during the
sampling process. This is achieved by designing an adversarial approach that
searches for anatomic fingerprints, such as medical devices or dermal art,
which could potentially be employed to re-identify training images. Our method
harnesses the learned score-based model to estimate the probability of the
entire subspace of the score function that may be utilized for one-to-one
reproduction of training samples. To validate our estimates, we generate
anomalies containing a fingerprint and investigate whether generated samples
from trained generative models can be uniquely mapped to the original training
samples. Overall our results show that privacy-breaching images are reproduced
at sampling time if the models were trained without care.
|
[
"cs.CV"
] | false |
2306.01395
|
2023-06-02T09:44:45Z
|
Masked Autoencoder for Unsupervised Video Summarization
|
[
"Minho Shim",
"Taeoh Kim",
"Jinhyung Kim",
"Dongyoon Wee"
] |
Summarizing a video requires a diverse understanding of the video, ranging
from recognizing scenes to evaluating how much each frame is essential enough
to be selected as a summary. Self-supervised learning (SSL) is acknowledged for
its robustness and flexibility to multiple downstream tasks, but the video SSL
has not shown its value for dense understanding tasks like video summarization.
We claim an unsupervised autoencoder with sufficient self-supervised learning
does not need any extra downstream architecture design or fine-tuning weights
to be utilized as a video summarization model. The proposed method to evaluate
the importance score of each frame takes advantage of the reconstruction score
of the autoencoder's decoder. We evaluate the method in major unsupervised
video summarization benchmarks to show its effectiveness under various
experimental settings.
|
[
"cs.CV"
] | false |
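To make the reconstruction-based frame scoring above concrete, here is a rough PyTorch sketch; the tiny convolutional stand-in for the autoencoder and the use of per-frame reconstruction error as the importance score are assumptions for illustration, not the paper's exact scoring rule.

import torch
import torch.nn as nn

# stand-in autoencoder; the paper uses a masked autoencoder trained with SSL
autoencoder = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 3, 3, padding=1))

def frame_importance(frames, model):
    """Score frames by the reconstruction error of a frozen autoencoder,
    a rough proxy for the decoder's reconstruction-based importance."""
    with torch.no_grad():
        recon = model(frames)                              # (T, C, H, W)
        err = ((recon - frames) ** 2).flatten(1).mean(1)   # per-frame MSE
    return (err - err.min()) / (err.max() - err.min() + 1e-8)

video = torch.rand(30, 3, 64, 64)           # 30 frames of a toy video
scores = frame_importance(video, autoencoder)
summary_idx = scores.topk(5).indices        # pick the 5 highest-scoring frames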
2306.01398
|
2023-06-02T09:46:22Z
|
Evaluating The Robustness of Self-Supervised Representations to
Background/Foreground Removal
|
[
"Xavier F. Cadet",
"Ranya Aloufi",
"Alain Miranville",
"Sara Ahmadi-Abhari",
"Hamed Haddadi"
] |
Despite impressive empirical advances of SSL in solving various tasks, the
problem of understanding and characterizing SSL representations learned from
input data remains relatively under-explored. We provide a comparative analysis
of how the representations produced by SSL models differ when masking parts of
the input. Specifically, we considered state-of-the-art SSL pretrained models,
such as DINOv2, MAE, and SwaV, and analyzed changes at the representation
levels across 4 Image Classification datasets. First, we generate variations of
the datasets by applying foreground and background segmentation. Then, we
conduct statistical analysis using Canonical Correlation Analysis (CCA) and
Centered Kernel Alignment (CKA) to evaluate the robustness of the
representations learned in SSL models. Empirically, we show that not all models
lead to representations that separate foreground, background, and complete
images. Furthermore, we test different masking strategies by occluding the
center regions of the images to address cases where foreground and background are difficult to separate, for example the DTD dataset, which focuses on texture rather than on specific objects.
|
[
"cs.CV"
] | false |
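The CKA comparison used above can be illustrated with the standard linear CKA formula applied to two feature matrices; the random features below stand in for the representations of full images versus their foreground-only versions.

import numpy as np

def linear_cka(X, Y):
    """Linear Centered Kernel Alignment between two representation matrices
    of shape (n_samples, n_features), column-centered before comparison."""
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    num = np.linalg.norm(X.T @ Y, 'fro') ** 2
    den = np.linalg.norm(X.T @ X, 'fro') * np.linalg.norm(Y.T @ Y, 'fro')
    return num / den

rng = np.random.default_rng(0)
feats_full = rng.normal(size=(100, 64))            # features of full images
feats_fg = feats_full + 0.1 * rng.normal(size=(100, 64))   # foreground-only
print(linear_cka(feats_full, feats_fg))   # close to 1 for similar representations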
2306.01405
|
2023-06-02T09:52:04Z
|
Learning Signed Distance Functions from Noisy 3D Point Clouds via Noise
to Noise Mapping
|
[
"Baorui Ma",
"Yu-Shen Liu",
"Zhizhong Han"
] |
Learning signed distance functions (SDFs) from 3D point clouds is an
important task in 3D computer vision. However, without ground truth signed
distances, point normals or clean point clouds, current methods still struggle to learn SDFs from noisy point clouds. To overcome this challenge, we
propose to learn SDFs via a noise to noise mapping, which does not require any
clean point cloud or ground truth supervision for training. Our novelty lies in
the noise to noise mapping which can infer a highly accurate SDF of a single
object or scene from its multiple or even single noisy point cloud
observations. Our novel learning manner is supported by modern Lidar systems
which capture multiple noisy observations per second. We achieve this by a
novel loss which enables statistical reasoning on point clouds and maintains
geometric consistency although point clouds are irregular, unordered and have
no point correspondence among noisy observations. Our evaluation under the
widely used benchmarks demonstrates our superiority over the state-of-the-art
methods in surface reconstruction, point cloud denoising and upsampling. Our
code, data, and pre-trained models are available at
https://github.com/mabaorui/Noise2NoiseMapping/
|
[
"cs.CV"
] | false |
2306.01438
|
2023-06-02T10:57:41Z
|
Bi-LRFusion: Bi-Directional LiDAR-Radar Fusion for 3D Dynamic Object
Detection
|
[
"Yingjie Wang",
"Jiajun Deng",
"Yao Li",
"Jinshui Hu",
"Cong Liu",
"Yu Zhang",
"Jianmin Ji",
"Wanli Ouyang",
"Yanyong Zhang"
] |
LiDAR and Radar are two complementary sensing approaches in that LiDAR
specializes in capturing an object's 3D shape while Radar provides longer
detection ranges as well as velocity hints. Though seemingly natural, how to
efficiently combine them for improved feature representation is still unclear.
The main challenge arises from the fact that Radar data are extremely sparse and lack
height information. Therefore, directly integrating Radar features into
LiDAR-centric detection networks is not optimal. In this work, we introduce a
bi-directional LiDAR-Radar fusion framework, termed Bi-LRFusion, to tackle the
challenges and improve 3D detection for dynamic objects. Technically,
Bi-LRFusion involves two steps: first, it enriches Radar's local features by
learning important details from the LiDAR branch to alleviate the problems
caused by the absence of height information and extreme sparsity; second, it
combines LiDAR features with the enhanced Radar features in a unified
bird's-eye-view representation. We conduct extensive experiments on nuScenes
and ORR datasets, and show that our Bi-LRFusion achieves state-of-the-art
performance for detecting dynamic objects. Notably, Radar data in these two
datasets have different formats, which demonstrates the generalizability of our
method. Codes are available at https://github.com/JessieW0806/BiLRFusion.
|
[
"cs.CV"
] | false |
2306.01449
|
2023-06-02T11:11:00Z
|
SASMU: boost the performance of generalized recognition model using
synthetic face dataset
|
[
"Chia-Chun Chung",
"Pei-Chun Chang",
"Yong-Sheng Chen",
"HaoYuan He",
"Chinson Yeh"
] |
Nowadays, deploying a robust face recognition product has become easy thanks to decades of progress in face recognition techniques. State-of-the-art methods handle not only profile image verification but also in-the-wild images almost perfectly. However, privacy concerns are rising rapidly because mainstream research results are powered by large amounts of web-crawled data, which raises privacy invasion issues. The community has tried to escape this predicament by training face recognition models entirely on synthetic data, but such models suffer from severe domain gap issues and still need access to real images and identity labels for fine-tuning. In this paper, we propose SASMU, a
simple, novel, and effective method for face recognition using a synthetic
dataset. Our proposed method consists of spatial data augmentation (SA) and
spectrum mixup (SMU). We first analyze the existing synthetic datasets for
developing a face recognition system. Then, we reveal that heavy data
augmentation is helpful for boosting performance when using synthetic data. By
analyzing previous frequency mixup studies, we propose a novel method for
domain generalization. Extensive experimental results have demonstrated the
effectiveness of SASMU, achieving state-of-the-art performance on several
common benchmarks, such as LFW, AgeDB-30, CA-LFW, CFP-FP, and CP-LFW.
|
[
"cs.CV"
] | false |
2306.01452
|
2023-06-02T11:19:50Z
|
dugMatting: Decomposed-Uncertainty-Guided Matting
|
[
"Jiawei Wu",
"Changqing Zhang",
"Zuoyong Li",
"Huazhu Fu",
"Xi Peng",
"Joey Tianyi Zhou"
] |
Cutting out an object and estimating its opacity mask, known as image
matting, is a key task in image and video editing. Due to the highly ill-posed
issue, additional inputs, typically user-defined trimaps or scribbles, are
usually needed to reduce the uncertainty. Although effective, it is either time
consuming or only suitable for experienced users who know where to place the
strokes. In this work, we propose a decomposed-uncertainty-guided matting
(dugMatting) algorithm, which explores the explicitly decomposed uncertainties
to efficiently and effectively improve the results. Based on the characteristics of these uncertainties, the epistemic uncertainty is reduced in
the process of guiding interaction (which introduces prior knowledge), while
the aleatoric uncertainty is reduced in modeling data distribution (which
introduces statistics for both data and possible noise). The proposed matting
framework relieves the requirement for users to determine the interaction areas
by using simple and efficient labeling. Extensive quantitative and
qualitative results validate that the proposed method significantly improves
the original matting algorithms in terms of both efficiency and efficacy.
|
[
"cs.CV"
] | false |
2306.01500
|
2023-06-02T12:49:22Z
|
A Feature Reuse Framework with Texture-adaptive Aggregation for
Reference-based Super-Resolution
|
[
"Xiaoyong Mei",
"Yi Yang",
"Ming Li",
"Changqin Huang",
"Kai Zhang",
"Pietro Lió"
] |
Reference-based super-resolution (RefSR) has gained considerable success in
the field of super-resolution with the addition of high-resolution reference
images to reconstruct low-resolution (LR) inputs with more high-frequency
details, thereby overcoming some limitations of single image super-resolution
(SISR). Previous research in the field of RefSR has mostly focused on two
crucial aspects. The first is accurate correspondence matching between the LR
and the reference (Ref) image. The second is the effective transfer and
aggregation of similar texture information from the Ref images. Nonetheless, an
important detail of perceptual loss and adversarial loss has been
underestimated, which has a certain adverse effect on texture transfer and
reconstruction. In this study, we propose a feature reuse framework that guides
the step-by-step texture reconstruction process through different stages,
reducing the negative impacts of perceptual and adversarial loss. The feature
reuse framework can be used for any RefSR model, and several RefSR approaches
have improved their performance after being retrained using our framework.
Additionally, we introduce a single image feature embedding module and a
texture-adaptive aggregation module. The single image feature embedding module
assists in reconstructing the features of the LR inputs themselves and effectively
lowers the possibility of including irrelevant textures. The texture-adaptive
aggregation module dynamically perceives and aggregates texture information
between the LR inputs and the Ref images using dynamic filters. This enhances
the utilization of the reference texture while reducing reference misuse. The
source code is available at https://github.com/Yi-Yang355/FRFSR.
|
[
"cs.CV"
] | false |
2306.01596
|
2023-06-02T15:07:48Z
|
Two-View Geometry Scoring Without Correspondences
|
[
"Axel Barroso-Laguna",
"Eric Brachmann",
"Victor Adrian Prisacariu",
"Gabriel J. Brostow",
"Daniyar Turmukhambetov"
] |
Camera pose estimation for two-view geometry traditionally relies on RANSAC.
Normally, a multitude of image correspondences leads to a pool of proposed
hypotheses, which are then scored to find a winning model. The inlier count is
generally regarded as a reliable indicator of "consensus". We examine this
scoring heuristic, and find that it favors disappointing models under certain
circumstances. As a remedy, we propose the Fundamental Scoring Network (FSNet),
which infers a score for a pair of overlapping images and any proposed
fundamental matrix. It does not rely on sparse correspondences, but rather
embodies a two-view geometry model through an epipolar attention mechanism that
predicts the pose error of the two images. FSNet can be incorporated into
traditional RANSAC loops. We evaluate FSNet on fundamental and essential matrix
estimation on indoor and outdoor datasets, and establish that FSNet can
successfully identify good poses for pairs of images with few or unreliable
correspondences. Besides, we show that naively combining FSNet with the MAGSAC++ scoring approach achieves state-of-the-art results.
|
[
"cs.CV"
] | false |
2306.01642
|
2023-06-02T16:06:42Z
|
Automatic Reconstruction of Semantic 3D Models from 2D Floor Plans
|
[
"Aleixo Cambeiro Barreiro",
"Mariusz Trzeciakiewicz",
"Anna Hilsmann",
"Peter Eisert"
] |
Digitalization of existing buildings and the creation of 3D BIM models for
them has become crucial for many tasks. Of particular importance are floor
plans, which contain information about building layouts and are vital for
processes such as construction, maintenance or refurbishing. However, this data
is not always available in digital form, especially for older buildings
constructed before CAD tools were widely available, or lacks semantic
information. The digitalization of such information usually requires manual
work of an expert that must reconstruct the layouts by hand, which is a
cumbersome and error-prone process. In this paper, we present a pipeline for
reconstruction of vectorized 3D models from scanned 2D plans, aiming at
increasing the efficiency of this process. The method presented achieves
state-of-the-art results in the public dataset CubiCasa5k, and shows good
generalization to different types of plans. Our vectorization approach is
particularly effective, outperforming previous methods.
|
[
"cs.CV"
] | false |
2306.01738
|
2023-06-02T17:59:48Z
|
OCBEV: Object-Centric BEV Transformer for Multi-View 3D Object Detection
|
[
"Zhangyang Qi",
"Jiaqi Wang",
"Xiaoyang Wu",
"Hengshuang Zhao"
] |
Multi-view 3D object detection is becoming popular in autonomous driving due
to its high effectiveness and low cost. Most of the current state-of-the-art
detectors follow the query-based bird's-eye-view (BEV) paradigm, which benefits
from both BEV's strong perception power and end-to-end pipeline. Despite
achieving substantial progress, existing works model objects via globally
leveraging temporal and spatial information of BEV features, resulting in
problems when handling the challenging complex and dynamic autonomous driving
scenarios. In this paper, we proposed an Object-Centric query-BEV detector
OCBEV, which can carve the temporal and spatial cues of moving targets more
effectively. OCBEV comprises three designs: Object Aligned Temporal Fusion
aligns the BEV feature based on ego-motion and estimated current locations of
moving objects, leading to a precise instance-level feature fusion. Object
Focused Multi-View Sampling samples more 3D features from adaptive local height ranges of objects for each scene to enrich foreground information.
Object Informed Query Enhancement replaces part of pre-defined decoder queries
in common DETR-style decoders with positional features of objects on
high-confidence locations, introducing more direct object positional priors.
Extensive experimental evaluations are conducted on the challenging nuScenes
dataset. Our approach achieves a state-of-the-art result, surpassing the
traditional BEVFormer by 1.5 NDS points. Moreover, we have a faster convergence
speed and only need half of the training iterations to get comparable
performance, which further demonstrates its effectiveness.
|
[
"cs.CV"
] | false |
2306.01900
|
2023-06-02T20:09:57Z
|
Conditional Generation from Unconditional Diffusion Models using
Denoiser Representations
|
[
"Alexandros Graikos",
"Srikar Yellapragada",
"Dimitris Samaras"
] |
Denoising diffusion models have gained popularity as a generative modeling
technique for producing high-quality and diverse images. Applying these models
to downstream tasks requires conditioning, which can take the form of text,
class labels, or other forms of guidance. However, providing conditioning
information to these models can be challenging, particularly when annotations
are scarce or imprecise. In this paper, we propose adapting pre-trained
unconditional diffusion models to new conditions using the learned internal
representations of the denoiser network. We demonstrate the effectiveness of
our approach on various conditional generation tasks, including
attribute-conditioned generation and mask-conditioned generation. Additionally,
we show that augmenting the Tiny ImageNet training set with synthetic images
generated by our approach improves the classification accuracy of ResNet
baselines by up to 8%. Our approach provides a powerful and flexible way to
adapt diffusion models to new conditions and generate high-quality augmented
data for various conditional generation tasks.
|
[
"cs.CV"
] | false |
2306.01902
|
2023-06-02T20:19:19Z
|
Unlearnable Examples for Diffusion Models: Protect Data from
Unauthorized Exploitation
|
[
"Zhengyue Zhao",
"Jinhao Duan",
"Xing Hu",
"Kaidi Xu",
"Chenan Wang",
"Rui Zhang",
"Zidong Du",
"Qi Guo",
"Yunji Chen"
] |
Diffusion models have demonstrated remarkable performance in image generation
tasks, paving the way for powerful AIGC applications. However, these
widely-used generative models can also raise security and privacy concerns,
such as copyright infringement and sensitive data leakage. To tackle these
issues, we propose a method, Unlearnable Diffusion Perturbation, to safeguard
images from unauthorized exploitation. Our approach involves designing an
algorithm to generate sample-wise perturbation noise for each image to be
protected. This imperceptible protective noise makes the data almost
unlearnable for diffusion models, i.e., diffusion models trained or fine-tuned
on the protected data cannot generate high-quality and diverse images related
to the protected training data. Theoretically, we frame this as a max-min
optimization problem and introduce EUDP, a noise scheduler-based method to
enhance the effectiveness of the protective noise. We evaluate our methods on
both Denoising Diffusion Probabilistic Model and Latent Diffusion Models,
demonstrating that training diffusion models on the protected data leads to a
significant reduction in the quality of the generated images. Especially, the
experimental results on Stable Diffusion demonstrate that our method
effectively safeguards images from being used to train Diffusion Models in
various tasks, such as training specific objects and styles. This achievement
holds significant importance in real-world scenarios, as it contributes to the
protection of privacy and copyright against AI-generated content.
|
[
"cs.CV"
] | false |
2306.01929
|
2023-06-02T22:05:52Z
|
Recent Advances of Local Mechanisms in Computer Vision: A Survey and
Outlook of Recent Work
|
[
"Qiangchang Wang",
"Yilong Yin"
] |
Inspired by the fact that human brains can emphasize discriminative parts of
the input and suppress irrelevant ones, substantial local mechanisms have been
designed to boost the development of computer vision. They can not only focus
on target parts to learn discriminative local representations, but also process
information selectively to improve the efficiency. In terms of application
scenarios and paradigms, local mechanisms have different characteristics. In
this survey, we provide a systematic review of local mechanisms for various
computer vision tasks and approaches, including fine-grained visual
recognition, person re-identification, few-/zero-shot learning, multi-modal
learning, self-supervised learning, Vision Transformers, and so on.
Categorization of local mechanisms in each field is summarized. Then,
advantages and disadvantages for every category are analyzed deeply, leaving
room for exploration. Finally, future research directions about local
mechanisms have also been discussed that may benefit future works. To the best of our knowledge, this is the first survey about local mechanisms in computer
vision. We hope that this survey can shed light on future research in the
computer vision field.
|
[
"cs.CV"
] | false |
2306.01210
|
2023-06-02T00:08:38Z
|
A new method using deep transfer learning on ECG to predict the response
to cardiac resynchronization therapy
|
[
"Zhuo He",
"Hongjin Si",
"Xinwei Zhang",
"Qing-Hui Chen",
"Jiangang Zou",
"Weihua Zhou"
] |
Background: Cardiac resynchronization therapy (CRT) has emerged as an
effective treatment for heart failure patients with electrical dyssynchrony.
However, accurately predicting which patients will respond to CRT remains a
challenge. This study explores the application of deep transfer learning
techniques to train a predictive model for CRT response. Methods: In this
study, the short-time Fourier transform (STFT) technique was employed to
transform ECG signals into two-dimensional images. A transfer learning approach
was then applied on the MIT-BIH ECG database to pre-train a convolutional
neural network (CNN) model. The model was fine-tuned to extract relevant
features from the ECG images, and then tested on our dataset of CRT patients to
predict their response. Results: Seventy-one CRT patients were enrolled in this
study. The transfer learning model achieved an accuracy of 72% in
distinguishing responders from non-responders in the local dataset.
Furthermore, the model showed good sensitivity (0.78) and specificity (0.79) in
identifying CRT responders. Our model outperformed clinical guidelines and traditional machine learning approaches. Conclusion: The
utilization of ECG images as input and leveraging the power of transfer
learning allows for improved accuracy in identifying CRT responders. This
approach offers potential for enhancing patient selection and improving
outcomes of CRT.
|
[
"eess.SP",
"cs.CV"
] | false |
2306.01232
|
2023-06-02T01:46:31Z
|
Deep Reinforcement Learning Framework for Thoracic Diseases
Classification via Prior Knowledge Guidance
|
[
"Weizhi Nie",
"Chen Zhang",
"Dan Song",
"Lina Zhao",
"Yunpeng Bai",
"Keliang Xie",
"Anan Liu"
] |
The chest X-ray is often utilized for diagnosing common thoracic diseases. In
recent years, many approaches have been proposed to handle the problem of
automatic diagnosis based on chest X-rays. However, the scarcity of labeled
data for related diseases still poses a huge challenge to an accurate
diagnosis. In this paper, we focus on the thorax disease diagnostic problem and
propose a novel deep reinforcement learning framework, which introduces prior
knowledge to direct the learning of diagnostic agents and the model parameters
can also be continuously updated as the data increases, like a person's
learning process. Specifically, 1) prior knowledge can be learned from the
pre-trained model based on old data or other domains' similar data, which can
effectively reduce the dependence on target domain data, and 2) the framework
of reinforcement learning can make the diagnostic agent as exploratory as a
human being and improve the accuracy of diagnosis through continuous
exploration. The method can also effectively solve the model learning problem
in the case of few-shot data and improve the generalization ability of the
model. Finally, our approach's performance was demonstrated using the
well-known NIH ChestX-ray 14 and CheXpert datasets, and we achieved competitive
results. The source code can be found here:
\url{https://github.com/NeaseZ/MARL}.
|
[
"eess.IV",
"cs.CV"
] | false |
2306.01316
|
2023-06-02T07:29:36Z
|
Independent Modular Networks
|
[
"Hamed Damirchi",
"Forest Agostinelli",
"Pooyan Jamshidi"
] |
Monolithic neural networks that make use of a single set of weights to learn
useful representations for downstream tasks explicitly dismiss the
compositional nature of data generation processes. This characteristic exists
in data where every instance can be regarded as the combination of an identity
concept, such as the shape of an object, combined with modifying concepts, such
as orientation, color, and size. The dismissal of compositionality is
especially detrimental in robotics, where state estimation relies heavily on
the compositional nature of physical mechanisms (e.g., rotations and
transformations) to model interactions. To accommodate this data
characteristic, modular networks have been proposed. However, a lack of
structure in each module's role and modular network-specific issues such as
module collapse have restricted their usability. We propose a modular network
architecture that accommodates the mentioned decompositional concept by
introducing a unique structure that splits the modules into predetermined roles.
Additionally, we provide regularizations that improve the resiliency of the
modular network to the problem of module collapse while improving the
decomposition accuracy of the model.
|
[
"cs.CV",
"cs.LG"
] | false |
2306.01364
|
2023-06-02T08:38:02Z
|
Towards Robust GAN-generated Image Detection: a Multi-view Completion
Representation
|
[
"Chi Liu",
"Tianqing Zhu",
"Sheng Shen",
"Wanlei Zhou"
] |
GAN-generated image detection now becomes the first line of defense against
the malicious uses of machine-synthesized image manipulations such as
deepfakes. Although some existing detectors work well in detecting clean, known
GAN samples, their success is largely attributable to overfitting unstable
features such as frequency artifacts, which will cause failures when facing
unknown GANs or perturbation attacks. To overcome the issue, we propose a
robust detection framework based on a novel multi-view image completion
representation. The framework first learns various view-to-image tasks to model
the diverse distributions of genuine images. Frequency-irrelevant features can
be represented from the distributional discrepancies characterized by the
completion models, which are stable, generalized, and robust for detecting
unknown fake patterns. Then, a multi-view classification is devised with
elaborated intra- and inter-view learning strategies to enhance view-specific
feature representation and cross-view feature aggregation, respectively. We
evaluated the generalization ability of our framework across six popular GANs
at different resolutions and its robustness against a broad range of
perturbation attacks. The results confirm our method's improved effectiveness,
generalization, and robustness over various baselines.
|
[
"cs.CR",
"cs.CV"
] | false |
2306.01523
|
2023-06-02T13:24:37Z
|
Transformer-based Multi-Modal Learning for Multi Label Remote Sensing
Image Classification
|
[
"David Hoffmann",
"Kai Norman Clasen",
"Begüm Demir"
] |
In this paper, we introduce a novel Synchronized Class Token Fusion (SCT
Fusion) architecture in the framework of multi-modal multi-label classification
(MLC) of remote sensing (RS) images. The proposed architecture leverages
modality-specific attention-based transformer encoders to process varying input
modalities, while exchanging information across modalities by synchronizing the
special class tokens after each transformer encoder block. The synchronization
involves fusing the class tokens with a trainable fusion transformation,
resulting in a synchronized class token that contains information from all
modalities. As the fusion transformation is trainable, it allows the model to reach an
accurate representation of the shared features among different modalities.
Experimental results show the effectiveness of the proposed architecture over
single-modality architectures and an early fusion multi-modal architecture when
evaluated on a multi-modal MLC dataset.
The code of the proposed architecture is publicly available at
https://git.tu-berlin.de/rsim/sct-fusion.
|
[
"cs.CV",
"cs.LG"
] | false |
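
A minimal sketch of the class-token synchronization idea described above, assuming per-modality class tokens of equal dimension and a simple linear fusion; the module name and shapes are illustrative, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ClassTokenFusion(nn.Module):
    """Fuse per-modality class tokens with a trainable map and broadcast the result."""
    def __init__(self, num_modalities: int, dim: int):
        super().__init__()
        self.fuse = nn.Linear(num_modalities * dim, dim)

    def forward(self, tokens):
        # tokens: list of (batch, dim) class tokens, one per modality
        fused = self.fuse(torch.cat(tokens, dim=-1))
        # Every modality continues with the same synchronized class token.
        return [fused for _ in tokens]

fusion = ClassTokenFusion(num_modalities=2, dim=256)
out = fusion([torch.randn(4, 256), torch.randn(4, 256)])
print(out[0].shape)  # torch.Size([4, 256])
```
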
2306.01526
|
2023-06-02T13:26:23Z
|
Group channel pruning and spatial attention distilling for object
detection
|
[
"Yun Chu",
"Pu Li",
"Yong Bai",
"Zhuhua Hu",
"Yongqing Chen",
"Jiafeng Lu"
] |
Due to the over-parameterization of neural networks, many model compression
methods based on pruning and quantization have emerged. They are remarkable in
reducing the size, parameter number, and computational complexity of the model.
However, most of the models compressed by such methods need the support of
special hardware and software, which increases the deployment cost. Moreover,
these methods are mainly used in classification tasks, and rarely directly used
in detection tasks. To address these issues, for the object detection network
we introduce a three-stage model compression method: dynamic sparse training,
group channel pruning, and spatial attention distilling. Firstly, to select
the unimportant channels in the network while maintaining a good balance between
sparsity and accuracy, we put forward a dynamic sparse training method that
introduces a variable sparsity rate, which changes over the course of network
training. Secondly, to reduce the effect of pruning on
network accuracy, we propose a novel pruning method called group channel
pruning. In particular, we divide the network into multiple groups according to
the scales of the feature layer and the similarity of module structure in the
network, and then we use different pruning thresholds to prune the channels in
each group. Finally, to recover the accuracy of the pruned network, we use an
improved knowledge distillation method for the pruned network. Specifically, we
extract spatial attention information from the feature maps of specific scales
in each group as knowledge for distillation. In the experiments, we use YOLOv4
as the object detection network and PASCAL VOC as the training dataset. Our
method reduces the parameters of the model by 64.7% and the computation by
34.9%.
|
[
"cs.CV",
"cs.AI"
] | false |
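
A toy sketch of per-group channel pruning as described above, assuming per-channel importance scores (e.g., batch-norm scale magnitudes) already grouped by feature-map scale; the group names, score values, and pruning ratios are made up for illustration.

```python
import torch

groups = {                       # per-channel importance scores, grouped by scale
    "small_scale":  torch.rand(128),
    "medium_scale": torch.rand(256),
    "large_scale":  torch.rand(512),
}
prune_ratio = {"small_scale": 0.5, "medium_scale": 0.6, "large_scale": 0.7}

masks = {}
for name, scores in groups.items():
    # Each group gets its own threshold: the q-th quantile of its channel scores.
    thr = torch.quantile(scores, prune_ratio[name])
    masks[name] = scores > thr   # channels to keep in this group
    print(name, "kept", int(masks[name].sum()), "of", scores.numel())
```
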
2306.01562
|
2023-06-02T14:17:37Z
|
An Attentive-based Generative Model for Medical Image Synthesis
|
[
"Jiayuan Wang",
"Q. M. Jonathan Wu",
"Farhad Pourpanah"
] |
Magnetic resonance (MR) and computer tomography (CT) imaging are valuable
tools for diagnosing diseases and planning treatment. However, limitations such
as radiation exposure and cost can restrict access to certain imaging
modalities. To address this issue, medical image synthesis can generate one
modality from another, but many existing models struggle with high-quality
image synthesis when multiple slices are present in the dataset. This study
proposes an attention-based dual contrast generative model, called
ADC-cycleGAN, which can synthesize medical images from unpaired data with
multiple slices. The model integrates a dual contrast loss term with the
CycleGAN loss to ensure that the synthesized images are distinguishable from
the source domain. Additionally, an attention mechanism is incorporated into
the generators to extract informative features from both channel and spatial
domains. To improve performance when dealing with multiple slices, the
$K$-means algorithm is used to cluster the dataset into $K$ groups, and each
group is used to train a separate ADC-cycleGAN. Experimental results
demonstrate that the proposed ADC-cycleGAN model produces comparable samples to
other state-of-the-art generative models, achieving the highest PSNR and SSIM
values of 19.04385 and 0.68551, respectively. We publish the code at
https://github.com/JiayuanWang-JW/ADC-cycleGAN.
|
[
"eess.IV",
"cs.CV"
] | false |
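
A minimal sketch of the clustering step described above, assuming flattened image slices and scikit-learn's KMeans; the feature representation and the value of $K$ are assumptions, and the per-cluster GAN training itself is only indicated by a comment.

```python
import numpy as np
from sklearn.cluster import KMeans

slices = np.random.rand(500, 64 * 64)   # placeholder: 500 flattened 64x64 slices

K = 4                                   # illustrative number of clusters
kmeans = KMeans(n_clusters=K, n_init=10, random_state=0).fit(slices)

# Each cluster would train its own ADC-cycleGAN in the approach described above.
for k in range(K):
    group = slices[kmeans.labels_ == k]
    print(f"cluster {k}: {len(group)} slices")  # here one would train model k on `group`
```
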
2306.01630
|
2023-06-02T15:49:26Z
|
A Conditional Normalizing Flow for Accelerated Multi-Coil MR Imaging
|
[
"Jeffrey Wen",
"Rizwan Ahmad",
"Philip Schniter"
] |
Accelerated magnetic resonance (MR) imaging attempts to reduce acquisition
time by collecting data below the Nyquist rate. As an ill-posed inverse
problem, many plausible solutions exist, yet the majority of deep learning
approaches generate only a single solution. We instead focus on sampling from
the posterior distribution, which provides more comprehensive information for
downstream inference tasks. To do this, we design a novel conditional
normalizing flow (CNF) that infers the signal component in the measurement
operator's nullspace, which is later combined with measured data to form
complete images. Using fastMRI brain and knee data, we demonstrate fast
inference and accuracy that surpasses recent posterior sampling techniques for
MRI. Code is available at https://github.com/jwen307/mri_cnf/
|
[
"eess.IV",
"cs.CV"
] | false |
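
An illustrative sketch, in a heavily simplified single-coil Cartesian setting, of how a generated image can be combined with measured k-space data so that only the unmeasured (nullspace) component comes from the model; the toy arrays and sampling mask are assumptions, not the paper's measurement model.

```python
import numpy as np

rng = np.random.default_rng(0)
image_gen = rng.standard_normal((64, 64))                  # sample proposed by a generative model
kspace_meas = np.fft.fft2(rng.standard_normal((64, 64)))   # stand-in "measured" k-space
mask = rng.random((64, 64)) < 0.25                         # sampled k-space locations

k_gen = np.fft.fft2(image_gen)
# Keep measured frequencies; fill the rest (the unmeasured component) from the sample.
k_combined = np.where(mask, kspace_meas, k_gen)
recon = np.fft.ifft2(k_combined).real
print(recon.shape)
```
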
2306.01656
|
2023-06-02T16:24:34Z
|
Backchannel Detection and Agreement Estimation from Video with
Transformer Networks
|
[
"Ahmed Amer",
"Chirag Bhuvaneshwara",
"Gowtham K. Addluri",
"Mohammed M. Shaik",
"Vedant Bonde",
"Philipp Müller"
] |
Listeners use short interjections, so-called backchannels, to signify
attention or express agreement. The automatic analysis of this behavior is of
key importance for human conversation analysis and interactive conversational
agents. Current state-of-the-art approaches for backchannel analysis from
visual behavior make use of two types of features: features based on body pose
and features based on facial behavior. At the same time, transformer neural
networks have been established as an effective means to fuse input from
different data sources, but they have not yet been applied to backchannel
analysis. In this work, we conduct a comprehensive evaluation of multi-modal
transformer architectures for automatic backchannel analysis based on pose and
facial information. We address both the detection of backchannels as well as
the task of estimating the agreement expressed in a backchannel. In evaluations
on the MultiMediate'22 backchannel detection challenge, we reach 66.4% accuracy
with a one-layer transformer architecture, outperforming the previous state of
the art. With a two-layer transformer architecture, we furthermore set a new
state of the art (0.0604 MSE) on the task of estimating the amount of agreement
expressed in a backchannel.
|
[
"cs.CV",
"cs.HC"
] | false |
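
A minimal sketch of fusing pose and facial feature sequences with a small transformer encoder for backchannel classification; the feature dimensions, sequence length, early-fusion strategy, and pooling are assumptions, not the architectures evaluated in the paper.

```python
import torch
import torch.nn as nn

pose = torch.randn(8, 30, 64)   # (batch, frames, pose features) - placeholder
face = torch.randn(8, 30, 64)   # (batch, frames, facial features) - placeholder

x = torch.cat([pose, face], dim=-1)                       # simple early fusion of both views
layer = nn.TransformerEncoderLayer(d_model=128, nhead=4, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=1)
head = nn.Linear(128, 2)                                  # backchannel vs. no backchannel

logits = head(encoder(x).mean(dim=1))                     # pool over time, then classify
print(logits.shape)  # torch.Size([8, 2])
```
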
2306.01828
|
2023-06-02T17:45:44Z
|
Unifying (Machine) Vision via Counterfactual World Modeling
|
[
"Daniel M. Bear",
"Kevin Feigelis",
"Honglin Chen",
"Wanhee Lee",
"Rahul Venkatesh",
"Klemen Kotar",
"Alex Durango",
"Daniel L. K. Yamins"
] |
Leading approaches in machine vision employ different architectures for
different tasks, trained on costly task-specific labeled datasets. This
complexity has held back progress in areas such as robotics, where robust
task-general perception remains a bottleneck. In contrast, "foundation models"
of natural language have shown how large pre-trained neural networks can
provide zero-shot solutions to a broad spectrum of apparently distinct tasks.
Here we introduce Counterfactual World Modeling (CWM), a framework for
constructing a visual foundation model: a unified, unsupervised network that
can be prompted to perform a wide variety of visual computations. CWM has two
key components, which resolve the core issues that have hindered application of
the foundation model concept to vision. The first is structured masking, a
generalization of masked prediction methods that encourages a prediction model
to capture the low-dimensional structure in visual data. The model thereby
factors the key physical components of a scene and exposes an interface to them
via small sets of visual tokens. This in turn enables CWM's second main idea --
counterfactual prompting -- the observation that many apparently distinct
visual representations can be computed, in a zero-shot manner, by comparing the
prediction model's output on real inputs versus slightly modified
("counterfactual") inputs. We show that CWM generates high-quality readouts on
real-world images and videos for a diversity of tasks, including estimation of
keypoints, optical flow, occlusions, object segments, and relative depth. Taken
together, our results show that CWM is a promising path to unifying the
manifold strands of machine vision in a conceptually simple foundation.
|
[
"cs.CV",
"cs.AI",
"I.2.10; I.4.8"
] | false |
2306.01853
|
2023-06-02T18:16:21Z
|
Multi-Contrast Computed Tomography Atlas of Healthy Pancreas
|
[
"Yinchi Zhou",
"Ho Hin Lee",
"Yucheng Tang",
"Xin Yu",
"Qi Yang",
"Shunxing Bao",
"Jeffrey M. Spraggins",
"Yuankai Huo",
"Bennett A. Landman"
] |
With the substantial diversity in population demographics, such as
differences in age and body composition, the volumetric morphology of pancreas
varies greatly, resulting in distinctive variations in shape and appearance.
Such variations increase the difficulty at generalizing population-wide
pancreas features. A volumetric spatial reference is needed to adapt the
morphological variability for organ-specific analysis. Here, we proposed a
high-resolution computed tomography (CT) atlas framework specifically optimized
for the pancreas organ across multi-contrast CT. We introduce a deep
learning-based pre-processing technique to extract the abdominal region of
interests (ROIs) and leverage a hierarchical registration pipeline to align the
pancreas anatomy across populations. Briefly, DEEDs affine and non-rigid
registration are performed to transfer patient abdominal volumes to a fixed
high-resolution atlas template. To generate and evaluate the pancreas atlas
template, multi-contrast modality CT scans of 443 subjects (without reported
history of pancreatic disease, age: 15-50 years old) are processed. Compared
with different state-of-the-art registration tools, the combination of DEEDs
affine and non-rigid registration achieves the best performance for the
pancreas label transfer across all contrast phases. We further perform external
evaluation with another research cohort of 100 de-identified portal venous
scans with 13 organs labeled, achieving the best label transfer performance with
a Dice score of 0.504 in the unsupervised setting. The qualitative representation (e.g.,
average mapping) of each phase creates a clear boundary of pancreas and its
distinctive contrast appearance. The deformation surface renderings across
scales (e.g., small to large volume) further illustrate the generalizability of
the proposed atlas template.
|
[
"eess.IV",
"cs.CV"
] | false |
2306.01893
|
2023-06-02T19:58:14Z
|
Hierarchical Quadratic Random Forest Classifier
|
[
"Faezeh Fallah"
] |
In this paper, we proposed a hierarchical quadratic random forest classifier
for classifying multiresolution samples extracted from multichannel data. This
forest incorporated a penalized multivariate linear discriminant in each of its
decision nodes and processed squared features to realize quadratic decision
boundaries in the original feature space. The penalized discriminant was based
on a multiclass sparse discriminant analysis and the penalization was based on
a group Lasso regularizer which was an intermediate between the Lasso and the
ridge regularizer. The classification probabilities estimated by this forest
and the features learned by its decision nodes could be used standalone or
foster graph-based classifiers.
|
[
"cs.LG",
"cs.CV"
] | false |
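
A sketch of the squared-feature trick mentioned above: a linear discriminant fitted on $[x, x^2]$ realizes quadratic decision boundaries in the original feature space. A plain L1-penalized logistic regression stands in here for the paper's group-Lasso-penalized sparse discriminant, which scikit-learn does not provide out of the box.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 4))
y = (np.sum(X**2, axis=1) > 4).astype(int)        # a quadratically separable toy label

X_aug = np.hstack([X, X**2])                      # append squared features
clf = LogisticRegression(penalty="l1", solver="saga", max_iter=5000).fit(X_aug, y)
print("train accuracy:", clf.score(X_aug, y))
```
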
2306.01936
|
2023-06-02T22:29:58Z
|
Sub-Meter Tree Height Mapping of California using Aerial Images and
LiDAR-Informed U-Net Model
|
[
"Fabien H Wagner",
"Sophia Roberts",
"Alison L Ritz",
"Griffin Carter",
"Ricardo Dalagnol",
"Samuel Favrichon",
"Mayumi CM Hirye",
"Martin Brandt",
"Philipe Ciais",
"Sassan Saatchi"
] |
Tree canopy height is one of the most important indicators of forest biomass,
productivity, and species diversity, but it is challenging to measure
accurately from the ground and from space. Here, we used a U-Net model adapted
for regression to map the canopy height of all trees in the state of California
with very high-resolution aerial imagery (60 cm) from the USDA-NAIP program.
The U-Net model was trained using canopy height models computed from aerial
LiDAR data as a reference, along with corresponding RGB-NIR NAIP images
collected in 2020. We evaluated the performance of the deep-learning model
using 42 independent 1 km$^2$ sites across various forest types and landscape
variations in California. Our predictions of tree heights exhibited a mean
error of 2.9 m and showed relatively low systematic bias across the entire
range of tree heights present in California. In 2020, trees taller than 5 m
covered ~ 19.3% of California. Our model successfully estimated canopy heights
up to 50 m without saturation, outperforming existing canopy height products
from global models. The approach we used allowed for the reconstruction of the
three-dimensional structure of individual trees as observed from nadir-looking
optical airborne imagery, suggesting a relatively robust estimation and mapping
capability, even in the presence of image distortion. These findings
demonstrate the potential of large-scale mapping and monitoring of tree height,
as well as potential biomass estimation, using NAIP imagery.
|
[
"cs.CV",
"eess.IV",
"92-08",
"I.4.9; I.5.4"
] | false |
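
An illustrative per-pixel regression setup loosely mirroring the task above (not the authors' U-Net): predict a canopy-height map from 4-band RGB-NIR imagery and train against a LiDAR-derived canopy height model with an L1 loss. The shapes, the tiny network, and the loss choice are assumptions.

```python
import torch
import torch.nn as nn

model = nn.Sequential(                       # stand-in for a U-Net regression model
    nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 1),                     # single-channel height map, in metres
)
naip = torch.rand(2, 4, 128, 128)            # fake RGB-NIR tiles
lidar_chm = torch.rand(2, 1, 128, 128) * 50  # fake LiDAR canopy height model (0-50 m)

loss = nn.L1Loss()(model(naip), lidar_chm)
loss.backward()
print(float(loss))
```
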
2306.01938
|
2023-06-02T22:39:33Z
|
Self-supervised Interest Point Detection and Description for Fisheye and
Perspective Images
|
[
"Marcela Mera-Trujillo",
"Shivang Patel",
"Yu Gu",
"Gianfranco Doretto"
] |
Keypoint detection and matching is a fundamental task in many computer vision
problems, from shape reconstruction, to structure from motion, to AR/VR
applications and robotics. It is a well-studied problem with remarkable
successes such as SIFT, and more recent deep learning approaches. While great
robustness is exhibited by these techniques with respect to noise, illumination
variation, and rigid motion transformations, less attention has been placed on
image distortion sensitivity. In this work, we focus on the case when this is
caused by the geometry of the cameras used for image acquisition, and consider
the keypoint detection and matching problem between the hybrid scenario of a
fisheye and a projective image. We build on a state-of-the-art approach and
derive a self-supervised procedure that enables training an interest point
detector and descriptor network. We also collected two new datasets for
additional training and testing in this unexplored scenario, and we demonstrate
that current approaches are suboptimal because they are designed to work in
traditional projective conditions, while the proposed approach turns out to be
the most effective.
|
[
"cs.CV",
"cs.RO"
] | false |
2306.01209
|
2023-06-02T00:00:09Z
|
Counting Crowds in Bad Weather
|
[
"Zhi-Kai Huang",
"Wei-Ting Chen",
"Yuan-Chun Chiang",
"Sy-Yen Kuo",
"Ming-Hsuan Yang"
] |
Crowd counting has recently attracted significant attention in the field of
computer vision due to its wide applications to image understanding. Numerous
methods have been proposed and achieved state-of-the-art performance for
real-world tasks. However, existing approaches do not perform well under
adverse weather such as haze, rain, and snow since the visual appearances of
crowds in such scenes are drastically different from those images in clear
weather of typical datasets. In this paper, we propose a method for robust
crowd counting in adverse weather scenarios. Instead of using a two-stage
approach that involves image restoration and crowd counting modules, our model
learns effective features and adaptive queries to account for large appearance
variations. With these weather queries, the proposed model can learn the
weather information according to the degradation of the input image and
optimize with the crowd counting module simultaneously. Experimental results
show that the proposed algorithm is effective in counting crowds under
different weather types on benchmark datasets. The source code and trained
models will be made available to the public.
|
[
"cs.CV",
"cs.AI",
"eess.IV"
] | false |
2306.01268
|
2023-06-02T05:04:27Z
|
DeepScribe: Localization and Classification of Elamite Cuneiform Signs
Via Deep Learning
|
[
"Edward C. Williams",
"Grace Su",
"Sandra R. Schloen",
"Miller C. Prosser",
"Susanne Paulus",
"Sanjay Krishnan"
] |
Twenty-five hundred years ago, the paperwork of the Achaemenid Empire was
recorded on clay tablets. In 1933, archaeologists from the University of
Chicago's Oriental Institute (OI) found tens of thousands of these tablets and
fragments during the excavation of Persepolis. Many of these tablets have been
painstakingly photographed and annotated by expert cuneiformists, and now
provide a rich dataset consisting of over 5,000 annotated tablet images and
100,000 cuneiform sign bounding boxes. We leverage this dataset to develop
DeepScribe, a modular computer vision pipeline capable of localizing cuneiform
signs and providing suggestions for the identity of each sign. We investigate
the difficulty of learning subtasks relevant to cuneiform tablet transcription
on ground-truth data, finding that a RetinaNet object detector can achieve a
localization mAP of 0.78 and a ResNet classifier can achieve a top-5 sign
classification accuracy of 0.89. The end-to-end pipeline achieves a top-5
classification accuracy of 0.80. As part of the classification module,
DeepScribe groups cuneiform signs into morphological clusters. We consider how
this automatic clustering approach differs from the organization of standard,
printed sign lists and what we may learn from it. These components, trained
individually, are sufficient to produce a system that can analyze photos of
cuneiform tablets from the Achaemenid period and provide useful transliteration
suggestions to researchers. We evaluate the model's end-to-end performance on
locating and classifying signs, providing a roadmap to a linguistically-aware
transliteration system, then consider the model's potential utility when
applied to other periods of cuneiform writing.
|
[
"cs.CV",
"cs.DL",
"cs.IR"
] | false |
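
A hedged sketch of a generic localize-then-classify pipeline of the kind described above, built from off-the-shelf torchvision models with random weights; the class counts, crop handling, and thresholds are assumptions, and this is not the DeepScribe code.

```python
import torch
import torchvision
import torch.nn.functional as F

detector = torchvision.models.detection.retinanet_resnet50_fpn(
    weights=None, weights_backbone=None, num_classes=2)      # assumed: a single "sign" class
classifier = torchvision.models.resnet50(weights=None, num_classes=200)  # assumed sign inventory

detector.eval(); classifier.eval()
tablet = torch.rand(3, 512, 512)                              # placeholder tablet photo
with torch.no_grad():
    boxes = detector([tablet])[0]["boxes"]
    for box in boxes[:5]:
        x0, y0, x1, y1 = box.int().tolist()
        if x1 - x0 < 2 or y1 - y0 < 2:
            continue                                          # skip degenerate boxes
        crop = F.interpolate(tablet[None, :, y0:y1, x0:x1], size=(224, 224))
        top5 = classifier(crop).topk(5).indices
        print(top5.tolist())                                  # top-5 sign suggestions
```
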
2306.01295
|
2023-06-02T06:41:24Z
|
Egocentric Planning for Scalable Embodied Task Achievement
|
[
"Xiaotian Liu",
"Hector Palacios",
"Christian Muise"
] |
Embodied agents face significant challenges when tasked with performing
actions in diverse environments, particularly in generalizing across object
types and executing suitable actions to accomplish tasks. Furthermore, agents
should exhibit robustness, minimizing the execution of illegal actions. In this
work, we present Egocentric Planning, an innovative approach that combines
symbolic planning and Object-oriented POMDPs to solve tasks in complex
environments, harnessing existing models for visual perception and natural
language processing. We evaluated our approach in ALFRED, a simulated
environment designed for domestic tasks, and demonstrated its high scalability,
achieving an impressive 36.07% unseen success rate in the ALFRED benchmark and
winning the ALFRED challenge at CVPR Embodied AI workshop. Our method requires
reliable perception and the specification or learning of a symbolic description
of the preconditions and effects of the agent's actions, as well as what object
types reveal information about others. It is capable of naturally scaling to
solve new tasks beyond ALFRED, as long as they can be solved using the
available skills. This work offers a solid baseline for studying end-to-end and
hybrid methods that aim to generalize to new tasks, including recent approaches
relying on LLMs, which often struggle to scale to long sequences of actions or
to produce robust plans for novel tasks.
|
[
"cs.AI",
"cs.CV",
"cs.LG",
"cs.RO"
] | false |
2306.01322
|
2023-06-02T07:44:00Z
|
Privacy Distillation: Reducing Re-identification Risk of Multimodal
Diffusion Models
|
[
"Virginia Fernandez",
"Pedro Sanchez",
"Walter Hugo Lopez Pinaya",
"Grzegorz Jacenków",
"Sotirios A. Tsaftaris",
"Jorge Cardoso"
] |
Knowledge distillation in neural networks refers to compressing a large model
or dataset into a smaller version of itself. We introduce Privacy Distillation,
a framework that allows a text-to-image generative model to teach another model
without exposing it to identifiable data. Here, we are interested in the
privacy issue faced by a data provider who wishes to share their data via a
multimodal generative model. A question that immediately arises is ``How can a
data provider ensure that the generative model is not leaking identifiable
information about a patient?''. Our solution consists of (1) training a first
diffusion model on real data, (2) generating a synthetic dataset using this
model and filtering it to exclude images with a re-identifiability risk, and (3)
training a second diffusion model on the filtered synthetic data only. We
showcase that datasets sampled from models trained with privacy distillation
can effectively reduce re-identification risk whilst maintaining downstream
performance.
|
[
"cs.LG",
"cs.CR",
"cs.CV"
] | false |
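
A toy sketch of the filtering step only (step 2 above): discard synthetic samples whose embedding lies too close to any real-patient embedding. The embeddings, cosine-similarity criterion, and threshold are stand-ins, not the authors' re-identifiability measure.

```python
import numpy as np

rng = np.random.default_rng(0)
real_emb = rng.standard_normal((1000, 128))    # placeholder embeddings of real images
synth_emb = rng.standard_normal((500, 128))    # placeholder embeddings of synthetic images

real_n = real_emb / np.linalg.norm(real_emb, axis=1, keepdims=True)
synth_n = synth_emb / np.linalg.norm(synth_emb, axis=1, keepdims=True)

max_sim = (synth_n @ real_n.T).max(axis=1)     # closest real sample per synthetic image
keep = max_sim < 0.95                          # illustrative re-identifiability threshold
print(f"kept {int(keep.sum())} of {len(keep)} synthetic samples")
```
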
2306.01421
|
2023-06-02T10:22:33Z
|
Convergence analysis of equilibrium methods for inverse problems
|
[
"Daniel Obmann",
"Markus Haltmeier"
] |
Recently, the use of deep equilibrium methods has emerged as a new approach
for solving imaging and other ill-posed inverse problems. While learned
components may be a key factor in the good performance of these methods in
practice, a theoretical justification from a regularization point of view is
still lacking. In this paper, we address this issue by providing stability and
convergence results for the class of equilibrium methods. In addition, we
derive convergence rates and stability estimates in the symmetric Bregman
distance. We strengthen our results for regularization operators with
contractive residuals. Furthermore, we use the presented analysis to gain
insight into the practical behavior of these methods, including a lower bound
on the performance of the regularized solutions. In addition, we show that the
convergence analysis leads to the design of a new type of loss function which
has several advantages over previous ones. Numerical simulations are used to
support our findings.
|
[
"math.NA",
"cs.CV",
"cs.NA"
] | false |
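
As background to the entry above (standard definitions, not taken from the paper): equilibrium-type reconstructions are defined as fixed points of a learned update, and convergence statements of this kind are often expressed in the symmetric Bregman distance of a convex regularizer $\mathcal{R}$.

```latex
\[
  u_\theta(y) = \mathcal{N}_\theta\bigl(u_\theta(y),\, y\bigr),
  \qquad
  D^{\mathrm{s}}_{\mathcal{R}}(u, v)
    = \langle r_u - r_v,\; u - v \rangle,
  \quad r_u \in \partial\mathcal{R}(u),\ r_v \in \partial\mathcal{R}(v).
\]
```
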
2306.01574
|
2023-06-02T14:38:58Z
|
Probabilistic Concept Bottleneck Models
|
[
"Eunji Kim",
"Dahuin Jung",
"Sangha Park",
"Siwon Kim",
"Sungroh Yoon"
] |
Interpretable models are designed to make decisions in a human-interpretable
manner. Representatively, Concept Bottleneck Models (CBM) follow a two-step
process of concept prediction and class prediction based on the predicted
concepts. CBM provides explanations with high-level concepts derived from
concept predictions; thus, reliable concept predictions are important for
trustworthiness. In this study, we address the ambiguity issue that can harm
reliability. While the existence of a concept can often be ambiguous in the
data, CBM predicts concepts deterministically without considering this
ambiguity. To provide a reliable interpretation against this ambiguity, we
propose Probabilistic Concept Bottleneck Models (ProbCBM). By leveraging
probabilistic concept embeddings, ProbCBM models uncertainty in concept
prediction and provides explanations based on the concept and its corresponding
uncertainty. This uncertainty enhances the reliability of the explanations.
Furthermore, as class uncertainty is derived from concept uncertainty in
ProbCBM, we can explain class uncertainty by means of concept uncertainty. Code
is publicly available at https://github.com/ejkim47/prob-cbm.
|
[
"cs.LG",
"cs.AI",
"cs.CV"
] | false |
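
A minimal sketch of propagating concept uncertainty to class uncertainty by Monte Carlo sampling, in the spirit of the probabilistic concept embeddings described above; the Gaussian parameterization, sigmoid link, and linear class head are assumptions, not the ProbCBM architecture.

```python
import torch
import torch.nn as nn

n_concepts, n_classes, n_samples = 8, 5, 100
mu, log_var = torch.randn(1, n_concepts), torch.randn(1, n_concepts)  # learned in practice
classifier = nn.Linear(n_concepts, n_classes)

std = (0.5 * log_var).exp()
concepts = torch.sigmoid(mu + std * torch.randn(n_samples, n_concepts))  # sampled concept probs
class_probs = torch.softmax(classifier(concepts), dim=-1)

print("mean class probs:", class_probs.mean(0))
print("class uncertainty (std):", class_probs.std(0))   # spread induced by concept uncertainty
```
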
2306.01623
|
2023-06-02T15:37:43Z
|
HomE: Homography-Equivariant Video Representation Learning
|
[
"Anirudh Sriram",
"Adrien Gaidon",
"Jiajun Wu",
"Juan Carlos Niebles",
"Li Fei-Fei",
"Ehsan Adeli"
] |
Recent advances in self-supervised representation learning have enabled more
efficient and robust model performance without relying on extensive labeled
data. However, most works are still focused on images, with few working on
videos and even fewer on multi-view videos, where more powerful inductive
biases can be leveraged for self-supervision. In this work, we propose a novel
method for representation learning of multi-view videos, where we explicitly
model the representation space to maintain Homography Equivariance (HomE). Our
method learns an implicit mapping between different views, culminating in a
representation space that maintains the homography relationship between
neighboring views. We evaluate our HomE representation via action recognition
and pedestrian intent prediction as downstream tasks. On action classification,
our method obtains 96.4% 3-fold accuracy on the UCF101 dataset, better than
most state-of-the-art self-supervised learning methods. Similarly, on the STIP
dataset, we outperform the state-of-the-art by 6% for pedestrian intent
prediction one second into the future while also obtaining an accuracy of 91.2%
for pedestrian action (cross vs. not-cross) classification. Code is available
at https://github.com/anirudhs123/HomE.
|
[
"cs.CV",
"cs.AI",
"cs.LG"
] | false |
2306.01654
|
2023-06-02T16:24:07Z
|
GANs Settle Scores!
|
[
"Siddarth Asokan",
"Nishanth Shetty",
"Aadithya Srikanth",
"Chandra Sekhar Seelamantula"
] |
Generative adversarial networks (GANs) comprise a generator, trained to learn
the underlying distribution of the desired data, and a discriminator, trained
to distinguish real samples from those output by the generator. A majority of
GAN literature focuses on understanding the optimality of the discriminator
through integral probability metric (IPM) or divergence based analysis. In this
paper, we propose a unified approach to analyzing the generator optimization
through a variational approach. In $f$-divergence-minimizing GANs, we show that
the optimal generator is the one that matches the score of its output
distribution with that of the data distribution, while in IPM GANs, we show
that this optimal generator matches score-like functions, involving the
flow-field of the kernel associated with a chosen IPM constraint space.
Further, the IPM-GAN optimization can be seen as one of smoothed
score-matching, where the scores of the data and the generator distributions
are convolved with the kernel associated with the constraint. The proposed
approach serves to unify score-based training and existing GAN flavors,
leveraging results from normalizing flows, while also providing explanations
for empirical phenomena such as the stability of non-saturating GAN losses.
Based on these results, we propose novel alternatives to $f$-GAN and IPM-GAN
training based on score and flow matching, and discriminator-guided Langevin
sampling.
|
[
"cs.LG",
"cs.CV",
"stat.ML"
] | false |
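
For reference, the "score" in the entry above is the gradient of the log-density, and the stated optimality condition for the $f$-divergence GAN generator can be written as a score-matching identity (a paraphrase of the abstract, not a derivation):

```latex
\[
  s_p(x) := \nabla_x \log p(x),
  \qquad
  s_{p_{\theta^\star}}(x) = s_{p_{\mathrm{data}}}(x).
\]
```
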
2306.01706
|
2023-06-02T17:28:52Z
|
Is Generative Modeling-based Stylization Necessary for Domain Adaptation
in Regression Tasks?
|
[
"Jinman Park",
"Francois Barnard",
"Saad Hossain",
"Sirisha Rambhatla",
"Paul Fieguth"
] |
Unsupervised domain adaptation (UDA) aims to bridge the gap between source
and target domains in the absence of target domain labels using two main
techniques: input-level alignment (such as generative modeling and stylization)
and feature-level alignment (which matches the distribution of the feature
maps, e.g. gradient reversal layers). Motivated by the success of generative
modeling for image classification, stylization-based methods were recently
proposed for regression tasks, such as pose estimation. However, the use of
input-level alignment via generative modeling and stylization incurs additional
overhead and computational complexity, which limits its use in real-world DA
tasks. To investigate the role of input-level alignment for DA, we ask the
following question: Is generative modeling-based stylization necessary for
visual domain adaptation in regression? Surprisingly, we find that
input-alignment has little effect on regression tasks as compared to
classification. Based on these insights, we develop a non-parametric
feature-level domain alignment method -- Implicit Stylization (ImSty) -- which
results in consistent improvements over the SOTA on regression tasks, without the need
for computationally intensive stylization and generative modeling. Our work
conducts a critical evaluation of the role of generative modeling and
stylization, at a time when these are also gaining popularity for domain
generalization.
|
[
"cs.CV",
"cs.AI",
"cs.LG"
] | false |