arxiv_id (stringlengths 10–10) | published (stringlengths 20–20) | titles (stringlengths 9–243) | authors (listlengths 1–389) | abstract (stringlengths 96–3.09k) | categories (listlengths 1–10) | selected (bool, 2 classes)
---|---|---|---|---|---|---
2306.00127
|
2023-05-31T19:05:26Z
|
Surrogate Model Extension (SME): A Fast and Accurate Weight Update
Attack on Federated Learning
|
[
"Junyi Zhu",
"Ruicong Yao",
"Matthew B. Blaschko"
] |
In Federated Learning (FL) and many other distributed training frameworks,
collaborators can hold their private data locally and only share the network
weights trained with the local data after multiple iterations. Gradient
inversion is a family of privacy attacks that recovers data from its generated
gradients. Seemingly, FL can provide a degree of protection against gradient
inversion attacks on weight updates, since the gradient of a single step is
concealed by the accumulation of gradients over multiple local iterations. In
this work, we propose a principled way to extend gradient inversion attacks to
weight updates in FL, thereby better exposing weaknesses in the presumed
privacy protection inherent in FL. In particular, we propose a surrogate model
method based on the characteristics of two-dimensional gradient flow and the
low-rank property of local updates. Our method greatly boosts the ability of
gradient inversion attacks on weight updates containing many iterations and
achieves state-of-the-art (SOTA) performance. Additionally, our method runs up
to $100\times$ faster than the SOTA baseline in the common FL scenario. Our
work re-evaluates and highlights the privacy risk of sharing network weights.
Our code is available at
https://github.com/JunyiZhu-AI/surrogate_model_extension.
|
[
"cs.LG",
"cs.CR"
] | false |
2306.00149
|
2023-05-31T19:39:15Z
|
Distributed Online Convex Optimization with Adversarial Constraints:
Reduced Cumulative Constraint Violation Bounds under Slater's Condition
|
[
"Xinlei Yi",
"Xiuxian Li",
"Tao Yang",
"Lihua Xie",
"Yiguang Hong",
"Tianyou Chai",
"Karl H. Johansson"
] |
This paper considers distributed online convex optimization with adversarial
constraints. In this setting, a network of agents makes decisions at each
round, and then only a portion of the loss function and a coordinate block of
the constraint function are privately revealed to each agent. The loss and
constraint functions are convex and can vary arbitrarily across rounds. The
agents collaborate to minimize network regret and cumulative constraint
violation. A novel distributed online algorithm is proposed and it achieves an
$\mathcal{O}(T^{\max\{c,1-c\}})$ network regret bound and an
$\mathcal{O}(T^{1-c/2})$ network cumulative constraint violation bound, where
$T$ is the number of rounds and $c\in(0,1)$ is a user-defined trade-off
parameter. When Slater's condition holds (i.e., there is a point that strictly
satisfies the inequality constraints), the network cumulative constraint
violation bound is reduced to $\mathcal{O}(T^{1-c})$. Moreover, if the loss
functions are strongly convex, then the network regret bound is reduced to
$\mathcal{O}(\log(T))$, and the network cumulative constraint violation bound
is reduced to $\mathcal{O}(\sqrt{\log(T)T})$ and $\mathcal{O}(\log(T))$ without
and with Slater's condition, respectively. To the best of our knowledge, this
paper is the first to achieve reduced (network) cumulative constraint violation
bounds for (distributed) online convex optimization with adversarial
constraints under Slater's condition. Finally, the theoretical results are
verified through numerical simulations.
|
[
"math.OC",
"cs.LG"
] | false |
2306.00204
|
2023-05-31T21:49:44Z
|
Toward Understanding Why Adam Converges Faster Than SGD for Transformers
|
[
"Yan Pan",
"Yuanzhi Li"
] |
While stochastic gradient descent (SGD) is still the most popular
optimization algorithm in deep learning, adaptive algorithms such as Adam have
established empirical advantages over SGD in some deep learning applications
such as training transformers. However, it remains an open question why Adam
converges significantly faster than SGD in these scenarios. In this paper, we
propose one explanation of why Adam converges faster than SGD using a new
concept, directional sharpness. We argue that the performance of optimization
algorithms is closely related to the directional sharpness of the update steps,
and show SGD has much worse directional sharpness compared to adaptive
algorithms. We further observe that only a small fraction of the coordinates
causes the bad sharpness and slow convergence of SGD, and propose to use
coordinate-wise clipping as a solution to SGD and other optimization
algorithms. We demonstrate the effect of coordinate-wise clipping on sharpness
reduction and speeding up the convergence of optimization algorithms under
various settings. We show that coordinate-wise clipping improves the local loss
reduction when only a small fraction of the coordinates has bad sharpness. We
conclude that the sharpness reduction effect of adaptive coordinate-wise
scaling is the reason for Adam's success in practice and suggest the use of
coordinate-wise clipping as a universal technique to speed up deep learning
optimization.
|
[
"cs.LG",
"cs.AI"
] | false |
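As a concrete illustration of the coordinate-wise clipping idea described in the abstract above, here is a minimal NumPy sketch; the threshold `tau`, the learning rate, and the toy gradient are hypothetical, and the paper's actual procedure may differ in details.

```python
import numpy as np

def coordinate_wise_clip(grad, tau):
    """Clip each coordinate of the gradient to the interval [-tau, tau].

    Coordinates with very large magnitude -- the ones blamed for bad
    directional sharpness -- are truncated, while the rest pass through.
    """
    return np.clip(grad, -tau, tau)

# Toy SGD step with coordinate-wise clipping (illustrative values only).
w = np.zeros(5)
grad = np.array([0.1, -0.2, 8.0, 0.05, -9.5])  # a few extreme coordinates
lr, tau = 0.1, 1.0
w -= lr * coordinate_wise_clip(grad, tau)
print(w)  # the two extreme coordinates contribute at most lr * tau each
```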
2306.00206
|
2023-05-31T21:57:33Z
|
Representation Reliability and Its Impact on Downstream Tasks
|
[
"Young-Jin Park",
"Hao Wang",
"Shervin Ardeshir",
"Navid Azizan"
] |
Self-supervised pre-trained models extract general-purpose representations
from data, and quantifying how reliable they are is crucial because many
downstream models use these representations as input for their own tasks. To
this end, we first introduce a formal definition of representation reliability:
the representation for a given test input is considered to be reliable if the
downstream models built on top of that representation can consistently generate
accurate predictions for that test point. Ideally, representation reliability
would be estimated without knowing the downstream tasks a priori. We
provide a negative result showing that existing frameworks for uncertainty
quantification in supervised learning are not suitable for this purpose. As an
alternative, we propose an ensemble-based method for quantifying representation
reliability, based on the concept of neighborhood consistency in the
representation spaces across various pre-trained models. More specifically, the
key insight is to use shared neighboring points as anchors to align different
representation spaces. We demonstrate through comprehensive numerical
experiments that our method is capable of predicting representation reliability
with high accuracy.
|
[
"cs.LG",
"cs.AI"
] | false |
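A minimal sketch of one way to score neighborhood consistency across an ensemble of representation spaces, using shared anchor points as the alignment, as the abstract above describes; the scoring function and all names here are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def knn_indices(reps, query, k):
    """Indices of the k nearest anchors to `query` in one representation space."""
    dists = np.linalg.norm(reps - query, axis=1)
    return set(np.argsort(dists)[:k])

def neighborhood_consistency(spaces, queries, k=10):
    """Average pairwise overlap of neighbor sets across representation spaces.

    `spaces` is a list of (n_anchors, dim_i) arrays from different pre-trained
    models; `queries` holds the per-space embedding of the same test point.
    Shared anchor indices act as the alignment between spaces.
    """
    sets = [knn_indices(s, q, k) for s, q in zip(spaces, queries)]
    overlaps = [len(a & b) / k for i, a in enumerate(sets)
                for b in sets[i + 1:]]
    return float(np.mean(overlaps))

rng = np.random.default_rng(0)
spaces = [rng.normal(size=(100, d)) for d in (16, 32)]   # two toy models
queries = [s[0] + 0.01 * rng.normal(size=s.shape[1]) for s in spaces]
print(neighborhood_consistency(spaces, queries))
```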
2306.00230
|
2023-05-31T22:59:52Z
|
Predictive Limitations of Physics-Informed Neural Networks in Vortex
Shedding
|
[
"Pi-Yueh Chuang",
"Lorena A. Barba"
] |
The recent surge of interest in physics-informed neural network (PINN)
methods has led to a wave of studies that attest to their potential for solving
partial differential equations (PDEs) and predicting the dynamics of physical
systems. However, the predictive limitations of PINNs have not been thoroughly
investigated. We look at the flow around a 2D cylinder and find that data-free
PINNs are unable to predict vortex shedding. Data-driven PINN exhibits vortex
shedding only while the training data (from a traditional CFD solver) is
available, but reverts to the steady state solution when the data flow stops.
We conduct dynamic mode decomposition and analyze the Koopman modes in the
solutions obtained with PINNs versus a traditional fluid solver (PetIBM). The
distribution of the Koopman eigenvalues on the complex plane suggests that PINN
is numerically dispersive and diffusive. The PINN method reverts to the steady
solution, possibly as a consequence of spectral bias. This case study raises
concerns about the ability of PINNs to predict flows with instabilities,
specifically vortex shedding. Our computational study supports the need for
more theoretical work to analyze the numerical properties of PINN methods. The
results in this paper are transparent and reproducible, with all data and code
available in public repositories and persistent archives; links are provided in
the paper repository at \url{https://github.com/barbagroup/jcs_paper_pinn}, and
a Reproducibility Statement within the paper.
|
[
"cs.CE",
"cs.LG"
] | false |
2306.00242
|
2023-05-31T23:27:58Z
|
Combinatorial Neural Bandits
|
[
"Taehyun Hwang",
"Kyuwook Chai",
"Min-hwan Oh"
] |
We consider a contextual combinatorial bandit problem where in each round a
learning agent selects a subset of arms and receives feedback on the selected
arms according to their scores. The score of an arm is an unknown function of
the arm's feature. Approximating this unknown score function with deep neural
networks, we propose two algorithms: Combinatorial Neural UCB ($\texttt{CN-UCB}$)
and Combinatorial Neural Thompson Sampling ($\texttt{CN-TS}$). We prove that
$\texttt{CN-UCB}$ achieves $\tilde{\mathcal{O}}(\tilde{d} \sqrt{T})$ or
$\tilde{\mathcal{O}}(\sqrt{\tilde{d} T K})$ regret, where $\tilde{d}$ is the
effective dimension of a neural tangent kernel matrix, $K$ is the size of a
subset of arms, and $T$ is the time horizon. For $\texttt{CN-TS}$, we adapt an
optimistic sampling technique to ensure the optimism of the sampled
combinatorial action, achieving a worst-case (frequentist) regret of
$\tilde{\mathcal{O}}(\tilde{d} \sqrt{TK})$. To the best of our knowledge, these
are the first combinatorial neural bandit algorithms with regret performance
guarantees. In particular, $\texttt{CN-TS}$ is the first Thompson sampling
algorithm with the worst-case regret guarantees for the general contextual
combinatorial bandit problem. The numerical experiments demonstrate the
superior performances of our proposed algorithms.
|
[
"stat.ML",
"cs.LG"
] | false |
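To make the optimistic subset-selection step concrete, here is a sketch with a linear score model standing in for the neural score function of the abstract above; the bonus form, `beta`, and all values are generic UCB assumptions rather than the paper's algorithm.

```python
import numpy as np

def ucb_subset(features, theta_hat, A_inv, beta, K):
    """Pick the K arms with the highest optimistic scores.

    A linear stand-in for the neural score function: mean estimate
    x^T theta_hat plus an exploration bonus beta * sqrt(x^T A^{-1} x).
    """
    means = features @ theta_hat
    bonus = beta * np.sqrt(np.einsum('ij,jk,ik->i', features, A_inv, features))
    return np.argsort(means + bonus)[-K:]

rng = np.random.default_rng(1)
d, n_arms, K = 8, 50, 3
features = rng.normal(size=(n_arms, d))
theta_hat = rng.normal(size=d)
A_inv = np.eye(d)          # identity before any observations are folded in
print(ucb_subset(features, theta_hat, A_inv, beta=1.0, K=K))
```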
2306.00557
|
2023-05-31T17:04:27Z
|
Improving Protein-peptide Interface Predictions in the Low Data Regime
|
[
"Justin Diamond",
"Markus Lill"
] |
We propose a novel approach for predicting protein-peptide interactions using
a bi-modal transformer architecture that learns an inter-facial joint
distribution of residual contacts. The current data sets for crystallized
protein-peptide complexes are limited, making it difficult to accurately
predict interactions between proteins and peptides. To address this issue, we
propose augmenting the existing data from PepBDB with pseudo protein-peptide
complexes derived from the PDB. The augmented data set acts as a method to
transfer physics-based, context-dependent intra-residue (within a domain)
interactions to the inter-residue (between-domain) setting. We show that the
distributions of inter-facial residue-residue interactions overlap with
intra-domain residue-residue interactions, enough to increase the predictive
power of our bi-modal transformer architecture. In addition, this data
augmentation allows us to leverage the vast amount of protein-only data
available in the PDB to train neural networks, in contrast to template-based
modeling that acts as a prior.
|
[
"q-bio.BM",
"cs.LG"
] | false |
2306.03099
|
2023-05-31T12:34:16Z
|
CrystalGPT: Enhancing system-to-system transferability in
crystallization prediction and control using time-series-transformers
|
[
"Niranjan Sitapure",
"Joseph S. Kwon"
] |
For prediction and real-time control tasks, machine-learning (ML)-based
digital twins are frequently employed. However, while these models are
typically accurate, they are custom-designed for individual systems, making
system-to-system (S2S) transferability difficult. This occurs even when
substantial similarities exist in the process dynamics across different
chemical systems. To address this challenge, we developed a novel
time-series-transformer (TST) framework that exploits the powerful transfer
learning capabilities inherent in transformer algorithms. This was demonstrated
using readily available process data obtained from different crystallizers
operating under various operational scenarios. Using this extensive dataset, we
trained a TST model (CrystalGPT) to exhibit remarkable S2S transferability not
only across all pre-established systems, but also to an unencountered system.
CrystalGPT achieved a cumulative error across all systems that is eight times
lower than that of existing ML models. Additionally, we coupled CrystalGPT
with a predictive controller to reduce the variance in setpoint tracking to
just 1%.
|
[
"cond-mat.mtrl-sci",
"cs.LG",
"J.2; I.6.5; I.2.6"
] | false |
2306.10023
|
2023-05-31T03:04:29Z
|
Theoretical Analysis on the Efficiency of Interleaved Comparisons
|
[
"Kojiro Iizuka",
"Hajime Morita",
"Makoto P. Kato"
] |
This study presents a theoretical analysis of the efficiency of interleaving,
an efficient online evaluation method for rankings. Although interleaving has
already been applied to production systems, the source of its high efficiency
has not been clarified in the literature. We begin by
designing a simple interleaving method similar to ordinary interleaving
methods. Then, we explore a condition under which the interleaving method is
more efficient than A/B testing and find that this is the case when users leave
the ranking depending on the item's relevance, a typical assumption made in
click models. Finally, we perform experiments based on numerical analysis and
user simulation, demonstrating that the theoretical results are consistent with
the empirical results.
|
[
"cs.IR",
"cs.LG"
] | false |
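For readers unfamiliar with interleaving, here is a sketch of classic team-draft interleaving, one of the "ordinary interleaving methods" the abstract above alludes to; it is not the simple method the paper designs, and the data is invented.

```python
import random

def team_draft_interleave(ranking_a, ranking_b, length, seed=0):
    """Classic team-draft interleaving of two rankings.

    Per round, the two rankers (visited in random order) each contribute
    their highest-ranked item not yet shown; team labels let clicks be
    credited to the ranker that supplied each item.
    """
    rng = random.Random(seed)
    length = min(length, len(set(ranking_a) | set(ranking_b)))
    interleaved, teams, used = [], [], set()
    while len(interleaved) < length:
        for team, ranking in rng.sample([('A', ranking_a), ('B', ranking_b)], 2):
            pick = next((x for x in ranking if x not in used), None)
            if pick is not None and len(interleaved) < length:
                interleaved.append(pick)
                teams.append(team)
                used.add(pick)
    return interleaved, teams

print(team_draft_interleave(['d1', 'd2', 'd3'], ['d3', 'd4', 'd1'], 4))
```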
2305.19475
|
2023-05-31T01:04:55Z
|
Doubly Constrained Fair Clustering
|
[
"John Dickerson",
"Seyed A. Esmaeili",
"Jamie Morgenstern",
"Claire Jie Zhang"
] |
The remarkable attention which fair clustering has received in the last few
years has resulted in a significant number of different notions of fairness.
Despite the fact that these notions are well-justified, they are often
motivated and studied in a disjoint manner where one fairness desideratum is
considered exclusively in isolation from the others. This leaves the
understanding of the relations between different fairness notions as an
important open problem in fair clustering. In this paper, we take the first
step in this direction. Specifically, we consider the two most prominent
demographic representation fairness notions in clustering: (1) Group Fairness
(GF), where the different demographic groups are supposed to have close to
population-level representation in each cluster and (2) Diversity in Center
Selection (DS), where the selected centers are supposed to have close to
population-level representation of each group. We show that given a constant
approximation algorithm for one constraint (GF or DS only) we can obtain a
constant approximation solution that satisfies both constraints simultaneously.
Interestingly, we prove that any given solution that satisfies the GF
constraint can always be post-processed at a bounded degradation to the
clustering cost to additionally satisfy the DS constraint while the reverse is
not true. Furthermore, we show that both GF and DS are incompatible (having an
empty feasibility set in the worst case) with a collection of other
distance-based fairness notions. Finally, we carry experiments to validate our
theoretical findings.
|
[
"cs.LG",
"cs.AI",
"cs.DS"
] | false |
2305.19482
|
2023-05-31T01:22:15Z
|
Adaptive False Discovery Rate Control with Privacy Guarantee
|
[
"Xintao Xia",
"Zhanrui Cai"
] |
Differentially private multiple testing procedures can protect the
information of individuals used in hypothesis tests while guaranteeing a small
fraction of false discoveries. In this paper, we propose a differentially
private adaptive FDR control method that can control the classic FDR metric
exactly at a user-specified level $\alpha$ with privacy guarantee, which is a
non-trivial improvement compared to the differentially private
Benjamini-Hochberg method proposed in Dwork et al. (2021). Our analysis is
based on two key insights: 1) a novel p-value transformation that preserves
both privacy and the mirror conservative property, and 2) a mirror peeling
algorithm that allows the construction of the filtration and application of the
optimal stopping technique. Numerical studies demonstrate that the proposed
DP-AdaPT performs better compared to the existing differentially private FDR
control methods. Compared to the non-private AdaPT, it incurs a small accuracy
loss but significantly reduces the computation cost.
|
[
"stat.ML",
"cs.LG",
"stat.ME"
] | false |
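As background for the abstract above, here is a sketch of the standard (non-private) Benjamini-Hochberg step-up procedure that the differentially private variants build on; the p-values are toy data, and this is not the paper's DP-AdaPT method.

```python
import numpy as np

def benjamini_hochberg(p_values, alpha=0.05):
    """Standard (non-private) Benjamini-Hochberg step-up procedure.

    Rejects the k hypotheses with the smallest p-values, where k is the
    largest index such that p_(k) <= alpha * k / m.
    """
    p = np.asarray(p_values)
    m = len(p)
    order = np.argsort(p)
    thresholds = alpha * np.arange(1, m + 1) / m
    below = p[order] <= thresholds
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    rejected = np.zeros(m, dtype=bool)
    rejected[order[:k]] = True
    return rejected

p_vals = [0.001, 0.008, 0.039, 0.041, 0.20, 0.74]
print(benjamini_hochberg(p_vals, alpha=0.05))  # rejects the two smallest
```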
2305.19534
|
2023-05-31T03:42:38Z
|
Recasting Self-Attention with Holographic Reduced Representations
|
[
"Mohammad Mahmudul Alam",
"Edward Raff",
"Stella Biderman",
"Tim Oates",
"James Holt"
] |
In recent years, self-attention has become the dominant paradigm for sequence
modeling in a variety of domains. However, in domains with very long sequence
lengths the $\mathcal{O}(T^2)$ memory and $\mathcal{O}(T^2 H)$ compute costs
can make using transformers infeasible. Motivated by problems in malware
detection, where sequence lengths of $T \geq 100,000$ are a roadblock to deep
learning, we re-cast self-attention using the neuro-symbolic approach of
Holographic Reduced Representations (HRR). In doing so we follow the same
high-level strategy as standard self-attention: a set of queries matching
against a set of keys, and returning a weighted response of the values for each
key. Implemented as a ``Hrrformer'' we obtain several benefits including
$\mathcal{O}(T H \log H)$ time complexity, $\mathcal{O}(T H)$ space complexity,
and convergence in $10\times$ fewer epochs. Nevertheless, the Hrrformer
achieves near state-of-the-art accuracy on LRA benchmarks and we are able to
learn with just a single layer. Combined, these benefits make our Hrrformer the
first viable Transformer for such long malware classification sequences and up
to $280\times$ faster to train on the Long Range Arena benchmark. Code is
available at
\url{https://github.com/NeuromorphicComputationResearchProgram/Hrrformer}
|
[
"cs.LG",
"cs.AI",
"stat.ML"
] | false |
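A minimal sketch of the HRR binding and unbinding primitives underlying the Hrrformer described above: binding is circular convolution, computed in the Fourier domain, and unbinding is circular correlation. The dimensionality and vectors are toy values; this illustrates the representation, not the attention layer itself.

```python
import numpy as np

def bind(a, b):
    """HRR binding: circular convolution, computed via the FFT."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def unbind(trace, key):
    """HRR unbinding: circular correlation with the approximate inverse key."""
    return np.real(np.fft.ifft(np.fft.fft(trace) * np.conj(np.fft.fft(key))))

rng = np.random.default_rng(0)
d = 1024
key, value = rng.normal(0, 1 / np.sqrt(d), size=(2, d))
trace = bind(key, value)
recovered = unbind(trace, key)
# Retrieval is approximate: the correlation is clearly positive (around
# 0.7 for a single bound pair) rather than exact.
print(np.corrcoef(recovered, value)[0, 1])
```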
2305.19545
|
2023-05-31T04:23:56Z
|
Catalysis distillation neural network for the few shot open catalyst
challenge
|
[
"Bowen Deng"
] |
The integration of artificial intelligence and science has resulted in
substantial progress in computational chemistry methods for the design and
discovery of novel catalysts. Nonetheless, the challenges of electrocatalytic
reactions and developing a large-scale language model in catalysis persist, and
the recent success of ChatGPT's (Chat Generative Pre-trained Transformer)
few-shot methods surpassing BERT (Bidirectional Encoder Representation from
Transformers) underscores the importance of addressing limited data, expensive
computations, time constraints and structure-activity relationship in research.
Hence, the development of few-shot techniques for catalysis is critical and
essential, regardless of present and future requirements. This paper introduces
the Few-Shot Open Catalyst Challenge 2023, a competition aimed at advancing the
application of machine learning technology for predicting catalytic reactions
on catalytic surfaces, with a specific focus on dual-atom catalysts in hydrogen
peroxide electrocatalysis. To address the challenge of limited data in
catalysis, we propose a machine learning approach based on MLP-Like and a
framework called Catalysis Distillation Graph Neural Network (CDGNN). Our
results demonstrate that CDGNN effectively learns embeddings from catalytic
structures, enabling the capture of structure-adsorption relationships. This
accomplishment results in a markedly more advanced and efficient determination
of the reaction pathway for hydrogen peroxide, surpassing the current graph
neural network approach by 16.1%. Consequently, CDGNN presents a promising
approach for few-shot learning in catalysis.
|
[
"physics.chem-ph",
"cs.CE",
"cs.LG"
] | false |
2305.19582
|
2023-05-31T05:59:42Z
|
Causal Discovery with Latent Confounders Based on Higher-Order Cumulants
|
[
"Ruichu Cai",
"Zhiyi Huang",
"Wei Chen",
"Zhifeng Hao",
"Kun Zhang"
] |
Causal discovery with latent confounders is an important but challenging task
in many scientific areas. Despite the success of some overcomplete independent
component analysis (OICA) based methods in certain domains, they are
computationally expensive and can easily get stuck into local optima. We notice
that interestingly, by making use of higher-order cumulants, there exists a
closed-form solution to OICA in specific cases, e.g., when the mixing procedure
follows the One-Latent-Component structure. In light of the power of the
closed-form solution to OICA corresponding to the One-Latent-Component
structure, we formulate a way to estimate the mixing matrix using the
higher-order cumulants, and further propose the testable One-Latent-Component
condition to identify the latent variables and determine causal orders. By
iteratively removing the identified shared latent components, we successfully
extend the results on the One-Latent-Component structure to the
Multi-Latent-Component structure and finally provide a practical and
asymptotically correct algorithm to learn the causal structure with latent
variables. Experimental results illustrate the asymptotic correctness and
effectiveness of the proposed method.
|
[
"cs.LG",
"cs.AI",
"stat.ME"
] | false |
2305.19588
|
2023-05-31T06:15:50Z
|
Active causal structure learning with advice
|
[
"Davin Choo",
"Themis Gouleakis",
"Arnab Bhattacharyya"
] |
We introduce the problem of active causal structure learning with advice. In
the typical well-studied setting, the learning algorithm is given the essential
graph for the observational distribution and is asked to recover the underlying
causal directed acyclic graph (DAG) $G^*$ while minimizing the number of
interventions made. In our setting, we are additionally given side information
about $G^*$ as advice, e.g. a DAG $G$ purported to be $G^*$. We ask whether the
learning algorithm can benefit from the advice when it is close to being
correct, while still having worst-case guarantees even when the advice is
arbitrarily bad. Our work is in the same space as the growing body of research
on algorithms with predictions. When the advice is a DAG $G$, we design an
adaptive search algorithm to recover $G^*$ whose intervention cost is at most
$O(\max\{1, \log \psi\})$ times the cost for verifying $G^*$; here, $\psi$ is a
distance measure between $G$ and $G^*$ that is upper bounded by the number of
variables $n$, and is exactly 0 when $G=G^*$. Our approximation factor matches
the state-of-the-art for the advice-less setting.
|
[
"cs.LG",
"cs.AI",
"cs.DS",
"stat.ML"
] | false |
2305.19598
|
2023-05-31T06:58:34Z
|
Towards Semi-supervised Universal Graph Classification
|
[
"Xiao Luo",
"Yusheng Zhao",
"Yifang Qin",
"Wei Ju",
"Ming Zhang"
] |
Graph neural networks have pushed state-of-the-arts in graph classifications
recently. Typically, these methods are studied within the context of supervised
end-to-end training, which necessitates copious task-specific labels. However,
in real-world circumstances, labeled data could be limited, and there could be
a massive complementary corpus of unlabeled data, even from unknown classes.
Towards this end, we study the problem of semi-supervised
universal graph classification, which not only identifies graph samples which
do not belong to known classes, but also classifies the remaining samples into
their respective classes. This problem is challenging due to a severe lack of
labels and potential class shifts. In this paper, we propose a novel graph
neural network framework named UGNN, which makes the best of unlabeled data
from the subgraph perspective. To tackle class shifts, we estimate the
certainty of unlabeled graphs using multiple subgraphs, which facilitates the
discovery of unlabeled data from unknown categories. Moreover, we construct
semantic prototypes in the embedding space for both known and unknown
categories and utilize posterior prototype assignments inferred from the
Sinkhorn-Knopp algorithm to learn from abundant unlabeled graphs across
different subgraph views. Extensive experiments on six datasets verify the
effectiveness of UGNN in different settings.
|
[
"cs.LG",
"cs.AI",
"cs.IR",
"cs.SI"
] | false |
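A minimal sketch of the Sinkhorn-Knopp normalization used to turn similarity scores into balanced soft prototype assignments, as mentioned in the abstract above; the temperature `eps`, iteration count, and the balancing scheme follow common SwAV-style practice and are assumptions, not the paper's exact recipe.

```python
import numpy as np

def sinkhorn_knopp(scores, n_iters=3, eps=0.05):
    """Normalize a similarity matrix into soft, balanced assignments of
    samples (rows) to prototypes (columns) by alternately rescaling rows
    and columns toward uniform marginals.
    """
    Q = np.exp(scores / eps)
    Q /= Q.sum()
    n, k = Q.shape
    for _ in range(n_iters):
        Q /= Q.sum(axis=0, keepdims=True); Q /= k   # balance prototypes
        Q /= Q.sum(axis=1, keepdims=True); Q /= n   # balance samples
    return Q * n  # each row sums to 1: a soft assignment per sample

rng = np.random.default_rng(0)
scores = rng.normal(size=(8, 4))   # 8 graphs vs. 4 semantic prototypes
Q = sinkhorn_knopp(scores)
print(Q.sum(axis=1))  # each row ~1
```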
2305.19602
|
2023-05-31T07:15:06Z
|
Learning Music Sequence Representation from Text Supervision
|
[
"Tianyu Chen",
"Yuan Xie",
"Shuai Zhang",
"Shaohan Huang",
"Haoyi Zhou",
"Jianxin Li"
] |
Music representation learning is notoriously difficult due to the complex
human-related concepts contained in sequences of numerical signals. To
excavate better MUsic SEquence Representation from labeled audio, we propose a
novel text-supervision pre-training method, namely MUSER. MUSER adopts an
audio-spectrum-text tri-modal contrastive learning framework, where the text
input could be any form of meta-data with the help of text templates while the
spectrum is derived from an audio sequence. Our experiments reveal that MUSER
could be more flexibly adapted to downstream tasks compared with the current
data-hungry pre-training method, and it only requires 0.056% of pre-training
data to achieve the state-of-the-art performance.
|
[
"cs.SD",
"cs.LG",
"eess.AS"
] | false |
2305.19684
|
2023-05-31T09:28:02Z
|
End-to-end Training of Deep Boltzmann Machines by Unbiased Contrastive
Divergence with Local Mode Initialization
|
[
"Shohei Taniguchi",
"Masahiro Suzuki",
"Yusuke Iwasawa",
"Yutaka Matsuo"
] |
We address the problem of biased gradient estimation in deep Boltzmann
machines (DBMs). The existing method to obtain an unbiased estimator uses a
maximal coupling based on a Gibbs sampler, but when the state is
high-dimensional, it takes a long time to converge. In this study, we propose
to use a coupling based on the Metropolis-Hastings (MH) and to initialize the
state around a local mode of the target distribution. Because of the propensity
of MH to reject proposals, the coupling tends to converge in only one step with
a high probability, leading to high efficiency. We find that our method allows
DBMs to be trained in an end-to-end fashion without greedy pretraining. We also
propose some practical techniques to further improve the performance of DBMs.
We empirically demonstrate that our training algorithm enables DBMs to show
comparable generative performance to other deep generative models, achieving
the FID score of 10.33 for MNIST.
|
[
"cs.LG",
"cs.AI",
"stat.ML"
] | false |
2305.19695
|
2023-05-31T09:38:50Z
|
Causal discovery for time series with constraint-based model and PMIME
measure
|
[
"Antonin Arsac",
"Aurore Lomet",
"Jean-Philippe Poli"
] |
Causality defines the relationship between cause and effect. In the
multivariate time series field, this notion allows one to characterize the
links between several time series while accounting for temporal lags. These
phenomena are particularly important in medicine, e.g., to analyze the effect
of a drug, in manufacturing to detect the causes of an anomaly in a complex
system, or in the social sciences. Most of the time, these complex systems are
studied through correlation alone. But correlation can lead to spurious
relationships.
To circumvent this problem, we present in this paper a novel approach for
discovering causality in time series data that combines a causal discovery
algorithm with an information theoretic-based measure. Hence the proposed
method allows inferring both linear and non-linear relationships and building
the underlying causal graph. We evaluate the performance of our approach on
several simulated data sets, showing promising results.
|
[
"stat.ME",
"cs.IT",
"cs.LG",
"math.IT"
] | false |
2305.19696
|
2023-05-31T09:41:27Z
|
An Efficient Machine Learning-based Channel Prediction Technique for
OFDM Sub-Bands
|
[
"Pedro E. G. Silva",
"Jules M. Moualeu",
"Pedro H. Nardelli",
"Rausley A. A. de Souza"
] |
The acquisition of accurate channel state information (CSI) is of utmost
importance since it provides performance improvement of wireless communication
systems. However, acquiring accurate CSI, which can be done through channel
estimation or channel prediction, is an intricate task due to the complexity of
the time-varying and frequency selectivity of the wireless environment. To this
end, we propose an efficient machine learning (ML)-based technique for channel
prediction in orthogonal frequency-division multiplexing (OFDM) sub-bands. The
novelty of the proposed approach lies in training on channel fading samples
used to estimate future channel behaviour under selective fading.
|
[
"cs.IT",
"cs.LG",
"eess.SP",
"math.IT"
] | false |
2305.19698
|
2023-05-31T09:43:49Z
|
Investigation of the Robustness of Neural Density Fields
|
[
"Jonas Schuhmacher",
"Fabio Gratl",
"Dario Izzo",
"Pablo Gómez"
] |
Recent advances in modeling density distributions, so-called neural density
fields, can accurately describe the density distribution of celestial bodies
without, e.g., requiring a shape model - properties of great advantage when
designing trajectories close to these bodies. Previous work introduced this
approach, but several open questions remained. This work investigates neural
density fields and their relative errors in the context of robustness to
external factors like noise or constraints during training, like the maximal
available gravity signal strength due to a certain distance exemplified for 433
Eros and 67P/Churyumov-Gerasimenko. It is found that both models trained on a
polyhedral and mascon ground truth perform similarly, indicating that the
ground truth is not the accuracy bottleneck. The impact of solar radiation
pressure on a typical probe affects training negligibly, with the relative
error being of the same magnitude as without noise. However, limiting the
precision of measurement data by applying Gaussian noise hurts the obtainable
precision. Further, pretraining is shown to be practical in order to speed up
network training. Hence, this work demonstrates that training neural networks
for the gravity inversion problem is appropriate as long as the gravity signal
is distinguishable from noise.
Code and results are available at https://github.com/gomezzz/geodesyNets
|
[
"astro-ph.EP",
"astro-ph.IM",
"cs.LG"
] | false |
2305.19733
|
2023-05-31T10:53:46Z
|
APPRAISER: DNN Fault Resilience Analysis Employing Approximation Errors
|
[
"Mahdi Taheri",
"Mohammad Hasan Ahmadilivani",
"Maksim Jenihhin",
"Masoud Daneshtalab",
"Jaan Raik"
] |
Nowadays, the extensive exploitation of Deep Neural Networks (DNNs) in
safety-critical applications raises new reliability concerns. In practice,
methods for fault injection by emulation in hardware are efficient and widely
used to study the resilience of DNN architectures for mitigating reliability
issues already at the early design stages. However, the state-of-the-art
methods for fault injection by emulation incur a spectrum of time-, design- and
control-complexity problems. To overcome these issues, a novel resiliency
assessment method called APPRAISER is proposed that applies functional
approximation for a non-conventional purpose and exploits approximate computing
errors to its advantage. By adopting this concept in the resiliency assessment
domain, APPRAISER provides thousands of times speed-up in the assessment
process, while keeping high accuracy of the analysis. In this paper, APPRAISER
is validated by comparing it with state-of-the-art approaches for fault
injection by emulation in FPGA. By this, the feasibility of the idea is
demonstrated, and a new perspective in resiliency evaluation for DNNs is
opened.
|
[
"cs.LG",
"cs.AI",
"cs.AR"
] | false |
2305.19918
|
2023-05-31T14:55:47Z
|
Fully Dynamic Submodular Maximization over Matroids
|
[
"Paul Dütting",
"Federico Fusco",
"Silvio Lattanzi",
"Ashkan Norouzi-Fard",
"Morteza Zadimoghaddam"
] |
Maximizing monotone submodular functions under a matroid constraint is a
classic algorithmic problem with multiple applications in data mining and
machine learning. We study this classic problem in the fully dynamic setting,
where elements can be both inserted and deleted in real-time. Our main result
is a randomized algorithm that maintains an efficient data structure with an
$\tilde{O}(k^2)$ amortized update time (in the number of additions and
deletions) and yields a $4$-approximate solution, where $k$ is the rank of the
matroid.
|
[
"cs.DS",
"cs.LG",
"stat.ML"
] | false |
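For context on the abstract above, here is a sketch of the classic static greedy algorithm for monotone submodular maximization under a cardinality (uniform-matroid) constraint, i.e., the baseline whose solution quality the fully dynamic algorithm must track under insertions and deletions; the coverage function and data are toy examples.

```python
def greedy_submodular(ground_set, f, k):
    """Classic greedy: repeatedly add the element with the largest
    marginal gain until k elements are chosen or no gain remains.
    """
    S = set()
    for _ in range(k):
        gains = {e: f(S | {e}) - f(S) for e in ground_set - S}
        best = max(gains, key=gains.get)
        if gains[best] <= 0:
            break
        S.add(best)
    return S

# Toy coverage function: f(S) = number of elements covered by chosen sets.
sets = {1: {'a', 'b'}, 2: {'b', 'c'}, 3: {'c', 'd', 'e'}, 4: {'a'}}
f = lambda S: len(set().union(*(sets[i] for i in S))) if S else 0
print(greedy_submodular(set(sets), f, k=2))  # picks {3, 1}: covers 5 items
```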
2305.20020
|
2023-05-31T16:48:34Z
|
Bias Mitigation Methods for Binary Classification Decision-Making
Systems: Survey and Recommendations
|
[
"Madeleine Waller",
"Odinaldo Rodrigues",
"Oana Cocarascu"
] |
Bias mitigation methods for binary classification decision-making systems
have been widely researched due to the ever-growing importance of designing
fair machine learning processes that are impartial and do not discriminate
against individuals or groups based on protected personal characteristics. In
this paper, we present a structured overview of the research landscape for bias
mitigation methods, report on their benefits and limitations, and provide
recommendations for the development of future bias mitigation methods for
binary classification.
|
[
"cs.LG",
"cs.AI",
"cs.CY"
] | false |
2305.20025
|
2023-05-31T16:54:25Z
|
Variational $f$-Divergence and Derangements for Discriminative Mutual
Information Estimation
|
[
"Nunzio A. Letizia",
"Nicola Novello",
"Andrea M. Tonello"
] |
The accurate estimation of the mutual information is a crucial task in
various applications, including machine learning, communications, and biology,
since it enables the understanding of complex systems. High-dimensional data
render the task extremely challenging due to the amount of data to be processed
and the presence of convoluted patterns. Neural estimators based on variational
lower bounds of the mutual information have gained attention in recent years
but they are prone to either high bias or high variance as a consequence of the
partition function. We propose a novel class of discriminative mutual
information estimators based on the variational representation of the
$f$-divergence. We investigate the impact of the permutation function used to
obtain the marginal training samples and present a novel architectural solution
based on derangements. The proposed estimator is flexible as it exhibits an
excellent bias/variance trade-off. Experiments on reference scenarios
demonstrate that our approach outperforms state-of-the-art neural estimators
both in terms of accuracy and complexity.
|
[
"cs.LG",
"cs.IT",
"eess.SP",
"math.IT"
] | false |
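A minimal sketch of the derangement idea from the abstract above: marginal training samples are produced with a permutation that has no fixed points, so no joint pair survives the shuffle. The rejection sampler and toy data are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

def random_derangement(n, rng):
    """Sample a permutation with no fixed points (a derangement) by
    rejection; the acceptance probability tends to 1/e, so this is cheap.
    """
    while True:
        perm = rng.permutation(n)
        if not np.any(perm == np.arange(n)):
            return perm

rng = np.random.default_rng(0)
x = rng.normal(size=(6, 2))
y = x + 0.1 * rng.normal(size=(6, 2))        # paired (joint) samples
y_marginal = y[random_derangement(len(y), rng)]
# Every y_marginal[i] is guaranteed to come from a different x[i], unlike
# a plain shuffle, which can leave some joint pairs intact.
print(random_derangement(6, rng))
```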
2305.20053
|
2023-05-31T17:26:20Z
|
Efficient PDE-Constrained optimization under high-dimensional
uncertainty using derivative-informed neural operators
|
[
"Dingcheng Luo",
"Thomas O'Leary-Roseberry",
"Peng Chen",
"Omar Ghattas"
] |
We propose a novel machine learning framework for solving optimization
problems governed by large-scale partial differential equations (PDEs) with
high-dimensional random parameters. Such optimization under uncertainty (OUU)
problems may be computationally prohibitive using classical methods, particularly
when a large number of samples is needed to evaluate risk measures at every
iteration of an optimization algorithm, where each sample requires the solution
of an expensive-to-solve PDE. To address this challenge, we propose a new
neural operator approximation of the PDE solution operator that has the
combined merits of (1) accurate approximation of not only the map from the
joint inputs of random parameters and optimization variables to the PDE state,
but also its derivative with respect to the optimization variables, (2)
efficient construction of the neural network using reduced basis architectures
that are scalable to high-dimensional OUU problems, and (3) requiring only a
limited number of training data to achieve high accuracy for both the PDE
solution and the OUU solution. We refer to such neural operators as multi-input
reduced basis derivative informed neural operators (MR-DINOs). We demonstrate
the accuracy and efficiency of our approach through several numerical
experiments, i.e., the risk-averse control of a semilinear elliptic PDE and the
steady-state
Navier--Stokes equations in two and three spatial dimensions, each involving
random field inputs. Across the examples, MR-DINOs offer $10^{3}$--$10^{7}
\times$ reductions in execution time, and are able to produce OUU solutions of
comparable accuracies to those from standard PDE based solutions while being
over $10 \times$ more cost-efficient after factoring in the cost of
construction.
|
[
"math.OC",
"cs.LG",
"cs.NA",
"math.NA"
] | false |
2305.20069
|
2023-05-31T17:44:07Z
|
A survey on the complexity of learning quantum states
|
[
"Anurag Anshu",
"Srinivasan Arunachalam"
] |
We survey various recent results that rigorously study the complexity of
learning quantum states. These include progress on quantum tomography, learning
physical quantum states, alternate learning models to tomography and learning
classical functions encoded as quantum states. We highlight how these results
are paving the way for a highly successful theory with a range of exciting open
questions. To this end, we distill 25 open questions from these results.
|
[
"quant-ph",
"cs.CC",
"cs.LG"
] | false |
2305.20077
|
2023-05-31T17:51:30Z
|
Managed Geo-Distributed Feature Store: Architecture and System Design
|
[
"Anya Li",
"Bhala Ranganathan",
"Feng Pan",
"Mickey Zhang",
"Qianjun Xu",
"Runhan Li",
"Sethu Raman",
"Shail Paragbhai Shah",
"Vivienne Tang"
] |
Companies are using machine learning to solve real-world problems and are
developing hundreds to thousands of features in the process. They are building
feature engineering pipelines as part of the MLOps life cycle to transform data
from various data sources and materialize them for future consumption.
Without feature stores, different teams across various business groups would
maintain the above process independently, which can lead to conflicting and
duplicated features in the system. Data scientists find it hard to search for
and reuse existing features and it is painful to maintain version control.
Furthermore, feature correctness violations related to online (inferencing) -
offline (training) skews and data leakage are common. Although the machine
learning community has extensively discussed the need for feature stores and
their purpose, this paper aims to capture the core architectural components
that make up a managed feature store and to share the design learning in
building such a system.
|
[
"cs.LG",
"cs.DC",
"cs.SE"
] | false |
2306.00040
|
2023-05-31T12:50:44Z
|
Assessing the Generalizability of a Performance Predictive Model
|
[
"Ana Nikolikj",
"Gjorgjina Cenikj",
"Gordana Ispirova",
"Diederick Vermetten",
"Ryan Dieter Lang",
"Andries Petrus Engelbrecht",
"Carola Doerr",
"Peter Korošec",
"Tome Eftimov"
] |
A key component of automated algorithm selection and configuration, which in
most cases are performed using supervised machine learning (ML) methods, is a
well-performing predictive model. The predictive model uses the feature
representation of a set of problem instances as input data and predicts the
algorithm performance achieved on them. Common machine learning models struggle
to make predictions for instances with feature representations not covered by
the training data, resulting in poor generalization to unseen problems. In this
study, we propose a workflow to estimate how well a predictive model of
algorithm performance trained on one benchmark suite generalizes to another. The
workflow has been tested by training predictive models across benchmark suites
and the results show that generalizability patterns in the landscape feature
space are reflected in the performance space.
|
[
"cs.LG",
"cs.AI",
"cs.NE"
] | false |
2306.00044
|
2023-05-31T15:58:37Z
|
How to Construct Perfect and Worse-than-Coin-Flip Spoofing
Countermeasures: A Word of Warning on Shortcut Learning
|
[
"Hye-jin Shim",
"Rosa González Hautamäki",
"Md Sahidullah",
"Tomi Kinnunen"
] |
Shortcut learning, or the "Clever Hans" effect, refers to situations where a
learning agent (e.g., deep neural networks) learns spurious correlations
present in data, resulting in biased models. We focus on finding shortcuts in
deep learning based spoofing countermeasures (CMs) that predict whether a given
utterance is spoofed or not. While prior work has addressed specific data
artifacts, such as silence, no general normative framework has been explored
for analyzing shortcut learning in CMs. In this study, we propose a generic
approach to identifying shortcuts by introducing systematic interventions on
the training and test sides, including the boundary cases of `near-perfect` and
`worse than coin flip` (label flip). By using three different models, ranging
from classic to state-of-the-art, we demonstrate the presence of shortcut
learning in five simulated conditions. We analyze the results using a
regression model to understand how biases affect the class-conditional score
statistics.
|
[
"cs.LG",
"cs.CR",
"cs.SD",
"eess.AS"
] | false |
2306.00045
|
2023-05-31T15:58:54Z
|
Lottery Tickets in Evolutionary Optimization: On Sparse
Backpropagation-Free Trainability
|
[
"Robert Tjarko Lange",
"Henning Sprekeler"
] |
Is the lottery ticket phenomenon an idiosyncrasy of gradient-based training
or does it generalize to evolutionary optimization? In this paper we establish
the existence of highly sparse trainable initializations for evolution
strategies (ES) and characterize qualitative differences compared to gradient
descent (GD)-based sparse training. We introduce a novel signal-to-noise
iterative pruning procedure, which incorporates loss curvature information into
the network pruning step. This can enable the discovery of even sparser
trainable network initializations when using black-box evolution as compared to
GD-based optimization. Furthermore, we find that these initializations encode
an inductive bias, which transfers across different ES, related tasks and even
to GD-based training. Finally, we compare the local optima resulting from the
different optimization paradigms and sparsity levels. In contrast to GD, ES
explore diverse and flat local optima and do not preserve linear mode
connectivity across sparsity levels and independent runs. The results highlight
qualitative differences between evolution and gradient-based learning dynamics,
which can be uncovered by the study of iterative pruning procedures.
|
[
"cs.NE",
"cs.AI",
"cs.LG"
] | false |
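A sketch of a signal-to-noise pruning criterion in the spirit of the abstract above, computed over a population of perturbed parameter vectors such as an evolution strategy maintains; the exact statistic, the curvature weighting, and all values here are hypothetical stand-ins for the paper's procedure.

```python
import numpy as np

def snr_prune_mask(pop_weights, sparsity):
    """Prune by signal-to-noise ratio |mean| / std over a population of
    parameter vectors; weights whose SNR falls in the lowest `sparsity`
    fraction are zeroed out (hypothetical variant of the paper's rule).
    """
    mean = pop_weights.mean(axis=0)
    std = pop_weights.std(axis=0) + 1e-8
    snr = np.abs(mean) / std
    cutoff = np.quantile(snr, sparsity)
    return snr > cutoff

rng = np.random.default_rng(0)
theta = rng.normal(size=100)
population = theta + 0.1 * rng.normal(size=(64, 100))  # ES perturbations
mask = snr_prune_mask(population, sparsity=0.8)
print(mask.sum(), "of", mask.size, "weights kept")
```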
2306.00061
|
2023-05-31T18:00:02Z
|
Shadows of quantum machine learning
|
[
"Sofiene Jerbi",
"Casper Gyurik",
"Simon C. Marshall",
"Riccardo Molteni",
"Vedran Dunjko"
] |
Quantum machine learning is often highlighted as one of the most promising
uses for a quantum computer to solve practical problems. However, a major
obstacle to the widespread use of quantum machine learning models in practice
is that these models, even once trained, still require access to a quantum
computer in order to be evaluated on new data. To solve this issue, we suggest
that following the training phase of a quantum model, a quantum computer could
be used to generate what we call a classical shadow of this model, i.e., a
classically computable approximation of the learned function. While recent
works already explore this idea and suggest approaches to construct such shadow
models, they also raise the possibility that a completely classical model could
be trained instead, thus circumventing the need for a quantum computer in the
first place. In this work, we take a novel approach to define shadow models
based on the frameworks of quantum linear models and classical shadow
tomography. This approach allows us to show that there exist shadow models
which can solve certain learning tasks that are intractable for fully classical
models, based on widely-believed cryptography assumptions. We also discuss the
(un)likeliness that all quantum models could be shadowfiable, based on common
assumptions in complexity theory.
|
[
"quant-ph",
"cs.AI",
"cs.LG",
"stat.ML"
] | false |
2306.00087
|
2023-05-31T18:05:51Z
|
Adaptive Coordination in Social Embodied Rearrangement
|
[
"Andrew Szot",
"Unnat Jain",
"Dhruv Batra",
"Zsolt Kira",
"Ruta Desai",
"Akshara Rai"
] |
We present the task of "Social Rearrangement", consisting of cooperative
everyday tasks like setting up the dinner table, tidying a house or unpacking
groceries in a simulated multi-agent environment. In Social Rearrangement, two
robots coordinate to complete a long-horizon task, using onboard sensing and
egocentric observations, and no privileged information about the environment.
We study zero-shot coordination (ZSC) in this task, where an agent collaborates
with a new partner, emulating a scenario where a robot collaborates with a new
human partner. Prior ZSC approaches struggle to generalize in our complex and
visually rich setting, and on further analysis, we find that they fail to
generate diverse coordination behaviors at training time. To counter this, we
propose Behavior Diversity Play (BDP), a novel ZSC approach that encourages
diversity through a discriminability objective. Our results demonstrate that
BDP learns adaptive agents that can tackle visual coordination, and zero-shot
generalize to new partners in unseen environments, achieving 35% higher success
and 32% higher efficiency compared to baselines.
|
[
"cs.LG",
"cs.MA",
"cs.RO"
] | false |
2306.00091
|
2023-05-31T18:09:37Z
|
A General Framework for Equivariant Neural Networks on Reductive Lie
Groups
|
[
"Ilyes Batatia",
"Mario Geiger",
"Jose Munoz",
"Tess Smidt",
"Lior Silberman",
"Christoph Ortner"
] |
Reductive Lie Groups, such as the orthogonal groups, the Lorentz group, or
the unitary groups, play essential roles across scientific fields as diverse as
high energy physics, quantum mechanics, quantum chromodynamics, molecular
dynamics, computer vision, and imaging. In this paper, we present a general
Equivariant Neural Network architecture capable of respecting the symmetries of
the finite-dimensional representations of any reductive Lie Group G. Our
approach generalizes the successful ACE and MACE architectures for atomistic
point clouds to any data equivariant to a reductive Lie group action. We also
introduce the lie-nn software library, which provides all the necessary tools
to develop and implement such general G-equivariant neural networks. It
implements routines for the reduction of generic tensor products of
representations into irreducible representations, making it easy to apply our
architecture to a wide range of problems and groups. The generality and
performance of our approach are demonstrated by applying it to the tasks of top
quark decay tagging (Lorentz group) and shape recognition (orthogonal group).
|
[
"stat.ML",
"cs.LG",
"hep-th"
] | false |
2306.00145
|
2023-05-31T19:36:14Z
|
On the Expressive Power of Neural Networks
|
[
"Jan Holstermann"
] |
In 1989 George Cybenko proved in a landmark paper that wide shallow neural
networks can approximate arbitrary continuous functions on a compact set. This
universal approximation theorem sparked a lot of follow-up research.
Shen, Yang and Zhang determined optimal approximation rates for ReLU-networks
in $L^p$-norms with $p \in [1,\infty)$. Kidger and Lyons proved a universal
approximation theorem for deep narrow ReLU-networks. Telgarsky gave an example
of a deep narrow ReLU-network that cannot be approximated by a wide shallow
ReLU-network unless it has exponentially many neurons.
However, there are even more questions that still remain unresolved. Are
there any wide shallow ReLU-networks that cannot be approximated well by deep
narrow ReLU-networks? Is the universal approximation theorem still true for
other norms like the Sobolev norm $W^{1,1}$? Do these results hold for
activation functions other than ReLU?
We will answer all of those questions and more with a framework of two
expressive powers. The first one is well-known and counts the maximal number of
linear regions of a function calculated by a ReLU-network. We will improve the
best known bounds for this expressive power. The second one is entirely new.
|
[
"math.CA",
"cs.AI",
"cs.LG",
"stat.ML",
"68T01"
] | false |
2306.00148
|
2023-05-31T19:38:12Z
|
SafeDiffuser: Safe Planning with Diffusion Probabilistic Models
|
[
"Wei Xiao",
"Tsun-Hsuan Wang",
"Chuang Gan",
"Daniela Rus"
] |
Diffusion model-based approaches have shown promise in data-driven planning,
but there are no safety guarantees, thus making it hard to be applied for
safety-critical applications. To address these challenges, we propose a new
method, called SafeDiffuser, to ensure diffusion probabilistic models satisfy
specifications by using a class of control barrier functions. The key idea of
our approach is to embed the proposed finite-time diffusion invariance into the
denoising diffusion procedure, which enables trustworthy diffusion data
generation. Moreover, we demonstrate that our finite-time diffusion invariance
method through generative models not only maintains generalization performance
but also creates robustness in safe data generation. We test our method on a
series of safe planning tasks, including maze path generation, legged robot
locomotion, and 3D space manipulation, with results showing the advantages of
robustness and guarantees over vanilla diffusion models.
|
[
"cs.LG",
"cs.RO",
"cs.SY",
"eess.SY"
] | true |
2306.00153
|
2023-05-31T19:52:17Z
|
Information Fusion via Symbolic Regression: A Tutorial in the Context of
Human Health
|
[
"Jennifer J. Schnur",
"Nitesh V. Chawla"
] |
This tutorial paper provides a general overview of symbolic regression (SR)
with specific focus on standards of interpretability. We posit that
interpretable modeling, although its definition is still disputed in the
literature, is a practical way to support the evaluation of successful
information fusion. In order to convey the benefits of SR as a modeling
technique, we demonstrate an application within the field of health and
nutrition using publicly available National Health and Nutrition Examination
Survey (NHANES) data from the Centers for Disease Control and Prevention (CDC),
fusing together anthropometric markers into a simple mathematical expression to
estimate body fat percentage. We discuss the advantages and challenges
associated with SR modeling and provide qualitative and quantitative analyses
of the learned models.
|
[
"cs.LG",
"cs.AI",
"cs.SC"
] | false |
2306.00160
|
2023-05-31T20:09:50Z
|
Audio-Visual Speech Separation in Noisy Environments with a Lightweight
Iterative Model
|
[
"Héctor Martel",
"Julius Richter",
"Kai Li",
"Xiaolin Hu",
"Timo Gerkmann"
] |
We propose Audio-Visual Lightweight ITerative model (AVLIT), an effective and
lightweight neural network that uses Progressive Learning (PL) to perform
audio-visual speech separation in noisy environments. To this end, we adopt the
Asynchronous Fully Recurrent Convolutional Neural Network (A-FRCNN), which has
shown successful results in audio-only speech separation. Our architecture
consists of an audio branch and a video branch, with iterative A-FRCNN blocks
sharing weights for each modality. We evaluated our model in a controlled
environment using the NTCD-TIMIT dataset and in-the-wild using a synthetic
dataset that combines LRS3 and WHAM!. The experiments demonstrate the
superiority of our model in both settings with respect to various audio-only
and audio-visual baselines. Furthermore, the reduced footprint of our model
makes it suitable for low resource applications.
|
[
"eess.AS",
"cs.LG",
"cs.SD"
] | false |
2306.00201
|
2023-05-31T21:39:52Z
|
Generalized Implicit Follow-The-Regularized-Leader
|
[
"Keyi Chen",
"Francesco Orabona"
] |
We propose a new class of online learning algorithms, generalized implicit
Follow-The-Regularized-Leader (FTRL), that expands the scope of FTRL framework.
Generalized implicit FTRL can recover known algorithms, such as FTRL with
linearized losses and implicit FTRL, and it allows the design of new update
rules, such as extensions of aProx and Mirror-Prox to FTRL. Our theory is
constructive in the
sense that it provides a simple unifying framework to design updates that
directly improve the worst-case upper bound on the regret. The key idea is
substituting the linearization of the losses with a Fenchel-Young inequality.
We show the flexibility of the framework by proving that some known algorithms,
like the Mirror-Prox updates, are instantiations of the generalized implicit
FTRL. Finally, the new framework allows us to recover the temporal variation
bound of implicit OMD, with the same computational complexity.
|
[
"cs.LG",
"math.OC",
"stat.ML"
] | false |
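To ground the abstract above, here is a sketch of the special case it names, FTRL with linearized losses and an L2 regularizer, where the round-t iterate is the minimizer of the accumulated linear losses plus $\frac{1}{2\eta}\|x\|^2$, i.e. $x_t = -\eta \sum_{s<t} g_s$; the step size and gradients are toy values.

```python
import numpy as np

def ftrl_linearized(grads, eta=0.1):
    """FTRL with linearized losses and an L2 regularizer: each round plays
    x_t = -eta * (sum of past gradients), the closed-form minimizer of the
    regularized cumulative linear loss.
    """
    g_sum = np.zeros_like(grads[0])
    iterates = []
    for g in grads:
        iterates.append(-eta * g_sum)   # play before observing g_t
        g_sum += g
    return iterates

rng = np.random.default_rng(0)
grads = [rng.normal(size=3) for _ in range(5)]
for t, x in enumerate(ftrl_linearized(grads)):
    print(t, np.round(x, 3))
```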
2306.00555
|
2023-05-31T14:48:54Z
|
Sensitivity Analysis of High-Dimensional Models with Correlated Inputs
|
[
"Juraj Kardos",
"Wouter Edeling",
"Diana Suleimenova",
"Derek Groen",
"Olaf Schenk"
] |
Sensitivity analysis is an important tool used in many domains of
computational science to either gain insight into the mathematical model and
interaction of its parameters or study the uncertainty propagation through the
input-output interactions. In many applications, the inputs are stochastically
dependent, which violates one of the essential assumptions in the
state-of-the-art sensitivity analysis methods. Consequently, the results
obtained ignoring the correlations provide values which do not reflect the true
contributions of the input parameters. This study proposes an approach to
address the parameter correlations using a polynomial chaos expansion method
and Rosenblatt and Cholesky transformations to reflect the parameter
dependencies. Treatment of the correlated variables is discussed in context of
variance and derivative-based sensitivity analysis. We demonstrate that the
sensitivity of the correlated parameters can not only differ in magnitude, but
even the sign of the derivative-based index can be inverted, thus significantly
altering the model behavior compared to the prediction of the analysis
disregarding the correlations. Numerous experiments are conducted using
workflow automation tools within the VECMA toolkit.
|
[
"stat.ME",
"cs.AI",
"cs.LG"
] | false |
2306.01005
|
2023-05-31T14:40:47Z
|
AbODE: Ab Initio Antibody Design using Conjoined ODEs
|
[
"Yogesh Verma",
"Markus Heinonen",
"Vikas Garg"
] |
Antibodies are Y-shaped proteins that neutralize pathogens and constitute the
core of our adaptive immune system. De novo generation of new antibodies that
target specific antigens holds the key to accelerating vaccine discovery.
However, this co-design of the amino acid sequence and the 3D structure
subsumes and accentuates some central challenges from multiple tasks, including
protein folding (sequence to structure), inverse folding (structure to
sequence), and docking (binding). We strive to surmount these challenges with a
new generative model AbODE that extends graph PDEs to accommodate both
contextual information and external interactions. Unlike existing approaches,
AbODE uses a single round of full-shot decoding and elicits continuous
differential attention that encapsulates and evolves with latent interactions
within the antibody as well as those involving the antigen. We unravel
fundamental connections between AbODE and temporal networks as well as
graph-matching networks. The proposed model significantly outperforms existing
methods on standard metrics across benchmarks.
|
[
"cs.LG",
"cs.AI",
"q-bio.BM"
] | false |
2306.01785
|
2023-05-31T12:01:02Z
|
Beyond Rankings: Exploring the Impact of SERP Features on Organic
Click-through Rates
|
[
"Erik Fubel",
"Niclas Michael Groll",
"Patrick Gundlach",
"Qiwei Han",
"Maximilian Kaiser"
] |
Search Engine Result Pages (SERPs) serve as the digital gateways to the vast
expanse of the internet. Past decades have witnessed a surge in research
primarily centered on the influence of website ranking on these pages, to
determine the click-through rate (CTR). However, during this period, the
landscape of SERPs has undergone a dramatic evolution: SERP features,
encompassing elements such as knowledge panels, media galleries, FAQs, and
more, have emerged as an increasingly prominent facet of these result pages.
Our study examines the crucial role of these features, revealing that they are
not merely aesthetic components but strongly influence CTR and the associated
behavior of internet users. We demonstrate how these features can significantly
modulate web traffic, either amplifying or attenuating it. We dissect these
intricate interaction effects leveraging a unique dataset of 67,000 keywords
and their respective Google SERPs, spanning over 40 distinct US-based
e-commerce domains, generating over 6 million clicks from 24 million views.
This cross-website dataset, unprecedented in its scope, enables us to assess
the impact of 24 different SERP features on organic CTR. Through an ablation
study modeling CTR, we illustrate the incremental predictive power these
features hold.
|
[
"cs.IR",
"cs.LG",
"cs.SI"
] | false |
2306.01787
|
2023-05-31T14:11:51Z
|
Power Control with QoS Guarantees: A Differentiable Projection-based
Unsupervised Learning Framework
|
[
"Mehrazin Alizadeh",
"Hina Tabassum"
] |
Deep neural networks (DNNs) are emerging as a potential solution to solve
NP-hard wireless resource allocation problems. However, in the presence of
intricate constraints, e.g., users' quality-of-service (QoS) constraints,
guaranteeing constraint satisfaction becomes a fundamental challenge. In this
paper, we propose a novel unsupervised learning framework to solve the
classical power control problem in a multi-user interference channel, where the
objective is to maximize the network sum-rate under users' minimum data rate or
QoS requirements and power budget constraints. Utilizing a differentiable
projection function, two novel deep learning (DL) solutions are pursued. The
first is called Deep Implicit Projection Network (DIPNet), and the second is
called Deep Explicit Projection Network (DEPNet). DIPNet utilizes a
differentiable convex optimization layer to implicitly define a projection
function. On the other hand, DEPNet uses an explicitly-defined projection
function, which has an iterative nature and relies on a differentiable
correction process. DIPNet requires convex constraints, whereas DEPNet does not
require convexity and has reduced computational complexity. To enhance the
sum-rate performance of the proposed models even further, the Frank-Wolfe (FW)
algorithm has been applied to the output of the proposed models. Extensive
simulations show that the proposed DNN solutions not only improve the
achievable data rate but also achieve zero constraint-violation probability,
compared to existing DNNs. The proposed solutions outperform classic
optimization methods in terms of computation time.
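A minimal sketch of the differentiable-projection idea behind DIPNet, written
with cvxpylayers and a stand-in box-and-budget feasible set (the paper's QoS
constraint set is more involved, and all dimensions are illustrative):

```python
import cvxpy as cp
import torch
from cvxpylayers.torch import CvxpyLayer

n = 4                                    # number of users (illustrative)
p_hat = cp.Parameter(n)                  # raw, possibly infeasible DNN output
p = cp.Variable(n)
# Stand-in feasible set: per-user power limits plus a total power budget.
constraints = [p >= 0, p <= 1, cp.sum(p) <= 2]
problem = cp.Problem(cp.Minimize(cp.sum_squares(p - p_hat)), constraints)
project = CvxpyLayer(problem, parameters=[p_hat], variables=[p])

raw = torch.rand(n, requires_grad=True)  # stand-in for the network's output
p_feasible, = project(raw)               # differentiable Euclidean projection
p_feasible.sum().backward()              # gradients flow back into the DNN
```

Because the projection itself is differentiable, the constraint-satisfaction
step can sit inside the unsupervised training loop rather than being applied
only at inference time.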
|
[
"cs.NI",
"cs.AI",
"cs.LG"
] | false |
2306.04645
|
2023-05-31T19:27:45Z
|
Special Session: Approximation and Fault Resiliency of DNN Accelerators
|
[
"Mohammad Hasan Ahmadilivani",
"Mario Barbareschi",
"Salvatore Barone",
"Alberto Bosio",
"Masoud Daneshtalab",
"Salvatore Della Torca",
"Gabriele Gavarini",
"Maksim Jenihhin",
"Jaan Raik",
"Annachiara Ruospo",
"Ernesto Sanchez",
"Mahdi Taheri"
] |
Deep Learning, and in particular, Deep Neural Network (DNN) is nowadays
widely used in many scenarios, including safety-critical applications such as
autonomous driving. In this context, besides energy efficiency and performance,
reliability plays a crucial role since a system failure can jeopardize human
life. As with any other device, the reliability of hardware architectures
running DNNs has to be evaluated, usually through costly fault injection
campaigns. This paper explores the approximation and fault resiliency of DNN
accelerators. We propose to use approximate (AxC) arithmetic circuits to
agilely emulate errors in hardware without performing fault injection on the
DNN. To allow fast evaluation of AxC DNN, we developed an efficient GPU-based
simulation framework. Further, we propose a fine-grain analysis of fault
resiliency by examining fault propagation and masking in networks.
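For context, a conventional fault-injection campaign of the kind the paper
seeks to avoid typically perturbs individual weight bits; a minimal sketch
(assuming float32 weights; not the paper's AxC-based emulation) might look
like:

```python
import random
import torch

def flip_random_bit_(weight: torch.Tensor) -> None:
    """In place: flip one random bit of one randomly chosen float32 weight."""
    flat = weight.detach().view(-1)
    idx = random.randrange(flat.numel())
    bits = flat[idx:idx + 1].view(torch.int32)  # reinterpret the 32 bits
    mask = 1 << random.randrange(32)
    flat[idx:idx + 1] = (bits ^ mask).view(torch.float32)

layer = torch.nn.Linear(8, 8)
with torch.no_grad():
    flip_random_bit_(layer.weight)              # inject a single fault
```

Flips in sign or exponent bits can produce very large or non-finite values,
which is one reason exhaustive campaigns of this kind are costly and faster
emulation is attractive.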
|
[
"cs.LG",
"cs.AR",
"cs.DC"
] | false |
2306.05291
|
2023-05-31T14:52:42Z
|
One shot learning based drivers head movement identification using a
millimetre wave radar sensor
|
[
"Hong Nhung Nguyen",
"Seongwook Lee",
"Tien Tung Nguyen",
"Yong Hwa Kim"
] |
Driver concentration on traffic is a vital safety issue; thus, monitoring a
driver on the road becomes an essential requirement. The key purpose of
supervision is to detect abnormal behaviours of the driver and promptly send
him or her warnings to avoid traffic accidents. In this paper, to meet this
requirement with radar sensing, the authors first use a small-sized
millimetre-wave radar installed at the steering wheel of the vehicle to
collect signals from different head movements of the
driver. The received signals consist of the reflection patterns that change in
response to the head movements of the driver. Then, in order to distinguish
these different movements, a classifier based on the measured signal of the
radar sensor is designed. However, since the collected data set is not large,
in this paper, the authors propose one-shot learning to classify four cases of
the driver's head movements. The experimental results indicate that the
proposed method can classify the four cases of head movements with a high
accuracy reaching up to 100%. In addition,
the classification performance of the proposed method is significantly better
than that of the convolutional neural network model.
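One-shot classification with few radar recordings is commonly realized by
comparing a query embedding against one labelled support example per class;
the sketch below is a generic Siamese-style baseline (the architecture and
input shapes are assumptions, not the paper's network):

```python
import torch
import torch.nn as nn

class Embed(nn.Module):
    """Tiny CNN mapping a radar spectrogram to a unit-norm embedding."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 64),
        )

    def forward(self, x):
        return nn.functional.normalize(self.net(x), dim=-1)

embed = Embed()
support = torch.randn(4, 1, 32, 32)      # one example per head movement
query = torch.randn(1, 1, 32, 32)        # unseen measurement
sims = embed(query) @ embed(support).T   # cosine similarity to each class
pred = sims.argmax(dim=-1)               # the most similar class wins
```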
|
[
"eess.SP",
"cs.AI",
"cs.LG"
] | false |
2306.08060
|
2023-05-31T06:06:28Z
|
Software Supply Chain Vulnerabilities Detection in Source Code:
Performance Comparison between Traditional and Quantum Machine Learning
Algorithms
|
[
"Mst Shapna Akter",
"Md Jobair Hossain Faruk",
"Nafisa Anjum",
"Mohammad Masum",
"Hossain Shahriar",
"Akond Rahman",
"Fan Wu",
"Alfredo Cuzzocrea"
] |
Software supply chain (SSC) attacks have become a crucial issue, increasing
rapidly with the advancement of the software development domain. In general,
SSC attacks executed during the software development process lead to
vulnerabilities in software products, targeting downstream customers and even
the involved stakeholders. Machine learning approaches have proven effective
in detecting and preventing software security vulnerabilities. Moreover,
emerging quantum machine learning can be promising in addressing SSC attacks.
Given the distinction between traditional and quantum machine learning,
performance can vary with the proportion of the dataset used in the
experiments. In this paper, we conduct a comparative analysis
between quantum neural networks (QNN) and conventional neural networks (NN)
with a software supply chain attack dataset known as ClaMP. Our goal is to
compare the performance of QNN and NN; to conduct the experiment, we develop
two different models, utilizing PennyLane for the quantum model and TensorFlow
with Keras for the traditional one. We evaluated the
performance of both models with different proportions of the ClaMP dataset to
identify the F1 score, recall, precision, and accuracy. We also measure the
execution time to check the efficiency of both models. The results indicate
that the QNN's execution time is longer than the NN's at higher dataset
proportions. Given recent advancements in QNNs, large-scale experiments should
be carried out in our future research to understand both models more
accurately.
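A minimal sketch of the kind of variational QNN one can define in PennyLane
(the circuit layout, qubit count, and feature encoding here are assumptions,
not the paper's model):

```python
import numpy as np
import pennylane as qml

n_qubits, n_layers = 4, 2
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def circuit(inputs, weights):
    qml.AngleEmbedding(inputs, wires=range(n_qubits))         # feature encoding
    qml.BasicEntanglerLayers(weights, wires=range(n_qubits))  # trainable layers
    return qml.expval(qml.PauliZ(0))                          # score in [-1, 1]

shape = qml.BasicEntanglerLayers.shape(n_layers=n_layers, n_wires=n_qubits)
weights = np.random.default_rng(0).uniform(0, np.pi, size=shape)
features = np.array([0.1, 0.5, 0.9, 0.3])  # stand-in malware feature vector
print(circuit(features, weights))          # thresholded for malware/benign
```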
|
[
"cs.CR",
"cs.LG",
"quant-ph"
] | false |
2306.00212
|
2023-05-31T22:09:24Z
|
Provably Efficient Generalized Lagrangian Policy Optimization for Safe
Multi-Agent Reinforcement Learning
|
[
"Dongsheng Ding",
"Xiaohan Wei",
"Zhuoran Yang",
"Zhaoran Wang",
"Mihailo R. Jovanović"
] |
We examine online safe multi-agent reinforcement learning using constrained
Markov games in which agents compete by maximizing their expected total rewards
under a constraint on expected total utilities. Our focus is confined to an
episodic two-player zero-sum constrained Markov game with independent
transition functions that are unknown to agents, adversarial reward functions,
and stochastic utility functions. For such a Markov game, we employ an approach
based on the occupancy measure to formulate it as an online constrained
saddle-point problem with an explicit constraint. We extend the Lagrange
multiplier method in constrained optimization to handle the constraint by
creating a generalized Lagrangian with minimax decision primal variables and a
dual variable. Next, we develop an upper confidence reinforcement learning
algorithm to solve this Lagrangian problem while balancing exploration and
exploitation. Our algorithm updates the minimax decision primal variables via
online mirror descent and the dual variable via projected gradient step and we
prove that it enjoys sublinear rate $O((|X|+|Y|) L \sqrt{T(|A|+|B|)})$ for
both regret and constraint violation after playing $T$ episodes of the game.
Here, $L$ is the horizon of each episode, $(|X|,|A|)$ and $(|Y|,|B|)$ are the
state/action space sizes of the min-player and the max-player, respectively. To
the best of our knowledge, we provide the first provably efficient online safe
reinforcement learning algorithm in constrained Markov games.
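A toy numerical sketch of the generalized-Lagrangian update pattern, with
exponentiated-gradient (online mirror descent) steps on a single-state policy
and projected gradient steps on the dual variable (the paper's algorithm
additionally handles two players over episodes and adds exploration bonuses):

```python
import numpy as np

rng = np.random.default_rng(0)
A = 3                            # actions in a toy single-state problem
r = rng.uniform(size=A)          # reward per action
g = rng.uniform(-1, 1, size=A)   # utility per action; constraint: E[g] >= 0

x = np.ones(A) / A               # policy on the probability simplex
lam, eta, eta_d = 0.0, 0.5, 0.5  # dual variable and step sizes

for t in range(200):
    # Generalized Lagrangian: L(x, lam) = <x, r> + lam * <x, g>.
    grad_x = r + lam * g
    x *= np.exp(eta * grad_x)    # mirror-descent (exponentiated-gradient) step
    x /= x.sum()
    lam = max(0.0, lam - eta_d * (x @ g))  # dual step, projected to lam >= 0

print("policy:", x, "constraint value:", x @ g)
```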
|
[
"cs.LG",
"cs.AI",
"cs.SY",
"eess.SY",
"math.OC"
] | false |
2306.00272
|
2023-06-01T01:19:32Z
|
Accelerated Fingerprint Enhancement: A GPU-Optimized Mixed Architecture
Approach
|
[
"André Brasil Vieira Wyzykowski",
"Anil K. Jain"
] |
This document presents a preliminary approach to latent fingerprint
enhancement, fundamentally designed around a mixed Unet architecture. It
combines the capabilities of the Resnet-101 network and Unet encoder, aiming to
form a potentially powerful composite. This combination, enhanced with
attention mechanisms and forward skip connections, is intended to optimize the
enhancement of ridge and minutiae features in fingerprints. One innovative
element of this approach includes a novel Fingerprint Enhancement Gabor layer,
specifically designed for GPU computations. This illustrates how modern
computational resources might be harnessed to expedite enhancement. Given its
potential functionality as either a CNN or Transformer layer, this Gabor layer
could offer improved agility and processing speed to the system. However, it is
important to note that this approach is still in the early stages of
development and has not yet been fully validated through rigorous experiments.
As such, it may require additional time and testing to establish its robustness
and usability in the field of latent fingerprint enhancement. This includes
improvements in processing speed, enhancement adaptability with distinct latent
fingerprint types, and full validation in experimental settings such as 1:N
identification and open-set validation, fingerprint quality evaluation, among
others.
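A rough sketch of a GPU-friendly Gabor filter bank applied as a convolution,
of the kind such an enhancement layer might build on (all filter parameters
are illustrative; the paper's layer is learnable and more elaborate):

```python
import math
import torch
import torch.nn.functional as F

def gabor_bank(ksize=15, sigma=3.0, lam=8.0, gamma=0.5, n_orient=8):
    """Real-valued Gabor kernels at n_orient orientations, shape (n, 1, k, k)."""
    half = ksize // 2
    y, x = torch.meshgrid(
        torch.arange(-half, half + 1, dtype=torch.float32),
        torch.arange(-half, half + 1, dtype=torch.float32),
        indexing="ij",
    )
    kernels = []
    for i in range(n_orient):
        theta = math.pi * i / n_orient
        xr = x * math.cos(theta) + y * math.sin(theta)
        yr = -x * math.sin(theta) + y * math.cos(theta)
        envelope = torch.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2))
        kernels.append(envelope * torch.cos(2 * math.pi * xr / lam))
    return torch.stack(kernels).unsqueeze(1)

weight = gabor_bank()                      # move with .cuda() to run on GPU
fingerprint = torch.randn(1, 1, 128, 128)  # placeholder latent image
responses = F.conv2d(fingerprint, weight, padding=7)  # per-orientation ridges
```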
|
[
"cs.CV"
] | false |
2306.00283
|
2023-06-01T01:59:17Z
|
Autism Disease Detection Using Transfer Learning Techniques: Performance
Comparison Between Central Processing Unit vs Graphics Processing Unit
Functions for Neural Networks
|
[
"Mst Shapna Akter",
"Hossain Shahriar",
"Alfredo Cuzzocrea"
] |
Neural network approaches are machine learning methods that are widely used
in various domains, such as healthcare and cybersecurity. Neural networks are
especially renowned for their ability to deal with image datasets. During the
training process with images, various fundamental mathematical operations are
performed in the neural network. These operations include several algebraic and
mathematical functions, such as derivatives, convolutions, and matrix
inversions and transpositions. Such operations demand higher processing power
than what is typically required for regular computer usage. Since CPUs are
built with serial processing, they are not appropriate for handling large image
datasets. On the other hand, GPUs have parallel processing capabilities and can
provide higher speed. This paper utilizes advanced neural network techniques,
such as VGG16, Resnet50, Densenet, Inceptionv3, Xception, Mobilenet, XGBOOST
VGG16, and our proposed models, to compare CPU and GPU resources. We
implemented a system for classifying Autism disease using face images of
autistic and non-autistic children to compare performance during testing. We
used evaluation metrics such as Accuracy, F1 score, Precision, Recall, and
Execution time. It was observed that GPU outperformed CPU in all tests
conducted. Moreover, the performance of the neural network models in terms of
accuracy increased on GPU compared to CPU.
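A hedged sketch of how such a CPU-versus-GPU forward-pass timing comparison
can be measured (the paper uses TensorFlow/Keras models, while this sketch
uses PyTorch with an assumed batch size):

```python
import time
import torch
from torchvision.models import resnet50

def time_forward(device: str, iters: int = 10) -> float:
    model = resnet50().to(device).eval()
    x = torch.randn(8, 3, 224, 224, device=device)
    with torch.no_grad():
        model(x)                      # warm-up pass
        if device == "cuda":
            torch.cuda.synchronize()  # GPU kernels run asynchronously
        start = time.perf_counter()
        for _ in range(iters):
            model(x)
        if device == "cuda":
            torch.cuda.synchronize()
    return (time.perf_counter() - start) / iters

print("cpu :", time_forward("cpu"))
if torch.cuda.is_available():
    print("cuda:", time_forward("cuda"))
```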
|
[
"cs.CV"
] | false |
2306.00310
|
2023-06-01T03:20:54Z
|
Prompt Algebra for Task Composition
|
[
"Pramuditha Perera",
"Matthew Trager",
"Luca Zancato",
"Alessandro Achille",
"Stefano Soatto"
] |
We investigate whether prompts learned independently for different tasks can
be later combined through prompt algebra to obtain a model that supports
composition of tasks. We consider Visual Language Models (VLM) with prompt
tuning as our base classifier and formally define the notion of prompt algebra.
We propose constrained prompt tuning to improve performance of the composite
classifier. In the proposed scheme, prompts are constrained to appear in the
lower dimensional subspace spanned by the basis vectors of the pre-trained
vocabulary. Further regularization is added to ensure that the learned prompt
is grounded correctly to the existing pre-trained vocabulary. We demonstrate
the effectiveness of our method on object classification and object-attribute
classification datasets. On average, our composite model obtains classification
accuracy within 2.5% of the best base model. On UTZappos it improves
classification accuracy over the best base model by 8.45% on average.
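A minimal sketch of the constrained composition idea: combine two
independently learned prompts and project the result onto the subspace spanned
by pre-trained vocabulary embeddings (the basis construction and all
dimensions below are assumptions):

```python
import torch

d, k = 512, 64                # embedding dim and subspace rank (illustrative)
vocab = torch.randn(1000, d)  # stand-in pre-trained token embeddings
# Orthonormal basis of the subspace spanned by the vocabulary.
U = torch.linalg.svd(vocab, full_matrices=False).Vh[:k]  # (k, d)

def project(p: torch.Tensor) -> torch.Tensor:
    """Orthogonal projection of a prompt onto span(U)."""
    return (p @ U.T) @ U

p_task_a = torch.randn(d)     # prompt learned for task A (placeholder)
p_task_b = torch.randn(d)     # prompt learned for task B (placeholder)
p_composite = project(0.5 * (p_task_a + p_task_b))  # prompt-algebra composition
```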
|
[
"cs.CV"
] | false |
2306.00386
|
2023-06-01T06:35:59Z
|
Symmetric Uncertainty-Aware Feature Transmission for Depth
Super-Resolution
|
[
"Wuxuan Shi",
"Mang Ye",
"Bo Du"
] |
Color-guided depth super-resolution (DSR) is an encouraging paradigm that
enhances a low-resolution (LR) depth map guided by an extra high-resolution
(HR) RGB image from the same scene. Existing methods usually use interpolation
to upscale the depth maps before feeding them into the network and transfer the
high-frequency information extracted from HR RGB images to guide the
reconstruction of depth maps. However, owing to the cross-modality gap, the
extracted high-frequency information usually contains textures that are not
present in depth maps, and this noise is further aggravated by interpolation
due to the resolution gap between the RGB and depth images. To
tackle these challenges, we propose a novel Symmetric Uncertainty-aware Feature
Transmission (SUFT) for color-guided DSR. (1) For the resolution gap, SUFT
builds an iterative up-and-down sampling pipeline, which makes depth features
and RGB features spatially consistent while suppressing noise amplification and
blurring by replacing common interpolated pre-upsampling. (2) For the
cross-modality gap, we propose a novel Symmetric Uncertainty scheme to remove
parts of RGB information harmful to the recovery of HR depth maps. Extensive
experiments on benchmark datasets and challenging real-world settings suggest
that our method achieves superior performance compared to state-of-the-art
methods. Our code and models are available at
https://github.com/ShiWuxuan/SUFT.
|
[
"cs.CV"
] | false |
2306.00396
|
2023-06-01T06:56:41Z
|
Lightweight Vision Transformer with Bidirectional Interaction
|
[
"Qihang Fan",
"Huaibo Huang",
"Xiaoqiang Zhou",
"Ran He"
] |
Recent advancements in vision backbones have significantly improved their
performance by simultaneously modeling images' local and global contexts.
However, the bidirectional interaction between these two contexts has not been
well explored and exploited, which is important in the human visual system.
This paper proposes a Fully Adaptive Self-Attention (FASA) mechanism for vision
transformer to model the local and global information as well as the
bidirectional interaction between them in context-aware ways. Specifically,
FASA employs self-modulated convolutions to adaptively extract local
representation while utilizing self-attention in down-sampled space to extract
global representation. Subsequently, it conducts a bidirectional adaptation
process between local and global representation to model their interaction. In
addition, we introduce a fine-grained downsampling strategy to enhance the
down-sampled self-attention mechanism for finer-grained global perception
capability. Based on FASA, we develop a family of lightweight vision backbones,
Fully Adaptive Transformer (FAT) family. Extensive experiments on multiple
vision tasks demonstrate that FAT achieves impressive performance. Notably, FAT
accomplishes a 77.6% accuracy on ImageNet-1K using only 4.5M parameters and
0.7G FLOPs, which surpasses the most advanced ConvNets and Transformers with
similar model size and computational costs. Moreover, our model exhibits faster
speed on modern GPU compared to other models. Code will be available at
https://github.com/qhfan/FAT.
|
[
"cs.CV"
] | false |
2306.00440
|
2023-06-01T08:29:44Z
|
Edge-guided Representation Learning for Underwater Object Detection
|
[
"Linhui Dai",
"Hong Liu",
"Pinhao Song",
"Hao Tang",
"Runwei Ding",
"Shengquan Li"
] |
Underwater object detection (UOD) is crucial for marine economic development,
environmental protection, and the planet's sustainable development. The main
challenges of this task arise from low-contrast, small objects, and mimicry of
aquatic organisms. The key to addressing these challenges is to focus the model
on obtaining more discriminative information. We observe that the edges of
underwater objects are highly unique and can be distinguished from low-contrast
or mimicry environments based on their edges. Motivated by this observation, we
propose an Edge-guided Representation Learning Network, termed ERL-Net, that
aims to achieve discriminative representation learning and aggregation under
the guidance of edge cues. Firstly, we introduce an edge-guided attention
module to model the explicit boundary information, which generates more
discriminative features. Secondly, a feature aggregation module is proposed to
aggregate the multi-scale discriminative features by regrouping them into three
levels, effectively aggregating global and local information for locating and
recognizing underwater objects. Finally, we propose a wide and asymmetric
receptive field block to enable features to have a wider receptive field,
allowing the model to focus on more small object information. Comprehensive
experiments on three challenging underwater datasets show that our method
achieves superior performance on the UOD task.
|
[
"cs.CV"
] | false |
2306.00450
|
2023-06-01T08:47:06Z
|
Exploring Open-Vocabulary Semantic Segmentation without Human Labels
|
[
"Jun Chen",
"Deyao Zhu",
"Guocheng Qian",
"Bernard Ghanem",
"Zhicheng Yan",
"Chenchen Zhu",
"Fanyi Xiao",
"Mohamed Elhoseiny",
"Sean Chang Culatana"
] |
Semantic segmentation is a crucial task in computer vision that involves
segmenting images into semantically meaningful regions at the pixel level.
However, existing approaches often rely on expensive human annotations as
supervision for model training, limiting their scalability to large, unlabeled
datasets. To address this challenge, we present ZeroSeg, a novel method that
leverages the existing pretrained vision-language (VL) model (e.g. CLIP) to
train open-vocabulary zero-shot semantic segmentation models. Although these
VL models have acquired extensive knowledge of visual concepts, it is
non-trivial to transfer this knowledge to the task of semantic segmentation,
as they are usually trained at the image level. ZeroSeg overcomes this by
distilling the visual
concepts learned by VL models into a set of segment tokens, each summarizing a
localized region of the target image. We evaluate ZeroSeg on multiple popular
segmentation benchmarks, including PASCAL VOC 2012, PASCAL Context, and COCO,
in a zero-shot manner (i.e., no training or adaption on target segmentation
datasets). Our approach achieves state-of-the-art performance when compared to
other zero-shot segmentation methods under the same training data, while also
performing competitively compared to strongly supervised methods. Finally, we
also demonstrated the effectiveness of ZeroSeg on open-vocabulary segmentation,
through both human studies and qualitative visualizations.
|
[
"cs.CV"
] | false |
2306.00483
|
2023-06-01T09:32:45Z
|
Overcoming Language Bias in Remote Sensing Visual Question Answering via
Adversarial Training
|
[
"Zhenghang Yuan",
"Lichao Mou",
"Xiao Xiang Zhu"
] |
The Visual Question Answering (VQA) system offers a user-friendly interface
and enables human-computer interaction. However, VQA models commonly face the
challenge of language bias, resulting from the learned superficial correlation
between questions and answers. To address this issue, in this study, we present
a novel framework to reduce the language bias of the VQA for remote sensing
data (RSVQA). Specifically, we add an adversarial branch to the original VQA
framework. Based on the adversarial branch, we introduce two regularizers to
constrain the training process against language bias. Furthermore, to evaluate
the performance in terms of language bias, we propose a new metric that
combines standard accuracy with the performance drop when incorporating
question and random image information. Experimental results demonstrate the
effectiveness of our method. We believe that our method can shed light on
future work for reducing language bias on the RSVQA task.
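Adversarial branches of this kind are often implemented with a gradient
reversal layer; a minimal PyTorch sketch (the paper's exact regularizers may
differ):

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negated, scaled gradient backward."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lam * grad_out, None

feat = torch.randn(4, 256, requires_grad=True)   # fused question/image feature
adv_head = torch.nn.Linear(256, 100)             # question-only answer branch
logits = adv_head(GradReverse.apply(feat, 1.0))  # reversed gradients debias feat
```

Training the adversarial head to predict answers from the reversed features
pushes the main network to discard shortcuts that bypass the image.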
|
[
"cs.CV"
] | false |
2306.00552
|
2023-06-01T11:16:20Z
|
Unleash the Potential of 3D Point Cloud Modeling with A Calibrated Local
Geometry-driven Distance Metric
|
[
"Siyu Ren",
"Junhui Hou"
] |
Quantifying the dissimilarity between two unstructured 3D point clouds is a
challenging task, with existing metrics often relying on measuring the distance
between corresponding points that can be either inefficient or ineffective. In
this paper, we propose a novel distance metric called Calibrated Local Geometry
Distance (CLGD), which computes the difference between the underlying 3D
surfaces calibrated and induced by a set of reference points. By associating
each reference point with two given point clouds through computing its
directional distances to them, the difference in directional distances of an
identical reference point characterizes the geometric difference between a
typical local region of the two point clouds. Finally, CLGD is obtained by
averaging the directional distance differences of all reference points. We
evaluate CLGD on various optimization and unsupervised learning-based tasks,
including shape reconstruction, rigid registration, scene flow estimation, and
feature representation. Extensive experiments show that CLGD achieves
significantly higher accuracy under all tasks in a memory and computationally
efficient manner, compared with existing metrics. As a generic metric, CLGD has
the potential to advance 3D point cloud modeling. The source code is publicly
available at https://github.com/rsy6318/CLGD.
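A simplified sketch of the reference-point idea, using plain
nearest-neighbour distances in place of the paper's calibrated directional
distances (the sampling scheme and sizes are assumptions):

```python
import torch

def clgd_like(p1: torch.Tensor, p2: torch.Tensor, n_ref: int = 256):
    """Reference-point dissimilarity between point clouds (N, 3) and (M, 3)."""
    lo = torch.minimum(p1.min(0).values, p2.min(0).values)
    hi = torch.maximum(p1.max(0).values, p2.max(0).values)
    ref = lo + torch.rand(n_ref, 3) * (hi - lo)  # reference points in the bbox
    d1 = torch.cdist(ref, p1).min(dim=1).values  # distance of each ref to cloud 1
    d2 = torch.cdist(ref, p2).min(dim=1).values  # distance of each ref to cloud 2
    return (d1 - d2).abs().mean()                # average per-reference difference

print(clgd_like(torch.randn(1024, 3), torch.randn(1024, 3)))
```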
|
[
"cs.CV"
] | false |
2306.00576
|
2023-06-01T11:45:33Z
|
MammalNet: A Large-scale Video Benchmark for Mammal Recognition and
Behavior Understanding
|
[
"Jun Chen",
"Ming Hu",
"Darren J. Coker",
"Michael L. Berumen",
"Blair Costelloe",
"Sara Beery",
"Anna Rohrbach",
"Mohamed Elhoseiny"
] |
Monitoring animal behavior can facilitate conservation efforts by providing
key insights into wildlife health, population status, and ecosystem function.
Automatic recognition of animals and their behaviors is critical for
capitalizing on the large unlabeled datasets generated by modern video devices
and for accelerating monitoring efforts at scale. However, the development of
automated recognition systems is currently hindered by a lack of appropriately
labeled datasets. Existing video datasets 1) do not classify animals according
to established biological taxonomies; 2) are too small to facilitate
large-scale behavioral studies and are often limited to a single species; and
3) do not feature temporally localized annotations and therefore do not
facilitate localization of targeted behaviors within longer video sequences.
Thus, we propose MammalNet, a new large-scale animal behavior dataset with
taxonomy-guided annotations of mammals and their common behaviors. MammalNet
contains over 18K videos totaling 539 hours, which is ~10 times larger than the
largest existing animal behavior dataset. It covers 17 orders, 69 families, and
173 mammal categories for animal categorization and captures 12 high-level
animal behaviors that received focus in previous animal behavior studies. We
establish three benchmarks on MammalNet: standard animal and behavior
recognition, compositional low-shot animal and behavior recognition, and
behavior detection. Our dataset and code have been made available at:
https://mammal-net.github.io.
|
[
"cs.CV"
] | false |
2306.00579
|
2023-06-01T11:51:46Z
|
FMapping: Factorized Efficient Neural Field Mapping for Real-Time Dense
RGB SLAM
|
[
"Tongyan Hua",
"Haotian Bai",
"Zidong Cao",
"Lin Wang"
] |
In this paper, we introduce FMapping, an efficient neural field mapping
framework that facilitates the continuous estimation of a colorized point cloud
map in real-time dense RGB SLAM. To achieve this challenging goal without
depth, a hurdle is how to improve efficiency and reduce the mapping uncertainty
of the RGB SLAM system. To this end, we first build up a theoretical analysis
by decomposing the SLAM system into tracking and mapping parts, and the mapping
uncertainty is explicitly defined within the frame of neural representations.
Based on the analysis, we then propose an effective factorization scheme for
scene representation and introduce a sliding window strategy to reduce the
uncertainty for scene reconstruction. Specifically, we leverage the factorized
neural field to decompose uncertainty into a lower-dimensional space, which
enhances robustness to noise and improves training efficiency. We then propose
the sliding window sampler to reduce uncertainty by incorporating coherent
geometric cues from observed frames during map initialization to enhance
convergence. Our factorized neural mapping approach enjoys some advantages,
such as low memory consumption, more efficient computation, and fast
convergence during map initialization. Experiments on two benchmark datasets
show that our method can update the map of high-fidelity colorized point
clouds in around 2 seconds in real time while requiring no customized CUDA
kernels. Additionally, it utilizes 20x fewer parameters than the most concise
neural implicit mapping of prior SLAM methods, e.g., iMAP [31], and around
1000x fewer parameters than the state-of-the-art approach, e.g., NICE-SLAM
[42]. For
more details, please refer to our project homepage:
https://vlis2022.github.io/fmap/.
|
[
"cs.CV"
] | false |
2306.00676
|
2023-06-01T13:51:08Z
|
Hyperspectral Target Detection Based on Low-Rank Background Subspace
Learning and Graph Laplacian Regularization
|
[
"Dunbin Shen",
"Xiaorui Ma",
"Wenfeng Kong",
"Jiacheng Tian",
"Hongyu Wang"
] |
Hyperspectral target detection is good at finding dim and small objects based
on spectral characteristics. However, existing representation-based methods are
hindered by the problem of the unknown background dictionary and insufficient
utilization of spatial information. To address these issues, this paper
proposes an efficient optimizing approach based on low-rank representation
(LRR) and graph Laplacian regularization (GLR). Firstly, to obtain a complete
and pure background dictionary, we propose a LRR-based background subspace
learning method by jointly mining the low-dimensional structure of all pixels.
Secondly, to fully exploit local spatial relationships and capture the
underlying geometric structure, a local region-based GLR is employed to
estimate the coefficients. Finally, the desired detection map is generated by
computing the ratio of representation errors from binary hypothesis testing.
The experiments conducted on two benchmark datasets validate the effectiveness
and superiority of the approach. For reproduction, the accompanying code is
available at https://github.com/shendb2022/LRBSL-GLR.
|
[
"cs.CV"
] | false |
2306.00704
|
2023-06-01T14:12:33Z
|
DAM-Net: Global Flood Detection from SAR Imagery Using Differential
Attention Metric-Based Vision Transformers
|
[
"Tamer Saleh",
"Xingxing Weng",
"Shimaa Holail",
"Chen Hao",
"Gui-Song Xia"
] |
The detection of flooded areas using high-resolution synthetic aperture radar
(SAR) imagery is a critical task with applications in crisis and disaster
management, as well as environmental resource planning. However, the complex
nature of SAR images presents a challenge that often leads to an overestimation
of the flood extent. To address this issue, we propose a novel differential
attention metric-based network (DAM-Net) in this study. The DAM-Net comprises
two key components: a weight-sharing Siamese backbone to obtain multi-scale
change features of multi-temporal images and tokens containing high-level
semantic information of water-body changes, and a temporal differential fusion
(TDF) module that integrates semantic tokens and change features to generate
flood maps with reduced speckle noise. Specifically, the backbone is split into
multiple stages. In each stage, we design three modules, namely, temporal-wise
feature extraction (TWFE), cross-temporal change attention (CTCA), and
temporal-aware change enhancement (TACE), to effectively extract the change
features. In TACE of the last stage, we introduce a class token to record
high-level semantic information of water-body changes via the attention
mechanism. Another challenge faced by data-driven deep learning algorithms is
the limited availability of flood detection datasets. To overcome this, we have
created the S1GFloods open-source dataset, a global-scale high-resolution
Sentinel-1 SAR image pairs dataset covering 46 global flood events between 2015
and 2022. The experiments on the S1GFloods dataset using the proposed DAM-Net
showed top results compared to state-of-the-art methods in terms of overall
accuracy, F1-score, and IoU, which reached 97.8%, 96.5%, and 93.2%,
respectively. Our dataset and code will be available online at
https://github.com/Tamer-Saleh/S1GFlood-Detection.
|
[
"cs.CV"
] | false |
2306.00753
|
2023-06-01T14:49:40Z
|
Robust T-Loss for Medical Image Segmentation
|
[
"Alvaro Gonzalez-Jimenez",
"Simone Lionetti",
"Philippe Gottfrois",
"Fabian Gröger",
"Marc Pouly",
"Alexander Navarini"
] |
This paper presents a new robust loss function, the T-Loss, for medical image
segmentation. The proposed loss is based on the negative log-likelihood of the
Student-t distribution and can effectively handle outliers in the data by
controlling its sensitivity with a single parameter. This parameter is updated
during the backpropagation process, eliminating the need for additional
computation or prior information about the level and spread of noisy labels.
Our experiments show that the T-Loss outperforms traditional loss functions in
terms of dice scores on two public medical datasets for skin lesion and lung
segmentation. We also demonstrate the ability of T-Loss to handle different
types of simulated label noise, resembling human error. Our results provide
strong evidence that the T-Loss is a promising alternative for medical image
segmentation where high levels of noise or outliers in the dataset are a
typical phenomenon in practice. The project website can be found at
https://robust-tloss.github.io
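A hedged sketch of a Student-t negative log-likelihood with a learnable
degrees-of-freedom parameter updated by backpropagation (the exact
parameterization is an assumption; the authors' implementation is on the
project page):

```python
import torch
import torch.nn as nn

class TLoss(nn.Module):
    """Negative log-likelihood of a unit-scale Student-t on residuals."""
    def __init__(self):
        super().__init__()
        self.log_nu = nn.Parameter(torch.zeros(()))  # nu = exp(log_nu) > 0

    def forward(self, pred, target):
        nu = self.log_nu.exp()
        r2 = (pred - target) ** 2
        nll = (
            -torch.lgamma((nu + 1) / 2)
            + torch.lgamma(nu / 2)
            + 0.5 * torch.log(nu * torch.pi)
            + (nu + 1) / 2 * torch.log1p(r2 / nu)
        )
        return nll.mean()

loss_fn = TLoss()  # nu is optimized jointly with the network weights
loss = loss_fn(torch.rand(2, 1, 64, 64), torch.rand(2, 1, 64, 64))
loss.backward()
```

Small nu gives heavy tails (outliers are down-weighted); as nu grows, the loss
approaches a scaled squared error.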
|
[
"cs.CV"
] | false |
2306.00792
|
2023-06-01T15:22:53Z
|
Learning Across Decentralized Multi-Modal Remote Sensing Archives with
Federated Learning
|
[
"Barış Büyüktaş",
"Gencer Sumbul",
"Begüm Demir"
] |
The development of federated learning (FL) methods, which aim to learn from
distributed databases (i.e., clients) without accessing data on clients, has
recently attracted great attention. Most of these methods assume that the
clients are associated with the same data modality. However, remote sensing
(RS) images in different clients can be associated with different data
modalities that can improve the classification performance when jointly used.
To address this problem, in this paper we introduce a novel multi-modal FL
framework that aims to learn from decentralized multi-modal RS image archives
for RS image classification problems. The proposed framework is made up of
three modules: 1) multi-modal fusion (MF); 2) feature whitening (FW); and 3)
mutual information maximization (MIM). The MF module performs iterative model
averaging to learn without accessing data on clients in the case that clients
are associated with different data modalities. The FW module aligns the
representations learned among the different clients. The MIM module maximizes
the similarity of images from different modalities. Experimental results show
the effectiveness of the proposed framework compared to iterative model
averaging, which is a widely used algorithm in FL. The code of the proposed
framework is publicly available at https://git.tu-berlin.de/rsim/MM-FL.
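For reference, the iterative model averaging (FedAvg-style) baseline that the
MF module builds on can be sketched as follows (the model and client setup are
placeholders):

```python
import copy
import torch

def fed_avg(global_model: torch.nn.Module, client_states: list) -> None:
    """Overwrite the global model with the mean of the client state dicts."""
    avg = copy.deepcopy(client_states[0])
    for key in avg:
        avg[key] = torch.stack([s[key].float() for s in client_states]).mean(0)
    global_model.load_state_dict(avg)

model = torch.nn.Linear(10, 2)                     # stand-in global model
clients = [copy.deepcopy(model) for _ in range(3)]
# ... each client trains locally on its own modality, without sharing data ...
fed_avg(model, [c.state_dict() for c in clients])
```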
|
[
"cs.CV"
] | false |
2306.00813
|
2023-06-01T15:39:38Z
|
UniDiff: Advancing Vision-Language Models with Generative and
Discriminative Learning
|
[
"Xiao Dong",
"Runhui Huang",
"Xiaoyong Wei",
"Zequn Jie",
"Jianxing Yu",
"Jian Yin",
"Xiaodan Liang"
] |
Recent advances in vision-language pre-training have enabled machines to
perform better in multimodal object discrimination (e.g., image-text semantic
alignment) and image synthesis (e.g., text-to-image generation). On the other
hand, fine-tuning pre-trained models with discriminative or generative
capabilities such as CLIP and Stable Diffusion on domain-specific datasets has
been shown to be effective in various tasks by adapting to specific domains.
However, few studies have explored the possibility of learning both
discriminative and generative capabilities and leveraging their synergistic
effects to create a powerful and personalized multimodal model during
fine-tuning. This paper presents UniDiff, a unified multi-modal model that
integrates image-text contrastive learning (ITC), text-conditioned image
synthesis learning (IS), and reciprocal semantic consistency modeling (RSC).
UniDiff effectively learns aligned semantics and mitigates the issue of
semantic collapse during fine-tuning on small datasets by leveraging RSC on
visual features from CLIP and diffusion models, without altering the
pre-trained model's basic architecture. UniDiff demonstrates versatility in
both multi-modal understanding and generative tasks. Experimental results on
three datasets (Fashion-man, Fashion-woman, and E-commercial Product) showcase
substantial enhancements in vision-language retrieval and text-to-image
generation, illustrating the advantages of combining discriminative and
generative fine-tuning. The proposed UniDiff model establishes a robust
pipeline for personalized modeling and serves as a benchmark for future
comparisons in the field.
|
[
"cs.CV"
] | false |
2306.00863
|
2023-06-01T16:23:22Z
|
DeepFake-Adapter: Dual-Level Adapter for DeepFake Detection
|
[
"Rui Shao",
"Tianxing Wu",
"Liqiang Nie",
"Ziwei Liu"
] |
Existing deepfake detection methods fail to generalize well to unseen or
degraded samples, which can be attributed to the over-fitting of low-level
forgery patterns. Here we argue that high-level semantics are also
indispensable recipes for generalizable forgery detection. Recently, large
pre-trained Vision Transformers (ViTs) have shown promising generalization
capability. In this paper, we propose the first parameter-efficient tuning
approach for deepfake detection, namely DeepFake-Adapter, to effectively and
efficiently adapt the generalizable high-level semantics from large pre-trained
ViTs to aid deepfake detection. Given large pre-trained models but limited
deepfake data, DeepFake-Adapter introduces lightweight yet dedicated dual-level
adapter modules to a ViT while keeping the model backbone frozen. Specifically,
to guide the adaptation process to be aware of both global and local forgery
cues of deepfake data, 1) we not only insert Globally-aware Bottleneck Adapters
in parallel to MLP layers of ViT, 2) but also actively cross-attend
Locally-aware Spatial Adapters with features from ViT. Unlike existing deepfake
detection methods merely focusing on low-level forgery patterns, the forgery
detection process of our model can be regularized by generalizable high-level
semantics from a pre-trained ViT and adapted by global and local low-level
forgeries of deepfake data. Extensive experiments on several standard deepfake
detection benchmarks validate the effectiveness of our approach. Notably,
DeepFake-Adapter demonstrates a convincing advantage under cross-dataset and
cross-manipulation settings. The source code is released at
https://github.com/rshaojimmy/DeepFake-Adapter
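A generic bottleneck adapter of the kind inserted alongside frozen ViT layers
can be sketched as below (the dimensions and initialization are assumptions;
DeepFake-Adapter's dual-level modules are more specialized):

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Down-project, nonlinearity, up-project, residual connection."""
    def __init__(self, dim: int = 768, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)
        nn.init.zeros_(self.up.weight)  # start as an identity mapping
        nn.init.zeros_(self.up.bias)

    def forward(self, x):
        return x + self.up(nn.functional.gelu(self.down(x)))

# Only adapter parameters are trained; the ViT backbone stays frozen.
adapter = BottleneckAdapter()
tokens = torch.randn(1, 197, 768)  # ViT patch tokens (assumed shape)
out = adapter(tokens)
```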
|
[
"cs.CV"
] | false |
2306.00926
|
2023-06-01T17:30:24Z
|
Inserting Anybody in Diffusion Models via Celeb Basis
|
[
"Ge Yuan",
"Xiaodong Cun",
"Yong Zhang",
"Maomao Li",
"Chenyang Qi",
"Xintao Wang",
"Ying Shan",
"Huicheng Zheng"
] |
Considerable demand exists for customizing the pretrained large text-to-image
model, $\textit{e.g.}$, Stable Diffusion, to generate innovative concepts, such
as the users themselves. However, the newly-added concept from previous
customization methods often shows weaker combination abilities than the
original ones even given several images during training. We thus propose a new
personalization method that allows for the seamless integration of a unique
individual into the pre-trained diffusion model using just $\textbf{one facial
photograph}$ and only $\textbf{1024 learnable parameters}$ under $\textbf{3
minutes}$. As a result, we can effortlessly generate stunning images of this
person in any pose or position, interacting with anyone and doing anything
imaginable from text prompts. To achieve this, we first analyze and build a
well-defined
celeb basis from the embedding space of the pre-trained large text encoder.
Then, given one facial photo as the target identity, we generate its own
embedding by optimizing the weight of this basis and locking all other
parameters. Empowered by the proposed celeb basis, the new identity in our
customized model showcases a better concept combination ability than previous
personalization methods. Besides, our model can also learn several new
identities at once, and these identities can interact with each other, where
previous customization models fail. The code will be released.
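The core optimization can be sketched as learning only the combination
weights over a fixed basis of text-encoder embeddings (the 1024 weights match
the abstract, but the basis construction and loss below are placeholders; the
real objective is a diffusion denoising loss):

```python
import torch

d, k = 768, 1024                 # text-embedding dim and basis size
celeb_basis = torch.randn(k, d)  # fixed basis from the pre-trained text encoder

weights = torch.zeros(k, requires_grad=True)  # the only trainable parameters
opt = torch.optim.Adam([weights], lr=1e-2)

target = torch.randn(d)          # placeholder identity-matching signal
for step in range(100):
    embedding = weights @ celeb_basis          # new identity token embedding
    loss = (embedding - target).pow(2).mean()  # stand-in for the diffusion loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```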
|
[
"cs.CV"
] | true |
2306.00943
|
2023-06-01T17:43:27Z
|
Make-Your-Video: Customized Video Generation Using Textual and
Structural Guidance
|
[
"Jinbo Xing",
"Menghan Xia",
"Yuxin Liu",
"Yuechen Zhang",
"Yong Zhang",
"Yingqing He",
"Hanyuan Liu",
"Haoxin Chen",
"Xiaodong Cun",
"Xintao Wang",
"Ying Shan",
"Tien-Tsin Wong"
] |
Creating a vivid video from the event or scenario in our imagination is a
truly fascinating experience. Recent advancements in text-to-video synthesis
have unveiled the potential to achieve this with prompts only. While text is
convenient in conveying the overall scene context, it may be insufficient for
precise control. In this paper, we explore customized video generation by
utilizing text as context description and motion structure (e.g. frame-wise
depth) as concrete guidance. Our method, dubbed Make-Your-Video, involves
joint-conditional video generation using a Latent Diffusion Model that is
pre-trained for still image synthesis and then promoted for video generation
with the introduction of temporal modules. This two-stage learning scheme not
only reduces the computing resources required, but also improves the
performance by transferring the rich concepts available in image datasets
solely into video generation. Moreover, we use a simple yet effective causal
attention mask strategy to enable longer video synthesis, which mitigates the
potential quality degradation effectively. Experimental results show the
superiority of our method over existing baselines, particularly in terms of
temporal coherence and fidelity to users' guidance. In addition, our model
enables several intriguing applications that demonstrate potential for
practical usage.
|
[
"cs.CV"
] | true |
2306.00968
|
2023-06-01T17:57:32Z
|
GRES: Generalized Referring Expression Segmentation
|
[
"Chang Liu",
"Henghui Ding",
"Xudong Jiang"
] |
Referring Expression Segmentation (RES) aims to generate a segmentation mask
for the object described by a given language expression. Existing classic RES
datasets and methods commonly support single-target expressions only, i.e., one
expression refers to one target object. Multi-target and no-target expressions
are not considered. This limits the usage of RES in practice. In this paper, we
introduce a new benchmark called Generalized Referring Expression Segmentation
(GRES), which extends the classic RES to allow expressions to refer to an
arbitrary number of target objects. Towards this, we construct the first
large-scale GRES dataset called gRefCOCO that contains multi-target, no-target,
and single-target expressions. GRES and gRefCOCO are designed to be
well-compatible with RES, facilitating extensive experiments to study the
performance gap of the existing RES methods on the GRES task. In the
experimental study, we find that one of the big challenges of GRES is complex
relationship modeling. Based on this, we propose a region-based GRES baseline
ReLA that adaptively divides the image into regions with sub-instance clues,
and explicitly models the region-region and region-language dependencies. The
proposed approach ReLA achieves new state-of-the-art performance on both the
newly proposed GRES and classic RES tasks. The proposed gRefCOCO dataset and
method are available at https://henghuiding.github.io/GRES.
|
[
"cs.CV"
] | false |
2306.00979
|
2023-06-01T17:59:21Z
|
Building Rearticulable Models for Arbitrary 3D Objects from 4D Point
Clouds
|
[
"Shaowei Liu",
"Saurabh Gupta",
"Shenlong Wang"
] |
We build rearticulable models for arbitrary everyday man-made objects
containing an arbitrary number of parts that are connected together in
arbitrary ways via 1 degree-of-freedom joints. Given point cloud videos of such
everyday objects, our method identifies the distinct object parts, what parts
are connected to what other parts, and the properties of the joints connecting
each part pair. We do this by jointly optimizing the part segmentation,
transformation, and kinematics using a novel energy minimization framework. Our
inferred animatable models enable retargeting to novel poses with sparse
point-correspondence guidance. We test our method on a new articulating robot
dataset, and the Sapiens dataset with common daily objects, as well as
real-world scans. Experiments show that our method outperforms two leading
prior works on various metrics.
|
[
"cs.CV"
] | false |
2306.01176
|
2023-06-01T22:21:28Z
|
Cooperative Hardware-Prompt Learning for Snapshot Compressive Imaging
|
[
"Jiamian Wang",
"Zongliang Wu",
"Yulun Zhang",
"Xin Yuan",
"Tao Lin",
"Zhiqiang Tao"
] |
Snapshot compressive imaging emerges as a promising technology for acquiring
real-world hyperspectral signals. It uses an optical encoder and compressively
produces a 2D measurement, after which the 3D hyperspectral data can be
retrieved by training a deep reconstruction network. Existing reconstruction
models are trained with a single hardware instance, whose performance is
vulnerable to hardware perturbation or replacement, demonstrating an
overfitting issue to the physical configuration. This defect limits the
deployment of pre-trained models since they would suffer from large
performance degradation when assembled with unseen hardware. To better
facilitate the
reconstruction model with new hardware, previous efforts resort to centralized
training by collecting multi-hardware and data, which is impractical when
dealing with proprietary assets among institutions. In light of this, federated
learning (FL) has become a feasible solution to enable cross-hardware
cooperation without breaking privacy. However, the naive FedAvg is subject to
client drift upon data heterogeneity owing to the hardware inconsistency. In
this work, we tackle this challenge by marrying prompt tuning with FL for
snapshot compressive imaging for the first time and propose a federated
hardware-prompt learning (FedHP) method. Rather than mitigating the client
drift by rectifying the gradients, which only takes effect on the learning
manifold but fails to touch the heterogeneity rooted in the input data space,
the proposed FedHP globally learns a hardware-conditioned prompter to align the
data distribution, which serves as an indicator of the data inconsistency
stemming from different pre-defined coded apertures. Extensive experiments
demonstrate that the proposed method well coordinates the pre-trained model to
indeterminate hardware configurations.
|
[
"cs.CV"
] | false |
2306.06113
|
2023-06-01T06:37:19Z
|
SAM-helps-Shadow:When Segment Anything Model meet shadow removal
|
[
"Xiaofeng Zhang",
"Chaochen Gu",
"Shanying Zhu"
] |
The challenges surrounding the application of image shadow removal to
real-world images and not just constrained datasets like ISTD/SRD have
highlighted an urgent need for zero-shot learning in this field. In this study,
we innovatively adapted SAM (Segment Anything Model) for shadow removal by
introducing SAM-helps-Shadow, effectively integrating shadow detection and
removal into a single stage. Our approach utilized the model's detection
results as a potent prior for facilitating shadow detection, followed by shadow
removal using a second-order deep unfolding network. The source code of
SAM-helps-Shadow can be obtained from
https://github.com/zhangbaijin/SAM-helps-Shadow.
|
[
"cs.CV"
] | false |
2306.00294
|
2023-06-01T02:25:55Z
|
Affinity-based Attention in Self-supervised Transformers Predicts
Dynamics of Object Grouping in Humans
|
[
"Hossein Adeli",
"Seoyoung Ahn",
"Nikolaus Kriegeskorte",
"Gregory Zelinsky"
] |
The spreading of attention has been proposed as a mechanism for how humans
group features to segment objects. However, such a mechanism has not yet been
implemented and tested in naturalistic images. Here, we leverage the feature
maps from self-supervised vision Transformers and propose a model of human
object-based attention spreading and segmentation. Attention spreads within an
object through the feature affinity signal between different patches of the
image. We also collected behavioral data on people grouping objects in natural
images by judging whether two dots are on the same object or on two different
objects. We found that our affinity-spread models built on feature maps from
the self-supervised Transformers showed significant improvement over baseline
and CNN-based models in predicting human reaction time patterns,
despite not being trained on the task or with any other object labels. Our work
provides new benchmarks for evaluating models of visual representation learning
including Transformers.
|
[
"cs.CV",
"q-bio.NC"
] | false |
2306.00303
|
2023-06-01T02:50:51Z
|
Sea Ice Extraction via Remote Sensed Imagery: Algorithms, Datasets,
Applications and Challenges
|
[
"Anzhu Yu",
"Wenjun Huang",
"Qing Xu",
"Qun Sun",
"Wenyue Guo",
"Song Ji",
"Bowei Wen",
"Chunping Qiu"
] |
Deep learning, a dominant technique in artificial intelligence, has
completely changed image understanding over the past decade. As a consequence,
the sea ice extraction (SIE) problem has reached a new era. We present a
comprehensive review of four important aspects of SIE, including algorithms,
datasets, applications, and future trends. Our review focuses on research
published from 2016 to the present, with a specific focus on deep
learning-based approaches in the last five years. We divided all related
algorithms into three categories: classical image segmentation approaches,
machine learning-based approaches, and deep learning-based methods. We
reviewed the accessible ice datasets, including SAR-based datasets,
optical-based datasets, and others. The applications are presented in four
aspects,
including climate research, navigation, geographic information systems (GIS)
production and others. It also provides insightful observations and inspiring
future research directions.
|
[
"cs.CV",
"eess.IV"
] | false |
2306.00378
|
2023-06-01T06:19:33Z
|
Example-based Motion Synthesis via Generative Motion Matching
|
[
"Weiyu Li",
"Xuelin Chen",
"Peizhuo Li",
"Olga Sorkine-Hornung",
"Baoquan Chen"
] |
We present GenMM, a generative model that "mines" as many diverse motions as
possible from a single or few example sequences. In stark contrast to existing
data-driven methods, which typically require long offline training time, are
prone to visual artifacts, and tend to fail on large and complex skeletons,
GenMM inherits the training-free nature and the superior quality of the
well-known Motion Matching method. GenMM can synthesize a high-quality motion
within a fraction of a second, even with highly complex and large skeletal
structures. At the heart of our generative framework lies the generative motion
matching module, which utilizes the bidirectional visual similarity as a
generative cost function to motion matching, and operates in a multi-stage
framework to progressively refine a random guess using exemplar motion matches.
In addition to diverse motion generation, we show the versatility of our
generative framework by extending it to a number of scenarios that are not
possible with motion matching alone, including motion completion, key
frame-guided generation, infinite looping, and motion reassembly. Code and data
for this paper are at https://wyysf-98.github.io/GenMM/
|
[
"cs.GR",
"cs.CV"
] | true |
2306.00379
|
2023-06-01T06:21:45Z
|
Large Scale Generative Multimodal Attribute Extraction for E-commerce
Attributes
|
[
"Anant Khandelwal",
"Happy Mittal",
"Shreyas Sunil Kulkarni",
"Deepak Gupta"
] |
E-commerce websites (e.g. Amazon) have a plethora of structured and
unstructured information (text and images) present on the product pages.
Sellers often either don't label or mislabel values of the attributes (e.g.
color, size etc.) for their products. Automatically identifying these attribute
values from an eCommerce product page that contains both text and images is a
challenging task, especially when the attribute value is not explicitly
mentioned in the catalog. In this paper, we present a scalable solution for
this problem where we pose attribute extraction problem as a question-answering
task, which we solve using \textbf{MXT}, consisting of three key components:
(i) \textbf{M}AG (Multimodal Adaptation Gate), (ii) \textbf{X}ception network,
and (iii) \textbf{T}5 encoder-decoder. Our system consists of a generative
model that \emph{generates} attribute-values for a given product by using both
textual and visual characteristics (e.g. images) of the product. We show that
our system is capable of handling zero-shot attribute prediction (when
attribute value is not seen in training data) and value-absent prediction (when
attribute value is not mentioned in the text) which are missing in traditional
classification-based and NER-based models respectively. We have trained our
models using distant supervision, removing dependency on human labeling, thus
making them practical for real-world applications. With this framework, we are
able to train a single model for 1000s of (product-type, attribute) pairs, thus
reducing the overhead of training and maintaining separate models. Extensive
experiments on two real world datasets show that our framework improves the
absolute recall@90P by 10.16\% and 6.9\% from the existing state of the art
models. In a popular e-commerce store, we have deployed our models for 1000s of
(product-type, attribute) pairs.
|
[
"cs.CV",
"cs.LG"
] | false |
2306.00446
|
2023-06-01T08:35:51Z
|
Evaluation of Multi-indicator And Multi-organ Medical Image Segmentation
Models
|
[
"Qi Ye",
"Lihua Guo"
] |
In recent years, "U-shaped" neural networks featuring encoder and decoder
structures have gained popularity in the field of medical image segmentation.
Various variants of this model have been developed. Nevertheless, the
evaluation of these models has received less attention compared to model
development. In response, we propose a comprehensive method for evaluating
medical image segmentation models for multi-indicator and multi-organ (named
MIMO). MIMO allows models to generate independent thresholds which are then
combined with multi-indicator evaluation and confidence estimation to screen
and measure each organ. As a result, MIMO offers detailed information on the
segmentation of each organ in each sample, thereby aiding developers in
analyzing and improving the model. Additionally, MIMO can produce concise
usability and comprehensiveness scores for different models. Models with higher
scores are deemed to be excellent models, which is convenient for clinical
evaluation. Our research tests eight different medical image segmentation
models on two abdominal multi-organ datasets and evaluates them from four
perspectives: correctness, confidence estimation, Usable Region and MIMO.
Furthermore, robustness experiments are tested. Experimental results
demonstrate that MIMO offers novel insights into multi-indicator and
multi-organ medical image evaluation and provides a specific and concise
measure for the usability and comprehensiveness of the model. Code:
https://github.com/SCUT-ML-GUO/MIMO
|
[
"eess.IV",
"cs.CV"
] | false |
2306.00451
|
2023-06-01T08:47:58Z
|
S$^2$ME: Spatial-Spectral Mutual Teaching and Ensemble Learning for
Scribble-supervised Polyp Segmentation
|
[
"An Wang",
"Mengya Xu",
"Yang Zhang",
"Mobarakol Islam",
"Hongliang Ren"
] |
Fully-supervised polyp segmentation has accomplished significant triumphs
over the years in advancing the early diagnosis of colorectal cancer. However,
label-efficient solutions using weak supervision like scribbles are rarely
explored, yet they are highly meaningful and in demand in medical practice due
to the expense and scarcity of densely-annotated polyp data. Besides, various
deployment issues, including data shifts and corruption, put forward further
requests for model generalization and robustness. To address these concerns, we
design a framework of Spatial-Spectral Dual-branch Mutual Teaching and
Entropy-guided Pseudo Label Ensemble Learning (S$^2$ME). Concretely, for the
first time in weakly-supervised medical image segmentation, we promote the
dual-branch co-teaching framework by leveraging the intrinsic complementarity
of features extracted from the spatial and spectral domains and encouraging
cross-space consistency through collaborative optimization. Furthermore, to
produce reliable mixed pseudo labels, which enhance the effectiveness of
ensemble learning, we introduce a novel adaptive pixel-wise fusion technique
based on the entropy guidance from the spatial and spectral branches. Our
strategy efficiently mitigates the deleterious effects of uncertainty and noise
present in pseudo labels and surpasses previous alternatives in terms of
efficacy. Ultimately, we formulate a holistic optimization objective to learn
from the hybrid supervision of scribbles and pseudo labels. Extensive
experiments and evaluation on four public datasets demonstrate the superiority
of our method regarding in-distribution accuracy, out-of-distribution
generalization, and robustness, highlighting its promising clinical
significance. Our code is available at https://github.com/lofrienger/S2ME.
|
[
"eess.IV",
"cs.CV"
] | false |
2306.00473
|
2023-06-01T09:23:22Z
|
Interpretable simultaneous localization of MRI corpus callosum and
classification of atypical Parkinsonian disorders using YOLOv5
|
[
"Vamshi Krishna Kancharla",
"Debanjali Bhattacharya",
"Neelam Sinha",
"Jitender Saini",
"Pramod Kumar Pal",
"Sandhya M"
] |
Structural MRI (S-MRI) is one of the most versatile imaging modalities, having
revolutionized the anatomical study of the brain in past decades. The corpus
callosum (CC) is the principal white matter fibre tract, enabling all kinds of
inter-hemispheric communication. Thus, subtle changes in CC might be associated
with various neurological disorders. The present work proposes the potential of
YOLOv5-based CC detection framework to differentiate atypical Parkinsonian
disorders (PD) from healthy controls (HC). With 3 rounds of hold-out
validation, mean classification accuracy of 92% is obtained using the proposed
method on a proprietary dataset consisting of 20 healthy subjects and 20 cases
of APDs, with an improvement of 5% over SOTA methods (CC morphometry and visual
texture analysis) that used the same dataset. Subsequently, to make the YOLO
predictions explainable, an Eigen-CAM-based heatmap is generated to identify the
most important sub-region of the CC that drives the classification. The Eigen-CAM
result showed the CC mid-body to be the most distinguishable sub-region for
classifying APD and HC, which is in line with SOTA methodologies and the current
prevalent understanding in medicine.
|
[
"eess.IV",
"cs.CV"
] | false |
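Eigen-CAM, used above to explain the YOLOv5 predictions, is simple enough to sketch: the heatmap is the projection of a convolutional activation map onto its first principal component, requiring no gradients or class labels. The layer choice and centering step here are assumptions.

```python
import numpy as np

def eigen_cam(activations):
    """Eigen-CAM heatmap from a conv activation map of shape (C, H, W).

    The activations are flattened to (H*W, C) and projected onto the
    first right singular vector, i.e. the leading principal direction
    of channel space; no gradients or class labels are needed.
    """
    c, h, w = activations.shape
    flat = activations.reshape(c, h * w).T          # (H*W, C)
    flat = flat - flat.mean(axis=0, keepdims=True)  # center channels (a common variant)
    _, _, vt = np.linalg.svd(flat, full_matrices=False)
    cam = flat @ vt[0]                              # (H*W,) projection
    cam = np.maximum(cam.reshape(h, w), 0)          # keep positive response
    return cam / (cam.max() + 1e-8)                 # normalize to [0, 1]

# Toy usage with a random "feature map" standing in for a YOLOv5 neck layer.
rng = np.random.default_rng(0)
heatmap = eigen_cam(rng.normal(size=(64, 20, 20)))
print(heatmap.shape, float(heatmap.min()), float(heatmap.max()))
```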
2306.00499
|
2023-06-01T09:49:11Z
|
DeSAM: Decoupling Segment Anything Model for Generalizable Medical Image
Segmentation
|
[
"Yifan Gao",
"Wei Xia",
"Dingdu Hu",
"Xin Gao"
] |
Deep learning based automatic medical image segmentation models often suffer
from domain shift, where the models trained on a source domain do not
generalize well to other unseen domains. As a vision foundation model with
powerful generalization capabilities, Segment Anything Model (SAM) shows
potential for improving the cross-domain robustness of medical image
segmentation. However, SAM and its fine-tuned models performed significantly
worse in fully automatic mode compared to when given manual prompts. Upon
further investigation, we discovered that the degradation in performance was
related to the coupling effect of poor prompts and mask segmentation. In fully
automatic mode, the presence of inevitable poor prompts (such as points outside
the mask or boxes significantly larger than the mask) can significantly mislead
mask generation. To address the coupling effect, we propose the decoupling SAM
(DeSAM). DeSAM modifies SAM's mask decoder to decouple mask generation and
prompt embeddings while leveraging pre-trained weights. We conducted
experiments on publicly available prostate cross-site datasets. The results
show that DeSAM improves the Dice score by an average of 8.96% (from 70.06% to
79.02%) compared to the previous state-of-the-art domain generalization method.
Moreover, DeSAM can be trained on personal devices with an entry-level GPU, since
our approach does not rely on tuning the heavyweight image encoder. The code is
publicly available at https://github.com/yifangao112/DeSAM.
|
[
"eess.IV",
"cs.CV"
] | false |
2306.00640
|
2023-06-01T13:06:44Z
|
Multi-Modal Deep Learning for Multi-Temporal Urban Mapping With a Partly
Missing Optical Modality
|
[
"Sebastian Hafner",
"Yifang Ban"
] |
This paper proposes a novel multi-temporal urban mapping approach using
multi-modal satellite data from the Sentinel-1 Synthetic Aperture Radar (SAR)
and Sentinel-2 MultiSpectral Instrument (MSI) missions. In particular, it
focuses on the problem of a partly missing optical modality due to clouds. The
proposed model utilizes two networks to extract features from each modality
separately. In addition, a reconstruction network is utilized to approximate
the optical features based on the SAR data in case of a missing optical
modality. Our experiments on a multi-temporal urban mapping dataset with
Sentinel-1 SAR and Sentinel-2 MSI data demonstrate that the proposed method
outperforms a multi-modal approach that uses zero values as a replacement for
missing optical data, as well as a uni-modal SAR-based approach. Therefore, the
proposed method is effective in exploiting multi-modal data, if available, but
it also retains its effectiveness in case the optical modality is missing.
|
[
"cs.CV",
"eess.IV"
] | false |
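A minimal PyTorch sketch of the fallback mechanism described above: two encoders extract per-modality features, and a reconstruction network approximates the optical features from SAR features whenever the optical input is missing. Channel counts, module depths, and the fusion head are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class DualModalUrbanMapper(nn.Module):
    def __init__(self, ch_sar=2, ch_opt=10, feat=32):
        super().__init__()
        conv = lambda cin: nn.Sequential(
            nn.Conv2d(cin, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU())
        self.enc_sar = conv(ch_sar)   # Sentinel-1 SAR branch
        self.enc_opt = conv(ch_opt)   # Sentinel-2 MSI branch
        # Reconstruction network: approximates optical features from SAR features.
        self.recon = nn.Sequential(
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, padding=1))
        self.head = nn.Conv2d(2 * feat, 1, 1)  # built-up probability map

    def forward(self, sar, opt=None):
        f_sar = self.enc_sar(sar)
        # Fall back to reconstructed optical features under cloud cover.
        f_opt = self.enc_opt(opt) if opt is not None else self.recon(f_sar)
        return torch.sigmoid(self.head(torch.cat([f_sar, f_opt], dim=1)))

model = DualModalUrbanMapper()
sar = torch.randn(1, 2, 64, 64)
print(model(sar).shape)                               # optical modality missing
print(model(sar, torch.randn(1, 10, 64, 64)).shape)   # both modalities present
```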
2306.00738
|
2023-06-01T14:32:34Z
|
ReFACT: Updating Text-to-Image Models by Editing the Text Encoder
|
[
"Dana Arad",
"Hadas Orgad",
"Yonatan Belinkov"
] |
Text-to-image models are trained on extensive amounts of data, leading them
to implicitly encode factual knowledge within their parameters. While some
facts are useful, others may be incorrect or become outdated (e.g., the current
President of the United States). We introduce ReFACT, a novel approach for
editing factual knowledge in text-to-image generative models. ReFACT updates
the weights of a specific layer in the text encoder, only modifying a tiny
portion of the model's parameters, and leaving the rest of the model
unaffected. We empirically evaluate ReFACT on an existing benchmark, alongside
RoAD, a newly curated dataset. ReFACT achieves superior performance in terms of
generalization to related concepts while preserving unrelated concepts.
Furthermore, ReFACT maintains image generation quality, making it a valuable
tool for updating and correcting factual information in text-to-image models.
|
[
"cs.CL",
"cs.CV",
"68T50",
"I.2.7"
] | false |
2306.00763
|
2023-06-01T14:56:37Z
|
Learning Disentangled Prompts for Compositional Image Synthesis
|
[
"Kihyuk Sohn",
"Albert Shaw",
"Yuan Hao",
"Han Zhang",
"Luisa Polania",
"Huiwen Chang",
"Lu Jiang",
"Irfan Essa"
] |
We study domain-adaptive image synthesis, the problem of teaching pretrained
image generative models a new style or concept from as few as one image to
synthesize novel images, to better understand compositional image
synthesis. We present a framework that leverages a pretrained class-conditional
generation model and visual prompt tuning. Specifically, we propose a novel
source class distilled visual prompt that learns disentangled prompts of
semantic (e.g., class) and domain (e.g., style) from a few images. The learned
domain prompt is then used to synthesize images of any class in the style of the
target domain. We conduct studies on various target domains with the number of
images ranging from one to a few to many, and present qualitative results that
demonstrate the compositional generalization of our method. Moreover, we show that our
method can help improve zero-shot domain adaptation classification accuracy.
|
[
"cs.CV",
"cs.AI"
] | false |
2306.00826
|
2023-06-01T15:48:10Z
|
In or Out? Fixing ImageNet Out-of-Distribution Detection Evaluation
|
[
"Julian Bitterwolf",
"Maximilian Müller",
"Matthias Hein"
] |
Out-of-distribution (OOD) detection is the problem of identifying inputs
which are unrelated to the in-distribution task. The OOD detection performance
when the in-distribution (ID) is ImageNet-1K is commonly tested on a
small range of test OOD datasets. We find that most of the currently used test
OOD datasets, including datasets from the open set recognition (OSR)
literature, have severe issues: In some cases more than 50$\%$ of the dataset
contains objects belonging to one of the ID classes. These erroneous samples
heavily distort the evaluation of OOD detectors. As a solution, we introduce
NINCO, a novel test OOD dataset in which each sample is checked to be ID-free,
and whose fine-grained range of OOD classes allows for a detailed analysis of an
OOD detector's strengths and failure modes, particularly when paired with a
number of synthetic "OOD unit-tests". We provide detailed evaluations across a
large set of architectures and OOD detection methods on NINCO and the
unit-tests, revealing new insights about model weaknesses and the effects of
pretraining on OOD detection performance. We provide code and data at
https://github.com/j-cb/NINCO.
|
[
"cs.LG",
"cs.CV"
] | false |
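The evaluations above boil down to scoring ID against OOD samples; the two standard summary metrics (AUROC and FPR at 95% TPR) can be computed from per-sample detector scores as in the sketch below, which assumes the convention that higher scores mean more ID-like.

```python
import numpy as np

def auroc(scores_id, scores_ood):
    """AUROC for separating ID from OOD, higher score = more ID-like.

    Computed rank-based as the probability that a random ID sample scores
    above a random OOD sample (ties broken arbitrarily, fine for
    continuous scores).
    """
    s = np.concatenate([scores_id, scores_ood])
    ranks = s.argsort().argsort().astype(float)   # 0-based ranks
    n_id, n_ood = len(scores_id), len(scores_ood)
    return (ranks[:n_id].sum() - n_id * (n_id - 1) / 2) / (n_id * n_ood)

def fpr_at_95_tpr(scores_id, scores_ood):
    """Fraction of OOD samples admitted when the threshold keeps 95% of ID."""
    thresh = np.quantile(scores_id, 0.05)  # 95% of ID scores lie above it
    return float((scores_ood >= thresh).mean())

rng = np.random.default_rng(0)
id_scores = rng.normal(2.0, 1.0, 1000)    # e.g. max-softmax on ID data
ood_scores = rng.normal(0.0, 1.0, 1000)   # same detector on OOD data
print(f"AUROC: {auroc(id_scores, ood_scores):.3f}")
print(f"FPR@95TPR: {fpr_at_95_tpr(id_scores, ood_scores):.3f}")
```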
2306.00856
|
2023-06-01T16:14:17Z
|
A deep-learning approach to early identification of suggested sexual
harassment from videos
|
[
"Shreya Shetye",
"Anwita Maiti",
"Tannistha Maiti",
"Tarry Singh"
] |
Sexual harassment, sexual abuse, and sexual violence are prevalent problems
in this day and age. Women's safety is an important issue that needs to be
highlighted and addressed. Given this issue, we have studied each of these
concerns and the factors that affect it based on images generated from movies.
We have classified the three terms (harassment, abuse, and violence) based on
the visual attributes present in images depicting these situations. We
identified that factors such as facial expression of the victim and perpetrator
and unwanted touching had a direct link to identifying the scenes containing
sexual harassment, abuse and violence. We also studied and outlined how
state-of-the-art explicit content detectors such as Google Cloud Vision API and
Clarifai API fail to identify and categorise these images. Based on these
definitions and characteristics, we have developed a first-of-its-kind dataset
from various Indian movie scenes. These scenes are classified as sexual
harassment, sexual abuse, or sexual violence and exported in the PASCAL VOC 1.1
format. Our dataset is annotated on the identified relevant features and can be
used to develop and train a deep-learning computer vision model to identify
these issues. The dataset is publicly available for research and development.
|
[
"cs.CV",
"cs.LG"
] | false |
2306.00890
|
2023-06-01T16:50:07Z
|
LLaVA-Med: Training a Large Language-and-Vision Assistant for
Biomedicine in One Day
|
[
"Chunyuan Li",
"Cliff Wong",
"Sheng Zhang",
"Naoto Usuyama",
"Haotian Liu",
"Jianwei Yang",
"Tristan Naumann",
"Hoifung Poon",
"Jianfeng Gao"
] |
Conversational generative AI has demonstrated remarkable promise for
empowering biomedical practitioners, but current investigations focus on
unimodal text. Multimodal conversational AI has seen rapid progress by
leveraging billions of image-text pairs from the public web, but such
general-domain vision-language models still lack sophistication in
understanding and conversing about biomedical images. In this paper, we propose
a cost-efficient approach for training a vision-language conversational
assistant that can answer open-ended research questions of biomedical images.
The key idea is to leverage a large-scale, broad-coverage biomedical
figure-caption dataset extracted from PubMed Central, use GPT-4 to
self-instruct open-ended instruction-following data from the captions, and then
fine-tune a large general-domain vision-language model using a novel curriculum
learning method. Specifically, the model first learns to align biomedical
vocabulary using the figure-caption pairs as is, then learns to master
open-ended conversational semantics using GPT-4 generated instruction-following
data, broadly mimicking how a layperson gradually acquires biomedical
knowledge. This enables us to train a Large Language and Vision Assistant for
BioMedicine (LLaVA-Med) in less than 15 hours (with eight A100s). LLaVA-Med
exhibits excellent multimodal conversational capability and can follow
open-ended instruction to assist with inquiries about a biomedical image. On
three standard biomedical visual question answering datasets, LLaVA-Med
outperforms previous supervised state-of-the-art on certain metrics. To
facilitate biomedical multimodal research, we will release our
instruction-following data and the LLaVA-Med model.
|
[
"cs.CV",
"cs.CL"
] | true |
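A schematic sketch of the two-stage curriculum described above: first align biomedical vocabulary by training only the vision-to-language projection on figure-caption pairs, then unfreeze the language model for instruction tuning. Module names and freezing choices are hypothetical stand-ins, not the actual LLaVA-Med training code.

```python
import torch.nn as nn

class ToyVLM(nn.Module):
    """Stand-in for a LLaVA-style model: vision encoder -> projection -> LM."""
    def __init__(self):
        super().__init__()
        self.vision_encoder = nn.Linear(16, 8)
        self.projection = nn.Linear(8, 8)
        self.language_model = nn.Linear(8, 100)

def set_trainable(module, flag):
    for p in module.parameters():
        p.requires_grad = flag

def run_curriculum(model, caption_data, instruct_data, train_fn):
    # Stage 1: biomedical concept alignment on figure-caption pairs;
    # only the vision-to-language projection is updated.
    set_trainable(model, False)
    set_trainable(model.projection, True)
    train_fn(model, caption_data)
    # Stage 2: instruction tuning on GPT-4 generated conversations;
    # the language model is unfrozen, the vision encoder stays frozen.
    set_trainable(model.language_model, True)
    train_fn(model, instruct_data)

dummy_train = lambda m, d: print(
    "trainable params:", sum(p.numel() for p in m.parameters() if p.requires_grad))
run_curriculum(ToyVLM(), None, None, dummy_train)
```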
2306.00892
|
2023-06-01T16:50:40Z
|
A Probabilistic Relaxation of the Two-Stage Object Pose Estimation
Paradigm
|
[
"Onur Beker"
] |
Existing object pose estimation methods commonly require a one-to-one point
matching step that forces them to be separated into two consecutive stages:
visual correspondence detection (e.g., by matching feature descriptors as part
of a perception front-end) followed by geometric alignment (e.g., by optimizing
a robust estimation objective for pointcloud registration or
perspective-n-point). Instead, we propose a matching-free probabilistic
formulation with two main benefits: i) it enables unified and concurrent
optimization of both visual correspondence and geometric alignment, and ii) it
can represent different plausible modes of the entire distribution of likely
poses. This in turn allows for a more graceful treatment of geometric
perception scenarios where establishing one-to-one matches between points is
conceptually ill-defined, such as textureless, symmetrical and/or occluded
objects and scenes where the correct pose is uncertain or there are multiple
equally valid solutions.
|
[
"cs.RO",
"cs.CV"
] | false |
2306.00906
|
2023-06-01T17:05:02Z
|
MOSAIC: Masked Optimisation with Selective Attention for Image
Reconstruction
|
[
"Pamuditha Somarathne",
"Tharindu Wickremasinghe",
"Amashi Niwarthana",
"A. Thieshanthan",
"Chamira U. S. Edussooriya",
"Dushan N. Wadduwage"
] |
Compressive sensing (CS) reconstructs images from sub-Nyquist measurements by
solving a sparsity-regularized inverse problem. Traditional CS solvers use
iterative optimizers with hand-crafted sparsifiers, while early data-driven
methods directly learn an inverse mapping from the low-dimensional measurement
space to the original image space. The latter outperforms the former, but is
restricted to a pre-defined measurement domain. More recently, deep unrolling
methods combine traditional proximal gradient methods and data-driven
approaches to iteratively refine an image approximation. To achieve higher
accuracy, it has also been suggested to learn both the sampling matrix, and the
choice of measurement vectors adaptively. Contrary to the current trend, in
this work we hypothesize that a general inverse mapping from a random set of
compressed measurements to the image domain exists for a given measurement
basis, and can be learned. Such a model is single-shot, non-restrictive and
does not parametrize the sampling process. To this end, we propose MOSAIC, a
novel compressive sensing framework to reconstruct images given any random
selection of measurements, sampled using a fixed basis. Motivated by the uneven
distribution of information across measurements, MOSAIC incorporates an
embedding technique to efficiently apply attention mechanisms on an encoded
sequence of measurements, while dispensing with the need for unrolled deep
networks. A range of experiments validate our proposed architecture as a
promising alternative for existing CS reconstruction methods, by achieving the
state-of-the-art for metrics of reconstruction accuracy on standard datasets.
|
[
"cs.CV",
"eess.IV"
] | false |
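The hypothesis above rests on a simple measurement model: a fixed orthonormal basis from which a random subset of rows is sampled per image, so the learned inverse mapping must cope with any measurement subset. Below is a sketch with a DCT basis (an illustrative choice) and a classical pseudo-inverse baseline for comparison.

```python
import numpy as np
from scipy.fft import dct

def sample_measurements(image, ratio, rng):
    """Compress a flattened image with a random row subset of a fixed basis.

    Returns the measurements, the selected row indices, and the basis, so a
    learned inverse mapping can be conditioned on which rows were sampled.
    """
    x = image.ravel()
    n = x.size
    basis = dct(np.eye(n), norm="ortho", axis=0)   # fixed orthonormal DCT basis
    rows = rng.choice(n, size=int(ratio * n), replace=False)
    y = basis[rows] @ x                            # sub-Nyquist measurements
    return y, rows, basis

rng = np.random.default_rng(0)
img = rng.random((8, 8))
y, rows, basis = sample_measurements(img, ratio=0.25, rng=rng)
print(y.shape)  # (16,) measurements from a random 25% of basis rows

# Classical baseline for comparison: minimum-l2 (pseudo-inverse) estimate.
x_hat = np.linalg.pinv(basis[rows]) @ y
print(np.abs(x_hat - img.ravel()).mean())
```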
2306.00931
|
2023-06-01T17:34:25Z
|
"Let's not Quote out of Context": Unified Vision-Language Pretraining
for Context Assisted Image Captioning
|
[
"Abisek Rajakumar Kalarani",
"Pushpak Bhattacharyya",
"Niyati Chhaya",
"Sumit Shekhar"
] |
Well-formed, context-aware image captions and tags in enterprise content such
as marketing material are critical to ensure brand presence and content
recall. Manually creating and updating them is non-trivial given the
scale and tedium of the task. We propose a new unified
Vision-Language (VL) model based on the One For All (OFA) model, with a focus
on context-assisted image captioning where the caption is generated based on
both the image and its context. Our approach aims to overcome the
context-independent (image and text are treated independently) nature of the
existing approaches. We exploit context by pretraining our model with datasets
of three tasks: news image captioning where the news article is the context,
contextual visual entailment, and keyword extraction from the context. The
second pretraining task is a new VL task, and we construct and release two
datasets for the task with 1.1M and 2.2K data instances. Our system achieves
state-of-the-art results with an improvement of up to 8.34 CIDEr score on the
benchmark news image captioning datasets. To the best of our knowledge, ours is
the first effort at incorporating contextual information in pretraining the
models for the VL tasks.
|
[
"cs.CV",
"cs.CL"
] | false |
2306.00964
|
2023-06-01T17:55:32Z
|
Cocktail: Mixing Multi-Modality Controls for Text-Conditional Image
Generation
|
[
"Minghui Hu",
"Jianbin Zheng",
"Daqing Liu",
"Chuanxia Zheng",
"Chaoyue Wang",
"Dacheng Tao",
"Tat-Jen Cham"
] |
Text-conditional diffusion models are able to generate high-fidelity images
with diverse contents. However, linguistic representations frequently exhibit
ambiguous descriptions of the envisioned objective imagery, requiring the
incorporation of additional control signals to bolster the efficacy of
text-guided diffusion models. In this work, we propose Cocktail, a pipeline to
mix various modalities into one embedding, amalgamated with a generalized
ControlNet (gControlNet), a controllable normalisation (ControlNorm), and a
spatial guidance sampling method, to actualize multi-modal and
spatially-refined control for text-conditional diffusion models. Specifically,
we introduce a hyper-network gControlNet, dedicated to the alignment and
infusion of the control signals from disparate modalities into the pre-trained
diffusion model. gControlNet is capable of accepting flexible modality signals,
encompassing the simultaneous reception of any combination of modality signals,
or the supplementary fusion of multiple modality signals. The control signals
are then fused and injected into the backbone model according to our proposed
ControlNorm. Furthermore, our advanced spatial guidance sampling methodology
proficiently incorporates the control signal into the designated region,
thereby circumventing the manifestation of undesired objects within the
generated image. We demonstrate the results of our method in controlling
various modalities, proving high-quality synthesis and fidelity to multiple
external signals.
|
[
"cs.CV",
"cs.LG"
] | true |
2306.00983
|
2023-06-01T17:59:51Z
|
StyleDrop: Text-to-Image Generation in Any Style
|
[
"Kihyuk Sohn",
"Nataniel Ruiz",
"Kimin Lee",
"Daniel Castro Chin",
"Irina Blok",
"Huiwen Chang",
"Jarred Barber",
"Lu Jiang",
"Glenn Entis",
"Yuanzhen Li",
"Yuan Hao",
"Irfan Essa",
"Michael Rubinstein",
"Dilip Krishnan"
] |
Pre-trained large text-to-image models synthesize impressive images with an
appropriate use of text prompts. However, ambiguities inherent in natural
language and out-of-distribution effects make it hard to synthesize image
styles that leverage a specific design pattern, texture, or material. In this
paper, we introduce StyleDrop, a method that enables the synthesis of images
that faithfully follow a specific style using a text-to-image model. The
proposed method is extremely versatile and captures nuances and details of a
user-provided style, such as color schemes, shading, design patterns, and local
and global effects. It efficiently learns a new style by fine-tuning very few
trainable parameters (less than $1\%$ of total model parameters) and improving
the quality via iterative training with either human or automated feedback.
Better yet, StyleDrop is able to deliver impressive results even when the user
supplies only a single image that specifies the desired style. An extensive
study shows that, for the task of style tuning text-to-image models, StyleDrop
implemented on Muse convincingly outperforms other methods, including
DreamBooth and textual inversion on Imagen or Stable Diffusion. More results
are available at our project website: https://styledrop.github.io
|
[
"cs.CV",
"cs.AI"
] | true |
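The sub-1% trainable-parameter claim above is characteristic of adapter-style tuning; the sketch below attaches small bottleneck adapters to a frozen transformer backbone and verifies the trainable fraction. The adapter placement and sizes are illustrative, not Muse's actual architecture.

```python
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project, residual."""
    def __init__(self, dim, bottleneck=8):
        super().__init__()
        self.down, self.up = nn.Linear(dim, bottleneck), nn.Linear(bottleneck, dim)
        self.act = nn.GELU()

    def forward(self, x):
        return x + self.up(self.act(self.down(x)))

# Frozen "backbone" standing in for a large text-to-image transformer.
dim, layers = 512, 12
backbone = nn.ModuleList(
    [nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)
     for _ in range(layers)])
for p in backbone.parameters():
    p.requires_grad = False

# Trainable adapters, one per layer.
adapters = nn.ModuleList([Adapter(dim) for _ in range(layers)])

total = sum(p.numel() for p in backbone.parameters()) + \
        sum(p.numel() for p in adapters.parameters())
trainable = sum(p.numel() for p in adapters.parameters())
print(f"trainable fraction: {100 * trainable / total:.3f}%")  # well under 1%
```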
2306.00989
|
2023-06-01T17:59:58Z
|
Hiera: A Hierarchical Vision Transformer without the Bells-and-Whistles
|
[
"Chaitanya Ryali",
"Yuan-Ting Hu",
"Daniel Bolya",
"Chen Wei",
"Haoqi Fan",
"Po-Yao Huang",
"Vaibhav Aggarwal",
"Arkabandhu Chowdhury",
"Omid Poursaeed",
"Judy Hoffman",
"Jitendra Malik",
"Yanghao Li",
"Christoph Feichtenhofer"
] |
Modern hierarchical vision transformers have added several vision-specific
components in the pursuit of supervised classification performance. While these
components lead to effective accuracies and attractive FLOP counts, the added
complexity actually makes these transformers slower than their vanilla ViT
counterparts. In this paper, we argue that this additional bulk is unnecessary.
By pretraining with a strong visual pretext task (MAE), we can strip out all
the bells-and-whistles from a state-of-the-art multi-stage vision transformer
without losing accuracy. In the process, we create Hiera, an extremely simple
hierarchical vision transformer that is more accurate than previous models
while being significantly faster both at inference and during training. We
evaluate Hiera on a variety of tasks for image and video recognition. Our code
and models are available at https://github.com/facebookresearch/hiera.
|
[
"cs.CV",
"cs.LG"
] | false |
2306.01034
|
2023-06-01T17:21:42Z
|
Pseudo Labels for Single Positive Multi-Label Learning
|
[
"Julio Arroyo"
] |
The cost of data annotation is a substantial impediment for multi-label image
classification: in every image, every category must be labeled as present or
absent. Single positive multi-label (SPML) learning is a cost-effective
solution, where models are trained on a single positive label per image. Thus,
SPML is a more challenging domain, since it requires dealing with missing
labels. In this work, we propose a method to turn single positive data into
fully-labeled data: Pseudo Multi-Labels. Basically, a teacher network is
trained on single positive labels. Then, we treat the teacher model's
predictions on the training data as ground-truth labels to train a student
network on fully-labeled images. With this simple approach, we show that the
performance achieved by the student model approaches that of a model trained on
the actual fully-labeled images.
|
[
"cs.LG",
"cs.CV"
] | false |
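The Pseudo Multi-Labels recipe above is compact enough to sketch end to end: a teacher trained on single positive labels produces full pseudo-label vectors, and a student is trained on them as if they were ground truth. Thresholding and loss choices are illustrative assumptions.

```python
import torch
import torch.nn as nn

def make_pseudo_labels(teacher, images, threshold=0.5):
    """Turn single-positive data into fully-labeled data with the teacher.

    The teacher's per-class sigmoid predictions are binarized and then
    treated as ground truth for every class, present or absent.
    """
    teacher.eval()
    with torch.no_grad():
        probs = torch.sigmoid(teacher(images))
    return (probs > threshold).float()   # (N, num_classes) multi-label targets

def train_student_step(student, optimizer, images, pseudo_labels):
    loss = nn.functional.binary_cross_entropy_with_logits(
        student(images), pseudo_labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage with linear "networks" over flattened features.
teacher, student = nn.Linear(64, 20), nn.Linear(64, 20)
opt = torch.optim.SGD(student.parameters(), lr=0.1)
images = torch.randn(32, 64)
labels = make_pseudo_labels(teacher, images)
print(train_student_step(student, opt, images, labels))
```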
2306.01125
|
2023-06-01T20:21:05Z
|
Reconstruction Distortion of Learned Image Compression with
Imperceptible Perturbations
|
[
"Yang Sui",
"Zhuohang Li",
"Ding Ding",
"Xiang Pan",
"Xiaozhong Xu",
"Shan Liu",
"Zhenzhong Chen"
] |
Learned Image Compression (LIC) has recently become the trending technique
for image transmission due to its notable performance. Despite its popularity,
the robustness of LIC with respect to the quality of image reconstruction
remains under-explored. In this paper, we introduce an imperceptible attack
approach designed to effectively degrade the reconstruction quality of LIC,
resulting in the reconstructed image being so severely disrupted by noise that
recognizing any object in it is virtually impossible. More
specifically, we generate adversarial examples by introducing a Frobenius
norm-based loss function to maximize the discrepancy between original images
and reconstructed adversarial examples. Further, leveraging the insensitivity
of high-frequency components to human vision, we introduce Imperceptibility
Constraint (IC) to ensure that the perturbations remain inconspicuous.
Experiments conducted on the Kodak dataset using various LIC models demonstrate
the effectiveness of our approach. In addition, we provide several findings and suggestions for
designing future defenses.
|
[
"cs.CV",
"cs.AI"
] | false |
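A minimal sketch of the attack objective above: gradient ascent on the Frobenius-norm discrepancy between the original image and the codec's reconstruction of a perturbed input. An l_inf budget stands in for the paper's Imperceptibility Constraint, which instead shapes perturbations toward high frequencies; step sizes and budget are assumptions.

```python
import torch

def attack_lic(codec, image, steps=10, eps=8 / 255, alpha=1 / 255):
    """PGD-style attack maximizing ||codec(x + delta) - x||_F.

    `codec` is any differentiable image-to-image compression model. The
    l_inf ball plays the role of an imperceptibility constraint here.
    """
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        recon = codec(image + delta)
        loss = torch.linalg.matrix_norm(recon - image).sum()  # Frobenius norm
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()   # ascend on the distortion
            delta.clamp_(-eps, eps)              # stay within the budget
            delta.grad.zero_()
    return (image + delta).clamp(0, 1).detach()

# Toy usage: a blur-like linear "codec" standing in for a trained LIC model.
codec = torch.nn.Conv2d(3, 3, 5, padding=2, bias=False)
x = torch.rand(1, 3, 32, 32)
x_adv = attack_lic(codec, x)
print((x_adv - x).abs().max())  # perturbation stays within the budget
```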
2306.01141
|
2023-06-01T20:48:04Z
|
Privacy-Preserving Remote Heart Rate Estimation from Facial Videos
|
[
"Divij Gupta",
"Ali Etemad"
] |
Remote Photoplethysmography (rPPG) is the process of estimating PPG from
facial videos. While this approach benefits from contactless interaction, it is
reliant on videos of faces, which often constitutes an important privacy
concern. Recent research has revealed that deep learning techniques are
vulnerable to attacks, which can result in significant data breaches making
deep rPPG estimation even more sensitive. To address this issue, we propose a
data perturbation method that involves extraction of certain areas of the face
with less identity-related information, followed by pixel shuffling and
blurring. Our experiments on two rPPG datasets (PURE and UBFC) show that our
approach reduces the accuracy of facial recognition algorithms by over 60%,
with minimal impact on rPPG extraction. We also test our method on three facial
recognition datasets (LFW, CALFW, and AgeDB), where our approach reduced
performance by nearly 50%. Our findings demonstrate the potential of our
approach as an effective privacy-preserving solution for rPPG estimation.
|
[
"cs.CV",
"eess.IV"
] | false |
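A small sketch of the perturbation pipeline above, assuming a pre-cropped facial region: pixels are shuffled inside fixed blocks and the result is blurred. Shuffling destroys identity-bearing structure while roughly preserving local average intensity, which carries the pulse signal; block size, blur strength, and region selection are illustrative choices.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def perturb_face_region(region, block=8, sigma=1.0, rng=None):
    """Privacy perturbation: per-block pixel shuffling followed by blurring."""
    rng = rng if rng is not None else np.random.default_rng()
    out = region.copy()
    h, w = region.shape[:2]
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            # Shuffle all pixels within the block (exactly preserves its mean).
            patch = out[i:i + block, j:j + block].reshape(-1, region.shape[-1])
            out[i:i + block, j:j + block] = rng.permutation(patch).reshape(
                block, block, -1)
    # Blur spatially, leaving the channel axis untouched.
    return gaussian_filter(out, sigma=(sigma, sigma, 0))

rng = np.random.default_rng(0)
face = rng.random((64, 64, 3))
private = perturb_face_region(face, rng=rng)
# Mean intensity (the pulse-bearing signal) is nearly unchanged.
print(abs(face.mean() - private.mean()))
```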
2306.01148
|
2023-06-01T21:03:06Z
|
Addressing Discrepancies in Semantic and Visual Alignment in Neural
Networks
|
[
"Natalie Abreu",
"Nathan Vaska",
"Victoria Helus"
] |
For the task of image classification, neural networks primarily rely on
visual patterns. In robust networks, we would expect visually similar
classes to be represented similarly. We consider the problem of when
semantically similar classes are visually dissimilar, and when visual
similarity is present among non-similar classes. We propose a data augmentation
technique with the goal of better aligning semantically similar classes with
arbitrary (non-visual) semantic relationships. We leverage recent work in
diffusion-based semantic mixing to generate semantic hybrids of two classes,
and these hybrids are added to the training set as augmented data. We evaluate
whether the method increases semantic alignment by evaluating model performance
on adversarially perturbed data, with the idea that it should be easier for an
adversary to switch one class to a similarly represented class. Results
demonstrate that there is an increase in alignment of semantically similar
classes when using our proposed data augmentation method.
|
[
"cs.CV",
"cs.LG"
] | false |
2306.01190
|
2023-06-01T23:06:14Z
|
Identifying Visible Tissue in Intraoperative Ultrasound Images during
Brain Surgery: A Method and Application
|
[
"Alistair Weld",
"Luke Dixon",
"Giulio Anichini",
"Michael Dyck",
"Alex Ranne",
"Sophie Camp",
"Stamatia Giannarou"
] |
Intraoperative ultrasound scanning is a demanding visuotactile task. It
requires operators to simultaneously localise the ultrasound perspective and
manually perform slight adjustments to the pose of the probe, making sure not
to apply excessive force or break contact with the tissue, whilst also
characterising the visible tissue. In this paper, we propose a method for the
identification of the visible tissue, which enables the analysis of ultrasound
probe and tissue contact via the detection of acoustic shadow and construction
of confidence maps of the perceptual salience. Detailed validation with both in
vivo and phantom data is performed. First, we show that our technique is
capable of achieving state-of-the-art acoustic shadow scan line classification
- with an average binary classification accuracy on unseen data of 0.87.
Second, we show that our framework for constructing confidence maps is able to
produce an ideal response to a probe's pose that is being oriented in and out
of optimality - achieving an average RMSE across five scans of 0.174. The
performance evaluation justifies the potential clinical value of the method
which can be used both to assist clinical training and optimise robot-assisted
ultrasound tissue scanning.
|
[
"eess.IV",
"cs.CV"
] | false |
2309.15850
|
2023-06-01T15:14:58Z
|
Reflection Invariance Learning for Few-shot Semantic Segmentation
|
[
"Qinglong Cao",
"Yuntian Chen",
"Chao Ma",
"Xiaokang Yang"
] |
Few-shot semantic segmentation (FSS) aims to segment objects of unseen
classes in query images with only a few annotated support images. Existing FSS
algorithms typically focus on mining category representations from the
single-view support to match semantic objects of the single-view query.
However, the limited annotated samples make single-view matching struggle
to perceive the reflection invariance of novel objects, which results in a
restricted learning space for novel categories and further induces a biased
segmentation with demoted parsing performance. To address this challenge, this
paper proposes a fresh few-shot segmentation framework to mine the reflection
invariance in a multi-view matching manner. Specifically, original and
reflection support features from different perspectives with the same semantics
are learnably fused to obtain the reflection invariance prototype with a
stronger category representation ability. Simultaneously, aiming at providing
better prior guidance, the Reflection Invariance Prior Mask Generation (RIPMG)
module is proposed to integrate prior knowledge from different perspectives.
Finally, segmentation predictions from varying views are complementarily merged
in the Reflection Invariance Semantic Prediction (RISP) module to yield precise
segmentation predictions. Extensive experiments on both PASCAL-$5^\textit{i}$
and COCO-$20^\textit{i}$ datasets demonstrate the effectiveness of our approach
and show that our method could achieve state-of-the-art performance. Code is
available at \url{https://anonymous.4open.science/r/RILFS-A4D1}
|
[
"cs.CV",
"eess.IV"
] | false |
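A toy sketch of the multi-view matching idea above: features are extracted from the support image and its horizontal reflection, fused into a single prototype with a learnable weight, and matched to query features by cosine similarity. The fusion rule and matching head are simplifications of the paper's RIPMG/RISP modules.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReflectionInvariantProto(nn.Module):
    """Fuse original and reflected support features into one prototype."""
    def __init__(self, encoder):
        super().__init__()
        self.encoder = encoder
        self.alpha = nn.Parameter(torch.tensor(0.5))  # learnable fusion weight

    def forward(self, support, support_mask, query):
        f_orig = self.encoder(support)
        # Encode the horizontally flipped support, then flip features back.
        f_refl = torch.flip(self.encoder(torch.flip(support, dims=[-1])), dims=[-1])
        fused = self.alpha * f_orig + (1 - self.alpha) * f_refl   # (B, C, H, W)
        mask = F.interpolate(support_mask, size=fused.shape[-2:], mode="nearest")
        # Masked average pooling gives the foreground prototype.
        proto = (fused * mask).sum(dim=(2, 3)) / mask.sum(dim=(2, 3)).clamp(min=1)
        fq = self.encoder(query)
        # Cosine similarity between query features and the fused prototype.
        return F.cosine_similarity(fq, proto[..., None, None], dim=1)

model = ReflectionInvariantProto(nn.Conv2d(3, 16, 3, padding=1))
sim = model(torch.randn(1, 3, 32, 32), torch.ones(1, 1, 32, 32),
            torch.randn(1, 3, 32, 32))
print(sim.shape)  # (1, 32, 32) foreground similarity map
```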
2306.00299
|
2023-06-01T02:38:18Z
|
Robust Estimation of Surface Curvature Information from Point Cloud Data
|
[
"Jared Spang"
] |
This paper surveys and evaluates some popular state-of-the-art methods for
algorithmic curvature and normal estimation. In addition to surveying existing
methods, we also propose a new method for robust curvature estimation and
evaluate it against them, demonstrating its superiority in the case of
significant data noise. Throughout this paper
we are concerned with computation in low dimensional spaces (N < 10) and
primarily focus on the computation of the Weingarten map and quantities that
may be derived from this; however, the algorithms discussed are theoretically
applicable in any dimension. One thing that is common to all these methods is
their basis in an estimated graph structure. For any of these methods to work
the local geometry of the manifold must be exploited; however, in the case of
point cloud data it is often difficult to discover a robust manifold structure
underlying the data, even in simple cases, which can greatly influence the
results of these algorithms. We hope that in pushing these algorithms to their
limits we are able to discover, and perhaps resolve, many major pitfalls that
may affect potential users and future researchers hoping to improve these
methods.
|
[
"cs.CG",
"cs.CV",
"math.DG"
] | false |
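A self-contained sketch of the basic neighborhood-based recipe the survey covers (not the proposed robust estimator): PCA gives a tangent frame at a point, a quadric is fit to the neighbors in that frame, and its Hessian at the origin gives the Weingarten map, whose eigenvalues are the principal curvatures.

```python
import numpy as np

def weingarten_at_point(points, idx, k=20):
    """Estimate the Weingarten map at points[idx] from its k nearest neighbors.

    Standard local-fitting recipe: PCA gives a tangent frame, a quadric
    h(u, v) = a u^2 + b u v + c v^2 (+ linear terms) is fit in that frame,
    and its Hessian at the origin gives the shape operator.
    """
    p = points[idx]
    d = np.linalg.norm(points - p, axis=1)
    nbrs = points[np.argsort(d)[:k]] - p
    # Tangent plane from PCA: the last right singular vector approximates
    # the surface normal.
    _, _, vt = np.linalg.svd(nbrs - nbrs.mean(axis=0))
    uv, h = nbrs @ vt[:2].T, nbrs @ vt[2]
    # Least-squares quadric fit h(u, v) ~ a u^2 + b uv + c v^2 + d u + e v.
    u, v = uv[:, 0], uv[:, 1]
    A = np.stack([u**2, u * v, v**2, u, v], axis=1)
    a, b, c, _, _ = np.linalg.lstsq(A, h, rcond=None)[0]
    # At the origin the Weingarten map equals the quadric's Hessian.
    return np.array([[2 * a, b], [b, 2 * c]])

# Sanity check on a unit sphere: principal curvatures should be ~1 in magnitude.
rng = np.random.default_rng(0)
pts = rng.normal(size=(2000, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
W = weingarten_at_point(pts, idx=0)
print(np.linalg.eigvalsh(W))
```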
2306.00370
|
2023-06-01T06:04:45Z
|
Graph Switching Dynamical Systems
|
[
"Yongtuo Liu",
"Sara Magliacane",
"Miltiadis Kofinas",
"Efstratios Gavves"
] |
Dynamical systems with complex behaviours, e.g. immune system cells
interacting with a pathogen, are commonly modelled by splitting the behaviour
into different regimes, or modes, each with simpler dynamics, and then learning
the switching behaviour from one mode to another. Switching Dynamical Systems
(SDS) are a powerful tool that automatically discovers these modes and
mode-switching behaviour from time series data. While effective, these methods
focus on independent objects, where the modes of one object are independent of
the modes of the other objects. In this paper, we focus on the more general
interacting object setting for switching dynamical systems, where the
per-object dynamics also depends on an unknown and dynamically changing subset
of other objects and their modes. To this end, we propose a novel graph-based
approach for switching dynamical systems, GRAph Switching dynamical Systems
(GRASS), in which we use a dynamic graph to characterize interactions between
objects and learn both intra-object and inter-object mode-switching behaviour.
We introduce two new datasets for this setting, a synthesized ODE-driven
particles dataset and a real-world Salsa Couple Dancing dataset. Experiments
show that GRASS consistently outperforms previous state-of-the-art methods.
|
[
"cs.CV",
"cs.LG",
"cs.MA",
"math.DS"
] | false |
2306.00416
|
2023-06-01T07:48:34Z
|
Controllable Motion Diffusion Model
|
[
"Yi Shi",
"Jingbo Wang",
"Xuekun Jiang",
"Bo Dai"
] |
Generating realistic and controllable motions for virtual characters is a
challenging task in computer animation, and its implications extend to games,
simulations, and virtual reality. Recent studies have drawn inspiration from
the success of diffusion models in image generation, demonstrating the
potential for addressing this task. However, the majority of these studies have
been limited to offline applications that target sequence-level generation,
producing all steps simultaneously. To enable real-time motion synthesis
with diffusion models in response to time-varying control signals, we propose
the framework of the Controllable Motion Diffusion Model (COMODO). Our
framework begins with an auto-regressive motion diffusion model (A-MDM), which
generates motion sequences step by step. In this way, simply using the standard
DDPM algorithm without any additional complexity, our framework is able to
generate high-fidelity motion sequences over extended periods with different
types of control signals. Then, we propose our reinforcement learning-based
controller and controlling strategies on top of the A-MDM model, so that our
framework can steer the motion synthesis process across multiple tasks,
including target reaching, joystick-based control, goal-oriented control, and
trajectory following. The proposed framework enables the real-time generation
of diverse motions that react adaptively to user commands on-the-fly, thereby
enhancing the overall user experience. Besides, it is compatible with the
inpainting-based editing methods and can predict much more diverse motions
without additional fine-tuning of the basic motion generation models. We
conduct comprehensive experiments to evaluate the effectiveness of our
framework in performing various tasks and compare its performance against
state-of-the-art methods.
|
[
"cs.CV",
"cs.AI",
"cs.GR"
] | false |
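A schematic sketch of the auto-regressive rollout pattern described above: for each frame, a small diffusion model denoises the next pose conditioned on recent history and a time-varying control signal, and the history window slides forward. The denoiser, noise schedule, and conditioning are all illustrative stand-ins.

```python
import torch
import torch.nn as nn

class ToyPoseDenoiser(nn.Module):
    """Predicts the clean next pose from (noisy pose, history, control, t)."""
    def __init__(self, pose_dim=8, hist=4, ctrl_dim=2, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(pose_dim * (hist + 1) + ctrl_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, pose_dim))

    def forward(self, noisy, history, control, t):
        inp = torch.cat([noisy, history.flatten(1), control, t], dim=1)
        return self.net(inp)

@torch.no_grad()
def rollout(denoiser, history, controls, diffusion_steps=8):
    """Generate one pose per control signal, step by step (A-MDM style)."""
    frames = []
    for ctrl in controls:                      # time-varying control signal
        x = torch.randn_like(history[:, -1])   # start the next pose from noise
        for s in reversed(range(diffusion_steps)):
            t = torch.full((x.shape[0], 1), s / diffusion_steps)
            # Crude DDPM-like update: move toward the predicted clean pose.
            x0 = denoiser(x, history, ctrl, t)
            x = x0 + (s / diffusion_steps) * torch.randn_like(x) * 0.1
        frames.append(x)
        history = torch.cat([history[:, 1:], x[:, None]], dim=1)  # slide window
    return torch.stack(frames, dim=1)

den = ToyPoseDenoiser()
hist = torch.zeros(1, 4, 8)
ctrls = [torch.randn(1, 2) for _ in range(5)]   # e.g. joystick inputs per frame
print(rollout(den, hist, ctrls).shape)          # (1, 5, 8) motion sequence
```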
2306.00424
|
2023-06-01T08:04:12Z
|
End-to-end Knowledge Retrieval with Multi-modal Queries
|
[
"Man Luo",
"Zhiyuan Fang",
"Tejas Gokhale",
"Yezhou Yang",
"Chitta Baral"
] |
We investigate knowledge retrieval with multi-modal queries, i.e. queries
containing information split across image and text inputs, a challenging task
that differs from previous work on cross-modal retrieval. We curate a new
dataset called ReMuQ for benchmarking progress on this task. ReMuQ requires a
system to retrieve knowledge from a large corpus by integrating contents from
both text and image queries. We introduce a retriever model ``ReViz'' that can
directly process input text and images to retrieve relevant knowledge in an
end-to-end fashion without being dependent on intermediate modules such as
object detectors or caption generators. We introduce a new pretraining task
that is effective for learning knowledge retrieval with multimodal queries and
also improves performance on downstream tasks. We demonstrate superior
performance in retrieval on two datasets (ReMuQ and OK-VQA) under zero-shot
settings as well as further improvements when finetuned on these datasets.
|
[
"cs.CL",
"cs.CV",
"cs.IR"
] | false |