arxiv_id | published | titles | authors | abstract | categories | selected
---|---|---|---|---|---|---|
2305.10559
|
2023-05-17T20:33:51Z
|
Short-Term Electricity Load Forecasting Using the Temporal Fusion
Transformer: Effect of Grid Hierarchies and Data Sources
|
[
"Elena Giacomazzi",
"Felix Haag",
"Konstantin Hopf"
] |
Recent developments related to the energy transition pose particular
challenges for distribution grids. Hence, precise load forecasts become more
and more important for effective grid management. Novel modeling approaches
such as the Transformer architecture, in particular the Temporal Fusion
Transformer (TFT), have emerged as promising methods for time series
forecasting. To date, just a handful of studies apply TFTs to electricity load
forecasting problems, mostly considering only single datasets and a few
covariates. Therefore, we examine the potential of the TFT architecture for
hourly short-term load forecasting across different time horizons (day-ahead
and week-ahead) and network levels (grid and substation level). We find that
the TFT architecture does not offer higher predictive performance than a
state-of-the-art LSTM model for day-ahead forecasting on the entire grid.
However, the results display significant improvements for the TFT when applied
at the substation level with a subsequent aggregation to the upper grid-level,
resulting in a prediction error of 2.43% (MAPE) for the best-performing
scenario. In addition, the TFT appears to offer remarkable improvements over
the LSTM approach for week-ahead forecasting (yielding a predictive error of
2.52% (MAPE) at the lowest). We outline avenues for future research using the
TFT approach for load forecasting, including the exploration of various grid
levels (e.g., grid, substation, and household level).
|
[
"cs.LG"
] | false |
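The MAPE figures quoted in the abstract above (2.43% and 2.52%) follow the usual mean-absolute-percentage-error definition; below is a minimal sketch of that metric with hypothetical hourly load values, not data from the paper.

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean Absolute Percentage Error, in percent."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100)

# Hypothetical hourly loads (MW) for a short day-ahead horizon.
actual = np.array([410.0, 395.0, 388.0, 402.0])
forecast = np.array([400.0, 401.0, 380.0, 410.0])
print(f"MAPE: {mape(actual, forecast):.2f}%")
```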
2305.10611
|
2023-05-17T23:43:55Z
|
ACRoBat: Optimizing Auto-batching of Dynamic Deep Learning at Compile
Time
|
[
"Pratik Fegade",
"Tianqi Chen",
"Phillip B. Gibbons",
"Todd C. Mowry"
] |
Dynamic control flow is an important technique often used to design
expressive and efficient deep learning computations for applications such as
text parsing, machine translation, exiting early out of deep models and so on.
However, the resulting control flow divergence makes batching, an important
performance optimization, difficult to perform manually. In this paper, we
present ACRoBat, a framework that enables efficient automatic batching for
dynamic deep learning computations by performing hybrid static+dynamic compiler
optimizations and end-to-end tensor code generation. ACRoBat performs up to
8.5X better than DyNet, a state-of-the-art framework for automatic batching, on
an Nvidia GeForce RTX 3070 GPU.
|
[
"cs.LG"
] | false |
2305.09887
|
2023-05-17T01:49:44Z
|
Simplifying Distributed Neural Network Training on Massive Graphs:
Randomized Partitions Improve Model Aggregation
|
[
"Jiong Zhu",
"Aishwarya Reganti",
"Edward Huang",
"Charles Dickens",
"Nikhil Rao",
"Karthik Subbian",
"Danai Koutra"
] |
Distributed training of GNNs enables learning on massive graphs (e.g., social
and e-commerce networks) that exceed the storage and computational capacity of
a single machine. To reach performance comparable to centralized training,
distributed frameworks focus on maximally recovering cross-instance node
dependencies with either communication across instances or periodic fallback to
centralized training, which create overhead and limit the framework
scalability. In this work, we present a simplified framework for distributed
GNN training that does not rely on the aforementioned costly operations, and
has improved scalability, convergence speed and performance over the
state-of-the-art approaches. Specifically, our framework (1) assembles
independent trainers, each of which asynchronously learns a local model on
locally-available parts of the training graph, and (2) only conducts periodic
(time-based) model aggregation to synchronize the local models. Backed by our
theoretical analysis, instead of maximizing the recovery of cross-instance node
dependencies -- which has been considered the key behind closing the
performance gap between model aggregation and centralized training -- our
framework leverages randomized assignment of nodes or super-nodes (i.e.,
collections of original nodes) to partition the training graph such that it
improves data uniformity and minimizes the discrepancy of gradient and loss
function across instances. In our experiments on social and e-commerce networks
with up to 1.3 billion edges, our proposed RandomTMA and SuperTMA approaches --
despite using less training data -- achieve state-of-the-art performance and
2.31x speedup compared to the fastest baseline, and show better robustness to
trainer failures.
|
[
"cs.LG",
"cs.DC"
] | false |
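The periodic, time-based model aggregation described in the abstract above amounts to averaging the parameters of the independent trainers' local models. A minimal sketch of such an aggregation step is shown below, using small PyTorch modules as stand-ins for the local GNNs; this illustrates plain parameter averaging, not the authors' released implementation.

```python
import torch

def aggregate(state_dicts):
    """Average the parameters of several local models (simple model aggregation)."""
    avg = {}
    for name in state_dicts[0]:
        avg[name] = torch.stack([sd[name].float() for sd in state_dicts]).mean(dim=0)
    return avg

# Hypothetical: three trainers, each holding a small local model.
trainers = [torch.nn.Linear(8, 4) for _ in range(3)]
global_state = aggregate([t.state_dict() for t in trainers])
for t in trainers:  # broadcast the aggregated parameters back to every trainer
    t.load_state_dict(global_state)
```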
2305.09896
|
2023-05-17T02:13:18Z
|
Convergence and Privacy of Decentralized Nonconvex Optimization with
Gradient Clipping and Communication Compression
|
[
"Boyue Li",
"Yuejie Chi"
] |
Achieving communication efficiency in decentralized machine learning has been
attracting significant attention, with communication compression recognized as
an effective technique in algorithm design. This paper takes a first step to
understand the role of gradient clipping, a popular strategy in practice, in
decentralized nonconvex optimization with communication compression. We propose
PORTER, which considers two variants of gradient clipping added before or after
taking a mini-batch of stochastic gradients, where the former variant PORTER-DP
allows local differential privacy analysis with additional Gaussian
perturbation, and the latter variant PORTER-GC helps to stabilize training. We
develop a novel analysis framework that establishes their convergence
guarantees without assuming the stringent bounded gradient assumption. To the
best of our knowledge, our work provides the first convergence analysis for
decentralized nonconvex optimization with gradient clipping and communication
compression, highlighting the trade-offs between convergence rate, compression
ratio, network connectivity, and privacy.
|
[
"cs.LG",
"math.OC"
] | false |
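The two clipping variants mentioned above differ mainly in where the clipping operator sits relative to mini-batch averaging. The sketch below illustrates that ordering with hypothetical helper names; it is a schematic reading of the abstract, not the PORTER algorithms themselves.

```python
import numpy as np

def clip(g, tau):
    """Rescale a gradient vector so its norm is at most tau."""
    norm = np.linalg.norm(g)
    return g if norm <= tau else g * (tau / norm)

def clip_before_averaging(per_sample_grads, tau, sigma):
    """Clip each per-sample gradient before averaging, then add Gaussian noise
    (the ordering that supports a local differential privacy analysis)."""
    avg = np.mean([clip(g, tau) for g in per_sample_grads], axis=0)
    return avg + sigma * np.random.randn(*avg.shape)

def clip_after_averaging(per_sample_grads, tau):
    """Clip the averaged mini-batch gradient (the ordering aimed at stabilizing training)."""
    return clip(np.mean(per_sample_grads, axis=0), tau)
```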
2305.09913
|
2023-05-17T02:46:37Z
|
Assessing the Impact of Context Inference Error and Partial
Observability on RL Methods for Just-In-Time Adaptive Interventions
|
[
"Karine Karine",
"Predrag Klasnja",
"Susan A. Murphy",
"Benjamin M. Marlin"
] |
Just-in-Time Adaptive Interventions (JITAIs) are a class of personalized
health interventions developed within the behavioral science community. JITAIs
aim to provide the right type and amount of support by iteratively selecting a
sequence of intervention options from a pre-defined set of components in
response to each individual's time varying state. In this work, we explore the
application of reinforcement learning methods to the problem of learning
intervention option selection policies. We study the effect of context
inference error and partial observability on the ability to learn effective
policies. Our results show that the propagation of uncertainty from context
inferences is critical to improving intervention efficacy as context
uncertainty increases, while policy gradient algorithms can provide remarkable
robustness to partially observed behavioral state information.
|
[
"cs.LG",
"cs.AI"
] | false |
2305.09931
|
2023-05-17T03:28:19Z
|
Mitigating Group Bias in Federated Learning: Beyond Local Fairness
|
[
"Ganghua Wang",
"Ali Payani",
"Myungjin Lee",
"Ramana Kompella"
] |
The issue of group fairness in machine learning models, where certain
sub-populations or groups are favored over others, has been recognized for some
time. While many mitigation strategies have been proposed in centralized
learning, many of these methods are not directly applicable in federated
learning, where data is privately stored on multiple clients. To address this,
many proposals try to mitigate bias at the level of clients before aggregation,
which we call locally fair training. However, the effectiveness of these
approaches is not well understood. In this work, we investigate the theoretical
foundation of locally fair training by studying the relationship between global
model fairness and local model fairness. Additionally, we prove that for a
broad class of fairness metrics, the global model's fairness can be obtained
using only summary statistics from local clients. Based on that, we propose a
globally fair training algorithm that directly minimizes the penalized
empirical loss. Real-data experiments demonstrate the promising performance of
our proposed approach for enhancing fairness while retaining high accuracy
compared to locally fair training methods.
|
[
"cs.LG",
"cs.CY"
] | false |
2305.09947
|
2023-05-17T05:00:47Z
|
Understanding the Initial Condensation of Convolutional Neural Networks
|
[
"Zhangchen Zhou",
"Hanxu Zhou",
"Yuqing Li",
"Zhi-Qin John Xu"
] |
Previous research has shown that fully-connected networks with small
initialization and gradient-based training methods exhibit a phenomenon known
as condensation during training. This phenomenon refers to the input weights of
hidden neurons condensing into isolated orientations during training, revealing
an implicit bias towards simple solutions in the parameter space. However, the
impact of neural network structure on condensation has not been investigated
yet. In this study, we focus on the investigation of convolutional neural
networks (CNNs). Our experiments suggest that when subjected to small
initialization and gradient-based training methods, kernel weights within the
same CNN layer also cluster together during training, demonstrating a
significant degree of condensation. Theoretically, we demonstrate that in a
finite training period, kernels of a two-layer CNN with small initialization
will converge to one or a few directions. This work represents a step towards a
better understanding of the non-linear training behavior exhibited by neural
networks with specialized structures.
|
[
"cs.LG",
"cs.NE"
] | false |
2305.09958
|
2023-05-17T05:35:49Z
|
SIMGA: A Simple and Effective Heterophilous Graph Neural Network with
Efficient Global Aggregation
|
[
"Haoyu Liu",
"Ningyi Liao",
"Siqiang Luo"
] |
Graph neural networks (GNNs) achieve great success in graph learning but
suffer from performance loss under heterophily, i.e., when neighboring nodes
are dissimilar, due to their local and uniform aggregation. Existing attempts
at incorporating global aggregation into heterophilous GNNs usually require
iteratively maintaining and updating full-graph information, which entails
$\mathcal{O}(n^2)$ computation cost for a graph with $n$ nodes, leading
to weak scalability to large graphs. In this paper, we propose SIMGA, a GNN
structure integrating SimRank structural similarity measurement as global
aggregation. The design of SIMGA is simple, yet it leads to promising results
in both efficiency and effectiveness. The simplicity of SIMGA makes it the
first heterophilous GNN model that can achieve a propagation efficiency
near-linear to $n$. We theoretically demonstrate its effectiveness by treating
SimRank as a new interpretation of GNN and prove that the aggregated node
representation matrix has expected grouping effect. The performances of SIMGA
are evaluated with 11 baselines on 12 benchmark datasets, usually achieving
superior accuracy compared with the state-of-the-art models. Efficiency study
reveals that SIMGA is up to 5$\times$ faster than the state-of-the-art method
on the largest heterophily dataset pokec with over 30 million edges.
|
[
"cs.LG",
"cs.SI"
] | false |
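SimRank, the similarity measure SIMGA builds its global aggregation on, can be computed with the classic fixed-point iteration. The dense sketch below shows that baseline computation only; it is quadratic in the number of nodes, unlike the near-linear propagation the paper is about.

```python
import numpy as np

def simrank(adj, c=0.8, iters=20):
    """Classic SimRank fixed-point iteration on a dense adjacency matrix."""
    deg = adj.sum(axis=0)
    W = adj / np.where(deg == 0, 1, deg)  # column-normalized adjacency
    S = np.eye(adj.shape[0])
    for _ in range(iters):
        S = c * (W.T @ S @ W)
        np.fill_diagonal(S, 1.0)  # s(v, v) = 1 by definition
    return S

# Tiny 4-node example graph.
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)
print(simrank(A).round(3))
```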
2305.10014
|
2023-05-17T07:48:54Z
|
A Survey on Multi-Objective based Parameter Optimization for Deep
Learning
|
[
"Mrittika Chakraborty",
"Wreetbhas Pal",
"Sanghamitra Bandyopadhyay",
"Ujjwal Maulik"
] |
Deep learning models form one of the most powerful machine learning models
for the extraction of important features. Most of the designs of deep neural
models, i.e., the initialization of parameters, are still manually tuned.
Hence, obtaining a model with high performance is exceedingly time-consuming
and occasionally impossible. Optimizing the parameters of the deep networks,
therefore, requires improved optimization algorithms with high convergence
rates. The single objective-based optimization methods generally used are
mostly time-consuming and do not guarantee optimum performance in all cases.
Mathematical optimization problems containing multiple objective functions that
must be optimized simultaneously fall under the category of multi-objective
optimization, sometimes referred to as Pareto optimization. Multi-objective
optimization forms one of the alternative yet useful options for
parameter optimization. However, this domain is relatively less explored. In this
survey, we focus on exploring the effectiveness of multi-objective optimization
strategies for parameter optimization in conjunction with deep neural networks.
The case studies used in this study focus on how the two methods are combined
to provide valuable insights into the generation of predictions and analysis in
multiple applications.
|
[
"cs.LG",
"math.OC"
] | false |
2305.10042
|
2023-05-17T08:36:43Z
|
Optimal Weighted Random Forests
|
[
"Xinyu Chen",
"Dalei Yu",
"Xinyu Zhang"
] |
The random forest (RF) algorithm has become a very popular prediction method
for its great flexibility and promising accuracy. In RF, it is conventional to
put equal weights on all the base learners (trees) to aggregate their
predictions. However, the predictive performances of different trees within the
forest can be very different due to the randomization of the embedded bootstrap
sampling and feature selection. In this paper, we focus on RF for regression
and propose two optimal weighting algorithms, namely the 1 Step Optimal
Weighted RF (1step-WRF$_\mathrm{opt}$) and 2 Steps Optimal Weighted RF
(2steps-WRF$_\mathrm{opt}$), that combine the base learners through the weights
determined by weight choice criteria. Under some regularity conditions, we show
that these algorithms are asymptotically optimal in the sense that the
resulting squared loss and risk are asymptotically identical to those of the
infeasible but best possible model averaging estimator. Numerical studies
conducted on real-world data sets indicate that these algorithms outperform the
equal-weight forest and two other weighted RFs proposed in existing literature
in most cases.
|
[
"stat.ML",
"cs.LG"
] | false |
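The contrast between equal-weight and weighted tree aggregation described above can be illustrated with a toy stand-in for the paper's weight-choice criteria: fit per-tree weights on validation data by ordinary least squares instead of averaging all trees equally. This is a sketch only, not the 1step-/2steps-WRF$_\mathrm{opt}$ procedures.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 5))
y = X[:, 0] ** 2 + np.sin(X[:, 1]) + 0.1 * rng.normal(size=400)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

rf = RandomForestRegressor(n_estimators=50, random_state=0).fit(X_tr, y_tr)
P = np.column_stack([t.predict(X_val) for t in rf.estimators_])  # per-tree predictions

w_equal = np.full(P.shape[1], 1.0 / P.shape[1])       # conventional RF weights
w_ls, *_ = np.linalg.lstsq(P, y_val, rcond=None)      # toy weight-choice criterion

for name, w in [("equal", w_equal), ("weighted", w_ls)]:
    print(f"{name:8s} validation MSE: {np.mean((P @ w - y_val) ** 2):.4f}")
```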
2305.10059
|
2023-05-17T08:55:53Z
|
A hybrid feature learning approach based on convolutional kernels for
ATM fault prediction using event-log data
|
[
"Víctor Manuel Vargas",
"Riccardo Rosati",
"César Hervás-Martínez",
"Adriano Mancini",
"Luca Romeo",
"Pedro Antonio Gutiérrez"
] |
Predictive Maintenance (PdM) methods aim to facilitate the scheduling of
maintenance work before equipment failure. In this context, detecting early
faults in automated teller machines (ATMs) has become increasingly important
since these machines are susceptible to various types of unpredictable
failures. ATMs track execution status by generating massive event-log data that
collect system messages unrelated to the failure event. Predicting machine
failure based on event logs poses additional challenges, mainly in extracting
features that might represent sequences of events indicating impending
failures. Accordingly, feature learning approaches are currently being used in
PdM, where informative features are learned automatically from minimally
processed sensor data. However, a gap remains in understanding how these
approaches can be exploited for deriving relevant features from event-log-based
data. To fill this gap, we present a predictive model based on a convolutional
kernel (MiniROCKET and HYDRA) to extract features from the original event-log
data and a linear classifier to classify the sample based on the learned
features. The proposed methodology is applied to a significant real-world
collected dataset. Experimental results demonstrated how one of the proposed
convolutional kernels (i.e. HYDRA) exhibited the best classification
performance (accuracy of 0.759 and AUC of 0.693). In addition, statistical
analysis revealed that the HYDRA and MiniROCKET models significantly outperform
one of the established state-of-the-art approaches in time series
classification (InceptionTime), and three non-temporal ML methods from the
literature. The predictive model was integrated into a container-based decision
support system to support operators in the timely maintenance of ATMs.
|
[
"cs.LG",
"cs.AI",
"I.2.1; I.5.4"
] | false |
2305.10113
|
2023-05-17T10:29:02Z
|
Neuro-Symbolic AI for Compliance Checking of Electrical Control Panels
|
[
"Vito Barbara",
"Massimo Guarascio",
"Nicola Leone",
"Giuseppe Manco",
"Alessandro Quarta",
"Francesco Ricca",
"Ettore Ritacco"
] |
Artificial Intelligence plays a main role in supporting and improving smart
manufacturing and Industry 4.0, by enabling the automation of different types
of tasks manually performed by domain experts. In particular, assessing the
compliance of a product with the corresponding schematic is a time-consuming and
error-prone process. In this paper, we address this problem in a specific
industrial scenario. In particular, we define a Neuro-Symbolic approach for
automating the compliance verification of the electrical control panels. Our
approach is based on the combination of Deep Learning techniques with Answer
Set Programming (ASP), and allows for identifying possible anomalies and errors
in the final product even when a very limited amount of training data is
available. The experiments conducted on a real test case provided by an Italian
Company operating in electrical control panel production demonstrate the
effectiveness of the proposed approach.
|
[
"cs.AI",
"cs.LG"
] | false |
2305.10158
|
2023-05-17T12:19:59Z
|
A Global-Local Approximation Framework for Large-Scale Gaussian Process
Modeling
|
[
"Akhil Vakayil",
"Roshan Joseph"
] |
In this work, we propose a novel framework for large-scale Gaussian process
(GP) modeling. Contrary to the global and local approximations proposed in the
literature to address the computational bottleneck with exact GP modeling, we
employ a combined global-local approach in building the approximation. Our
framework uses a subset-of-data approach where the subset is a union of a set
of global points designed to capture the global trend in the data, and a set of
local points specific to a given testing location to capture the local trend
around the testing location. The correlation function is also modeled as a
combination of a global and a local kernel. The performance of our framework,
which we refer to as TwinGP, is on par or better than the state-of-the-art GP
modeling methods at a fraction of their computational cost.
|
[
"stat.ML",
"cs.LG"
] | false |
2305.10203
|
2023-05-17T13:25:57Z
|
Exploring the Space of Key-Value-Query Models with Intention
|
[
"Marta Garnelo",
"Wojciech Marian Czarnecki"
] |
Attention-based models have been a key element of many recent breakthroughs
in deep learning. Two key components of Attention are the structure of its
input (which consists of keys, values and queries) and the computations by
which these three are combined. In this paper we explore the space of models
that share said input structure but are not restricted to the computations of
Attention. We refer to this space as Keys-Values-Queries (KVQ) Space. Our goal
is to determine whether there are any other stackable models in KVQ Space that
Attention cannot efficiently approximate, which we can implement with our
current deep learning toolbox and that solve problems that are interesting to
the community. Maybe surprisingly, the solution to the standard least squares
problem satisfies these properties. A neural network module that is able to
compute this solution not only enriches the set of computations that a neural
network can represent but is also provably a strict generalisation of Linear
Attention. Even more surprisingly the computational complexity of this module
is exactly the same as that of Attention, making it a suitable drop-in
replacement. With this novel connection between classical machine learning
(least squares) and modern deep learning (Attention) established, we justify a
variation of our model which generalises regular Attention in the same way.
Both new modules are put to the test on a wide spectrum of tasks ranging from
few-shot learning to policy distillation, confirming their real-world
applicability.
|
[
"cs.LG",
"cs.NE"
] | false |
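The least-squares module described in the abstract above admits a compact closed form. The sketch below contrasts it with unnormalized linear attention using the pseudo-inverse; this is one reading of the abstract for illustration, not the authors' exact module.

```python
import numpy as np

def linear_attention(Q, K, V):
    """Unnormalized linear attention: queries mix values through plain dot products."""
    return Q @ K.T @ V

def least_squares_module(Q, K, V):
    """Apply to the queries the map W* = argmin_W ||K W - V||_F^2, i.e. Q @ pinv(K) @ V."""
    return Q @ (np.linalg.pinv(K) @ V)

rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))   # queries
K = rng.normal(size=(6, 4))   # keys
V = rng.normal(size=(6, 5))   # values
print(linear_attention(Q, K, V).shape, least_squares_module(Q, K, V).shape)
```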
2305.10298
|
2023-05-17T15:35:31Z
|
Estimation of Remaining Useful Life and SOH of Lithium Ion Batteries
(For EV Vehicles)
|
[
"Ganesh Kumar"
] |
Lithium-ion batteries are widely used in various applications, including
portable electronic devices, electric vehicles, and renewable energy storage
systems. Accurately estimating the remaining useful life of these batteries is
crucial for ensuring their optimal performance, preventing unexpected failures,
and reducing maintenance costs. In this paper, we present a comprehensive
review of the existing approaches for estimating the remaining useful life of
lithium-ion batteries, including data-driven methods, physics-based models, and
hybrid approaches. We also propose a novel approach based on machine learning
techniques for accurately predicting the remaining useful life of lithium-ion
batteries. Our approach utilizes various battery performance parameters,
including voltage, current, and temperature, to train a predictive model that
can accurately estimate the remaining useful life of the battery. We evaluate
the performance of our approach on a dataset of lithium-ion battery cycles and
compare it with other state-of-the-art methods. The results demonstrate the
effectiveness of our proposed approach in accurately estimating the remaining
useful life of lithium-ion batteries.
|
[
"cs.LG",
"cs.AI"
] | false |
2305.10399
|
2023-05-17T17:43:10Z
|
End-To-End Latent Variational Diffusion Models for Inverse Problems in
High Energy Physics
|
[
"Alexander Shmakov",
"Kevin Greif",
"Michael Fenton",
"Aishik Ghosh",
"Pierre Baldi",
"Daniel Whiteson"
] |
High-energy collisions at the Large Hadron Collider (LHC) provide valuable
insights into open questions in particle physics. However, detector effects
must be corrected before measurements can be compared to certain theoretical
predictions or measurements from other detectors. Methods to solve this
\textit{inverse problem} of mapping detector observations to theoretical
quantities of the underlying collision are essential parts of many physics
analyses at the LHC. We investigate and compare various generative deep
learning methods to approximate this inverse mapping. We introduce a novel
unified architecture, termed latent variational diffusion models, which combines
the latent learning of cutting-edge generative art approaches with an
end-to-end variational framework. We demonstrate the effectiveness of this
approach for reconstructing global distributions of theoretical kinematic
quantities, as well as for ensuring the adherence of the learned posterior
distributions to known physics constraints. Our unified approach achieves a
distribution-free distance to the truth over 20 times smaller than that of a
non-latent state-of-the-art baseline and 3 times smaller than that of
traditional latent diffusion models.
|
[
"hep-ex",
"cs.LG"
] | false |
2305.10411
|
2023-05-17T17:48:24Z
|
Wasserstein Gradient Flows for Optimizing Gaussian Mixture Policies
|
[
"Hanna Ziesche",
"Leonel Rozo"
] |
Robots often rely on a repertoire of previously-learned motion policies for
performing tasks of diverse complexities. When facing unseen task conditions or
when new task requirements arise, robots must adapt their motion policies
accordingly. In this context, policy optimization is the \emph{de facto}
paradigm to adapt robot policies as a function of task-specific objectives.
Most commonly-used motion policies carry particular structures that are often
overlooked in policy optimization algorithms. We instead propose to leverage
the structure of probabilistic policies by casting the policy optimization as
an optimal transport problem. Specifically, we focus on robot motion policies
that build on Gaussian mixture models (GMMs) and formulate the policy
optimization as a Wasserstein gradient flow over the GMM space. This naturally
allows us to constrain the policy updates via the $L^2$-Wasserstein distance
between GMMs to enhance the stability of the policy optimization process.
Furthermore, we leverage the geometry of the Bures-Wasserstein manifold to
optimize the Gaussian distributions of the GMM policy via Riemannian
optimization. We evaluate our approach on common robotic settings: Reaching
motions, collision-avoidance behaviors, and multi-goal tasks. Our results show
that our method outperforms common policy optimization baselines in terms of
task success rate and low-variance solutions.
|
[
"cs.LG",
"cs.RO"
] | false |
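The Bures-Wasserstein geometry mentioned above rests on the closed-form 2-Wasserstein distance between Gaussian components. The snippet below computes that building block numerically; it is not the policy-optimization algorithm itself.

```python
import numpy as np
from scipy.linalg import sqrtm

def gaussian_w2_squared(mu1, cov1, mu2, cov2):
    """Squared 2-Wasserstein (Bures-Wasserstein) distance between two Gaussians."""
    root_cov2 = sqrtm(cov2)
    cross = np.real(sqrtm(root_cov2 @ cov1 @ root_cov2))
    return float(np.sum((mu1 - mu2) ** 2) + np.trace(cov1 + cov2 - 2 * cross))

mu1, cov1 = np.zeros(2), np.eye(2)
mu2, cov2 = np.array([1.0, 0.0]), np.diag([2.0, 0.5])
print(gaussian_w2_squared(mu1, cov1, mu2, cov2))
```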
2305.10467
|
2023-05-17T13:40:55Z
|
Analysing Biomedical Knowledge Graphs using Prime Adjacency Matrices
|
[
"Konstantinos Bougiatiotis",
"Georgios Paliouras"
] |
Most phenomena related to biomedical tasks are inherently complex, and in
many cases, are expressed as signals on biomedical Knowledge Graphs (KGs). In
this work, we introduce the use of a new representation framework, the Prime
Adjacency Matrix (PAM) for biomedical KGs, which allows for very efficient
network analysis. PAM utilizes prime numbers to enable representing the whole
KG with a single adjacency matrix and the fast computation of multiple
properties of the network. We illustrate the applicability of the framework in
the biomedical domain by working on different biomedical knowledge graphs and
by providing two case studies: one on drug-repurposing for COVID-19 and one on
important metapath extraction. We show that we achieve better results than the
original proposed workflows, using very simple methods that require no
training, in considerably less time.
|
[
"q-bio.QM",
"cs.LG"
] | false |
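The core construction above, as far as the abstract describes it, assigns a distinct prime to each relation type so that one adjacency matrix can carry the whole multi-relational graph. A hypothetical toy sketch of that encoding follows; the triples and relation names are invented for illustration.

```python
import numpy as np

# Hypothetical toy biomedical KG: (head, relation, tail) triples.
triples = [(0, "treats", 1), (1, "interacts_with", 2), (0, "binds", 2)]
primes = [2, 3, 5, 7, 11]
prime_of = {r: primes[i] for i, r in enumerate(sorted({r for _, r, _ in triples}))}

n = 3
pam = np.zeros((n, n))
for h, r, t in triples:
    # A single matrix carries all relation types: an edge's entry is the product
    # of the primes of its relations, recoverable later by factorization.
    pam[h, t] = prime_of[r] if pam[h, t] == 0 else pam[h, t] * prime_of[r]

print(prime_of)
print(pam)
```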
2305.10504
|
2023-05-17T18:19:23Z
|
Model-Free Robust Average-Reward Reinforcement Learning
|
[
"Yue Wang",
"Alvaro Velasquez",
"George Atia",
"Ashley Prater-Bennette",
"Shaofeng Zou"
] |
Robust Markov decision processes (MDPs) address the challenge of model
uncertainty by optimizing the worst-case performance over an uncertainty set of
MDPs. In this paper, we focus on the robust average-reward MDPs under the
model-free setting. We first theoretically characterize the structure of
solutions to the robust average-reward Bellman equation, which is essential for
our later convergence analysis. We then design two model-free algorithms,
robust relative value iteration (RVI) TD and robust RVI Q-learning, and
theoretically prove their convergence to the optimal solution. We provide
several widely used uncertainty sets as examples, including those defined by
the contamination model, total variation, Chi-squared divergence,
Kullback-Leibler (KL) divergence and Wasserstein distance.
|
[
"cs.LG",
"cs.AI"
] | false |
2305.10569
|
2023-05-17T21:08:02Z
|
Self-Supervised Learning for Physiologically-Based Pharmacokinetic
Modeling in Dynamic PET
|
[
"Francesca De Benetti",
"Walter Simson",
"Magdalini Paschali",
"Hasan Sari",
"Axel Romiger",
"Kuangyu Shi",
"Nassir Navab",
"Thomas Wendler"
] |
Dynamic positron emission tomography imaging (dPET) provides temporally
resolved images of a tracer enabling a quantitative measure of physiological
processes. Voxel-wise physiologically-based pharmacokinetic (PBPK) modeling of
the time activity curves (TAC) can provide relevant diagnostic information for
clinical workflow. Conventional fitting strategies for TACs are slow and ignore
the spatial relation between neighboring voxels. We train a spatio-temporal
UNet to estimate the kinetic parameters given TAC from F-18-fluorodeoxyglucose
(FDG) dPET. This work introduces a self-supervised loss formulation to enforce
the similarity between the measured TAC and those generated with the learned
kinetic parameters. Our method provides quantitatively comparable results at
organ-level to the significantly slower conventional approaches, while
generating pixel-wise parametric images which are consistent with expected
physiology. To the best of our knowledge, this is the first self-supervised
network that allows voxel-wise computation of kinetic parameters consistent
with a non-linear kinetic model. The code will become publicly available upon
acceptance.
|
[
"eess.IV",
"cs.LG"
] | false |
2305.11181
|
2023-05-17T00:29:25Z
|
Comparison of Transfer Learning based Additive Manufacturing Models via
A Case Study
|
[
"Yifan Tang",
"M. Rahmani Dehaghani",
"G. Gary Wang"
] |
Transfer learning (TL) based additive manufacturing (AM) modeling is an
emerging field to reuse the data from historical products and mitigate the data
insufficiency in modeling new products. Although some trials have been
conducted recently, the inherent challenges of applying TL in AM modeling are
seldom discussed, e.g., which source domain to use, how much target data is
needed, and whether to apply data preprocessing techniques. This paper aims to
answer those questions through a case study defined based on an open-source
dataset about metal AM products. In the case study, five TL methods are
integrated with decision tree regression (DTR) and artificial neural network
(ANN) to construct six TL-based models, whose performances are then compared
with the baseline DTR and ANN in a proposed validation framework. The
comparisons are used to quantify the performance of applied TL methods and are
discussed from the perspective of similarity, training data size, and data
preprocessing. Finally, the source AM domain with larger qualitative similarity
and a certain range of target-to-source training data size ratio are
recommended. In addition, data preprocessing should be performed carefully to
balance the modeling performance and the performance improvement due to TL.
|
[
"cs.LG",
"cs.CE"
] | false |
2305.11910
|
2023-05-17T21:48:02Z
|
Machine Learning and VIIRS Satellite Retrievals for Skillful Fuel
Moisture Content Monitoring in Wildfire Management
|
[
"John S. Schreck",
"William Petzke",
"Pedro A. Jimenez",
"Thomas Brummet",
"Jason C. Knievel",
"Eric James",
"Branko Kosovic",
"David John Gagne"
] |
Monitoring the fuel moisture content (FMC) of vegetation is crucial for
managing and mitigating the impact of wildland fires. The combination of in
situ FMC observations with numerical weather prediction (NWP) models and
satellite retrievals has enabled the development of machine learning (ML)
models to estimate dead FMC retrievals over the contiguous US (CONUS). In this
study, ML models were trained using variables from the National Water Model and
the High-Resolution Rapid Refresh (HRRR) NWP models, and static variables
characterizing the surface properties, as well as surface reflectances and land
surface temperature (LST) retrievals from the VIIRS instrument on board the
Suomi-NPP satellite system. Extensive hyper-parameter optimization yielded
skillful FMC models compared to a daily climatography RMSE (+44\%) and to an
hourly climatography RMSE (+24\%). Furthermore, VIIRS retrievals were important
predictors for estimating FMC, contributing significantly as a group due to
their high band-correlation. In contrast, individual predictors in the HRRR
group had relatively high importance according to the explainability techniques
used. When both HRRR and VIIRS retrievals were not used as model inputs, the
performance dropped significantly. If VIIRS retrievals were not used, the RMSE
performance was worse. This highlights the importance of VIIRS retrievals in
modeling FMC, which yielded better models compared to MODIS. Overall, the
importance of the VIIRS group of predictors corroborates the dynamic
relationship between the 10-h fuel and the atmosphere and soil moisture. These
findings emphasize the significance of selecting appropriate data sources for
predicting FMC with ML models, with VIIRS retrievals and selected HRRR
variables being critical components in producing skillful FMC estimates.
|
[
"cs.LG",
"physics.ao-ph"
] | false |
2305.16330
|
2023-05-17T13:37:15Z
|
Prompt Engineering for Transformer-based Chemical Similarity Search
Identifies Structurally Distinct Functional Analogues
|
[
"Clayton W. Kosonocky",
"Aaron L. Feller",
"Claus O. Wilke",
"Andrew D. Ellington"
] |
Chemical similarity searches are widely used in-silico methods for
identifying new drug-like molecules. These methods have historically relied on
structure-based comparisons to compute molecular similarity. Here, we use a
chemical language model to create a vector-based chemical search. We extend
implementations by creating a prompt engineering strategy that utilizes two
different chemical string representation algorithms: one for the query and the
other for the database. We explore this method by reviewing the search results
from five drug-like query molecules (penicillin G, nirmatrelvir, zidovudine,
lysergic acid diethylamide, and fentanyl) and three dye-like query molecules
(acid blue 25, avobenzone, and 2-diphenylaminocarbazole). We find that this
novel method identifies molecules that are functionally similar to the query,
indicated by the associated patent literature, and that many of these molecules
are structurally distinct from the query, making them unlikely to be found with
traditional chemical similarity search methods. This method may aid in the
discovery of novel structural classes of molecules that achieve target
functionality.
|
[
"physics.chem-ph",
"cs.LG"
] | false |
2306.07973
|
2023-05-17T20:15:26Z
|
PrivaScissors: Enhance the Privacy of Collaborative Inference through
the Lens of Mutual Information
|
[
"Lin Duan",
"Jingwei Sun",
"Yiran Chen",
"Maria Gorlatova"
] |
Edge-cloud collaborative inference empowers resource-limited IoT devices to
support deep learning applications without disclosing their raw data to the
cloud server, thus preserving privacy. Nevertheless, prior research has shown
that collaborative inference still results in the exposure of data and
predictions from edge devices. To enhance the privacy of collaborative
inference, we introduce a defense strategy called PrivaScissors, which is
designed to reduce the mutual information between a model's intermediate
outcomes and the device's data and predictions. We evaluate PrivaScissors's
performance on several datasets in the context of diverse attacks and offer a
theoretical robustness guarantee.
|
[
"cs.CR",
"cs.LG"
] | false |
2307.11688
|
2023-05-17T20:06:17Z
|
Interpretable Graph Networks Formulate Universal Algebra Conjectures
|
[
"Francesco Giannini",
"Stefano Fioravanti",
"Oguzhan Keskin",
"Alisia Maria Lupidi",
"Lucie Charlotte Magister",
"Pietro Lio",
"Pietro Barbiero"
] |
The rise of Artificial Intelligence (AI) recently empowered researchers to
investigate hard mathematical problems which eluded traditional approaches for
decades. Yet, the use of AI in Universal Algebra (UA) -- one of the fields
laying the foundations of modern mathematics -- is still completely unexplored.
This work proposes the first use of AI to investigate UA's conjectures with an
equivalent equational and topological characterization. While topological
representations would enable the analysis of such properties using graph neural
networks, the limited transparency and brittle explainability of these models
hinder their straightforward use to empirically validate existing conjectures
or to formulate new ones. To bridge these gaps, we propose a general algorithm
generating AI-ready datasets based on UA's conjectures, and introduce a novel
neural layer to build fully interpretable graph networks. The results of our
experiments demonstrate that interpretable graph networks: (i) enhance
interpretability without sacrificing task accuracy, (ii) strongly generalize
when predicting universal algebra's properties, (iii) generate simple
explanations that empirically validate existing conjectures, and (iv) identify
subgraphs suggesting the formulation of novel conjectures.
|
[
"cs.LG",
"cs.AI"
] | false |
2305.09869
|
2023-05-17T00:46:01Z
|
A Signed Subgraph Encoding Approach via Linear Optimization for Link
Sign Prediction
|
[
"Zhihong Fang",
"Shaolin Tan",
"Yaonan Wang"
] |
In this paper, we consider the problem of inferring the sign of a link based
on limited sign data in signed networks. Regarding this link sign prediction
problem, SDGNN (Signed Directed Graph Neural Networks) provides the best
prediction performance currently to the best of our knowledge. In this paper,
we propose a different link sign prediction architecture called SELO (Subgraph
Encoding via Linear Optimization), which obtains overall leading prediction
performance compared to the state-of-the-art algorithm SDGNN. The proposed model
utilizes a subgraph encoding approach to learn edge embeddings for signed
directed networks. In particular, a signed subgraph encoding approach is
introduced to embed each subgraph into a likelihood matrix instead of the
adjacency matrix through a linear optimization method. Comprehensive
experiments are conducted on six real-world signed networks with AUC, F1,
micro-F1, and Macro-F1 as the evaluation metrics. The experiment results show
that the proposed SELO model outperforms existing baseline feature-based
methods and embedding-based methods on all six real-world networks and across
all four evaluation metrics.
|
[
"cs.LG",
"cs.SI",
"stat.ML"
] | false |
2305.09904
|
2023-05-17T02:26:34Z
|
On the ISS Property of the Gradient Flow for Single Hidden-Layer Neural
Networks with Linear Activations
|
[
"Arthur Castello B. de Oliveira",
"Milad Siami",
"Eduardo D. Sontag"
] |
Recent research in neural networks and machine learning suggests that using
many more parameters than strictly required by the initial complexity of a
regression problem can result in more accurate or faster-converging models --
contrary to classical statistical belief. This phenomenon, sometimes known as
``benign overfitting'', raises questions regarding in what other ways might
overparameterization affect the properties of a learning problem. In this work,
we investigate the effects of overfitting on the robustness of gradient-descent
training when subject to uncertainty on the gradient estimation. This
uncertainty arises naturally if the gradient is estimated from noisy data or
directly measured. Our object of study is a linear neural network with a
single, arbitrarily wide, hidden layer and an arbitrary number of inputs and
outputs. In this paper we solve the problem for the case where the input and
output of our neural-network are one-dimensional, deriving sufficient
conditions for robustness of our system based on necessary and sufficient
conditions for convergence in the undisturbed case. We then show that the
general overparametrized formulation introduces a set of spurious equilibria
which lie outside the set where the loss function is minimized, and discuss
directions of future work that might extend our current results for more
general formulations.
|
[
"cs.LG",
"cs.SY",
"eess.SY"
] | false |
2305.09922
|
2023-05-17T03:09:12Z
|
A Genetic Fuzzy System for Interpretable and Parsimonious Reinforcement
Learning Policies
|
[
"Jordan T. Bishop",
"Marcus Gallagher",
"Will N. Browne"
] |
Reinforcement learning (RL) is experiencing a resurgence in research
interest, where Learning Classifier Systems (LCSs) have been applied for many
years. However, traditional Michigan approaches tend to evolve large rule bases
that are difficult to interpret or scale to domains beyond standard mazes. A
Pittsburgh Genetic Fuzzy System (dubbed Fuzzy MoCoCo) is proposed that utilises
both multiobjective and cooperative coevolutionary mechanisms to evolve fuzzy
rule-based policies for RL environments. Multiobjectivity in the system is
concerned with policy performance vs. complexity. The continuous state RL
environment Mountain Car is used as a testing bed for the proposed system.
Results show the system is able to effectively explore the trade-off between
policy performance and complexity, and learn interpretable, high-performing
policies that use as few rules as possible.
|
[
"cs.LG",
"cs.AI",
"cs.NE"
] | false |
2305.09930
|
2023-05-17T03:27:36Z
|
Model-based Validation as Probabilistic Inference
|
[
"Harrison Delecki",
"Anthony Corso",
"Mykel J. Kochenderfer"
] |
Estimating the distribution over failures is a key step in validating
autonomous systems. Existing approaches focus on finding failures for a small
range of initial conditions or make restrictive assumptions about the
properties of the system under test. We frame estimating the distribution over
failure trajectories for sequential systems as Bayesian inference. Our
model-based approach represents the distribution over failure trajectories
using rollouts of system dynamics and computes trajectory gradients using
automatic differentiation. Our approach is demonstrated in an inverted pendulum
control system, an autonomous vehicle driving scenario, and a partially
observable lunar lander. Sampling is performed using an off-the-shelf
implementation of Hamiltonian Monte Carlo with multiple chains to capture
multimodality and gradient smoothing for safe trajectories. In all experiments,
we observed improvements in sample efficiency and parameter space coverage
compared to black-box baseline approaches. This work is open sourced.
|
[
"cs.RO",
"cs.LG",
"stat.ML"
] | false |
2305.09945
|
2023-05-17T04:46:55Z
|
Pittsburgh Learning Classifier Systems for Explainable Reinforcement
Learning: Comparing with XCS
|
[
"Jordan T. Bishop",
"Marcus Gallagher",
"Will N. Browne"
] |
Interest in reinforcement learning (RL) has recently surged due to the
application of deep learning techniques, but these connectionist approaches are
opaque compared with symbolic systems. Learning Classifier Systems (LCSs) are
evolutionary machine learning systems that can be categorised as eXplainable AI
(XAI) due to their rule-based nature. Michigan LCSs are commonly used in RL
domains as the alternative Pittsburgh systems (e.g. SAMUEL) suffer from complex
algorithmic design and high computational requirements; however, they can
produce more compact/interpretable solutions than Michigan systems. We aim to
develop two novel Pittsburgh LCSs to address RL domains: PPL-DL and PPL-ST. The
former acts as a "zeroth-level" system, and the latter revisits SAMUEL's core
Monte Carlo learning mechanism for estimating rule strength. We compare our two
Pittsburgh systems to the Michigan system XCS across deterministic and
stochastic FrozenLake environments. Results show that PPL-ST performs on-par or
better than PPL-DL and outperforms XCS in the presence of high levels of
environmental uncertainty. Rulesets evolved by PPL-ST can achieve higher
performance than those evolved by XCS, but in a more parsimonious and therefore
more interpretable fashion, albeit with higher computational cost. This
indicates that PPL-ST is an LCS well-suited to producing explainable policies
in RL domains.
|
[
"cs.LG",
"cs.AI",
"cs.NE"
] | false |
2305.10033
|
2023-05-17T08:19:57Z
|
SHoP: A Deep Learning Framework for Solving High-order Partial
Differential Equations
|
[
"Tingxiong Xiao",
"Runzhao Yang",
"Yuxiao Cheng",
"Jinli Suo",
"Qionghai Dai"
] |
Solving partial differential equations (PDEs) has been a fundamental problem
in computational science and of wide applications for both scientific and
engineering research. Due to its universal approximation property, neural
network is widely used to approximate the solutions of PDEs. However, existing
works are incapable of solving high-order PDEs due to insufficient calculation
accuracy of higher-order derivatives, and the final network is a black box
without explicit explanation. To address these issues, we propose a deep
learning framework to solve high-order PDEs, named SHoP. Specifically, we
derive the high-order derivative rule for neural networks to obtain the
derivatives quickly and accurately; moreover, we expand the network into a
Taylor series, providing an explicit solution for the PDEs. We conduct
experimental validations on four high-order PDEs with different dimensions,
showing that we can solve high-order PDEs efficiently and accurately.
|
[
"cs.LG",
"cs.NA",
"math.NA"
] | false |
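Accurate higher-order derivatives of the network are central to the framework above. The generic sketch below obtains them by repeated automatic differentiation in PyTorch; it illustrates the kind of quantity needed for a high-order PDE residual, not the paper's derived high-order derivative rule.

```python
import torch

net = torch.nn.Sequential(torch.nn.Linear(1, 16), torch.nn.Tanh(), torch.nn.Linear(16, 1))

x = torch.linspace(-1.0, 1.0, 5).reshape(-1, 1).requires_grad_(True)
derivs = [net(x)]  # u(x)

# Repeatedly differentiate to get u', u'', u''' with respect to x.
for _ in range(3):
    g = torch.autograd.grad(derivs[-1].sum(), x, create_graph=True)[0]
    derivs.append(g)

u_x, u_xx, u_xxx = derivs[1], derivs[2], derivs[3]
print(u_xxx.shape)  # same shape as x; usable inside a high-order PDE residual
```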
2305.10057
|
2023-05-17T08:53:29Z
|
Physics-driven machine learning for the prediction of coronal mass
ejections' travel times
|
[
"Sabrina Guastavino",
"Valentina Candiani",
"Alessandro Bemporad",
"Francesco Marchetti",
"Federico Benvenuto",
"Anna Maria Massone",
"Roberto Susino",
"Daniele Telloni",
"Silvano Fineschi",
"Michele Piana"
] |
Coronal Mass Ejections (CMEs) correspond to dramatic expulsions of plasma and
magnetic field from the solar corona into the heliosphere. CMEs are
scientifically relevant because they are involved in the physical mechanisms
characterizing the active Sun. However, more recently CMEs have attracted
attention for their impact on space weather, as they are correlated to
geomagnetic storms and may induce the generation of Solar Energetic Particles
streams. In this space weather framework, the present paper introduces a
physics-driven artificial intelligence (AI) approach to the prediction of CMEs
travel time, in which the deterministic drag-based model is exploited to
improve the training phase of a cascade of two neural networks fed with both
remote sensing and in-situ data. This study shows that the use of physical
information in the AI architecture significantly improves both the accuracy and
the robustness of the travel time prediction.
|
[
"astro-ph.SR",
"astro-ph.IM",
"cs.LG",
"physics.space-ph",
"68T07, 85-08, 65K10"
] | false |
2305.10060
|
2023-05-17T08:56:43Z
|
XAI for Self-supervised Clustering of Wireless Spectrum Activity
|
[
"Ljupcho Milosheski",
"Gregor Cerar",
"Blaž Bertalanič",
"Carolina Fortuna",
"Mihael Mohorčič"
] |
The so-called black-box deep learning (DL) models are increasingly used in
classification tasks across many scientific disciplines, including the wireless
communications domain. In this trend, supervised DL models appear as the most
commonly proposed solutions to domain-related classification problems. Although
they are proven to have unmatched performance, the necessity for large labeled
training data and their intractable reasoning, as two major drawbacks, are
constraining their usage. The self-supervised architectures emerged as a
promising solution that reduces the size of the needed labeled data, but the
explainability problem remains. In this paper, we propose a methodology for
explaining deep clustering, self-supervised learning architectures comprised of
a representation learning part based on a Convolutional Neural Network (CNN)
and a clustering part. For the state-of-the-art representation learning part,
our methodology employs Guided Backpropagation to interpret the regions of
interest of the input data. For the clustering part, the methodology relies on
Shallow Trees to explain the clustering result using an optimized-depth decision
tree. Finally, a data-specific visualization part connects each of the clusters
to the input data through the relevant features. We explain, on a use case of
wireless spectrum activity clustering, how the CNN-based deep
clustering architecture reasons.
|
[
"cs.LG",
"cs.IT",
"math.IT"
] | false |
2305.10103
|
2023-05-17T10:09:40Z
|
Predicting Tweet Engagement with Graph Neural Networks
|
[
"Marco Arazzi",
"Marco Cotogni",
"Antonino Nocera",
"Luca Virgili"
] |
Social Networks represent one of the most important online sources to share
content across a world-scale audience. In this context, predicting whether a
post will have any impact in terms of engagement is of crucial importance to
drive the profitable exploitation of these media. In the literature, several
studies address this issue by leveraging direct features of the posts,
typically related to the textual content and the user publishing it. In this
paper, we argue that the rise of engagement is also related to another key
component, which is the semantic connection among posts published by users in
social media. Hence, we propose TweetGage, a Graph Neural Network solution to
predict the user engagement based on a novel graph-based model that represents
the relationships among posts. To validate our proposal, we focus on the
Twitter platform and perform a thorough experimental campaign providing
evidence of its quality.
|
[
"cs.SI",
"cs.AI",
"cs.LG"
] | false |
2305.10157
|
2023-05-17T12:19:43Z
|
Provably Correct Physics-Informed Neural Networks
|
[
"Francisco Eiras",
"Adel Bibi",
"Rudy Bunel",
"Krishnamurthy Dj Dvijotham",
"Philip Torr",
"M. Pawan Kumar"
] |
Recent work provides promising evidence that Physics-informed neural networks
(PINN) can efficiently solve partial differential equations (PDE). However,
previous works have failed to provide guarantees on the worst-case residual
error of a PINN across the spatio-temporal domain - a measure akin to the
tolerance of numerical solvers - focusing instead on point-wise comparisons
between their solution and the ones obtained by a solver on a set of inputs. In
real-world applications, one cannot consider tests on a finite set of points to
be sufficient grounds for deployment, as the performance could be substantially
worse on a different set. To alleviate this issue, we establish tolerance-based
correctness conditions for PINNs over the entire input domain. To verify the
extent to which they hold, we introduce $\partial$-CROWN: a general, efficient
and scalable post-training framework to bound PINN residual errors. We
demonstrate its effectiveness in obtaining tight certificates by applying it to
two classically studied PDEs - Burgers' and Schr\"odinger's equations -, and
two more challenging ones with real-world applications - the Allen-Cahn and
Diffusion-Sorption equations.
|
[
"cs.LG",
"math-ph",
"math.MP"
] | false |
2305.10185
|
2023-05-17T13:09:23Z
|
Algorithms for Boolean Matrix Factorization using Integer Programming
|
[
"Christos Kolomvakis",
"Arnaud Vandaele",
"Nicolas Gillis"
] |
Boolean matrix factorization (BMF) approximates a given binary input matrix
as the product of two smaller binary factors. As opposed to binary matrix
factorization which uses standard arithmetic, BMF uses the Boolean OR and
Boolean AND operations to perform matrix products, which leads to lower
reconstruction errors. BMF is an NP-hard problem. In this paper, we first
propose an alternating optimization (AO) strategy that solves the subproblem in
one factor matrix in BMF using an integer program (IP). We also provide two
ways to initialize the factors within AO. Then, we show how several solutions
of BMF can be combined optimally using another IP. This allows us to come up
with a new algorithm: it generates several solutions using AO and then combines
them in an optimal way. Experiments show that our algorithms (available on
gitlab) outperform the state of the art on medium-scale problems.
|
[
"math.OC",
"cs.LG",
"eess.SP",
"stat.ML"
] | false |
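The Boolean product that defines BMF above is easy to state in code. The sketch below computes the reconstruction error of candidate binary factors; it is a definition-level illustration, not the integer-programming algorithm from the paper.

```python
import numpy as np

def boolean_product(W, H):
    """Boolean matrix product: (W o H)[i, j] = OR_k (W[i, k] AND H[k, j])."""
    return (W.astype(int) @ H.astype(int) > 0).astype(int)

def bmf_error(X, W, H):
    """Number of entries where X and its Boolean reconstruction disagree."""
    return int(np.sum(X != boolean_product(W, H)))

X = np.array([[1, 1, 0],
              [1, 1, 1],
              [0, 1, 1]])
W = np.array([[1, 0], [1, 1], [0, 1]])   # candidate rank-2 binary factors
H = np.array([[1, 1, 0], [0, 1, 1]])
print(boolean_product(W, H))
print("mismatched entries:", bmf_error(X, W, H))
```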
2305.10187
|
2023-05-17T13:12:48Z
|
Evaluating Dynamic Conditional Quantile Treatment Effects with
Applications in Ridesharing
|
[
"Ting Li",
"Chengchun Shi",
"Zhaohua Lu",
"Yi Li",
"Hongtu Zhu"
] |
Many modern tech companies, such as Google, Uber, and Didi, utilize online
experiments (also known as A/B testing) to evaluate new policies against
existing ones. While most studies concentrate on average treatment effects,
situations with skewed and heavy-tailed outcome distributions may benefit from
alternative criteria, such as quantiles. However, assessing dynamic quantile
treatment effects (QTE) remains a challenge, particularly when dealing with
data from ride-sourcing platforms that involve sequential decision-making
across time and space. In this paper, we establish a formal framework to
calculate QTE conditional on characteristics independent of the treatment.
Under specific model assumptions, we demonstrate that the dynamic conditional
QTE (CQTE) equals the sum of individual CQTEs across time, even though the
conditional quantile of cumulative rewards may not necessarily equate to the
sum of conditional quantiles of individual rewards. This crucial insight
significantly streamlines the estimation and inference processes for our target
causal estimand. We then introduce two varying coefficient decision process
(VCDP) models and devise an innovative method to test the dynamic CQTE.
Moreover, we extend our approach to accommodate data from spatiotemporally
dependent experiments and examine both conditional quantile direct and indirect
effects. To showcase the practical utility of our method, we apply it to three
real-world datasets from a ride-sourcing platform. Theoretical findings and
comprehensive simulation studies further substantiate our proposal.
|
[
"stat.ME",
"cs.LG",
"stat.ML"
] | false |
2305.10212
|
2023-05-17T13:44:25Z
|
A Novel Stochastic LSTM Model Inspired by Quantum Machine Learning
|
[
"Joseph Lindsay",
"Ramtin Zand"
] |
Works in quantum machine learning (QML) over the past few years indicate that
QML algorithms can function just as well as their classical counterparts, and
even outperform them in some cases. Among the corpus of recent work, many
current QML models take advantage of variational quantum algorithm (VQA)
circuits, given that their scale is typically small enough to be compatible
with NISQ devices and the method of automatic differentiation for optimizing
circuit parameters is familiar to machine learning (ML). While the results bear
interesting promise for an era when quantum machines are more readily
accessible, if one can achieve similar results through non-quantum methods then
there may be a more near-term advantage available to practitioners. To this
end, the nature of this work is to investigate the utilization of stochastic
methods inspired by a variational quantum version of the long short-term memory
(LSTM) model in an attempt to approach the reported successes in performance
and rapid convergence. By analyzing the performance of classical, stochastic,
and quantum methods, this work aims to elucidate if it is possible to achieve
some of QML's major reported benefits on classical machines by incorporating
aspects of its stochasticity.
|
[
"cs.LG",
"cs.ET",
"quant-ph"
] | false |
2305.10219
|
2023-05-17T13:51:43Z
|
Separability and Scatteredness (S&S) Ratio-Based Efficient SVM
Regularization Parameter, Kernel, and Kernel Parameter Selection
|
[
"Mahdi Shamsi",
"Soosan Beheshti"
] |
Support Vector Machine (SVM) is a robust machine learning algorithm with
broad applications in classification, regression, and outlier detection. SVM
requires tuning the regularization parameter (RP) which controls the model
capacity and the generalization performance. Conventionally, the optimum RP is
found by comparison of a range of values through the Cross-Validation (CV)
procedure. In addition, for non-linearly separable data, the SVM uses kernels
where a set of kernels, each with a set of parameters, denoted as a grid of
kernels, are considered. The optimal choice of RP and the grid of kernels is
through the grid-search of CV. By stochastically analyzing the behavior of the
regularization parameter, this work shows that the SVM performance can be
modeled as a function of separability and scatteredness (S&S) of the data.
Separability is a measure of the distance between classes, and scatteredness is
the ratio of the spread of data points. In particular, for the hinge loss cost
function, an S&S ratio-based table provides the optimum RP. The S&S ratio is a
powerful value that can automatically detect linear or non-linear separability
before using the SVM algorithm. The provided S&S ratio-based table can also
provide the optimum kernel and its parameters before using the SVM algorithm.
Consequently, the computational complexity of the CV grid-search is reduced to
only one time use of the SVM. The simulation results on the real dataset
confirm the superiority and efficiency of the proposed approach in terms of
computational complexity over the grid-search CV method.
|
[
"stat.ML",
"cs.AI",
"cs.LG",
"eess.SP"
] | false |
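A toy version of the separability-and-scatteredness idea above is easy to illustrate: measure the distance between class means against the within-class spread. The definition below is an illustrative reading of the abstract, not the authors' exact S&S ratio.

```python
import numpy as np

def ss_ratio(X0, X1):
    """Toy S&S-style ratio: between-class mean distance over pooled within-class spread."""
    separability = np.linalg.norm(X0.mean(axis=0) - X1.mean(axis=0))
    scatteredness = 0.5 * (X0.std(axis=0).mean() + X1.std(axis=0).mean())
    return separability / scatteredness

rng = np.random.default_rng(0)
easy_0, easy_1 = rng.normal(0, 1, (100, 2)), rng.normal(5, 1, (100, 2))
hard_0, hard_1 = rng.normal(0, 1, (100, 2)), rng.normal(1, 1, (100, 2))
print(ss_ratio(easy_0, easy_1))  # large: classes are well separated
print(ss_ratio(hard_0, hard_1))  # small: classes overlap heavily
```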
2305.10222
|
2023-05-17T13:55:50Z
|
rWISDM: Repaired WISDM, a Public Dataset for Human Activity Recognition
|
[
"Mohammadreza Heydarian",
"Thomas E. Doyle"
] |
Human Activity Recognition (HAR) has become a spotlight in recent scientific
research because of its applications in various domains such as healthcare,
athletic competitions, smart cities, and smart home. While researchers focus on
the methodology of processing data, users wonder if the Artificial Intelligence
(AI) methods used for HAR can be trusted. Trust depends mainly on the
reliability or robustness of the system. To investigate the robustness of HAR
systems, we analyzed several suitable current public datasets and selected
WISDM for our investigation of Deep Learning approaches. While the published
specification of WISDM matched our fundamental requirements (e.g., large,
balanced, multi-hardware), several hidden issues were found in the course of
our analysis. These issues reduce the performance and the overall trust of the
classifier. By identifying the problems and repairing the dataset, the
performance of the classifier was increased. This paper presents the methods by
which other researchers may identify and correct similar problems in public
datasets. By fixing the issues, dataset veracity is improved, which increases
the overall trust in the trained HAR system.
|
[
"eess.SP",
"cs.HC",
"cs.LG"
] | false |
2305.10227
|
2023-05-17T14:03:47Z
|
Reaching Kesten-Stigum Threshold in the Stochastic Block Model under
Node Corruptions
|
[
"Jingqiu Ding",
"Tommaso d'Orsi",
"Yiding Hua",
"David Steurer"
] |
We study robust community detection in the context of node-corrupted
stochastic block model, where an adversary can arbitrarily modify all the edges
incident to a fraction of the $n$ vertices. We present the first
polynomial-time algorithm that achieves weak recovery at the Kesten-Stigum
threshold even in the presence of a small constant fraction of corrupted nodes.
Prior to this work, even state-of-the-art robust algorithms were known to break
under such node corruption adversaries, when close to the Kesten-Stigum
threshold.
We further extend our techniques to the $Z_2$ synchronization problem, where
our algorithm reaches the optimal recovery threshold in the presence of similar
strong adversarial perturbations.
The key ingredient of our algorithm is a novel identifiability proof that
leverages the push-out effect of the Grothendieck norm of principal
submatrices.
|
[
"cs.LG",
"cs.SI",
"stat.ML"
] | false |
2305.10421
|
2023-05-17T17:55:45Z
|
Evolving Tsukamoto Neuro Fuzzy Model for Multiclass Covid 19
Classification with Chest X Ray Images
|
[
"Marziyeh Rezaei",
"Sevda Molani",
"Negar Firoozeh",
"Hossein Abbasi",
"Farzan Vahedifard",
"Maysam Orouskhani"
] |
Due to rapid population growth and the need to use artificial intelligence
to make quick decisions, developing a machine learning-based disease detection
model and abnormality identification system has greatly improved the level of
medical diagnosis. Since COVID-19 has become one of the most severe diseases in
the world, developing an automatic COVID-19 detection framework helps medical
doctors in the diagnostic process of disease and provides correct and fast
results. In this paper, we propose a machine learning-based framework for the
detection of Covid 19. The proposed model employs a Tsukamoto Neuro Fuzzy
Inference network to identify and distinguish Covid 19 disease from normal and
pneumonia cases. While the traditional training methods tune the parameters of
the neuro-fuzzy model by gradient-based algorithms and the recursive least
squares method, we use an evolutionary-based optimization, the Cat swarm
algorithm, to update the parameters. In addition, six texture features
extracted from chest X-ray images are given as input to the model. Finally, the
proposed model is evaluated on the chest X-ray dataset to detect Covid 19. The
simulation results indicate that the proposed model achieves an accuracy of
98.51%, sensitivity of 98.35%, specificity of 98.08%, and F1 score of 98.17%.
|
[
"eess.IV",
"cs.LG",
"cs.NE",
"68W50",
"I.5.0"
] | false |
2305.10540
|
2023-05-17T19:56:04Z
|
Generalization Bounds for Neural Belief Propagation Decoders
|
[
"Sudarshan Adiga",
"Xin Xiao",
"Ravi Tandon",
"Bane Vasic",
"Tamal Bose"
] |
Machine learning based approaches are being increasingly used for designing
decoders for next generation communication systems. One widely used framework
is neural belief propagation (NBP), which unfolds the belief propagation (BP)
iterations into a deep neural network and the parameters are trained in a
data-driven manner. NBP decoders have been shown to improve upon classical
decoding algorithms. In this paper, we investigate the generalization
capabilities of NBP decoders. Specifically, the generalization gap of a decoder
is the difference between empirical and expected bit-error-rate(s). We present
new theoretical results which bound this gap and show the dependence on the
decoder complexity, in terms of code parameters (blocklength, message length,
variable/check node degrees), decoding iterations, and the training dataset
size. Results are presented for both regular and irregular parity-check
matrices. To the best of our knowledge, this is the first set of theoretical
results on generalization performance of neural network based decoders. We
present experimental results to show the dependence of generalization gap on
the training dataset size, and decoding iterations for different codes.
|
[
"cs.IT",
"cs.LG",
"math.IT"
] | false |
2305.10548
|
2023-05-17T20:07:30Z
|
Discovering Individual Rewards in Collective Behavior through Inverse
Multi-Agent Reinforcement Learning
|
[
"Daniel Waelchli",
"Pascal Weber",
"Petros Koumoutsakos"
] |
The discovery of individual objectives in collective behavior of complex
dynamical systems such as fish schools and bacteria colonies is a long-standing
challenge. Inverse reinforcement learning is a potent approach for addressing
this challenge but its applicability to dynamical systems, involving continuous
state-action spaces and multiple interacting agents, has been limited. In this
study, we tackle this challenge by introducing an off-policy inverse
multi-agent reinforcement learning algorithm (IMARL). Our approach combines the
ReF-ER techniques with guided cost learning. By leveraging demonstrations, our
algorithm automatically uncovers the reward function and learns an effective
policy for the agents. Through extensive experimentation, we demonstrate that
the proposed policy captures the behavior observed in the provided data, and
achieves promising results across problem domains including single agent models
in the OpenAI gym and multi-agent models of schooling behavior. The present
study shows that the proposed IMARL algorithm is a significant step towards
understanding collective dynamics from the perspective of its constituents, and
showcases its value as a tool for studying complex physical systems exhibiting
collective behaviour.
|
[
"cs.LG",
"cs.AI",
"cs.MA"
] | false |
2305.10550
|
2023-05-17T20:09:35Z
|
Sparsity-depth Tradeoff in Infinitely Wide Deep Neural Networks
|
[
"Chanwoo Chun",
"Daniel D. Lee"
] |
We investigate how sparse neural activity affects the generalization
performance of a deep Bayesian neural network at the large width limit. To this
end, we derive a neural network Gaussian Process (NNGP) kernel with rectified
linear unit (ReLU) activation and a predetermined fraction of active neurons.
Using the NNGP kernel, we observe that the sparser networks outperform the
non-sparse networks at shallow depths on a variety of datasets. We validate
this observation by extending the existing theory on the generalization error
of kernel-ridge regression.
|
[
"cs.LG",
"cond-mat.dis-nn",
"q-bio.NC"
] | false |
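For background on the NNGP kernel named in the abstract above, here is a hedged sketch of the standard (non-sparse) ReLU NNGP recursion, i.e. the arc-cosine kernel; the paper's sparsity-modified kernel is not reproduced, and sigma_w, sigma_b are hypothetical hyperparameters.

```python
# Minimal sketch of the usual dense ReLU NNGP kernel recursion.
import numpy as np

def relu_nngp(K, sigma_w=1.0, sigma_b=0.0, depth=3):
    """Propagate an input Gram matrix K through `depth` ReLU layers."""
    for _ in range(depth):
        diag = np.sqrt(np.diag(K))
        norm = np.outer(diag, diag)
        cos_theta = np.clip(K / norm, -1.0, 1.0)
        theta = np.arccos(cos_theta)
        # E[relu(u) relu(v)] for jointly Gaussian (u, v) with covariance K
        K = sigma_b**2 + (sigma_w**2 / (2 * np.pi)) * norm * (
            np.sin(theta) + (np.pi - theta) * np.cos(theta))
    return K

X = np.random.default_rng(0).normal(size=(5, 10))
K0 = X @ X.T / X.shape[1]      # inner-product Gram matrix of the inputs
print(relu_nngp(K0, depth=2))
```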
2305.11183
|
2023-05-17T18:49:37Z
|
Assessing the predicting power of GPS data for aftershocks forecasting
|
[
"Vincenzo Maria Schimmenti",
"Giuseppe Petrillo",
"Alberto Rosso",
"Francois P. Landes"
] |
We present a machine learning approach for the aftershock forecasting of
Japanese earthquake catalogue from 2015 to 2019. Our method takes as sole input
the ground surface deformation as measured by Global Positioning System (GPS)
stations at the day of the mainshock, and processes it with a Convolutional
Neural Network (CNN), thus capturing the input's spatial correlations. Despite
the moderate amount of data the performance of this new approach is very
promising. The accuracy of the prediction heavily relies on the density of GPS
stations: the predictive power is lost when the mainshocks occur far from
measurement stations, as in offshore regions.
|
[
"physics.geo-ph",
"cs.LG",
"physics.data-an"
] | false |
2305.11905
|
2023-05-17T08:51:42Z
|
Properties of the ENCE and other MAD-based calibration metrics
|
[
"Pascal Pernot"
] |
The Expected Normalized Calibration Error (ENCE) is a popular calibration
statistic used in Machine Learning to assess the quality of prediction
uncertainties for regression problems. Estimation of the ENCE is based on the
binning of calibration data. In this short note, I illustrate an annoying
property of the ENCE, i.e. its proportionality to the square root of the number
of bins for well calibrated or nearly calibrated datasets. A similar behavior
affects the calibration error based on the variance of z-scores (ZVE), and in
both cases this property is a consequence of the use of a Mean Absolute
Deviation (MAD) statistic to estimate calibration errors. Hence, the question
arises of which number of bins to choose for a reliable estimation of
calibration error statistics. A solution is proposed to infer ENCE and ZVE
values that do not depend on the number of bins for datasets assumed to be
calibrated, providing simultaneously a statistical calibration test. It is also
shown that the ZVE is less sensitive than the ENCE to outstanding errors or
uncertainties.
|
[
"cs.LG",
"physics.chem-ph",
"physics.data-an",
"stat.ME"
] | false |
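A minimal sketch of the usual binned ENCE estimator discussed in the abstract above, assuming arrays of predicted standard deviations (uq) and prediction errors (err); the toy loop illustrates the square-root-of-bin-count behavior highlighted in the note, and all names are placeholders.

```python
# Binned ENCE estimator with equal-count bins (sketch, not the paper's code).
import numpy as np

def ence(uq, err, n_bins=10):
    """Expected Normalized Calibration Error over equal-count uncertainty bins."""
    order = np.argsort(uq)
    uq, err = uq[order], err[order]
    bins = np.array_split(np.arange(len(uq)), n_bins)
    terms = []
    for idx in bins:
        rmv = np.sqrt(np.mean(uq[idx] ** 2))    # root mean predicted variance
        rmse = np.sqrt(np.mean(err[idx] ** 2))  # empirical RMSE in the bin
        terms.append(abs(rmv - rmse) / rmv)
    return np.mean(terms)

# For a perfectly calibrated synthetic set, ENCE grows roughly like sqrt(n_bins).
rng = np.random.default_rng(0)
uq = rng.uniform(0.5, 2.0, 10_000)
err = rng.normal(0.0, uq)                      # errors drawn with the stated std
for B in (5, 20, 80):
    print(B, ence(uq, err, n_bins=B))
```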
2305.11908
|
2023-05-17T18:49:44Z
|
Sequential Best-Arm Identification with Application to Brain-Computer
Interface
|
[
"Xin Zhou",
"Botao Hao",
"Jian Kang",
"Tor Lattimore",
"Lexin Li"
] |
A brain-computer interface (BCI) is a technology that enables direct
communication between the brain and an external device or computer system. It
allows individuals to interact with the device using only their thoughts, and
holds immense potential for a wide range of applications in medicine,
rehabilitation, and human augmentation. An electroencephalogram (EEG) and
event-related potential (ERP)-based speller system is a type of BCI that allows
users to spell words without using a physical keyboard, but instead by
recording and interpreting brain signals under different stimulus presentation
paradigms. Conventional non-adaptive paradigms treat each word selection
independently, leading to a lengthy learning process. To improve the sampling
efficiency, we cast the problem as a sequence of best-arm identification tasks
in multi-armed bandits. Leveraging pre-trained large language models (LLMs), we
utilize the prior knowledge learned from previous tasks to inform and
facilitate subsequent tasks. To do so in a coherent way, we propose a
sequential top-two Thompson sampling (STTS) algorithm under the
fixed-confidence setting and the fixed-budget setting. We study the theoretical
property of the proposed algorithm, and demonstrate its substantial empirical
improvement through both synthetic data analysis as well as a P300 BCI speller
simulator example.
|
[
"cs.HC",
"cs.LG",
"q-bio.NC",
"stat.ML"
] | false |
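A hedged sketch of one round of top-two Thompson sampling for Bernoulli arms, as background for the STTS algorithm named above; the sequential, LLM-informed prior and the fixed-confidence/fixed-budget machinery of the paper are not reproduced, and the Beta(1,1) prior and the leader-keeping probability `beta` are illustrative choices.

```python
# Generic top-two Thompson sampling step for Bernoulli bandits (sketch).
import numpy as np

def tts_select(successes, failures, beta=0.5, rng=None):
    """Return the arm to pull next under top-two Thompson sampling."""
    rng = rng or np.random.default_rng()
    theta = rng.beta(successes + 1, failures + 1)
    leader = int(np.argmax(theta))
    if rng.random() < beta:
        return leader
    # Resample until a different arm tops the draw (the "challenger").
    while True:
        theta = rng.beta(successes + 1, failures + 1)
        challenger = int(np.argmax(theta))
        if challenger != leader:
            return challenger

# Toy run: 5 arms, the best arm has mean 0.7.
rng = np.random.default_rng(0)
means = np.array([0.3, 0.4, 0.5, 0.6, 0.7])
s, f = np.zeros(5), np.zeros(5)
for _ in range(2000):
    a = tts_select(s, f, rng=rng)
    r = rng.random() < means[a]
    s[a] += r
    f[a] += 1 - r
print("best-arm guess:", int(np.argmax(s / (s + f + 1e-12))))
```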
2306.07972
|
2023-05-17T15:48:21Z
|
Leveraging Machine Learning for Multichain DeFi Fraud Detection
|
[
"Georgios Palaiokrassas",
"Sandro Scherrers",
"Iason Ofeidis",
"Leandros Tassiulas"
] |
Since the inception of permissionless blockchains with Bitcoin in 2008, it
became apparent that their most well-suited use case is related to making the
financial system and its advantages available to everyone seamlessly without
depending on any trusted intermediaries. Smart contracts across chains provide
an ecosystem of decentralized finance (DeFi), where users can interact with
lending pools, Automated Market Maker (AMM) exchanges, stablecoins,
derivatives, etc. with a cumulative locked value which had exceeded 160B USD.
While DeFi comes with high rewards, it also carries plenty of risks. Many
financial crimes have occurred over the years making the early detection of
malicious activity an issue of high priority. The proposed framework introduces
an effective method for extracting a set of features from different chains,
including the largest one, Ethereum and it is evaluated over an extensive
dataset we gathered with the transactions of the most widely used DeFi
protocols (23 in total, including Aave, Compound, Curve, Lido, and Yearn) based
on a novel dataset in collaboration with Covalent. Different Machine Learning
methods were employed, such as XGBoost and a Neural Network, for identifying
fraudulent accounts interacting with DeFi, and we demonstrate that the
introduction of novel DeFi-related features significantly improves the
evaluation results, where Accuracy, Precision, Recall, F1-score and F2-score
were utilized.
|
[
"q-fin.GN",
"cs.CR",
"cs.LG"
] | false |
2309.00618
|
2023-05-17T07:04:32Z
|
Short-Term Stock Price Forecasting using exogenous variables and Machine
Learning Algorithms
|
[
"Albert Wong",
"Steven Whang",
"Emilio Sagre",
"Niha Sachin",
"Gustavo Dutra",
"Yew-Wei Lim",
"Gaetan Hains",
"Youry Khmelevsky",
"Frank Zhang"
] |
Creating accurate predictions in the stock market has always been a
significant challenge in finance. With the rise of machine learning as the next
level in the forecasting area, this research paper compares four machine
learning models and their accuracy in forecasting three well-known stocks
traded in the NYSE in the short term from March 2020 to May 2022. We deploy,
develop, and tune XGBoost, Random Forest, Multi-layer Perceptron, and Support
Vector Regression models. We report the models that produce the highest
accuracies from our evaluation metrics: RMSE, MAPE, MTT, and MPE. Using a
training data set of 240 trading days, we find that XGBoost gives the highest
accuracy despite running longer (up to 10 seconds). Results from this study may
improve by further tuning the individual parameters or introducing more
exogenous variables.
|
[
"q-fin.TR",
"cs.CE",
"cs.LG"
] | false |
2305.09903
|
2023-05-17T02:25:56Z
|
Privacy Loss of Noisy Stochastic Gradient Descent Might Converge Even
for Non-Convex Losses
|
[
"Shahab Asoodeh",
"Mario Diaz"
] |
The Noisy-SGD algorithm is widely used for privately training machine
learning models. Traditional privacy analyses of this algorithm assume that the
internal state is publicly revealed, resulting in privacy loss bounds that
increase indefinitely with the number of iterations. However, recent findings
have shown that if the internal state remains hidden, then the privacy loss
might remain bounded. Nevertheless, this remarkable result heavily relies on
the assumption of (strong) convexity of the loss function. It remains an
important open problem to further relax this condition while proving similar
convergent upper bounds on the privacy loss. In this work, we address this
problem for DP-SGD, a popular variant of Noisy-SGD that incorporates gradient
clipping to limit the impact of individual samples on the training process. Our
findings demonstrate that the privacy loss of projected DP-SGD converges
exponentially fast, without requiring convexity or smoothness assumptions on
the loss function. In addition, we analyze the privacy loss of regularized
(unprojected) DP-SGD. To obtain these results, we directly analyze the
hockey-stick divergence between coupled stochastic processes by relying on
non-linear data processing inequalities.
|
[
"cs.LG",
"cs.CR",
"cs.IT",
"math.IT",
"math.OC"
] | false |
2305.10114
|
2023-05-17T10:40:17Z
|
Automatic Hyperparameter Tuning in Sparse Matrix Factorization
|
[
"Ryota Kawasumi",
"Koujin Takeda"
] |
We study the problem of hyperparameter tuning in sparse matrix factorization
under Bayesian framework. In the prior work, an analytical solution of sparse
matrix factorization with Laplace prior was obtained by variational Bayes
method under several approximations. Based on this solution, we propose a novel
numerical method of hyperparameter tuning by evaluating the zero point of
normalization factor in sparse matrix prior. We also verify that our method
shows excellent performance for ground-truth sparse matrix reconstruction by
comparing it with the widely-used algorithm of sparse principal component
analysis.
|
[
"stat.ML",
"cond-mat.dis-nn",
"cs.IT",
"cs.LG",
"math.IT"
] | false |
2305.10282
|
2023-05-17T15:17:23Z
|
Reward-agnostic Fine-tuning: Provable Statistical Benefits of Hybrid
Reinforcement Learning
|
[
"Gen Li",
"Wenhao Zhan",
"Jason D. Lee",
"Yuejie Chi",
"Yuxin Chen"
] |
This paper studies tabular reinforcement learning (RL) in the hybrid setting,
which assumes access to both an offline dataset and online interactions with
the unknown environment. A central question boils down to how to efficiently
utilize online data collection to strengthen and complement the offline dataset
and enable effective policy fine-tuning. Leveraging recent advances in
reward-agnostic exploration and model-based offline RL, we design a three-stage
hybrid RL algorithm that beats the best of both worlds -- pure offline RL and
pure online RL -- in terms of sample complexities. The proposed algorithm does
not require any reward information during data collection. Our theory is
developed based on a new notion called single-policy partial concentrability,
which captures the trade-off between distribution mismatch and miscoverage and
guides the interplay between offline and online data.
|
[
"cs.LG",
"cs.IT",
"math.IT",
"math.ST",
"stat.ML",
"stat.TH"
] | false |
2305.14366
|
2023-05-17T06:59:26Z
|
Information processing via human soft tissue
|
[
"Yo Kobayashi"
] |
This study demonstrates that the soft biological tissues of humans can be
used as a type of soft body in physical reservoir computing. Soft biological
tissues possess characteristics such as stress-strain nonlinearity and
viscoelasticity that satisfy the requirements for physical reservoir computing,
including nonlinearity and memory. The aim of this study was to utilize the
dynamics of human soft tissues as a physical reservoir for the emulation of
nonlinear dynamical systems. To demonstrate this concept, joint angle data
during motion in the flexion-extension direction of the wrist joint, and
ultrasound images of the muscles associated with that motion, were acquired
from human participants. The input to the system was the angle of the wrist
joint, while the deformation field within the muscle (obtained from ultrasound
images) represented the state of the reservoir. The results indicate that the
dynamics of soft tissue have a positive impact on the computational task of
emulating nonlinear dynamical systems. This research suggests that the soft
tissue of humans can be used as a potential computational resource.
|
[
"q-bio.NC",
"cs.AI",
"cs.HC",
"cs.LG",
"cs.RO",
"cs.SY",
"eess.SY"
] | false |
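The abstract above trains a readout on reservoir states given by tissue deformation; below is a minimal sketch of the linear (ridge-regression) readout typically used in physical reservoir computing. All arrays are synthetic placeholders standing in for the muscle-deformation features and the target signal; nothing here is taken from the paper.

```python
# Ridge-regression readout over reservoir states (sketch with synthetic data).
import numpy as np

rng = np.random.default_rng(0)
T, D = 1000, 50
states = rng.normal(size=(T, D))           # reservoir state at each time step
target = np.sin(np.linspace(0, 20, T))     # signal the reservoir should emulate

ridge = 1e-3
X = np.hstack([states, np.ones((T, 1))])   # append a bias column
W = np.linalg.solve(X.T @ X + ridge * np.eye(D + 1), X.T @ target)
pred = X @ W
print("training NMSE:", np.mean((pred - target) ** 2) / np.var(target))
```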
2305.10640
|
2023-05-18T01:36:23Z
|
Learning Restoration is Not Enough: Transfering Identical Mapping for
Single-Image Shadow Removal
|
[
"Xiaoguang Li",
"Qing Guo",
"Pingping Cai",
"Wei Feng",
"Ivor Tsang",
"Song Wang"
] |
Shadow removal is to restore shadow regions to their shadow-free counterparts
while leaving non-shadow regions unchanged. State-of-the-art shadow removal
methods train deep neural networks on collected shadow & shadow-free image
pairs, which are desired to complete two distinct tasks via shared weights,
i.e., data restoration for shadow regions and identical mapping for non-shadow
regions. We find that these two tasks exhibit poor compatibility, and using
shared weights for these two tasks could lead to the model being optimized
towards only one task instead of both during the training process. Note that
such a key issue is not identified by existing deep learning-based shadow
removal methods. To address this problem, we propose to handle these two tasks
separately and leverage the identical mapping results to guide the shadow
restoration in an iterative manner. Specifically, our method consists of three
components: an identical mapping branch (IMB) for non-shadow regions
processing, an iterative de-shadow branch (IDB) for shadow regions restoration
based on identical results, and a smart aggregation block (SAB). The IMB aims
to reconstruct an image that is identical to the input one, which can benefit
the restoration of the non-shadow regions without explicitly distinguishing
between shadow and non-shadow regions. Utilizing the multi-scale features
extracted by the IMB, the IDB can effectively transfer information from
non-shadow regions to shadow regions progressively, facilitating the process of
shadow removal. The SAB is designed to adaptively integrate features from both
IMB and IDB. Moreover, it generates a finely tuned soft shadow mask that guides
the process of removing shadows. Extensive experiments demonstrate our method
outperforms all the state-of-the-art shadow removal approaches on the widely
used shadow removal datasets.
|
[
"cs.CV"
] | false |
2305.10714
|
2023-05-18T05:25:40Z
|
Vision-Language Pre-training with Object Contrastive Learning for 3D
Scene Understanding
|
[
"Taolin Zhang",
"Sunan He",
"Dai Tao",
"Bin Chen",
"Zhi Wang",
"Shu-Tao Xia"
] |
In recent years, vision language pre-training frameworks have made
significant progress in natural language processing and computer vision,
achieving remarkable performance improvement on various downstream tasks.
However, when extended to point cloud data, existing works mainly focus on
building task-specific models, and fail to extract universal 3D vision-language
embedding that generalize well. We carefully investigate three common tasks in
semantic 3D scene understanding, and derive key insights into the development
of a pre-training model. Motivated by these observations, we propose a
vision-language pre-training framework 3DVLP (3D vision-language pre-training
with object contrastive learning), which transfers flexibly on 3D
vision-language downstream tasks. 3DVLP takes visual grounding as the proxy
task and introduces Object-level IoU-guided Detection (OID) loss to obtain
high-quality proposals in the scene. Moreover, we design Object-level
Cross-Contrastive alignment (OCC) task and Object-level Self-Contrastive
learning (OSC) task to align the objects with descriptions and distinguish
different objects in the scene, respectively. Extensive experiments verify the
excellent performance of 3DVLP on three 3D vision-language tasks, reflecting
its superiority in semantic 3D scene understanding.
|
[
"cs.CV"
] | false |
2305.10754
|
2023-05-18T06:54:56Z
|
Multi-resolution Spatiotemporal Enhanced Transformer Denoising with
Functional Diffusive GANs for Constructing Brain Effective Connectivity in
MCI analysis
|
[
"Qiankun Zuo",
"Chi-Man Pun",
"Yudong Zhang",
"Hongfei Wang",
"Jin Hong"
] |
Effective connectivity can describe the causal patterns among brain regions.
These patterns have the potential to reveal the pathological mechanism and
promote early diagnosis and effective drug development for cognitive disease.
However, the current studies mainly focus on using empirical functional time
series to calculate effective connections, which may not comprehensively
capture the complex causal relationships between brain regions. In this paper,
a novel Multi-resolution Spatiotemporal Enhanced Transformer Denoising (MSETD)
network with an adversarially functional diffusion model is proposed to map
functional magnetic resonance imaging (fMRI) into effective connectivity for
mild cognitive impairment (MCI) analysis. To be specific, the denoising
framework leverages a conditional diffusion process that progressively
translates the noise and conditioning fMRI to effective connectivity in an
end-to-end manner. To ensure reverse diffusion quality and diversity, the
multi-resolution enhanced transformer generator is designed to extract local
and global spatiotemporal features. Furthermore, a multi-scale diffusive
transformer discriminator is devised to capture the temporal patterns at
different scales for generation stability. Evaluations of the ADNI datasets
demonstrate the feasibility and efficacy of the proposed model. The proposed
model not only achieves superior prediction performance compared with other
competing methods but also identifies MCI-related causal connections that are
consistent with clinical studies.
|
[
"cs.CV"
] | false |
2305.10772
|
2023-05-18T07:30:22Z
|
Feature-Balanced Loss for Long-Tailed Visual Recognition
|
[
"Mengke Li",
"Yiu-ming Cheung",
"Juyong Jiang"
] |
Deep neural networks frequently suffer from performance degradation when the
training data is long-tailed because several majority classes dominate the
training, resulting in a biased model. Recent studies have made a great effort
in solving this issue by obtaining good representations from data space, but
few of them pay attention to the influence of feature norm on the predicted
results. In this paper, we therefore address the long-tailed problem from
feature space and thereby propose the feature-balanced loss. Specifically, we
encourage larger feature norms of tail classes by giving them relatively
stronger stimuli. Moreover, the stimuli intensity is gradually increased in the
way of curriculum learning, which improves the generalization of the tail
classes, meanwhile maintaining the performance of the head classes. Extensive
experiments on multiple popular long-tailed recognition benchmarks demonstrate
that the feature-balanced loss achieves superior performance gains compared
with the state-of-the-art methods.
|
[
"cs.CV"
] | false |
2305.10799
|
2023-05-18T08:19:33Z
|
MedBLIP: Bootstrapping Language-Image Pre-training from 3D Medical
Images and Texts
|
[
"Qiuhui Chen",
"Xinyue Hu",
"Zirui Wang",
"Yi Hong"
] |
Vision-language pre-training (VLP) models have been demonstrated to be
effective in many computer vision applications. In this paper, we consider
developing a VLP model in the medical domain for making computer-aided
diagnoses (CAD) based on image scans and text descriptions in electronic health
records, as done in practice. To achieve our goal, we present a lightweight CAD
system MedBLIP, a new paradigm for bootstrapping VLP from off-the-shelf frozen
pre-trained image encoders and frozen large language models. We design a
MedQFormer module to bridge the gap between 3D medical images and 2D
pre-trained image encoders and language models as well. To evaluate the
effectiveness of our MedBLIP, we collect more than 30,000 image volumes from
five public Alzheimer's disease (AD) datasets, i.e., ADNI, NACC, OASIS, AIBL,
and MIRIAD. On this largest AD dataset we know, our model achieves the SOTA
performance on the zero-shot classification of healthy, mild cognitive
impairment (MCI), and AD subjects, and shows its capability of making medical
visual question answering (VQA). The code and pre-trained models are available
online: https://github.com/Qybc/MedBLIP.
|
[
"cs.CV"
] | false |
2305.10801
|
2023-05-18T08:28:01Z
|
Selecting Learnable Training Samples is All DETRs Need in Crowded
Pedestrian Detection
|
[
"Feng Gao",
"Jiaxu Leng",
"Gan Ji",
"Xinbo Gao"
] |
DEtection TRansformer (DETR) and its variants (DETRs) achieved impressive
performance in general object detection. However, in crowded pedestrian
detection, the performance of DETRs is still unsatisfactory due to the
inappropriate sample selection method which results in more false positives. To
settle the issue, we propose a simple but effective sample selection method for
DETRs, Sample Selection for Crowded Pedestrians (SSCP), which consists of the
constraint-guided label assignment scheme (CGLA) and the utilizability-aware
focal loss (UAFL). Our core idea is to select learnable samples for DETRs and
adaptively regulate the loss weights of samples based on their utilizability.
Specifically, in CGLA, we proposed a new cost function to ensure that only
learnable positive training samples are retained and the rest are negative
training samples. Further, considering the utilizability of samples, we
designed UAFL to adaptively assign different loss weights to learnable positive
samples depending on their gradient ratio and IoU. Experimental results show
that the proposed SSCP effectively improves the baselines without introducing
any overhead in inference. Especially, Iter Deformable DETR is improved to
39.7(-2.0)% MR on Crowdhuman and 31.8(-0.4)% MR on Citypersons.
|
[
"cs.CV"
] | false |
2305.10854
|
2023-05-18T10:15:44Z
|
3D Registration with Maximal Cliques
|
[
"Xiyu Zhang",
"Jiaqi Yang",
"Shikun Zhang",
"Yanning Zhang"
] |
As a fundamental problem in computer vision, 3D point cloud registration
(PCR) aims to seek the optimal pose to align a point cloud pair. In this paper,
we present a 3D registration method with maximal cliques (MAC). The key insight
is to loosen the previous maximum clique constraint, and mine more local
consensus information in a graph for accurate pose hypotheses generation: 1) A
compatibility graph is constructed to render the affinity relationship between
initial correspondences. 2) We search for maximal cliques in the graph, each of
which represents a consensus set. We perform node-guided clique selection then,
where each node corresponds to the maximal clique with the greatest graph
weight. 3) Transformation hypotheses are computed for the selected cliques by
the SVD algorithm and the best hypothesis is used to perform registration.
Extensive experiments on U3M, 3DMatch, 3DLoMatch and KITTI demonstrate that MAC
effectively increases registration accuracy, outperforms various
state-of-the-art methods and boosts the performance of deep-learned methods.
MAC combined with deep-learned methods achieves state-of-the-art registration
recall of 95.7% / 78.9% on 3DMatch / 3DLoMatch.
|
[
"cs.CV"
] | false |
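A hedged sketch of the two ingredients named in the abstract above: maximal cliques on a correspondence compatibility graph (here via networkx) and an SVD-based rigid transform from a selected consensus set (the standard Kabsch solution). This is not the full MAC pipeline; the distance threshold, the choice of clique, and all names are illustrative.

```python
# Compatibility graph + maximal clique + SVD pose hypothesis (sketch).
import numpy as np
import networkx as nx

def compatibility_graph(src, dst, tau=0.05):
    """Edge (i, j) iff correspondences i and j preserve pairwise distance."""
    n = len(src)
    G = nx.Graph()
    G.add_nodes_from(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            d = abs(np.linalg.norm(src[i] - src[j]) - np.linalg.norm(dst[i] - dst[j]))
            if d < tau:
                G.add_edge(i, j)
    return G

def rigid_from_clique(src, dst, clique):
    """Least-squares rotation/translation (Kabsch) for one consensus set."""
    P, Q = src[list(clique)], dst[list(clique)]
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    S = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T
    t = Q.mean(0) - R @ P.mean(0)
    return R, t

rng = np.random.default_rng(0)
src = rng.normal(size=(30, 3))
t_true = np.array([0.5, -0.2, 1.0])
dst = src + t_true                              # identity rotation, pure shift
G = compatibility_graph(src, dst)
best = max(nx.find_cliques(G), key=len)         # largest maximal clique as consensus
R, t = rigid_from_clique(src, dst, best)
print(np.allclose(t, t_true, atol=1e-6))
```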
2305.10862
|
2023-05-18T10:33:28Z
|
How Deep Learning Sees the World: A Survey on Adversarial Attacks &
Defenses
|
[
"Joana C. Costa",
"Tiago Roxo",
"Hugo Proença",
"Pedro R. M. Inácio"
] |
Deep Learning is currently used to perform multiple tasks, such as object
recognition, face recognition, and natural language processing. However, Deep
Neural Networks (DNNs) are vulnerable to perturbations that alter the network
prediction (adversarial examples), raising concerns regarding its usage in
critical areas, such as self-driving vehicles, malware detection, and
healthcare. This paper compiles the most recent adversarial attacks, grouped by
the attacker capacity, and modern defenses clustered by protection strategies.
We also present the new advances regarding Vision Transformers, summarize the
datasets and metrics used in the context of adversarial settings, and compare
the state-of-the-art results under different attacks, finishing with the
identification of open issues.
|
[
"cs.CV"
] | false |
2305.10868
|
2023-05-18T10:40:52Z
|
Advancing Incremental Few-shot Semantic Segmentation via Semantic-guided
Relation Alignment and Adaptation
|
[
"Yuan Zhou",
"Xin Chen",
"Yanrong Guo",
"Shijie Hao",
"Richang Hong",
"Qi Tian"
] |
Incremental few-shot semantic segmentation (IFSS) aims to incrementally
extend a semantic segmentation model to novel classes according to only a few
pixel-level annotated data, while preserving its segmentation capability on
previously learned base categories. This task faces a severe semantic-aliasing
issue between base and novel classes due to data imbalance, which makes
segmentation results unsatisfactory. To alleviate this issue, we propose the
Semantic-guided Relation Alignment and Adaptation (SRAA) method that fully
considers the guidance of prior semantic information. Specifically, we first
conduct Semantic Relation Alignment (SRA) in the base step, so as to
semantically align base class representations to their semantics. As a result,
the embeddings of base classes are constrained to have relatively low semantic
correlations to categories that are different from them. Afterwards, based on
the semantically aligned base categories, Semantic-Guided Adaptation (SGA) is
employed during the incremental learning stage. It aims to ensure affinities
between visual and semantic embeddings of encountered novel categories, thereby
making the feature representations be consistent with their semantic
information. In this way, the semantic-aliasing issue can be suppressed. We
evaluate our model on the PASCAL VOC 2012 and the COCO dataset. The
experimental results on both these two datasets exhibit its competitive
performance, which demonstrates the superiority of our method.
|
[
"cs.CV"
] | false |
2305.10884
|
2023-05-18T11:26:27Z
|
Meta-Auxiliary Network for 3D GAN Inversion
|
[
"Bangrui Jiang",
"Zhenhua Guo",
"Yujiu Yang"
] |
Real-world image manipulation has achieved fantastic progress in recent
years. GAN inversion, which aims to map the real image to the latent code
faithfully, is the first step in this pipeline. However, existing GAN inversion
methods fail to achieve high reconstruction quality and fast inference at the
same time. In addition, existing methods are built on 2D GANs and lack
explicit mechanisms to enforce multi-view consistency. In this work, we
present a novel meta-auxiliary framework, while leveraging the newly developed
3D GANs as generator. The proposed method adopts a two-stage strategy. In the
first stage, we invert the input image to an editable latent code using
off-the-shelf inversion techniques. The auxiliary network is proposed to refine
the generator parameters with the given image as input, which both predicts
offsets for weights of convolutional layers and sampling positions of volume
rendering. In the second stage, we perform meta-learning to fast adapt the
auxiliary network to the input image, then the final reconstructed image is
synthesized via the meta-learned auxiliary network. Extensive experiments show
that our method achieves better performances on both inversion and editing
tasks.
|
[
"cs.CV"
] | false |
2305.10889
|
2023-05-18T11:37:39Z
|
FLIGHT Mode On: A Feather-Light Network for Low-Light Image Enhancement
|
[
"Mustafa Ozcan",
"Hamza Ergezer",
"Mustafa Ayazaoglu"
] |
Low-light image enhancement (LLIE) is an ill-posed inverse problem due to the
lack of knowledge of the desired image which is obtained under ideal
illumination conditions. Low-light conditions give rise to two main issues: a
suppressed image histogram and inconsistent relative color distributions with
low signal-to-noise ratio. In order to address these problems, we propose a
novel approach named FLIGHT-Net using a sequence of neural architecture blocks.
The first block regulates illumination conditions through pixel-wise scene
dependent illumination adjustment. The output image is produced in the output
of the second block, which includes channel attention and denoising sub-blocks.
Our highly efficient neural network architecture delivers state-of-the-art
performance with only 25K parameters. The method's code, pretrained models and
resulting images will be publicly available.
|
[
"cs.CV"
] | false |
2305.10893
|
2023-05-18T11:44:30Z
|
Student-friendly Knowledge Distillation
|
[
"Mengyang Yuan",
"Bo Lang",
"Fengnan Quan"
] |
In knowledge distillation, the knowledge from the teacher model is often too
complex for the student model to thoroughly process. However, good teachers in
real life always simplify complex material before teaching it to students.
Inspired by this fact, we propose student-friendly knowledge distillation (SKD)
to simplify teacher output into new knowledge representations, which makes the
learning of the student model easier and more effective. SKD contains a
softening processing and a learning simplifier. First, the softening processing
uses the temperature hyperparameter to soften the output logits of the teacher
model, which simplifies the output to some extent and makes it easier for the
learning simplifier to process. The learning simplifier utilizes the attention
mechanism to further simplify the knowledge of the teacher model and is jointly
trained with the student model using the distillation loss, which means that
the process of simplification is correlated with the training objective of the
student model and ensures that the simplified new teacher knowledge
representation is more suitable for the specific student model. Furthermore,
since SKD does not change the form of the distillation loss, it can be easily
combined with other distillation methods that are based on the logits or
features of intermediate layers to enhance its effectiveness. Therefore, SKD
has wide applicability. The experimental results on the CIFAR-100 and ImageNet
datasets show that our method achieves state-of-the-art performance while
maintaining high training efficiency.
|
[
"cs.CV"
] | false |
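As background for the "softening processing" step named in the SKD abstract above, here is a hedged sketch of the standard temperature-softened (Hinton-style) distillation loss; the paper's attention-based learning simplifier is not reproduced, and the temperature value is an arbitrary example.

```python
# Temperature-softened knowledge-distillation loss (standard form, sketch).
import torch
import torch.nn.functional as F

def soft_distillation_loss(student_logits, teacher_logits, T=4.0):
    """KL divergence between temperature-softened teacher and student outputs."""
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T * T)

student_logits = torch.randn(8, 100)   # e.g. a CIFAR-100 batch of 8
teacher_logits = torch.randn(8, 100)
print(soft_distillation_loss(student_logits, teacher_logits).item())
```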
2305.10899
|
2023-05-18T11:54:13Z
|
Ultra-High Resolution Segmentation with Ultra-Rich Context: A Novel
Benchmark
|
[
"Deyi Ji",
"Feng Zhao",
"Hongtao Lu",
"Mingyuan Tao",
"Jieping Ye"
] |
With the increasing interest and rapid development of methods for Ultra-High
Resolution (UHR) segmentation, a large-scale benchmark covering a wide range of
scenes with full fine-grained dense annotations is urgently needed to
facilitate the field. To this end, the URUR dataset is introduced, in the
meaning of Ultra-High Resolution dataset with Ultra-Rich Context. As the name
suggests, URUR contains amounts of images with high enough resolution (3,008
images of size 5,120x5,120), a wide range of complex scenes (from 63 cities),
rich-enough context (1 million instances with 8 categories) and fine-grained
annotations (about 80 billion manually annotated pixels), which is far superior
to all the existing UHR datasets including DeepGlobe, Inria Aerial, UDD, etc.
Moreover, we also propose WSDNet, a more efficient and effective framework for
UHR segmentation especially with ultra-rich context. Specifically, multi-level
Discrete Wavelet Transform (DWT) is naturally integrated to release computation
burden while preserve more spatial details, along with a Wavelet Smooth Loss
(WSL) to reconstruct original structured context and texture with a smooth
constrain. Experiments on several UHR datasets demonstrate its state-of-the-art
performance. The dataset is available at https://github.com/jankyee/URUR.
|
[
"cs.CV"
] | false |
2305.10926
|
2023-05-18T12:38:40Z
|
HMSN: Hyperbolic Self-Supervised Learning by Clustering with Ideal
Prototypes
|
[
"Aiden Durrant",
"Georgios Leontidis"
] |
Hyperbolic manifolds for visual representation learning allow for effective
learning of semantic class hierarchies by naturally embedding tree-like
structures with low distortion within a low-dimensional representation space.
The highly separable semantic class hierarchies produced by hyperbolic learning
have shown to be powerful in low-shot tasks, however, their application in
self-supervised learning is yet to be explored fully. In this work, we explore
the use of hyperbolic representation space for self-supervised representation
learning for prototype-based clustering approaches. First, we extend the Masked
Siamese Networks to operate on the Poincar\'e ball model of hyperbolic space,
secondly, we place prototypes on the ideal boundary of the Poincar\'e ball.
Unlike previous methods we project to the hyperbolic space at the output of the
encoder network and utilise a hyperbolic projection head to ensure that the
representations used for downstream tasks remain hyperbolic. Empirically we
demonstrate the ability of these methods to perform comparatively to Euclidean
methods in lower dimensions for linear evaluation tasks, whilst showing
improvements in extreme few-shot learning tasks.
|
[
"cs.CV"
] | false |
2305.10929
|
2023-05-18T12:43:04Z
|
Architecture-agnostic Iterative Black-box Certified Defense against
Adversarial Patches
|
[
"Di Yang",
"Yihao Huang",
"Qing Guo",
"Felix Juefei-Xu",
"Ming Hu",
"Yang Liu",
"Geguang Pu"
] |
The adversarial patch attack aims to fool image classifiers within a bounded,
contiguous region of arbitrary changes, posing a real threat to computer vision
systems (e.g., autonomous driving, content moderation, biometric
authentication, medical imaging) in the physical world. To address this problem
in a trustworthy way, proposals have been made for certified patch defenses
that ensure the robustness of classification models and prevent future patch
attacks from breaching the defense. State-of-the-art certified defenses can be
compatible with any model architecture, as well as achieve high clean and
certified accuracy. Although the methods are adaptive to arbitrary patch
positions, they inevitably need to access the size of the adversarial patch,
which is unreasonable and impractical in real-world attack scenarios. To
improve the feasibility of the architecture-agnostic certified defense in a
black-box setting (i.e., position and size of the patch are both unknown), we
propose a novel two-stage Iterative Black-box Certified Defense method, termed
IBCD. In the first stage, it estimates the patch size in a search-based manner
by evaluating the size relationship between the patch and mask with pixel
masking. In the second stage, the accuracy results are calculated by the
existing white-box certified defense methods with the estimated patch size. The
experiments conducted on two popular model architectures and two datasets
verify the effectiveness and efficiency of IBCD.
|
[
"cs.CV"
] | false |
2305.11003
|
2023-05-18T14:31:34Z
|
Weakly-Supervised Concealed Object Segmentation with SAM-based Pseudo
Labeling and Multi-scale Feature Grouping
|
[
"Chunming He",
"Kai Li",
"Yachao Zhang",
"Guoxia Xu",
"Longxiang Tang",
"Yulun Zhang",
"Zhenhua Guo",
"Xiu Li"
] |
Weakly-Supervised Concealed Object Segmentation (WSCOS) aims to segment
objects well blended with surrounding environments using sparsely-annotated
data for model training. It remains a challenging task since (1) it is hard to
distinguish concealed objects from the background due to the intrinsic
similarity and (2) the sparsely-annotated training data only provide weak
supervision for model learning. In this paper, we propose a new WSCOS method to
address these two challenges. To tackle the intrinsic similarity challenge, we
design a multi-scale feature grouping module that first groups features at
different granularities and then aggregates these grouping results. By grouping
similar features together, it encourages segmentation coherence, helping obtain
complete segmentation results for both single and multiple-object images. For
the weak supervision challenge, we utilize the recently-proposed vision
foundation model, Segment Anything Model (SAM), and use the provided sparse
annotations as prompts to generate segmentation masks, which are used to train
the model. To alleviate the impact of low-quality segmentation masks, we
further propose a series of strategies, including multi-augmentation result
ensemble, entropy-based pixel-level weighting, and entropy-based image-level
selection. These strategies help provide more reliable supervision to train the
segmentation model. We verify the effectiveness of our method on various WSCOS
tasks, and experiments demonstrate that our method achieves state-of-the-art
performance on these tasks.
|
[
"cs.CV"
] | false |
2305.11012
|
2023-05-18T14:44:27Z
|
SDC-UDA: Volumetric Unsupervised Domain Adaptation Framework for
Slice-Direction Continuous Cross-Modality Medical Image Segmentation
|
[
"Hyungseob Shin",
"Hyeongyu Kim",
"Sewon Kim",
"Yohan Jun",
"Taejoon Eo",
"Dosik Hwang"
] |
Recent advances in deep learning-based medical image segmentation studies
achieve nearly human-level performance in fully supervised manner. However,
acquiring pixel-level expert annotations is extremely expensive and laborious
in medical imaging fields. Unsupervised domain adaptation (UDA) can alleviate
this problem, which makes it possible to use annotated data in one imaging
modality to train a network that can successfully perform segmentation on
target imaging modality with no labels. In this work, we propose SDC-UDA, a
simple yet effective volumetric UDA framework for slice-direction continuous
cross-modality medical image segmentation which combines intra- and inter-slice
self-attentive image translation, uncertainty-constrained pseudo-label
refinement, and volumetric self-training. Our method is distinguished from
previous methods on UDA for medical image segmentation in that it can obtain
continuous segmentation in the slice direction, thereby ensuring higher
accuracy and potential in clinical practice. We validate SDC-UDA with multiple
publicly available cross-modality medical image segmentation datasets and
achieve state-of-the-art segmentation performance, not to mention the superior
slice-direction continuity of prediction compared to previous studies.
|
[
"cs.CV"
] | false |
2305.11031
|
2023-05-18T15:18:01Z
|
ConsistentNeRF: Enhancing Neural Radiance Fields with 3D Consistency for
Sparse View Synthesis
|
[
"Shoukang Hu",
"Kaichen Zhou",
"Kaiyu Li",
"Longhui Yu",
"Lanqing Hong",
"Tianyang Hu",
"Zhenguo Li",
"Gim Hee Lee",
"Ziwei Liu"
] |
Neural Radiance Fields (NeRF) has demonstrated remarkable 3D reconstruction
capabilities with dense view images. However, its performance significantly
deteriorates under sparse view settings. We observe that learning the 3D
consistency of pixels among different views is crucial for improving
reconstruction quality in such cases. In this paper, we propose ConsistentNeRF,
a method that leverages depth information to regularize both multi-view and
single-view 3D consistency among pixels. Specifically, ConsistentNeRF employs
depth-derived geometry information and a depth-invariant loss to concentrate on
pixels that exhibit 3D correspondence and maintain consistent depth
relationships. Extensive experiments on recent representative works reveal that
our approach can considerably enhance model performance in sparse view
conditions, achieving improvements of up to 94% in PSNR, 76% in SSIM, and 31%
in LPIPS compared to the vanilla baselines across various benchmarks, including
DTU, NeRF Synthetic, and LLFF.
|
[
"cs.CV"
] | false |
2305.11101
|
2023-05-18T16:45:26Z
|
XFormer: Fast and Accurate Monocular 3D Body Capture
|
[
"Lihui Qian",
"Xintong Han",
"Faqiang Wang",
"Hongyu Liu",
"Haoye Dong",
"Zhiwen Li",
"Huawei Wei",
"Zhe Lin",
"Cheng-Bin Jin"
] |
We present XFormer, a novel human mesh and motion capture method that
achieves real-time performance on consumer CPUs given only monocular images as
input. The proposed network architecture contains two branches: a keypoint
branch that estimates 3D human mesh vertices given 2D keypoints, and an image
branch that makes predictions directly from the RGB image features. At the core
of our method is a cross-modal transformer block that allows information to
flow across these two branches by modeling the attention between 2D keypoint
coordinates and image spatial features. Our architecture is smartly designed,
which enables us to train on various types of datasets including images with
2D/3D annotations, images with 3D pseudo labels, and motion capture datasets
that do not have associated images. This effectively improves the accuracy and
generalization ability of our system. Built on a lightweight backbone
(MobileNetV3), our method runs blazing fast (over 30fps on a single CPU core)
and still yields competitive accuracy. Furthermore, with an HRNet backbone,
XFormer delivers state-of-the-art performance on Human3.6M and 3DPW datasets.
|
[
"cs.CV"
] | false |
2305.11102
|
2023-05-18T16:45:51Z
|
Progressive Learning of 3D Reconstruction Network from 2D GAN Data
|
[
"Aysegul Dundar",
"Jun Gao",
"Andrew Tao",
"Bryan Catanzaro"
] |
This paper presents a method to reconstruct high-quality textured 3D models
from single images. Current methods rely on datasets with expensive
annotations; multi-view images and their camera parameters. Our method relies
on GAN generated multi-view image datasets which have a negligible annotation
cost. However, they are not strictly multi-view consistent and sometimes GANs
output distorted images. This results in degraded reconstruction qualities. In
this work, to overcome these limitations of generated datasets, we have two
main contributions which lead us to achieve state-of-the-art results on
challenging objects: 1) A robust multi-stage learning scheme that gradually
relies more on the model's own predictions when calculating losses, 2) A novel
adversarial learning pipeline with online pseudo-ground truth generations to
achieve fine details. Our work provides a bridge from 2D supervisions of GAN
models to 3D reconstruction models and removes the expensive annotation
efforts. We show significant improvements over previous methods whether they
were trained on GAN generated multi-view images or on real images with
expensive annotations. Please visit our web-page for 3D visuals:
https://research.nvidia.com/labs/adlr/progressive-3d-learning
|
[
"cs.CV"
] | false |
2305.11167
|
2023-05-18T17:57:29Z
|
MVPSNet: Fast Generalizable Multi-view Photometric Stereo
|
[
"Dongxu Zhao",
"Daniel Lichy",
"Pierre-Nicolas Perrin",
"Jan-Michael Frahm",
"Soumyadip Sengupta"
] |
We propose a fast and generalizable solution to Multi-view Photometric Stereo
(MVPS), called MVPSNet. The key to our approach is a feature extraction network
that effectively combines images from the same view captured under multiple
lighting conditions to extract geometric features from shading cues for stereo
matching. We demonstrate these features, termed `Light Aggregated Feature Maps'
(LAFM), are effective for feature matching even in textureless regions, where
traditional multi-view stereo methods fail. Our method produces similar
reconstruction results to PS-NeRF, a state-of-the-art MVPS method that
optimizes a neural network per-scene, while being 411$\times$ faster (105
seconds vs. 12 hours) in inference. Additionally, we introduce a new synthetic
dataset for MVPS, sMVPS, which is shown to be effective to train a
generalizable MVPS method.
|
[
"cs.CV"
] | false |
2305.11173
|
2023-05-18T17:59:10Z
|
Going Denser with Open-Vocabulary Part Segmentation
|
[
"Peize Sun",
"Shoufa Chen",
"Chenchen Zhu",
"Fanyi Xiao",
"Ping Luo",
"Saining Xie",
"Zhicheng Yan"
] |
Object detection has been expanded from a limited number of categories to
open vocabulary. Moving forward, a complete intelligent vision system requires
understanding more fine-grained object descriptions, object parts. In this
paper, we propose a detector with the ability to predict both open-vocabulary
objects and their part segmentation. This ability comes from two designs.
First, we train the detector on the joint of part-level, object-level and
image-level data to build the multi-granularity alignment between language and
image. Second, we parse the novel object into its parts by its dense semantic
correspondence with the base object. These two designs enable the detector to
largely benefit from various data sources and foundation models. In
open-vocabulary part segmentation experiments, our method outperforms the
baseline by 3.3$\sim$7.3 mAP in cross-dataset generalization on PartImageNet,
and improves the baseline by 7.3 novel AP$_{50}$ in cross-category
generalization on Pascal Part. Finally, we train a detector that generalizes to
a wide range of part segmentation datasets while achieving better performance
than dataset-specific training.
|
[
"cs.CV"
] | true |
2305.11265
|
2023-05-18T19:09:32Z
|
Multi-Focus Image Fusion Based on Spatial Frequency(SF) and Consistency
Verification(CV) in DCT Domain
|
[
"Krishnendu K. S."
] |
Multi-focus is a technique of focusing on different aspects of a particular
object or scene. Wireless Visual Sensor Networks (WVSN) use multi-focus image
fusion, which combines two or more images to create a more accurate output
image that describes the scene better than any individual input image. WVSN has
various applications, including video surveillance, monitoring, and tracking.
Therefore, a high-level analysis of these networks can benefit Biometrics. This
paper introduces an algorithm that utilizes discrete cosine transform (DCT)
standards to fuse multi-focus images in WVSNs. The spatial frequency (SF) of
the corresponding blocks from the source images determines the fusion
criterion. The blocks with higher spatial frequencies make up the DCT
presentation of the fused image, and the Consistency Verification (CV)
procedure is used to enhance the output image quality. The proposed fusion
method was tested on multiple pairs of multi-focus images coded on JPEG
standard to evaluate the fusion performance, and the results indicate that it
improves the visual quality of the output image and outperforms other DCT-based
techniques.
|
[
"cs.CV"
] | false |
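A hedged sketch of the block-wise spatial-frequency (SF) fusion criterion described in the abstract above: for each pair of co-located blocks, keep the block with the higher SF, where SF combines row and column frequencies. The DCT-domain coding and the consistency-verification (CV) pass are not reproduced, and the block size and test images are placeholders.

```python
# Block-wise spatial-frequency fusion of two multi-focus images (sketch).
import numpy as np

def spatial_frequency(block):
    """SF = sqrt(row frequency^2 + column frequency^2)."""
    rf = np.sqrt(np.mean(np.diff(block, axis=1) ** 2))
    cf = np.sqrt(np.mean(np.diff(block, axis=0) ** 2))
    return np.sqrt(rf ** 2 + cf ** 2)

def fuse_by_sf(img_a, img_b, block=8):
    """Choose, per block, the source block with the larger spatial frequency."""
    out = np.empty_like(img_a)
    H, W = img_a.shape
    for i in range(0, H, block):
        for j in range(0, W, block):
            a = img_a[i:i + block, j:j + block]
            b = img_b[i:i + block, j:j + block]
            out[i:i + block, j:j + block] = (
                a if spatial_frequency(a) >= spatial_frequency(b) else b)
    return out

rng = np.random.default_rng(0)
img_a = rng.random((64, 64))   # stand-ins for the two multi-focus source images
img_b = rng.random((64, 64))
print(fuse_by_sf(img_a, img_b).shape)
```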
2305.11337
|
2023-05-18T22:57:57Z
|
RoomDreamer: Text-Driven 3D Indoor Scene Synthesis with Coherent
Geometry and Texture
|
[
"Liangchen Song",
"Liangliang Cao",
"Hongyu Xu",
"Kai Kang",
"Feng Tang",
"Junsong Yuan",
"Yang Zhao"
] |
The techniques for 3D indoor scene capturing are widely used, but the meshes
produced leave much to be desired. In this paper, we propose "RoomDreamer",
which leverages powerful natural language to synthesize a new room with a
different style. Unlike existing image synthesis methods, our work addresses
the challenge of synthesizing both geometry and texture aligned to the input
scene structure and prompt simultaneously. The key insight is that a scene
should be treated as a whole, taking into account both scene texture and
geometry. The proposed framework consists of two significant components:
Geometry Guided Diffusion and Mesh Optimization. Geometry Guided Diffusion for
3D Scene guarantees the consistency of the scene style by applying the 2D prior
to the entire scene simultaneously. Mesh Optimization improves the geometry and
texture jointly and eliminates the artifacts in the scanned scene. To validate
the proposed method, real indoor scenes scanned with smartphones are used for
extensive experiments, through which the effectiveness of our method is
demonstrated.
|
[
"cs.CV"
] | true |
2305.11338
|
2023-05-18T23:05:01Z
|
Coordinated Transformer with Position \& Sample-aware Central Loss for
Anatomical Landmark Detection
|
[
"Qikui Zhu",
"Yihui Bi",
"Danxin Wang",
"Xiangpeng Chu",
"Jie Chen",
"Yanqing Wang"
] |
Heatmap-based anatomical landmark detection is still facing two unresolved
challenges: 1) inability to accurately evaluate the distribution of heatmap; 2)
inability to effectively exploit global spatial structure information. To
address the computational inability challenge, we propose a novel
position-aware and sample-aware central loss. Specifically, our central loss
can absorb position information, enabling accurate evaluation of the heatmap
distribution. More importantly, our central loss is sample-aware, which can
adaptively distinguish easy and hard samples and make the model more focused on
hard samples while solving the challenge of extreme imbalance between landmarks
and non-landmarks. To address the challenge of ignoring structure information,
a Coordinated Transformer, called CoorTransformer, is proposed, which
establishes long-range dependencies under the guidance of landmark coordination
information, making the attention more focused on the sparse landmarks while
taking advantage of global spatial structure. Furthermore, CoorTransformer can
speed up convergence, effectively avoiding the defect that Transformers have
difficulty converging in sparse representation learning. Using the advanced
CoorTransformer and central loss, we propose a generalized detection model that
can handle various scenarios, inherently exploiting the underlying relationship
between landmarks and incorporating rich structural knowledge around the target
landmarks. We analyzed and evaluated CoorTransformer and central loss on three
challenging landmark detection tasks. The experimental results show that our
CoorTransformer outperforms state-of-the-art methods, and the central loss
significantly improves the performance of the model with p-values< 0.05.
|
[
"cs.CV"
] | false |
2305.10643
|
2023-05-18T02:01:45Z
|
STREAMLINE: Streaming Active Learning for Realistic Multi-Distributional
Settings
|
[
"Nathan Beck",
"Suraj Kothawade",
"Pradeep Shenoy",
"Rishabh Iyer"
] |
Deep neural networks have consistently shown great performance in several
real-world use cases like autonomous vehicles, satellite imaging, etc.,
effectively leveraging large corpora of labeled training data. However,
learning unbiased models depends on building a dataset that is representative
of a diverse range of realistic scenarios for a given task. This is challenging
in many settings where data comes from high-volume streams, with each scenario
occurring in random interleaved episodes at varying frequencies. We study
realistic streaming settings where data instances arrive in and are sampled
from an episodic multi-distributional data stream. Using submodular information
measures, we propose STREAMLINE, a novel streaming active learning framework
that mitigates scenario-driven slice imbalance in the working labeled data via
a three-step procedure of slice identification, slice-aware budgeting, and data
selection. We extensively evaluate STREAMLINE on real-world streaming scenarios
for image classification and object detection tasks. We observe that STREAMLINE
improves the performance on infrequent yet critical slices of the data over
current baselines by up to $5\%$ in terms of accuracy on our image
classification tasks and by up to $8\%$ in terms of mAP on our object detection
tasks.
|
[
"cs.LG",
"cs.CV"
] | false |
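The STREAMLINE abstract above relies on submodular information measures for data selection. As a rough illustration of that family of techniques, and not the paper's actual three-step procedure, the sketch below greedily maximizes a facility-location function over a pairwise similarity matrix; all names are hypothetical.

```python
# Greedy maximization of a facility-location function, one common submodular
# measure used for data selection (illustrative; not the STREAMLINE algorithm).
import numpy as np

def greedy_facility_location(sim: np.ndarray, budget: int):
    """sim: (n, n) pairwise similarity matrix. Returns indices of points that
    greedily maximize f(S) = sum_i max_{j in S} sim[i, j]."""
    n = sim.shape[0]
    selected, best_cover = [], np.zeros(n)
    for _ in range(budget):
        # Marginal gain of adding each candidate column j.
        gains = np.maximum(sim, best_cover[:, None]).sum(0) - best_cover.sum()
        gains[selected] = -np.inf  # never reselect
        j = int(np.argmax(gains))
        selected.append(j)
        best_cover = np.maximum(best_cover, sim[:, j])
    return selected
```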
2305.10648
|
2023-05-18T02:06:06Z
|
Adjusting Logit in Gaussian Form for Long-Tailed Visual Recognition
|
[
"Mengke Li",
"Yiu-ming Cheung",
"Yang Lu",
"Zhikai Hu",
"Weichao Lan",
"Hui Huang"
] |
It is not uncommon that real-world data are distributed with a long tail. For
such data, the learning of deep neural networks becomes challenging because it
is hard to classify tail classes correctly. In the literature, several existing
methods have addressed this problem by reducing classifier bias provided that
the features obtained with long-tailed data are representative enough. However,
we find that training directly on long-tailed data leads to uneven embedding
space. That is, the embedding space of head classes severely compresses that of
tail classes, which is not conducive to subsequent classifier learning.
This paper therefore studies the problem
of long-tailed visual recognition from the perspective of feature level. We
introduce feature augmentation to balance the embedding distribution. The
features of different classes are perturbed with varying amplitudes in Gaussian
form. Based on these perturbed features, two novel logit adjustment methods are
proposed to improve model performance at a modest computational overhead.
Subsequently, the distorted embedding spaces of all classes can be calibrated.
In such balanced-distributed embedding spaces, the biased classifier can be
eliminated by simply retraining the classifier with class-balanced sampling
data. Extensive experiments conducted on benchmark datasets demonstrate the
superior performance of the proposed method over the state-of-the-art ones.
|
[
"cs.CV",
"cs.AI"
] | false |
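The abstract above describes perturbing class features "with varying amplitudes in Gaussian form"; the exact formulation belongs to the paper, but a hedged sketch of the general idea (class-dependent Gaussian noise whose amplitude grows for rarer classes; the amplitude schedule and all names here are hypothetical) could look like:

```python
# Illustrative class-dependent Gaussian feature perturbation for long-tailed
# data (a sketch of the idea, not the paper's exact method).
import torch

def perturb_features(features, labels, class_counts, base_sigma=0.1):
    """features: (B, D), labels: (B,), class_counts: (C,) training counts."""
    freqs = class_counts.float() / class_counts.sum()
    # One plausible choice: amplitude inversely related to class frequency,
    # so tail-class embeddings are spread out more than head-class ones.
    sigma = base_sigma * (freqs.max() / freqs).sqrt()          # (C,)
    noise = torch.randn_like(features) * sigma[labels].unsqueeze(1)
    return features + noise
```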
2305.10661
|
2023-05-18T02:49:07Z
|
Scribble-Supervised Target Extraction Method Based on Inner
Structure-Constraint for Remote Sensing Images
|
[
"Yitong Li",
"Chang Liu",
"Jie Ma"
] |
Weakly supervised learning based on scribble annotations for target extraction
in remote sensing images has drawn much interest due to the flexibility of
scribbles in denoting winding objects and their low manual labeling cost. However,
scribbles are too sparse to identify object structure and detailed information,
bringing great challenges in target localization and boundary description. To
alleviate these problems, in this paper, we construct two inner
structure-constraints, a deformation consistency loss and a trainable active
contour loss, together with a scribble-constraint to supervise the optimization
of the encoder-decoder network without introducing any auxiliary module or
extra operation based on prior cues. Comprehensive experiments demonstrate our
method's superiority over five state-of-the-art algorithms in this field.
Source code is available at https://github.com/yitongli123/ISC-TE.
|
[
"cs.CV",
"cs.AI"
] | false |
2305.10662
|
2023-05-18T02:51:17Z
|
Learning Differentially Private Probabilistic Models for
Privacy-Preserving Image Generation
|
[
"Bochao Liu",
"Shiming Ge",
"Pengju Wang",
"Liansheng Zhuang",
"Tongliang Liu"
] |
A number of deep models trained on high-quality and valuable images have been
deployed in practical applications, which may pose a leakage risk of data
privacy. Learning differentially private generative models can sidestep this
challenge through indirect data access. However, such differentially private
generative models learned by existing approaches can only generate images with
a low-resolution of less than 128x128, hindering the widespread usage of
generated images in downstream training. In this work, we propose learning
differentially private probabilistic models (DPPM) to generate high-resolution
images with a differential privacy guarantee. In particular, we first train a
model to fit the distribution of the training data and make it satisfy
differential privacy by performing a randomized response mechanism during the
training process. Then we perform Hamiltonian dynamics sampling along with the
differentially private movement direction predicted by the trained
probabilistic model to obtain the privacy-preserving images. In this way, it is
possible to apply these images to different downstream tasks while protecting
private information. Notably, compared to other state-of-the-art differentially
private generative approaches, our approach can generate images up to 256x256
with remarkable visual quality and data utility. Extensive experiments show the
effectiveness of our approach.
|
[
"cs.CV",
"cs.CR"
] | false |
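The DPPM abstract above mentions a randomized response mechanism. As a generic illustration of that classic mechanism on a binary signal (how the paper applies it during training is specific to DPPM and not reproduced here):

```python
# Classic randomized response on a binary signal: keep each bit with
# probability e^eps / (1 + e^eps), flip otherwise, which satisfies
# eps-local differential privacy for a single binary value.
import math
import torch

def randomized_response(bits: torch.Tensor, epsilon: float) -> torch.Tensor:
    """bits: tensor of {0, 1} values; returns the privatized bits."""
    p_keep = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    flip = torch.bernoulli(torch.full_like(bits.float(), 1.0 - p_keep))
    return (bits.float() + flip) % 2

noisy = randomized_response(torch.tensor([0, 1, 1, 0]), epsilon=1.0)
```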
2305.10691
|
2023-05-18T04:03:51Z
|
Re-thinking Data Availablity Attacks Against Deep Neural Networks
|
[
"Bin Fang",
"Bo Li",
"Shuang Wu",
"Ran Yi",
"Shouhong Ding",
"Lizhuang Ma"
] |
The unauthorized use of personal data for commercial purposes and the
clandestine acquisition of private data for training machine learning models
continue to raise concerns. In response to these issues, researchers have
proposed availability attacks that aim to render data unexploitable. However,
many current attack methods are rendered ineffective by adversarial training.
In this paper, we re-examine the concept of unlearnable examples and discern
that the existing robust error-minimizing noise presents an inaccurate
optimization objective. Building on these observations, we introduce a novel
optimization paradigm that yields improved protection results with reduced
computational time requirements. We have conducted extensive experiments to
substantiate the soundness of our approach. Moreover, our method establishes a
robust foundation for future research in this area.
|
[
"cs.CR",
"cs.CV"
] | false |
2305.10724
|
2023-05-18T05:52:06Z
|
Segment Any Anomaly without Training via Hybrid Prompt Regularization
|
[
"Yunkang Cao",
"Xiaohao Xu",
"Chen Sun",
"Yuqi Cheng",
"Zongwei Du",
"Liang Gao",
"Weiming Shen"
] |
We present a novel framework, i.e., Segment Any Anomaly + (SAA+), for
zero-shot anomaly segmentation with hybrid prompt regularization to improve the
adaptability of modern foundation models. Existing anomaly segmentation models
typically rely on domain-specific fine-tuning, limiting their generalization
across countless anomaly patterns. In this work, inspired by the great
zero-shot generalization ability of foundation models like Segment Anything, we
first explore their assembly to leverage diverse multi-modal prior knowledge
for anomaly localization. For non-parametric adaptation of foundation models to
anomaly segmentation, we further introduce hybrid prompts derived from domain
expert knowledge and target image context as regularization. Our proposed SAA+
model achieves state-of-the-art performance on several anomaly segmentation
benchmarks, including VisA, MVTec-AD, MTD, and KSDD2, in the zero-shot setting.
We will release the code at
\href{https://github.com/caoyunkang/Segment-Any-Anomaly}{https://github.com/caoyunkang/Segment-Any-Anomaly}.
|
[
"cs.CV",
"cs.AI"
] | false |
2305.10809
|
2023-05-18T08:43:57Z
|
CS-TRD: a Cross Sections Tree Ring Detection method
|
[
"Henry Marichal",
"Diego Passarella",
"Gregory Randall"
] |
This work describes a Tree Ring Detection method for complete Cross-Sections
of trees (CS-TRD). The method is based on the detection, processing, and
connection of edges corresponding to the tree's growth rings. The method
depends on the parameters for the Canny Devernay edge detector ($\sigma$ and
two thresholds), a resize factor, the number of rays, and the pith location.
The first five parameters are fixed by default. The pith location can be marked
manually or using an automatic pith detection algorithm. Besides the pith
localization, the CS-TRD method is fully automated and achieves an F-Score of
89\% on the UruDendro dataset (of Pinus taeda) with a mean execution time of 17
seconds, and of 97\% on the Kennel dataset (of Abies alba) with an average
execution time of 11 seconds.
|
[
"cs.CV",
"q-bio.PE"
] | false |
2305.10919
|
2023-05-18T12:31:33Z
|
From the Lab to the Wild: Affect Modeling via Privileged Information
|
[
"Konstantinos Makantasis",
"Kosmas Pinitas",
"Antonios Liapis",
"Georgios N. Yannakakis"
] |
How can we reliably transfer affect models trained in controlled laboratory
conditions (in-vitro) to uncontrolled real-world settings (in-vivo)? The
information gap between in-vitro and in-vivo applications defines a core
challenge of affective computing. This gap is caused by limitations related to
affect sensing including intrusiveness, hardware malfunctions and availability
of sensors. As a response to these limitations, we introduce the concept of
privileged information for operating affect models in real-world scenarios (in
the wild). Privileged information enables affect models to be trained across
multiple modalities available in a lab, and ignore, without significant
performance drops, those modalities that are not available when they operate in
the wild. Our approach is tested on two multimodal affect databases, one of
which is designed for testing models of affect in the wild. By training our
affect models using all modalities and then using solely raw footage frames for
testing the models, we reach the performance of models that fuse all available
modalities for both training and testing. The results are robust across both
classification and regression affect modeling tasks which are dominant
paradigms in affective computing. Our findings make a decisive step towards
realizing affect interaction in the wild.
|
[
"cs.HC",
"cs.CV"
] | false |
2305.10973
|
2023-05-18T13:41:25Z
|
Drag Your GAN: Interactive Point-based Manipulation on the Generative
Image Manifold
|
[
"Xingang Pan",
"Ayush Tewari",
"Thomas Leimkühler",
"Lingjie Liu",
"Abhimitra Meka",
"Christian Theobalt"
] |
Synthesizing visual content that meets users' needs often requires flexible
and precise controllability of the pose, shape, expression, and layout of the
generated objects. Existing approaches gain controllability of generative
adversarial networks (GANs) via manually annotated training data or a prior 3D
model, which often lack flexibility, precision, and generality. In this work,
we study a powerful yet much less explored way of controlling GANs, that is, to
"drag" any points of the image to precisely reach target points in a
user-interactive manner, as shown in Fig.1. To achieve this, we propose
DragGAN, which consists of two main components: 1) a feature-based motion
supervision that drives the handle point to move towards the target position,
and 2) a new point tracking approach that leverages the discriminative
generator features to keep localizing the position of the handle points.
Through DragGAN, anyone can deform an image with precise control over where
pixels go, thus manipulating the pose, shape, expression, and layout of diverse
categories such as animals, cars, humans, landscapes, etc. As these
manipulations are performed on the learned generative image manifold of a GAN,
they tend to produce realistic outputs even for challenging scenarios such as
hallucinating occluded content and deforming shapes that consistently follow
the object's rigidity. Both qualitative and quantitative comparisons
demonstrate the advantage of DragGAN over prior approaches in the tasks of
image manipulation and point tracking. We also showcase the manipulation of
real images through GAN inversion.
|
[
"cs.CV",
"cs.GR"
] | true |
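The DragGAN abstract above describes keeping track of handle points via discriminative generator features. Below is a minimal, hedged sketch of such feature-based point tracking (nearest-neighbour search of the initial handle feature within a small local window); names are hypothetical and this is not the authors' implementation.

```python
# Feature-based point tracking sketch: relocate a handle point by matching its
# original feature vector inside a local window of the current feature map.
import torch

def track_point(feat_map, ref_feat, point, radius=3):
    """feat_map: (C, H, W) current generator features,
    ref_feat: (C,) feature at the handle point before editing,
    point: (y, x) current integer handle position."""
    C, H, W = feat_map.shape
    y, x = point
    y0, y1 = max(0, y - radius), min(H, y + radius + 1)
    x0, x1 = max(0, x - radius), min(W, x + radius + 1)
    patch = feat_map[:, y0:y1, x0:x1]                        # (C, h, w)
    dists = (patch - ref_feat[:, None, None]).norm(dim=0)    # (h, w)
    idx = torch.argmin(dists)
    dy, dx = divmod(idx.item(), patch.shape[2])
    return (y0 + dy, x0 + dx)
```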
2305.11077
|
2023-05-18T16:03:37Z
|
A Comparative Study of Face Detection Algorithms for Masked Face
Detection
|
[
"Sahel Mohammad Iqbal",
"Danush Shekar",
"Subhankar Mishra"
] |
Contemporary face detection algorithms have to deal with many challenges such
as variations in pose, illumination, and scale. A subclass of the face
detection problem that has recently gained increasing attention is occluded
face detection, or more specifically, the detection of masked faces. Three
years on from the advent of the COVID-19 pandemic, there is still a complete
lack of evidence regarding how well existing face detection algorithms perform
on masked faces. This article first offers a brief review of state-of-the-art
face detectors and detectors made for the masked face problem, along with a
review of the existing masked face datasets. We evaluate and compare the
performances of a well-representative set of face detectors at masked face
detection and conclude with a discussion on the possible contributing factors
to their performance.
|
[
"cs.CV",
"cs.LG"
] | false |
2305.11080
|
2023-05-18T16:08:11Z
|
Inspecting the Geographical Representativeness of Images from
Text-to-Image Models
|
[
"Abhipsa Basu",
"R. Venkatesh Babu",
"Danish Pruthi"
] |
Recent progress in generative models has resulted in models that produce both
realistic as well as relevant images for most textual inputs. These models are
being used to generate millions of images everyday, and hold the potential to
drastically impact areas such as generative art, digital marketing and data
augmentation. Given their outsized impact, it is important to ensure that the
generated content reflects the artifacts and surroundings across the globe,
rather than over-representing certain parts of the world. In this paper, we
measure the geographical representativeness of common nouns (e.g., a house)
generated through DALL.E 2 and Stable Diffusion models using a crowdsourced
study comprising 540 participants across 27 countries. For deliberately
underspecified inputs without country names, the generated images most reflect
the surroundings of the United States followed by India, and the top
generations rarely reflect surroundings from all other countries (average score
less than 3 out of 5). Specifying the country names in the input increases the
representativeness by 1.44 points on average for DALL.E 2 and 0.75 for Stable
Diffusion; however, the overall scores for many countries still remain low,
highlighting the need for future models to be more geographically inclusive.
Lastly, we examine the feasibility of quantifying the geographical
representativeness of generated images without conducting user studies.
|
[
"cs.CV",
"cs.CL"
] | false |
2305.11089
|
2023-05-18T16:24:12Z
|
Blackout Diffusion: Generative Diffusion Models in Discrete-State Spaces
|
[
"Javier E Santos",
"Zachary R. Fox",
"Nicholas Lubbers",
"Yen Ting Lin"
] |
Typical generative diffusion models rely on a Gaussian diffusion process for
training the backward transformations, which can then be used to generate
samples from Gaussian noise. However, real-world data often lives in
discrete-state spaces, as in many scientific applications. Here, we develop
a theoretical formulation for arbitrary discrete-state Markov processes in the
forward diffusion process using exact (as opposed to variational) analysis. We
relate the theory to the existing continuous-state Gaussian diffusion as well
as other approaches to discrete diffusion, and identify the corresponding
reverse-time stochastic process and score function in the continuous-time
setting, and the reverse-time mapping in the discrete-time setting. As an
example of this framework, we introduce ``Blackout Diffusion'', which learns to
produce samples from an empty image instead of from noise. Numerical
experiments on the CIFAR-10, Binarized MNIST, and CelebA datasets confirm the
feasibility of our approach. Generalizing from specific (Gaussian) forward
processes to discrete-state processes without a variational approximation sheds
light on how to interpret diffusion models, which we discuss.
|
[
"cs.LG",
"cs.CV"
] | false |
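As a hedged illustration of the "empty image" idea in the Blackout Diffusion abstract above, the sketch below shows one simple discrete-state forward corruption in which integer pixel counts decay toward the all-zero image by binomial thinning; the paper's exact forward process and training objective are not reproduced here.

```python
# Discrete-state "blackout"-style forward corruption: each unit of pixel
# intensity independently survives to time t with probability exp(-t),
# so x_t | x_0 ~ Binomial(x_0, exp(-t)) and the image empties as t grows.
import numpy as np

def blackout_forward(x0: np.ndarray, t: float, rng=None) -> np.ndarray:
    """x0: integer array of pixel counts (e.g. 0..255); returns x_t."""
    rng = rng or np.random.default_rng()
    survive_prob = np.exp(-t)
    return rng.binomial(x0, survive_prob)

img = np.random.randint(0, 256, size=(32, 32))
print(blackout_forward(img, t=0.1).mean(), blackout_forward(img, t=5.0).mean())
```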
2305.11116
|
2023-05-18T16:57:57Z
|
LLMScore: Unveiling the Power of Large Language Models in Text-to-Image
Synthesis Evaluation
|
[
"Yujie Lu",
"Xianjun Yang",
"Xiujun Li",
"Xin Eric Wang",
"William Yang Wang"
] |
Existing automatic evaluation on text-to-image synthesis can only provide an
image-text matching score, without considering the object-level
compositionality, which results in poor correlation with human judgments. In
this work, we propose LLMScore, a new framework that offers evaluation scores
with multi-granularity compositionality. LLMScore leverages the large language
models (LLMs) to evaluate text-to-image models. Initially, it transforms the
image into image-level and object-level visual descriptions. Then an evaluation
instruction is fed into the LLMs to measure the alignment between the
synthesized image and the text, ultimately generating a score accompanied by a
rationale. Our substantial analysis reveals the highest correlation of LLMScore
with human judgments on a wide range of datasets (Attribute Binding Contrast,
Concept Conjunction, MSCOCO, DrawBench, PaintSkills). Notably, our LLMScore
achieves Kendall's tau correlation with human evaluations that is 58.8% and
31.2% higher than the commonly-used text-image matching metrics CLIP and BLIP,
respectively.
|
[
"cs.CV",
"cs.CL"
] | false |
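The LLMScore abstract above reports Kendall's tau correlation with human judgments; for reference, computing that correlation for any candidate metric is a one-liner with SciPy. The numbers below are made up purely for illustration.

```python
# Kendall's tau between hypothetical human ratings and metric scores.
from scipy.stats import kendalltau

human = [4.0, 2.5, 5.0, 1.0, 3.5]        # hypothetical human ratings
metric = [0.62, 0.40, 0.80, 0.15, 0.55]  # hypothetical metric scores
tau, p_value = kendalltau(human, metric)
print(f"Kendall's tau = {tau:.3f} (p = {p_value:.3f})")
```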
2305.11125
|
2023-05-18T17:15:08Z
|
Skin Lesion Diagnosis Using Convolutional Neural Networks
|
[
"Daniel Alonso Villanueva Nunez",
"Yongmin Li"
] |
Cancerous skin lesions are one of the most common malignancies detected in
humans, and if not detected at an early stage, they can lead to death.
Therefore, it is crucial to have access to accurate results early on to
optimize the chances of survival. Unfortunately, accurate results are typically
obtained by highly trained dermatologists, who may not be accessible to many
people, particularly in low-income and middle-income countries. Artificial
Intelligence (AI) appears to be a potential solution to this problem, as it has
proven to provide equal or even better diagnoses than healthcare professionals.
This project aims to address the issue by collecting state-of-the-art
techniques for image classification from various fields and implementing them.
Some of these techniques include mixup, presizing, and test-time augmentation,
among others. Three architectures were used for the implementation:
DenseNet121, VGG16 with batch normalization, and ResNet50. The models were
designed with two main purposes. First, to classify images into seven
categories, including melanocytic nevus, melanoma, benign keratosis-like
lesions, basal cell carcinoma, actinic keratoses and intraepithelial carcinoma,
vascular lesions, and dermatofibroma. Second, to classify images into benign or
malignant. The models were trained using a dataset of 8012 images, and their
performance was evaluated using 2003 images. It's worth noting that this model
is trained end-to-end, directly from the image to the labels, without the need
for handcrafted feature extraction.
|
[
"eess.IV",
"cs.CV"
] | false |
2305.11191
|
2023-05-18T04:17:01Z
|
Towards Generalizable Data Protection With Transferable Unlearnable
Examples
|
[
"Bin Fang",
"Bo Li",
"Shuang Wu",
"Tianyi Zheng",
"Shouhong Ding",
"Ran Yi",
"Lizhuang Ma"
] |
Artificial Intelligence (AI) is making a profound impact in almost every
domain. One of the crucial factors contributing to this success has been the
access to an abundance of high-quality data for constructing machine learning
models. Lately, as the role of data in artificial intelligence has been
significantly magnified, concerns have arisen regarding the secure utilization
of data, particularly in the context of unauthorized data usage. To mitigate
data exploitation, data unlearning has been introduced to render data
unexploitable. However, current unlearnable examples lack the generalization
required for wide applicability. In this paper, we present a novel,
generalizable data protection method by generating transferable unlearnable
examples. To the best of our knowledge, this is the first solution that
examines data privacy from the perspective of data distribution. Through
extensive experimentation, we substantiate the enhanced generalizable
protection capabilities of our proposed method.
|
[
"cs.CR",
"cs.CV"
] | false |
2305.10655
|
2023-05-18T02:26:16Z
|
DeepEdit: Deep Editable Learning for Interactive Segmentation of 3D
Medical Images
|
[
"Andres Diaz-Pinto",
"Pritesh Mehta",
"Sachidanand Alle",
"Muhammad Asad",
"Richard Brown",
"Vishwesh Nath",
"Alvin Ihsani",
"Michela Antonelli",
"Daniel Palkovics",
"Csaba Pinter",
"Ron Alkalay",
"Steve Pieper",
"Holger R. Roth",
"Daguang Xu",
"Prerna Dogra",
"Tom Vercauteren",
"Andrew Feng",
"Abood Quraini",
"Sebastien Ourselin",
"M. Jorge Cardoso"
] |
Automatic segmentation of medical images is a key step for diagnostic and
interventional tasks. However, achieving this requires large amounts of
annotated volumes, which can be a tedious and time-consuming task for expert
annotators. In this paper, we introduce DeepEdit, a deep learning-based method
for volumetric medical image annotation, that allows automatic and
semi-automatic segmentation, and click-based refinement. DeepEdit combines the
power of two methods: a non-interactive (i.e. automatic segmentation using
nnU-Net, UNET or UNETR) and an interactive segmentation method (i.e. DeepGrow),
into a single deep learning model. It allows easy integration of
uncertainty-based ranking strategies (i.e. aleatoric and epistemic uncertainty
computation) and active learning. We propose and implement a method for
training DeepEdit by using standard training combined with user interaction
simulation. Once trained, DeepEdit allows clinicians to quickly segment their
datasets by using the algorithm in auto segmentation mode or by providing
clicks via a user interface (i.e. 3D Slicer, OHIF). We show the value of
DeepEdit through evaluation on the PROSTATEx dataset for prostate/prostatic
lesions and the Multi-Atlas Labeling Beyond the Cranial Vault (BTCV) dataset
for abdominal CT segmentation, using state-of-the-art network architectures as
baseline for comparison. DeepEdit could reduce the time and effort annotating
3D medical images compared to DeepGrow alone. Source code is available at
https://github.com/Project-MONAI/MONAILabel
|
[
"eess.IV",
"cs.CV",
"cs.LG"
] | false |
2305.10727
|
2023-05-18T05:55:48Z
|
Boost Vision Transformer with GPU-Friendly Sparsity and Quantization
|
[
"Chong Yu",
"Tao Chen",
"Zhongxue Gan",
"Jiayuan Fan"
] |
The transformer extends its success from the language to the vision domain.
Because of the stacked self-attention and cross-attention blocks, accelerated
deployment of vision transformers on GPU hardware is challenging and also
rarely studied. This paper thoroughly designs a compression scheme to
maximally utilize the GPU-friendly 2:4 fine-grained structured sparsity and
quantization. Specifically, an original large model with dense weight parameters
is first pruned into a sparse one by 2:4 structured pruning, which considers
the GPU's acceleration of 2:4 structured sparse pattern with FP16 data type,
then the floating-point sparse model is further quantized into a fixed-point
one by sparse-distillation-aware quantization-aware training, which considers
that the GPU can provide an extra speedup for 2:4 sparse calculation with integer
tensors. A mixed-strategy knowledge distillation is used during the pruning and
quantization process. The proposed compression scheme is flexible to support
supervised and unsupervised learning styles. Experiment results show GPUSQ-ViT
scheme achieves state-of-the-art compression by reducing vision transformer
models 6.4-12.7 times on model size and 30.3-62 times on FLOPs with negligible
accuracy degradation on ImageNet classification, COCO detection and ADE20K
segmentation benchmarking tasks. Moreover, GPUSQ-ViT can boost actual
deployment performance by 1.39-1.79 times and 3.22-3.43 times of latency and
throughput on A100 GPU, and 1.57-1.69 times and 2.11-2.51 times improvement of
latency and throughput on AGX Orin.
|
[
"cs.CV",
"cs.LG",
"cs.PF"
] | false |
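The GPUSQ-ViT abstract above builds on 2:4 fine-grained structured sparsity. As a hedged sketch of that pattern (keep the two largest-magnitude weights in every group of four, the layout GPU sparse tensor cores accelerate), and not the paper's pruning pipeline, the names below are illustrative.

```python
# 2:4 fine-grained structured pruning: in every group of 4 consecutive weights
# along the last dimension, keep the 2 largest-magnitude values and zero the rest.
import torch

def prune_2_of_4(weight: torch.Tensor) -> torch.Tensor:
    """weight: (out_features, in_features) with in_features divisible by 4."""
    out_features, in_features = weight.shape
    assert in_features % 4 == 0, "last dim must be divisible by 4"
    groups = weight.reshape(out_features, in_features // 4, 4)
    topk = groups.abs().topk(k=2, dim=-1).indices   # 2 largest per group
    mask = torch.zeros_like(groups, dtype=torch.bool)
    mask.scatter_(-1, topk, True)
    return (groups * mask).reshape(out_features, in_features)

w_sparse = prune_2_of_4(torch.randn(8, 16))
# Every group of 4 now contains exactly 2 non-zeros.
print((w_sparse.reshape(8, 4, 4) != 0).sum(-1))
```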
2305.10766
|
2023-05-18T07:13:02Z
|
Adversarial Amendment is the Only Force Capable of Transforming an Enemy
into a Friend
|
[
"Chong Yu",
"Tao Chen",
"Zhongxue Gan"
] |
Adversarial attack is commonly regarded as a huge threat to neural networks
because of misleading behavior. This paper presents an opposite perspective:
adversarial attacks can be harnessed to improve neural models if amended
correctly. Unlike traditional adversarial defense or adversarial training
schemes that aim to improve the adversarial robustness, the proposed
adversarial amendment (AdvAmd) method aims to improve the original accuracy
level of neural models on benign samples. We thoroughly analyze the
distribution mismatch between the benign and adversarial samples. This
distribution mismatch, together with the mutual learning mechanism using the
same learning ratio in prior-art defense strategies, is the main cause of the
accuracy degradation on benign samples. The proposed AdvAmd is demonstrated to
steadily heal the accuracy degradation and even leads to a certain accuracy
boost of common neural models on benign classification, object detection, and
segmentation tasks. The efficacy of the AdvAmd is contributed by three key
components: mediate samples (to reduce the influence of distribution mismatch
with a fine-grained amendment), auxiliary batch norm (to solve the mutual
learning mechanism and the smoother judgment surface), and AdvAmd loss (to
adjust the learning ratios according to different attack vulnerabilities)
through quantitative and ablation experiments.
|
[
"cs.AI",
"cs.CR",
"cs.CV"
] | false |
2305.10882
|
2023-05-18T11:22:33Z
|
StawGAN: Structural-Aware Generative Adversarial Networks for Infrared
Image Translation
|
[
"Luigi Sigillo",
"Eleonora Grassucci",
"Danilo Comminiello"
] |
This paper addresses the problem of translating night-time thermal infrared
images, which are the most adopted image modalities to analyze night-time
scenes, to daytime color images (NTIT2DC), which provide better perceptions of
objects. We introduce a novel model that focuses on enhancing the quality of
the target generation without merely colorizing it. The proposed structure-aware
model (StawGAN) enables the translation of better-shaped and high-definition
objects in the target domain. We test our model on aerial images from the
DroneVehicle dataset, which contains paired RGB-IR images. The proposed approach
produces a more accurate translation with respect to other state-of-the-art
image translation models. The source code is available at
https://github.com/LuigiSigillo/StawGAN
|
[
"cs.CV",
"cs.LG",
"eess.IV"
] | false |
2305.10975
|
2023-05-18T13:42:15Z
|
Benchmarking Deep Learning Frameworks for Automated Diagnosis of Ocular
Toxoplasmosis: A Comprehensive Approach to Classification and Segmentation
|
[
"Syed Samiul Alam",
"Samiul Based Shuvo",
"Shams Nafisa Ali",
"Fardeen Ahmed",
"Arbil Chakma",
"Yeong Min Jang"
] |
Ocular Toxoplasmosis (OT), is a common eye infection caused by T. gondii that
can cause vision problems. Diagnosis is typically done through a clinical
examination and imaging, but these methods can be complicated and costly,
requiring trained personnel. To address this issue, we have created a benchmark
study that evaluates the effectiveness of existing pre-trained networks using
transfer learning techniques to detect OT from fundus images. Furthermore, we
have also analysed the performance of transfer-learning based segmentation
networks to segment lesions in the images. This research seeks to provide a
guide for future researchers looking to utilise DL techniques and develop a
cheap, automated, easy-to-use, and accurate diagnostic method. We have
performed in-depth analysis of different feature extraction techniques in order
to find the most optimal one for OT classification and segmentation of lesions.
For classification tasks, we have evaluated pre-trained models such as VGG16,
MobileNetV2, InceptionV3, ResNet50, and DenseNet121 models. Among them,
MobileNetV2 outperformed all other models in terms of Accuracy (Acc), Recall,
and F1 Score, exceeding the second-best model, InceptionV3, by 0.7% in Acc.
However, DenseNet121 achieved the best Precision, which was 0.1% higher than
that of MobileNetV2. For the segmentation task, this work exploited the U-Net
architecture. To utilize transfer learning, the encoder block of the traditional
U-Net was replaced by MobileNetV2, InceptionV3, ResNet34, and VGG16 to evaluate
different architectures; moreover, two different loss functions (Dice loss and
Jaccard loss) were explored to find the most suitable one. The MobileNetV2/U-Net
outperformed ResNet34 by 0.5% and 2.1% in terms of Acc and Dice Score,
respectively, when the Jaccard loss function was employed during training.
|
[
"eess.IV",
"cs.AI",
"cs.CV"
] | false |
2305.11172
|
2023-05-18T17:59:06Z
|
ONE-PEACE: Exploring One General Representation Model Toward Unlimited
Modalities
|
[
"Peng Wang",
"Shijie Wang",
"Junyang Lin",
"Shuai Bai",
"Xiaohuan Zhou",
"Jingren Zhou",
"Xinggang Wang",
"Chang Zhou"
] |
In this work, we explore a scalable way for building a general representation
model toward unlimited modalities. We release ONE-PEACE, a highly extensible
model with 4B parameters that can seamlessly align and integrate
representations across vision, audio, and language modalities. The architecture
of ONE-PEACE comprises modality adapters, shared self-attention layers, and
modality FFNs. This design allows for the easy extension of new modalities by
adding adapters and FFNs, while also enabling multi-modal fusion through
self-attention layers. To pretrain ONE-PEACE, we develop two modality-agnostic
pretraining tasks, cross-modal aligning contrast and intra-modal denoising
contrast, which align the semantic space of different modalities and capture
fine-grained details within modalities concurrently. With the scaling-friendly
architecture and pretraining tasks, ONE-PEACE has the potential to expand to
unlimited modalities. Without using any vision or language pretrained model for
initialization, ONE-PEACE achieves leading results on a wide range of uni-modal
and multi-modal tasks, including image classification (ImageNet), semantic
segmentation (ADE20K), audio-text retrieval (AudioCaps, Clotho), audio
classification (ESC-50, FSD50K, VGGSound), audio question answering (AVQA),
image-text retrieval (MSCOCO, Flickr30K), and visual grounding (RefCOCO/+/g).
Code is available at https://github.com/OFA-Sys/ONE-PEACE.
|
[
"cs.CV",
"cs.CL",
"cs.SD",
"eess.AS"
] | false |