text | label
---|---
Artificial neural networks have recently shown great results in many
disciplines and a variety of applications, including natural language
understanding, speech processing, games and image data generation. One
particular application in which the strong performance of artificial neural
networks was demonstrated is the recognition of objects in images, where deep
convolutional neural networks are commonly applied. In this survey, we give a
comprehensive introduction to this topic (object recognition with deep
convolutional neural networks), with a strong focus on the evolution of network
architectures. To this end, we aim to compress the most important concepts in
this field in a simple and non-technical manner, allowing future researchers
to gain a quick general understanding.
This work is structured as follows:
1. We will explain the basic ideas of (convolutional) neural networks and
deep learning and examine their usage for three object recognition tasks: image
classification, object localization and object detection.
2. We review the evolution of deep convolutional neural networks by providing
an extensive overview of the most important network architectures, presented
in chronological order of their appearance. | [
"cs.CV"
] |
Recurrent Neural Networks (RNNs) are powerful sequence modeling tools.
However, when dealing with high-dimensional inputs, training RNNs becomes
computationally expensive due to the large number of model parameters.
This hinders RNNs from solving many important computer vision tasks, such as
Action Recognition in Videos and Image Captioning. To overcome this problem, we
propose a compact and flexible structure, namely Block-Term tensor
decomposition, which greatly reduces the parameters of RNNs and improves their
training efficiency. Compared with alternative low-rank approximations, such
as the tensor-train RNN (TT-RNN), our method, Block-Term RNN (BT-RNN), is not
only more concise (when using the same rank), but also able to attain a better
approximation to the original RNNs with far fewer parameters. On three
challenging tasks, including Action Recognition in Videos, Image Captioning and
Image Generation, BT-RNN outperforms TT-RNN and the standard RNN in terms of
both prediction accuracy and convergence rate. Specifically, BT-LSTM uses
17,388 times fewer parameters than the standard LSTM while achieving an
accuracy improvement of over 15.6\% on the Action Recognition task on the
UCF11 dataset. | [
"cs.LG",
"stat.ML"
] |
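To make the parameter savings concrete, the following is a minimal sketch
under a simplified Block-Term accounting (one Tucker core plus one factor
matrix per tensor mode for each block term); the layer shapes, rank, and
block count are illustrative assumptions, not the settings from the abstract
above.

```python
import numpy as np

# Hypothetical factorization of a 4096 -> 1024 input-to-hidden weight matrix,
# with both sides reshaped into four tensor modes (4096 = 8^4, 1024 = 4^4).
in_modes, out_modes = [8, 8, 8, 8], [4, 4, 4, 4]
rank, n_blocks = 2, 4  # Tucker rank per mode and number of block terms

dense_params = np.prod(in_modes) * np.prod(out_modes)

# Each block term: one rank^d core tensor plus one (I_k * O_k) x rank factor
# matrix per mode (an assumed, simplified accounting).
core = rank ** len(in_modes)
factors = sum(i * o * rank for i, o in zip(in_modes, out_modes))
bt_params = n_blocks * (core + factors)

print(f"dense: {dense_params:,}  block-term: {bt_params:,}  "
      f"({dense_params / bt_params:.0f}x fewer)")
```

Even this toy setting compresses the layer by three orders of magnitude,
which is how savings on the scale reported above become possible.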
Making a machine automatically describe the content of an image with a
natural language sentence is a major challenge in computer vision. Previous
works have made great progress on this task, but they use only the global or
the local image feature, which may lose some important subtle or global
information of an image. In this paper, we propose a 3-gated model that fuses
the global and local image features together for the task of image caption
generation. The model mainly has three gated structures. 1) Gate for the global
image feature, which can adaptively decide when and how much the global image
feature should be imported into the sentence generator. 2) The gated recurrent
neural network (RNN) is used as the sentence generator. 3) The gated feedback
method for stacking RNN is employed to increase the capability of nonlinearity
fitting. More specifically, the global and local image features are combined
together in this paper, which makes full use of the image information. The
global image feature is controlled by the first gate and the local image
feature is selected by the attention mechanism. With the latter two gates, the
relationship between image and text can be well explored, which improves the
performance of the language part as well as the multi-modal embedding part.
Experimental results show that our proposed method outperforms the
state-of-the-art for image caption generation. | [
"cs.CV"
] |
We propose the use of unsupervised learning to train projection networks that
project onto the latent space of an already trained generator. We apply our
method to a trained StyleGAN, and use our projection network to perform image
super-resolution and clustering of images into semantically identifiable
groups. | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
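A minimal sketch of the unsupervised projection training described above,
assuming a frozen pretrained generator `G` that maps latents of dimension
`latent_dim` to 64x64 RGB images; the projector architecture and
reconstruction loss are illustrative choices, not the paper's exact setup.

```python
import torch
import torch.nn as nn

latent_dim = 512

# Toy projector E: image -> latent code.
E = nn.Sequential(
    nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(128 * 16 * 16, latent_dim),
)
opt = torch.optim.Adam(E.parameters(), lr=1e-4)

def train_step(G):
    z = torch.randn(16, latent_dim)
    with torch.no_grad():            # the generator stays frozen
        x = G(z)                     # assumed shape: (16, 3, 64, 64)
    loss = nn.functional.mse_loss(E(x), z)   # recover the latent that made x
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```

Because training pairs (z, G(z)) are generated on the fly, no labeled or even
real images are required, matching the unsupervised setting above.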
Semi-supervised learning (SSL) has tremendous value in practice due to its
ability to utilize both labeled and unlabelled data. An important class of
SSL methods represents data naturally as graphs, such that the label
information of unlabelled samples can be inferred from the graph; these are
graph-based semi-supervised learning (GSSL) methods. GSSL methods have
demonstrated their advantages in various domains due to their uniqueness of
structure, the universality of applications, and their scalability to
large-scale data. Focusing on this class of methods, this work
aims to provide both researchers and practitioners with a solid and systematic
understanding of relevant advances as well as the underlying connections among
them. This makes our paper distinct from recent surveys that cover an overall
picture of SSL methods while neglecting fundamental understanding of GSSL
methods. In particular, a major contribution of this paper lies in a new
generalized taxonomy for GSSL, including graph regularization and graph
embedding methods, with the most up-to-date references and useful resources
such as codes, datasets, and applications. Furthermore, we present several
potential research directions as future work with insights into this rapidly
growing field. | [
"cs.LG"
] |
Machine learning is increasingly applied in high-stakes decision making that
directly affects people's lives, and this leads to an increased demand for
systems to explain their decisions. Explanations often take the form of
counterfactuals, which convey to the end user what he or she needs to change
in order to improve the outcome. Computing counterfactual explanations is
challenging because of the inherent tension between the rich semantics of the
domain and the need for real-time responses. In this paper we
present GeCo, the first system that can compute plausible and feasible
counterfactual explanations in real time. At its core, GeCo relies on a genetic
algorithm, which is customized to favor searching counterfactual explanations
with the smallest number of changes. To achieve real-time performance, we
introduce two novel optimizations: $\Delta$-representation of candidate
counterfactuals, and partial evaluation of the classifier. We empirically
compare GeCo against five other systems described in the literature and show
that it is the only system that can achieve both high-quality explanations
and real-time answers. | [
"cs.LG",
"cs.DB"
] |
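A minimal sketch of a genetic search for counterfactuals in the spirit of the
system described above, assuming a binary classifier `clf` with a `predict`
method over numeric features; GeCo's actual $\Delta$-representation and
partial evaluation of the classifier are not modeled here.

```python
import numpy as np

def counterfactual_search(clf, x, n_pop=100, n_gen=50, sigma=0.5, seed=0):
    rng = np.random.default_rng(seed)
    target = 1 - clf.predict(x[None, :])[0]        # flip the original label
    pop = np.tile(x.astype(float), (n_pop, 1))
    for _ in range(n_gen):
        children = pop.copy()                      # mutate one feature each
        idx = rng.integers(0, x.size, size=n_pop)
        children[np.arange(n_pop), idx] += rng.normal(0, sigma, size=n_pop)
        pop = np.vstack([pop, children])
        flipped = clf.predict(pop) == target
        n_changed = (pop != x).sum(axis=1)         # favor few changes
        fitness = flipped.astype(float) - 0.01 * n_changed
        pop = pop[np.argsort(-fitness)[:n_pop]]    # elitist selection
    return pop[0]      # best candidate (check that it actually flips clf)
```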
Pattern spotting consists of searching a collection of historical document
images for occurrences of a graphical object using an image query. Contrary
to object detection, no prior information or predefined class is given about
the query, so training a model of the object is not feasible. In this paper, a
convolutional neural network approach is proposed to tackle this problem. We
use RetinaNet as a feature extractor to obtain multiscale embeddings of both
the document regions and the queries. Experiments conducted on the
DocExplore dataset show that our proposal is better at locating patterns and
requires less storage for indexing images than the state-of-the-art system, but
fails at retrieving some pages containing multiple instances of the query. | [
"cs.CV"
] |
Time Series Forecasting is at the core of many practical applications such as
sales forecasting for business, rainfall forecasting for agriculture and many
others. Though this problem has been studied extensively for years, it is
still considered challenging due to the complex and evolving nature of time
series data. Typical methods proposed for time series forecasting model
linear or non-linear dependencies between data observations. However, it is a
generally accepted notion that no single method is universally effective for
all kinds of time series data. Attempts have been made to use dynamic,
weighted combinations of heterogeneous and independent forecasting models,
and this has been found to be a promising direction for tackling the problem.
The approach is based on the assumption that different forecasters have
different specializations and varying performance on different data
distributions, and weights are dynamically assigned to the forecasters
accordingly. However, in many practical time series datasets, the data
distribution slowly evolves with time. We propose a re-weighting based method
that adjusts the weights assigned to the various forecasters in order to
account for such distribution drift. Exhaustive tests were performed on both
real-world and synthesized time series. Experimental results show the
competitiveness of the method in
comparison to state-of-the-art approaches for combining forecasters and
handling drift. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
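A minimal sketch of the drift-aware re-weighting idea above, assuming
`forecasters` is a list of fitted models exposing `.predict(history)`; the
multiplicative exponential-weights update below is one standard choice and
may differ from the paper's exact rule.

```python
import numpy as np

def combine_and_reweight(forecasters, history, y_true, weights, eta=0.5):
    preds = np.array([f.predict(history) for f in forecasters])
    ensemble = weights @ preds                 # weighted combination
    losses = (preds - y_true) ** 2             # per-forecaster recent error
    weights = weights * np.exp(-eta * losses)  # down-weight recent mistakes
    weights /= weights.sum()                   # renormalize
    return ensemble, weights
```

Because the weights are recomputed at every step from recent losses, the
ensemble can track a slowly drifting data distribution.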
This paper proposes two important contributions for conditional Generative
Adversarial Networks (cGANs) to improve the wide variety of applications that
exploit this architecture. The first main contribution is an analysis of cGANs
to show that they are not explicitly conditional. In particular, it will be
shown that the discriminator and subsequently the cGAN does not automatically
learn the conditionality between inputs. The second contribution is a new
method, called acontrario, that explicitly models conditionality for both parts
of the adversarial architecture via a novel acontrario loss that involves
training the discriminator to learn unconditional (adverse) examples. This
leads to a novel type of data augmentation approach for GANs (acontrario
learning) which allows the search space of the generator to be restricted to
conditional outputs using adverse examples. Extensive experimentation is
carried out to evaluate the conditionality of the discriminator by proposing
a probability distribution analysis. Comparisons with the cGAN architecture
on different applications show significant improvements in performance on
well-known datasets for semantic image synthesis, image segmentation and
monocular depth prediction, using different metrics including Fr\'echet
Inception Distance (FID), mean Intersection over Union (mIoU), Root Mean
Square Error log (RMSE log) and Number of statistically-Different Bins
(NDB). | [
"cs.CV",
"cs.AI"
] |
Deep neural networks (DNNs) have been extremely successful in solving many
challenging AI tasks in natural language processing, speech recognition, and
computer vision. However, DNNs are typically computation intensive,
memory demanding, and power hungry, which significantly limits their usage on
platforms with constrained resources. Therefore, a variety of compression
techniques (e.g. quantization, pruning, and knowledge distillation) have been
proposed to reduce the size and power consumption of DNNs. Blockwise knowledge
distillation is one of the compression techniques that can effectively reduce
the size of a highly complex DNN. However, it is not widely adopted due to its
long training time. In this paper, we propose a novel parallel blockwise
distillation algorithm to accelerate the distillation process of sophisticated
DNNs. Our algorithm leverages local information to conduct independent
blockwise distillation, utilizes depthwise separable layers as the efficient
replacement block architecture, and properly addresses limiting factors (e.g.
dependency, synchronization, and load balancing) that affect parallelism. The
experimental results, obtained on an AMD server with four GeForce RTX 2080Ti
GPUs, show that our algorithm achieves a 3x speedup plus 19% energy savings
on VGG distillation, and a 3.5x speedup plus 29% energy savings on ResNet
distillation, both with negligible accuracy loss. The speedup of ResNet
distillation can be further improved to 3.87x when using four RTX 6000 GPUs
in a distributed cluster. | [
"cs.LG"
] |
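A minimal sketch of independent blockwise distillation as described above,
assuming the teacher is split into a list `t_blocks` and `s_block` is the
student replacement (e.g., a depthwise-separable version) for block `stage`;
the parallel scheduling, synchronization, and load balancing discussed above
are omitted.

```python
import torch
import torch.nn as nn

def distill_block(t_blocks, s_block, stage, x, opt):
    with torch.no_grad():
        for b in t_blocks[:stage]:        # teacher forward up to this stage
            x = b(x)
        target = t_blocks[stage](x)       # teacher output of this stage
    loss = nn.functional.mse_loss(s_block(x), target)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```

Since each student block needs only the teacher activations entering its
stage, different blocks can be trained on different GPUs at the same time.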
We study the complexity of approximate representation and learning of
submodular functions over the uniform distribution on the Boolean hypercube
$\{0,1\}^n$. Our main result is the following structural theorem: any
submodular function is $\epsilon$-close in $\ell_2$ to a real-valued decision
tree (DT) of depth $O(1/\epsilon^2)$. This immediately implies that any
submodular function is $\epsilon$-close to a function of at most
$2^{O(1/\epsilon^2)}$ variables and has a spectral $\ell_1$ norm of
$2^{O(1/\epsilon^2)}$. It also implies the closest previous result, which
states that submodular functions can be approximated by polynomials of degree
$O(1/\epsilon^2)$ (Cheraghchi et al., 2012). Our result is proved by
constructing an approximation of a submodular function by a DT of rank
$4/\epsilon^2$ and a proof that any rank-$r$ DT can be $\epsilon$-approximated
by a DT of depth $\frac{5}{2}(r+\log(1/\epsilon))$.
We show that these structural results can be exploited to give an
attribute-efficient PAC learning algorithm for submodular functions running in
time $\tilde{O}(n^2) \cdot 2^{O(1/\epsilon^{4})}$. The best previous algorithm
for the problem requires $n^{O(1/\epsilon^{2})}$ time and examples (Cheraghchi
et al., 2012) but also works in the agnostic setting. In addition, we give
improved learning algorithms for a number of related settings.
We also prove that our PAC and agnostic learning algorithms are essentially
optimal via two lower bounds: (1) an information-theoretic lower bound of
$2^{\Omega(1/\epsilon^{2/3})}$ on the complexity of learning monotone
submodular functions in any reasonable model; (2) a computational lower bound
of $n^{\Omega(1/\epsilon^{2/3})}$ based on a reduction to learning sparse
parities with noise, a problem widely believed to be intractable. These are
the first
lower bounds for learning of submodular functions over the uniform
distribution. | [
"cs.LG",
"cs.CC",
"cs.DS"
] |
Current state-of-the-art methods for image segmentation form a dense image
representation where the color, shape and texture information are all processed
together inside a deep CNN. This, however, may not be ideal, as these contain
very different types of information relevant for recognition. Here, we
propose a new two-stream CNN architecture for semantic segmentation that
explicitly wires shape information as a separate processing branch, i.e. a
shape stream, that processes information in parallel to the classical stream.
Key to this architecture is a new type of gate that connects the intermediate
layers of the two streams. Specifically, we use the higher-level activations
in the classical
stream to gate the lower-level activations in the shape stream, effectively
removing noise and helping the shape stream to only focus on processing the
relevant boundary-related information. This enables us to use a very shallow
architecture for the shape stream that operates on the image-level resolution.
Our experiments show that this leads to a highly effective architecture that
produces sharper predictions around object boundaries and significantly boosts
performance on thinner and smaller objects. Our method achieves
state-of-the-art performance on the Cityscapes benchmark, in terms of both mask
(mIoU) and boundary (F-score) quality, improving by 2% and 4% over strong
baselines. | [
"cs.CV",
"cs.LG"
] |
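A minimal sketch of such a gate, assuming classical-stream features `c` and
shape-stream features `s` as 4-D tensors; the paper's exact gated
convolutions may differ from this simplified form.

```python
import torch
import torch.nn as nn

class ShapeGate(nn.Module):
    def __init__(self, c_channels, s_channels):
        super().__init__()
        self.to_gate = nn.Conv2d(c_channels + s_channels, 1, kernel_size=1)

    def forward(self, c, s):
        if c.shape[-2:] != s.shape[-2:]:  # shape stream runs at image resolution
            c = nn.functional.interpolate(c, size=s.shape[-2:],
                                          mode="bilinear", align_corners=False)
        alpha = torch.sigmoid(self.to_gate(torch.cat([c, s], dim=1)))
        return s * alpha                  # suppress non-boundary activations
```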
While GANs have shown success in realistic image generation, the idea of
using GANs for other tasks unrelated to synthesis is underexplored. Do GANs
learn meaningful structural parts of objects during their attempt to reproduce
those objects? In this work, we test this hypothesis and propose a simple and
effective approach based on GANs for semantic part segmentation that requires
as few as one label example along with an unlabeled dataset. Our key idea is to
leverage a trained GAN to extract a pixel-wise representation from the input
image and use it as feature vectors for a segmentation network. Our
experiments demonstrate that the GAN representation is "readily
discriminative" and produces surprisingly good results that are comparable to
those from supervised
baselines trained with significantly more labels. We believe this novel
repurposing of GANs underlies a new class of unsupervised representation
learning that is applicable to many other tasks. More results are available at
https://repurposegans.github.io/. | [
"cs.CV",
"cs.LG"
] |
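A minimal sketch of the key idea above, assuming the generator's blocks are
iterable and each consumes and produces a 4-D feature map (the latent already
mapped to an initial spatial tensor); upsampled per-layer features are
concatenated into one vector per pixel for a few-shot segmenter.

```python
import torch

@torch.no_grad()
def pixelwise_features(blocks, h, out_size):
    feats = []
    for block in blocks:        # assumed: tensor in, tensor out
        h = block(h)
        feats.append(torch.nn.functional.interpolate(
            h, size=out_size, mode="bilinear", align_corners=False))
    return torch.cat(feats, dim=1)   # (B, total channels, H, W)
```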
Feature importance ranking has become a powerful tool for explainable AI.
However, its combinatorial-optimization nature poses a great challenge for
deep learning. In this paper, we propose a novel dual-net architecture
consisting of an operator and a selector, which simultaneously discovers an
optimal feature subset of a fixed size and ranks the importance of the
features in that subset. During learning, the operator is trained for a
supervised learning task via optimal feature subset candidates generated by
the selector, which learns to predict the learning performance of the
operator working on different optimal subset candidates. We develop an
alternate
learning algorithm that trains two nets jointly and incorporates a stochastic
local search procedure into learning to address the combinatorial optimization
challenge. In deployment, the selector generates an optimal feature subset and
ranks feature importance, while the operator makes predictions based on the
optimal subset for test data. A thorough evaluation on synthetic, benchmark and
real data sets suggests that our approach outperforms several state-of-the-art
feature importance ranking and supervised feature selection methods. (Our
source code is available: https://github.com/maksym33/FeatureImportanceDL) | [
"cs.LG",
"cs.AI",
"cs.CV"
] |
We present Cycle-Contrastive Learning (CCL), a novel self-supervised method
for learning video representations. Motivated by the natural belonging and
inclusion relations between a video and its frames, CCL is designed to find
correspondences across frames and videos, considering the contrastive
representation in their respective domains. This differs from recent
approaches that merely learn correspondences across frames or clips. In our
method, the frame and video representations are learned from a single network
based on an R3D architecture, with a shared non-linear transformation for
embedding both frame and video features before the cycle-contrastive loss. We
demonstrate that the video representation learned by CCL can be transferred
well to downstream tasks of video understanding, outperforming previous methods
in nearest neighbour retrieval and action recognition tasks on UCF101, HMDB51
and MMAct. | [
"cs.CV",
"cs.LG"
] |
In generative modeling, the Wasserstein distance (WD) has emerged as a useful
metric to measure the discrepancy between generated and real data
distributions. Unfortunately, it is challenging to approximate the WD of
high-dimensional distributions. In contrast, the sliced Wasserstein distance
(SWD) factorizes high-dimensional distributions into their multiple
one-dimensional marginal distributions and is thus easier to approximate. In
this paper, we introduce novel approximations of the primal and dual SWD.
Instead of using a large number of random projections, as is done by
conventional SWD approximation methods, we propose to approximate SWDs with a
small number of parameterized orthogonal projections in an end-to-end deep
learning fashion. As concrete applications of our SWD approximations, we
design two types of differentiable SWD blocks to equip modern generative
frameworks: Auto-Encoders (AE) and Generative Adversarial Networks (GAN). In
the experiments, we not only show the superiority of the proposed generative
models on standard image synthesis benchmarks, but also demonstrate the
state-of-the-art performance on challenging high resolution image and video
generation in an unsupervised manner. | [
"cs.CV",
"stat.ML"
] |
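A minimal sketch of a sliced Wasserstein distance with a small set of
trainable projections, assuming equal-sized sample batches `x`, `y` of shape
(N, D); a QR decomposition keeps the `K` projections orthonormal, and the
whole computation is differentiable so `theta` can be learned end-to-end.

```python
import torch

def sliced_wasserstein(x, y, theta):
    q, _ = torch.linalg.qr(theta)           # (D, K) orthonormal projections
    px = torch.sort(x @ q, dim=0).values    # sorted 1-D marginals
    py = torch.sort(y @ q, dim=0).values
    return ((px - py) ** 2).mean()          # mean squared 1-D transport cost

theta = torch.randn(64, 8, requires_grad=True)  # D=64 features, K=8 slices
```

Sorting each projected marginal gives the optimal 1-D coupling, which is why
the slice costs reduce to differences of order statistics.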
With the proliferation of mobile devices and the internet of things,
developing principled solutions for privacy in time series applications has
become increasingly important. While differential privacy is the gold standard
for database privacy, many time series applications require a different kind of
guarantee, and a number of recent works have used some form of inferential
privacy to address these situations.
However, a major barrier to using inferential privacy in practice is its lack
of graceful composition -- even if the same or related sensitive data is used
in multiple releases that are safe individually, the combined release may have
poor privacy properties. In this paper, we study composition properties of a
form of inferential privacy called Pufferfish when applied to time-series data.
We show that while general Pufferfish mechanisms may not compose gracefully, a
specific Pufferfish mechanism, called the Markov Quilt Mechanism, which was
recently introduced, has strong composition properties comparable to that of
pure differential privacy when applied to time series data. | [
"cs.LG",
"cs.CR",
"stat.ML"
] |
In this paper, we address the task of layout-to-image translation, which aims
to translate an input semantic layout to a realistic image. One open challenge
widely observed in existing methods is the lack of effective semantic
constraints during the image translation process, leading to models that cannot
preserve the semantic information and ignore the semantic dependencies within
the same object. To address this issue, we propose a novel Double Pooling GAN
(DPGAN) for generating photo-realistic and semantically-consistent results
from
the input layout. We also propose a novel Double Pooling Module (DPM), which
consists of the Square-shape Pooling Module (SPM) and the Rectangle-shape
Pooling Module (RPM). Specifically, SPM aims to capture short-range semantic
dependencies of the input layout with different spatial scales, while RPM aims
to capture long-range semantic dependencies from both horizontal and vertical
directions. We then effectively fuse both outputs of SPM and RPM to further
enlarge the receptive field of our generator. Extensive experiments on five
popular datasets show that the proposed DPGAN achieves better results than
state-of-the-art methods. Finally, both SPM and RPM are general and can be
seamlessly integrated into any GAN-based architecture to strengthen the
feature representation. The code is available at
https://github.com/Ha0Tang/DPGAN. | [
"cs.CV"
] |
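A minimal sketch of the two pooling branches, assuming features of shape
(B, C, H, W); the fusion and exact configurations in the paper may differ.

```python
import torch
import torch.nn as nn

class DoublePooling(nn.Module):
    def forward(self, x):
        h, w = x.shape[-2:]
        up = lambda t: nn.functional.interpolate(
            t, size=(h, w), mode="bilinear", align_corners=False)
        # SPM: square pooling at several scales (short-range context).
        spm = sum(up(nn.functional.adaptive_avg_pool2d(x, (k, k)))
                  for k in (1, 2, 4, 8))
        # RPM: strip pooling along both axes (long-range context).
        rpm = (up(nn.functional.adaptive_avg_pool2d(x, (1, w))) +
               up(nn.functional.adaptive_avg_pool2d(x, (h, 1))))
        return x + spm + rpm        # enlarged receptive field
```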
Graphical models have gained a lot of attention recently as a tool for
learning and representing dependencies among variables in multivariate data.
Often, domain scientists are looking specifically for differences among the
dependency networks of different conditions or populations (e.g. differences
between regulatory networks of different species, or differences between
dependency networks of diseased versus healthy populations). The standard
method for finding these differences is to learn the dependency networks for
each condition independently and compare them. We show that this approach is
prone to high false discovery rates (low precision) that can render the
analysis useless. We then show that by imposing a bias towards learning similar
dependency networks for each condition the false discovery rates can be reduced
to acceptable levels, at the cost of finding a reduced number of differences.
Algorithms developed in the transfer learning literature can be used to vary
the strength of the imposed similarity bias and provide a natural mechanism to
smoothly adjust this differential precision-recall tradeoff to cater to the
requirements of the analysis conducted. We present real case studies
(oncological and neurological) where domain experts use the proposed technique
to extract useful differential networks that shed light on the biological
processes involved in cancer and brain function. | [
"stat.ML",
"cs.LG"
] |
We propose Scale-aware AutoAug to learn data augmentation policies for object
detection. We define a new scale-aware search space, where both image- and
box-level augmentations are designed for maintaining scale invariance. Upon
this search space, we propose a new search metric, termed Pareto Scale Balance,
to facilitate search with high efficiency. In experiments, Scale-aware AutoAug
yields significant and consistent improvement on various object detectors
(e.g., RetinaNet, Faster R-CNN, Mask R-CNN, and FCOS), even compared with
strong multi-scale training baselines. Our searched augmentation policies are
transferable to other datasets and box-level tasks beyond object detection
(e.g., instance segmentation and keypoint estimation) to improve performance.
The search cost is much less than previous automated augmentation approaches
for object detection. It is notable that our searched policies have meaningful
patterns, which intuitively provide valuable insight for human data
augmentation design. Code and models will be available at
https://github.com/Jia-Research-Lab/SA-AutoAug. | [
"cs.CV"
] |
A well-trained and generalized deep neural network (DNN) should be robust to
both seen and unseen classes. However, the performance of most existing
supervised DNN algorithms degrades on classes that are unseen in the training
set. To learn a discriminative classifier which yields good performance in
Zero-Shot Learning (ZSL) settings, we propose to generate an Over-Complete
Distribution (OCD) using a Conditional Variational Autoencoder (CVAE) for
both seen and unseen classes. In order to enforce the separability between
classes
and reduce the class scatter, we propose the use of Online Batch Triplet Loss
(OBTL) and Center Loss (CL) on the generated OCD. The effectiveness of the
framework is evaluated using both Zero-Shot Learning and Generalized Zero-Shot
Learning protocols on three publicly available benchmark databases, SUN, CUB
and AWA2. The results show that generating over-complete distributions and
enforcing the classifier to learn a transform function from overlapping to
non-overlapping distributions can improve the performance on both seen and
unseen classes. | [
"cs.CV",
"cs.LG"
] |
Metric learning aims to improve classification accuracy by learning a
distance measure which brings data points from the same class closer together
and pushes data points from different classes further apart. Recent research
has demonstrated that metric learning approaches can also be applied to trees,
such as molecular structures, abstract syntax trees of computer programs, or
syntax trees of natural language, by learning the cost function of an edit
distance, i.e. the costs of replacing, deleting, or inserting nodes in a tree.
However, learning such costs directly may yield an edit distance which violates
metric axioms, is challenging to interpret, and may not generalize well. In
this contribution, we propose a novel metric learning approach for trees which
learns an edit distance indirectly by embedding the tree nodes as vectors, such
that the Euclidean distance between those vectors supports class
discrimination. We learn such embeddings by reducing the distance to
prototypical trees from the same class and increasing the distance to
prototypical trees from different classes. In our experiments, we show that our
proposed metric learning approach improves upon the state-of-the-art in metric
learning for trees on six benchmark data sets, ranging from computer science
and biomedical data to a natural language processing data set containing over
300,000 nodes. | [
"cs.LG",
"stat.ML"
] |
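A minimal sketch of the prototype-based objective above, assuming tree
embeddings `emb` (B, D), class prototypes `protos` (C, D), and integer labels
`y`; the hinge form below is an assumption, and the paper's loss may differ.

```python
import torch

def prototype_loss(emb, protos, y, margin=1.0):
    d = torch.cdist(emb, protos)                 # (B, C) Euclidean distances
    pos = d.gather(1, y[:, None]).squeeze(1)     # distance to own prototype
    neg = d.scatter(1, y[:, None], float("inf")).min(dim=1).values
    return torch.relu(pos - neg + margin).mean() # pull same, push different
```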
The bucketed PCA neural network (PCA-NN) with transforms is developed here in
an effort to benchmark deep neural networks (DNNs) on supervised
classification problems. Most classical PCA models apply PCA to the entire
training data
set to establish a reductive representation and then employ non-network tools
such as high-order polynomial classifiers. In contrast, the bucketed PCA-NN
applies PCA to individual buckets which are constructed in two consecutive
phases, as well as retains a genuine architecture of a neural network. This
facilitates a fair apples-to-apples comparison with DNNs, especially in
revealing that a major chunk of the accuracy achieved by many impressive DNNs
could possibly be explained by the bucketed PCA-NN (e.g., 96% out of 98% on
the MNIST data set). Compared with most DNNs, the three building blocks of
the bucketed PCA-NN are easier to comprehend conceptually: PCA, transforms,
and bucketing for error correction. Furthermore, unlike the somewhat
quasi-random neurons ubiquitously observed in DNNs, the PCA neurons resemble
or mirror the input signals and are more straightforward to decipher as a
result. | [
"cs.LG",
"cs.AI",
"math.OC",
"stat.ML",
"I.2.10; I.2.6"
] |
An important pillar for safe machine learning (ML) is the systematic
mitigation of weaknesses in neural networks to afford their deployment in
critical applications. A ubiquitous class of safety risks are learned
shortcuts, i.e. spurious correlations a network exploits for its decisions
that
have no semantic connection to the actual task. Networks relying on such
shortcuts bear the risk of not generalizing well to unseen inputs.
Explainability methods help to uncover such network vulnerabilities. However,
many of these techniques are not directly applicable if access to the network
is constrained, in so-called black-box setups. These setups are prevalent when
using third-party ML components. To address this constraint, we present an
approach to detect learned shortcuts using an interpretable-by-design network
as a proxy to the black-box model of interest. Leveraging the proxy's
guarantees on introspection we automatically extract candidates for learned
shortcuts. Their transferability to the black box is validated in a systematic
fashion. Concretely, as proxy model we choose a BagNet, which bases its
decisions purely on local image patches. We demonstrate on the autonomous
driving dataset A2D2 that extracted patch shortcuts significantly influence the
black box model. By efficiently identifying such patch-based vulnerabilities,
we contribute to safer ML models. | [
"cs.CV",
"cs.CR",
"cs.LG"
] |
Foreground (FG) pixel labelling plays a vital role in video surveillance.
Recent engineering solutions have attempted to exploit the efficacy of deep
learning (DL) models initially targeted for image classification to deal with
FG pixel labelling. One major drawback of such a strategy is the poor
delineation of visual objects when training samples are limited. To grapple
with this issue, we introduce a multi-view receptive field fully
convolutional neural network (MV-FCN) that harnesses recent seminal ideas
such as fully convolutional structures, inception modules, and residual
networking. Accordingly, we implement a system in an encoder-decoder fashion
that subsumes a core and
two complementary feature flow paths. The model exploits inception modules at
early and late stages with three different sizes of receptive fields to capture
invariance at various scales. The features learned in the encoding phase are
fused with appropriate feature maps in the decoding phase through residual
connections for achieving enhanced spatial representation. These multi-view
receptive fields and residual feature connections are expected to yield highly
generalized features for accurate pixel-wise FG region identification. It is
then trained with database-specific exemplary segmentations to predict
desired FG objects.
Comparative experimental results on eleven benchmark datasets validate that
the proposed model achieves very competitive performance with prior and
state-of-the-art algorithms. We also report how well a transfer learning
approach can enhance the performance of our proposed
MV-FCN. | [
"cs.CV"
] |
Deep neural networks (DNNs) have shown superior performances on various
multimodal learning problems. However, it often requires huge efforts to adapt
DNNs to individual multimodal tasks by manually engineering unimodal features
and designing multimodal feature fusion strategies. This paper proposes Bilevel
Multimodal Neural Architecture Search (BM-NAS) framework, which makes the
architecture of multimodal fusion models fully searchable via a bilevel
searching scheme. At the upper level, BM-NAS selects the inter/intra-modal
feature pairs from the pretrained unimodal backbones. At the lower level,
BM-NAS learns the fusion strategy for each feature pair, which is a combination
of predefined primitive operations. The primitive operations are elaborately
designed and they can be flexibly combined to accommodate various effective
feature fusion modules such as multi-head attention (Transformer) and Attention
on Attention (AoA). Experimental results on three multimodal tasks demonstrate
the effectiveness and efficiency of the proposed BM-NAS framework. BM-NAS
achieves competitive performance with much less search time and fewer model
parameters in comparison with existing generalized multimodal NAS methods. | [
"cs.CV",
"cs.LG"
] |
Digital camera pipelines employ color constancy methods to estimate an
unknown scene illuminant, in order to re-illuminate images as if they were
acquired under an achromatic light source. Fully-supervised learning approaches
exhibit state-of-the-art estimation accuracy with camera-specific labelled
training imagery. Resulting models typically suffer from domain gaps and fail
to generalise across imaging devices. In this work, we propose a new approach
that affords fast adaptation to previously unseen cameras, and robustness to
changes in capture device by leveraging annotated samples across different
cameras and datasets. We present a general approach that utilizes the concept
of color temperature to frame color constancy as a set of distinct, homogeneous
few-shot regression tasks, each associated with an intuitive physical meaning.
We integrate this novel formulation within a meta-learning framework, enabling
fast generalisation to previously unseen cameras using only handfuls of
camera-specific training samples. Consequently, the time spent on data
collection and annotation diminishes substantially in practice whenever a new
sensor is used.
To quantify this gain, we evaluate our pipeline on three publicly available
datasets comprising 12 different cameras and diverse scene content. Our
approach delivers competitive results both qualitatively and quantitatively
while requiring a small fraction of the camera-specific samples compared to
standard approaches. | [
"cs.CV",
"stat.ML"
] |
While maximizing expected return is the goal in most reinforcement learning
approaches, risk-sensitive objectives such as conditional value at risk (CVaR)
are more suitable for many high-stakes applications. However, relatively little
is known about how to explore to quickly learn policies with good CVaR. In this
paper, we present the first algorithm for sample-efficient learning of
CVaR-optimal policies in Markov decision processes based on the optimism in the
face of uncertainty principle. This method relies on a novel optimistic version
of the distributional Bellman operator that moves probability mass from the
lower to the upper tail of the return distribution. We prove asymptotic
convergence and optimism of this operator for the tabular policy evaluation
case. We further demonstrate that our algorithm finds CVaR-optimal policies
substantially faster than existing baselines in several simulated environments
with discrete and continuous state spaces. | [
"cs.LG",
"cs.AI"
] |
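A minimal sketch of the optimistic shift on a categorical return
distribution, assuming an ascending support `z` with probabilities `p`:
moving an optimism budget `c` of mass from the lowest atoms to the highest
one can only raise the CVaR estimate. The full algorithm applies such a shift
inside a distributional Bellman backup, which is not shown.

```python
import numpy as np

def optimistic_shift(p, c):
    p, budget = p.astype(float).copy(), c
    for i in range(len(p)):        # drain mass from the lower tail
        take = min(p[i], budget)
        p[i] -= take
        budget -= take
        if budget <= 0:
            break
    p[-1] += c - budget            # deposit it on the best outcome
    return p

def cvar(z, p, alpha):
    # Expected return over the worst alpha-fraction of outcomes.
    mass, acc = 0.0, 0.0
    for zi, pi in zip(z, p):       # z assumed sorted ascending
        w = min(pi, alpha - mass)
        acc += w * zi
        mass += w
        if mass >= alpha:
            break
    return acc / alpha

z, p = np.array([-1.0, 0.0, 1.0]), np.array([0.3, 0.4, 0.3])
print(cvar(z, p, 0.3), cvar(z, optimistic_shift(p, 0.2), 0.3))  # -1.0, -0.33
```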
The presence of noisy instances in mobile phone data is a fundamental issue
for classifying user phone call behavior (i.e., accept, reject, missed and
outgoing), with many potential negative consequences. The classification
accuracy may decrease and the complexity of the classifiers may increase due to
the number of redundant training samples. To detect such noisy instances from a
training dataset, researchers use naive Bayes classifier (NBC) as it identifies
misclassified instances by taking into account independence assumption and
conditional probabilities of the attributes. However, some of these
misclassified instances might reflect the usage behavioral patterns of
individual mobile phone users. Existing naive Bayes classifier based noise
detection techniques have not considered this issue and thus fall short in
classification accuracy. In this paper, we propose an improved noise
detection
technique based on naive Bayes classifier for effectively classifying users'
phone call behaviors. In order to improve the classification accuracy, we
effectively identify noisy instances from the training dataset by analyzing the
behavioral patterns of individuals. We dynamically determine a noise threshold
according to individual's unique behavioral patterns by using both the naive
Bayes classifier and Laplace estimator. We use this noise threshold to identify
noisy instances. To measure the effectiveness of our technique in classifying
user phone call behavior, we employ the most popular classification algorithm
(decision tree). Experimental results on a real phone call log dataset show
that our proposed technique more accurately identifies the noisy instances in
the training datasets, which leads to better classification accuracy. | [
"cs.LG",
"cs.SI",
"stat.ML"
] |
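A minimal sketch of naive Bayes noise scoring with a Laplace estimator,
assuming categorical features `X` (n, d), labels `y`, and `n_values[j]`
distinct values for feature j; flagging by a fixed posterior quantile is an
illustrative stand-in for the individualized, behavior-derived threshold
described above.

```python
import numpy as np

def true_class_log_post(X, y, x_new, c, n_classes, n_values):
    Xc = X[y == c]
    logp = np.log((len(Xc) + 1) / (len(X) + n_classes))   # smoothed prior
    for j, v in enumerate(x_new):
        count = (Xc[:, j] == v).sum()
        logp += np.log((count + 1) / (len(Xc) + n_values[j]))  # Laplace
    return logp

def flag_noise(X, y, n_classes, n_values, q=0.05):
    scores = np.array([true_class_log_post(X, y, x, c, n_classes, n_values)
                       for x, c in zip(X, y)])
    return scores < np.quantile(scores, q)    # True = likely noisy instance
```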
We consider the problem of learning an unknown Markov Decision Process (MDP)
that is weakly communicating in the infinite horizon setting. We propose a
Thompson Sampling-based reinforcement learning algorithm with dynamic episodes
(TSDE). At the beginning of each episode, the algorithm generates a sample from
the posterior distribution over the unknown model parameters. It then follows
the optimal stationary policy for the sampled model for the rest of the
episode. The duration of each episode is dynamically determined by two
stopping criteria. The first controls the growth rate of the episode length;
the second is triggered when the number of visits to some state-action pair
doubles. We establish $\tilde O(HS\sqrt{AT})$ bounds on
expected regret under a Bayesian setting, where $S$ and $A$ are the sizes of
the state and action spaces, $T$ is time, and $H$ is the bound of the span.
This regret bound matches the best available bound for weakly communicating
MDPs. Numerical results show it to perform better than existing algorithms for
infinite horizon MDPs. | [
"cs.LG"
] |
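A minimal sketch of the dynamic-episode loop, assuming a tabular `env` with
`reset`/`step`, a `posterior` object with an `update` method, `sample_mdp`
drawing parameters from the posterior, and `solve` returning the optimal
stationary policy of a sampled model; all of these interfaces are
illustrative.

```python
import numpy as np

def tsde(env, posterior, sample_mdp, solve, T):
    visits = np.zeros((env.n_states, env.n_actions))
    t, t_k, len_prev = 0, 0, 0
    s = env.reset()
    while t < T:
        policy = solve(sample_mdp(posterior))  # one posterior sample/episode
        visits_k = visits.copy()               # counts at episode start
        while t < T:
            a = policy[s]
            s_next, r = env.step(a)
            posterior.update(s, a, r, s_next)
            visits[s, a] += 1
            t += 1
            # Criterion 2: some state-action visit count has doubled.
            doubled = visits[s, a] >= 2 * max(visits_k[s, a], 1)
            s = s_next
            # Criterion 1: episode longer than the previous episode + 1.
            if t - t_k > len_prev + 1 or doubled:
                break
        len_prev, t_k = t - t_k, t
    return posterior
```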
Deep neural networks are known to be extremely vulnerable to adversarial
examples in the white-box setting. Moreover, adversarial examples crafted on
a surrogate (source) model often exhibit black-box transferability to other
models trained for the same learning task but with different architectures.
Recently, various methods have been proposed to boost adversarial
transferability, among which input transformation is one of the most
effective approaches. We investigate this direction and observe that existing
transformations are all applied to a single image, which might limit the
adversarial transferability. To this end, we propose a new input
transformation based attack method called Admix that considers the input image
and a set of images randomly sampled from other categories. Instead of directly
calculating the gradient on the original input, Admix calculates the gradient
on the input image admixed with a small portion of each add-in image while
using the original label of the input to craft more transferable adversaries.
Empirical evaluations on the standard ImageNet dataset demonstrate that Admix
achieves significantly better transferability than existing input
transformation methods under both the single-model and the ensemble-model
setting. Combined with existing input transformations, our method further
improves transferability and outperforms the state-of-the-art combination of
input transformations by a clear margin when attacking nine advanced defense
models under the ensemble-model setting. Code is available at
https://github.com/JHL-HUST/Admix. | [
"cs.CV",
"cs.CR"
] |
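A minimal sketch of an Admix-style gradient, assuming a differentiable
`model`, cross-entropy `ce`, input batch `x` with labels `y`, and `others`, a
list of images sampled from other categories; the `1/2**i` scale copies
follow the scale-invariant family of attacks the method builds on, and all
constants are illustrative.

```python
import torch

def admix_grad(model, ce, x, y, others, eta=0.2, n_scales=3):
    grad = torch.zeros_like(x)
    for x_other in others:            # admix a small portion of each add-in
        for i in range(n_scales):
            x_adm = ((x + eta * x_other) / 2 ** i).detach().requires_grad_(True)
            loss = ce(model(x_adm), y)     # keep the original label of x
            loss.backward()
            grad += x_adm.grad
    return grad / (len(others) * n_scales)
```

The averaged gradient would then drive an iterative FGSM-style update, as in
standard transfer attacks.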
Human Activity Recognition from body-worn sensor data poses an inherent
challenge in capturing spatial and temporal dependencies of time-series
signals. In this regard, existing recurrent, convolutional, and hybrid models
for activity recognition struggle to capture spatio-temporal context from the
feature space of sensor reading sequences. To address this complex problem,
we propose a self-attention based neural network model that foregoes
recurrent architectures and utilizes different types of attention mechanisms
to generate the higher-dimensional feature representation used for
classification. We performed extensive experiments on four popular publicly
available HAR datasets: PAMAP2, Opportunity, Skoda and USC-HAD. Our model
achieves significant performance improvements over recent state-of-the-art
models on both benchmark test subjects and in Leave-one-subject-out
evaluation. We also observe that the sensor attention maps produced by our
model are able to capture the importance of sensor modality and placement in
predicting the different activity classes. | [
"cs.CV",
"cs.LG",
"stat.ML"
] |
One of the key challenges when developing a predictive model is the
capability to describe the domain knowledge and the cause-effect relationships
in a simple way. Decision rules are a useful and important methodology in this
context, justifying their application in several areas, in particular in
clinical practice. Several machine-learning classifiers have exploited the
advantageous properties of decision rules to build intelligent prediction
models, namely decision trees and ensembles of trees (ETs). However, such
methodologies usually suffer from a trade-off between interpretability and
predictive performance. Some procedures consider a simplification of ETs, using
heuristic approaches to select an optimal reduced set of decision rules. In
this paper, we introduce a novel step to those methodologies. We create a new
component to predict if a given rule will be correct or not for a particular
patient, which introduces personalization into the procedure. Furthermore,
validation results on three public clinical datasets show that it also
increases the predictive performance of the selected set of rules, improving
the aforementioned trade-off. | [
"cs.LG"
] |
Recently, the application of machine learning models has gained momentum in
natural sciences and engineering, which is a natural fit due to the abundance
of data in these fields. However, the modeling of physical processes from
simulation data without first principle solutions remains difficult. Here, we
present a Graph Neural Networks approach towards accurate modeling of complex
3D granular flow simulation processes created by the discrete element method
LIGGGHTS and concentrate on simulations of physical systems found in real world
applications like rotating drums and hoppers. We discuss how to implement
Graph Neural Networks that deal with 3D objects, boundary conditions,
particle-particle, and particle-boundary interactions such that an accurate
modeling
of relevant physical quantities is made possible. Finally, we compare the
machine learning based trajectories to LIGGGHTS trajectories in terms of
particle flows and mixing entropies. | [
"cs.LG",
"stat.ML"
] |
The abundance of data has given machine learning huge momentum in natural
sciences and engineering. However, the modeling of simulated physical processes
remains difficult. A key problem in doing so is the correct handling of
geometric boundaries. While triangularized geometric boundaries are very common
in engineering applications, they are notoriously difficult to model by machine
learning approaches due to their heterogeneity with respect to size and
orientation. In this work, we introduce Boundary Graph Neural Networks (BGNNs),
which dynamically modify graph structures to address boundary conditions.
Boundary graph structures are constructed via modifying edges, augmenting node
features, and dynamically inserting virtual nodes. The new BGNNs are tested on
complex 3D granular flow processes of hoppers and rotating drums which are
standard parts of industrial machinery. Using precise simulations that are
obtained by an expensive and complex discrete element method, BGNNs are
evaluated in terms of computational efficiency as well as prediction accuracy
of particle flows and mixing entropies. Even if complex boundaries are present,
BGNNs are able to accurately reproduce 3D granular flows within simulation
uncertainties over hundreds of thousands of simulation timesteps, and most
notably particles completely stay within the geometric objects without using
handcrafted conditions or restrictions. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
Event cameras are novel sensors that perceive the per-pixel intensity changes
and output asynchronous event streams with high dynamic range and less motion
blur. It has been shown that events alone can be used for end-task learning,
e.g., semantic segmentation, based on encoder-decoder-like networks. However,
as events are sparse and mostly reflect edge information, it is difficult to
recover the original details merely by relying on the decoder. Moreover, most
methods resort to a pixel-wise loss alone for supervision, which might be
insufficient to fully exploit the visual details from sparse events, thus
leading to suboptimal performance. In this paper, we propose a simple yet
flexible two-stream
framework named Dual Transfer Learning (DTL) to effectively enhance the
performance on the end-tasks without adding extra inference cost. The proposed
approach consists of three parts: event to end-task learning (EEL) branch,
event to image translation (EIT) branch, and transfer learning (TL) module that
simultaneously explores the feature-level affinity information and pixel-level
knowledge from the EIT branch to improve the EEL branch. This simple yet novel
method leads to strong representation learning from events and is evidenced by
the significant performance boost on the end-tasks such as semantic
segmentation and depth estimation. | [
"cs.CV"
] |
Deep Neural Networks (DNNs) have recently achieved remarkable success in many
computer vision tasks, but their huge number of parameters and high
computation overhead hinder their deployment on resource-constrained edge
devices. It is worth noting that channel pruning is an effective approach for
compressing DNN models. A critical challenge is to determine which channels are
to be removed, so that the model accuracy will not be negatively affected. In
this paper, we first propose Spatial and Channel Attention (SCA), a new
attention module combining both spatial and channel attention that respectively
focuses on "where" and "what" are the most informative parts. Guided by the
scale values generated by SCA for measuring channel importance, we further
propose a new channel pruning approach called Channel Pruning guided by Spatial
and Channel Attention (CPSCA). Experimental results indicate that SCA
achieves the best inference accuracy while incurring negligible extra
resource consumption, compared to other state-of-the-art attention modules.
Our
evaluation on two benchmark datasets shows that, with the guidance of SCA, our
CPSCA approach achieves higher inference accuracy than other state-of-the-art
pruning methods under the same pruning ratios. | [
"cs.CV",
"cs.AI"
] |
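A minimal sketch of a combined spatial-and-channel attention module, assuming
input (B, C, H, W); the channel-attention sigmoids provide the scale values
used to rank channels for pruning, and the exact design in the paper may
differ from this simplified form.

```python
import torch
import torch.nn as nn

class SCA(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(                 # channel attention ("what")
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)  # ("where")

    def forward(self, x):
        b, c, _, _ = x.shape
        ch = torch.sigmoid(self.mlp(x.mean(dim=(2, 3))) +
                           self.mlp(x.amax(dim=(2, 3)))).view(b, c, 1, 1)
        x = x * ch                                # per-channel scales
        sp = torch.sigmoid(self.spatial(torch.cat(
            [x.mean(dim=1, keepdim=True),
             x.amax(dim=1, keepdim=True)], dim=1)))
        return x * sp
```

Averaging the per-channel scales `ch` over a validation set would give one
plausible importance score for pruning decisions.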
Hybrid-distorted image restoration (HD-IR) is dedicated to restoring real
distorted images that are degraded by multiple distortions. Existing HD-IR
approaches usually ignore the inherent interference among hybrid distortions,
which compromises restoration performance. To decompose such interference,
we introduce the concept of Disentangled Feature Learning to achieve the
feature-level divide-and-conquer of hybrid distortions. Specifically, we
propose the feature disentanglement module (FDM) to distribute feature
representations of different distortions into different channels by revising
gain-control-based normalization. We also propose a feature aggregation module
(FAM) with channel-wise attention to adaptively filter out the distortion
representations and aggregate useful content information from different
channels for the reconstruction of the raw image. The effectiveness of the
proposed scheme is verified by visualizing the correlation matrix of features
and the channel responses of different distortions. Extensive experimental
results also prove the superior performance of our approach compared with the
latest HD-IR schemes. | [
"cs.CV",
"eess.IV"
] |
Action recognition from RGB input has attracted increasing attention in
computer vision, partially due to potential applications in somatic
simulation and sports statistics, such as virtual tennis games and
video-based analysis of tennis techniques and tactics. Recently, deep
learning based methods have achieved promising performance for action
recognition. In this paper, we propose a weighted Long Short-Term Memory
network adopted with convolutional neural network representations for
three-dimensional tennis shot recognition. First, the
local two-dimensional convolutional neural network spatial representations are
extracted from each video frame individually using a pre-trained Inception
network. Then, a weighted Long Short-Term Memory decoder is introduced, which
takes the output state at time t and the historical embedding feature at time
t-1 to generate a feature vector using a score weighting scheme. Finally, we
use the
adopted CNN and weighted LSTM to map the original visual features into a
vector space, generating a spatial-temporal semantic description of the
visual sequences, and classify the action video content. Experiments on the
benchmark
demonstrate that our method using only simple raw RGB video can achieve better
performance than the state-of-the-art baselines for tennis shot recognition. | [
"cs.CV",
"cs.LG",
"stat.ML"
] |
We present a Polyhedral Scene Generator system which creates a random scene
based on a few user parameters, renders the scene from random view points and
creates a dataset containing the renderings and corresponding annotation files.
We hope that this generator will enable research on how a program could parse a
scene if it had multiple viewpoints to consider. For ambiguous scenes,
typically people move their head or change their position to see the scene from
different angles as well as seeing how it changes while they move; this
research field is called active perception. The random scene generator
presented is designed to support research in this field by generating images of
scenes with known complexity characteristics and with verifiable properties
with respect to the distribution of features across a population. Thus, it is
well-suited for research in active perception without the requirement of a live
3D environment and mobile sensing agent, including comparative performance
evaluations. The system is publicly available at
https://polyhedral.eecs.yorku.ca. | [
"cs.CV"
] |
In many review classification applications, a fine-grained analysis of the
reviews is desirable, because different segments (e.g., sentences) of a review
may focus on different aspects of the entity in question. However, training
supervised models for segment-level classification requires segment labels,
which may be more difficult or expensive to obtain than review labels. In this
paper, we employ Multiple Instance Learning (MIL) and use only weak supervision
in the form of a single label per review. First, we show that when
inappropriate MIL aggregation functions are used, then MIL-based networks are
outperformed by simpler baselines. Second, we propose a new aggregation
function based on the sigmoid attention mechanism and show that our proposed
model outperforms the state-of-the-art models for segment-level sentiment
classification (by up to 9.8% in F1). Finally, we highlight the importance of
fine-grained predictions in an important public-health application: finding
actionable reports of foodborne illness. We show that our model achieves 48.6%
higher recall compared to previous models, thus increasing the chance of
identifying previously unknown foodborne outbreaks. | [
"cs.LG",
"cs.CL",
"cs.IR",
"stat.ML"
] |
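A minimal sketch of MIL aggregation with a sigmoid attention mechanism,
assuming segment embeddings of shape (B, S, D); unlike softmax attention, the
sigmoid gates need not sum to one, so several segments can contribute fully.
Details of the paper's aggregation may differ.

```python
import torch
import torch.nn as nn

class SigmoidAttentionMIL(nn.Module):
    def __init__(self, dim, n_classes):
        super().__init__()
        self.gate = nn.Linear(dim, 1)     # per-segment relevance gate
        self.clf = nn.Linear(dim, n_classes)

    def forward(self, segments):
        a = torch.sigmoid(self.gate(segments))     # (B, S, 1)
        seg_logits = self.clf(segments)            # segment-level predictions
        bag = (a * seg_logits).sum(1) / a.sum(1).clamp_min(1e-6)
        return bag, seg_logits        # review-level and segment-level outputs
```

Training uses only the review-level output against the single review label,
while `seg_logits` provides the fine-grained segment predictions at test
time.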
Deep learning continues to push state-of-the-art performance for the semantic
segmentation of color (i.e., RGB) imagery; however, the lack of annotated
data for many remote sensing modalities (e.g., hyperspectral imagery (HSI))
prevents researchers from taking advantage of this recent success. Since
generating sensor-specific datasets is time intensive and cost prohibitive,
remote sensing
researchers have embraced deep unsupervised feature extraction. Although these
methods have pushed state-of-the-art performance on current HSI benchmarks,
many of these tools are not readily accessible to many researchers. In this
letter, we introduce a software pipeline, which we call EarthMapper, for the
semantic segmentation of non-RGB remote sensing imagery. It includes
self-taught spatial-spectral feature extraction, various standard and deep
learning classifiers, and undirected graphical models for post-processing. We
evaluated EarthMapper on the Indian Pines and Pavia University datasets and
have released this code for public use. | [
"stat.ML",
"cs.CV",
"cs.LG"
] |
Transfer learning has emerged as a powerful methodology for adapting
pre-trained deep neural networks on image recognition tasks to new domains.
This process consists of taking a neural network pre-trained on a large
feature-rich source dataset, freezing the early layers that encode essential
generic image properties, and then fine-tuning the last few layers in order to
capture specific information related to the target situation. This approach is
particularly useful when only limited or weakly labeled data are available for
the new task. In this work, we demonstrate that adversarially-trained models
transfer better than non-adversarially-trained models, especially if only
limited data are available for the new domain task. Further, we observe that
adversarial training biases the learnt representations toward retaining
shapes, as opposed to textures, which impacts the transferability of the
source models.
Finally, through the lens of influence functions, we discover that transferred
adversarially-trained models contain more human-identifiable semantic
information, which explains -- at least partly -- why adversarially-trained
models transfer better. | [
"cs.LG",
"stat.ML"
] |
Comparing data defined over space and time is notoriously hard, because it
involves quantifying both spatial and temporal variability, while at the same
time taking into account the chronological structure of data. Dynamic Time
Warping (DTW) computes an optimal alignment between time series in agreement
with the chronological order, but is inherently blind to spatial shifts. In
this paper, we propose Spatio-Temporal Alignments (STA), a new differentiable
formulation of DTW, in which spatial differences between time samples are
accounted for using regularized optimal transport (OT). Our temporal alignments
are handled through a smooth variant of DTW called soft-DTW, for which we prove
a new property: soft-DTW increases quadratically with time shifts. The cost
matrices used within soft-DTW are computed using unbalanced OT, to handle the
case in which observations are not normalized probabilities. Experiments on
handwritten letters and brain imaging data confirm our theoretical findings and
illustrate the effectiveness of STA as a dissimilarity for spatio-temporal
data. | [
"stat.ML",
"cs.LG"
] |
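For reference, a minimal sketch of the soft-DTW forward recursion on a
precomputed cost matrix `C` (whose entries could be the OT costs between time
samples mentioned above); the soft-min makes the alignment cost
differentiable, and gamma -> 0 recovers classic DTW.

```python
import numpy as np

def soft_dtw(C, gamma=1.0):
    n, m = C.shape
    R = np.full((n + 1, m + 1), np.inf)
    R[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            prev = np.array([R[i - 1, j - 1], R[i - 1, j], R[i, j - 1]])
            # soft-min_gamma(a) = -gamma * log sum exp(-a / gamma)
            m0 = prev.min()
            soft_min = m0 - gamma * np.log(np.exp(-(prev - m0) / gamma).sum())
            R[i, j] = C[i - 1, j - 1] + soft_min
    return R[n, m]
```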
Paper-intensive industries like insurance, law, and government have long
leveraged optical character recognition (OCR) to automatically transcribe
hordes of scanned documents into text strings for downstream processing. Even
in 2019, there are still many scanned documents and mail that come into
businesses in non-digital format. Text to be extracted from real world
documents is often nestled inside rich formatting, such as tabular structures
or forms with fill-in-the-blank boxes or underlines whose ink often touches or
even strikes through the ink of the text itself. Further, the text region could
have random ink smudges or spurious strokes. Such ink artifacts can severely
interfere with the performance of recognition algorithms or other downstream
processing tasks. In this work, we propose DeepErase, a neural-based
preprocessor to erase ink artifacts from text images. We devise a method to
programmatically assemble real text images and real artifacts into
realistic-looking "dirty" text images, and use them to train an artifact
segmentation network in a weakly supervised manner, since pixel-level
annotations are automatically obtained during the assembly process. In addition
to high segmentation accuracy, we show that our cleansed images achieve a
significant boost in recognition accuracy by popular OCR software such as
Tesseract 4.0. Finally, we test DeepErase on out-of-distribution datasets (NIST
SDB) of scanned IRS tax return forms and achieve double-digit improvements in
accuracy. All experiments are performed on both printed and handwritten text.
Code for all experiments is available at https://github.com/yikeqicn/DeepErase | [
"cs.CV",
"cs.LG",
"cs.NE"
] |
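The programmatic assembly step described above can be sketched in a few lines: because the artifact is composited onto the clean text by the pipeline itself, the pixel-level label is a by-product. A minimal numpy sketch, with illustrative thresholds not taken from the paper:

```python
import numpy as np

def assemble_dirty(text_img, artifact_img, ink_thresh=128):
    """Composite a clean grayscale text crop with an artifact crop.

    Both inputs are uint8 grayscale arrays of the same shape with dark
    ink on a light background. Because we place the artifact ourselves,
    the pixel-level segmentation label is obtained automatically.
    """
    # Dark pixels win: ink from either image survives the composite.
    dirty = np.minimum(text_img, artifact_img)
    # Weak label: artifact pixels are those where the artifact is inked
    # but the original text was background.
    artifact_mask = (artifact_img < ink_thresh) & (text_img >= ink_thresh)
    return dirty, artifact_mask.astype(np.uint8)

# Toy usage with random arrays; real inputs are scanned text/artifact crops.
rng = np.random.default_rng(0)
text = rng.integers(0, 256, size=(32, 128), dtype=np.uint8)
strokes = rng.integers(0, 256, size=(32, 128), dtype=np.uint8)
dirty, mask = assemble_dirty(text, strokes)
```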
Weakly supervised object detection (WSOD) using only image-level annotations
has attracted growing attention over the past few years. Whereas this task is
typically addressed with a domain-specific solution focused on natural images,
we show that a simple multiple instance approach applied on pre-trained deep
features yields excellent performances on non-photographic datasets, possibly
including new classes. The approach does not include any fine-tuning or
cross-domain learning and is therefore efficient and possibly applicable to
arbitrary datasets and classes. We investigate several flavors of the proposed
approach, some including multi-layer perceptrons and polyhedral classifiers.
Despite its simplicity, our method shows competitive results on a range of
publicly available datasets, including paintings (People-Art, IconArt),
watercolors, cliparts, and comics, and allows unseen visual categories to be
learned quickly. | [
"cs.CV"
] |
The outbreak of the novel coronavirus disease 2019 (COVID-19), caused by the
severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has been
continuously affecting human lives and communities around the world in many
ways, from cities under lockdown to new social experiences. Although in most
cases COVID-19 results in mild illness, it has drawn global attention due to
the extremely contagious nature of SARS-CoV-2. Governments and healthcare
professionals, along with people and society as a whole, have taken various
measures to break the chain of transmission and flatten the epidemic curve. In
this study, we used multiple data sources, i.e., PubMed and ArXiv, and built
several machine learning models to characterize the landscape of current
COVID-19 research by identifying the latent topics and analyzing the temporal
evolution of the extracted research themes, publications similarity, and
sentiments, within the time frame of January-May 2020. Our findings confirm
that the types of research available in PubMed and ArXiv differ significantly,
with
the former exhibiting greater diversity in terms of COVID-19 related issues and
the latter focusing more on intelligent systems/tools to predict/diagnose
COVID-19. The special attention of the research community to the high-risk
groups and people with complications was also confirmed. | [
"cs.LG",
"cs.DL",
"cs.IR"
] |
Wasserstein distributionally robust optimization (WDRO) attempts to learn a
model that minimizes the local worst-case risk in the vicinity of the empirical
data distribution defined by Wasserstein ball. While WDRO has received
attention as a promising tool for inference since its introduction, its
theoretical understanding has not yet fully matured. Gao et al. (2017)
proposed a minimizer based on a tractable approximation of the local worst-case
risk, but without showing risk consistency. In this paper, we propose a
minimizer based on a novel approximation theorem and provide the corresponding
risk consistency results. Furthermore, we develop WDRO inference for locally
perturbed data that include the Mixup (Zhang et al., 2017) as a special case.
We show that our approximation and risk consistency results naturally extend to
the cases when data are locally perturbed. Numerical experiments demonstrate
robustness of the proposed method using image classification datasets. Our
results show that the proposed method achieves significantly higher accuracy
than baseline models on noisy datasets. | [
"stat.ML",
"cs.LG"
] |
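Since the abstract treats Mixup (Zhang et al., 2017) as a special case of local perturbation, a minimal numpy sketch of standard Mixup is shown below for reference; it is the generic recipe, not the paper's WDRO procedure:

```python
import numpy as np

def mixup(x, y_onehot, alpha=0.2, rng=None):
    """Standard Mixup: convex combinations of random example pairs.

    x: (batch, ...) inputs; y_onehot: (batch, classes) one-hot labels.
    """
    if rng is None:
        rng = np.random.default_rng()
    lam = rng.beta(alpha, alpha)            # mixing coefficient
    perm = rng.permutation(len(x))          # random pairing of examples
    x_mix = lam * x + (1.0 - lam) * x[perm]
    y_mix = lam * y_onehot + (1.0 - lam) * y_onehot[perm]
    return x_mix, y_mix
```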
Modern neural networks can assign high confidence to inputs drawn from
outside the training distribution, posing threats to models in real-world
deployments. While much research attention has been placed on designing new
out-of-distribution (OOD) detection methods, the precise definition of OOD is
often left vague and falls short of the desired notion of OOD in
reality. In this paper, we present a new formalization and model the data
shifts by taking into account both the invariant and environmental (spurious)
features. Under such formalization, we systematically investigate how spurious
correlation in the training set impacts OOD detection. Our results suggest that
the detection performance is severely worsened when the correlation between
spurious features and labels is increased in the training set. We further
offer insights into detection methods that are more effective in reducing the
impact of
spurious correlation and provide theoretical analysis on why reliance on
environmental features leads to high OOD detection error. Our work aims to
facilitate a better understanding of OOD samples and their formalization, as
well as the exploration of methods that enhance OOD detection. | [
"cs.LG",
"cs.AI"
] |
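The paper analyzes existing detection methods rather than prescribing a new one; as a concrete reference point, here is a sketch of the common maximum-softmax-probability (MSP) baseline that such analyses typically start from (threshold value is illustrative):

```python
import numpy as np

def msp_ood_score(logits):
    """Maximum softmax probability baseline: lower MSP -> more likely OOD."""
    z = logits - logits.max(axis=1, keepdims=True)   # stabilize softmax
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return probs.max(axis=1)

def is_ood(logits, threshold=0.7):
    """Flag inputs whose confidence falls below a validation-chosen cutoff."""
    return msp_ood_score(logits) < threshold
```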
The $Q$-function is a central quantity in many Reinforcement Learning (RL)
algorithms, for which RL agents behave following a (soft)-greedy policy
w.r.t. $Q$. It is a powerful tool that allows action selection without a model of
the environment and even without explicitly modeling the policy. Yet, this
scheme can only be used in discrete action tasks, with small numbers of
actions, as the softmax cannot be computed exactly otherwise. In particular,
the use of function approximation to deal with continuous action spaces in
modern actor-critic architectures intrinsically prevents the exact computation
of a softmax. We propose to alleviate this issue by parametrizing the
$Q$-function implicitly, as the sum of a log-policy and of a value function. We
use the resulting parametrization to derive a practical off-policy deep RL
algorithm, suitable for large action spaces, and that enforces the softmax
relation between the policy and the $Q$-value. We provide a theoretical
analysis of our algorithm: from an Approximate Dynamic Programming perspective,
we show its equivalence to a regularized version of value iteration, accounting
for both entropy and Kullback-Leibler regularization, and that enjoys
beneficial error propagation results. We then evaluate our algorithm on classic
control tasks, where its results compete with state-of-the-art methods. | [
"cs.LG"
] |
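A minimal PyTorch sketch of the implicit parametrization described above: the Q-value is not a separate head but the sum of a scaled log-policy and a value function, so the softmax relation between policy and Q holds by construction. Network sizes and the temperature name are illustrative assumptions:

```python
import torch
import torch.nn as nn

class ImplicitQ(nn.Module):
    """Q(s, a) = tau * log pi(a|s) + V(s), so that
    pi = softmax(Q / tau) holds exactly by construction."""

    def __init__(self, obs_dim, n_actions, tau=0.1, hidden=64):
        super().__init__()
        self.tau = tau
        self.policy = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions))        # action logits
        self.value = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))

    def forward(self, obs):
        log_pi = torch.log_softmax(self.policy(obs), dim=-1)
        v = self.value(obs)                      # (batch, 1)
        q = self.tau * log_pi + v                # (batch, n_actions)
        return q, log_pi, v
```

Because V(s) is constant across actions, softmax(Q / tau) recovers pi exactly, which is the softmax consistency the abstract enforces.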
Accurate detection and segmentation of transmission towers~(TTs) and power
lines~(PLs) from aerial images plays a key role in protecting power-grid
security and low-altitude UAV safety. Meanwhile, aerial images of TTs and PLs
pose a number of new challenges to the computer vision researchers who work on
object detection and segmentation -- PLs are long and thin, and may show a
similar color to the background; TTs can be of various shapes and are most
likely made up of line structures of various sparsity; the background scene,
lighting,
and object sizes can vary significantly from one image to another. In this
paper we collect and release a new TT/PL Aerial-image (TTPLA) dataset,
consisting of 1,100 images with the resolution of 3,840$\times$2,160 pixels, as
well as manually labeled 8,987 instances of TTs and PLs. We develop novel
policies for collecting, annotating, and labeling the images in TTPLA.
Different from other relevant datasets, TTPLA supports evaluation of instance
segmentation, besides detection and semantic segmentation. To build a baseline
for detection and segmentation tasks on TTPLA, we report the performance of
several state-of-the-art deep learning models on our dataset. TTPLA dataset is
publicly available at https://github.com/r3ab/ttpla_dataset | [
"cs.CV"
] |
We aim to predict whether an employee of a company will leave or not, using
the k-Nearest Neighbors algorithm. We use evaluation of employee performance,
average monthly hours at work and number of years spent in the company, among
others, as our features. Other approaches to this problem include the use of
ANNs, decision trees and logistic regression. The dataset was split, using 70%
for training the algorithm and 30% for testing it, achieving an accuracy of
94.32%. | [
"stat.ML",
"cs.LG"
] |
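A minimal scikit-learn sketch of the described setup (k-NN with a 70/30 split); the data and the value of k are placeholders, as the abstract does not specify them:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

# Placeholder features: [performance_eval, avg_monthly_hours, years_at_company]
rng = np.random.default_rng(0)
X = rng.random((1000, 3))
y = rng.integers(0, 2, size=1000)        # 1 = employee left, 0 = stayed

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)  # 70% train / 30% test

clf = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```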
In this paper, we propose a novel form of the loss function to increase the
performance of LiDAR-based 3d object detection and obtain more explainable and
convincing uncertainty for the prediction. The loss function was designed using
corner transformation and uncertainty modeling. With the new loss function, the
performance of our method on the val split of KITTI dataset shows up to a 15%
increase in terms of Average Precision (AP) compared with the baseline using
simple L1 Loss. In the study of the characteristics of predicted uncertainties,
we find that generally more accurate prediction of the bounding box is usually
accompanied by lower uncertainty. The distribution of corner uncertainties
agrees on the distribution of the point cloud in the bounding box, which means
the corner with denser observed points has lower uncertainty. Moreover, our
method also learns the constraint from the cuboid geometry of the bounding box
in uncertainty prediction. Finally, we propose an efficient Bayesian updating
method to recover the uncertainty for the original parameters of the bounding
boxes which can help to provide probabilistic results for the planning module. | [
"cs.CV",
"cs.LG",
"cs.RO"
] |
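The abstract does not spell out the exact loss; a common aleatoric-uncertainty formulation consistent with its description (per-corner uncertainty, confidence coupled with accuracy) is a Laplacian negative log-likelihood over corner coordinates, sketched below in PyTorch as an assumption rather than the paper's exact design:

```python
import torch

def corner_nll_loss(pred_corners, log_b, gt_corners):
    """Laplacian negative log-likelihood over predicted box corners.

    pred_corners, gt_corners: (batch, n_corners, 3) coordinates.
    log_b: (batch, n_corners, 3) predicted log-scale (uncertainty).
    A larger scale b down-weights the residual but pays a log-b penalty,
    so confident-but-wrong corners are punished most.
    """
    b = torch.exp(log_b)
    return (torch.abs(pred_corners - gt_corners) / b + log_b).mean()
```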
Stochastic compositional optimization arises in many important machine
learning tasks such as value function evaluation in reinforcement learning and
portfolio management. The objective function is the composition of two
expectations of stochastic functions, and is more challenging to optimize than
vanilla stochastic optimization problems. In this paper, we investigate the
stochastic compositional optimization in the general smooth non-convex setting.
We employ a recently developed idea of \textit{Stochastic Recursive Gradient
Descent} to design a novel algorithm named SARAH-Compositional, and prove a
sharp Incremental First-order Oracle (IFO) complexity upper bound for
stochastic compositional optimization: $\mathcal{O}((n+m)^{1/2}
\varepsilon^{-2})$ in the finite-sum case and $\mathcal{O}(\varepsilon^{-3})$
in the online case. Such a complexity is known to be the best one among IFO
complexity results for non-convex stochastic compositional optimization, and is
believed to be optimal. Our experiments validate the theoretical performance of
our algorithm. | [
"stat.ML",
"cs.LG",
"math.OC"
] |
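For reference, the finite-sum form of the two-level compositional objective referred to above can be written as (notation ours):

```latex
\min_{x \in \mathbb{R}^d} \; F(x) \;=\; \frac{1}{n} \sum_{i=1}^{n}
    f_i\!\left( \frac{1}{m} \sum_{j=1}^{m} g_j(x) \right),
```

with the online case replacing the finite sums by expectations; the nonlinear outer function wrapped around an inner expectation is what makes unbiased gradient estimation harder than in vanilla stochastic optimization.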
Automatic face recognition has received significant performance improvement
by developing specialised facial image representations. On the other hand,
generic object recognition has rarely been applied to face recognition.
Spatial pyramid pooling of features encoded by an over-complete dictionary has
been the key component of many state-of-the-art image classification systems.
Inspired by its success, in this work we develop a new face image
representation method based on the second-order pooling in Carreira et al.
[1], which was originally proposed for image segmentation. The proposed method
differs from the previous methods in that, we encode the densely extracted
local patches by a small-size dictionary; and the facial image signatures are
obtained by pooling the second-order statistics of the encoded features. We
show the importance of pooling on encoded features, which is bypassed by the
original second-order pooling method to avoid the high computational cost.
Equipped with a simple linear classifier, the proposed method outperforms the
state-of-the-art face identification performance by large margins. For example,
on the LFW databases, the proposed method performs better than the previous
best by around 13% accuracy. | [
"cs.CV"
] |
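A minimal numpy sketch of average second-order (outer-product) pooling over encoded local patches, the core statistic the method pools; the full pipeline in the paper (dictionary encoding, pooling regions, normalization) has more stages than this sketch assumes:

```python
import numpy as np

def second_order_pool(encoded):
    """Average second-order (outer-product) pooling of encoded patches.

    encoded: (n_patches, d) codes from a small dictionary. Returns the
    vectorized upper triangle of the d x d second-order statistic,
    which serves as the image signature.
    """
    G = encoded.T @ encoded / len(encoded)   # (d, d) second-order stats
    iu = np.triu_indices(G.shape[0])
    return G[iu]                             # symmetric -> keep one half

# Toy usage: 500 patches encoded with a 64-atom dictionary.
codes = np.random.default_rng(0).random((500, 64))
signature = second_order_pool(codes)         # length d * (d + 1) / 2
```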
In this paper, we propose several novel deep learning methods for object
saliency detection based on powerful convolutional neural networks. In our
approach, we use a gradient descent method to iteratively modify an input image
based on the pixel-wise gradients to reduce a cost function measuring the
class-specific objectness of the image. The pixel-wise gradients can be
efficiently computed using the back-propagation algorithm. The discrepancy
between the modified image and the original one may be used as a saliency map
for the image. Moreover, we have further proposed several new training methods
to learn saliency-specific convolutional nets for object saliency detection, in
order to leverage the available pixel-wise segmentation information. Our
methods are extremely computationally efficient (processing 20-40 images per
second on one GPU). In this work, we use the computed saliency maps for image
segmentation. Experimental results on two benchmark tasks, namely Microsoft
COCO and Pascal VOC 2012, have shown that our proposed methods can generate
high-quality saliency maps, clearly outperforming many existing methods. In
particular, our approaches excel in handling many difficult images, which
contain complex background, highly-variable salient objects, multiple objects,
and/or very small salient objects. | [
"cs.CV"
] |
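A minimal PyTorch sketch of the gradient-descent-on-the-input procedure the abstract describes; the step size, iteration count, and the way the discrepancy is aggregated into a map are illustrative assumptions:

```python
import torch

def saliency_map(model, image, class_idx, steps=20, lr=0.1):
    """Iteratively modify the input to reduce a class score; the
    discrepancy with the original image serves as the saliency map."""
    model.eval()
    x = image.clone().detach().requires_grad_(True)
    for _ in range(steps):
        score = model(x.unsqueeze(0))[0, class_idx]  # class-specific objectness
        model.zero_grad()
        if x.grad is not None:
            x.grad.zero_()
        score.backward()                             # pixel-wise gradients
        with torch.no_grad():
            x -= lr * x.grad                         # descend on the score
    return (x.detach() - image).abs().sum(dim=0)     # aggregate over channels
```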
In this paper, we consider the problem of automatically segmenting neuronal
cells in dual-color confocal microscopy images. This problem is a key task in
various quantitative analysis applications in neuroscience, such as tracing
cell genesis in Danio rerio (zebrafish) brains. Deep learning, especially using
fully convolutional networks (FCN), has profoundly changed segmentation
research in biomedical imaging. We face two major challenges in this problem.
First, neuronal cells may form dense clusters, making it difficult to correctly
identify all individual cells (even to human experts). Consequently,
segmentation results of the known FCN-type models are not accurate enough.
Second, pixel-wise ground truth is difficult to obtain. Only a limited amount
of approximate instance-wise annotation can be collected, which makes the
training of FCN models quite cumbersome. We propose a new FCN-type deep
learning model, called deep complete bipartite networks (CB-Net), and a new
scheme for leveraging approximate instance-wise annotation to train our
pixel-wise prediction model. Evaluated using seven real datasets, our proposed
new CB-Net model outperforms the state-of-the-art FCN models and produces
neuron segmentation results of remarkable quality. | [
"cs.CV"
] |
Automotive radar sensors output a lot of unwanted clutter or ghost
detections, whose position and velocity do not correspond to any real object in
the sensor's field of view. This poses a substantial challenge for environment
perception methods like object detection or tracking. Especially problematic
are clutter detections that occur in groups or at similar locations in multiple
consecutive measurements. In this paper, a new algorithm for identifying such
erroneous detections is presented. It is mainly based on the modeling of
specific commonly occurring wave propagation paths that lead to clutter. In
particular, the three effects explicitly covered are reflections at the
underbody of a car or truck, signals traveling back and forth between the
vehicle on which the sensor is mounted and another object, and multipath
propagation via specular reflection. The latter often occurs near guardrails,
concrete walls or similar reflective surfaces. Each of these effects is
described both theoretically and regarding a method for identifying the
corresponding clutter detections. Identification is done by analyzing
detections generated from a single sensor measurement only. The final algorithm
is evaluated on recordings of real extra-urban traffic. For labeling, a
semi-automatic process is employed. The results are promising, both in terms of
performance and regarding the very low execution time. Typically, a large part
of clutter is found, while only a small fraction of the detections
corresponding to real objects is falsely classified by the algorithm. | [
"cs.CV",
"eess.SP"
] |
Learning data representations that are useful for various downstream tasks is
a cornerstone of artificial intelligence. While existing methods are typically
evaluated on downstream tasks such as classification or generative image
quality, we propose to assess representations through their usefulness in
downstream control tasks, such as reaching or pushing objects. By training over
10,000 reinforcement learning policies, we extensively evaluate to what extent
different representation properties affect out-of-distribution (OOD)
generalization. Finally, we demonstrate zero-shot transfer of these policies
from simulation to the real world, without any domain randomization or
fine-tuning. This paper aims to establish the first systematic characterization
of the usefulness of learned representations for real-world OOD downstream
tasks. | [
"cs.LG",
"stat.ML"
] |
We present the first purely event-based, energy-efficient approach for object
detection and categorization using an event camera. Compared to traditional
frame-based cameras, choosing event cameras results in high temporal resolution
(order of microseconds), low power consumption (few hundred mW) and wide
dynamic range (120 dB) as attractive properties. However, event-based object
recognition systems are far behind their frame-based counterparts in terms of
accuracy. To this end, this paper presents an event-based feature extraction
method devised by accumulating local activity across the image frame and then
applying principal component analysis (PCA) to the normalized neighborhood
region. Subsequently, we propose a backtracking-free k-d tree mechanism for
efficient feature matching by taking advantage of the low-dimensionality of the
feature representation. Additionally, the proposed k-d tree mechanism allows
for feature selection to obtain a lower-dimensional dictionary representation
when hardware resources are limited to implement dimensionality reduction.
Consequently, the proposed system can be realized on a field-programmable gate
array (FPGA) device leading to high performance over resource ratio. The
proposed system is tested on real-world event-based datasets for object
categorization, showing superior classification performance and relevance to
state-of-the-art algorithms. Additionally, we verified the object detection
method and real-time FPGA performance in lab settings under non-controlled
illumination conditions with limited training data and ground truth
annotations. | [
"cs.CV",
"cs.RO"
] |
Salient object detection (SOD) has been well studied in recent years,
especially using deep neural networks. However, SOD with RGB and RGB-D images
is usually treated as two different tasks with different network structures
that need to be designed specifically. In this paper, we propose a unified and
efficient structure with a cross-attention context extraction (CRACE) module to
address both tasks of SOD efficiently. The proposed CRACE module receives and
appropriately fuses two (for RGB SOD) or three (for RGB-D SOD) inputs. The
simple unified feature pyramid network (FPN)-like structure with CRACE modules
conveys and refines the results under multi-level supervision of saliency
and boundaries. The proposed structure is simple yet effective; the rich
context information of RGB and depth can be appropriately extracted and fused
by the proposed structure efficiently. Experimental results show that our
method outperforms other state-of-the-art methods in both RGB and RGB-D SOD
tasks on various datasets and in terms of most metrics. | [
"cs.CV"
] |
Big data has had a great share in the success of deep learning in computer
vision. Recent works suggest that there is significant further potential to
increase object detection performance by utilizing even bigger datasets. In
this paper, we introduce the EuroCity Persons dataset, which provides a large
number of highly diverse, accurate and detailed annotations of pedestrians,
cyclists and other riders in urban traffic scenes. The images for this dataset
were collected on-board a moving vehicle in 31 cities of 12 European countries.
With over 238200 person instances manually labeled in over 47300 images,
EuroCity Persons is nearly one order of magnitude larger than person datasets
used previously for benchmarking. The dataset furthermore contains a large
number of person orientation annotations (over 211200). We optimize four
state-of-the-art deep learning approaches (Faster R-CNN, R-FCN, SSD and YOLOv3)
to serve as baselines for the new object detection benchmark. In experiments
with previous datasets we analyze the generalization capabilities of these
detectors when trained with the new dataset. We furthermore study the effect of
the training set size, the dataset diversity (day- vs. night-time, geographical
region), the dataset detail (i.e. availability of object orientation
information) and the annotation quality on the detector performance. Finally,
we analyze error sources and discuss the road ahead. | [
"cs.CV",
"cs.AI",
"cs.LG",
"cs.RO"
] |
Reinforcement learning (RL)-based neural architecture search (NAS) generally
guarantees better convergence yet suffers from the requirement of huge
computational resources compared with gradient-based approaches, due to the
rollout bottleneck -- exhaustive training for each sampled generation on proxy
tasks. In this paper, we propose a general pipeline to accelerate the
convergence of the rollout process as well as the RL process in NAS. It is
motivated by the interesting observation that both the architecture and the
parameter knowledge can be transferred between different experiments and even
different tasks. We first introduce an uncertainty-aware critic (value
function) in Proximal Policy Optimization (PPO) to utilize the architecture
knowledge in previous experiments, which stabilizes the training process and
reduces the searching time by 4 times. Further, an architecture knowledge pool
together with a block similarity function is proposed to utilize parameter
knowledge and reduces the searching time by 2 times. This is the first work
to introduce block-level weight sharing in RL-based NAS. The block similarity
function guarantees a 100% hitting ratio with strict fairness. Besides, we show
that a simply designed off-policy correction factor used in "replay buffer" in
RL optimization can further reduce half of the searching time. Experiments on
the Mobile Neural Architecture Search (MNAS) search space show the proposed
Fast Neural Architecture Search (FNAS) accelerates standard RL-based NAS
process by ~10x (e.g. ~256 2x2 TPUv2 x days / 20,000 GPU x hour -> 2,000 GPU x
hour for MNAS), and guarantees better performance on various vision tasks. | [
"cs.LG",
"cs.AI",
"cs.CV"
] |
Illumination estimation is the essential step of computational color
constancy, one of the core parts of various image processing pipelines of
modern digital cameras. Having an accurate and reliable illumination estimation
is important for reducing the illumination influence on the image colors. To
motivate the generation of new ideas and the development of new algorithms in
this field, the 2nd Illumination estimation challenge~(IEC\#2) was conducted.
The main advantage of testing a method on a challenge over testing it on some
of the known datasets is the fact that the ground-truth illuminations for the
challenge test images are unknown up until the results have been submitted,
which prevents any potential hyperparameter tuning that may be biased.
The challenge had several tracks: general, indoor, and two-illuminant, with
each of them focusing on different parameters of the scenes. Other main
features of it are a new large dataset of images (about 5000) taken with the
same camera sensor model, a manual markup accompanying each image, diverse
content with scenes taken in numerous countries under a huge variety of
illuminations extracted by using the SpyderCube calibration object, and a
contest-like markup for the images from the Cube+ dataset that was used in
IEC\#1.
This paper focuses on the description of the past two challenges, algorithms
which won in each track, and the conclusions that were drawn based on the
results obtained during the 1st and 2nd challenge that can be useful for
similar future developments. | [
"cs.CV"
] |
The existing auto-encoder based face pose editing methods primarily focus on
modeling the identity preserving ability during pose synthesis, but are less
able to preserve the image style properly, which refers to the color,
brightness, saturation, etc. In this paper, we take advantage of the well-known
frontal/profile optical illusion and present a novel two-stage approach to
solve the aforementioned dilemma, where the task of face pose manipulation is
cast into face inpainting. By selectively sampling pixels from the input face
and slightly adjusting their relative locations with the proposed ``Pixel
Attention Sampling" module, the face editing result faithfully keeps the
identity information as well as the image style unchanged. By leveraging
high-dimensional embedding at the inpainting stage, finer details are
generated. Further, with the 3D facial landmarks as guidance, our method is
able to manipulate face pose in three degrees of freedom, i.e., yaw, pitch, and
roll, resulting in more flexible face pose editing than merely controlling the
yaw angle as usually achieved by the current state-of-the-art. Both the
qualitative and quantitative evaluations validate the superiority of the
proposed approach. | [
"cs.CV"
] |
Recently, deep convolutional neural networks (CNNs) have obtained promising
results in image processing tasks including super-resolution (SR). However,
most CNN-based SR methods treat low-resolution (LR) inputs and features equally
across channels, rarely notice the loss of information flow caused by the
activation function and fail to leverage the representation ability of CNNs. In
this letter, we propose a novel single-image super-resolution (SISR) algorithm
named Wider Channel Attention Network (WCAN) for remote sensing images.
Firstly, the channel attention mechanism is used to adaptively recalibrate the
importance of each channel at the middle of the wider attention block (WAB).
Secondly, we propose the Local Memory Connection (LMC) to enhance the
information flow. Finally, the features within each WAB are fused to take
advantage of the network's representation capability and further improve
information and gradient flow. Analytic experiments on a public remote sensing
data set (UC Merced) show that our WCAN achieves better accuracy and visual
improvements against most state-of-the-art methods. | [
"cs.CV",
"eess.IV"
] |
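The wider attention block is not fully specified in the abstract; below is a PyTorch sketch of the standard squeeze-and-excitation-style channel attention that such blocks build on, as a reference rather than WCAN's exact module:

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention: global average
    pooling followed by a bottleneck MLP that rescales each channel."""

    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                 # x: (B, C, H, W)
        w = self.fc(x.mean(dim=(2, 3)))   # squeeze -> (B, C) channel weights
        return x * w[:, :, None, None]    # recalibrate channel importance
```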
In 2D/3D object detection task, Intersection-over-Union (IoU) has been widely
employed as an evaluation metric to evaluate the performance of different
detectors in the testing stage. However, during the training stage, the common
distance loss (\eg, $L_1$ or $L_2$) is often adopted as the loss function to
minimize the discrepancy between the predicted and ground truth Bounding Box
(Bbox). To eliminate the performance gap between training and testing, the IoU
loss has been introduced for 2D object detection in \cite{yu2016unitbox} and
\cite{rezatofighi2019generalized}. Unfortunately, all these approaches only
work for axis-aligned 2D Bboxes and cannot be applied to the more general
object detection task with rotated Bboxes. To resolve this issue, we
investigate the IoU computation for two rotated Bboxes first and then implement
a unified framework, IoU loss layer for both 2D and 3D object detection tasks.
By integrating the implemented IoU loss into several state-of-the-art 3D object
detectors, consistent improvements have been achieved for both bird-eye-view 2D
detection and point cloud 3D detection on the public KITTI benchmark. | [
"cs.CV"
] |
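A minimal PyTorch sketch of the IoU loss for the axis-aligned 2D case; the rotated-Bbox case the paper targets additionally requires polygon intersection, which this sketch does not cover:

```python
import torch

def iou_loss(pred, target, eps=1e-7):
    """IoU loss for axis-aligned 2D boxes in (x1, y1, x2, y2) format."""
    lt = torch.max(pred[:, :2], target[:, :2])   # intersection top-left
    rb = torch.min(pred[:, 2:], target[:, 2:])   # intersection bottom-right
    wh = (rb - lt).clamp(min=0)                  # zero if boxes don't overlap
    inter = wh[:, 0] * wh[:, 1]
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    union = area_p + area_t - inter
    return (1.0 - inter / (union + eps)).mean()
```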
This work proposes a novel method based on a pseudo-parabolic diffusion
process to be employed for texture recognition. The proposed operator is
applied over a range of time scales giving rise to a family of images
transformed by nonlinear filters. Each of these images is then encoded by a
local descriptor (we use local binary patterns for that purpose) and
summarized by a simple histogram, yielding the image feature
vector. The proposed approach is tested on the classification of well
established benchmark texture databases and on a practical task of plant
species recognition. In both cases, it is compared with several
state-of-the-art methodologies employed for texture recognition. Our proposal
outperforms those methods in terms of classification accuracy, confirming its
competitiveness. The good performance can be justified to a large extent by the
ability of the pseudo-parabolic operator to smooth possibly noisy details
inside homogeneous regions of the image at the same time that it preserves
discontinuities that convey critical information for the object description.
Such results also confirm that model-based approaches like the proposed one can
still be competitive with the omnipresent learning-based approaches, especially
when the user does not have access to a powerful computational structure and a
large amount of labeled data for training. | [
"cs.CV",
"cs.NA",
"math.NA"
] |
Learning group representation is a commonly concerned issue in tasks where
the basic unit is a group, set, or sequence. Previously, the research
community has tried to tackle it by aggregating the elements in a group based
on an indicator
either defined by humans such as the quality and saliency, or generated by a
black box such as the attention score. This article provides a more essential
and explicable view. We claim the most significant indicator to show whether
the group representation can benefit from one of its elements is not the
quality or an inexplicable score, but the discriminability w.r.t. the model. We
explicitly design the discriminability using embedded class centroids on a
proxy set. We show the discriminability knowledge has good properties that can
be
distilled by a light-weight distillation network and can be generalized on the
unseen target set. The whole procedure is denoted as discriminability
distillation learning (DDL). The proposed DDL can be flexibly plugged into many
group-based recognition tasks without influencing the original training
procedures. Comprehensive experiments on various tasks have proven the
effectiveness of DDL for both accuracy and efficiency. Moreover, it pushes
forward the state-of-the-art results on these tasks by an impressive margin. | [
"cs.CV"
] |
Aggregating multi-level feature representation plays a critical role in
achieving robust volumetric medical image segmentation, which is important for
the auxiliary diagnosis and treatment. Unlike recent neural architecture
search (NAS) methods, which typically search for the optimal operators in each
network layer but miss a good strategy for searching feature aggregations,
this paper proposes a novel NAS method for 3D medical image segmentation, named
UXNet, which searches both the scale-wise feature aggregation strategies as
well as the block-wise operators in the encoder-decoder network. UXNet has
several appealing benefits. (1) It significantly improves flexibility of the
classical UNet architecture, which only aggregates feature representations of
encoder and decoder in equivalent resolution. (2) A continuous relaxation of
UXNet is carefully designed, enabling its searching scheme performed in an
efficient differentiable manner. (3) Extensive experiments demonstrate the
effectiveness of UXNet compared with recent NAS methods for medical image
segmentation. The architecture discovered by UXNet outperforms existing
state-of-the-art models in terms of Dice on several public 3D medical image
segmentation benchmarks, especially for the boundary locations and tiny
tissues. The search cost of UXNet is low, making it possible to find the
best-performing network in less than 1.5 days on two TitanXP GPUs. | [
"cs.CV"
] |
Universal style transfer tries to explicitly minimize the losses in feature
space; thus, it does not require training on any pre-defined styles. It usually
uses different layers of VGG network as the encoders and trains several
decoders to invert the features into images. Therefore, the effect of style
transfer is achieved by feature transform. Although plenty of methods have been
proposed, a theoretical analysis of feature transform is still missing. In this
paper, we first propose a novel interpretation by treating it as the optimal
transport problem. Then, we demonstrate the relations of our formulation with
former works like Adaptive Instance Normalization (AdaIN) and Whitening and
Coloring Transform (WCT). Finally, we derive a closed-form solution named
Optimal Style Transfer (OST) under our formulation by additionally considering
the content loss of Gatys. Comparatively, our solution can preserve better
structure and achieve visually pleasing results. It is simple yet effective and
we demonstrate its advantages both quantitatively and qualitatively. Besides,
we hope our theoretical analysis can inspire future works in neural style
transfer. Code is available at https://github.com/lu-m13/OptimalStyleTransfer. | [
"cs.CV",
"eess.IV"
] |
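As a point of reference for the optimal-transport view (not the paper's full OST derivation, which also accounts for the content loss of Gatys): when content and style features are modeled as Gaussians $\mathcal{N}(\mu_c, \Sigma_c)$ and $\mathcal{N}(\mu_s, \Sigma_s)$, the OT (Monge) map between them has the classical closed form

```latex
T(x) \;=\; \mu_s + \Sigma_c^{-1/2}
    \left( \Sigma_c^{1/2} \Sigma_s \Sigma_c^{1/2} \right)^{1/2}
    \Sigma_c^{-1/2} \, (x - \mu_c),
```

which makes feature transforms such as WCT interpretable as (approximate) transport maps between feature statistics.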
Computer vision has received significant attention in recent years, as it is
one of the important means for robots to perceive the external environment.
Discriminative Correlation Filter (DCF) based trackers have gained popularity
due to their efficiency; however, tracking in low-illumination environments is
a challenging problem, not yet successfully addressed in the literature. In
this work, we tackle these problems by introducing the Low-Illumination
Long-term Correlation Tracker (LLCT). First, fused features including only HOG
and Color Names are employed to boost tracking efficiency. Second, we apply a
standard PCA dimensionality-reduction scheme in the translation and scale
estimation phases to accelerate tracking. Third, we learn a long-term
correlation filter to maintain long-term memory. Finally, memory templates are
updated at intervals, and existing and initial templates are re-matched every
few frames to maintain template accuracy. Extensive experiments on the popular
Object Tracking Benchmark OTB-50 have demonstrated that the proposed tracker
significantly outperforms state-of-the-art trackers while achieving real-time
performance (33 FPS). In addition, the proposed approach can be easily
integrated into a robot system and runs at high speed. The experimental
results show that the proposed tracker performs better in low-illumination
environments than general trackers. | [
"cs.CV"
] |
Point clouds and RGB images are naturally complementary modalities for 3D
visual understanding - the former provides sparse but accurate locations of
points on objects, while the latter contains dense color and texture
information. Despite this potential for close sensor fusion, many methods train
two models in isolation and use simple feature concatenation to represent 3D
sensor data. This separated training scheme results in potentially sub-optimal
performance and prevents 3D tasks from being used to benefit 2D tasks that are
often useful on their own. To provide a more integrated approach, we propose a
novel Multi-Modality Task Cascade network (MTC-RCNN) that leverages 3D box
proposals to improve 2D segmentation predictions, which are then used to
further refine the 3D boxes. We show that including a 2D network between two
stages of 3D modules significantly improves both 2D and 3D task performance.
Moreover, to prevent the 3D module from over-relying on the overfitted 2D
predictions, we propose a dual-head 2D segmentation training and inference
scheme, allowing the 2nd 3D module to learn to interpret imperfect 2D
segmentation predictions. Evaluating our model on the challenging SUN RGB-D
dataset, we improve upon state-of-the-art results of both single modality and
fusion networks by a large margin ($\textbf{+3.8}$ mAP@0.5). Code will be
released $\href{https://github.com/Divadi/MTC_RCNN}{\text{here.}}$ | [
"cs.CV",
"cs.AI",
"cs.LG",
"cs.RO"
] |
Over the last few years, we have seen increasing data generated from
non-Euclidean domains, which are usually represented as graphs with complex
relationships, and Graph Neural Networks (GNN) have gained high interest
because of their potential in processing graph-structured data. In particular,
there is a strong interest in exploring the possibilities in performing
convolution on graphs using an extension of the GNN architecture, generally
referred to as Graph Convolutional Neural Networks (GCNN). Convolution on
graphs has been achieved mainly in two forms: spectral and spatial
convolutions. Due to the higher flexibility in exploring and exploiting the
graph structure of data, recently, there is an increasing interest in
investigating the possibilities that the spatial approach can offer. The idea
of adapting the network behaviour to the inputs being processed in order to
maximize overall performance has aroused much interest in the neural
networks literature over the years. This paper presents a novel method to adapt
the behaviour of a GCNN to the input proposing two ways to perform spatial
convolution on graphs using input-based filters which are dynamically
generated. Our model also investigates the problem of discovering and refining
relations among nodes. The experimental assessment confirms the capabilities of
the proposed approach, which achieves satisfying results using simple
architectures with a low number of filters. | [
"cs.LG",
"cs.AI"
] |
The vulnerability of face recognition systems to presentation attacks has
limited their application in security-critical scenarios. Automatic methods of
detecting such malicious attempts are essential for the safe use of facial
recognition technology. Although various methods have been suggested for
detecting such attacks, most of them over-fit the training set and fail in
generalizing to unseen attacks and environments. In this work, we use transfer
learning from the vision transformer model for the zero-shot anti-spoofing
task. The effectiveness of the proposed approach is demonstrated through
experiments in publicly available datasets. The proposed approach outperforms
the state-of-the-art methods in the zero-shot protocols in the HQ-WMCA and
SiW-M datasets by a large margin. Besides, the model achieves a significant
boost in cross-database performance as well. | [
"cs.CV"
] |
Dynamic graph representation learning strategies are based on different
neural architectures to capture the graph evolution over time. However, the
underlying neural architectures require a large number of parameters to train
and suffer from high online inference latency; that is, several model
parameters
have to be updated when new data arrive online. In this study we propose
Distill2Vec, a knowledge distillation strategy to train a compact model with a
low number of trainable parameters, so as to reduce the latency of online
inference while keeping the model accuracy high. We design a distillation loss
function based on Kullback-Leibler divergence to transfer the acquired
knowledge from a teacher model trained on offline data, to a small-size student
model for online data. Our experiments with publicly available datasets show
the superiority of our proposed model over several state-of-the-art approaches
with relative gains up to 5% in the link prediction task. In addition, we
demonstrate the effectiveness of our knowledge distillation strategy, in terms
of number of required parameters, where Distill2Vec achieves a compression
ratio up to 7:100 when compared with baseline approaches. For reproduction
purposes, our implementation is publicly available at
https://stefanosantaris.github.io/Distill2Vec. | [
"cs.LG",
"cs.AI"
] |
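The generic KL-based distillation term that Distill2Vec's loss builds on can be sketched in PyTorch as follows; the temperature and reduction choices are illustrative, not the paper's exact settings:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Kullback-Leibler distillation loss: the student matches the
    teacher's softened output distribution (temperature T)."""
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    # batchmean gives the mean KL per example; T^2 rescales gradients.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T
```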
The Word Mover's Distance (WMD) is a metric that measures the semantic
dissimilarity between two text documents by computing the cost of moving all
words of a source/query document to the most similar words of a target document
optimally. Computing WMD between two documents is costly because it requires
solving an optimization problem that costs \(O(V^3 \log V)\), where \(V\) is the
number of unique words in the document. Fortunately, the WMD can be framed as
the Earth Mover's Distance (EMD) (also known as the Optimal Transportation
Distance) for which it has been shown that the algorithmic complexity can be
reduced to \(O(V^2)\) by adding an entropy penalty to the optimization problem
and a similar idea can be adapted to compute WMD efficiently. Additionally, the
computation can be made highly parallel by computing WMD of a single query
document against multiple target documents at once (e.g., finding whether a
given tweet is similar to any other tweets posted in a day). In this paper,
we present a shared-memory parallel Sinkhorn-Knopp Algorithm to compute the WMD
of one document against many other documents by adopting the \(O(V^2)\) EMD
algorithm. We used algorithmic transformations to change the original dense
compute-heavy kernel to a sparse compute kernel and obtained \(67\times\)
speedup using \(96\) cores on a state-of-the-art Intel\textregistered{}
4-socket Cascade Lake machine w.r.t. its sequential run. Our parallel
algorithm is over \(700\times\) faster than the naive parallel python code that
internally uses optimized matrix library calls. | [
"cs.LG",
"cs.DC",
"stat.ML"
] |
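A minimal numpy sketch of the sequential Sinkhorn-Knopp kernel whose parallelization the paper studies; the regularization strength and iteration count are illustrative:

```python
import numpy as np

def sinkhorn(a, b, C, reg=0.1, n_iter=100):
    """Entropy-regularized OT (Sinkhorn-Knopp): the core kernel that
    reduces the EMD/WMD cost from O(V^3 log V) to O(V^2) per iteration.

    a, b: source/target word-weight histograms; C: pairwise word costs.
    """
    K = np.exp(-C / reg)                 # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)                # alternate scaling updates
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]      # transport plan
    return (P * C).sum()                 # regularized WMD approximation

# Toy usage: two 4-word documents with random embedding distances.
rng = np.random.default_rng(0)
C = rng.random((4, 4))
a = np.full(4, 0.25); b = np.full(4, 0.25)
print(sinkhorn(a, b, C))
```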
An increasing share of captured images and videos are transmitted for storage
and remote analysis by computer vision algorithms, rather than to be viewed by
humans. Contrary to traditional standard codecs with engineered tools, neural
network based codecs can be trained end-to-end to optimally compress images
with respect to a target rate and any given differentiable performance metric.
Although it is possible to train such compression tools to achieve better
rate-accuracy performance for a particular computer vision task, it could be
practical and relevant to re-use the compressed bit-stream for multiple machine
tasks. For this purpose, we introduce 'Connectors' that are inserted between
the decoder and the task algorithms to enable a direct transformation of the
compressed content, which was previously optimized for a specific task, to
multiple other machine tasks. We demonstrate the effectiveness of the proposed
method by achieving significant rate-accuracy performance improvement for both
image classification and object segmentation, using the same bit-stream,
originally optimized for object detection. | [
"cs.CV",
"cs.AI"
] |
This work presents a novel training technique for deep neural networks that
makes use of additional data from a distribution that is different from that of
the original input data. This technique aims to reduce overfitting and improve
the generalization performance of the network. Our proposed technique, namely
Passive Batch Injection Training Technique (PBITT), even reduces the level of
overfitting in networks that already use the standard techniques for reducing
overfitting such as $L_2$ regularization and batch normalization, resulting in
significant accuracy improvements. Passive Batch Injection Training Technique
(PBITT) introduces a few passive mini-batches into the training process that
contain data from a distribution that is different from the input data
distribution. This technique does not increase the number of parameters in the
final model and also does not increase the inference (test) time but still
improves the performance of deep CNNs. To the best of our knowledge, this is
the first work that makes use of different data distribution to aid the
training of convolutional neural networks (CNNs). We thoroughly evaluate the
proposed approach on standard architectures: VGG, ResNet, and WideResNet, and
on several popular datasets: CIFAR-10, CIFAR-100, SVHN, and ImageNet. We
observe consistent accuracy improvement by using the proposed technique. We
also show experimentally that the model trained by our technique generalizes
well to other tasks such as object detection on the MS-COCO dataset using
Faster R-CNN. We present extensive ablations to validate the proposed approach.
Our approach improves the accuracy of VGG-16 by a significant margin of 2.1%
over the CIFAR-100 dataset. | [
"cs.CV",
"cs.LG"
] |
Navigation inside a closed area with no GPS-signal accessibility is a highly
challenging task. In order to tackle this problem, recently the imaging-based
methods have grabbed the attention of many researchers. These methods either
extract the features (e.g. using SIFT, or SOSNet) and map the descriptive ones
to the camera position and rotation information, or deploy an end-to-end system
that directly estimates this information out of RGB images, similar to PoseNet.
While the former methods suffer from a heavy computational burden at test
time, the latter suffer from a lack of accuracy and robustness against
environmental changes and object movements. However, end-to-end systems are
quite fast at test and inference time and are well suited to real-world
applications, even though their training phase could be longer than
the former ones. In this paper, a novel multi-modal end-to-end system for
large-scale indoor positioning has been proposed, namely APS (Alpha Positioning
System), which integrates a Pix2Pix GAN network to reconstruct the point cloud
pair of the input query image, with a deep CNN network in order to robustly
estimate the position and rotation information of the camera. For this
integration, existing datasets lack paired RGB/point-cloud images for indoor
environments. Therefore, we created a new dataset to
handle this situation. By implementing the proposed APS system, we could
achieve a highly accurate camera positioning with a precision level of less
than a centimeter. | [
"cs.CV"
] |
This paper introduces the MCML approach for empirically studying the
learnability of relational properties that can be expressed in the well-known
software design language Alloy. A key novelty of MCML is quantification of the
performance of and semantic differences among trained machine learning (ML)
models, specifically decision trees, with respect to entire (bounded) input
spaces, and not just for given training and test datasets (as is the common
practice). MCML reduces the quantification problems to the classic complexity
theory problem of model counting, and employs state-of-the-art model counters.
The results show that relatively simple ML models can achieve surprisingly high
performance (accuracy and F1-score) when evaluated in the common setting of
using training and test datasets - even when the training dataset is much
smaller than the test dataset - indicating the seeming simplicity of learning
relational properties. However, MCML metrics based on model counting show that
the performance can degrade substantially when tested against the entire
(bounded) input space, indicating the high complexity of precisely learning
these properties, and the usefulness of model counting in quantifying the true
performance. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
We explore the application of super-resolution techniques to satellite
imagery, and the effects of these techniques on object detection algorithm
performance. Specifically, we enhance satellite imagery beyond its native
resolution, and test if we can identify various types of vehicles, planes, and
boats with greater accuracy than at native resolution. Using the Very Deep
Super-Resolution (VDSR) framework and a custom Random Forest Super-Resolution
(RFSR) framework we generate enhancement levels of 2x, 4x, and 8x over five
distinct resolutions ranging from 30 cm to 4.8 meters. Using both native and
super-resolved data, we then train several custom detection models using the
SIMRDWN object detection framework. SIMRDWN combines a number of popular object
detection algorithms (e.g. SSD, YOLO) into a unified framework that is designed
to rapidly detect objects in large satellite images. This approach allows us to
quantify the effects of super-resolution techniques on object detection
performance across multiple classes and resolutions. We also quantify the
performance of object detection as a function of native resolution and object
pixel size. For our test set we note that performance degrades from mean
average precision (mAP) = 0.53 at 30 cm resolution, down to mAP = 0.11 at 4.8 m
resolution. Super-resolving native 30 cm imagery to 15 cm yields the greatest
benefit; a 13-36% improvement in mAP. Super-resolution is less beneficial at
coarser resolutions, though still provides a small improvement in performance. | [
"cs.CV"
] |
To capture spatial relationships and temporal dynamics in traffic data,
spatio-temporal models for traffic forecasting have drawn significant attention
in recent years. Most recent works employ graph neural networks (GNN)
with multiple layers to capture the spatial dependency. However, road junctions
with different hop-distance can carry distinct traffic information which should
be exploited separately, but existing multi-layer GNNs are unable to
discriminate between their impacts. Again, to capture the temporal
interrelationship, recurrent neural networks are common in state-of-the-art
approaches that often fail to capture long-range dependencies. Furthermore,
traffic data shows repeated patterns in a daily or weekly period which should
be addressed explicitly. To address these limitations, we have designed a
Simplified Spatio-temporal Traffic forecasting GNN (SST-GNN) that effectively
encodes the spatial dependency by separately aggregating different neighborhood
representations rather than with multiple layers and capture the temporal
dependency with a simple yet effective weighted spatio-temporal aggregation
mechanism. We capture the periodic traffic patterns by using a novel position
encoding scheme with historical and current data in two different models. With
extensive experimental analysis, we have shown that our model has significantly
outperformed the state-of-the-art models on three real-world traffic datasets
from the Performance Measurement System (PeMS). | [
"cs.LG"
] |
When developing and analyzing new hyperparameter optimization (HPO) methods,
it is vital to empirically evaluate and compare them on well-curated benchmark
suites. In this work, we list desirable properties and requirements for such
benchmarks and propose a new set of challenging and relevant multifidelity HPO
benchmark problems motivated by these requirements. For this, we revisit the
concept of surrogate-based benchmarks and empirically compare them to more
widely-used tabular benchmarks, showing that the latter may induce bias in
performance estimation and ranking of HPO methods. We present a new
surrogate-based benchmark suite for multifidelity HPO methods consisting of 9
benchmark collections that constitute over 700 multifidelity HPO problems in
total. All our benchmarks also allow for querying of multiple optimization
targets, enabling the benchmarking of multi-objective HPO. We examine and
compare our benchmark suite with respect to the defined requirements and show
that our benchmarks provide viable additions to existing suites. | [
"cs.LG",
"stat.ML"
] |
Reinforcement learning (RL) has shown a promising performance in learning
optimal policies for a variety of sequential decision-making tasks. However, in
many real-world RL problems, besides optimizing the main objectives, the agent
is expected to satisfy a certain level of safety (e.g., avoiding collisions in
autonomous driving). While RL problems are commonly formalized as Markov
decision processes (MDPs), safety constraints are incorporated via constrained
Markov decision processes (CMDPs). Although recent advances in safe RL have
enabled learning safe policies in CMDPs, these safety requirements should be
satisfied during both training and in the deployment process. Furthermore, it
is shown that in memory-based and partially observable environments, these
methods fail to maintain safety over unseen out-of-distribution observations.
To address these limitations, we propose a Lyapunov-based uncertainty-aware
safe RL model. The introduced model adopts a Lyapunov function that converts
trajectory-based constraints to a set of local linear constraints. Furthermore,
to ensure the safety of the agent in highly uncertain environments, an
uncertainty quantification method is developed that enables identifying
risk-averse actions through estimating the probability of constraint
violations. Moreover, a Transformer model is integrated to provide the agent
with memory to process long time horizons of information via the self-attention
mechanism. The proposed model is evaluated in grid-world navigation tasks where
safety is defined as avoiding static and dynamic obstacles in fully and
partially observable environments. The results of these experiments show a
significant improvement in the performance of the agent both in achieving
optimality and satisfying safety constraints. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
Systems that can associate images with their spoken audio captions are an
important step towards visually grounded language learning. We describe a
scalable method to automatically generate diverse audio for image captioning
datasets. This supports pretraining deep networks for encoding both audio and
images, which we do via a dual encoder that learns to align latent
representations from both modalities. We show that a masked margin softmax loss
for such models is superior to the standard triplet loss. We fine-tune these
models on the Flickr8k Audio Captions Corpus and obtain state-of-the-art
results---improving recall in the top 10 from 29.6% to 49.5%. We also obtain
human ratings on retrieval outputs to better assess the impact of incidentally
matching image-caption pairs that were not associated in the data, finding that
automatic evaluation substantially underestimates the quality of the retrieved
results. | [
"cs.CV",
"cs.CL",
"cs.SD",
"eess.AS"
] |
Our aim is to establish a framework where reinforcement learning (RL) of
optimizing interventions retrospectively gives us a regulatory-compliant
pathway to prospective clinical testing of the learned policies in a clinical
deployment. We focus on infections in intensive care units which are one of the
major causes of death and difficult to treat because of the complex and opaque
patient dynamics, and the clinically debated, highly-divergent set of
intervention policies required by each individual patient, yet intensive care
units are naturally data rich. In our work, we build on RL approaches in
healthcare ("AI Clinicians"), and learn off-policy continuous dosing policy of
pharmaceuticals for sepsis treatment using historical intensive care data under
partially observable MDPs (POMDPs). POMDPs capture uncertainty in patient state
better by taking in all historical information, yielding an efficient
representation, which we investigate through ablations. We compensate for the
lack of exploration in our retrospective data by evaluating each encountered
state with a best-first tree search. We mitigate state distributional shift by
optimizing our policy in the vicinity of the clinicians' compound policy.
Crucially, we evaluate our model recommendations using not only conventional
policy evaluations but a novel framework that incorporates human experts: a
model-agnostic pre-clinical evaluation method to estimate the accuracy and
uncertainty of clinician's decisions versus our system recommendations when
confronted with the same individual patient history ("shadow mode"). | [
"cs.LG",
"cs.AI"
] |
As a long-standing problem in computer vision, face detection has attracted
much attention in recent decades for its practical applications. With the
availability of the face detection benchmark WIDER FACE dataset, much
progress has been made by various algorithms in recent years. Among them,
the Selective Refinement Network (SRN) face detector introduces the two-step
classification and regression operations selectively into an anchor-based face
detector to reduce false positives and improve location accuracy
simultaneously. Moreover, it designs a receptive field enhancement block to
provide more diverse receptive field. In this report, to further improve the
performance of SRN, we exploit some existing techniques via extensive
experiments, including new data augmentation strategy, improved backbone
network, MS COCO pretraining, decoupled classification module, segmentation
branch and Squeeze-and-Excitation block. Some of these techniques bring
performance improvements, while a few do not adapt well to our baseline.
As a consequence, we present an improved SRN face detector by combining these
useful techniques together and obtain the best performance on the widely used
WIDER FACE face detection benchmark. | [
"cs.CV"
] |
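Of the techniques listed above, the Squeeze-and-Excitation block has a compact standard form. A minimal numpy sketch follows; the weights are placeholders for learned parameters, and the reduction ratio is a hyperparameter:

```python
import numpy as np

def se_block(feature_map, w1, w2):
    """Minimal sketch of a Squeeze-and-Excitation block.

    feature_map: (C, H, W) feature tensor.
    w1: (C // r, C) and w2: (C, C // r) weights of the two FC layers,
        where r is the reduction ratio.
    """
    squeeze = feature_map.mean(axis=(1, 2))       # global average pool -> (C,)
    hidden = np.maximum(w1 @ squeeze, 0.0)        # FC + ReLU
    scale = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))  # FC + sigmoid -> (C,)
    return feature_map * scale[:, None, None]     # channel-wise reweighting

# Toy usage with reduction ratio r = 4:
C, H, W, r = 8, 5, 5, 4
fmap = np.random.randn(C, H, W)
w1 = np.random.randn(C // r, C) * 0.1
w2 = np.random.randn(C, C // r) * 0.1
out = se_block(fmap, w1, w2)   # same shape as fmap
```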
Finding communities in networks is a problem that remains difficult, in spite
of the amount of attention it has recently received. The Stochastic Block-Model
(SBM) is a generative model for graphs with "communities" for which, because of
its simplicity, the theoretical understanding has advanced fast in recent
years. In particular, there have been various results showing that simple
versions of spectral clustering using the Normalized Laplacian of the graph can
recover the communities almost perfectly with high probability. Here we show
that essentially the same algorithm used for the SBM and for its extension
called Degree-Corrected SBM, works on a wider class of Block-Models, which we
call Preference Frame Models, with essentially the same guarantees. Moreover,
the parametrization we introduce clearly exhibits the free parameters needed to
specify this class of models, and results in bounds that expose with more
clarity the parameters that control the recovery error in this model class. | [
"stat.ML",
"cs.LG"
] |
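For reference, here is a minimal sketch of the kind of normalized-Laplacian spectral clustering whose recovery guarantees the entry above extends. The row-normalization step follows one common variant (Ng-Jordan-Weiss) and is an assumption about the exact algorithm analyzed:

```python
import numpy as np
from sklearn.cluster import KMeans

def spectral_communities(A, k):
    """Spectral clustering with the symmetric normalized Laplacian.

    A: (n, n) symmetric adjacency matrix; k: number of communities.
    """
    d = A.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    L = np.eye(len(A)) - (A * d_inv_sqrt[:, None]) * d_inv_sqrt[None, :]
    # Eigenvectors of the k smallest eigenvalues of L_sym.
    vals, vecs = np.linalg.eigh(L)
    X = vecs[:, :k]
    # Row-normalize, then cluster the spectral embedding with k-means.
    X = X / np.maximum(np.linalg.norm(X, axis=1, keepdims=True), 1e-12)
    return KMeans(n_clusters=k, n_init=10).fit_predict(X)
```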
Using FPGAs to accelerate ConvNets has attracted significant attention in
recent years. However, FPGA accelerator design has not leveraged the latest
progress of ConvNets. As a result, the key application characteristics such as
frames-per-second (FPS) are ignored in favor of simply counting GOPs, and
results on accuracy, which is critical to application success, are often not
even reported. In this work, we adopt an algorithm-hardware co-design approach
to develop a ConvNet accelerator called Synetgy and a novel ConvNet model
called DiracDeltaNet$^{\dagger}$. Both the accelerator and ConvNet are tailored
to FPGA requirements. DiracDeltaNet, as the name suggests, is a ConvNet with
only $1\times 1$ convolutions while spatial convolutions are replaced by more
efficient shift operations. DiracDeltaNet achieves competitive accuracy on
ImageNet (88.7\% top-5), but with 42$\times$ fewer parameters and 48$\times$
fewer OPs than VGG16. We further quantize DiracDeltaNet's weights and
activations to 4 bits, with less than 1\% accuracy loss. These quantizations
are well suited to the nature of FPGA hardware. In short, DiracDeltaNet's small model
size, low computational OP count, low precision and simplified operators allow
us to co-design a highly customized computing unit for an FPGA. We implement
the computing units for DiracDeltaNet on an Ultra96 SoC system through
high-level synthesis. Our accelerator's final top-5 accuracy of 88.1\% on
ImageNet is higher than that of all previously reported embedded FPGA
accelerators. In addition, the accelerator reaches an inference speed of 66.3
FPS on the ImageNet classification task, surpassing prior works with similar
accuracy by at least 11.6$\times$. | [
"cs.CV",
"cs.AR"
] |
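Two ingredients above admit short sketches: the zero-FLOP shift operation that replaces spatial convolutions, and uniform low-bit quantization. The channel grouping and the quantization scheme below are illustrative assumptions, not the paper's exact design:

```python
import numpy as np

def shift(x):
    """Shift operation sketch: each channel group is translated one
    pixel in a fixed direction, mixing spatial information with zero
    multiply-accumulates. x: (C, H, W) feature map.
    """
    out = np.zeros_like(x)
    g = x.shape[0] // 5  # 4 shift directions + 1 identity group (assumed)
    out[:g, 1:, :] = x[:g, :-1, :]             # shift down
    out[g:2*g, :-1, :] = x[g:2*g, 1:, :]       # shift up
    out[2*g:3*g, :, 1:] = x[2*g:3*g, :, :-1]   # shift right
    out[3*g:4*g, :, :-1] = x[3*g:4*g, :, 1:]   # shift left
    out[4*g:] = x[4*g:]                        # remaining channels unshifted
    return out

def quantize(x, bits=4):
    """Uniform symmetric quantization sketch (scheme is an assumption)."""
    scale = max(np.abs(x).max() / (2**(bits - 1) - 1), 1e-12)
    q = np.round(x / scale).clip(-(2**(bits - 1)), 2**(bits - 1) - 1)
    return q * scale
```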
Few-shot learning features the capability of generalizing from a few
examples. In this paper, we first identify that a discriminative feature space,
namely a rectified metric space, that is learned to maintain the metric
consistency from training to testing, is an essential component to the success
of metric-based few-shot learning. Numerous analyses indicate that a simple
modification of the objective can yield substantial performance gains. The
resulting approach, called rectified metric propagation (ReMP), further
optimizes an attentive prototype propagation network, and applies a repulsive
force to make confident predictions. Extensive experiments demonstrate that the
proposed ReMP is effective and efficient, and outperforms the state of the art
on various standard few-shot learning datasets. | [
"cs.CV",
"cs.LG"
] |
Image landmark detection aims to automatically identify the locations of
predefined fiducial points. Despite recent success in this field,
higher-order structural modeling to capture implicit or explicit
relationships among anatomical landmarks has not been adequately exploited. In
this work, we present a new topology-adapting deep graph learning approach for
accurate anatomical facial and medical (e.g., hand, pelvis) landmark detection.
The proposed method constructs graph signals leveraging both local image
features and global shape features. The adaptive graph topology naturally
explores and lands on task-specific structures which are learned end-to-end
with two Graph Convolutional Networks (GCNs). Extensive experiments are
conducted on three public facial image datasets (WFLW, 300W, and COFW-68) as
well as three real-world X-ray medical datasets (Cephalometric (public), Hand
and Pelvis). Quantitative comparisons with previous state-of-the-art
approaches across all studied datasets indicate superior performance in
both robustness and accuracy. Qualitative visualizations of the learned graph
topologies demonstrate a physically plausible connectivity lying behind the
landmarks. | [
"cs.CV"
] |
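A minimal sketch of the propagation step inside one of the two GCNs, using the standard Kipf-and-Welling-style layer; the paper's exact layer may differ:

```python
import numpy as np

def gcn_layer(X, A, W):
    """One graph-convolution layer.

    X: (N, F) node features, e.g. per-landmark local image features
       concatenated with global shape features.
    A: (N, N) adjacency matrix encoding the (learned) graph topology.
    W: (F, F_out) learnable weight matrix.
    """
    A_hat = A + np.eye(len(A))                     # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ X @ W, 0.0)         # propagate + ReLU
```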
We introduce a novel self-supervised learning approach to learn
representations of videos that are responsive to changes in the motion
dynamics. Our representations can be learned from data without human annotation
and provide a substantial boost to the training of neural networks on small
labeled data sets for tasks such as action recognition, which require
accurately distinguishing the motion of objects. We promote accurate learning
of motion without human annotation by training a neural network to discriminate
a video sequence from its temporally transformed versions. To learn to
distinguish non-trivial motions, the design of the transformations is based on
two principles: 1) To define clusters of motions based on time warps of
different magnitude; 2) To ensure that the discrimination is feasible only by
observing and analyzing as many image frames as possible. Thus, we introduce
the following transformations: forward-backward playback, random frame
skipping, and uniform frame skipping. Our experiments show that networks
trained with the proposed method yield representations with improved transfer
performance for action recognition on UCF101 and HMDB51. | [
"cs.CV"
] |
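The three temporal transformations named above are simple to express on a frame array; the clip-length handling and skip rates below are illustrative choices rather than the paper's settings:

```python
import numpy as np

def temporal_transforms(frames, rng=np.random):
    """Sketch of the three transformations the network must
    discriminate the original clip from.

    frames: (T, H, W, C) array of video frames.
    """
    T = len(frames)
    half = frames[: T // 2]
    forward_backward = np.concatenate([half, half[::-1]])  # play, then rewind
    uniform_skip = frames[::2]                              # every 2nd frame
    keep = np.sort(rng.choice(T, size=T // 2, replace=False))
    random_skip = frames[keep]                              # irregular skipping
    return forward_backward, uniform_skip, random_skip
```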
Recurrent neural networks (RNNs) have been broadly applied to natural language
processing (NLP) problems. This kind of neural network is designed for modeling
sequential data and has proven quite effective in sequential
tagging tasks. In this paper, we propose to use a bi-directional RNN with long
short-term memory (LSTM) units for Chinese word segmentation, which is a crucial
preprocessing task for modeling Chinese sentences and articles. Classical methods
focus on designing and combining hand-crafted features from context, whereas
the bi-directional LSTM network (BLSTM) needs no prior knowledge or feature
engineering and excels at keeping contextual information in both
directions. Experimental results show that our approach achieves state-of-the-art
performance in word segmentation on both traditional Chinese datasets and
simplified Chinese datasets. | [
"cs.LG",
"cs.CL"
] |
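The usual way such a BLSTM is set up is to cast segmentation as per-character tagging. A sketch using the common BMES scheme (the specific tag set is an assumption about this paper's setup):

```python
def to_bmes(words):
    """Convert a segmented sentence into a per-character tag sequence.
    B = word begin, M = middle, E = end, S = single-character word.
    """
    chars, tags = [], []
    for w in words:
        chars.extend(w)
        if len(w) == 1:
            tags.append("S")
        else:
            tags.extend(["B"] + ["M"] * (len(w) - 2) + ["E"])
    return chars, tags

# A segmented sentence becomes training targets for the tagger:
chars, tags = to_bmes(["我", "喜欢", "自然语言"])
# chars: ['我', '喜', '欢', '自', '然', '语', '言']
# tags:  ['S', 'B', 'E', 'B', 'M', 'M', 'E']
```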
We present an approach to simultaneously perform semantic segmentation and
prepositional phrase attachment resolution for captioned images. Some
ambiguities in language cannot be resolved without simultaneously reasoning
about an associated image. If we consider the sentence "I shot an elephant in
my pajamas", looking at language alone (and not using common sense), it is
unclear if it is the person or the elephant wearing the pajamas or both. Our
approach produces a diverse set of plausible hypotheses for both semantic
segmentation and prepositional phrase attachment resolution that are then
jointly reranked to select the most consistent pair. We show that our semantic
segmentation and prepositional phrase attachment resolution modules have
complementary strengths, and that joint reasoning produces more accurate
results than any module operating in isolation. Multiple hypotheses are also
shown to be crucial to improved multiple-module reasoning. Our vision and
language approach significantly outperforms the Stanford Parser (De Marneffe et
al., 2006) by 17.91% (28.69% relative) and 12.83% (25.28% relative) in two
different experiments. We also make small improvements over DeepLab-CRF (Chen
et al., 2015). | [
"cs.CV",
"cs.CL",
"cs.LG"
] |
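The joint reranking step above can be sketched as scoring every pair of diverse hypotheses with a cross-module consistency term; the additive scoring below is illustrative, not the paper's exact model:

```python
def rerank(seg_hyps, ppa_hyps, consistency):
    """Pick the most consistent (segmentation, attachment) pair.

    seg_hyps, ppa_hyps: lists of (hypothesis, score) tuples from each
    module. consistency: function scoring how well a pair agrees
    (a hypothetical callable for this sketch).
    """
    best, best_score = None, float("-inf")
    for s, s_score in seg_hyps:
        for p, p_score in ppa_hyps:
            score = s_score + p_score + consistency(s, p)
            if score > best_score:
                best, best_score = (s, p), score
    return best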
Policy advice is a transfer learning method where a student agent is able to
learn faster via advice from a teacher. However, both this and other
reinforcement learning transfer methods have little theoretical analysis. This
paper formally defines a setting where multiple teacher agents can provide
advice to a student and introduces an algorithm to leverage both autonomous
exploration and teacher's advice. Our regret bounds justify the intuition that
good teachers help while bad teachers hurt. Using our formalization, we are
also able to quantify, for the first time, when negative transfer can occur
within such a reinforcement learning setting. | [
"cs.LG"
] |
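A toy sketch of the setting described above, blending autonomous exploration with multi-teacher advice; the blending rule is illustrative only and is neither the paper's algorithm nor its regret analysis:

```python
import numpy as np

def act(q_estimates, teacher_advice, trust, epsilon=0.1, rng=np.random):
    """Follow the most-trusted teacher with probability equal to its
    trust, otherwise act epsilon-greedily on the student's own
    estimates. teacher_advice: dict teacher -> suggested action;
    trust: dict teacher -> trust in [0, 1] (both hypothetical inputs).
    """
    best_teacher = max(teacher_advice, key=lambda t: trust[t])
    if rng.random() < trust[best_teacher]:
        return teacher_advice[best_teacher]    # take advice
    if rng.random() < epsilon:
        return rng.randint(len(q_estimates))   # explore autonomously
    return int(np.argmax(q_estimates))         # exploit own estimates
```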
Multishot Magnetic Resonance Imaging (MRI) has recently gained popularity as
it accelerates the MRI data acquisition process without compromising the
quality of the final MR image. However, it suffers from motion artifacts caused by
patient movements, which may lead to misdiagnosis. Modern state-of-the-art
motion correction techniques are able to counter small-degree motion; however,
their adoption is hindered by their time complexity. This paper proposes a
Generative Adversarial Network (GAN) for reconstructing motion free
high-fidelity images while reducing the image reconstruction time by an
impressive two orders of magnitude. | [
"cs.CV"
] |
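The abstract above does not specify its losses; a typical generator objective for this kind of artifact-removal GAN combines an adversarial term with pixel-wise fidelity, sketched below as an assumption:

```python
import numpy as np

def generator_loss(d_fake, recon, target, lam=100.0):
    """Sketch of a generator objective for motion-artifact removal.

    d_fake: discriminator outputs in (0, 1) on reconstructed images.
    recon, target: reconstructed and motion-free reference images.
    lam: fidelity weight (an assumed value).
    """
    adv = -np.mean(np.log(d_fake + 1e-12))   # fool the discriminator
    l1 = np.mean(np.abs(recon - target))     # stay close to ground truth
    return adv + lam * l1
```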
In Natural Language Processing (NLP), pretrained language models (LMs) that
are transferred to downstream tasks have been recently shown to achieve
state-of-the-art results. However, standard fine-tuning can degrade the
general-domain representations captured during pretraining. To address this
issue, we introduce a new regularization technique, AFTER: domain Adversarial
Fine-Tuning as an Effective Regularizer. Specifically, we complement the
task-specific loss used during fine-tuning with an adversarial objective. This
additional loss term corresponds to an adversarial classifier that aims to
discriminate between in-domain and out-of-domain text representations.
In-domain refers to the labeled dataset of the task at hand while out-of-domain
refers to unlabeled data from a different domain. Intuitively, the adversarial
classifier acts as a regularizer which prevents the model from overfitting to
the task-specific domain. Empirical results on various natural language
understanding tasks show that AFTER leads to improved performance compared to
standard fine-tuning. | [
"cs.LG",
"stat.ML"
] |
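A sketch of the combined objective: task loss plus the adversarial domain-discrimination term. The weight lambda and the gradient-reversal mechanics are assumptions beyond what the abstract states:

```python
import numpy as np

def after_loss(task_loss, domain_logits, domain_labels, lam=0.1):
    """AFTER-style objective sketch.

    domain_logits: (N,) classifier scores on text representations.
    domain_labels: (N,) with 1 = in-domain (labeled task data),
                   0 = out-of-domain (unlabeled other-domain data).
    """
    p = 1.0 / (1.0 + np.exp(-domain_logits))
    # Binary cross-entropy of the domain classifier; the encoder is
    # trained adversarially against it (e.g., via gradient reversal).
    bce = -np.mean(domain_labels * np.log(p + 1e-12)
                   + (1 - domain_labels) * np.log(1 - p + 1e-12))
    return task_loss + lam * bce
```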
3D object detectors based only on LiDAR point clouds hold the
state-of-the-art on modern street-view benchmarks. However, LiDAR-based
detectors poorly generalize across domains due to domain shift. In the case of
LiDAR, in fact, domain shift is not only due to changes in the environment and
in the object appearances, as for visual data from RGB cameras, but is also
related to the geometry of the point clouds (e.g., point density variations).
This paper proposes SF-UDA$^{3D}$, the first Source-Free Unsupervised Domain
Adaptation (SF-UDA) framework to domain-adapt the state-of-the-art PointRCNN 3D
detector to target domains for which we have no annotations (unsupervised),
and for which we hold neither images nor annotations of the source domain (source-free).
SF-UDA$^{3D}$ is novel on both aspects. Our approach is based on
pseudo-annotations, reversible scale-transformations and motion coherency.
SF-UDA$^{3D}$ outperforms both previous domain adaptation techniques based on
features alignment and state-of-the-art 3D object detection methods which
additionally use few-shot target annotations or target annotation statistics.
This is demonstrated by extensive experiments on two large-scale datasets,
i.e., KITTI and nuScenes. | [
"cs.CV"
] |
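A sketch of the reversible scale transformation on a LiDAR frame, one ingredient of the pseudo-annotation pipeline above; the box parameterization is an assumption:

```python
import numpy as np

def scale_transform(points, boxes, s):
    """Scale a LiDAR frame and its boxes to bridge cross-domain
    size/density gaps.

    points: (N, 3) xyz coordinates.
    boxes: (M, 7) as [x, y, z, l, w, h, yaw] (layout assumed).
    s: scale factor.
    """
    pts = points * s
    bxs = boxes.copy()
    bxs[:, :6] *= s        # centers and sizes scale; yaw is unchanged
    return pts, bxs

def inverse_scale_transform(points, boxes, s):
    """The inverse, restoring the original metric scale (reversibility)."""
    return scale_transform(points, boxes, 1.0 / s)
```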
We define a message-passing algorithm for computing magnetizations in
Restricted Boltzmann machines, which are Ising models on bipartite graphs
introduced as neural network models for probability distributions over spin
configurations. To model nontrivial statistical dependencies between the spins'
couplings, we assume that the rectangular coupling matrix is drawn from an
arbitrary bi-rotation invariant random matrix ensemble. Using the dynamical
functional method of statistical mechanics we exactly analyze the dynamics of
the algorithm in the large system limit. We prove the global convergence of the
algorithm under a stability criterion and compute asymptotic convergence rates
showing excellent agreement with numerical simulations. | [
"cs.LG",
"cond-mat.dis-nn",
"stat.ML"
] |
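For orientation, here is a naive mean-field fixed-point iteration for magnetizations of a bipartite Ising model (RBM). The paper's message-passing algorithm additionally handles correlated couplings via Onsager-like correction terms, which this simplified sketch omits:

```python
import numpy as np

def mean_field_magnetizations(W, b_v, b_h, n_iter=200, damping=0.5):
    """Damped naive mean-field iteration for an RBM.

    W: (Nv, Nh) coupling matrix; b_v, b_h: visible/hidden local fields.
    damping: mixing factor for the fixed-point update (assumed value).
    """
    m_v = np.zeros(W.shape[0])
    m_h = np.zeros(W.shape[1])
    for _ in range(n_iter):
        m_v_new = np.tanh(b_v + W @ m_h)
        m_h_new = np.tanh(b_h + W.T @ m_v)
        m_v = damping * m_v + (1 - damping) * m_v_new
        m_h = damping * m_h + (1 - damping) * m_h_new
    return m_v, m_h
```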