text | label
---|---|
We propose a novel reformulation of the stochastic optimal control problem as
an approximate inference problem, demonstrating that such an interpretation
leads to new practical methods for the original problem. In particular, we
characterise a novel class of iterative solutions to the stochastic optimal
control problem based on a natural relaxation of the exact dual formulation.
These theoretical insights are applied to the Reinforcement Learning problem,
where they lead to new model-free, off-policy methods for discrete and
continuous problems. | [
"cs.LG",
"stat.ML"
] |
With access to large datasets, deep neural networks (DNN) have achieved
human-level accuracy in image and speech recognition tasks. However, in
chemistry, data is inherently small and fragmented. In this work, we develop an
approach of using rule-based knowledge for training ChemNet, a transferable and
generalizable deep neural network for chemical property prediction that learns
in a weakly-supervised manner from large unlabeled chemical databases. When
coupled with transfer learning approaches to predict other smaller datasets for
chemical properties that it was not originally trained on, we show that
ChemNet's accuracy outperforms contemporary DNN models that were trained using
conventional supervised learning. Furthermore, we demonstrate that the ChemNet
pre-training approach is equally effective on both CNN (Chemception) and RNN
(SMILES2vec) models, indicating that this approach is network architecture
agnostic and is effective across multiple data modalities. Our results indicate
that a pre-trained ChemNet that incorporates chemistry domain knowledge enables
the development of generalizable neural networks for more accurate prediction
of novel chemical properties. | [
"stat.ML",
"cs.AI",
"cs.CV",
"cs.LG"
] |
Training an agent to solve control tasks directly from high-dimensional
images with model-free reinforcement learning (RL) has proven difficult. A
promising approach is to learn a latent representation together with the
control policy. However, fitting a high-capacity encoder using a scarce reward
signal is sample inefficient and leads to poor performance. Prior work has
shown that auxiliary losses, such as image reconstruction, can aid efficient
representation learning. However, incorporating reconstruction loss into an
off-policy learning algorithm often leads to training instability. We explore
the underlying reasons and identify variational autoencoders, used by previous
investigations, as the cause of the divergence. Following these findings, we
propose effective techniques to improve training stability. This results in a
simple approach capable of matching state-of-the-art model-free and model-based
algorithms on MuJoCo control tasks. Furthermore, our approach demonstrates
robustness to observational noise, surpassing existing approaches in this
setting. Code, results, and videos are anonymously available at
https://sites.google.com/view/sac-ae/home. | [
"cs.LG",
"cs.AI",
"cs.RO",
"stat.ML"
] |
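A minimal PyTorch sketch of the core idea in the abstract above: a deterministic encoder/decoder provides an image-reconstruction auxiliary loss that is optimized alongside a (placeholder) off-policy RL loss. Network sizes, the latent penalty weight, and the `rl_loss` stand-in are illustrative assumptions, not the paper's exact implementation.

```python
# Sketch: reconstruction auxiliary loss on an encoder shared with an RL agent.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PixelEncoder(nn.Module):
    def __init__(self, latent_dim=50):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2), nn.ReLU(),   # 84x84 -> 41x41
            nn.Conv2d(32, 32, 3, stride=2), nn.ReLU(),  # 41x41 -> 20x20
        )
        self.fc = nn.Linear(32 * 20 * 20, latent_dim)

    def forward(self, obs):
        return torch.tanh(self.fc(self.conv(obs).flatten(1)))

class PixelDecoder(nn.Module):
    def __init__(self, latent_dim=50):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 32 * 20 * 20)
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(32, 32, 3, stride=2), nn.ReLU(),  # 20 -> 41
            nn.ConvTranspose2d(32, 3, 4, stride=2),               # 41 -> 84
        )

    def forward(self, z):
        return self.deconv(self.fc(z).view(-1, 32, 20, 20))

encoder, decoder = PixelEncoder(), PixelDecoder()
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

obs = torch.rand(8, 3, 84, 84)                 # a batch of image observations
z = encoder(obs)
recon_loss = F.mse_loss(decoder(z), obs)       # auxiliary reconstruction loss
latent_reg = 1e-6 * z.pow(2).sum(1).mean()     # small L2 penalty on latents
rl_loss = torch.tensor(0.0)                    # placeholder for the actor-critic loss
opt.zero_grad()
(recon_loss + latent_reg + rl_loss).backward()
opt.step()
```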
Satellite imagery is important for many applications including disaster
response, law enforcement, and environmental monitoring. These applications
require the manual identification of objects and facilities in the imagery.
Because the geographic expanses to be covered are great and the analysts
available to conduct the searches are few, automation is required. Yet
traditional object detection and classification algorithms are too inaccurate
and unreliable to solve the problem. Deep learning is a family of machine
learning algorithms that have shown promise for the automation of such tasks.
It has achieved success in image understanding by means of convolutional neural
networks. In this paper we apply them to the problem of object and facility
recognition in high-resolution, multi-spectral satellite imagery. We describe a
deep learning system for classifying objects and facilities from the IARPA
Functional Map of the World (fMoW) dataset into 63 different classes. The
system consists of an ensemble of convolutional neural networks and additional
neural networks that integrate satellite metadata with image features. It is
implemented in Python using the Keras and TensorFlow deep learning libraries
and runs on a Linux server with an NVIDIA Titan X graphics card. At the time of
writing, the system is in 2nd place in the fMoW TopCoder competition. Its total
accuracy is 83%, the F1 score is 0.797, and it classifies 15 of the classes
with accuracies of 95% or better. | [
"cs.CV",
"cs.LG"
] |
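The abstract above describes an ensemble of CNNs plus additional networks that fuse satellite metadata with image features, implemented in Keras and TensorFlow. The sketch below illustrates one plausible way to wire such a two-input model; the layer sizes and the 8-dimensional metadata vector are assumptions, not the competition system itself.

```python
# Sketch: fusing CNN image features with satellite metadata for 63-way classification.
import tensorflow as tf
from tensorflow.keras import layers, Model

image_in = layers.Input(shape=(224, 224, 3), name="image")
meta_in = layers.Input(shape=(8,), name="metadata")   # e.g. GSD, sun angle, ...

# Image branch: a small CNN stack standing in for a larger backbone.
x = layers.Conv2D(32, 3, activation="relu")(image_in)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(64, 3, activation="relu")(x)
x = layers.GlobalAveragePooling2D()(x)

# Metadata branch.
m = layers.Dense(32, activation="relu")(meta_in)

# Fuse the two branches and classify into the 63 fMoW classes.
h = layers.Concatenate()([x, m])
h = layers.Dense(128, activation="relu")(h)
out = layers.Dense(63, activation="softmax")(h)

model = Model(inputs=[image_in, meta_in], outputs=out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```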
When an image classifier makes a prediction, which parts of the image are
relevant and why? We can rephrase this question to ask: which parts of the
image, if they were not seen by the classifier, would most change its decision?
Producing an answer requires marginalizing over images that could have been
seen but weren't. We can sample plausible image in-fills by conditioning a
generative model on the rest of the image. We then optimize to find the image
regions that most change the classifier's decision after in-fill. Our approach
contrasts with ad-hoc in-filling approaches, such as blurring or injecting
noise, which generate inputs far from the data distribution, and ignore
informative relationships between different parts of the image. Our method
produces more compact and relevant saliency maps, with fewer artifacts compared
to previous methods. | [
"cs.CV"
] |
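A rough PyTorch sketch of the optimization described above: a soft mask is tuned so that regions replaced by plausible in-fills maximally reduce the classifier's score for the target class. `classifier` and `infill_model` are hypothetical stand-ins; the actual method conditions a generative model on the unmasked pixels.

```python
# Sketch: counterfactual saliency via generative in-filling of masked regions.
import torch
import torch.nn.functional as F

def saliency_by_infill(image, target, classifier, infill_model, steps=100, lam=0.01):
    mask = torch.zeros(1, 1, *image.shape[-2:], requires_grad=True)
    opt = torch.optim.Adam([mask], lr=0.05)
    for _ in range(steps):
        m = torch.sigmoid(mask)
        infill = infill_model(image, m)           # plausible content for masked area
        composite = (1 - m) * image + m * infill  # replace masked regions
        prob = F.softmax(classifier(composite), dim=1)[0, target]
        loss = prob + lam * m.mean()              # drop target prob, keep mask small
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.sigmoid(mask).detach()           # high values = salient regions
```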
Vision transformers (ViTs) have recently received explosive popularity, but
their enormous model sizes and training costs remain daunting. Conventional
post-training pruning often incurs higher training budgets. In contrast, this
paper aims to trim down both the training memory overhead and the inference
complexity, without sacrificing the achievable accuracy. We launch and report
the first-of-its-kind comprehensive exploration, on taking a unified approach
of integrating sparsity in ViTs "from end to end". Specifically, instead of
training full ViTs, we dynamically extract and train sparse subnetworks, while
sticking to a fixed small parameter budget. Our approach jointly optimizes
model parameters and explores connectivity throughout training, ending up with
one sparse network as the final output. The approach is seamlessly extended
from unstructured to structured sparsity, the latter by guiding the
prune-and-grow of self-attention heads inside ViTs. For additional
efficiency gains, we further co-explore data and architecture sparsity, by
plugging in a novel learnable token selector to adaptively determine the
currently most vital patches. Extensive results on ImageNet with diverse ViT
backbones validate the effectiveness of our proposals which obtain
significantly reduced computational cost and almost unimpaired generalization.
Perhaps most surprisingly, we find that the proposed sparse (co-)training can
even improve the ViT accuracy rather than compromising it, making sparsity a
tantalizing "free lunch". For example, our sparsified DeiT-Small at (5%, 50%)
sparsity for (data, architecture), improves 0.28% top-1 accuracy, and meanwhile
enjoys 49.32% FLOPs and 4.40% running time savings. Our codes are available at
https://github.com/VITA-Group/SViTE. | [
"cs.CV",
"cs.AI"
] |
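Dynamically extracting sparse subnetworks, as described above, amounts to repeatedly pruning and regrowing connections under a fixed parameter budget. Below is an illustrative prune-and-grow step; the update fraction and the use of gradient magnitude for regrowth are assumptions of this sketch, not necessarily the exact criteria of the cited work.

```python
# Sketch: one prune-and-grow step for dynamic sparse training.
import torch

def prune_and_grow(weight, mask, grad, update_frac=0.1):
    active = mask.bool()
    n_update = int(update_frac * active.sum().item())
    if n_update == 0:
        return mask

    # Prune: deactivate the smallest-magnitude currently-active weights.
    w = weight.abs().masked_fill(~active, float("inf"))
    drop_idx = torch.topk(w.view(-1), n_update, largest=False).indices
    mask.view(-1)[drop_idx] = 0.0

    # Grow: activate inactive positions with the largest gradient magnitude.
    g = grad.abs().masked_fill(mask.bool(), float("-inf"))
    grow_idx = torch.topk(g.view(-1), n_update, largest=True).indices
    mask.view(-1)[grow_idx] = 1.0
    weight.data.view(-1)[grow_idx] = 0.0   # new connections start at zero
    return mask

# Toy usage with a ~20% density budget.
w = torch.randn(64, 64, requires_grad=True)
m = (torch.rand_like(w) < 0.2).float()
loss = ((w * m) ** 2).sum()
loss.backward()
m = prune_and_grow(w, m, w.grad)
```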
In contrast to the literature where local patterns in 3D point clouds are
captured by customized convolutional operators, in this paper we study the
problem of how to effectively and efficiently project such point clouds into a
2D image space so that traditional 2D convolutional neural networks (CNNs) such
as U-Net can be applied for segmentation. To this end, we are motivated by
graph drawing and reformulate it as an integer programming problem to learn the
topology-preserving graph-to-grid mapping for each individual point cloud. To
accelerate the computation in practice, we further propose a novel hierarchical
approximate algorithm. With the help of the Delaunay triangulation for graph
construction from point clouds and a multi-scale U-Net for segmentation, we
manage to demonstrate the state-of-the-art performance on ShapeNet and PartNet,
respectively, with significant improvement over the literature. Code is
available at https://github.com/Zhang-VISLab. | [
"cs.CV",
"cs.LG"
] |
This paper considers forecasting of weakly correlated conversion-rate time
series by exponential smoothing, neural network, and decision tree methods,
using the conversion-percentage series of an online store as an example.
The advantages and disadvantages of each method are discussed. | [
"cs.LG",
"90B18",
"D.4.6; E.3; C.2"
] |
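For reference, simple exponential smoothing, one of the forecasting methods compared above, can be written in a few lines; the smoothing factor and the toy conversion-rate series are made up for illustration.

```python
# Sketch: simple exponential smoothing for a short conversion-rate series.
def exponential_smoothing(series, alpha=0.3):
    """Return one-step-ahead smoothed forecasts for the series."""
    level = series[0]
    forecasts = [level]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
        forecasts.append(level)
    return forecasts

conversion_rate = [0.021, 0.019, 0.024, 0.022, 0.018, 0.025]
print(exponential_smoothing(conversion_rate))
```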
The Explainable Abstract Trains Dataset is an image dataset containing
simplified representations of trains. It aims to provide a platform for the
application and research of algorithms for justification and explanation
extraction. The dataset is accompanied by an ontology that conceptualizes and
classifies the depicted trains based on their visual characteristics, allowing
for a precise understanding of how each train was labeled. Each image in the
dataset is annotated with multiple attributes describing the trains' features
and with bounding boxes for the train elements. | [
"cs.CV",
"cs.AI"
] |
Unsupervised domain transfer is the task of transferring or translating
samples from a source distribution to a different target distribution. Current
solutions to unsupervised domain transfer often operate on data whose
distribution modes are well-matched, for instance, having the same class
frequencies in the source and target distributions. However, these models do
not perform well when the modes are not well-matched, as is the case when
samples are drawn independently from two different, but related, domains. This
mode imbalance is problematic as generative adversarial networks (GANs), a
successful approach in this setting, are sensitive to mode frequency, which
results in a mismatch of semantics between source samples and generated samples
of the target distribution. We propose a principled method of re-weighting
training samples to correct for such mass shift between the transferred
distributions, which we call batch-weight. We also provide a rigorous
probabilistic setting for domain transfer and a new simplified objective for
training transfer networks, an alternative to the complex, multi-component loss
functions used in current state-of-the-art image-to-image translation
models. The new objective stems from the discrimination of joint distributions
and enforces cycle-consistency in an abstract, high-level, rather than
pixel-wise, sense. Lastly, we experimentally show the effectiveness of the
proposed methods in several image-to-image translation tasks. | [
"cs.LG",
"cs.AI",
"cs.CV",
"stat.ML"
] |
Recently, end-to-end trainable deep neural networks have significantly
improved stereo depth estimation for perspective images. However, 360{\deg}
images captured under equirectangular projection cannot benefit from directly
adopting existing methods due to the distortion introduced (i.e., lines in 3D
are not projected onto lines in 2D). To tackle this issue, we present a novel
architecture specifically designed for spherical disparity using the setting of
top-bottom 360{\deg} camera pairs. Moreover, we propose to mitigate the
distortion issue by (1) an additional input branch capturing the position and
relation of each pixel in the spherical coordinate, and (2) a cost volume built
upon a learnable shifting filter. Due to the lack of 360{\deg} stereo data, we
collect two 360{\deg} stereo datasets from Matterport3D and Stanford3D for
training and evaluation. Extensive experiments and an ablation study are provided
to validate our method against existing algorithms. Finally, we show promising
results on real-world environments capturing images with two consumer-level
cameras. | [
"cs.CV"
] |
What mechanisms cause entanglement in GANs? Although developing disentangled
GANs has attracted considerable attention, it remains unclear how entanglement
originates from the GAN transformation. In this research we propose a
difference-in-difference (DID) counterfactual framework to design experiments
for analyzing the entanglement mechanism in the Progressive-growing GAN
(PG-GAN). Our experiments clarify how pixel normalization causes
PG-GAN entanglement during an input-unit-ablation transformation. We discover
that pixel normalization causes object entanglement by in-painting the area
occupied by ablated objects. We also discover that the unit-object relation
determines whether and how pixel normalization causes object entanglement. Our
DID framework theoretically guarantees that the mechanisms we discover are
solid, explainable, and comprehensive. | [
"cs.CV",
"cs.LG"
] |
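The pixel normalization operation analyzed above is the standard PG-GAN layer that rescales each pixel's feature vector to approximately unit length across channels:

```python
# Pixel normalization as used in Progressive-growing GAN.
import torch

def pixel_norm(x, eps=1e-8):
    # x: (batch, channels, height, width)
    return x / torch.sqrt(torch.mean(x ** 2, dim=1, keepdim=True) + eps)

feat = torch.randn(2, 512, 4, 4)
print(pixel_norm(feat).pow(2).mean(dim=1)[0, 0, 0])  # ~1.0 per pixel
```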
Slowly changing variables in a continuous state space constitute an important
category of reinforcement learning problems and arise in many domains,
such as modeling a climate control system where temperature, humidity, etc.
change slowly over time. However, this subject is less addressed in recent
studies. Classical methods with certain variants, such as Dynamic Programming
with Tile Coding which discretizes the state space, fail to handle slowly
changing variables because those methods cannot capture the tiny changes in
each transition step, as it is computationally expensive or impossible to
establish an extremely granular grid system. In this paper, we introduce a
Hyperspace Neighbor Penetration (HNP) approach that solves the problem. HNP
captures in each transition step the state's partial "penetration" into its
neighboring hyper-tiles in the gridded hyperspace, and thus does not require the
transition to be inter-tile in order for the change to be captured. Therefore,
HNP allows for a very coarse grid system, which makes the computation feasible.
HNP assumes near linearity of the transition function in a local space, which
is commonly satisfied. In summary, HNP can be orders of magnitude more
efficient than classical methods in handling slowly changing variables in
reinforcement learning. We have made an industrial implementation of HNP with
great success. | [
"cs.LG"
] |
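An illustrative sketch of the "penetration" idea described above: rather than snapping a slowly changing state to a single coarse tile, the state's fractional membership in neighboring tiles is tracked via interpolation weights. The 1-D helper below is an assumed simplification; the paper's exact formulation may differ.

```python
# Sketch: fractional penetration of a state into neighboring coarse grid tiles.
import numpy as np

def penetration_weights_1d(x, grid):
    """Split unit mass between the two grid tiles that x lies between."""
    i = np.clip(np.searchsorted(grid, x) - 1, 0, len(grid) - 2)
    frac = (x - grid[i]) / (grid[i + 1] - grid[i])
    return {int(i): 1.0 - frac, int(i) + 1: frac}

temperature_grid = np.array([18.0, 20.0, 22.0, 24.0])  # very coarse tiles
print(penetration_weights_1d(20.3, temperature_grid))
# {1: 0.85, 2: 0.15} -- a tiny change in temperature still shifts the weights
```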
Multimodal relational data analysis has become of increasing importance in
recent years, for exploring across different domains of data, such as images
and their text tags obtained from social networking services (e.g., Flickr). A
variety of data analysis methods have been developed for visualization; to give
an example, t-Stochastic Neighbor Embedding (t-SNE) computes low-dimensional
feature vectors so that their similarities keep those of the observed data
vectors. However, t-SNE is designed only for a single domain of data but not
for multimodal data; this paper aims at visualizing multimodal relational data
consisting of data vectors in multiple domains with relations across these
vectors. By extending t-SNE, we herein propose Multimodal Relational Stochastic
Neighbor Embedding (MR-SNE), that (1) first computes augmented relations, where
we observe the relations across domains and compute those within each of
domains via the observed data vectors, and (2) jointly embeds the augmented
relations to a low-dimensional space. Through visualization of the Flickr and
Animals with Attributes 2 datasets, the proposed MR-SNE is compared with other
graph embedding-based approaches; MR-SNE demonstrates promising performance. | [
"cs.LG",
"cs.CL",
"cs.CV",
"cs.HC",
"stat.ML"
] |
We aim at predicting a complete and high-resolution depth map from
incomplete, sparse and noisy depth measurements. Existing methods handle this
problem either by exploiting various regularizations on the depth maps directly
or resorting to learning-based methods. When the corresponding color images are
available, the correlation between the depth maps and the color images is used
to improve the completion performance, assuming the color images are clean and
sharp. However, in real world dynamic scenes, color images are often blurry due
to the camera motion and the moving objects in the scene. In this paper, we
propose to tackle the problem of depth map completion by jointly exploiting the
blurry color image sequences and the sparse depth map measurements, and present
an energy minimization based formulation to simultaneously complete the depth
maps, estimate the scene flow and deblur the color images. Our experimental
evaluations on both outdoor and indoor scenarios demonstrate the
state-of-the-art performance of our approach. | [
"cs.CV"
] |
In this paper, we propose a Seed-Augment-Train/Transfer (SAT) framework that
contains a synthetic seed image dataset generation procedure for languages with
different numeral systems using freely available open font file datasets. This
seed dataset of images is then augmented to create a purely synthetic training
dataset, which is in turn used to train a deep neural network and to test on
held-out real-world handwritten digit datasets spanning five Indic scripts:
Kannada, Tamil, Gujarati, Malayalam, and Devanagari. We showcase the efficacy
of this approach both qualitatively, by training a Boundary-seeking GAN (BGAN)
that generates realistic digit images in the five languages, and also
quantitatively by testing a CNN trained on the synthetic data on the real-world
datasets. This establishes not only an interesting nexus between the
font-datasets-world and transfer learning but also provides a recipe for
universal-digit classification in any script. | [
"cs.CV",
"cs.CL"
] |
Flow based models such as Real NVP are an extremely powerful approach to
density estimation. However, existing flow based models are restricted to
transforming continuous densities over a continuous input space into similarly
continuous distributions over continuous latent variables. This makes them
poorly suited for modeling and representing discrete structures in data
distributions, for example class membership or discrete symmetries. To address
this difficulty, we present a normalizing flow architecture which relies on
domain partitioning using locally invertible functions, and possesses both real
and discrete valued latent variables. This Real and Discrete (RAD) approach
retains the desirable normalizing flow properties of exact sampling, exact
inference, and analytically computable probabilities, while at the same time
allowing simultaneous modeling of both continuous and discrete structure in a
data distribution. | [
"cs.LG",
"stat.ML"
] |
Is strong supervision necessary for learning a good visual representation? Do
we really need millions of semantically-labeled images to train a Convolutional
Neural Network (CNN)? In this paper, we present a simple yet surprisingly
powerful approach for unsupervised learning of CNNs. Specifically, we use
hundreds of thousands of unlabeled videos from the web to learn visual
representations. Our key idea is that visual tracking provides the supervision.
That is, two patches connected by a track should have similar visual
representation in deep feature space since they probably belong to the same
object or object part. We design a Siamese-triplet network with a ranking loss
function to train this CNN representation. Without using a single image from
ImageNet, just using 100K unlabeled videos and the VOC 2012 dataset, we train
an ensemble of unsupervised networks that achieves 52% mAP (no bounding box
regression). This performance comes tantalizingly close to its
ImageNet-supervised counterpart, an ensemble which achieves a mAP of 54.4%. We
also show that our unsupervised network can perform competitively in other
tasks such as surface-normal estimation. | [
"cs.CV"
] |
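The tracking-based supervision above reduces to a ranking constraint: two patches connected by a track should be closer in feature space than an unrelated patch. A minimal sketch of such a triplet ranking loss follows; the margin and the toy embedding network are illustrative choices, not the paper's architecture.

```python
# Sketch: triplet ranking loss for track-based visual representation learning.
import torch
import torch.nn as nn
import torch.nn.functional as F

embed = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 128))  # toy embedding

def tracking_triplet_loss(anchor, positive, negative, margin=0.5):
    fa, fp, fneg = (F.normalize(embed(x), dim=1) for x in (anchor, positive, negative))
    d_pos = (fa - fp).pow(2).sum(1)    # distance between tracked patches
    d_neg = (fa - fneg).pow(2).sum(1)  # distance to an unrelated patch
    return F.relu(d_pos - d_neg + margin).mean()

a, p, n = (torch.rand(4, 3, 64, 64) for _ in range(3))
print(tracking_triplet_loss(a, p, n))
```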
Building agents to interact with the web would allow for significant
improvements in knowledge understanding and representation learning. However,
web navigation tasks are difficult for current deep reinforcement learning (RL)
models due to the large discrete action space and the varying number of actions
between the states. In this work, we introduce DOM-Q-NET, a novel architecture
for RL-based web navigation to address both of these problems. It parametrizes
Q functions with separate networks for different action categories: clicking a
DOM element and typing a string input. Our model utilizes a graph neural
network to represent the tree-structured HTML of a standard web page. We
demonstrate the capabilities of our model on the MiniWoB environment where we
can match or outperform existing work without the use of expert demonstrations.
Furthermore, we show 2x improvements in sample efficiency when training in the
multi-task setting, allowing our model to transfer learned behaviours across
tasks. | [
"cs.LG",
"stat.ML"
] |
Size uniformity is one of the main criteria of superpixel methods. But size
uniformity rarely conforms to the varying content of an image. The chosen size
of the superpixels therefore represents a compromise - how to obtain the fewest
superpixels without losing too much important detail. We propose that a more
appropriate criterion for creating image segments is information uniformity. We
introduce a novel method for segmenting an image based on this criterion. Since
information is a natural way of measuring image complexity, our proposed
algorithm leads to image segments that are smaller and denser in areas of high
complexity and larger in homogeneous regions, thus simplifying the image while
preserving its details. Our algorithm is simple and requires just one input
parameter - a threshold on the information content. On segmentation comparison
benchmarks it proves to be superior to the state-of-the-art. In addition, our
method is computationally very efficient, approaching real-time performance,
and is easily extensible to three-dimensional image stacks and video volumes. | [
"cs.CV"
] |
Diagnosis of skin lesions is a very challenging task due to high
inter-class similarities and intra-class variations in terms of color, size,
site, and appearance among different skin lesions. With the emergence of
computer vision, especially deep learning algorithms, lesion diagnosis is made
possible using these algorithms trained on dermoscopic images. Usually, deep
classification networks are used for lesion diagnosis to determine
different types of skin lesions. In this work, we used a pixel-wise
classification network to provide lesion diagnosis rather than a classification
network. We propose to use DeeplabV3+ for multi-class lesion diagnosis in
dermoscopic images of Task 3 of ISIC Challenge 2018. We used various
post-processing methods with DeeplabV3+ to determine the lesion diagnosis in
this challenge and submitted the test results. | [
"cs.CV"
] |
The autoregressive language model (ALM) trained with maximum likelihood
estimation (MLE) is widely used in unconditional text generation. Due to
exposure bias, the generated texts still suffer from low quality and diversity.
This presents statistically as a discrepancy between the real text and
generated text. Some research shows that a discriminator can detect this
discrepancy. Because the discriminator can encode more information than the
generator, the discriminator has the potential to improve the generator. To
alleviate the exposure bias, generative adversarial networks (GANs) use the
discriminator to update the generator's parameters directly, but they fail when
evaluated precisely. A critical reason for the failure is the difference
between the discriminator input and the ALM input. We propose a novel mechanism
that adds a filter which has the same input as the discriminator. First, the
discriminator detects discrepancy signals and passes them to the filter directly
(or by learning). Then, we use the filter to reject some generated samples with
a sampling-based method. Thus, the original generative distribution is revised
to reduce the discrepancy. We experiment with two ALMs, RNN-based and
Transformer-based. Evaluated precisely by three metrics, our mechanism consistently
outperforms the ALMs and all kinds of GANs across two benchmark data sets. | [
"cs.CV",
"cs.CL"
] |
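A loose sketch of the filtering step described above: generated samples are accepted or rejected with a probability tied to a filter score computed on the same input the discriminator sees. `generate_sample` and `filter_score` are hypothetical stand-ins, and the acceptance rule below is only one possible sampling-based rejection scheme, not the paper's exact mechanism.

```python
# Sketch: rejection-style filtering of language-model samples by a filter score.
import random

def filtered_generation(generate_sample, filter_score, n_samples=10, threshold=0.5):
    accepted = []
    while len(accepted) < n_samples:
        text = generate_sample()
        # filter_score in [0, 1]: estimated probability the sample is "real-like".
        if random.random() < min(1.0, filter_score(text) / threshold):
            accepted.append(text)
    return accepted

# Toy usage with dummy components.
samples = filtered_generation(lambda: "generated text", lambda t: 0.8)
print(len(samples))
```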
For people with chronic pain, the assessment of protective behavior during
physical functioning is essential to understand their subjective pain-related
experiences (e.g., fear and anxiety toward pain and injury) and how they deal
with such experiences (avoidance or reliance on specific body joints), with the
ultimate goal of guiding intervention. Advances in deep learning (DL) can
enable the development of such interventions. Using the EmoPain MoCap dataset,
we investigate how attention-based DL architectures can be used to improve the
detection of protective behavior by capturing the most informative temporal and
body configurational cues characterizing specific movements and the strategies
used to perform them. We propose an end-to-end deep learning architecture named
BodyAttentionNet (BANet). BANet is designed to learn temporal and bodily parts
that are more informative to the detection of protective behavior. The approach
addresses the variety of ways people execute a movement (including healthy
people) independently of the type of movement analyzed. Through extensive
comparison experiments with other state-of-the-art machine learning techniques
used with motion capture data, we show statistically significant improvements
achieved by using these attention mechanisms. In addition, the BANet
architecture requires far fewer parameters than the state of the
art for comparable, if not higher, performance. | [
"cs.LG",
"stat.ML"
] |
Recent results in Reinforcement Learning (RL) have shown that agents with
limited training environments are susceptible to a large amount of overfitting
across many domains. A key challenge for RL generalization is to quantitatively
explain the effects of changing parameters on testing performance. Such
parameters include architecture, regularization, and RL-dependent variables
such as discount factor and action stochasticity. We provide empirical results
that show complex and interdependent relationships between hyperparameters and
generalization. We further show that several empirical metrics such as gradient
cosine similarity and trajectory-dependent metrics serve to provide intuition
towards these results. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
Gaussian processes (GPs) are important probabilistic tools for inference and
learning in spatio-temporal modelling problems such as those in climate science
and epidemiology. However, existing GP approximations do not simultaneously
support large numbers of off-the-grid spatial data-points and long time-series
which is a hallmark of many applications.
Pseudo-point approximations, one of the gold-standard methods for scaling GPs
to large data sets, are well suited for handling off-the-grid spatial data.
However, they cannot handle long temporal observation horizons, effectively
reverting to cubic computational scaling in the time dimension. State space GP
approximations are well suited to handling temporal data, if the temporal GP
prior admits a Markov form, leading to linear complexity in the number of
temporal observations, but have a cubic spatial cost and cannot handle
off-the-grid spatial data.
In this work we show that there is a simple and elegant way to combine
pseudo-point methods with the state space GP approximation framework to get the
best of both worlds. The approach hinges on a surprising conditional
independence property which applies to space--time separable GPs. We
demonstrate empirically that the combined approach is more scalable and
applicable to a greater range of spatio-temporal problems than either method on
its own. | [
"cs.LG",
"stat.ML"
] |
The Transformer has become the new standard method in natural language processing
(NLP), and it has also attracted research interest in computer vision. In this
paper we investigate the application of the Transformer in Image Quality (TRIQ)
assessment. Following the original Transformer encoder employed in the Vision
Transformer (ViT), we propose an architecture using a shallow Transformer
encoder on top of a feature map extracted by convolutional neural networks
(CNN). Adaptive positional embedding is employed in the Transformer encoder to
handle images with arbitrary resolutions. Different settings of Transformer
architectures have been investigated on publicly available image quality
databases. We have found that the proposed TRIQ architecture achieves
outstanding performance. The implementation of TRIQ is published on Github
(https://github.com/junyongyou/triq). | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
In constrained reinforcement learning (RL), a learning agent seeks to not
only optimize the overall reward but also satisfy the additional safety,
diversity, or budget constraints. Consequently, existing constrained RL
solutions require several new algorithmic ingredients that are notably
different from standard RL. On the other hand, reward-free RL was independently
developed in the unconstrained literature; it learns the transition dynamics
without using reward information and is thus naturally capable of addressing
RL with multiple objectives under common dynamics. This paper bridges
reward-free RL and constrained RL. Particularly, we propose a simple
meta-algorithm such that given any reward-free RL oracle, the approachability
and constrained RL problems can be directly solved with negligible overheads in
sample complexity. Utilizing the existing reward-free RL solvers, our framework
provides sharp sample complexity results for constrained RL in the tabular MDP
setting, matching the best existing results up to a factor of horizon
dependence; our framework directly extends to a setting of tabular two-player
Markov games, and gives a new result for constrained RL with linear function
approximation. | [
"cs.LG",
"cs.AI"
] |
To make the best use of the underlying minute and subtle differences,
fine-grained classifiers collect information about inter-class variations. The
task is very challenging due to the small differences between the colors,
viewpoint, and structure of entities in the same class. Classification becomes
even more difficult because viewpoint-induced differences within a class can
resemble the differences between classes. In this work, we investigate
the performance of the landmark general CNN classifiers, which presented
top-notch results on large scale classification datasets, on the fine-grained
datasets, and compare it against state-of-the-art fine-grained classifiers. In
this paper, we pose two specific questions: (i) Do the general CNN classifiers
achieve comparable results to fine-grained classifiers? (ii) Do general CNN
classifiers require any specific information to improve upon the fine-grained
ones? Throughout this work, we train the general CNN classifiers without
introducing any aspect that is specific to fine-grained datasets. We present an
extensive evaluation on six datasets to determine whether fine-grained
classifiers are able to improve upon the general CNN baselines. | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
Humans are capable of identifying a book only by looking at its cover, but
how can computers do the same? In this paper, we explore different feature
detectors and matching methods for book cover identification, and compare their
performances in terms of both speed and accuracy. This will allow, for example,
libraries to develop interactive services based on book cover pictures. Only a
single image of each book cover needs to be available through a database. Tests
have been performed by taking into account different transformations of each
book cover image. Encouraging results have been achieved. | [
"cs.CV",
"cs.MM",
"eess.IV"
] |
In this work, we propose a graph-adaptive pruning (GAP) method for efficient
inference of convolutional neural networks (CNNs). In this method, the network
is viewed as a computational graph, in which the vertices denote the
computation nodes and edges represent the information flow. Through topology
analysis, GAP is capable of adapting to different network structures,
especially the widely used cross connections and multi-path data flow in recent
novel convolutional models. The models can be adaptively pruned at vertex-level
as well as edge-level without any post-processing, thus GAP can directly get
practical model compression and inference speed-up. Moreover, it does not need
any customized computation library or hardware support. Finetuning is conducted
after pruning to restore the model performance. In the finetuning step, we
adopt a self-taught knowledge distillation (KD) strategy by utilizing
information from the original model, through which, the performance of the
optimized model can be sufficiently improved, without introduction of any other
teacher model. Experimental results show the proposed GAP can achieve promising
results in making inference more efficient; e.g., for ResNeXt-29 on CIFAR-10, it
achieves 13X model compression and 4.3X practical speed-up with marginal loss of
accuracy. | [
"cs.CV",
"cs.LG"
] |
Solving the Maximum a Posteriori on Markov Random Field, MRF-MAP, is a
prevailing method in recent interactive image segmentation tools. Although
mathematically explicit in its computational targets, and impressive for the
segmentation quality, MRF-MAP is hard to accomplish without interactive
information from users, so it is rarely adopted in a fully automatic setting.
In this paper, we present an automatic image segmentation algorithm,
NegCut, based on the approximation to MRF-MAP. First we prove MRF-MAP is
NP-hard when the probabilistic models are unknown, and then present an
approximation function in the form of minimum cuts on graphs with negative
weights. Finally, the binary segmentation is taken from the largest eigenvector
of the target matrix, with a tuned version of the Lanczos eigensolver. It is
shown competitive at the segmentation quality in our experiments. | [
"cs.CV"
] |
Model-based reinforcement learning (MBRL) approaches rely on discrete-time
state transition models whereas physical systems and the vast majority of
control tasks operate in continuous-time. To avoid time-discretization
approximation of the underlying process, we propose a continuous-time MBRL
framework based on a novel actor-critic method. Our approach also infers the
unknown state evolution differentials with Bayesian neural ordinary
differential equations (ODE) to account for epistemic uncertainty. We implement
and test our method on a new ODE-RL suite that explicitly solves
continuous-time control systems. Our experiments illustrate that the model is
robust against irregular and noisy data, is sample-efficient, and can solve
control problems which pose challenges to discrete-time MBRL methods. | [
"cs.LG",
"stat.ML"
] |
Most Visual Question Answering (VQA) models suffer from the language prior
problem, which is caused by inherent data biases. Specifically, VQA models tend
to answer questions (e.g., what color is the banana?) based on the
high-frequency answers (e.g., yellow) ignoring image contents. Existing
approaches tackle this problem by creating delicate models or introducing
additional visual annotations to reduce question dependency while strengthening
image dependency. However, they are still subject to the language prior problem
since the underlying data biases have not actually been alleviated. In this paper, we
introduce a self-supervised learning framework to solve this problem.
Concretely, we first automatically generate labeled data to balance the biased
data, and propose a self-supervised auxiliary task to utilize the balanced data
to assist the base VQA model to overcome language priors. Our method can
compensate for the data biases by generating balanced data without introducing
external annotations. Experimental results show that our method can
significantly outperform the state-of-the-art, improving the overall accuracy
from 49.50% to 57.59% on the most commonly used benchmark VQA-CP v2. In other
words, we can increase the performance of annotation-based methods by 16%
without using external annotations. | [
"cs.CV",
"cs.MM"
] |
The problem of labeled graph generation is gaining attention in the Deep
Learning community. The task is challenging due to the sparse and discrete
nature of graph spaces. Several approaches have been proposed in the
literature, most of which require to transform the graphs into sequences that
encode their structure and labels and to learn the distribution of such
sequences through an auto-regressive generative model. Among this family of
approaches, we focus on the GraphGen model. The preprocessing phase of GraphGen
transforms graphs into unique edge sequences called Depth-First Search (DFS)
codes, such that two isomorphic graphs are assigned the same DFS code. Each
element of a DFS code is associated with a graph edge: specifically, it is a
quintuple comprising one node identifier for each of the two endpoints, their
node labels, and the edge label. GraphGen learns to generate such sequences
auto-regressively and models the probability of each component of the quintuple
independently. While effective, the independence assumption made by the model
is too loose to capture the complex label dependencies of real-world graphs
precisely. By introducing a novel graph preprocessing approach, we are able to
process the labeling information of both nodes and edges jointly. The
corresponding model, which we term GraphGen-Redux, improves upon the generative
performances of GraphGen in a wide range of datasets of chemical and social
graphs. In addition, it uses approximately 78% fewer parameters than the
vanilla variant and requires 50% fewer epochs of training on average. | [
"cs.LG"
] |
We present a simple, effective, and general activation function we term ACON
which learns to activate the neurons or not. Interestingly, we find Swish, the
recent popular NAS-searched activation, can be interpreted as a smooth
approximation to ReLU. Intuitively, in the same way, we approximate the more
general Maxout family to our novel ACON family, which remarkably improves the
performance and makes Swish a special case of ACON. Next, we present meta-ACON,
which explicitly learns to optimize the parameter switching between non-linear
(activate) and linear (inactivate) and provides a new design space. By simply
changing the activation function, we show its effectiveness on both small
models and highly optimized large models (e.g. it improves the ImageNet top-1
accuracy rate by 6.7% and 1.8% on MobileNet-0.25 and ResNet-152, respectively).
Moreover, our novel ACON can be naturally transferred to object detection and
semantic segmentation, showing that ACON is an effective alternative in a
variety of tasks. Code is available at https://github.com/nmaac/acon. | [
"cs.CV"
] |
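The ACON family above generalizes Swish; its commonly cited ACON-C form is out = (p1 - p2) * x * sigmoid(beta * (p1 - p2) * x) + p2 * x, with learnable p1, p2 and a switching factor beta. A minimal PyTorch module follows; using per-channel parameters on NCHW feature maps is an assumption of this sketch.

```python
# Sketch: ACON-C activation with learnable per-channel parameters.
import torch
import torch.nn as nn

class AconC(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.p1 = nn.Parameter(torch.randn(1, channels, 1, 1))
        self.p2 = nn.Parameter(torch.randn(1, channels, 1, 1))
        self.beta = nn.Parameter(torch.ones(1, channels, 1, 1))

    def forward(self, x):
        dp = (self.p1 - self.p2) * x
        # beta -> 0 gives a linear (inactive) unit; large beta gives a Maxout-like switch.
        return dp * torch.sigmoid(self.beta * dp) + self.p2 * x

x = torch.randn(2, 16, 8, 8)
print(AconC(16)(x).shape)  # torch.Size([2, 16, 8, 8])
```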
Generative adversarial networks (GANs) have proven to be surprisingly
efficient for image editing by inverting and manipulating the latent code
corresponding to a natural image. This property emerges from the disentangled
nature of the latent space. In this paper, we identify two geometric
limitations of such latent space: (a) Euclidean distances differ from image
perceptual distance, and (b) disentanglement is not optimal and facial
attribute separation using a linear model is a limiting hypothesis. We thus
propose a new method to learn a proxy latent representation using normalizing
flows to remedy these limitations, and show that this leads to a more efficient
space for face image editing. | [
"cs.CV"
] |
Recent approaches have achieved great success in image generation from
structured inputs, e.g., semantic segmentation, scene graph or layout. Although
these methods allow specification of objects and their locations at
image-level, they lack the fidelity and semantic control to specify visual
appearance of these objects at an instance-level. To address this limitation,
we propose a new image generation method that enables instance-level attribute
control. Specifically, the input to our attribute-guided generative model is a
tuple that contains: (1) object bounding boxes, (2) object categories and (3)
an (optional) set of attributes for each object. The output is a generated
image where the requested objects are in the desired locations and have
prescribed attributes. Several losses work collaboratively to encourage
accurate, consistent and diverse image generation. Experiments on Visual Genome
dataset demonstrate our model's capacity to control object-level attributes in
generated images, and validate plausibility of disentangled object-attribute
representation in the image generation from layout task. Also, the generated
images from our model have higher resolution, object classification accuracy
and consistency, as compared to the previous state-of-the-art. | [
"cs.CV"
] |
The simplicity of gradient descent (GD) made it the default method for
training ever-deeper and complex neural networks. Both loss functions and
architectures are often explicitly tuned to be amenable to this basic local
optimization. In the context of weakly-supervised CNN segmentation, we
demonstrate a well-motivated loss function where an alternative optimizer (ADM)
achieves the state-of-the-art while GD performs poorly. Interestingly, GD
obtains its best result for a "smoother" tuning of the loss function. The
results are consistent across different network architectures. Our loss is
motivated by well-understood MRF/CRF regularization models in "shallow"
segmentation and their known global solvers. Our work suggests that network
design/training should pay more attention to optimization methods. | [
"cs.LG",
"stat.ML"
] |
In most real world scenarios, a policy trained by reinforcement learning in
one environment needs to be deployed in another, potentially quite different
environment. However, generalization across different environments is known to
be hard. A natural solution would be to keep training after deployment in the
new environment, but this cannot be done if the new environment offers no
reward signal. Our work explores the use of self-supervision to allow the
policy to continue training after deployment without using any rewards. While
previous methods explicitly anticipate changes in the new environment, we
assume no prior knowledge of those changes yet still obtain significant
improvements. Empirical evaluations are performed on diverse simulation
environments from DeepMind Control suite and ViZDoom, as well as real robotic
manipulation tasks in continuously changing environments, taking observations
from an uncalibrated camera. Our method improves generalization in 31 out of 36
environments across various tasks and outperforms domain randomization on a
majority of environments. | [
"cs.LG",
"cs.CV",
"cs.RO",
"stat.ML"
] |
Recent breakthroughs in the field of deep learning have led to advancements
in a broad spectrum of tasks in computer vision, audio processing, natural
language processing and other areas. In most instances where these tasks are
deployed in real-world scenarios, the models used in them have been shown to be
susceptible to adversarial attacks, making it imperative for us to address the
challenge of their adversarial robustness. Existing techniques for adversarial
robustness fall into three broad categories: defensive distillation techniques,
adversarial training techniques, and randomized or non-deterministic model
based techniques. In this paper, we propose a novel neural network paradigm
that falls under the category of randomized models for adversarial robustness,
but differs from all existing techniques under this category in that it models
each parameter of the network as a statistical distribution with learnable
parameters. We show experimentally that this framework is highly robust to a
variety of white-box and black-box adversarial attacks, while preserving the
task-specific performance of the traditional neural network model. | [
"cs.LG",
"cs.CR",
"stat.ML"
] |
Locating broken insulators in aerial images is a challenging task.
This paper, focusing on the self-blast glass insulator, proposes a deep
learning solution. We address the broken-insulator location problem as a low
signal-to-noise-ratio image localization framework with two modules: 1) object
detection based on Fast R-CNN, and 2) classification of pixels based on U-net.
A diverse set of aerial images of a power grid in China is used to validate the
proposed approach. Furthermore, a comparison is made among different methods,
and the results show that our approach is accurate and runs in real time. | [
"cs.CV"
] |
Video-based person re-identification has received increasing attention
recently, as it plays an important role within surveillance video analysis.
Video-based Re-ID is an expansion of earlier image-based re-identification
methods by learning features from a video via multiple image frames for each
person. Most contemporary video Re-ID methods utilise complex CNN-based network
architectures using 3D convolution or multi-branch networks to extract
spatial-temporal video features. By contrast, in this paper, we illustrate
superior performance from a simple single stream 2D convolution network
leveraging the ResNet50-IBN architecture to extract frame-level features
followed by temporal attention for clip level features. These clip level
features can be generalised to extract video level features by averaging
without any significant additional cost. Our approach uses best video Re-ID
practice and transfer learning between datasets to outperform existing
state-of-the-art approaches on the MARS, PRID2011 and iLIDS-VID datasets with
89.62%, 97.75%, 97.33% rank-1 accuracy respectively and with 84.61% mAP for
MARS, without reliance on complex and memory-intensive 3D convolutions or
multi-stream network architectures as found in other contemporary work.
Conversely, our work shows that global features extracted by the 2D convolution
network are a sufficient representation for robust state of the art video
Re-ID. | [
"cs.CV"
] |
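The clip-level features described above are obtained by attending over frame-level features from the 2D CNN. A minimal sketch of such temporal attention pooling follows; the feature dimension and the single-layer scoring function are illustrative choices.

```python
# Sketch: temporal attention pooling of per-frame features into a clip feature.
import torch
import torch.nn as nn

class TemporalAttention(nn.Module):
    def __init__(self, feat_dim=2048):
        super().__init__()
        self.score = nn.Linear(feat_dim, 1)  # scores each frame's feature

    def forward(self, frame_feats):                 # (batch, n_frames, feat_dim)
        weights = torch.softmax(self.score(frame_feats), dim=1)
        return (weights * frame_feats).sum(dim=1)   # (batch, feat_dim)

clip = torch.randn(4, 8, 2048)   # 4 clips of 8 frame-level ResNet features
print(TemporalAttention()(clip).shape)  # torch.Size([4, 2048])
```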
All previous methods for audio-driven talking head generation assume the
input audio to be clean with a neutral tone. As we show empirically, one can
easily break these systems by simply adding certain background noise to the
utterance or changing its emotional tone (e.g., to sad). To make talking head
generation robust to such variations, we propose an explicit audio
representation learning framework that disentangles audio sequences into
various factors such as phonetic content, emotional tone, background noise and
others. We conduct experiments to validate that conditioned on disentangled
content representation, the generated mouth movement by our model is
significantly more accurate than previous approaches (without disentangled
learning) in the presence of noise and emotional variations. We further
demonstrate that our framework is compatible with current state-of-the-art
approaches by replacing their original audio learning component with ours. To
our best knowledge, this is the first work which improves the performance of
talking head generation from disentangled audio representation perspective,
which is important for many real-world applications. | [
"cs.CV",
"cs.LG",
"eess.AS"
] |
In this work, we propose a novel approach for reinforcement learning driven
by evolutionary computation. Our algorithm, dubbed as Evolutionary-Driven
Reinforcement Learning (evo-RL), embeds the reinforcement learning algorithm in
an evolutionary cycle, where we distinctly differentiate between purely
evolvable (instinctive) behaviour versus purely learnable behaviour.
Furthermore, we propose that this distinction is decided by the evolutionary
process, thus allowing evo-RL to be adaptive to different environments. In
addition, evo-RL facilitates learning on environments with rewardless states,
which makes it more suited for real-world problems with incomplete information.
To show that evo-RL leads to state-of-the-art performance, we present the
performance of different state-of-the-art reinforcement learning algorithms
when operating within evo-RL and compare it with the case when these same
algorithms are executed independently. Results show that reinforcement learning
algorithms embedded within our evo-RL approach significantly outperform the
stand-alone versions of the same RL algorithms on OpenAI Gym control problems
with rewardless states constrained by the same computational budget. | [
"cs.LG",
"cs.AI",
"cs.NE",
"stat.ML"
] |
With the rapid development of facial manipulation techniques, face forgery
detection has received considerable attention in digital media forensics due to
security concerns. Most existing methods formulate face forgery detection as a
classification problem and utilize binary labels or manipulated region masks as
supervision. However, without considering the correlation between local
regions, these global supervisions are insufficient to learn a generalized
feature and prone to overfitting. To address this issue, we propose a novel
perspective of face forgery detection via local relation learning.
Specifically, we propose a Multi-scale Patch Similarity Module (MPSM), which
measures the similarity between features of local regions and forms a robust
and generalized similarity pattern. Moreover, we propose an RGB-Frequency
Attention Module (RFAM) to fuse information in both RGB and frequency domains
for more comprehensive local feature representation, which further improves the
reliability of the similarity pattern. Extensive experiments show that the
proposed method consistently outperforms the state of the art on widely-used
benchmarks. Furthermore, detailed visualization shows the robustness and
interpretability of our method. | [
"cs.CV"
] |
Deploying trained convolutional neural networks (CNNs) to mobile devices is a
challenging task because of the simultaneous requirements of the deployed model
to be fast, lightweight and accurate. Designing and training a CNN architecture
that does well on all three metrics is highly non-trivial and can be very
time-consuming if done by hand. One way to solve this problem is to compress
the trained CNN models before deploying to mobile devices. This work asks and
answers three questions on compressing CNN models automatically: a) How to
control the trade-off between speed, memory and accuracy during model
compression? b) In practice, a deployed model may not see all classes and/or
may not need to produce all class labels. Can this fact be used to improve the
trade-off? c) How to scale the compression algorithm to execute within a
reasonable amount of time for many deployments? The paper demonstrates that a
model compression algorithm utilizing reinforcement learning with architecture
search and knowledge distillation can answer these questions in the
affirmative. Experimental results are provided for current state-of-the-art CNN
model families for image feature extraction like VGG and ResNet with CIFAR
datasets. | [
"cs.LG",
"stat.ML"
] |
Image-to-Image (I2I) translation is a heated topic in academia, and it also
has been applied in real-world industry for tasks like image synthesis,
super-resolution, and colorization. However, traditional I2I translation
methods train data in two or more domains together. This requires lots of
computation resources. Moreover, the results are of lower quality, and they
contain many more artifacts. The training process could be unstable when the
data in different domains are not balanced, and modal collapse is more likely
to happen. We proposed a new I2I translation method that generates a new model
in the target domain via a series of model transformations on a pre-trained
StyleGAN2 model in the source domain. After that, we propose an inversion
method to achieve the conversion between an image and its latent vector. By
feeding the latent vector into the generated model, we can perform I2I
translation between the source domain and target domain. Both qualitative and
quantitative evaluations were conducted to prove that the proposed method can
achieve outstanding performance in terms of image quality, diversity and
semantic similarity to the input and reference images compared to
state-of-the-art works. | [
"cs.CV"
] |
In this work, we address the problem of few-shot multi-class object counting
with point-level annotations. The proposed technique leverages a class agnostic
attention mechanism that sequentially attends to objects in the image and
extracts their relevant features. This process is employed on an adapted
prototypical-based few-shot approach that uses the extracted features to
classify each one either as one of the classes present in the support set
images or as background. The proposed technique is trained on point-level
annotations and uses a novel loss function that disentangles class-dependent
and class-agnostic aspects of the model to help with the task of few-shot
object counting. We present our results on a variety of
object-counting/detection datasets, including FSOD and MS COCO. In addition, we
introduce a new dataset that is specifically designed for weakly supervised
multi-class object counting/detection and contains considerably different
classes and distribution of number of classes/instances per image compared to
the existing datasets. We demonstrate the robustness of our approach by testing
our system on a totally different distribution of classes from what it has been
trained on. | [
"cs.CV",
"cs.LG"
] |
The claims data, containing medical codes, services information, and incurred
expenditure, can be a good resource for estimating an individual's health
condition and medical risk level. In this study, we developed Transformer-based
Multimodal AutoEncoder (TMAE), an unsupervised learning framework that can
learn efficient patient representation by encoding meaningful information from
the claims data. TMAE is motivated by the practical needs in healthcare to
stratify patients into different risk levels for improving care delivery and
management. Compared to previous approaches, TMAE is able to 1) model
inpatient, outpatient, and medication claims collectively, 2) handle irregular
time intervals between medical events, 3) alleviate the sparsity issue of the
rare medical codes, and 4) incorporate medical expenditure information. We
trained TMAE using a real-world pediatric claims dataset containing more than
600,000 patients and compared its performance with various approaches in two
clustering tasks. Experimental results demonstrate that TMAE has superior
performance compared to all baselines. Multiple downstream applications are
also conducted to illustrate the effectiveness of our framework. The promising
results confirm that the TMAE framework is scalable to large claims data and is
able to generate efficient patient embeddings for risk stratification and
analysis. | [
"cs.LG",
"cs.AI"
] |
In digital photography, two image restoration tasks have been studied
extensively and resolved independently: demosaicing and super-resolution. Both
these tasks are related to resolution limitations of the camera. Performing
super-resolution on demosaiced images simply exacerbates the artifacts
introduced by demosaicing. In this paper, we show that such accumulation of
errors can be easily averted by jointly performing demosaicing and
super-resolution. To this end, we propose a deep residual network for learning
an end-to-end mapping between Bayer images and high-resolution images. By
training on high-quality samples, our deep residual demosaicing and
super-resolution network is able to recover high-quality super-resolved images
from low-resolution Bayer mosaics in a single step without producing the
artifacts common to such processing when the two operations are done
separately. We perform extensive experiments to show that our deep residual
network achieves demosaiced and super-resolved images that are superior to the
state-of-the-art both qualitatively and in terms of PSNR and SSIM metrics. | [
"cs.CV"
] |
This paper addresses the problem of how to exploit spatio-temporal
information available in videos to improve the object detection precision. We
propose a two stage object detector called FANet based on short-term
spatio-temporal feature aggregation to give a first detection set, and
long-term object linking to refine these detections. Firstly, we generate a set
of short tubelet proposals containing the object in $N$ consecutive frames.
Then, we aggregate RoI pooled deep features through the tubelet using a
temporal pooling operator that summarizes the information with a fixed size
output independent of the number of input frames. On top of that, we define a
double head implementation that we feed with spatio-temporal aggregated
information for spatio-temporal object classification, and with spatial
information extracted from the current frame for object localization and
spatial classification. Furthermore, we also specialize each head branch
architecture to better perform in each task taking into account the input data.
Finally, a long-term linking method builds long tubes using the previously
calculated short tubelets to overcome detection errors. We have evaluated our
model on the widely used ImageNet VID dataset, achieving an 80.9% mAP, which is
the new state-of-the-art result for single models. Also, in the challenging
small object detection dataset USC-GRAD-STDdb, our proposal outperforms the
single frame baseline by 5.4% mAP. | [
"cs.CV"
] |
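The temporal pooling operator mentioned in the abstract above can be illustrated with a minimal PyTorch sketch: RoI features from the N frames of a tubelet are reduced to a fixed-size summary regardless of N. The choice of element-wise max pooling here is an assumption made for illustration.

```python
# Minimal sketch of a temporal pooling operator over tubelet RoI features:
# the output size is fixed no matter how many frames N contribute.
import torch

def temporal_pool(roi_feats, mode="max"):
    """roi_feats: (N, C, H, W) RoI-pooled features from N consecutive frames.
    Returns a (C, H, W) summary independent of N."""
    if mode == "max":
        return roi_feats.max(dim=0).values
    return roi_feats.mean(dim=0)

# Tubelets of different lengths yield the same output shape.
print(temporal_pool(torch.rand(3, 256, 7, 7)).shape)   # torch.Size([256, 7, 7])
print(temporal_pool(torch.rand(9, 256, 7, 7)).shape)   # torch.Size([256, 7, 7])
```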
This paper analyzes the predictions of image captioning models with attention
mechanisms beyond visualizing the attention itself. We develop variants of
layer-wise relevance propagation (LRP) and gradient-based explanation methods,
tailored to image captioning models with attention mechanisms. We compare the
interpretability of attention heatmaps systematically against the explanations
provided by explanation methods such as LRP, Grad-CAM, and Guided Grad-CAM. We
show that explanation methods provide simultaneously pixel-wise image
explanations (supporting and opposing pixels of the input image) and linguistic
explanations (supporting and opposing words of the preceding sequence) for each
word in the predicted captions. We demonstrate with extensive experiments that
explanation methods 1) can reveal additional evidence used by the model to make
decisions compared to attention; 2) correlate to object locations with high
precision; 3) are helpful to "debug" the model, e.g. by analyzing the reasons
for hallucinated object words. With the observed properties of explanations, we
further design an LRP-inference fine-tuning strategy that reduces the issue of
object hallucination in image captioning models, and meanwhile, maintains the
sentence fluency. We conduct experiments with two widely used attention
mechanisms: the adaptive attention mechanism calculated with the additive
attention and the multi-head attention mechanism calculated with the scaled dot
product. | [
"cs.CV",
"cs.CL",
"cs.LG"
] |
Depth map fusion is an essential part in both stereo and RGB-D based 3-D
reconstruction pipelines. Whether produced with a passive stereo reconstruction
or using an active depth sensor, such as Microsoft Kinect, the depth maps have
noise and may have poor initial registration. In this paper, we introduce a
method which is capable of handling outliers, and especially, even significant
registration errors. The proposed method first fuses a sequence of depth maps
into a single non-redundant point cloud so that the redundant points are merged
together by giving more weight to more certain measurements. Then, the original
depth maps are re-registered to the fused point cloud to refine the original
camera extrinsic parameters. The fusion is then performed again with the
refined extrinsic parameters. This procedure is repeated until the result is
satisfactory or no significant changes occur between iterations. The method is
robust to outliers and erroneous depth measurements as well as even significant
depth map registration errors due to inaccurate initial camera poses. | [
"cs.CV"
] |
Recent generative adversarial networks (GANs) are able to generate impressive
photo-realistic images. However, controllable generation with GANs remains a
challenging research problem. Achieving controllable generation requires
semantically interpretable and disentangled factors of variation. It is
challenging to achieve this goal using simple fixed distributions such as
Gaussian distribution. Instead, we propose an unsupervised framework to learn a
distribution of latent codes that control the generator through self-training.
Self-training provides iterative feedback in the GAN training, from the
discriminator to the generator, and progressively improves the proposal of the
latent codes as training proceeds. The latent codes are sampled from a latent
variable model that is learned in the feature space of the discriminator. We
consider a normalized independent component analysis model and learn its
parameters through tensor factorization of the higher-order moments. Our
framework exhibits better disentanglement compared to other variants such as
the variational autoencoder, and is able to discover semantically meaningful
latent codes without any supervision. We demonstrate empirically on both cars
and faces datasets that each group of elements in the learned code controls a
mode of variation with a semantic meaning, e.g. pose or background change. We
also demonstrate with quantitative metrics that our method generates better
results compared to other approaches. | [
"cs.LG",
"cs.CV",
"stat.ML"
] |
This work presents an analysis of the discriminators used in Generative
Adversarial Networks (GANs) for Video. We show that unconstrained video
discriminator architectures induce a loss surface with high curvature which
makes optimisation difficult. We also show that this curvature becomes more
extreme as the maximal kernel dimension of video discriminators increases. With
these observations in hand, we propose a family of efficient Lower-Dimensional
Video Discriminators for GANs (LDVD GANs). The proposed family of
discriminators improves the performance of the video GAN models it is applied to
and demonstrates good performance on complex and diverse datasets such as
UCF-101. In particular, we show that they can double the performance of
Temporal-GANs and provide for state-of-the-art performance on a single GPU. | [
"cs.CV",
"cs.LG",
"eess.IV",
"stat.ML"
] |
In this work, we ask the following question: Can visual analogies, learned in
an unsupervised way, be used in order to transfer knowledge between pairs of
games and even play one game using an agent trained for another game? We
attempt to answer this research question by creating visual analogies between a
pair of games: a source game and a target game. For example, given a video
frame in the target game, we map it to an analogous state in the source game
and then attempt to play using a trained policy learned for the source game. We
demonstrate convincing visual mapping between four pairs of games (eight
mappings), which are used to evaluate three transfer learning approaches. | [
"cs.LG",
"stat.ML"
] |
Distinguishing between classes of time series sampled from dynamic systems is
a common challenge in systems and control engineering, for example in the
context of health monitoring, fault detection, and quality control. The
challenge is increased when no underlying model of a system is known,
measurement noise is present, and long signals need to be interpreted. In this
paper we address these issues with a new non-parametric classifier based on
topological signatures. Our model learns classes as weighted kernel density
estimates (KDEs) over persistent homology diagrams and predicts new trajectory
labels using Sinkhorn divergences on the space of diagram KDEs to quantify
proximity. We show that this approach accurately discriminates between states
of chaotic systems that are close in parameter space, and its performance is
robust to noise. | [
"cs.LG",
"nlin.CD",
"stat.ML"
] |
Safety and robustness are two desired properties for any reinforcement
learning algorithm. CMDPs can handle additional safety constraints and RMDPs
can perform well under model uncertainties. In this paper, we propose to unite
these two frameworks resulting in robust constrained MDPs (RCMDPs). The
motivation is to develop a framework that can satisfy safety constraints while
simultaneously offering robustness to model uncertainties. We develop the
RCMDP objective, derive the gradient update formula to optimize this objective, and
then propose policy gradient based algorithms. We also independently propose
Lyapunov based reward shaping for RCMDPs, yielding better stability and
convergence properties. | [
"cs.LG",
"cs.SY",
"eess.SY"
] |
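A hedged sketch of how a Lagrangian-style policy-gradient update for such a robust constrained objective might look is given below; the gradient estimates are placeholders, and the concrete update formula derived in the paper is not reproduced here.

```python
# Hedged numpy sketch of a Lagrangian-style update for a constrained objective
# of the RCMDP flavour: the policy ascends reward while a multiplier lambda
# grows whenever the (worst-case) constraint cost exceeds its budget d.
import numpy as np

def rcmdp_step(theta, lam, grad_reward, grad_cost, cost_value,
               d=1.0, lr_theta=0.01, lr_lam=0.05):
    theta = theta + lr_theta * (grad_reward - lam * grad_cost)   # primal ascent
    lam = max(0.0, lam + lr_lam * (cost_value - d))              # dual ascent
    return theta, lam

theta, lam = np.zeros(5), 0.0
for _ in range(3):
    g_r, g_c = np.random.randn(5), np.random.randn(5)            # toy gradients
    theta, lam = rcmdp_step(theta, lam, g_r, g_c, cost_value=1.3)
print(theta, lam)
```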
Recently, graph neural networks have attracted great attention and achieved
prominent performance in various research fields. Most of those algorithms have
assumed pairwise relationships of objects of interest. However, in many real
applications, the relationships between objects are higher-order, beyond a
pairwise formulation. To efficiently learn deep embeddings on the high-order
graph-structured data, we introduce two end-to-end trainable operators to the
family of graph neural networks, i.e., hypergraph convolution and hypergraph
attention. Whilst hypergraph convolution defines the basic formulation of
performing convolution on a hypergraph, hypergraph attention further enhances
the capacity of representation learning by leveraging an attention module. With
the two operators, a graph neural network is readily extended to a more
flexible model and applied to diverse applications where non-pairwise
relationships are observed. Extensive experimental results with semi-supervised
node classification demonstrate the effectiveness of hypergraph convolution and
hypergraph attention. | [
"cs.LG",
"cs.CV",
"stat.ML"
] |
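The hypergraph convolution operator described above can be sketched in a few lines of numpy using the standard incidence-matrix formulation; the attention variant would additionally learn the entries of the incidence matrix H. The normalisation used here, X' = sigma(Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2} X Theta), is the common convention and is assumed for illustration.

```python
# Minimal numpy sketch of one hypergraph convolution layer.
import numpy as np

def hypergraph_conv(X, H, Theta, w=None):
    """X: (n, d) node features, H: (n, m) incidence matrix,
    Theta: (d, d_out) weights, w: (m,) hyperedge weights."""
    n, m = H.shape
    w = np.ones(m) if w is None else w
    Dv = (H * w).sum(axis=1)                    # weighted vertex degrees
    De = H.sum(axis=0)                          # hyperedge degrees
    Dv_inv_sqrt = np.diag(1.0 / np.sqrt(Dv))
    De_inv = np.diag(1.0 / De)
    A = Dv_inv_sqrt @ H @ np.diag(w) @ De_inv @ H.T @ Dv_inv_sqrt
    return np.maximum(A @ X @ Theta, 0.0)       # ReLU non-linearity

# Toy example: 4 nodes, 2 hyperedges, 3-d features -> 2-d output.
H = np.array([[1, 0], [1, 1], [0, 1], [1, 1]], dtype=float)
X = np.random.rand(4, 3)
print(hypergraph_conv(X, H, np.random.rand(3, 2)).shape)   # (4, 2)
```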
Uncertainty quantification (UQ) plays a pivotal role in reduction of
uncertainties during both optimization and decision making processes. It can be
applied to solve a variety of real-world applications in science and
engineering. Bayesian approximation and ensemble learning techniques are the
two most widely used UQ methods in the literature. In this regard, researchers have
proposed different UQ methods and examined their performance in a variety of
applications such as computer vision (e.g., self-driving cars and object
detection), image processing (e.g., image restoration), medical image analysis
(e.g., medical image classification and segmentation), natural language
processing (e.g., text classification, social media texts and recidivism
risk-scoring), bioinformatics, etc. This study reviews recent advances in UQ
methods used in deep learning. Moreover, we also investigate the application of
these methods in reinforcement learning (RL). Then, we outline a few important
applications of UQ methods. Finally, we briefly highlight the fundamental
research challenges faced by UQ methods and discuss the future research
directions in this field. | [
"cs.LG",
"cs.AI",
"cs.CV"
] |
We define a neural network as a septuple consisting of (1) a state vector,
(2) an input projection, (3) an output projection, (4) a weight matrix, (5) a
bias vector, (6) an activation map and (7) a loss function. We argue that the
loss function can be imposed either on the boundary (i.e. input and/or output
neurons) or in the bulk (i.e. hidden neurons) for both supervised and
unsupervised systems. We apply the principle of maximum entropy to derive a
canonical ensemble of the state vectors subject to a constraint imposed on the
bulk loss function by a Lagrange multiplier (or an inverse temperature
parameter). We show that in an equilibrium the canonical partition function
must be a product of two factors: a function of the temperature and a function
of the bias vector and weight matrix. Consequently, the total Shannon entropy
consists of two terms which represent respectively a thermodynamic entropy and
a complexity of the neural network. We derive the first and second laws of
learning: during learning the total entropy must decrease until the system
reaches an equilibrium (i.e. the second law), and the increment in the loss
function must be proportional to the increment in the thermodynamic entropy
plus the increment in the complexity (i.e. the first law). We calculate the
entropy destruction to show that the efficiency of learning is given by the
Laplacian of the total free energy which is to be maximized in an optimal
neural architecture, and explain why the optimization condition is better
satisfied in a deep network with a large number of hidden layers. The key
properties of the model are verified numerically by training a supervised
feedforward neural network using the method of stochastic gradient descent. We
also discuss a possibility that the entire universe on its most fundamental
level is a neural network. | [
"cs.LG",
"cond-mat.dis-nn",
"hep-th",
"quant-ph"
] |
In recent years, deep learning based methods have achieved promising
performance in standard object detection. However, these methods lack
sufficient capabilities to handle underwater object detection due to these
challenges: (1) Objects in real applications are usually small and their images
are blurry, and (2) images in the underwater datasets and real applications
are accompanied by heterogeneous noise. To address these two problems, we first propose
a novel neural network architecture, namely Sample-WeIghted hyPEr Network
(SWIPENet), for small object detection. SWIPENet consists of high resolution
and semantic rich Hyper Feature Maps which can significantly improve small
object detection accuracy. In addition, we propose a novel sample-weighted loss
function that models sample weights for SWIPENet, together with a novel sample
re-weighting algorithm, namely Invert Multi-Class Adaboost (IMA), to reduce the
influence of noise on the proposed SWIPENet. Experiments on two underwater
robot picking contest datasets URPC2017 and URPC2018 show that the proposed
SWIPENet+IMA framework achieves better performance in detection accuracy
against several state-of-the-art object detection approaches. | [
"cs.CV",
"cs.LG"
] |
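The "inverted" re-weighting idea behind IMA can be sketched as follows: whereas standard multi-class AdaBoost increases the weights of misclassified samples, an inverted scheme decreases them, treating persistent errors as likely noise so they influence the detector's loss less. The exact update used by IMA is not reproduced here; this sketch adapts the SAMME weight formula under that assumption.

```python
# Hedged numpy sketch of an inverted AdaBoost-style sample re-weighting step.
import numpy as np

def inverted_adaboost_update(weights, misclassified, n_classes=2):
    err = np.sum(weights[misclassified]) / np.sum(weights)
    err = np.clip(err, 1e-8, 1 - 1e-8)
    alpha = np.log((1 - err) / err) + np.log(n_classes - 1)     # SAMME weight
    w = weights.copy()
    w[misclassified] *= np.exp(-alpha)       # inverted: DOWN-weight the errors
    return w / w.sum()

w = np.full(6, 1 / 6)
print(inverted_adaboost_update(w, misclassified=np.array([1, 4])))
```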
Active contours Model (ACM) has been extensively used in computer vision and
image processing. In recent studies, Convolutional Neural Networks (CNNs) have
been combined with active contours replacing the user in the process of contour
evolution and image segmentation to eliminate limitations associated with ACM's
dependence on parameters of the energy functional and initialization. However,
prior works did not aim for automatic initialization, which is addressed here.
In addition to manual initialization, current methods are highly sensitive to
initial location and fail to delineate borders accurately. We propose a fully
automatic image segmentation method to address problems of manual
initialization, insufficient capture range, and poor convergence to boundaries,
in addition to the problem of assignment of energy functional parameters. We
train two CNNs, which predict active contour weighting parameters and generate
a ground truth mask to extract Distance Transform (DT) and an initialization
circle. Distance transform is used to form a vector field pointing from each
pixel of the image towards the closest point on the boundary, the size of which
is equal to the Euclidean distance map. We evaluate our method on four publicly
available datasets including two building instance segmentation datasets,
Vaihingen and Bing huts, and two mammography image datasets, INBreast and
DDSM-BCRP. Our approach outperforms the latest research by 0.59 and 2.39 percent in
mean Intersection-over-Union (mIoU), 7.38 and 8.62 percent in Boundary F-score
(BoundF) for Vaihingen and Bing huts datasets, respectively. Dice similarity
coefficient for the INBreast and DDSM-BCRP datasets is 94.23% and 90.89%,
respectively indicating our method is comparable to state-of-the-art
frameworks. | [
"cs.CV"
] |
Differentiable forest is an ensemble of decision trees with full
differentiability. Its simple tree structure is easy to use and explain. With
full differentiability, it would be trained in the end-to-end learning
framework with a gradient-based optimization method. In this paper, we propose
a tree attention block (TAB) in the framework of the differentiable forest. The
TAB has two operations, squeeze and regulate. The squeeze operation extracts
the characteristic of each tree, and the regulate operation learns nonlinear
relations between these trees. The TAB thus learns the importance of each tree
and adjusts its weight to improve accuracy. Our experiments on large tabular
datasets show that the attention-augmented differentiable forest achieves
accuracy comparable to gradient boosted decision trees (GBDT), the
state-of-the-art algorithm for tabular datasets. On some datasets, our model
achieves higher accuracy than the best GBDT libraries (LightGBM, Catboost, and
XGBoost). The differentiable forest model supports batch training, and the
batch size is much smaller than the training set, so on larger datasets its
memory usage is much lower than that of a GBDT model. The source codes are available at
https://github.com/closest-git/QuantumForest. | [
"cs.LG",
"stat.ML"
] |
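A hedged PyTorch sketch of a tree attention block in the spirit of the squeeze/regulate description above is shown next; the exact squeeze statistic and the two-layer regulation network are illustrative assumptions.

```python
# Hypothetical tree attention block: squeeze one scalar per tree, regulate the
# inter-tree relations, and re-weight each tree's contribution to the forest.
import torch
import torch.nn as nn

class TreeAttentionBlock(nn.Module):
    def __init__(self, n_trees, reduction=4):
        super().__init__()
        self.regulate = nn.Sequential(
            nn.Linear(n_trees, n_trees // reduction), nn.ReLU(inplace=True),
            nn.Linear(n_trees // reduction, n_trees), nn.Sigmoid())
    def forward(self, tree_out):
        # tree_out: (batch, n_trees) per-tree predictions of the forest
        s = tree_out                      # squeeze: one scalar per tree
        w = self.regulate(s)              # regulate: learn inter-tree relations
        weighted = tree_out * w           # re-weight each tree's contribution
        return weighted.mean(dim=1)       # forest prediction

tab = TreeAttentionBlock(n_trees=16)
print(tab(torch.rand(8, 16)).shape)       # torch.Size([8])
```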
\emph{Objective and Impact Statement}. With the renaissance of deep learning,
automatic diagnostic systems for computed tomography (CT) have achieved many
successful applications. However, they are mostly attributed to careful expert
annotations, which are often scarce in practice. This drives our interest in
unsupervised representation learning. \emph{Introduction}. Recent studies
have shown that self-supervised learning is an effective approach for learning
representations, but most of them rely on the empirical design of
transformations and pretext tasks. \emph{Methods}. To avoid the subjectivity
associated with these methods, we propose the MVCNet, a novel unsupervised
three dimensional (3D) representation learning method working in a
transformation-free manner. We view each 3D lesion from different orientations
to collect multiple two dimensional (2D) views. Then, an embedding function is
learned by minimizing a contrastive loss so that the 2D views of the same 3D
lesion are aggregated, and the 2D views of different lesions are separated. We
evaluate the representations by training a simple classification head upon the
embedding layer. \emph{Results}. Experimental results show that MVCNet achieves
state-of-the-art accuracies on the LIDC-IDRI (89.55\%), LNDb (77.69\%) and
TianChi (79.96\%) datasets for \emph{unsupervised representation learning}.
When fine-tuned on 10\% of the labeled data, the accuracies are comparable to
the supervised learning model (89.46\% vs. 85.03\%, 73.85\% vs. 73.44\%,
83.56\% vs. 83.34\% on the three datasets, respectively). \emph{Conclusion}.
Results indicate the superiority of MVCNet in \emph{learning representations
with limited annotations}. | [
"cs.CV",
"cs.AI",
"cs.LG"
] |
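The contrastive objective described above, where embeddings of 2D views of the same 3D lesion are pulled together and views of different lesions pushed apart, can be sketched as an NT-Xent-style loss with two views per lesion; the specific loss form is assumed here for illustration.

```python
# Hedged PyTorch sketch of a multi-view contrastive loss over lesion views.
import torch
import torch.nn.functional as F

def multiview_contrastive_loss(z1, z2, temperature=0.1):
    """z1, z2: (B, d) embeddings of two views of the same B lesions."""
    B = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)         # (2B, d)
    sim = z @ z.t() / temperature                               # cosine similarities
    mask = torch.eye(2 * B, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float("-inf"))                  # ignore self-pairs
    # the positive for sample i is its other view, at index (i + B) mod 2B
    targets = torch.arange(2 * B, device=z.device).roll(B)
    return F.cross_entropy(sim, targets)

print(multiview_contrastive_loss(torch.randn(4, 128), torch.randn(4, 128)))
```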
Bio-image analysis is challenging due to inhomogeneous intensity
distributions and high levels of noise in the images. Bayesian inference
provides a principled way for regularizing the problem using prior knowledge. A
fundamental choice is how one measures "distances" between shapes in an image.
It has been shown that the straightforward geometric L2 distance is degenerate
and leads to pathological situations. This is avoided when using Sobolev
gradients, rendering the segmentation problem less ill-posed. The high
computational cost and implementation overhead of Sobolev gradients, however,
have hampered practical applications. We show how particle methods as applied
to image segmentation allow for a simple and computationally efficient
implementation of Sobolev gradients. We show that the evaluation of Sobolev
gradients amounts to particle-particle interactions along the contour in an
image. We extend an existing particle-based segmentation algorithm to using
Sobolev gradients. Using synthetic and real-world images, we benchmark the
results for both 2D and 3D images using piecewise smooth and piecewise constant
region models. The present particle approximation of Sobolev gradients is 2.8
to 10 times faster than the previous reference implementation, but retains the
known favorable properties of Sobolev gradients. This speedup is achieved by
using local particle-particle interactions instead of solving a global Poisson
equation at each iteration. The computational time per iteration is higher for
Sobolev gradients than for L2 gradients. Since Sobolev gradients precondition
the optimization problem, however, a smaller number of overall iterations may
be necessary for the algorithm to converge, which can in some cases amortize
the higher per-iteration cost. | [
"cs.CV",
"cs.CE",
"cs.NA",
"q-bio.QM"
] |
3D point clouds of natural environments relevant to problems in geomorphology
often require classification of the data into elementary relevant classes. A
typical example is the separation of riparian vegetation from ground in fluvial
environments, the distinction between fresh surfaces and rockfall in cliff
environments, or more generally the classification of surfaces according to
their morphology. Natural surfaces are heterogeneous and their distinctive
properties are seldom defined at a unique scale, prompting the use of
multi-scale criteria to achieve a high degree of classification success. We
have thus defined a multi-scale measure of the point cloud dimensionality
around each point, which characterizes the local 3D organization. We can thus
monitor how the local cloud geometry behaves across scales. We present the
technique and illustrate its efficiency in separating riparian vegetation from
ground and classifying a mountain stream as vegetation, rock, gravel or water
surface. In these two cases, separating the vegetation from ground or other
classes achieves an accuracy greater than 98%. Comparison with a single-scale
approach shows the superiority of the multi-scale analysis in enhancing class
separability and spatial resolution. The technique is robust to missing data,
shadow zones and changes in point density within the scene. The classification
is fast and accurate and can account for some degree of intra-class
morphological variability such as different vegetation types. A probabilistic
confidence in the classification result is given at each point, allowing the
user to remove the points for which the classification is uncertain. The
process can be either fully automated or fully customized by the user,
including a graphical definition of the classifiers. Although developed for
fully 3D data, the method can be readily applied to 2.5D airborne lidar data. | [
"cs.CV",
"physics.geo-ph"
] |
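The multi-scale dimensionality measure described above can be sketched with local PCA: at each scale, the eigenvalues of the neighbourhood covariance around a point are turned into 1D/2D/3D proportions, and the feature vector concatenates these across scales. The normalisation used here is one common convention, assumed for illustration.

```python
# Minimal numpy/scipy sketch of multi-scale dimensionality features.
import numpy as np
from scipy.spatial import cKDTree

def dimensionality_features(points, scales=(0.1, 0.5, 1.0)):
    tree = cKDTree(points)
    feats = np.zeros((len(points), 3 * len(scales)))
    for s, r in enumerate(scales):
        for i, p in enumerate(points):
            nbrs = points[tree.query_ball_point(p, r)]
            if len(nbrs) < 3:
                continue
            cov = np.cov((nbrs - nbrs.mean(axis=0)).T)
            lam = np.sort(np.linalg.eigvalsh(cov))[::-1]        # l1 >= l2 >= l3
            sq = np.sqrt(np.maximum(lam, 0))
            a1d = (sq[0] - sq[1]) / sq[0]                        # linear
            a2d = (sq[1] - sq[2]) / sq[0]                        # planar
            a3d = sq[2] / sq[0]                                  # volumetric
            feats[i, 3 * s: 3 * s + 3] = (a1d, a2d, a3d)
    return feats

pts = np.random.rand(200, 3)
print(dimensionality_features(pts).shape)                        # (200, 9)
```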
Autonomous systems possess the features of inferring their own ego-motion,
autonomously understanding their surroundings, and planning trajectories. With
the applications of deep learning and reinforcement learning, the perception
and decision-making abilities of autonomous systems are being efficiently
addressed, and many new learning-based algorithms have surfaced with respect to
autonomous perception and decision-making. In this review, we focus on the
applications of learning-based approaches in perception and decision-making in
autonomous systems, which is different from previous reviews that discussed
traditional methods. First, we delineate the existing classical simultaneous
localization and mapping (SLAM) solutions and review the environmental
perception and understanding methods based on deep learning, including deep
learning-based monocular depth estimation, ego-motion prediction, image
enhancement, object detection, semantic segmentation, and their combinations
with traditional SLAM frameworks. Second, we briefly summarize the existing
motion planning techniques, such as path planning and trajectory planning
methods, and discuss the navigation methods based on reinforcement learning.
Finally, we examine the several challenges and promising directions discussed
and concluded in related research for future works in the era of computer
science, automatic control, and robotics. | [
"cs.CV"
] |
Dubbing is a technique for translating video content from one language to
another. However, state-of-the-art visual dubbing techniques directly copy
facial expressions from source to target actors without considering
identity-specific idiosyncrasies such as a unique type of smile. We present a
style-preserving visual dubbing approach from single video inputs, which
maintains the signature style of target actors when modifying facial
expressions, including mouth motions, to match foreign languages. At the heart
of our approach is the concept of motion style, in particular for facial
expressions, i.e., the person-specific expression change that is yet another
essential factor beyond visual accuracy in face editing applications. Our
method is based on a recurrent generative adversarial network that captures the
spatiotemporal co-activation of facial expressions, and enables generating and
modifying the facial expressions of the target actor while preserving their
style. We train our model with unsynchronized source and target videos in an
unsupervised manner using cycle-consistency and mouth expression losses, and
synthesize photorealistic video frames using a layered neural face renderer.
Our approach generates temporally coherent results, and handles dynamic
backgrounds. Our results show that our dubbing approach maintains the
idiosyncratic style of the target actor better than previous approaches, even
for widely differing source and target actors. | [
"cs.CV",
"cs.GR",
"cs.LG"
] |
Offside detection in soccer has emerged as one of the most important
decisions with an average of 50 offside decisions every game. False detections
and rash calls adversely affect game conditions and in many cases drastically
change the outcome of the game. The human eye has finite precision and can only
discern a limited amount of detail in a given instance. Current offside
decisions are made manually by sideline referees and tend to remain
controversial in many games. This calls for automated offside detection
techniques in order to assist accurate refereeing. In this work, we have
explicitly used computer vision and image processing techniques like Hough
transform, color similarity (quantization), graph connected components, and
vanishing point ideas to identify the probable offside regions.
Keywords: Hough transform, connected components, KLT tracking, color
similarity. | [
"cs.CV"
] |
Semantic segmentation of satellite imagery is a common approach to identify
patterns and detect changes around the planet. Most of the state-of-the-art
semantic segmentation models are trained in a fully supervised way using
Convolutional Neural Network (CNN). The generalization property of CNN is poor
for satellite imagery because the data can be very diverse in terms of
landscape types, image resolutions, and scarcity of labels for different
geographies and seasons. Hence, the performance of the CNN does not translate well
to images from unseen regions or seasons. Inspired by Conditional Generative
Adversarial Networks (CGAN) based approach of image-to-image translation for
high-resolution satellite imagery, we propose a CGAN framework for land cover
classification using medium-resolution Sentinel-2 imagery. We find that the
CGAN model outperforms the CNN model of similar complexity by a significant
margin on an unseen imbalanced test dataset. | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
Many recent datasets contain a variety of different data modalities, for
instance, image, question, and answer data in visual question answering (VQA).
When training deep net classifiers on those multi-modal datasets, the
modalities get exploited at different scales, i.e., some modalities can more
easily contribute to the classification results than others. This is suboptimal
because the classifier is inherently biased towards a subset of the modalities.
To alleviate this shortcoming, we propose a novel regularization term based on
the functional entropy. Intuitively, this term encourages balancing the
contribution of each modality to the classification result. However,
regularization with the functional entropy is challenging. To address this, we
develop a method based on the log-Sobolev inequality, which bounds the
functional entropy with the functional-Fisher-information. Intuitively, this
maximizes the amount of information that the modalities contribute. On the two
challenging multi-modal datasets VQA-CPv2 and SocialIQ, we obtain
state-of-the-art results while more uniformly exploiting the modalities. In
addition, we demonstrate the efficacy of our method on Colored MNIST. | [
"cs.CV",
"cs.LG"
] |
Many semantic events in team sport activities, e.g. basketball, often involve
both group activities and the outcome (score or not). Motion patterns can be an
effective means to identify different activities. Global and local motions have
their respective emphasis on different activities, which are difficult to
capture from the optical flow due to the mixture of global and local motions.
Hence it calls for a more effective way to separate the global and local
motions. When it comes to the specific case for basketball game analysis, the
successful score for each round can be reliably detected by the appearance
variation around the basket. Based on the observations, we propose a scheme to
fuse global and local motion patterns (MPs) and key visual information (KVI)
for semantic event recognition in basketball videos. Firstly, an algorithm is
proposed to estimate the global motions from the mixed motions based on the
intrinsic property of camera adjustments. And the local motions could be
obtained from the mixed and global motions. Secondly, a two-stream 3D CNN
framework is utilized for group activity recognition over the separated global
and local motion patterns. Thirdly, the basket is detected and its appearance
features are extracted through a CNN structure. The features are utilized to
predict the success or failure. Finally, the group activity recognition and
success/failure prediction results are integrated using the Kronecker product
for event recognition. Experiments on NCAA dataset demonstrate that the
proposed method obtains state-of-the-art performance. | [
"cs.CV"
] |
Reservoir Computing is a class of simple yet efficient Recurrent Neural
Networks where internal weights are fixed at random and only a linear output
layer is trained. In the large size limit, such random neural networks have a
deep connection with kernel methods. Our contributions are threefold: a) We
rigorously establish the recurrent kernel limit of Reservoir Computing and
prove its convergence. b) We test our models on chaotic time series prediction,
a classic but challenging benchmark in Reservoir Computing, and show how the
Recurrent Kernel is competitive and computationally efficient when the number
of data points remains moderate. c) When the number of samples is too large, we
leverage the success of structured Random Features for kernel approximation by
introducing Structured Reservoir Computing. The two proposed methods, Recurrent
Kernel and Structured Reservoir Computing, turn out to be much faster and more
memory-efficient than conventional Reservoir Computing. | [
"stat.ML",
"cs.LG",
"eess.SP"
] |
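For reference, conventional Reservoir Computing (an echo state network) can be sketched in a few lines: internal weights are fixed at random and only the linear readout is trained by ridge regression. The sizes and the spectral-radius rescaling below are illustrative assumptions.

```python
# Minimal numpy sketch of an echo state network with a ridge-regression readout.
import numpy as np

rng = np.random.default_rng(0)
n_res, n_in = 200, 1
W_in = rng.normal(size=(n_res, n_in))
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # set spectral radius < 1

def run_reservoir(u):
    x, states = np.zeros(n_res), []
    for u_t in u:
        x = np.tanh(W @ x + W_in @ np.atleast_1d(u_t))
        states.append(x.copy())
    return np.array(states)

# One-step-ahead prediction of a toy series, training only the linear readout.
series = np.sin(0.1 * np.arange(1000))
S = run_reservoir(series[:-1])
y = series[1:]
ridge = 1e-6
W_out = np.linalg.solve(S.T @ S + ridge * np.eye(n_res), S.T @ y)
print("train MSE:", np.mean((S @ W_out - y) ** 2))
```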
Self-similarity learning has been recognized as a promising method for single
image super-resolution (SR) to produce high-resolution (HR) image in recent
years. The performance of learning based SR reconstruction, however, highly
depends on the learned representation coefficients. Due to the degradation of the
input image, conventional sparse coding is prone to produce unfaithful
representation coefficients. To this end, we propose a novel kernel-based
low-rank sparse model with self-similarity learning for single image SR which
incorporates a nonlocal-similarity prior to enforce that similar patches have similar
representation weights. We perform a gradual magnification scheme, using
self-examples extracted from the degraded input image and up-scaled versions.
To exploit nonlocal-similarity, we concatenate the vectorized input patch and
its nonlocal neighbors at different locations into a data matrix which consists
of similar components. Then we map the nonlocal data matrix into a
high-dimensional feature space by kernel method to capture their nonlinear
structures. Under the assumption that the sparse coefficients for the nonlocal
data in the kernel space should be low-rank, we impose low-rank constraint on
sparse coding to share similarities among representation coefficients and remove
outliers so that stable weights for SR reconstruction can be obtained.
Experimental results demonstrate the advantage of our proposed method in both
visual quality and reconstruction error. | [
"cs.CV"
] |
We propose a novel model named Multi-Channel Attention Selection Generative
Adversarial Network (SelectionGAN) for guided image-to-image translation, where
we translate an input image into another while respecting an external semantic
guidance. The proposed SelectionGAN explicitly utilizes the semantic guidance
information and consists of two stages. In the first stage, the input image and
the conditional semantic guidance are fed into a cycled semantic-guided
generation network to produce initial coarse results. In the second stage, we
refine the initial results by using the proposed multi-scale spatial pooling \&
channel selection module and the multi-channel attention selection module.
Moreover, uncertainty maps automatically learned from attention maps are used
to guide the pixel loss for better network optimization. Exhaustive experiments
on four challenging guided image-to-image translation tasks (face, hand, body
and street view) demonstrate that our SelectionGAN is able to generate
significantly better results than the state-of-the-art methods. Meanwhile, the
proposed framework and modules are unified solutions and can be applied to
solve other generation tasks, such as semantic image synthesis. The code is
available at https://github.com/Ha0Tang/SelectionGAN. | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
This paper presents a general framework to build fast and accurate algorithms
for video enhancement tasks such as super-resolution, deblurring, and
denoising. Essential to our framework is the realization that the accuracy,
rather than the density, of pixel flows is what is required for high-quality
video enhancement. Most prior works take the opposite approach: they estimate
dense (per-pixel), but generally less robust, flows, mostly using
computationally costly algorithms. Instead, we propose a lightweight flow
estimation algorithm; it fuses the sparse point cloud data and (even sparser
and less reliable) IMU data available in modern autonomous agents to estimate
the flow information. Building on top of the flow estimation, we demonstrate a
general framework that integrates the flows in a plug-and-play fashion with
different task-specific layers. Algorithms built in our framework achieve 1.78x
- 187.41x speedup while providing a 0.42 dB - 6.70 dB quality improvement over
competing methods. | [
"cs.CV",
"cs.RO"
] |
Since their introduction in the shape analysis community, functional maps
have met with considerable success due to their ability to compactly represent
dense correspondences between deformable shapes, with applications ranging from
shape matching and image segmentation, to exploration of large shape
collections. Despite the numerous advantages of such representation, however,
the problem of converting a given functional map back to a point-to-point map
has received surprisingly limited interest. In this paper we analyze the
general problem of point-wise map recovery from arbitrary functional maps. In
doing so, we rule out many of the assumptions required by the currently
established approach -- most notably, the limiting requirement of the input
shapes being nearly-isometric. We devise an efficient recovery process based on
a simple probabilistic model. Experiments confirm that this approach achieves
remarkable accuracy improvements in very challenging cases. | [
"cs.CV",
"cs.CG"
] |
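For context, the standard baseline that such work improves upon recovers a point-to-point map from a functional map C by nearest-neighbour search between spectral embeddings; the basis orientation convention below is an assumption made for illustration.

```python
# Hedged numpy sketch of baseline point-to-point recovery from a functional map.
import numpy as np
from scipy.spatial import cKDTree

def fmap_to_p2p(C, Phi_src, Phi_tgt):
    """C: (k, k) functional map, Phi_src: (n_src, k), Phi_tgt: (n_tgt, k)
    Laplace-Beltrami eigenbases. Returns, for every target vertex, the index
    of its matched source vertex."""
    emb_src = Phi_src @ C.T            # transport the source basis through C
    tree = cKDTree(emb_src)
    _, match = tree.query(Phi_tgt)     # nearest neighbour in the embedding
    return match

k, n_src, n_tgt = 20, 500, 480
C = np.eye(k) + 0.01 * np.random.randn(k, k)
print(fmap_to_p2p(C, np.random.randn(n_src, k), np.random.randn(n_tgt, k)).shape)
```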
Gaussian processes (GPs) are nonparametric priors over functions. Fitting a
GP implies computing a posterior distribution of functions consistent with the
observed data. Similarly, deep Gaussian processes (DGPs) should allow us to
compute a posterior distribution of compositions of multiple functions giving
rise to the observations. However, exact Bayesian inference is intractable for
DGPs, motivating the use of various approximations. We show that the
application of simplifying mean-field assumptions across the hierarchy leads to
the layers of a DGP collapsing to near-deterministic transformations. We argue
that such an inference scheme is suboptimal, not taking advantage of the
potential of the model to discover the compositional structure in the data. To
address this issue, we examine alternative variational inference schemes
allowing for dependencies across different layers and discuss their advantages
and limitations. | [
"stat.ML",
"cs.LG"
] |
Pedestrian attribute recognition has been an emerging research topic in the
area of video surveillance. To predict the existence of a particular attribute,
it is demanded to localize the regions related to the attribute. However, in
this task, the region annotations are not available. How to carve out these
attribute-related regions remains challenging. Existing methods applied
attribute-agnostic visual attention or heuristic body-part localization
mechanisms to enhance the local feature representations, while neglecting to
employ attributes to define local feature areas. We propose a flexible
Attribute Localization Module (ALM) to adaptively discover the most
discriminative regions and learn the regional features for each attribute at
multiple levels. Moreover, a feature pyramid architecture is also introduced to
enhance the attribute-specific localization at low-levels with high-level
semantic guidance. The proposed framework does not require additional region
annotations and can be trained end-to-end with multi-level deep supervision.
Extensive experiments show that the proposed method achieves state-of-the-art
results on three pedestrian attribute datasets, including PETA, RAP, and
PA-100K. | [
"cs.CV"
] |
This paper focuses on the construction of stronger local features and the
effective fusion of image and LiDAR data. We adopt different modalities of
LiDAR data to generate richer features and present an adaptive and
azimuth-aware network to aggregate local features from image, bird's eye view
maps and point cloud. Our network mainly consists of three subnetworks: ground
plane estimation network, region proposal network and adaptive fusion network.
The ground plane estimation network extracts features of point cloud and
predicts the parameters of a plane which are used for generating abundant 3D
anchors. The region proposal network generates features of image and bird's eye
view maps to output region proposals. To integrate heterogeneous image and
point cloud features, the adaptive fusion network explicitly adjusts the
intensity of multiple local features and achieves the orientation consistency
between image and LiDAR data by introducing an azimuth-aware fusion module.
Experiments are conducted on KITTI dataset and the results validate the
advantages of our aggregation of multimodal local features and the adaptive
fusion network. | [
"cs.CV"
] |
Providing pixel-level supervisions for scene text segmentation is inherently
difficult and costly, so that only few small datasets are available for this
task. To face the scarcity of training data, previous approaches based on
Convolutional Neural Networks (CNNs) rely on the use of a synthetic dataset for
pre-training. However, synthetic data cannot reproduce the complexity and
variability of natural images. In this work, we propose to use a weakly
supervised learning approach to reduce the domain-shift between synthetic and
real data. Leveraging the bounding-box supervision of the COCO-Text and the MLT
datasets, we generate weak pixel-level supervisions of real images. In
particular, the COCO-Text-Segmentation (COCO_TS) and the MLT-Segmentation
(MLT_S) datasets are created and released. These two datasets are used to train
a CNN, the Segmentation Multiscale Attention Network (SMANet), which is
specifically designed to face some peculiarities of the scene text segmentation
task. The SMANet is trained end-to-end on the proposed datasets, and the
experiments show that COCO_TS and MLT_S are a valid alternative to synthetic
images, allowing the use of only a fraction of the training samples and
significantly improving performance. | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
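Turning bounding-box annotations into weak pixel-level supervision can be sketched as follows: pixels outside every box are labelled background, pixels inside a box are labelled text, and a thin border around each box is marked "ignore". This simplification is an assumption; the COCO_TS and MLT_S datasets are generated with a more elaborate procedure.

```python
# Hedged numpy sketch of weak pixel-level supervision from bounding boxes.
import numpy as np

BACKGROUND, TEXT, IGNORE = 0, 1, 255

def weak_mask(h, w, boxes, margin=3):
    mask = np.full((h, w), BACKGROUND, dtype=np.uint8)
    for x0, y0, x1, y1 in boxes:
        # uncertain border region around each box
        mask[max(0, y0 - margin):y1 + margin, max(0, x0 - margin):x1 + margin] = IGNORE
        mask[y0:y1, x0:x1] = TEXT
    return mask

m = weak_mask(100, 200, boxes=[(20, 30, 80, 50), (120, 60, 180, 90)])
print({v: int(np.sum(m == v)) for v in (BACKGROUND, TEXT, IGNORE)})
```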
Objective: Breast cancer screening is of great significance in contemporary
women's preventive health care. Existing AI-based screening systems do not
reach the accuracy that clinicians expect. How to make intelligent systems
more reliable is a common problem. Methods: 1) Ultrasound image
super-resolution: the SRGAN super-resolution network reduces the unclearness of
ultrasound images caused by the device itself and improves the accuracy and
generalization of the detection model. 2) In response to the needs of medical
images, we have improved the YOLOv4 and the CenterNet models. 3) Multi-AI
model: based on the respective advantages of different AI models, we employ two
AI models to cross-validate clinical results, accepting agreeing results and
rejecting the others. Results: 1) With the help of the super-resolution
model, the YOLOv4 model and the CenterNet model increased their mAP scores by
9.6% and 13.8%, respectively. 2) Two methods for transforming the detection model into a
classification model are proposed, and the output is unified in a specified
format to facilitate invoking the multi-AI model. 3) In the classification
evaluation experiment, combining the YOLOv4 model (sensitivity 57.73%,
specificity 90.08%) and the CenterNet model (sensitivity 62.64%, specificity
92.54%), the multi-AI model will refuse to make judgments on 23.55% of the
input data. Correspondingly, the performance is greatly improved to a
sensitivity of 95.91% and a specificity of 96.02%. Conclusion: Our work
makes the AI model more reliable in medical image diagnosis. Significance: 1)
The proposed method makes the target detection model more suitable for
diagnosing breast ultrasound images. 2) It provides a new idea for artificial
intelligence in medical diagnosis, which can more conveniently introduce target
detection models from other fields to serve medical lesion screening. | [
"cs.CV",
"J.3; I.5.1"
] |
I describe an optimal control view of adversarial machine learning, where the
dynamical system is the machine learner, the inputs are adversarial actions, and
the control costs are defined by the adversary's goals to do harm and be hard
to detect. This view encompasses many types of adversarial machine learning,
including test-item attacks, training-data poisoning, and adversarial reward
shaping. The view encourages adversarial machine learning researchers to utilize
advances in control theory and reinforcement learning. | [
"cs.LG",
"stat.ML"
] |
Research on visual signal compression has a long history. Fueled by deep
learning, exciting progress has been made recently. Despite achieving better
compression performance, existing end-to-end compression algorithms are still
designed towards better signal quality in terms of rate-distortion
optimization. In this paper, we show that the design and optimization of
network architecture could be further improved for compression towards machine
vision. We propose an inverted bottleneck structure for end-to-end compression
towards machine vision, which specifically accounts for efficient
representation of the semantic information. Moreover, we explore the capability
of the optimization by incorporating the analytics accuracy into the optimization
process, and the optimality is further explored with generalized rate-accuracy
optimization in an iterative manner. We use object detection as a showcase for
end-to-end compression towards machine vision, and extensive experiments show
that the proposed scheme achieves significant BD-rate savings in terms of
analysis performance. Moreover, the promise of the scheme is also demonstrated
with strong generalization capability towards other machine vision tasks, due
to the enabling of signal-level reconstruction. | [
"cs.CV",
"cs.MM",
"eess.IV"
] |
Background: Building visual encoding models to accurately predict visual
responses is a central challenge for current vision-based brain-machine
interface techniques. To achieve high prediction accuracy on neural signals,
visual encoding models should include precise visual features and appropriate
prediction algorithms. Most existing visual encoding models employ hand-craft
visual features (e.g., Gabor wavelets or semantic labels) or data-driven
features (e.g., features extracted from deep neural networks (DNN)). They also
assume a linear mapping between feature representation to brain activity.
However, it remains unknown whether such linear mapping is sufficient for
maximizing prediction accuracy. New Method: We construct a new visual encoding
framework to predict cortical responses in a benchmark functional magnetic
resonance imaging (fMRI) dataset. In this framework, we employ the transfer
learning technique to incorporate a pre-trained DNN (i.e., AlexNet) and train a
nonlinear mapping from visual features to brain activity. This nonlinear
mapping replaces the conventional linear mapping and is supposed to improve
prediction accuracy on brain activity. Results: The proposed framework can
significantly predict responses of over 20% voxels in early visual areas (i.e.,
V1-lateral occipital region, LO) and achieve unprecedented prediction accuracy.
Comparison with Existing Methods: Comparing to two conventional visual encoding
models, we find that the proposed encoding model shows consistent higher
prediction accuracy in all early visual areas, especially in relatively
anterior visual areas (i.e., V4 and LO). Conclusions: Our work proposes a new
framework to utilize pre-trained visual features and train non-linear mappings
from visual features to brain activity. | [
"cs.CV",
"q-bio.NC"
] |
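The key ingredient of the framework above, replacing the conventional linear regression from DNN features to voxel responses with a small non-linear mapping, can be sketched as a two-layer network; the feature dimension (assumed to come from a pre-trained AlexNet layer) and the hidden size are illustrative.

```python
# Hedged PyTorch sketch of a non-linear feature-to-voxel encoding model.
import torch
import torch.nn as nn

class NonlinearEncoder(nn.Module):
    def __init__(self, feat_dim=4096, hidden=256, n_voxels=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, n_voxels))
    def forward(self, feats):
        return self.net(feats)            # predicted fMRI response per voxel

feats = torch.randn(32, 4096)             # pre-trained DNN features for 32 stimuli
bold = torch.randn(32, 100)               # measured voxel responses
model = NonlinearEncoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(5):
    loss = nn.functional.mse_loss(model(feats), bold)
    opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```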
Pedestrian trajectory prediction is a key technology in autopilot, which
remains very challenging due to the complex interactions between pedestrians.
However, previous works based on dense undirected interaction suffer from
modeling superfluous interactions and neglect of trajectory motion tendency,
and thus inevitably result in a considerable deviation from reality. To cope
with these issues, we present a Sparse Graph Convolution Network~(SGCN) for
pedestrian trajectory prediction. Specifically, the SGCN explicitly models the
sparse directed interaction with a sparse directed spatial graph to capture
adaptive interaction pedestrians. Meanwhile, we use a sparse directed temporal
graph to model the motion tendency, thus to facilitate the prediction based on
the observed direction. Finally, parameters of a bi-Gaussian distribution for
trajectory prediction are estimated by fusing the above two sparse graphs. We
evaluate our proposed method on the ETH and UCY datasets, and the experimental
results show our method outperforms comparative state-of-the-art methods by 9%
in Average Displacement Error(ADE) and 13% in Final Displacement Error(FDE).
Notably, visualizations indicate that our method can capture adaptive
interactions between pedestrians and their effective motion tendencies. | [
"cs.CV"
] |
Face editing represents a popular research topic within the computer vision
and image processing communities. While significant progress has been made
recently in this area, existing solutions: (i) are still largely focused on
low-resolution images, (ii) often generate editing results with visual
artefacts, or (iii) lack fine-grained control and alter multiple (entangled)
attributes at once, when trying to generate the desired facial semantics. In
this paper, we aim to address these issues though a novel attribute editing
approach called MaskFaceGAN. The proposed approach is based on an optimization
procedure that directly optimizes the latent code of a pre-trained
(state-of-the-art) Generative Adversarial Network (i.e., StyleGAN2) with
respect to several constraints that ensure: (i) preservation of relevant image
content, (ii) generation of the targeted facial attributes, and (iii)
spatially selective treatment of local image areas. The constraints are
enforced with the help of a (differentiable) attribute classifier and face
parser that provide the necessary reference information for the optimization
procedure. MaskFaceGAN is evaluated in extensive experiments on the CelebA-HQ,
Helen and SiblingsDB-HQf datasets and in comparison with several
state-of-the-art techniques from the literature, i.e., StarGAN, AttGAN, STGAN,
and two versions of InterFaceGAN. Our experimental results show that the
proposed approach is able to edit face images with respect to several facial
attributes with unprecedented image quality and at high-resolutions
(1024x1024), while exhibiting considerably less problems with attribute
entanglement than competing solutions. The source code is made freely available
from: https://github.com/MartinPernus/MaskFaceGAN. | [
"cs.CV"
] |
Real-world perception systems in many cases build on hardware with limited
resources to adhere to cost and power limitations of their carrying system.
Deploying deep neural networks on resource-constrained hardware became possible
with model compression techniques, as well as efficient and hardware-aware
architecture design. However, model adaptation is additionally required due to
the diverse operation environments. In this work, we address the problem of
training deep neural networks on resource-constrained hardware in the context
of visual domain adaptation. We select the task of monocular depth estimation
where our goal is to transform a pre-trained model to the target's domain data.
While the source domain includes labels, we assume an unlabelled target domain,
as it happens in real-world applications. Then, we present an adversarial
learning approach that is adapted for training on the device with limited
resources. Since visual domain adaptation, i.e. neural network training, has
not been previously explored for resource-constrained hardware, we present the
first feasibility study for image-based depth estimation. Our experiments show
that visual domain adaptation is relevant only for efficient network
architectures and training sets at the order of a few hundred samples. Models
and code are publicly available. | [
"cs.CV",
"cs.LG"
] |
We present a novel method to learn temporally consistent 3D reconstruction of
clothed people from a monocular video. Recent methods for 3D human
reconstruction from monocular video using volumetric, implicit or parametric
human shape models, produce per-frame reconstructions, giving temporally
inconsistent output and limited performance when applied to video. In this
paper, we introduce an approach to learn temporally consistent features for
textured reconstruction of clothed 3D human sequences from monocular video by
proposing two advances: a novel temporal consistency loss function; and hybrid
representation learning for implicit 3D reconstruction from 2D images and
coarse 3D geometry. The proposed advances improve the temporal consistency and
accuracy of both the 3D reconstruction and texture prediction from a monocular
video. Comprehensive comparative performance evaluation on images of people
demonstrates that the proposed method significantly outperforms the
state-of-the-art learning-based single image 3D human shape estimation
approaches achieving significant improvement of reconstruction accuracy,
completeness, quality and temporal consistency. | [
"cs.CV"
] |
Graph neural networks (GNNs) have attracted increasing interests. With broad
deployments of GNNs in real-world applications, there is an urgent need for
understanding the robustness of GNNs under adversarial attacks, especially in
realistic setups. In this work, we study the problem of attacking GNNs in a
restricted and realistic setup, by perturbing the features of a small set of
nodes, with no access to model parameters and model predictions. Our formal
analysis draws a connection between this type of attacks and an influence
maximization problem on the graph. This connection not only enhances our
understanding on the problem of adversarial attack on GNNs, but also allows us
to propose a group of effective and practical attack strategies. Our
experiments verify that the proposed attack strategies significantly degrade
the performance of three popular GNN models and outperform baseline adversarial
attack strategies. | [
"cs.LG",
"cs.AI"
] |
Modern time series classifiers display impressive predictive capabilities,
yet their decision-making processes mostly remain black boxes to the user. At
the same time, model-agnostic explainers, such as the recently proposed SHAP,
promise to make the predictions of machine learning models interpretable,
provided there are well-designed domain mappings. We bring both worlds together
in our timeXplain framework, extending the reach of explainable artificial
intelligence to time series classification and value prediction. We present
novel domain mappings for the time and the frequency domain as well as series
statistics and analyze their explicative power as well as their limits. We
employ timeXplain in a large-scale experimental comparison of several
state-of-the-art time series classifiers and discover similarities between
seemingly distinct classification concepts such as residual neural networks and
elastic ensembles. | [
"cs.LG",
"stat.ML"
] |
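A frequency-domain mapping of the kind described above can be sketched as follows: a time series is decomposed with an FFT, individual frequency bands are zeroed out according to a binary coalition vector, and the signal is reconstructed, so that a model-agnostic explainer such as SHAP can attribute a prediction to frequency bands. The number of bands and their boundaries are assumptions for illustration.

```python
# Hedged numpy sketch of a frequency-band domain mapping for explanations.
import numpy as np

def mask_frequency_bands(x, coalition, n_bands=8):
    spec = np.fft.rfft(x)
    edges = np.linspace(0, len(spec), n_bands + 1, dtype=int)
    for b, keep in enumerate(coalition):
        if not keep:
            spec[edges[b]:edges[b + 1]] = 0.0        # drop this band
    return np.fft.irfft(spec, n=len(x))

x = np.sin(0.2 * np.arange(256)) + 0.5 * np.sin(1.5 * np.arange(256))
print(mask_frequency_bands(x, coalition=[1, 0, 1, 1, 1, 1, 1, 1]).shape)  # (256,)
```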
We propose real-time, six degrees of freedom (6DoF), 3D face pose estimation
without face detection or landmark localization. We observe that estimating the
6DoF rigid transformation of a face is a simpler problem than facial landmark
detection, often used for 3D face alignment. In addition, 6DoF offers more
information than face bounding box labels. We leverage these observations to
make multiple contributions: (a) We describe an easily trained, efficient,
Faster R-CNN--based model which regresses 6DoF pose for all faces in the photo,
without preliminary face detection. (b) We explain how pose is converted and
kept consistent between the input photo and arbitrary crops created while
training and evaluating our model. (c) Finally, we show how face poses can
replace detection bounding box training labels. Tests on AFLW2000-3D and BIWI
show that our method runs at real-time and outperforms state of the art (SotA)
face pose estimators. Remarkably, our method also surpasses SotA models of
comparable complexity on the WIDER FACE detection benchmark, despite not being
optimized on bounding box labels. | [
"cs.CV"
] |
We introduce a weakly supervised method for representation learning based on
aligning temporal sequences (e.g., videos) of the same process (e.g., human
action). The main idea is to use the global temporal ordering of latent
correspondences across sequence pairs as a supervisory signal. In particular,
we propose a loss based on scoring the optimal sequence alignment to train an
embedding network. Our loss is based on a novel probabilistic path finding view
of dynamic time warping (DTW) that contains the following three key features:
(i) the local path routing decisions are contrastive and differentiable, (ii)
pairwise distances are cast as probabilities that are contrastive as well, and
(iii) our formulation naturally admits a global cycle consistency loss that
verifies correspondences. For evaluation, we consider the tasks of fine-grained
action classification, few shot learning, and video synchronization. We report
significant performance increases over previous methods. In addition, we report
two applications of our temporal alignment framework, namely 3D pose
reconstruction and fine-grained audio/visual retrieval. | [
"cs.CV"
] |
Recommender systems are an essential part of any e-commerce platform.
Recommendations are typically generated by aggregating large amounts of user
data. A malicious actor may be motivated to sway the output of such recommender
systems by injecting malicious datapoints to leverage the system for financial
gain. In this work, we propose a semi-supervised attack detection algorithm to
identify the malicious datapoints. We do this by leveraging a portion of the
dataset that has a lower chance of being polluted to learn the distribution of
genuine datapoints. Our proposed approach modifies the Generative Adversarial
Network architecture to take into account the contextual information from user
activity. This allows the model to distinguish legitimate datapoints from the
injected ones. | [
"cs.LG"
] |
Solving tasks with sparse rewards is a main challenge in reinforcement
learning. While hierarchical controllers are an intuitive approach to this
problem, current methods often require manual reward shaping, alternating
training phases, or manually defined sub-tasks. We introduce modulated policy
hierarchies (MPH), that can learn end-to-end to solve tasks from sparse
rewards. To achieve this, we study different modulation signals and exploration
for hierarchical controllers. Specifically, we find that communicating via
bit-vectors is more efficient than selecting one out of multiple skills, as it
enables mixing between them. To facilitate exploration, MPH uses its different
time scales for temporally extended intrinsic motivation at each level of the
hierarchy. We evaluate MPH on the robotics tasks of pushing and sparse block
stacking, where it outperforms recent baselines. | [
"cs.LG",
"cs.AI"
] |
Visual attention can be defined as the behavioral and cognitive process of
selectively focusing on a discrete aspect of sensory cues while disregarding
other perceivable information. This biological mechanism, more specifically
saliency detection, has long been used in multimedia indexing to drive the
analysis only on relevant parts of images or videos for further processing.
The recent advent of silicon retinas (or event cameras -- sensors that
measure pixel-wise changes in brightness and output asynchronous events
accordingly) raises the question of how to adapt attention and saliency to the
unconventional type of such sensors' output. Silicon retinas aim to reproduce
the behaviour of the biological retina. In that respect, they produce discrete
events in time that can be construed as neural spikes and interpreted as such by a
neural network.
In particular, Spiking Neural Networks (SNNs) represent an asynchronous type
of artificial neural network closer to biology than traditional artificial
networks, mainly because they seek to mimic the dynamics of neural membrane and
action potentials over time. SNNs receive and process information in the form
of spike trains. Therefore, they make for a suitable candidate for the
efficient processing and classification of incoming event patterns measured by
silicon retinas. In this paper, we review the biological background behind the
attentional mechanism, and introduce a case study of event videos
classification with SNNs, using a biology-grounded low-level computational
attention mechanism, with interesting preliminary results. | [
"cs.CV",
"cs.LG",
"cs.NE"
] |
A robust and efficient anomaly detection technique is proposed, capable of
dealing with crowded scenes where traditional tracking based approaches tend to
fail. Initial foreground segmentation of the input frames confines the analysis
to foreground objects and effectively ignores irrelevant background dynamics.
Input frames are split into non-overlapping cells, followed by extracting
features based on motion, size and texture from each cell. Each feature type is
independently analysed for the presence of an anomaly. Unlike most methods, a
refined estimate of object motion is achieved by computing the optical flow of
only the foreground pixels. The motion and size features are modelled by an
approximated version of kernel density estimation, which is computationally
efficient even for large training datasets. Texture features are modelled by an
adaptively grown codebook, with the number of entries in the codebook selected
in an online fashion. Experiments on the recently published UCSD Anomaly
Detection dataset show that the proposed method obtains considerably better
results than three recent approaches: MPPCA, social force, and mixture of
dynamic textures (MDT). The proposed method is also several orders of magnitude
faster than MDT, the next best performing method. | [
"cs.CV",
"I.2.10; I.4.6; I.4.8; I.5.4"
] |
In this work we address the challenging problem of unsupervised learning from
videos. Existing methods utilize the spatio-temporal continuity in contiguous
video frames as regularization for the learning process. Typically, this
temporal coherence of close frames is used as a free form of annotation,
encouraging the learned representations to exhibit small differences between
these frames. But this type of approach fails to capture the dissimilarity
between videos with different content, and hence learns less discriminative
features. Here we propose two Siamese architectures for Convolutional Neural
Networks, and their corresponding novel loss functions, to learn from unlabeled
videos, which jointly exploit the local temporal coherence between contiguous
frames, and a global discriminative margin used to separate representations of
different videos. An extensive experimental evaluation is presented, where we
validate the proposed models on various tasks. First, we show how the learned
features can be used to discover actions and scenes in video collections.
Second, we show the benefits of such an unsupervised learning from just
unlabeled videos, which can be directly used as a prior for the supervised
recognition tasks of actions and objects in images, where our results further
show that our features can even surpass a traditional and heavily supervised
pre-training plus fine-tuning strategy. | [
"cs.CV"
] |
Image segmentation is an important step in most visual tasks. While
convolutional neural networks have been shown to perform well on single-image
segmentation, to our knowledge, no study has been done on leveraging
recurrent gated architectures for video segmentation. Accordingly, we propose a
novel method for online segmentation of video sequences that incorporates
temporal data. The network is built from a fully convolutional element and a
recurrent unit that operates on a sliding window over the temporal data. We also
introduce a novel convolutional gated recurrent unit that preserves the spatial
information and reduces the parameters learned. Our method has the advantage
that it can work in an online fashion instead of operating over the whole input
batch of video frames. The network is tested on the change detection dataset,
and shown to achieve a 5.5\% improvement in F-measure over a plain fully
convolutional network for per-frame segmentation. It also achieves an
improvement of 1.4\% in F-measure compared to our baseline network, which we
call FCN 12s. | [
"cs.CV"
] |