text (string, 29–3.31k characters) | label (sequence of 1–11 categories)
---|---|
This paper focuses on inverse reinforcement learning for autonomous
navigation using distance and semantic category observations. The objective is
to infer a cost function that explains demonstrated behavior while relying only
on the expert's observations and state-control trajectory. We develop a map
encoder that infers semantic category probabilities from the observation
sequence, and a cost encoder, defined as a deep neural network over the
semantic features. Since the expert cost is not directly observable, the model
parameters can only be optimized by differentiating the error between
demonstrated controls and a control policy computed from the cost estimate. We
propose a new model of expert behavior that enables error minimization using a
closed-form subgradient computed only over a subset of promising states via a
motion planning algorithm. Our approach allows generalizing the learned
behavior to new environments with new spatial configurations of the semantic
categories. We analyze the different components of our model in a minigrid
environment. We also demonstrate that our approach learns to follow traffic
rules in the autonomous driving CARLA simulator by relying on semantic
observations of buildings, sidewalks, and road lanes. | [
"cs.LG",
"cs.RO"
] |
Generative adversarial networks have gained a lot of attention in the
computer vision community due to their capability of data generation without
explicitly modelling the probability density function. The adversarial loss
brought by the discriminator provides a clever way of incorporating unlabeled
samples into training and imposing higher order consistency. This has proven to
be useful in many cases, such as domain adaptation, data augmentation, and
image-to-image translation. These properties have attracted researchers in the
medical imaging community, and we have seen rapid adoption in many traditional
and novel applications, such as image reconstruction, segmentation, detection,
classification, and cross-modality synthesis. Based on our observations, this
trend will continue and we therefore conducted a review of recent advances in
medical imaging using the adversarial training scheme with the hope of
benefiting researchers interested in this technique. | [
"cs.CV",
"cs.LG"
] |
Most domain adaptation methods consider the problem of transferring knowledge
to the target domain from a single source dataset. However, in practical
applications, we typically have access to multiple sources. In this paper we
propose the first approach for Multi-Source Domain Adaptation (MSDA) based on
Generative Adversarial Networks. Our method is inspired by the observation that
the appearance of a given image depends on three factors: the domain, the style
(characterized in terms of low-level features variations) and the content. For
this reason we propose to project the image features onto a space where only
the dependence on the content is kept, and then re-project this invariant
representation onto the pixel space using the target domain and style. In this
way, new labeled images can be generated which are used to train a final target
classifier. We test our approach using common MSDA benchmarks, showing that it
outperforms state-of-the-art methods. | [
"cs.CV"
] |
Data-efficient learning in continuous state-action spaces using very
high-dimensional observations remains a key challenge in developing fully
autonomous systems. In this paper, we consider one instance of this challenge,
the pixels to torques problem, where an agent must learn a closed-loop control
policy from pixel information only. We introduce a data-efficient, model-based
reinforcement learning algorithm that learns such a closed-loop policy directly
from pixel information. The key ingredient is a deep dynamical model that uses
deep auto-encoders to learn a low-dimensional embedding of images jointly with
a predictive model in this low-dimensional feature space. Joint learning
ensures that not only static but also dynamic properties of the data are
accounted for. This is crucial for long-term predictions, which lie at the core
of the adaptive model predictive control strategy that we use for closed-loop
control. Compared to state-of-the-art reinforcement learning methods for
continuous states and actions, our approach learns quickly, scales to
high-dimensional state spaces and is an important step toward fully autonomous
learning from pixels to torques. | [
"stat.ML",
"cs.LG",
"cs.RO",
"cs.SY"
] |
The assumption of positivity in causal inference (also known as common
support and covariate overlap) is necessary to obtain valid causal estimates.
Therefore, confirming it holds in a given dataset is an important first step of
any causal analysis. Most common methods to date are insufficient for
discovering non-positivity, as they do not scale for modern high-dimensional
covariate spaces, or they cannot pinpoint the subpopulation violating
positivity. To overcome these issues, we suggest harnessing decision trees for
detecting violations. By dividing the covariate space into mutually exclusive
regions, each with maximized homogeneity of treatment groups, decision trees
can be used to automatically detect subspaces violating positivity. By
augmenting the method with an additional random forest model, we can quantify
the robustness of the violation within each subspace. This solution is scalable
and provides an interpretable characterization of the subspaces in which
violations occur. We provide a visualization of the stratification rules that
define each subpopulation, combined with the severity of positivity violation
within it. We also provide an interactive version of the visualization that
allows a deeper dive into the properties of each subspace. | [
"stat.ML",
"cs.LG"
] |
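A minimal sketch of the idea above (not the authors' implementation; the data, tree depth, and propensity thresholds are invented for illustration): a decision tree partitions the covariate space, and leaves whose empirical treatment propensity is extreme are flagged as possible positivity violations.

```python
# Illustrative sketch: detect positivity violations with a decision tree.
# Data, tree depth, and the propensity thresholds are invented for this example.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 5))                   # covariates
# Treatment is near-deterministic when X[:, 0] > 1.5 -> a positivity violation.
p = np.where(X[:, 0] > 1.5, 0.99, 0.5)
t = rng.binomial(1, p)

tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=200).fit(X, t)
leaf_id = tree.apply(X)                          # leaf index for every sample

for leaf in np.unique(leaf_id):
    mask = leaf_id == leaf
    prop = t[mask].mean()                        # empirical propensity in the leaf
    if prop < 0.05 or prop > 0.95:               # heuristic violation threshold
        print(f"leaf {leaf}: n={mask.sum()}, propensity={prop:.2f} -> possible violation")
```

An additional random forest, as the abstract suggests, could then be used to score how robust the violation is within each flagged leaf.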
Advances in computing technology have allowed researchers across many fields
of endeavor to collect and maintain vast amounts of observational statistical
data such as clinical data, biological patient data, website access data,
financial data, and the like. Brain Magnetic Resonance Imaging (MRI)
segmentation is a complex problem in the field of medical imaging despite the
variety of methods that have been presented. An MR image of the human brain can
be divided into several sub-regions, especially soft tissues such as gray
matter, white matter, and cerebrospinal fluid. Although edge information is the
main clue in image segmentation, it cannot produce good results when analyzing
the content of images without being combined with other information. The
segmentation of brain tissue in magnetic resonance imaging (MRI) is very
important for detecting the existence and outlines of tumors. In this paper, an
algorithm for segmentation based on the symmetry character of brain MRI images
is presented. Our goal is to detect the position and boundary of tumors
automatically. Experiments were conducted on real images, and the results show
that the algorithm is flexible and convenient. | [
"cs.CV"
] |
The vision of automated driving is to increase both road safety and
efficiency, while offering passengers a convenient travel experience. This
requires that autonomous systems correctly estimate the current traffic scene
and its likely evolution. In highway scenarios early recognition of cut-in
maneuvers is essential for risk-aware maneuver planning. In this paper, a
statistical approach is proposed, which advantageously utilizes a set of
prototypical lane change trajectories to realize both early maneuver detection
and uncertainty-aware trajectory prediction for traffic participants.
Generation of prototype trajectories from real traffic data is accomplished by
Agglomerative Hierarchical Clustering. During clustering, the alignment of the
cluster prototypes to each other is optimized and the cohesion of the resulting
prototype is limited when two clusters merge. In the prediction stage, the
similarity of observed vehicle motion and typical lane change patterns in the
database is evaluated to construct a set of significant features for maneuver
classification via Boosted Decision Trees. The future trajectory is predicted
combining typical lane change realizations in a mixture model. B-splines based
trajectory adaptations guarantee continuity during transition from actually
observed to predicted vehicle states. Quantitative evaluation results
demonstrate the proposed concept's improved performance for both maneuver and
trajectory prediction compared to a previously implemented reference approach. | [
"cs.LG",
"cs.RO",
"stat.ML"
] |
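A toy sketch of the prototype-generation step (synthetic lateral-offset trajectories and plain Euclidean agglomerative clustering stand in for the paper's alignment-optimized procedure):

```python
# Toy sketch of prototype generation: agglomerative clustering of synthetic
# lateral-offset trajectories; plain Euclidean distance replaces the paper's
# alignment-optimized clustering.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(1)
T = 20                                      # time steps per trajectory
early = np.cumsum(rng.normal(0.10, 0.02, (30, T)), axis=1)  # early lane change
late = np.cumsum(rng.normal(0.03, 0.02, (30, T)), axis=1)   # late/slow lane change
trajs = np.vstack([early, late])            # (60, T) lateral positions

labels = AgglomerativeClustering(n_clusters=2, linkage="average").fit_predict(trajs)
prototypes = np.stack([trajs[labels == k].mean(axis=0) for k in range(2)])
print(prototypes.shape)                     # (2, T): one prototype per cluster
```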
3D visual grounding aims at grounding a natural language description about a
3D scene, usually represented in the form of 3D point clouds, to the targeted
object region. Point clouds are sparse, noisy, and contain limited semantic
information compared with 2D images. These inherent limitations make the 3D
visual grounding problem more challenging. In this study, we propose 2D
Semantics Assisted Training (SAT) that utilizes 2D image semantics in the
training stage to ease point-cloud-language joint representation learning and
assist 3D visual grounding. The main idea is to learn auxiliary alignments
between rich, clean 2D object representations and the corresponding objects or
mentioned entities in 3D scenes. SAT takes 2D object semantics, i.e., object
label, image feature, and 2D geometric feature, as the extra input in training
but does not require such inputs during inference. By effectively utilizing 2D
semantics in training, our approach boosts the accuracy on the Nr3D dataset
from 37.7% to 49.2%, which significantly surpasses the non-SAT baseline with
the identical network architecture and inference input. Our approach
outperforms the state of the art by large margins on multiple 3D visual
grounding datasets, i.e., +10.4% absolute accuracy on Nr3D, +9.9% on Sr3D, and
+5.6% on ScanRef. | [
"cs.CV"
] |
The quality of the image representations obtained from self-supervised
learning depends strongly on the type of data augmentations used in the
learning formulation. Recent papers have ported these methods from still images
to videos and found that leveraging both audio and video signals yields strong
gains; however, they did not find that spatial augmentations such as cropping,
which are very important for still images, work as well for videos. In this
paper, we improve these formulations in two ways unique to the spatio-temporal
aspect of videos. First, for space, we show that spatial augmentations such as
cropping do work well for videos too, but that previous implementations, due to
the high processing and memory cost, could not do this at a scale sufficient
for it to work well. To address this issue, we first introduce Feature Crop, a
method to simulate such augmentations much more efficiently directly in feature
space. Second, we show that as opposed to naive average pooling, the use of
transformer-based attention improves performance significantly, and is well
suited for processing feature crops. Combining both of our discoveries into a
new method, Space-time Crop & Attend (STiCA), we achieve state-of-the-art
performance across multiple video-representation learning benchmarks. In
particular, we achieve new state-of-the-art accuracies of 67.0% on HMDB-51 and
93.1% on UCF-101 when pre-training on Kinetics-400. | [
"cs.CV"
] |
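A minimal sketch of the feature-crop idea, assuming a video backbone that outputs a (batch, channels, time, height, width) feature map; the shapes and crop size below are placeholders:

```python
# Minimal sketch of the feature-crop idea (PyTorch): crop the feature map
# instead of the input video. Shapes and crop size are placeholders.
import torch

def feature_crop(feats, crop=4):
    # feats: (batch, channels, time, height, width) video feature map
    _, _, _, H, W = feats.shape
    i = torch.randint(0, H - crop + 1, (1,)).item()
    j = torch.randint(0, W - crop + 1, (1,)).item()
    return feats[..., i:i + crop, j:j + crop]   # spatial crop in feature space

f = torch.randn(2, 256, 8, 7, 7)
print(feature_crop(f).shape)                    # torch.Size([2, 256, 8, 4, 4])
```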
Generative models have achieved impressive results in many domains including
image and text generation. In the natural sciences, generative models have led
to rapid progress in automated drug discovery. Many of the current methods
focus on either 1-D or 2-D representations of typically small, drug-like
molecules. However, many molecules require 3-D descriptors and exceed the
chemical complexity of commonly used datasets. We present a method to encode and
decode the position of atoms in 3-D molecules from a dataset of nearly 50,000
stable crystal unit cells that vary from containing 1 to over 100 atoms. We
construct a smooth and continuous 3-D density representation of each crystal
based on the positions of different atoms. Two different neural networks were
trained on a dataset of over 120,000 three-dimensional samples of single and
repeating crystal structures, made by rotating the single unit cells. The
first, an Encoder-Decoder pair, constructs a compressed latent space
representation of each molecule and then decodes this description into an
accurate reconstruction of the input. The second network segments the resulting
output into atoms and assigns each atom an atomic number. By generating
compressed, continuous latent-space representations of molecules, we are able
to decode random samples, interpolate between two molecules, and alter known
molecules. | [
"cs.LG",
"cond-mat.mtrl-sci",
"physics.comp-ph",
"stat.ML"
] |
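The smooth density construction can be sketched as placing an isotropic Gaussian at each atom position on a voxel grid (the grid size and Gaussian width are illustrative guesses, and fractional coordinates in [0, 1) are assumed):

```python
# Illustrative sketch of the smooth 3-D density: one isotropic Gaussian per
# atom on a voxel grid. Grid size and Gaussian width are invented; fractional
# coordinates in [0, 1) are assumed.
import numpy as np

def density_grid(positions, grid=32, sigma=0.05):
    axis = (np.arange(grid) + 0.5) / grid
    xx, yy, zz = np.meshgrid(axis, axis, axis, indexing="ij")
    vol = np.zeros((grid, grid, grid))
    for px, py, pz in positions:
        d2 = (xx - px) ** 2 + (yy - py) ** 2 + (zz - pz) ** 2
        vol += np.exp(-d2 / (2 * sigma ** 2))
    return vol

atoms = np.array([[0.25, 0.25, 0.25], [0.75, 0.75, 0.75]])
print(density_grid(atoms).shape)                # (32, 32, 32) voxel density
```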
Many interesting tasks in machine learning and computer vision are learned by
optimising an objective function defined as a weighted linear combination of
multiple losses. The final performance is sensitive to choosing the correct
(relative) weights for these losses. Finding a good set of weights is often
done by adopting them into the set of hyper-parameters, which are set using an
extensive grid search. This is computationally expensive. In this paper, we
propose a weighting scheme based on the coefficient of variation and set the
weights based on properties observed while training the model. The proposed
method incorporates a measure of uncertainty to balance the losses, and as a
result the loss weights evolve during training without requiring another
(learning based) optimisation. In contrast to many loss weighting methods in the
literature, we focus on single-task multi-loss problems, such as monocular
depth estimation and semantic segmentation, and show that multi-task approaches
for loss weighting do not work well on such single-task problems. The validity of the
approach is shown empirically for depth estimation and semantic segmentation on
multiple datasets. | [
"cs.CV",
"cs.AI",
"68T45",
"I.4"
] |
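A hedged sketch of coefficient-of-variation weighting: each loss is weighted by the dispersion (std/mean) of its own training history, so the weights evolve without extra optimization. The windowing and normalization details here are simplifications, not the paper's exact scheme.

```python
# Hedged sketch of coefficient-of-variation loss weighting: each loss is
# weighted by std/mean of its own history, so weights evolve during training
# with no extra optimization. Windowing/normalization details are simplified.
import numpy as np

class CoVWeighter:
    def __init__(self, n_losses):
        self.history = [[] for _ in range(n_losses)]

    def weights(self, losses):
        for h, l in zip(self.history, losses):
            h.append(float(l))
        cov = np.array([np.std(h) / (np.mean(h) + 1e-8) for h in self.history])
        return cov / (cov.sum() + 1e-8)     # relative weights (zero on step one)

w = CoVWeighter(2)
for step in range(1, 4):
    losses = [1.0 / step, 0.5]              # fake depth and segmentation losses
    print(w.weights(losses))                # the varying loss gains weight
```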
Involuntary motion during weight-bearing cone-beam computed tomography (CT)
scans of the knee causes artifacts in the reconstructed volumes making them
unusable for clinical diagnosis. Currently, image-based or marker-based methods
are applied to correct for this motion, but often require long execution or
preparation times. We propose to attach an inertial measurement unit (IMU)
containing an accelerometer and a gyroscope to the leg of the subject in order
to measure the motion during the scan and correct for it. To validate this
approach, we present a simulation study using real motion measured with an
optical 3D tracking system. With this motion, an XCAT numerical knee phantom is
non-rigidly deformed during a simulated CT scan creating motion corrupted
projections. A biomechanical model is animated with the same tracked motion in
order to generate measurements of an IMU placed below the knee. In our proposed
multi-stage algorithm, these signals are transformed to the global coordinate
system of the CT scan and applied for motion compensation during
reconstruction. Our proposed approach can effectively reduce motion artifacts
in the reconstructed volumes. Compared to the motion corrupted case, the
average structural similarity index and root mean squared error with respect to
the no-motion case improved by 13-21% and 68-70%, respectively. These results
are qualitatively and quantitatively on par with a state-of-the-art
marker-based method we compared our approach to. The presented study shows the
feasibility of this novel approach, and yields promising results towards a
purely IMU-based motion compensation in C-arm CT. | [
"cs.CV"
] |
Deep learning tools have gained tremendous attention in applied machine
learning. However such tools for regression and classification do not capture
model uncertainty. In comparison, Bayesian models offer a mathematically
grounded framework to reason about model uncertainty, but usually come with a
prohibitive computational cost. In this paper we develop a new theoretical
framework casting dropout training in deep neural networks (NNs) as approximate
Bayesian inference in deep Gaussian processes. A direct result of this theory
gives us tools to model uncertainty with dropout NNs -- extracting information
from existing models that has been thrown away so far. This mitigates the
problem of representing uncertainty in deep learning without sacrificing either
computational complexity or test accuracy. We perform an extensive study of the
properties of dropout's uncertainty. Various network architectures and
non-linearities are assessed on tasks of regression and classification, using
MNIST as an example. We show a considerable improvement in predictive
log-likelihood and RMSE compared to existing state-of-the-art methods, and
finish by using dropout's uncertainty in deep reinforcement learning. | [
"stat.ML",
"cs.LG"
] |
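The practical recipe implied by this theory, often called MC dropout, is easy to sketch: keep dropout active at test time and average several stochastic forward passes; the sample variance serves as an uncertainty estimate. The architecture and sizes below are placeholders.

```python
# Minimal MC-dropout sketch (PyTorch): keep dropout active at test time and
# average several stochastic forward passes; the sample variance serves as an
# uncertainty estimate. Architecture and sizes are placeholders.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Dropout(0.5), nn.Linear(64, 1))

def mc_predict(model, x, n_samples=50):
    model.train()                            # keep dropout sampling on
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(n_samples)])
    return preds.mean(0), preds.var(0)

x = torch.randn(8, 10)
mean, var = mc_predict(net, x)
print(mean.shape, var.shape)                 # torch.Size([8, 1]) each
```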
Many top-performing image captioning models rely solely on object features
computed with an object detection model to generate image descriptions.
However, recent studies propose to directly use scene graphs to introduce
information about object relations into captioning, hoping to better describe
interactions between objects. In this work, we thoroughly investigate the use
of scene graphs in image captioning. We empirically study whether using
additional scene graph encoders can lead to better image descriptions and
propose a conditional graph attention network (C-GAT), where the image
captioning decoder state is used to condition the graph updates. Finally, we
determine to what extent noise in the predicted scene graphs influences caption
quality. Overall, we find no significant difference between models that use
scene graph features and models that only use object detection features across
different captioning metrics, which suggests that existing scene graph
generation models are still too noisy to be useful in image captioning.
Moreover, although the quality of predicted scene graphs is very low in
general, when using high quality scene graphs we obtain gains of up to 3.3
CIDEr compared to a strong Bottom-Up Top-Down baseline. We open source code to
reproduce all our experiments in
https://github.com/iacercalixto/butd-image-captioning. | [
"cs.CV",
"cs.CL",
"68T50, 68T45",
"I.2.7; I.2.10"
] |
It is important to identify the change point of a system's health status,
which usually signifies an incipient fault under development. The One-Class
Support Vector Machine (OC-SVM) is a popular machine learning model for anomaly
detection and hence could be used for identifying change points; however, it is
sometimes difficult to obtain a good OC-SVM model that can be used on sensor
measurement time series to identify the change points in system health status.
In this paper, we propose a novel approach for calibrating OC-SVM models. The
approach uses a heuristic search method to find a good set of input data and
hyperparameters that yield a well-performing model. Our results on the C-MAPSS
dataset demonstrate that OC-SVM can also achieve satisfactory accuracy in
detecting change points in time series with less training data, compared to
state-of-the-art deep learning approaches. In our case study, the OC-SVM
calibrated by the proposed approach is shown to be especially useful in scenarios
with a limited amount of training data. | [
"cs.LG",
"cs.NE",
"cs.SY",
"stat.ML"
] |
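A rough sketch of using an OC-SVM for change-point flagging on a sensor signal: fit on early "healthy" windows, then flag the point where the outlier rate jumps. The hyperparameters and the flagging rule below are illustrative, not the calibrated values from the paper.

```python
# Rough sketch: OC-SVM change-point flagging on a sensor signal. Fit on early
# "healthy" windows, then flag where the outlier rate jumps. Hyperparameters
# and the flagging rule are illustrative, not the paper's calibrated values.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(2)
signal = np.concatenate([rng.normal(0, 1, 600), rng.normal(3, 1, 400)])
windows = np.lib.stride_tricks.sliding_window_view(signal, 20)

model = OneClassSVM(nu=0.05, kernel="rbf", gamma="scale").fit(windows[:300])
flags = model.predict(windows) == -1          # True where a window looks anomalous
dense = np.convolve(flags, np.ones(10), "valid") >= 8
print("change point near sample", int(np.argmax(dense)))   # expected near 600
```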
Modern machine learning models (such as deep neural networks and boosting
decision tree models) have become increasingly popular in financial market
prediction, due to their superior capacity to extract complex non-linear
patterns. However, since financial datasets have very low signal-to-noise ratio
and are non-stationary, complex models are often very prone to overfitting and
suffer from instability issues. Moreover, as various machine learning and data
mining tools become more widely used in quantitative trading, many trading
firms have been producing an increasing number of features (aka factors).
Therefore, how to automatically select effective features becomes an imminent
problem. To address these issues, we propose DoubleEnsemble, an ensemble
framework leveraging learning trajectory based sample reweighting and shuffling
based feature selection. Specifically, we identify the key samples based on the
training dynamics on each sample and elicit key features based on the ablation
impact of each feature via shuffling. Our model is applicable to a wide range
of base models, capable of extracting complex patterns, while mitigating the
overfitting and instability issues for financial market prediction. We conduct
extensive experiments, including price prediction for cryptocurrencies and
stock trading, using both DNN and gradient boosting decision tree as base
models. Our experiment results demonstrate that DoubleEnsemble achieves a
superior performance compared with several baseline methods. | [
"cs.LG",
"q-fin.GN",
"stat.ML"
] |
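The shuffling-based feature selection component can be sketched as permutation importance: shuffle one feature at a time and measure the drop in validation score (toy data and model below, not the DoubleEnsemble pipeline):

```python
# Sketch of the shuffling-based feature selection idea as permutation
# importance: shuffle one feature at a time and measure the drop in validation
# score. Toy data and model, not the DoubleEnsemble pipeline.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(3)
X = rng.normal(size=(2000, 5))
y = 2 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.1, 2000)  # features 2-4 are noise

model = GradientBoostingRegressor().fit(X[:1500], y[:1500])
base = model.score(X[1500:], y[1500:])

for j in range(X.shape[1]):
    Xs = X[1500:].copy()
    rng.shuffle(Xs[:, j])                    # destroy feature j's information
    print(f"feature {j}: ablation impact = {base - model.score(Xs, y[1500:]):.4f}")
```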
In this work, we propose to employ information-geometric tools to optimize a
graph neural network architecture such as the graph convolutional networks.
More specifically, we develop optimization algorithms for the graph-based
semi-supervised learning by employing the natural gradient information in the
optimization process. This allows us to efficiently exploit the geometry of the
underlying statistical model or parameter space for optimization and inference.
To the best of our knowledge, this is the first work that has utilized the
natural gradient for the optimization of graph neural networks that can be
extended to other semi-supervised problems. Efficient computational algorithms
are developed and extensive numerical studies are conducted to demonstrate the
superior performance of our algorithms over existing algorithms such as ADAM
and SGD. | [
"cs.LG",
"stat.ML"
] |
Salient object detection plays an important part in a vision system to detect
important regions. Convolutional neural network (CNN) based methods directly
train their models with large-scale datasets, but which features are crucial
for saliency remains an open question. In this paper, we establish a novel bottom-up
feature named convex hull overlap (CHO), combined with appearance contrast
features, to detect salient objects. The CHO feature is a kind of enhanced Gestalt
cue: psychologists believe that surroundedness reflects the overlap relationship
between objects, and an object lying on top of others attracts attention. Our
method differs significantly from earlier works in two respects: (1) we design a
hand-crafted feature for salient object detection, so our model does not need to
be trained on large-scale datasets; (2) previous works focus only on appearance
features, while our CHO feature bridges the gap between spatial object
coverage and object saliency. Our experiments on a large number of public
datasets have obtained very positive results. | [
"cs.CV",
"cs.AI"
] |
Recent advances in deep learning have significantly pushed the
state-of-the-art in photorealistic video animation given a single image. In
this paper, we extrapolate those advances to the 3D domain, by studying 3D
image-to-video translation with a particular focus on 4D facial expressions.
Although 3D facial generative models have been widely explored during the past
years, 4D animation remains relatively unexplored. To this end, in this study
we employ a deep mesh encoder-decoder-like architecture to synthesize realistic
high resolution facial expressions by using a single neutral frame along with
an expression identification. In addition, processing 3D meshes remains a
non-trivial task compared to data that live on grid-like structures, such as
images. Given the recent progress in mesh processing with graph convolutions,
we make use of a recently introduced learnable operator which acts directly on
the mesh structure by taking advantage of local vertex orderings. In order to
generalize to 4D facial expressions across subjects, we trained our model using
a high resolution dataset with 4D scans of six facial expressions from 180
subjects. Experimental results demonstrate that our approach preserves the
subject's identity information even for unseen subjects and generates high
quality expressions. To the best of our knowledge, this is the first study
tackling the problem of 4D facial expression synthesis. | [
"cs.CV"
] |
We use multilayer Long Short Term Memory (LSTM) networks to learn
representations of video sequences. Our model uses an encoder LSTM to map an
input sequence into a fixed length representation. This representation is
decoded using single or multiple decoder LSTMs to perform different tasks, such
as reconstructing the input sequence, or predicting the future sequence. We
experiment with two kinds of input sequences - patches of image pixels and
high-level representations ("percepts") of video frames extracted using a
pretrained convolutional net. We explore different design choices such as
whether the decoder LSTMs should condition on the generated output. We analyze
the outputs of the model qualitatively to see how well the model can
extrapolate the learned video representation into the future and into the past.
We try to visualize and interpret the learned features. We stress test the
model by running it on longer time scales and on out-of-domain data. We further
evaluate the representations by finetuning them for a supervised learning
problem - human action recognition on the UCF-101 and HMDB-51 datasets. We show
that the representations help improve classification accuracy, especially when
there are only a few training examples. Even models pretrained on unrelated
datasets (300 hours of YouTube videos) can help action recognition performance. | [
"cs.LG",
"cs.CV",
"cs.NE"
] |
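A compact sketch of the encoder-decoder LSTM (PyTorch). The dimensions, the unconditioned decoder input, and reverse-order reconstruction are simplifying choices, not the paper's exact configuration.

```python
# Compact sketch of the LSTM sequence autoencoder (PyTorch). Dimensions, the
# unconditioned decoder input, and reverse-order reconstruction are simplifying
# choices, not the paper's exact configuration.
import torch
import torch.nn as nn

class SeqAutoencoder(nn.Module):
    def __init__(self, d_in=64, d_hid=128):
        super().__init__()
        self.enc = nn.LSTM(d_in, d_hid, batch_first=True)
        self.dec = nn.LSTM(d_in, d_hid, batch_first=True)
        self.out = nn.Linear(d_hid, d_in)

    def forward(self, x):
        _, state = self.enc(x)                # fixed-length summary of the sequence
        zeros = torch.zeros_like(x)           # decoder not conditioned on outputs
        h, _ = self.dec(zeros, state)
        return self.out(h).flip(1)            # reconstruct in reverse order

x = torch.randn(4, 16, 64)                    # (batch, time, features)
recon = SeqAutoencoder()(x)
print(nn.functional.mse_loss(recon, x).item())
```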
Learning functions on point clouds has applications in many fields, including
computer vision, computer graphics, physics, and chemistry. Recently, there has
been a growing interest in neural architectures that are invariant or
equivariant to all three shape-preserving transformations of point clouds:
translation, rotation, and permutation.
In this paper, we present a first study of the approximation power of these
architectures. We first derive two sufficient conditions for an equivariant
architecture to have the universal approximation property, based on a novel
characterization of the space of equivariant polynomials. We then use these
conditions to show that two recently suggested models are universal, and to
devise two other novel universal architectures. | [
"cs.LG",
"cs.CG"
] |
A kitchen robot needs to understand its cooking environment in order to
carry out cooking activities, but object state detection has not been
studied as thoroughly as object detection. In this paper, we propose a
deep learning approach to identify different cooking states from images for a
kitchen robot. In our research, we particularly investigate the performance of
the Inception architecture and propose a modified architecture based on the
Inception model to classify different cooking states. The model is analyzed
thoroughly with respect to different layers and optimizers. Experimental results
on a cooking dataset demonstrate that the proposed model can be a potential solution to the
cooking state recognition problem. | [
"cs.CV"
] |
In this paper, we propose FairNN, a neural network that performs joint feature
representation and classification for fairness-aware learning. Our approach
optimizes a multi-objective loss function which (a) learns a fair
representation by suppressing protected attributes, (b) maintains the
information content by minimizing a reconstruction loss, and (c) allows for
solving a classification task in a fair manner by minimizing the classification
error and respecting an equalized-odds-based fairness regularizer. Our
experiments on a variety of datasets demonstrate that such a joint approach is
superior to separate treatment of unfairness in representation learning or
supervised learning. Additionally, our regularizers can be adaptively weighted
to balance the different components of the loss function, thus allowing for a
very general framework for conjoint fair representation learning and decision
making. | [
"cs.LG",
"stat.ML"
] |
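A hedged sketch of such a three-part loss, with a simple equalized-odds-style penalty (the mean prediction gap between protected groups, conditioned on the true label); this is a simplification, not the paper's exact regularizer.

```python
# Hedged sketch of a three-part fairness loss (PyTorch): reconstruction +
# classification + a simple equalized-odds-style gap penalty. The penalty
# (mean prediction gap between groups, per true label) is a simplification.
import torch
import torch.nn.functional as F

def fair_loss(x, x_hat, logits, y, group, lam=(1.0, 1.0, 0.5)):
    rec = F.mse_loss(x_hat, x)
    clf = F.binary_cross_entropy_with_logits(logits, y)
    p = torch.sigmoid(logits)
    gaps = []
    for label in (0.0, 1.0):                  # equalized odds conditions on y
        m0 = (y == label) & (group == 0)
        m1 = (y == label) & (group == 1)
        if m0.any() and m1.any():
            gaps.append((p[m0].mean() - p[m1].mean()).abs())
    fair = torch.stack(gaps).mean() if gaps else torch.tensor(0.0)
    return lam[0] * rec + lam[1] * clf + lam[2] * fair

x = torch.randn(32, 10)
x_hat = x + 0.1 * torch.randn(32, 10)          # pretend autoencoder output
logits, y = torch.randn(32), torch.randint(0, 2, (32,)).float()
group = torch.randint(0, 2, (32,))
print(fair_loss(x, x_hat, logits, y, group))
```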
The state-of-the-art method for automatically segmenting white matter bundles
in diffusion-weighted MRI is tractography in conjunction with streamline
cluster selection. This process involves long chains of processing steps which
are not only computationally expensive but also complex to set up and tedious
with respect to quality control. Direct bundle segmentation methods treat the
task as a traditional image segmentation problem. While they so far did not
deliver competitive results, they can potentially mitigate many of the
mentioned issues. We present a novel supervised approach for direct tract
segmentation that shows major performance gains. It builds upon a stacked U-Net
architecture which is trained on manual bundle segmentations from Human
Connectome Project subjects. We evaluate our approach \textit{in vivo} as well
as \textit{in silico} using the ISMRM 2015 Tractography Challenge phantom
dataset. We achieve human segmentation performance and a major performance gain
over previous pipelines. We show how the learned spatial priors efficiently
guide the segmentation even at lower image qualities with little quality loss. | [
"cs.CV",
"q-bio.NC",
"q-bio.QM"
] |
Offline handwritten mathematical expression recognition is a challenging
task, because recognizing handwritten mathematical expressions poses two main
problems. On the one hand, one must correctly recognize the different
mathematical symbols. On the other hand, one must correctly recognize the
two-dimensional structure present in mathematical expressions.
Inspired by recent work in deep learning, a new neural network model that
combines a Multi-Scale convolutional neural network (CNN) with an Attention
recurrent neural network (RNN) is proposed to identify two-dimensional
handwritten mathematical expressions as one-dimensional LaTeX sequences. As a
result, the model proposed in the present work achieves a word error rate (WER)
of 25.715% and an expression recognition rate (ExpRate) of 28.216%. | [
"cs.CV",
"cs.LG"
] |
Image-to-Image (I2I) translation is an active research topic in academia, and it
has also been applied in industry for tasks like image synthesis,
super-resolution, and colorization. However, traditional I2I translation
methods train on data from two or more domains jointly. This requires large
computational resources. Moreover, the results are of lower quality and
contain many more artifacts. The training process can be unstable when the
data in different domains are not balanced, and mode collapse is more likely
to happen. We propose a new I2I translation method that generates a new model
in the target domain via a series of model transformations on a pre-trained
StyleGAN2 model in the source domain. After that, we propose an inversion
method to achieve the conversion between an image and its latent vector. By
feeding the latent vector into the generated model, we can perform I2I
translation between the source domain and target domain. Both qualitative and
quantitative evaluations were conducted to prove that the proposed method can
achieve outstanding performance in terms of image quality, diversity and
semantic similarity to the input and reference images compared to
state-of-the-art works. | [
"cs.CV"
] |
Many real-world graphs involve different types of nodes and relations between
nodes, being heterogeneous by nature. The representation learning of
heterogeneous graphs (HGs) embeds the rich structure and semantics of such
graphs into a low-dimensional space and facilitates various data mining tasks,
such as node classification, node clustering, and link prediction. In this
paper, we propose a self-supervised method that learns HG representations by
relying on knowledge exchange and discovery among different HG structural
semantics (meta-paths). Specifically, by maximizing the mutual information of
meta-path representations, we promote meta-path information fusion and
consensus, and ensure that globally shared semantics are encoded. By extensive
experiments on node classification, node clustering, and link prediction tasks,
we show that the proposed self-supervision outperforms competing methods and
improves upon them by 1% and up to 10% across all tasks. | [
"cs.LG"
] |
Road detection is a critically important task for self-driving cars. By
employing LiDAR data, recent works have significantly improved the accuracy of
road detection. Relying on LiDAR sensors limits the wide application of those
methods when only cameras are available. In this paper, we propose a novel road
detection approach with RGB being the only input during inference.
Specifically, we exploit pseudo-LiDAR using depth estimation, and propose a
feature fusion network where RGB and learned depth information are fused for
improved road detection. To further optimize the network structure and improve
the efficiency of the network, we search for the network structure of the
feature fusion module using NAS techniques. Finally, aware that
generating pseudo-LiDAR from RGB via depth estimation introduces extra
computational costs and relies on depth estimation networks, we design a
modality distillation strategy and leverage it to further free our network from
these extra computational costs and dependencies during inference. The proposed
method achieves state-of-the-art performance on two challenging benchmarks,
KITTI and R2D. | [
"cs.CV",
"cs.RO"
] |
Understanding human motion behaviour is a critical task for several possible
applications like self-driving cars or social robots, and in general for all
those settings where an autonomous agent has to navigate inside a human-centric
environment. This is non-trivial because human motion is inherently
multi-modal: given a history of human motion paths, there are many plausible
ways by which people could move in the future. Additionally, people's activities
are often driven by goals, e.g. reaching particular locations or interacting
with the environment. We address the aforementioned aspects by proposing a new
recurrent generative model that considers both single agents' future goals and
interactions between different agents. The model exploits a double
attention-based graph neural network to collect information about the mutual
influences among different agents and to integrate it with data about agents'
possible future objectives. Our proposal is general enough to be applied to
different scenarios: the model achieves state-of-the-art results in both urban
environments and also in sports applications. | [
"cs.CV",
"cs.LG"
] |
Assessing aesthetic preference is a fundamental task related to human
cognition. It can also contribute to various practical applications such as
image creation for online advertisements. Despite the crucial influence of image
quality, auxiliary information about ad images, such as tags and target subjects,
can also determine image preference. Existing studies mainly focus on images
and thus are less useful for advertisement scenarios where rich auxiliary data
are available. Here we propose a modality fusion-based neural network that
evaluates the aesthetic preference of images with auxiliary information. Our
method fully utilizes auxiliary data by introducing multi-step modality fusion
using both conditional batch normalization-based low-level and attention-based
high-level fusion mechanisms, inspired by the findings from statistical
analyses on real advertisement data. Our approach achieved state-of-the-art
performance on the AVA dataset, a widely used dataset for aesthetic assessment.
Besides, the proposed method is evaluated on large-scale real-world
advertisement image data with rich auxiliary attributes, providing promising
preference prediction results. Through extensive experiments, we investigate
how image and auxiliary information together influence click-through rate. | [
"cs.LG",
"cs.IR"
] |
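The low-level fusion step based on conditional batch normalization can be sketched as follows: embeddings of the auxiliary attributes predict the scale and shift applied to normalized image features (all sizes below are placeholders):

```python
# Sketch of the conditional-batch-normalization fusion step (PyTorch):
# auxiliary-attribute embeddings predict the scale and shift applied to
# normalized image features. All sizes are placeholders.
import torch
import torch.nn as nn

class ConditionalBN(nn.Module):
    def __init__(self, n_feats, d_aux):
        super().__init__()
        self.bn = nn.BatchNorm2d(n_feats, affine=False)
        self.gamma = nn.Linear(d_aux, n_feats)   # scale from auxiliary data
        self.beta = nn.Linear(d_aux, n_feats)    # shift from auxiliary data

    def forward(self, x, aux):
        g = self.gamma(aux).unsqueeze(-1).unsqueeze(-1)
        b = self.beta(aux).unsqueeze(-1).unsqueeze(-1)
        return g * self.bn(x) + b

x = torch.randn(8, 64, 14, 14)                   # image feature map
aux = torch.randn(8, 32)                         # auxiliary-attribute embedding
print(ConditionalBN(64, 32)(x, aux).shape)       # torch.Size([8, 64, 14, 14])
```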
Graph neural networks (GNNs) are powerful machine learning models for various
graph learning tasks. Recently, the limitations of the expressive power of
various GNN models have been revealed. For example, GNNs cannot distinguish
some non-isomorphic graphs and they cannot learn efficient graph algorithms. In
this paper, we demonstrate that GNNs become powerful just by adding a random
feature to each node. We prove that the random features enable GNNs to learn
almost optimal polynomial-time approximation algorithms for the minimum
dominating set problem and maximum matching problem in terms of approximation
ratios. The main advantage of our method is that it can be combined with
off-the-shelf GNN models with slight modifications. Through experiments, we
show that the addition of random features enables GNNs to solve various
problems that normal GNNs, including the graph convolutional networks (GCNs)
and graph isomorphism networks (GINs), cannot solve. | [
"cs.LG",
"stat.ML"
] |
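The core trick is nearly a one-liner: append an i.i.d. random value to every node's feature vector before running any off-the-shelf GNN (the GNN itself is omitted in this sketch):

```python
# The core trick in one function: append an i.i.d. random value to every
# node's feature vector before running any off-the-shelf GNN (omitted here).
import numpy as np

def add_random_feature(node_feats, seed=None):
    rng = np.random.default_rng(seed)
    r = rng.uniform(size=(node_feats.shape[0], 1))  # fresh per-node randomness
    return np.concatenate([node_feats, r], axis=1)

X = np.ones((5, 3))                 # 5 nodes a message-passing GNN cannot tell apart
print(add_random_feature(X))        # now every node carries a unique feature
```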
Recent studies have revealed the mathematical connection between deep neural
networks (DNNs) and dynamical systems. However, the fundamental principle of
DNNs has not been fully characterized with dynamical systems in terms of
optimization and generalization. To this end, we build a connection between DNNs
and the continuity equation, in which the measure is conserved, to model the
forward propagation process of a DNN, which has not been addressed before. A DNN
learns the transformation of the input distribution to the output one. However,
in the measure space, there are infinitely many curves connecting two
distributions. Which one leads to good optimization and generalization for a
DNN? By drawing on optimal transport theory, we find that a DNN with weight
decay attempts to learn the geodesic curve in the Wasserstein space, which is
induced by the optimal transport map. Compared with a plain network, ResNet is a
better approximation to the geodesic curve, which explains why ResNet can be
optimized and generalizes better. Numerical experiments show that the data
tracks of both the plain network and ResNet tend to be line-shaped in terms of
the line-shape score (LSS), and the map learned by ResNet is closer to the
optimal transport map in terms of the optimal transport score (OTS). In summary,
we conclude that a mathematical principle of deep
learning is to learn the geodesic curve in the Wasserstein space; and deep
learning is a great engineering realization of continuous transformation in
high-dimensional space. | [
"cs.LG",
"stat.ML"
] |
End-to-end deep reinforcement learning has enabled agents to learn with
little preprocessing by humans. However, it is still difficult to learn stably
and efficiently because the learning method usually uses a nonlinear function
approximation. Neural Episodic Control (NEC), which has been proposed in order
to improve sample efficiency, is able to learn stably by estimating action
values using a non-parametric method. In this paper, we propose an architecture
that incorporates random projection into NEC to train with more stability. In
addition, we verify the effectiveness of our architecture on five Atari
games. The main idea is to reduce the number of parameters that have to be learned
by replacing neural networks with random projection in order to reduce
dimensions while keeping the learning end-to-end. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
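The dimensionality-reduction step can be sketched as a fixed Gaussian random projection in the spirit of Johnson-Lindenstrauss embeddings (the sizes are arbitrary, and NEC's episodic memory lookup itself is not shown):

```python
# Sketch of the dimensionality-reduction step: a fixed Gaussian random
# projection in the spirit of Johnson-Lindenstrauss embeddings. Sizes are
# arbitrary; NEC's episodic memory lookup itself is not shown.
import numpy as np

rng = np.random.default_rng(4)
d_in, d_key = 512, 64
P = rng.normal(0, 1 / np.sqrt(d_key), size=(d_in, d_key))  # fixed, never trained

h = rng.normal(size=(10, d_in))     # e.g. CNN activations for 10 observed states
keys = h @ P                        # (10, 64) compact keys for episodic memory
print(keys.shape)
```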
Training (source) domain bias affects state-of-the-art object detectors, such
as Faster R-CNN, when applied to new (target) domains. To alleviate this
problem, researchers proposed various domain adaptation methods to improve
object detection results in the cross-domain setting, e.g. by translating
images with ground-truth labels from the source domain to the target domain
using Cycle-GAN. On top of combining Cycle-GAN transformations and self-paced
learning in a smart and efficient way, in this paper, we propose a novel
self-paced algorithm that learns from easy to hard. Our method is simple and
effective, without any overhead during inference. It uses only pseudo-labels
for samples taken from the target domain, i.e. the domain adaptation is
unsupervised. We conduct experiments on four cross-domain benchmarks, showing
better results than the state of the art. We also perform an ablation study
demonstrating the utility of each component in our framework. Additionally, we
study the applicability of our framework to other object detectors.
Furthermore, we compare our difficulty measure with other measures from the
related literature, proving that it yields superior results and that it
correlates well with the performance metric. | [
"cs.CV",
"cs.LG"
] |
Current neural architecture search (NAS) strategies focus only on finding a
single, good, architecture. They offer little insight into why a specific
network is performing well, or how we should modify the architecture if we want
further improvements. We propose a Bayesian optimisation (BO) approach for NAS
that combines the Weisfeiler-Lehman graph kernel with a Gaussian process
surrogate. Our method optimises the architecture in a highly data-efficient
manner: it is capable of capturing the topological structures of the
architectures and is scalable to large graphs, thus making the high-dimensional
and graph-like search spaces amenable to BO. More importantly, our method
affords interpretability by discovering useful network features and their
corresponding impact on the network performance. Indeed, we demonstrate
empirically that our surrogate model is capable of identifying useful motifs
which can guide the generation of new architectures. We finally show that our
method outperforms existing NAS approaches to achieve the state of the art on
both closed- and open-domain search spaces. | [
"cs.LG",
"stat.ML"
] |
Region-based convolutional neural networks
(R-CNN)~\cite{fast_rcnn,faster_rcnn,mask_rcnn} have largely dominated object
detection. Operators defined on RoIs (Region of Interests) play an important
role in R-CNNs such as RoIPooling~\cite{fast_rcnn} and
RoIAlign~\cite{mask_rcnn}. They all only utilize information inside RoIs for
RoI prediction, even with their recent deformable
extensions~\cite{deformable_cnn}. Although surrounding context is well-known
for its importance in object detection, it has not yet been integrated into R-CNNs in
a flexible and effective way. Inspired by the auto-context
work~\cite{auto_context} and the multi-class object layout
work~\cite{nms_context}, this paper presents a generic context-mining RoI
operator (i.e., \textit{RoICtxMining}) seamlessly integrated in R-CNNs, and the
resulting object detection system is termed \textbf{Auto-Context R-CNN} which
is trained end-to-end. The proposed RoICtxMining operator is a simple yet
effective two-layer extension of the RoIPooling or RoIAlign operator. Centered
at an object-RoI, it creates a $3\times 3$ layout to mine contextual
information adaptively in the $8$ surrounding context regions on-the-fly.
Within each of the $8$ context regions, a context-RoI is mined in terms of
discriminative power and its RoIPooling / RoIAlign features are concatenated
with the object-RoI for final prediction. \textit{The proposed Auto-Context
R-CNN is robust to occlusion and small objects, and shows promising
resilience to adversarial attacks without being adversarially trained.} In
experiments, it is evaluated using RoIPooling as the backbone and shows
competitive results on Pascal VOC, Microsoft COCO, and KITTI datasets
(including $6.9\%$ mAP improvements over the R-FCN~\cite{rfcn} method on COCO
\textit{test-dev} dataset and the first place on both KITTI pedestrian and
cyclist detection as of this submission). | [
"cs.CV"
] |
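The $3\times 3$ layout itself is simple geometry: given an object RoI, the eight surrounding context regions are same-sized translated boxes (image-boundary clipping and the per-region context-RoI mining are omitted in this sketch):

```python
# Geometry-only sketch of the 3x3 context layout: the eight context regions
# are same-sized boxes translated around the object RoI. Clipping to the image
# and the per-region context-RoI mining are omitted.
import numpy as np

def context_regions(roi):
    x1, y1, x2, y2 = roi
    w, h = x2 - x1, y2 - y1
    boxes = []
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dx == 0 and dy == 0:
                continue                 # the center cell is the object RoI itself
            boxes.append([x1 + dx * w, y1 + dy * h, x2 + dx * w, y2 + dy * h])
    return np.array(boxes)               # (8, 4) surrounding context boxes

print(context_regions([100, 100, 150, 180]))
```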
We provide a novel approach for aligning geometric models using a dual
graph structure where local features are mapping probabilities. Alignment of
non-rigid structures is one of the most challenging computer vision tasks due
to the high number of unknowns needed to model the correspondence. We have seen
a leap forward using DNN models in template alignment and functional maps, but
those methods fail for inter-class alignment where nonisometric deformations
exist. Here we propose to rethink this task and use unrolling concepts on a
dual graph structure - one for a forward map and one for a backward map, where
the features are pulled back matching probabilities from the target into the
source. We report state-of-the-art results on stretchable domain alignment with
a rapid and stable solution for meshes and point clouds. | [
"cs.CV",
"cs.LG"
] |
Anomaly detection is concerned with identifying data patterns that deviate
remarkably from the expected behaviour. This is an important research problem,
due to its broad set of application domains, from data analysis to e-health,
cybersecurity, predictive maintenance, fault prevention, and industrial
automation. Herein, we review state-of-the-art methods that may be employed to
detect anomalies in the specific area of sensor systems, which poses hard
challenges in terms of information fusion, data volumes, data speed, and
network/energy efficiency, to mention but the most pressing ones. In this
context, anomaly detection is a particularly hard problem, given the need to
find computing-energy accuracy trade-offs in a constrained environment. We
taxonomize methods ranging from conventional techniques (statistical methods,
time-series analysis, signal processing, etc.) to data-driven techniques
(supervised learning, reinforcement learning, deep learning, etc.). We also
look at the impact that different architectural environments (Cloud, Fog, Edge)
can have on the sensors ecosystem. The review points to the most promising
intelligent-sensing methods, and pinpoints a set of interesting open issues and
challenges. | [
"cs.LG"
] |
1-bit deep neural networks (DNNs), in which both the activations and weights
are binarized, are attracting more and more attention due to their high
computational efficiency and low memory requirements. However, the drawback of
a large accuracy drop also restricts their application. In this paper, we
propose a novel Targeted Acceleration and Compression (TAC) framework to
improve the performance of 1-bit deep neural networks. We consider that the
acceleration and compression effects of binarizing fully connected layers are
not sufficient to compensate for the accuracy loss caused by it. In the proposed
framework, the convolutional and fully connected layers are separated and
optimized individually. For the convolutional layers, both the activations
and weights are binarized. For the fully connected layers, the binarization
operation is replaced by network pruning and low-bit quantization. The
proposed framework is evaluated on the CIFAR-10, CIFAR-100 and ImageNet
(ILSVRC-12) datasets, and experimental results show that the proposed TAC can
significantly improve the accuracy of 1-bit deep neural networks and
outperforms the state of the art by more than 6 percentage points. | [
"cs.CV"
] |
The large volume of video content and high viewing frequency demand automatic
video summarization algorithms, of which a key property is the capability of
modeling diversity. If videos are lengthy like hours-long egocentric videos, it
is necessary to track the temporal structures of the videos and enforce local
diversity. The local diversity refers to that the shots selected from a short
time duration are diverse but visually similar shots are allowed to co-exist in
the summary if they appear far apart in the video. In this paper, we propose a
novel probabilistic model, built upon SeqDPP, to dynamically control the time
span of a video segment upon which the local diversity is imposed. In
particular, we enable SeqDPP to learn to automatically infer how local the
local diversity is supposed to be from the input video. The resulting model is
extremely involved to train by the hallmark maximum likelihood estimation
(MLE), which further suffers from the exposure bias and non-differentiable
evaluation metrics. To tackle these problems, we instead devise a reinforcement
learning algorithm for training the proposed model. Extensive experiments
verify the advantages of our model and the new learning algorithm over
MLE-based methods. | [
"cs.CV"
] |
Scalability in terms of object density in a scene is a primary challenge in
unsupervised sequential object-oriented representation learning. Most of the
previous models have been shown to work only on scenes with a few objects. In
this paper, we propose SCALOR, a probabilistic generative world model for
learning SCALable Object-oriented Representation of a video. With the proposed
spatially-parallel attention and proposal-rejection mechanisms, SCALOR can deal
with orders of magnitude larger numbers of objects compared to the previous
state-of-the-art models. Additionally, we introduce a background module that
allows SCALOR to model complex dynamic backgrounds as well as many foreground
objects in the scene. We demonstrate that SCALOR can deal with crowded scenes
containing up to a hundred objects while jointly modeling complex dynamic
backgrounds. Importantly, SCALOR is the first unsupervised object
representation model shown to work for natural scenes containing several tens
of moving objects. | [
"cs.LG",
"stat.ML"
] |
This paper presents a novel approach for learning instance segmentation with
image-level class labels as supervision. Our approach generates pseudo instance
segmentation labels of training images, which are used to train a fully
supervised model. For generating the pseudo labels, we first identify confident
seed areas of object classes from attention maps of an image classification
model, and propagate them to discover the entire instance areas with accurate
boundaries. To this end, we propose IRNet, which estimates rough areas of
individual instances and detects boundaries between different object classes.
It thus enables us to assign instance labels to the seeds and to propagate them
within the boundaries so that the entire areas of instances can be estimated
accurately. Furthermore, IRNet is trained with inter-pixel relations on the
attention maps, thus no extra supervision is required. Our method with IRNet
achieves an outstanding performance on the PASCAL VOC 2012 dataset, surpassing
not only the previous state of the art trained with the same level of supervision,
but also some of previous models relying on stronger supervision. | [
"cs.CV",
"cs.LG"
] |
Many current behavior generation methods struggle to handle real-world
traffic situations as they do not scale well with complexity. However,
behaviors can be learned off-line using data-driven approaches. Especially,
reinforcement learning is promising as it implicitly learns how to behave
utilizing collected experiences. In this work, we combine policy-based
reinforcement learning with local optimization, synthesizing the
best of the two methodologies. The policy-based reinforcement learning
algorithm provides an initial solution and guiding reference for the
post-optimization. Therefore, the optimizer only has to compute a single
homotopy class, e.g.\ drive behind or in front of the other vehicle. By storing
the state-history during reinforcement learning, it can be used for constraint
checking and the optimizer can account for interactions. The post-optimization
additionally acts as a safety-layer and the novel method, thus, can be applied
in safety-critical applications. We evaluate the proposed method using
lane-change scenarios with a varying number of vehicles. | [
"cs.LG",
"cs.AI",
"cs.MA",
"cs.RO"
] |
Goal-directed Reinforcement Learning (RL) traditionally considers an agent
interacting with an environment, prescribing a real-valued reward to an agent
proportional to the completion of some goal. Goal-directed RL has seen large
gains in sample efficiency, due to the ease of reusing or generating new
experience by proposing goals. One approach, self-play, allows an agent to
"play" against itself by alternatively setting and accomplishing goals,
creating a learned curriculum through which an agent can learn to accomplish
progressively more difficult goals. However, self-play has been limited to goal
curriculum learning or learning progressively harder goals within a single
environment. Recent work on robotic agents has shown that varying the
environment during training, for example with domain randomization, leads to
more robust transfer. As a result, we extend the self-play framework to jointly
learn a goal and environment curriculum, leading to an approach that learns the
most fruitful domain randomization strategy with self-play. Our method,
Self-Supervised Active Domain Randomization (SS-ADR), generates a coupled
goal-task curriculum, where agents learn through progressively more difficult
tasks and environment variations. By encouraging the agent to try tasks that
are just outside of its current capabilities, SS-ADR builds a domain
randomization curriculum that enables state-of-the-art results on
various sim2real transfer tasks. Our results show that a curriculum of
co-evolving the environment difficulty together with the difficulty of goals
set in each environment provides practical benefits in the goal-directed tasks
tested. | [
"cs.LG",
"cs.AI",
"cs.RO",
"stat.ML"
] |
Policy gradient algorithms are among the best candidates for the much
anticipated application of reinforcement learning to real-world control tasks,
such as the ones arising in robotics. However, the trial-and-error nature of
these methods introduces safety issues whenever the learning phase itself must
be performed on a physical system. In this paper, we address a specific safety
formulation, where danger is encoded in the reward signal and the learning
agent is constrained to never worsen its performance. By studying actor-only
policy gradient from a stochastic optimization perspective, we establish
improvement guarantees for a wide class of parametric policies, generalizing
existing results on Gaussian policies. This, together with novel upper bounds
on the variance of policy gradient estimators, allows us to identify those
meta-parameter schedules that guarantee monotonic improvement with high
probability. The two key meta-parameters are the step size of the parameter
updates and the batch size of the gradient estimators. By a joint, adaptive
selection of these meta-parameters, we obtain a safe policy gradient algorithm. | [
"cs.LG",
"stat.ML"
] |
End-to-end training from scratch of current deep architectures for new
computer vision problems would require Imagenet-scale datasets, and this is not
always possible. In this paper we present a method that is able to take
advantage of freely available multi-modal content to train computer vision
algorithms without human supervision. We put forward the idea of performing
self-supervised learning of visual features by mining a large scale corpus of
multi-modal (text and image) documents. We show that discriminative visual
features can be learnt efficiently by training a CNN to predict the semantic
context in which a particular image is more likely to appear as an
illustration. For this we leverage the hidden semantic structures discovered in
the text corpus with a well-known topic modeling technique. Our experiments
demonstrate state of the art performance in image classification, object
detection, and multi-modal retrieval compared to recent self-supervised or
natural-supervised approaches. | [
"cs.CV"
] |
Understanding how goal states control behavior is a question ripe for
interrogation by new methods from machine learning. These methods require large
and labeled datasets to train models. To annotate a large-scale image dataset
with observed search fixations, we collected 16,184 fixations from people
searching for either microwaves or clocks in a dataset of 4,366 images
(MS-COCO). We then used this behaviorally-annotated dataset and the machine
learning method of Inverse-Reinforcement Learning (IRL) to learn
target-specific reward functions and policies for these two target goals.
Finally, we used these learned policies to predict the fixations of 60 new
behavioral searchers (clock = 30, microwave = 30) in a disjoint test dataset of
kitchen scenes depicting both a microwave and a clock (thus controlling for
differences in low-level image contrast). We found that the IRL model predicted
behavioral search efficiency and fixation-density maps using multiple metrics.
Moreover, reward maps from the IRL model revealed target-specific patterns that
suggest, not just attention guidance by target features, but also guidance by
scene context (e.g., fixations along walls in the search of clocks). Using
machine learning and the psychologically-meaningful principle of reward, it is
possible to learn the visual features used in goal-directed attention control. | [
"cs.CV"
] |
Retrieval networks are essential for searching and indexing. Compared to
classification networks, attention visualization for retrieval networks is
hardly studied. We formulate attention visualization as a constrained
optimization problem. We leverage the unit L2-Norm constraint as an attention
filter (L2-CAF) to localize attention in both classification and retrieval
networks. Unlike recent literature, our approach requires neither architectural
changes nor fine-tuning. Thus, a pre-trained network's performance is never
undermined.
L2-CAF is quantitatively evaluated using weakly supervised object
localization. State-of-the-art results are achieved on classification networks.
For retrieval networks, significant improvement margins are achieved over a
Grad-CAM baseline. Qualitative evaluation demonstrates how the L2-CAF
visualizes attention per frame for a recurrent retrieval network. Further
ablation studies highlight the computational cost of our approach and compare
L2-CAF with other feasible alternatives. Code available at
https://bit.ly/3iDBLFv | [
"cs.CV"
] |
Place recognition is one of the most fundamental topics in computer vision
and robotics communities, where the task is to accurately and efficiently
recognize the location of a given query image. Despite years of wisdom
accumulated in this field, place recognition still remains an open problem due
to the various ways in which the appearance of real-world places may differ.
This paper presents an overview of the place recognition literature. Since
condition-invariant and viewpoint-invariant features are essential for a
long-term robust visual place recognition system, we start with the traditional
image description methodologies developed in the past, which exploit techniques
from the image retrieval field. Recently, rapid advances in related fields such
as object detection and image classification have inspired a new way to
improve visual place recognition systems, namely convolutional neural networks
(CNNs). We then introduce recent progress on CNN-based visual place recognition
systems that automatically learn better image representations for places.
Finally, we close with a discussion and directions for future work in place
recognition. | [
"cs.CV"
] |
Most modern multiple object tracking (MOT) systems follow the
tracking-by-detection paradigm, consisting of a detector followed by a method
for associating detections into tracks. There is a long history in tracking of
combining motion and appearance features to provide robustness to occlusions
and other challenges, but typically this comes with the trade-off of a more
complex and slower implementation. Recent successes on popular 2D tracking
benchmarks indicate that top-scores can be achieved using a state-of-the-art
detector and relatively simple associations relying on single-frame spatial
offsets -- notably outperforming contemporary methods that leverage learned
appearance features to help re-identify lost tracks. In this paper, we propose
an efficient joint detection and tracking model named DEFT, or "Detection
Embeddings for Tracking." Our approach relies on an appearance-based object
matching network jointly-learned with an underlying object detection network.
An LSTM is also added to capture motion constraints. DEFT has comparable
accuracy and speed to the top methods on 2D online tracking leaderboards while
having significant advantages in robustness when applied to more challenging
tracking data. DEFT raises the bar on the nuScenes monocular 3D tracking
challenge, more than doubling the performance of the previous top method. Code
is publicly available. | [
"cs.CV"
] |
We present Megaverse, a new 3D simulation platform for reinforcement learning
and embodied AI research. The efficient design of our engine enables
physics-based simulation with high-dimensional egocentric observations at more
than 1,000,000 actions per second on a single 8-GPU node. Megaverse is up to
70x faster than DeepMind Lab in fully-shaded 3D scenes with interactive
objects. We achieve this high simulation performance by leveraging batched
simulation, thereby taking full advantage of the massive parallelism of modern
GPUs. We use Megaverse to build a new benchmark that consists of several
single-agent and multi-agent tasks covering a variety of cognitive challenges.
We evaluate model-free RL on this benchmark to provide baselines and facilitate
future research. The source code is available at https://www.megaverse.info | [
"cs.LG",
"cs.AI"
] |
Learning generic representations with deep networks requires massive training
samples and significant computer resources. To learn a new specific task, an
important issue is to transfer the generic teacher's representation to a
student network. In this paper, we propose to use a metric between
representations that is based on a functional view of neurons. We use optimal
transport to quantify the match between two representations, yielding a
distance that embeds some invariances inherent to the representation of deep
networks. This distance defines a regularizer promoting the similarity of the
student's representation with that of the teacher. Our approach can be used in
any learning context where representation transfer is applicable. We experiment
here on two standard settings: inductive transfer learning, where the teacher's
representation is transferred to a student network of the same architecture for a
new related task, and knowledge distillation, where the teacher's
representation is transferred to a student of simpler architecture for the same
task (model compression). Our approach also lends itself to solving new
learning problems; we demonstrate this by showing how to directly transfer the
teacher's representation to a simpler architecture student for a new related
task. | [
"cs.LG",
"stat.ML"
] |
There has been a substantial amount of research involving computer methods
and technology for the detection and recognition of diabetic foot ulcers
(DFUs), but there is a lack of systematic comparisons of state-of-the-art deep
learning object detection frameworks applied to this problem. DFUC2020 provided
participants with a comprehensive dataset consisting of 2,000 images for
training and 2,000 images for testing. This paper summarises the results of
DFUC2020 by comparing the deep learning-based algorithms proposed by the
winning teams: Faster R-CNN, three variants of Faster R-CNN and an ensemble
method; YOLOv3; YOLOv5; EfficientDet; and a new Cascade Attention Network. For
each deep learning method, we provide a detailed description of model
architecture, parameter settings for training and additional stages including
pre-processing, data augmentation and post-processing. We provide a
comprehensive evaluation for each method. All the methods required a data
augmentation stage to increase the number of images available for training and
a post-processing stage to remove false positives. The best performance was
obtained from Deformable Convolution, a variant of Faster R-CNN, with a mean
average precision (mAP) of 0.6940 and an F1-Score of 0.7434. Finally, we
demonstrate that the ensemble method based on different deep learning methods
can enhance the F1-Score but not the mAP. | [
"cs.CV"
] |
This dissertation provides a comparative analysis that evaluates the
performance of several deep learning (DL) architectures on a large number of
time series datasets of different natures and for different applications. Two
fruitful research fields are discussed, strategically chosen to address
current cross-disciplinary research priorities attracting the interest of the
geodetic community. The first problem is ionospheric Total Electron Content
(TEC) modeling, which is an important issue
in many real-time Global Navigation Satellite System (GNSS) applications.
Reliable and fast knowledge about ionospheric variations becomes increasingly
important. GNSS users of single frequency receivers and satellite navigation
systems need accurate corrections to remove signal degradation effects caused
by the ionosphere. Ionospheric modeling using signal processing techniques is
the subject of discussion in the present contribution. The next problem under
discussion is energy disaggregation, which is an important issue for energy
efficiency and energy consumption awareness. Reliable and fast knowledge about
residential energy consumption at appliance level becomes increasingly
important nowadays and it is an important mitigation measure to prevent energy
wastage. Energy disaggregation, or non-intrusive load monitoring (NILM), is a
single channel blind source separation problem where the task is to estimate
the consumption of each electrical appliance given the total energy
consumption. For both problems, various deep learning (DL) models are proposed
that cover different aspects of the problem under study, and experimental
results indicate the proposed methods' superiority over the current state
of the art. | [
"cs.LG"
] |
Acute respiratory infections have epidemic and pandemic potential and thus
are being studied worldwide, albeit in many different contexts and study
formats. Predicting infection from symptom data is critical, though using
symptom data from varied studies in aggregate is challenging because the data
is collected in different ways. Accordingly, different symptom profiles could
be more predictive in certain studies, or even symptoms of the same name could
have different meanings in different contexts. We assess state-of-the-art
transfer learning methods for improving prediction of infection from symptom
data in multiple types of health care data ranging from clinical, to home-visit
as well as crowdsourced studies. We show interesting characteristics regarding
six different study types and their feature domains. Further, we demonstrate
that it is possible to use data collected from one study to predict infection
in another, at close to or better than using a single dataset for prediction on
itself. We also investigate in which conditions specific transfer learning and
domain adaptation methods may perform better on symptom data. This work has the
potential for broad applicability as we show how it is possible to transfer
learning from one public health study design to another, and data collected
from one study may be used for prediction of labels for another, even collected
through different study designs, populations and contexts. | [
"cs.LG",
"q-bio.PE",
"q-bio.QM",
"stat.ML"
] |
Japanese comics (called manga) are traditionally created in monochrome
format. In recent years, in addition to monochrome comics, full color comics, a
more attractive medium, have appeared. Unfortunately, color comics require
manual colorization, which incurs high labor costs. Although automatic
colorization methods have been recently proposed, most of them are designed for
illustrations, not for comics. Unlike illustrations, since comics are composed
of many consecutive images, the painting style must be consistent. To realize
consistent colorization, we propose here a semi-automatic colorization method
based on generative adversarial networks (GAN); the method learns the painting
style of a specific comic from a small amount of training data. The proposed
method takes a pair of a screen tone image and a flat colored image as input,
and outputs a colorized image. Experiments show that the proposed method
achieves better performance than the existing alternatives. | [
"cs.CV",
"eess.IV"
] |
In this work we propose for the first time a transformer-based framework for
unsupervised representation learning of multivariate time series. Pre-trained
models can be potentially used for downstream tasks such as regression and
classification, forecasting and missing value imputation. By evaluating our
models on several benchmark datasets for multivariate time series regression
and classification, we show that not only does our modeling approach represent
the most successful method employing unsupervised learning of multivariate time
series presented to date, but also that it exceeds the current state-of-the-art
performance of supervised methods; it does so even when the number of training
samples is very limited, while offering computational efficiency. Finally, we
demonstrate that unsupervised pre-training of our transformer models offers a
substantial performance benefit over fully supervised learning, even without
leveraging additional unlabeled data, i.e., by reusing the same data samples
through the unsupervised objective. | [
"cs.LG",
"cs.AI"
] |
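One common form of such an unsupervised objective is masked-value reconstruction: hide random timesteps of a multivariate series and train the encoder to recover them. The minimal PyTorch sketch below illustrates the idea; dimensions, masking rate, and the toy data are assumptions, not the paper's exact configuration.

import torch
import torch.nn as nn

# Minimal sketch of a masked-reconstruction pre-training step for a
# Transformer encoder over multivariate time series.
B, T, D, H = 8, 64, 3, 128                    # batch, length, variables, model width
proj_in, proj_out = nn.Linear(D, H), nn.Linear(H, D)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=H, nhead=4, batch_first=True),
    num_layers=2)

x = torch.randn(B, T, D)                      # toy multivariate time series
mask = torch.rand(B, T, 1) < 0.15             # hide ~15% of the timesteps
x_hat = proj_out(encoder(proj_in(x.masked_fill(mask, 0.0))))
loss = ((x_hat - x) ** 2)[mask.expand_as(x)].mean()  # MSE on masked positions only
loss.backward()
print(float(loss))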
We study the spatio-temporal prediction problem, which has attracted the
attention of many researchers due to its critical real-life applications. In
particular, we introduce a novel approach to this problem. Our approach is
based on the Hawkes process, which is a non-stationary and self-exciting point
process. We extend the formulations of a standard point process model that can
represent time-series data to represent spatio-temporal data. We model the
data as nonstationary in time and space. Furthermore, we partition the spatial
region we are working on into subregions via an adaptive decision tree and
model the source statistics in each subregion with individual but mutually
interacting point processes. We also provide a gradient based joint
optimization algorithm for the point process and decision tree parameters.
Thus, we introduce a model that can jointly infer the source statistics and an
adaptive partitioning of the spatial region. Finally, we provide experimental
results on real-life data, which show significant improvements due to space
adaptation and joint optimization over standard well-known methods in
the literature. | [
"stat.ML",
"cs.LG"
] |
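The temporal building block of the model above is the self-exciting Hawkes intensity. Below is a minimal sketch with an exponential kernel and toy parameters; the paper's spatial decision-tree partitioning and joint optimization are omitted.

import numpy as np

# Sketch of a 1D self-exciting Hawkes intensity with exponential kernel:
#   lambda(t) = mu + sum_{t_i < t} alpha * exp(-beta * (t - t_i))
mu, alpha, beta = 0.2, 0.8, 1.5               # toy parameters (alpha/beta < 1: stable)
events = np.array([1.0, 1.3, 2.7, 4.0])       # observed event times

def intensity(t, events):
    past = events[events < t]
    return mu + alpha * np.exp(-beta * (t - past)).sum()

# Exact log-likelihood on [0, T]: sum of log-intensities minus the compensator
# integral of lambda(t) over [0, T].
T = 5.0
log_lik = sum(np.log(intensity(t, events)) for t in events) \
          - mu * T - (alpha / beta) * (1 - np.exp(-beta * (T - events))).sum()
print(log_lik)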
Conditional image generation is effective for diverse tasks including
training data synthesis for learning-based computer vision. However, despite
the recent advances in generative adversarial networks (GANs), it is still a
challenging task to generate images with detailed conditioning on object
shapes. Existing methods for conditional image generation use category labels
and/or keypoints and give only limited control over object categories. In this
this work, we present SCGAN, an architecture to generate images with a desired
shape specified by an input normal map. The shape-conditioned image generation
task is achieved by explicitly modeling the image appearance via a latent
appearance vector. The network is trained using unpaired training samples of
real images and rendered normal maps. This approach enables us to generate
images of arbitrary object categories with the target shape and diverse image
appearances. We show the effectiveness of our method through both qualitative
and quantitative evaluation on training data generation tasks. | [
"cs.CV"
] |
Deep attention models have advanced the modelling of sequential data across
many domains. For language modelling in particular, the Transformer-XL -- a
Transformer augmented with a long-range memory of past activations -- has been
shown to be state-of-the-art across a variety of well-studied benchmarks. The
Transformer-XL incorporates a long-range memory at every layer of the network,
which renders its state to be thousands of times larger than RNN predecessors.
However, it is unclear whether this is necessary. We perform a set of
interventions to show that comparable performance can be obtained with 6X fewer
long range memories and better performance can be obtained by limiting the
range of attention in lower layers of the network. | [
"cs.LG",
"cs.CL",
"stat.ML"
] |
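The "limit the attention range in lower layers" intervention reduces, in essence, to an attention mask. A minimal sketch follows (the helper name and sizes are illustrative, not the paper's code):

import torch

# Additive attention mask that is causal and also forbids looking more than
# `window` tokens into the past. Pass it as attn_mask to an attention layer,
# with a small window in lower layers and a larger one higher up.
def local_causal_mask(T, window):
    i = torch.arange(T).unsqueeze(1)          # query positions
    j = torch.arange(T).unsqueeze(0)          # key positions
    allowed = (j <= i) & (j > i - window)     # causal and within the local window
    return torch.zeros(T, T).masked_fill(~allowed, float("-inf"))

print(local_causal_mask(T=6, window=3))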
We propose a 3D color point cloud processing pipeline to count apples on
individual apple trees in trellis structured orchards. Fruit counting at the
tree level requires separating trees, which is challenging in dense orchards.
We employ point clouds acquired from the leaf-off orchard in the winter period,
when the branch structure is visible, to delineate tree crowns. We localize
apples in point clouds acquired during the harvest period. Alignment of the two point
clouds enables mapping apple locations to the delineated winter cloud and
assigning each apple to its bearing tree. Our apple assignment method achieves
an accuracy rate higher than 95%. In addition to presenting a first proof of
feasibility, we also provide suggestions for further improvement on our apple
assignment pipeline. | [
"cs.CV"
] |
In the last few years, several works have tackled the problem of novel view
synthesis from stereo images or even from a single picture. However, previous
methods are computationally expensive, especially for high-resolution images. In
this paper, we address the problem of generating a multiplane image (MPI) from
a single high-resolution picture. We present the adaptive-MPI representation,
which allows rendering novel views with low computational requirements. To this
end, we propose an adaptive slicing algorithm that produces an MPI with a
variable number of image planes. We present a new lightweight CNN for depth
estimation, which is learned by knowledge distillation from a larger network.
Occluded regions in the adaptive-MPI are inpainted also by a lightweight CNN.
We show that our method is capable of producing high-quality predictions with
one order of magnitude less parameters compared to previous approaches. The
robustness of our method is evidenced on challenging pictures from the
Internet. | [
"cs.CV"
] |
With the rise in the employment of deep learning methods in safety-critical
scenarios, interpretability is more essential than ever before. Although many
different directions regarding interpretability have been explored for visual
modalities, time-series data has been neglected with only a handful of methods
tested due to their poor intelligibility. We approach the problem of
interpretability in a novel way by proposing TSInsight where we attach an
auto-encoder to the classifier with a sparsity-inducing norm on its output and
fine-tune it based on the gradients from the classifier and a reconstruction
penalty. TSInsight learns to preserve features that are important for
prediction by the classifier and suppresses those that are irrelevant, i.e., it
serves as a feature attribution method to boost interpretability. In contrast
to most other attribution frameworks, TSInsight is capable of generating both
instance-based and model-based explanations. We evaluated TSInsight along with
9 other commonly used attribution methods on 8 different time-series datasets
to validate its efficacy. Evaluation results show that TSInsight naturally
achieves output space contraction, therefore, is an effective tool for the
interpretability of deep time-series models. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
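The TSInsight objective combines three terms: the classification loss through the frozen classifier, a reconstruction penalty, and a sparsity-inducing norm on the auto-encoder output. Below is a minimal, hypothetical sketch of this loss with toy stand-ins for both networks and illustrative weighting coefficients.

import torch
import torch.nn as nn

# Sketch of a TSInsight-style fine-tuning step: an auto-encoder in front of a
# frozen classifier, trained with classification + reconstruction + L1 terms.
T = 128
autoencoder = nn.Sequential(nn.Linear(T, 32), nn.ReLU(), nn.Linear(32, T))
classifier = nn.Sequential(nn.Linear(T, 10))  # stands in for the trained model
for p in classifier.parameters():
    p.requires_grad_(False)                   # classifier stays fixed

x = torch.randn(16, T)
y = torch.randint(0, 10, (16,))
x_hat = autoencoder(x)
loss = (nn.functional.cross_entropy(classifier(x_hat), y)
        + 0.1 * nn.functional.mse_loss(x_hat, x)   # reconstruction penalty
        + 0.01 * x_hat.abs().mean())               # sparsity-inducing norm
loss.backward()
print(float(loss))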
We introduce a sampling perspective to tackle the challenging task of
training robust Reinforcement Learning (RL) agents. Leveraging the powerful
Stochastic Gradient Langevin Dynamics, we present a novel, scalable two-player
RL algorithm, which is a sampling variant of the two-player policy gradient
method. Our algorithm consistently outperforms existing baselines, in terms of
generalization across different training and testing conditions, on several
MuJoCo environments. Our experiments also show that, even for objective
functions that entirely ignore potential environmental shifts, our sampling
approach remains highly robust in comparison to standard RL algorithms. | [
"cs.LG",
"stat.ML"
] |
Recognizing car license plates in natural scene images is an important yet
still challenging task in realistic applications. Many existing approaches
perform well for license plates collected under constrained conditions, e.g.,
shooting in frontal and horizontal view-angles and under good lighting
conditions. However, their performance drops significantly in an unconstrained
environment that features rotation, distortion, occlusion, blurring, shading or
extreme dark or bright conditions. In this work, we propose a robust framework
for license plate recognition in the wild. It is composed of a tailored
CycleGAN model for license plate image generation and an elaborately designed
image-to-sequence network for plate recognition. On one hand, the CycleGAN
based plate generation engine alleviates the exhausting human annotation work.
A massive amount of training data can be obtained with a more balanced character
distribution and various shooting conditions, which helps to boost the
recognition accuracy to a large extent. On the other hand, the 2D attention-based
license plate recognizer with an Xception-based CNN encoder is capable of
recognizing license plates with different patterns under various scenarios
accurately and robustly. Without using any heuristic rules or post-processing,
our method achieves the state-of-the-art performance on four public datasets,
which demonstrates the generality and robustness of our framework. Moreover, we
released a new license plate dataset, named "CLPD", with 1200 images from all
31 provinces in mainland China. The dataset is available at:
https://github.com/wangpengnorman/CLPD_dataset. | [
"cs.CV"
] |
Visibility Graph (VG) transforms time series into graphs, facilitating signal
processing by advanced graph data mining algorithms. In this paper, based on
the classic Limited Penetrable Visibility Graph (LPVG) method, we propose a
novel nonlinear mapping method named Circular Limited Penetrable Visibility
Graph (CLPVG). Tests of the degree distribution and clustering coefficient on
graphs generated from typical time series validate that our CLPVG effectively
captures the important features of time series and has better anti-noise
ability than the traditional LPVG. The experiments on real-world
time-series datasets of radio signal and electroencephalogram (EEG) also
suggest that the structural features provided by CLPVG, rather than LPVG, are
more useful for time-series classification, leading to higher accuracy. And
this classification performance can be further enhanced through structural
feature expansion by adopting Subgraph Networks (SGN). All of these results
validate the effectiveness of our CLPVG model. | [
"cs.LG",
"eess.SP"
] |
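For reference, the classic LPVG criterion that CLPVG builds on can be stated in a few lines. The sketch below implements plain LPVG on a toy series (the circular mapping itself is not reproduced here); setting L = 0 recovers the standard natural visibility graph.

import numpy as np

# Limited Penetrable Visibility Graph: two samples are connected if at most L
# intermediate samples block the straight line of sight between them.
def lpvg_edges(y, L=1):
    n, edges = len(y), []
    for a in range(n):
        for b in range(a + 1, n):
            c = np.arange(a + 1, b)           # intermediate indices
            # height of the sight line from (a, y[a]) to (b, y[b]) at each c
            line = y[b] + (y[a] - y[b]) * (b - c) / (b - a)
            if (y[c] >= line).sum() <= L:     # at most L "penetrations" allowed
                edges.append((a, b))
    return edges

y = np.array([1.0, 3.0, 2.0, 4.0, 1.5, 3.5])
print(lpvg_edges(y, L=1))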
The emergence of new wearable technologies such as action cameras and
smart-glasses has increased the interest of computer vision scientists in the
First Person perspective. Nowadays, this field is attracting attention and
investments of companies aiming to develop commercial devices with First Person
Vision recording capabilities. Due to this interest, an increasing demand for
methods to process these videos, possibly in real time, is expected. Current
approaches present particular combinations of different image features and
quantitative methods to accomplish specific objectives such as object detection,
activity recognition, user machine interaction and so on. This paper summarizes
the evolution of the state of the art in First Person Vision video analysis
between 1997 and 2014, highlighting, among others, most commonly used features,
methods, challenges and opportunities within the field. | [
"cs.CV"
] |
Learning optimal feedback control laws capable of executing optimal
trajectories is essential for many robotic applications. Such policies can be
learned using reinforcement learning or planned using optimal control. While
reinforcement learning is sample inefficient, optimal control only plans an
optimal trajectory from a specific starting configuration. In this paper we
propose deep optimal feedback control to learn an optimal feedback policy
rather than a single trajectory. By exploiting the inherent structure of the
robot dynamics and strictly convex action cost, we can derive principled cost
functions such that the optimal policy naturally obeys the action limits, is
globally optimal and stable on the training domain given the optimal value
function. The corresponding optimal value function is learned end-to-end by
embedding a deep differential network in the Hamilton-Jacobi-Bellman
differential equation and minimizing the error of this equality while
simultaneously decreasing the discounting from short- to far-sighted to enable
the learning. Our proposed approach enables us to learn an optimal feedback
control law in continuous time, that in contrast to existing approaches
generates an optimal trajectory from any point in state-space without the need
of replanning. The resulting approach is evaluated on non-linear systems and
achieves optimal feedback control, where standard optimal control methods
require frequent replanning. | [
"cs.LG",
"cs.RO",
"stat.ML"
] |
Graphs arise naturally in many real-world applications including social
networks, recommender systems, ontologies, biology, and computational finance.
Traditionally, machine learning models for graphs have been mostly designed for
static graphs. However, many applications involve evolving graphs. This
introduces important challenges for learning and inference since nodes,
attributes, and edges change over time. In this survey, we review the recent
advances in representation learning for dynamic graphs, including dynamic
knowledge graphs. We describe existing models from an encoder-decoder
perspective, categorize these encoders and decoders based on the techniques
they employ, and analyze the approaches in each category. We also review
several prominent applications and widely used datasets and highlight
directions for future research. | [
"cs.LG",
"stat.ML"
] |
This paper presents a novel framework in which video/image segmentation and
localization are cast into a single optimization problem that integrates
information from low level appearance cues with that of high level localization
cues in a very weakly supervised manner. The proposed framework leverages two
representations at different levels, exploits the spatial relationship between
bounding boxes and superpixels as linear constraints and simultaneously
discriminates between foreground and background at bounding box and superpixel
level. Different from previous approaches that mainly rely on discriminative
clustering, we incorporate a foreground model that minimizes the histogram
difference of an object across all image frames. Exploiting the geometric
relation between the superpixels and bounding boxes enables the transfer of
segmentation cues to improve localization output and vice-versa. Inclusion of
the foreground model generalizes our discriminative framework to video data
where the background tends to be similar and thus, not discriminative. We
demonstrate the effectiveness of our unified framework on the YouTube Object
video dataset, Internet Object Discovery dataset and Pascal VOC 2007. | [
"cs.CV",
"cs.GR",
"cs.LG"
] |
Recent advancements in language representation models such as BERT have led
to a rapid improvement in numerous natural language processing tasks. However,
language models usually consist of a few hundred million trainable parameters
with embedding space distributed across multiple layers, thus making them
challenging to be fine-tuned for a specific task or to be transferred to a new
domain. To determine whether there are task-specific neurons that can be
exploited for unsupervised transfer learning, we introduce a method for
selecting the most important neurons to solve a specific classification task.
This algorithm is further extended to multi-source transfer learning by
computing the importance of neurons for several single-source transfer learning
scenarios between different subsets of data sources. In addition, a task-specific
fingerprint for each data source is obtained based on the percentage of the
selected neurons in each layer. We perform extensive experiments in
unsupervised transfer learning for sentiment analysis, natural language
inference and sentence similarity, and compare our results with the existing
literature and baselines. Significantly, we found that the source and target
data sources with higher degrees of similarity between their task-specific
fingerprints demonstrate a better transferability property. We conclude that
our method can lead to better performance using just a few hundred
task-specific and interpretable neurons. | [
"cs.LG",
"cs.CL",
"stat.ML"
] |
Assembly modeling is a core task of computer aided design (CAD), comprising
around one third of the work in a CAD workflow. Optimizing this process
therefore represents a huge opportunity in the design of a CAD system, but
current research of assembly based modeling is not directly applicable to
modern CAD systems because it eschews the dominant data structure of modern
CAD: parametric boundary representations (BREPs). CAD assembly modeling defines
assemblies as a system of pairwise constraints, called mates, between parts,
which are defined relative to BREP topology rather than in world coordinates
common to existing work. We propose SB-GCN, a representation learning scheme on
BREPs that retains the topological structure of parts, and use these learned
representations to predict CAD type mates. To train our system, we compiled the
first large scale dataset of BREP CAD assemblies, which we are releasing along
with benchmark mate prediction tasks. Finally, we demonstrate the compatibility
of our model with an existing commercial CAD system by building a tool that
assists users in mate creation by suggesting mate completions, with 72.2%
accuracy. | [
"cs.CV",
"cs.GR",
"cs.LG",
"I.3.5; I.2.10"
] |
We extend the neural basis expansion analysis (NBEATS) to incorporate
exogenous factors. The resulting method, called NBEATSx, improves on a
well-performing deep learning model, extending its capabilities by including
exogenous variables and allowing it to integrate multiple sources of useful
information. To showcase the utility of the NBEATSx model, we conduct a
comprehensive study of its application to electricity price forecasting (EPF)
tasks across a broad range of years and markets. We observe state-of-the-art
performance, significantly improving the forecast accuracy by nearly 20% over
the original NBEATS model, and by up to 5% over other well established
statistical and machine learning methods specialized for these tasks.
Additionally, the proposed neural network has an interpretable configuration
that can structurally decompose time series, visualizing the relative impact of
trend and seasonal components and revealing the modeled processes' interactions
with exogenous factors. To assist related work, we have made the code available at
https://github.com/cchallu/nbeatsx. | [
"cs.LG",
"cs.AI"
] |
Broadly speaking, the objective in cardiac image segmentation is to delineate
the outer and inner walls of the heart to segment out either the entire or
parts of the organ boundaries. This paper will focus on MR images as they are
the most widely used in cardiac segmentation -- as a result of the accurate
morphological information and better soft tissue contrast they provide. This
cardiac segmentation information is very useful as it eases physical
measurements that provide useful metrics for cardiac diagnosis, such as
infarcted volumes, ventricular volumes, ejection fraction, myocardial mass,
cardiac movement, and the like. But, this task is difficult due to the
intensity and texture similarities amongst the different cardiac and background
structures on top of some noisy artifacts present in MR images. Thus far,
various researchers have proposed different techniques to solve some of the
pressing issues. This seminar paper presents an overview of representative
medical image segmentation techniques. The paper also highlights preferred
approaches for segmentation of the four cardiac chambers: the left ventricle
(LV), right ventricle (RV), left atrium (LA) and right atrium (RA), on short
axis image planes. | [
"cs.CV"
] |
We propose a novel framework for value function factorization in multi-agent
deep reinforcement learning (MARL) using graph neural networks (GNNs). In
particular, we consider the team of agents as the set of nodes of a complete
directed graph, whose edge weights are governed by an attention mechanism.
Building upon this underlying graph, we introduce a mixing GNN module, which is
responsible for i) factorizing the team state-action value function into
individual per-agent observation-action value functions, and ii) explicit
credit assignment to each agent in terms of fractions of the global team
reward. Our approach, which we call GraphMIX, follows the centralized training
and decentralized execution paradigm, enabling the agents to make their
decisions independently once training is completed. We show the superiority of
GraphMIX as compared to the state-of-the-art on several scenarios in the
StarCraft II multi-agent challenge (SMAC) benchmark. We further demonstrate how
GraphMIX can be used in conjunction with a recent hierarchical MARL
architecture to both improve the agents' performance and enable fine-tuning
them on mismatched test scenarios with higher numbers of agents and/or actions. | [
"cs.LG",
"cs.MA"
] |
We introduce a new algorithm for multi-objective reinforcement learning
(MORL) with linear preferences, with the goal of enabling few-shot adaptation
to new tasks. In MORL, the aim is to learn policies over multiple competing
objectives whose relative importance (preferences) is unknown to the agent.
While this alleviates dependence on scalar reward design, the expected return
of a policy can change significantly with varying preferences, making it
challenging to learn a single model to produce optimal policies under different
preference conditions. We propose a generalized version of the Bellman equation
to learn a single parametric representation for optimal policies over the space
of all possible preferences. After an initial learning phase, our agent can
execute the optimal policy under any given preference, or automatically infer
an underlying preference with very few samples. Experiments across four
different domains demonstrate the effectiveness of our approach. | [
"cs.LG",
"cs.AI"
] |
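The scalarization idea behind linear-preference MORL can be shown at bandit scale. The sketch below is not the paper's generalized Bellman update; it merely illustrates keeping a single vector-valued Q and acting greedily on the scalarization w . Q for any preference w after training.

import numpy as np

rng = np.random.default_rng(0)
n_actions, n_obj = 3, 2
mean_rewards = np.array([[1.0, 0.0],
                         [0.0, 1.0],
                         [0.6, 0.6]])         # expected reward vector per action
Q = np.zeros((n_actions, n_obj))              # one vector-valued Q for all preferences

for t in range(3000):
    w = rng.dirichlet(np.ones(n_obj))         # sample a training preference
    if rng.random() < 0.1:                    # epsilon-greedy exploration
        a = int(rng.integers(n_actions))
    else:
        a = int(np.argmax(Q @ w))             # greedy under the scalarization w . Q
    r = mean_rewards[a] + 0.1 * rng.normal(size=n_obj)
    Q[a] += 0.05 * (r - Q[a])                 # vector-valued bandit-style update

for w in ([1.0, 0.0], [0.0, 1.0], [0.5, 0.5]):   # any test preference works
    print(w, "-> action", int(np.argmax(Q @ np.array(w))))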
The rapid development of embedded hardware in autonomous vehicles broadens
their computational capabilities, thus bringing the possibility to mount more
complete sensor setups able to handle driving scenarios of higher complexity.
As a result, new challenges such as multiple detections of the same object have
to be addressed. In this work, a siamese network is integrated into the
pipeline of a well-known 3D object detector approach to suppress duplicate
proposals coming from different cameras via re-identification. Additionally,
associations are exploited to enhance the 3D box regression of each object by
aggregating its corresponding LiDAR frustums. The experimental evaluation on
the nuScenes dataset shows that the proposed method outperforms traditional NMS
approaches. | [
"cs.CV"
] |
Automatically generating the descriptions of an image, i.e., image
captioning, is an important and fundamental topic in artificial intelligence,
which bridges the gap between computer vision and natural language processing.
Based on the successful deep learning models, especially the CNN model and Long
Short-Term Memories (LSTMs) with attention mechanism, we propose a hierarchical
attention model by utilizing both of the global CNN features and the local
object features for more effective feature representation and reasoning in
image captioning. The generative adversarial network (GAN), together with a
reinforcement learning (RL) algorithm, is applied to solve the exposure bias
problem in RNN-based supervised training for language problems. In addition,
through the automatic measurement of the consistency between the generated
caption and the image content by the discriminator in the GAN framework and RL
optimization, we make the finally generated sentences more accurate and
natural. Comprehensive experiments show the improved performance of the
hierarchical attention mechanism and the effectiveness of our RL-based
optimization method. Our model achieves state-of-the-art results on several
important metrics in the MSCOCO dataset, using only greedy inference. | [
"cs.CV"
] |
Automatic detection of rail track and its fasteners via using continuously
collected railway images is important to maintenance as it can significantly
improve maintenance efficiency and better ensure system safety. Dominant
computer vision-based detection models typically rely on convolutional neural
networks that utilize local image features and cumbersome prior settings to
generate candidate boxes. In this paper, we propose a deep convolutional
transformer network based method to detect multi-class rail components
including the rail, clip, and bolt. We effectively synergize advantages of the
convolutional structure on extracting latent features from raw images as well
as advantages of transformers on selectively determining valuable latent
features to achieve an efficient and accurate performance on rail component
detections. Our proposed method simplifies the detection pipeline by
eliminating the need for prior settings, such as anchor boxes, aspect ratios,
default coordinates, and post-processing, such as the threshold for non-maximum
suppression; as well as allows users to trade off the quality and complexity of
the detector with limited training data. Results of a comprehensive
computational study show that our proposed method outperforms a set of existing
state-of-the-art approaches by large margins. | [
"cs.CV"
] |
In recent years, Neural Turing Machines have gathered attention by joining
the flexibility of neural networks with the computational capabilities of
Turing machines. However, Neural Turing Machines are notoriously hard to train,
which limits their applicability. We propose reservoir memory machines, which
are still able to solve some of the benchmark tests for Neural Turing Machines,
but are much faster to train, requiring only an alignment algorithm and linear
regression. Our model can also be seen as an extension of echo state networks
with an external memory, enabling arbitrarily long storage without
interference. | [
"cs.LG",
"stat.ML"
] |
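The echo-state backbone of the proposed model trains only a linear readout. Below is a minimal sketch of that backbone; the paper's external memory and alignment algorithm are omitted, and the reservoir size and toy task are illustrative.

import numpy as np

# Echo state network: fixed random recurrent reservoir, readout fit by ridge
# regression on the collected reservoir states.
rng = np.random.default_rng(0)
n_res, T = 100, 500
W_in = rng.uniform(-0.5, 0.5, (n_res, 1))
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1 (echo state)

u = np.sin(np.linspace(0, 20, T))[:, None]    # input signal
target = np.roll(u, -5, axis=0)               # toy task: predict 5 steps ahead

h, states = np.zeros(n_res), []
for t in range(T):
    h = np.tanh(W_in @ u[t] + W @ h)          # reservoir state update
    states.append(h)
X = np.array(states)
ridge = 1e-4                                  # ridge-regression readout
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ target)
print("train MSE:", float(((X @ W_out - target) ** 2).mean()))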
Existing automatic 3D image segmentation methods usually fail to meet the
clinic use. Many studies have explored an interactive strategy to improve the
image segmentation performance by iteratively incorporating user hints.
However, the dynamic process for successive interactions is largely ignored. We
here propose to model the dynamic process of iterative interactive image
segmentation as a Markov decision process (MDP) and solve it with reinforcement
learning (RL). Unfortunately, it is intractable to use single-agent RL for
voxel-wise prediction due to the large exploration space. To reduce the
exploration space to a tractable size, we treat each voxel as an agent with a
shared voxel-level behavior strategy so that it can be solved with multi-agent
reinforcement learning. An additional advantage of this multi-agent model is to
capture the dependency among voxels for segmentation task. Meanwhile, to enrich
the information of previous segmentations, we reserve the prediction
uncertainty in the state space of MDP and derive an adjustment action space
leading to a more precise and finer segmentation. In addition, to improve the
efficiency of exploration, we design a relative cross-entropy gain-based reward
to update the policy in a constrained direction. Experimental results on
various medical datasets have shown that our method significantly outperforms
existing state-of-the-art methods, with the advantage of fewer interactions and
a faster convergence. | [
"cs.CV"
] |
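The relative cross-entropy gain-based reward mentioned above admits a one-line definition: the drop in cross-entropy to the ground truth after the agents act. A minimal sketch with toy voxel probabilities (a simplified reading of the reward, not the paper's exact formulation):

import numpy as np

def cross_entropy(p, y, eps=1e-8):
    return -(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps)).mean()

y = np.array([1.0, 1.0, 0.0, 0.0])            # ground-truth voxel labels
p_prev = np.array([0.6, 0.5, 0.4, 0.3])       # segmentation probs before the step
p_new = np.array([0.8, 0.7, 0.2, 0.2])        # segmentation probs after the step
reward = cross_entropy(p_prev, y) - cross_entropy(p_new, y)
print(reward)                                 # positive iff the refinement helped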
Estimating the 3D position of human joints has become a widely researched
topic in the last years. Special emphasis has gone into defining novel methods
that extrapolate 2-dimensional data (keypoints) into 3D, namely predicting the
root-relative coordinates of joints associated to human skeletons. The latest
research trends have proven that the Transformer Encoder blocks aggregate
temporal information significantly better than previous approaches. Thus, we
propose the usage of these models to obtain more accurate 3D predictions by
leveraging temporal information using attention mechanisms on ordered sequences
of human poses in videos.
Our method consistently outperforms the previous best results from the
literature when using both 2D keypoint predictors by 0.3 mm (44.8 MPJPE, 0.7%
improvement) and ground truth inputs by 2 mm (MPJPE: 31.9, 8.4% improvement) on
Human3.6M. It also achieves state-of-the-art performance on the HumanEva-I
dataset with 10.5 P-MPJPE (22.2% reduction). The number of parameters in our
model is easily tunable and is smaller (9.5M) than current methodologies
(16.95M and 11.25M) whilst still having better performance. Thus, our 3D
lifting model's accuracy exceeds that of other end-to-end or SMPL approaches
and is comparable to many multi-view methods. | [
"cs.CV"
] |
Registration is a fundamental task in medical image analysis which can be
applied to several tasks including image segmentation, intra-operative
tracking, multi-modal image alignment, and motion analysis. Popular
registration tools such as ANTs and NiftyReg optimize an objective function for
each pair of images from scratch which is time-consuming for large images with
complicated deformation. Facilitated by the rapid progress of deep learning,
learning-based approaches such as VoxelMorph have been emerging for image
registration. These approaches can achieve competitive performance in a
fraction of a second on advanced GPUs. In this work, we construct a neural
registration framework, called NeurReg, with a hybrid loss of displacement
fields and data similarity, which substantially improves the current
state-of-the-art of registrations. Within the framework, we simulate various
transformations by a registration simulator which generates fixed image and
displacement field ground truth for training. Furthermore, we design three
segmentation frameworks based on the proposed registration framework: 1)
atlas-based segmentation, 2) joint learning of both segmentation and
registration tasks, and 3) multi-task learning with atlas-based segmentation as
an intermediate feature. Extensive experimental results validate the
effectiveness of the proposed NeurReg framework based on various metrics: the
endpoint error (EPE) of the predicted displacement field, mean square error
(MSE), normalized local cross-correlation (NLCC), mutual information (MI), Dice
coefficient, uncertainty estimation, and the interpretability of the
segmentation. The proposed NeurReg improves registration accuracy with fast
inference speed, which can greatly accelerate related medical image analysis
tasks. | [
"cs.CV",
"cs.LG",
"cs.NE"
] |
Most existing face image Super-Resolution (SR) methods assume that the
Low-Resolution (LR) images were artificially downsampled from High-Resolution
(HR) images with bicubic interpolation. This operation changes the natural
image characteristics and reduces noise. Hence, SR methods trained on such data
most often fail to produce good results when applied to real LR images. To
solve this problem, we propose a novel framework for generation of realistic
LR/HR training pairs. Our framework estimates realistic blur kernels, noise
distributions, and JPEG compression artifacts to generate LR images with
similar image characteristics as the ones in the source domain. This allows us
to train a SR model using high quality face images as Ground-Truth (GT). For
better perceptual quality we use a Generative Adversarial Network (GAN) based
SR model in which we have replaced the commonly used VGG-loss [24] with
LPIPS-loss [52]. Experimental results on both real and artificially corrupted
face images show that our method results in more detailed reconstructions with
less noise compared to existing State-of-the-Art (SoTA) methods. In addition,
we show that the traditional non-reference Image Quality Assessment (IQA)
methods fail to capture this improvement and demonstrate that the more recent
NIMA metric [16] correlates better with human perception via Mean Opinion Rank
(MOR). | [
"cs.CV"
] |
Gradient-based meta-learning has proven to be highly effective at learning
model initializations, representations, and update rules that allow fast
adaptation from a few samples. The core idea behind these approaches is to use
fast adaptation and generalization -- two second-order metrics -- as training
signals on a meta-training dataset. However, little attention has been given to
other possible second-order metrics. In this paper, we investigate a different
training signal -- robustness to catastrophic interference -- and demonstrate
that representations learned by directly minimizing interference are more
conducive to incremental learning than those learned by just maximizing fast
adaptation. | [
"cs.LG",
"stat.ML"
] |
High-resolution representations are essential for position-sensitive vision
problems, such as human pose estimation, semantic segmentation, and object
detection. Existing state-of-the-art frameworks first encode the input image as
a low-resolution representation through a subnetwork that is formed by
connecting high-to-low resolution convolutions \emph{in series} (e.g., ResNet,
VGGNet), and then recover the high-resolution representation from the encoded
low-resolution representation. Instead, our proposed network, named as
High-Resolution Network (HRNet), maintains high-resolution representations
through the whole process. There are two key characteristics: (i) Connect the
high-to-low resolution convolution streams \emph{in parallel}; (ii) Repeatedly
exchange the information across resolutions. The benefit is that the resulting
representation is semantically richer and spatially more precise. We show the
superiority of the proposed HRNet in a wide range of applications, including
human pose estimation, semantic segmentation, and object detection, suggesting
that the HRNet is a stronger backbone for computer vision problems. All the
codes are available at~{\url{https://github.com/HRNet}}. | [
"cs.CV"
] |
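The "repeated information exchange across parallel resolutions" can be sketched as a two-stream exchange unit. This is a simplification of HRNet's actual multi-branch fusion; channel counts and the module name are illustrative.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Two-stream exchange unit: each branch receives the (resized) features of the
# other branch and sums them, keeping a high-resolution stream alive throughout.
class Exchange2(nn.Module):
    def __init__(self, c_hi, c_lo):
        super().__init__()
        self.hi_to_lo = nn.Conv2d(c_hi, c_lo, 3, stride=2, padding=1)  # downsample
        self.lo_to_hi = nn.Conv2d(c_lo, c_hi, 1)                       # 1x1, then upsample

    def forward(self, hi, lo):
        new_hi = hi + F.interpolate(self.lo_to_hi(lo), size=hi.shape[-2:],
                                    mode="bilinear", align_corners=False)
        new_lo = lo + self.hi_to_lo(hi)
        return new_hi, new_lo

hi, lo = torch.randn(1, 32, 64, 64), torch.randn(1, 64, 32, 32)
print([t.shape for t in Exchange2(32, 64)(hi, lo)])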
Visual recognition research often assumes a sufficient resolution of the
region of interest (ROI). That is usually violated in practice, inspiring us to
explore the Very Low Resolution Recognition (VLRR) problem. Typically, the ROI
in a VLRR problem can be smaller than $16 \times 16$ pixels, and is challenging
to be recognized even by human experts. We attempt to solve the VLRR problem
using deep learning methods. Taking advantage of techniques primarily in super
resolution, domain adaptation and robust regression, we formulate a dedicated
deep learning method and demonstrate how these techniques are incorporated step
by step. Any extra complexity, when introduced, is fully justified by both
analysis and simulation results. The resulting \textit{Robust Partially Coupled
Networks} achieves feature enhancement and recognition simultaneously. It
allows for both the flexibility to combat the LR-HR domain mismatch, and the
robustness to outliers. Finally, the effectiveness of the proposed models is
evaluated on three different VLRR tasks, including face identification, digit
recognition and font recognition, all of which obtain very impressive
performances. | [
"cs.CV",
"cs.AI",
"cs.LG"
] |
Stochastic variational inference is an established way to carry out
approximate Bayesian inference for deep models. While there have been effective
proposals for good initializations for loss minimization in deep learning, far
less attention has been devoted to the issue of initialization of stochastic
variational inference. We address this by proposing a novel layer-wise
initialization strategy based on Bayesian linear models. The proposed method is
extensively validated on regression and classification tasks, including
Bayesian DeepNets and ConvNets, showing faster and better convergence compared
to alternatives inspired by the literature on initializations for loss
minimization. | [
"stat.ML",
"cs.LG"
] |
Person re-identification (Re-ID) aims to match a target person across camera
views at different locations and times. Existing Re-ID studies focus on the
short-term cloth-consistent setting, under which a person re-appears in
different camera views with the same outfit. A discriminative feature
representation learned by existing deep Re-ID models is thus dominated by the
visual appearance of clothing. In this work, we focus on a much more difficult
yet practical setting where person matching is conducted over long-duration,
e.g., over days and months and therefore inevitably under the new challenge of
changing clothes. This problem, termed Long-Term Cloth-Changing (LTCC) Re-ID, is
much understudied due to the lack of large-scale datasets. The first
contribution of this work is a new LTCC dataset containing people captured over
a long period of time with frequent clothing changes. As a second contribution,
we propose a novel Re-ID method specifically designed to address the
cloth-changing challenge. Specifically, we consider that under cloth-changes,
soft-biometrics such as body shape would be more reliable. We, therefore,
introduce a shape embedding module as well as a cloth-elimination
shape-distillation module aiming to eliminate the now unreliable clothing
appearance features and focus on the body shape information. Extensive
experiments show that superior performance is achieved by the proposed model on
the new LTCC dataset. The code and dataset will be available at
https://naiq.github.io/LTCC_Perosn_ReID.html. | [
"cs.CV"
] |
To improve the system performance towards the Shannon limit, advanced radio
resource management mechanisms play a fundamental role. In particular,
scheduling should receive much attention, because it allocates radio resources
among different users in terms of their channel conditions and QoS
requirements. The difficulty of scheduling algorithms lies in the tradeoffs that
need to be made among multiple objectives, such as throughput, fairness and packet
drop rate. We propose a smart scheduling scheme based on deep reinforcement
learning (DRL). We not only verify the performance gain achieved, but also
provide implementation-friendly designs, i.e., a scalable neural network design
for the agent and a virtual environment training framework. With the scalable
neural network design, the DRL agent can easily handle the cases when the
number of active users is time-varying without the need to redesign and retrain
the DRL agent. Training the DRL agent offline in a virtual environment first
and using it as the initial version in practical deployment helps prevent the
system from suffering performance and robustness degradation during the
time-consuming training phase. Through both simulations and field tests, we show that
the DRL-based smart scheduling outperforms the conventional scheduling method
and can be adopted in practical systems. | [
"cs.LG",
"cs.AI",
"cs.IT",
"math.IT"
] |
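One simple way to realize a scheduler that is agnostic to the number of active users, in the spirit of the scalable design above, is to score every user with a single shared network. This is a hypothetical sketch, not the paper's architecture; the feature set is assumed.

import torch
import torch.nn as nn

# One shared MLP scores every active user from its own feature vector, so the
# same weights apply whatever the number of active users happens to be.
score = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 1))

def schedule(user_feats):                     # (n_users, 4): e.g. CQI, queue, delay, rate
    return int(score(user_feats).squeeze(-1).argmax())

print(schedule(torch.randn(7, 4)))            # 7 active users ...
print(schedule(torch.randn(23, 4)))           # ... or 23, with the same agent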
Even with the advent of more sophisticated, data-hungry methods, boosted
decision trees remain extraordinarily successful for fast rigid object
detection, achieving top accuracy on numerous datasets. While effective, most
boosted detectors use decision trees with orthogonal (single feature) splits,
and the topology of the resulting decision boundary may not be well matched to
the natural topology of the data. Given highly correlated data, decision trees
with oblique (multiple feature) splits can be effective. Use of oblique splits,
however, comes at considerable computational expense. Inspired by recent work
on discriminative decorrelation of HOG features, we instead propose an
efficient feature transform that removes correlations in local neighborhoods.
The result is an overcomplete but locally decorrelated representation ideally
suited for use with orthogonal decision trees. In fact, orthogonal trees with
our locally decorrelated features outperform oblique trees trained over the
original features at a fraction of the computational cost. The overall
improvement in accuracy is dramatic: on the Caltech Pedestrian Dataset, we
reduce false positives nearly tenfold over the previous state-of-the-art. | [
"cs.CV"
] |
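The decorrelation step can be illustrated with a toy whitening transform. The paper estimates its transform from feature statistics over local neighborhoods; the sketch below applies generic ZCA whitening to correlated toy patches to show the effect orthogonal tree splits then benefit from.

import numpy as np

# ZCA whitening: estimate the feature covariance and apply the transform that
# decorrelates the coordinates, so single-feature (orthogonal) splits suffice.
rng = np.random.default_rng(0)
patches = rng.normal(size=(10000, 25))
patches = patches @ rng.normal(size=(25, 25))   # toy correlated feature patches

mu = patches.mean(axis=0)
cov = np.cov(patches - mu, rowvar=False)
evals, evecs = np.linalg.eigh(cov)
W_zca = evecs @ np.diag(1.0 / np.sqrt(evals + 1e-5)) @ evecs.T

white = (patches - mu) @ W_zca
# Off-diagonal correlations are now ~0 (near-identity correlation matrix).
print(np.abs(np.corrcoef(white, rowvar=False) - np.eye(25)).max())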
Anomaly detection in surveillance videos has been recently gaining attention.
Even though the performance of state-of-the-art methods on publicly available
data sets has been competitive, they demand a massive amount of training data.
Also, they lack a concrete approach for continuously updating the trained model
once new data is available. Furthermore, online decision making is an important
but mostly neglected factor in this domain. Motivated by these research gaps,
we propose an online anomaly detection method for surveillance videos using
transfer learning and any-shot learning, which in turn significantly reduces
the training complexity and provides a mechanism that can detect anomalies
using only a few labeled nominal examples. Our proposed algorithm leverages the
feature extraction power of neural network-based models for transfer learning
and the any-shot learning capability of statistical detection methods. | [
"cs.CV",
"cs.LG"
] |
We present a deep learning pipeline that leverages network self-prior to
recover a full 3D model consisting of both a triangular mesh and a texture map
from the colored 3D point cloud. Different from previous methods either
exploiting 2D self-prior for image editing or 3D self-prior for pure surface
reconstruction, we propose to exploit a novel hybrid 2D-3D self-prior in deep
neural networks to significantly improve the geometry quality and produce a
high-resolution texture map, which is typically missing from the output of
commodity-level 3D scanners. In particular, we first generate an initial mesh
using a 3D convolutional neural network with 3D self-prior, and then encode
both 3D information and color information in the 2D UV atlas, which is further
refined by 2D convolutional neural networks with the self-prior. In this way,
both 2D and 3D self-priors are utilized for the mesh and texture recovery.
Experiments show that, without the need of any additional training data, our
method recovers the 3D textured mesh model of high quality from sparse input,
and outperforms the state-of-the-art methods in terms of both the geometry and
texture quality. | [
"cs.CV"
] |
There is an increased interest in the use of Unmanned Aerial Vehicles (UAVs)
for agriculture, military, disaster management and aerial photography around
the world. UAVs are scalable, flexible and are useful in various environments
where direct human intervention is difficult. In general, the use of UAVs with
cameras mounted on them has increased due to their wide range of
applications in real-life scenarios. With the advent of deep learning models in
computer vision, many models have shown great success in visual tasks, but most
evaluations are done on high-end CPUs and GPUs. One of the major
challenges in using UAVs for visual assistance tasks in real time is managing
the memory usage and power consumption of these tasks, which are
computationally intensive and difficult to perform on the low-end
processor board of a UAV. This project describes a novel method to optimize
general image processing tasks such as object tracking and object detection
for UAV hardware in real-time scenarios without affecting the flight time or
compromising the latency and accuracy of these models. | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
This study follows many classical approaches to multi-object tracking (MOT)
that model the problem using dynamic graphical data structures, and adapts this
formulation to make it amenable to modern neural networks. Our main
contributions in this work are the creation of a framework based on dynamic
undirected graphs that represent the data association problem over multiple
timesteps, and a message passing graph neural network (MPNN) that operates on
these graphs to produce the desired likelihood for every association therein.
We also provide solutions and propositions for the computational problems that
need to be addressed to create a memory-efficient, real-time, online algorithm
that can reason over multiple timesteps, correct previous mistakes, update
beliefs, and handle missed/false detections. To demonstrate the efficacy of our
approach, we only use the 2D box location and object category ID to construct
the descriptor for each object instance. Despite this, our model performs on
par with state-of-the-art approaches that make use of additional sensors, as
well as multiple hand-crafted and/or learned features. This illustrates that
given the right problem formulation and model design, raw bounding boxes (and
their kinematics) from any off-the-shelf detector are sufficient to achieve
competitive tracking results on challenging MOT benchmarks. | [
"cs.CV",
"cs.RO"
] |
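As a rough sketch of the message passing described in the preceding abstract, the snippet below runs one round of edge-to-node message passing over an undirected association graph and scores each candidate association. The feature layout ([x, y, w, h, class_id]), the MLP sizes, and the sum aggregation are assumptions for illustration.

```python
# One message-passing round over a detection-association graph; all network
# sizes and the aggregation scheme are illustrative assumptions.
import torch
import torch.nn as nn

class EdgeMPNN(nn.Module):
    def __init__(self, node_dim=5):   # e.g. [x, y, w, h, class_id]
        super().__init__()
        self.edge_mlp = nn.Sequential(nn.Linear(2 * node_dim, 32), nn.ReLU(),
                                      nn.Linear(32, node_dim))
        self.node_mlp = nn.Sequential(nn.Linear(2 * node_dim, node_dim), nn.ReLU())
        self.readout = nn.Linear(2 * node_dim, 1)   # association score per edge

    def forward(self, nodes, edges):
        # nodes: (N, node_dim); edges: (E, 2) index pairs across timesteps.
        src, dst = edges[:, 0], edges[:, 1]
        msgs = self.edge_mlp(torch.cat([nodes[src], nodes[dst]], dim=-1))
        agg = torch.zeros_like(nodes).index_add_(0, dst, msgs)  # sum incoming
        nodes = self.node_mlp(torch.cat([nodes, agg], dim=-1))
        logits = self.readout(torch.cat([nodes[src], nodes[dst]], dim=-1))
        return nodes, torch.sigmoid(logits).squeeze(-1)  # per-edge match prob

nodes = torch.rand(6, 5)                        # 6 detections over two frames
edges = torch.tensor([[0, 3], [0, 4], [1, 3], [2, 5]])
_, probs = EdgeMPNN()(nodes, edges)
print(probs.shape)                              # torch.Size([4])
```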
We introduce the Region Adaptive Graph Fourier Transform (RA-GFT) for
compression of 3D point cloud attributes. The RA-GFT is a multiresolution
transform, formed by combining spatially localized block transforms. We assume
the points are organized by a family of nested partitions represented by a
rooted tree. At each resolution level, attributes are processed in clusters
using block transforms. Each block transform produces a single approximation
(DC) coefficient, and various detail (AC) coefficients. The DC coefficients are
promoted up the tree to the next (lower resolution) level, where the process
can be repeated until reaching the root. Since clusters may have different
numbers of points, each block transform must incorporate the relative
importance of each coefficient. For this, we introduce the
$\mathbf{Q}$-normalized graph Laplacian, and propose using its eigenvectors as
the block transform. The RA-GFT achieves better complexity-performance
trade-offs than previous approaches. In particular, it outperforms the Region
Adaptive Haar Transform (RAHT) by up to 2.5 dB, with a small complexity
overhead. | [
"cs.CV",
"cs.MM",
"eess.SP"
] |
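The following numpy sketch illustrates one plausible reading of the Q-normalized graph Laplacian block transform from the preceding abstract: eigenvectors of Q^{-1/2} L Q^{-1/2} serve as the transform, where Q holds each point's relative importance. The graph construction and the coefficient normalization are assumptions, not the paper's exact definition.

```python
# Toy block transform from a Q-normalized graph Laplacian; the normalization
# shown here is an assumed reading of the abstract, not the paper's formula.
import numpy as np

def q_normalized_gft(points, attrs, q):
    """points: (n, 3); attrs: (n,) attribute values; q: (n,) weights."""
    d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    w = np.where(d > 0, 1.0 / np.maximum(d, 1e-9), 0.0)   # inverse-distance graph
    lap = np.diag(w.sum(axis=1)) - w                      # combinatorial Laplacian
    q_isqrt = np.diag(1.0 / np.sqrt(q))
    _, evecs = np.linalg.eigh(q_isqrt @ lap @ q_isqrt)    # ascending eigenvalues
    coeffs = evecs.T @ (np.sqrt(q) * attrs)
    return coeffs[0], coeffs[1:]       # DC promoted up the tree, ACs entropy-coded

pts = np.random.rand(4, 3)
attr = np.random.rand(4)
weights = np.array([1.0, 2.0, 1.0, 3.0])   # e.g. points per child cluster
dc, ac = q_normalized_gft(pts, attr, weights)
```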
Lodging, the permanent bending over of food crops, leads to poor plant growth
and development. Consequently, lodging results in reduced crop quality, lowers
crop yield, and makes harvesting difficult. Plant breeders routinely evaluate
several thousand breeding lines, and therefore, automatic lodging detection and
prediction is of great value as an aid in selection. In this paper, we propose a deep
convolutional neural network (DCNN) architecture for lodging classification
using five spectral channel orthomosaic images from canola and wheat breeding
trials. Also, using transfer learning, we trained 10 lodging detection models
using well-established deep convolutional neural network architectures. Our
proposed model outperforms the state-of-the-art lodging detection methods in
the literature that use only handcrafted features. In comparison to 10 DCNN
lodging detection models, our proposed model achieves comparable results while
having a substantially lower number of parameters. This makes the proposed
model suitable for applications such as real-time classification using
inexpensive hardware for high-throughput phenotyping pipelines. The GitHub
repository at https://github.com/FarhadMaleki/LodgedNet contains code and
models. | [
"cs.CV"
] |
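As a hedged sketch of transfer learning with five spectral channels, as in the preceding abstract, the snippet below adapts a standard torchvision backbone by replacing its three-channel stem and its classification head. The backbone choice is an assumption; the authors' actual LodgedNet model is in the linked repository.

```python
# Adapting an ImageNet-style backbone to 5-channel orthomosaic input for
# binary lodging classification; backbone and surgery are assumptions.
import torch
import torch.nn as nn
import torchvision.models as models

model = models.resnet18(weights=None)   # load pretrained weights for transfer
# Replace the 3-channel stem with a 5-channel one (five spectral bands).
model.conv1 = nn.Conv2d(5, 64, kernel_size=7, stride=2, padding=3, bias=False)
# Two classes: lodged vs. non-lodged plots.
model.fc = nn.Linear(model.fc.in_features, 2)

x = torch.rand(1, 5, 224, 224)          # one five-band orthomosaic tile
print(model(x).shape)                   # torch.Size([1, 2])
```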
This paper presents the input convex neural network architecture. These are
scalar-valued (potentially deep) neural networks with constraints on the
network parameters such that the output of the network is a convex function of
(some of) the inputs. The networks allow for efficient inference via
optimization over some inputs to the network given others, and can be applied
to settings including structured prediction, data imputation, reinforcement
learning, and others. In this paper we lay the basic groundwork for these
models, proposing methods for inference, optimization and learning, and analyze
their representational power. We show that many existing neural network
architectures can be made input-convex with a minor modification, and develop
specialized optimization algorithms tailored to this setting. Finally, we
highlight the performance of the methods on multi-label prediction, image
completion, and reinforcement learning problems, where we show improvement over
the existing state of the art in many cases. | [
"cs.LG",
"math.OC"
] |
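A compact sketch of a fully input convex network in the spirit of the preceding abstract: the weights on the hidden-path ("z") connections are kept non-negative via a softplus reparameterization, and the activation (ReLU) is convex and non-decreasing, so the scalar output is convex in the input. Layer sizes and the reparameterization choice are illustrative.

```python
# Fully input convex neural network (FICNN) sketch: non-negative z-path
# weights plus convex non-decreasing activations give convexity in x.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FICNN(nn.Module):
    def __init__(self, x_dim=4, hidden=16, depth=3):
        super().__init__()
        # Unconstrained skip connections from x (affine in x, so convexity holds).
        self.Wx = nn.ModuleList([nn.Linear(x_dim, hidden) for _ in range(depth)]
                                + [nn.Linear(x_dim, 1)])
        # Raw z-path weights; softplus at use time keeps them non-negative.
        self.Wz_raw = nn.ParameterList(
            [nn.Parameter(torch.randn(hidden, hidden) * 0.1)
             for _ in range(depth - 1)]
            + [nn.Parameter(torch.randn(1, hidden) * 0.1)])

    def forward(self, x):
        z = F.relu(self.Wx[0](x))
        for k, wz in enumerate(self.Wz_raw[:-1]):
            z = F.relu(F.linear(z, F.softplus(wz)) + self.Wx[k + 1](x))
        return F.linear(z, F.softplus(self.Wz_raw[-1])) + self.Wx[-1](x)

f = FICNN()
x = torch.rand(8, 4)
print(f(x).shape)   # torch.Size([8, 1]); convex in x, so inference can
                    # minimize over (some of) the inputs with gradient methods
```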
Goal-oriented dialog has been given attention due to its numerous
applications in artificial intelligence. Goal-oriented dialogue tasks occur
when a questioner asks an action-oriented question and an answerer responds
with the intent of letting the questioner know a correct action to take. To ask
adequate questions, deep learning and reinforcement learning have been
recently applied. However, these approaches struggle to find a competent
recurrent neural questioner, owing to the complexity of learning a series of
sentences. Motivated by theory of mind, we propose "Answerer in Questioner's
Mind" (AQM), a novel information theoretic algorithm for goal-oriented dialog.
With AQM, a questioner asks and infers based on an approximated probabilistic
model of the answerer. The questioner figures out the answerer's intention via
selecting a plausible question by explicitly calculating the information gain
of the candidate intentions and possible answers to each question. We test our
framework on two goal-oriented visual dialog tasks: "MNIST Counting Dialog" and
"GuessWhat?!". In our experiments, AQM outperforms comparative algorithms by a
large margin. | [
"cs.CV",
"cs.AI",
"cs.CL",
"cs.LG"
] |
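The information-gain computation at the heart of AQM can be sketched in a few lines: given a prior over candidate intentions and an approximated answerer model p(a | q, c), the questioner selects the question maximizing the mutual information between intention and answer. The toy priors and answer models below are made-up numbers for illustration.

```python
# AQM-style question selection by information gain; all numbers are toys.
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def information_gain(prior_c, p_a_given_qc):
    """prior_c: (C,) belief over intentions; p_a_given_qc: (A, C) answer model."""
    p_a = p_a_given_qc @ prior_c                       # marginal p(a | q)
    ig = entropy(prior_c)
    for a in range(p_a_given_qc.shape[0]):
        post = p_a_given_qc[a] * prior_c / max(p_a[a], 1e-12)  # Bayes update
        ig -= p_a[a] * entropy(post)                   # expected posterior entropy
    return ig

prior = np.array([0.5, 0.3, 0.2])                      # belief over 3 intentions
questions = {                                          # p(answer | question, c)
    "q1": np.array([[0.9, 0.1, 0.5], [0.1, 0.9, 0.5]]),
    "q2": np.array([[0.6, 0.5, 0.4], [0.4, 0.5, 0.6]]),
}
best = max(questions, key=lambda q: information_gain(prior, questions[q]))
print(best)                                            # "q1": more informative
```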
Q-learning with value function approximation may perform poorly because of
overestimation bias and imprecise estimates. Specifically, overestimation bias
arises from applying the maximum operator to noisy estimates, and is
exaggerated when bootstrapping from the estimate of a subsequent state.
Inspired by the recent advances in deep reinforcement learning and Double
Q-learning, we introduce decorrelated double Q-learning (D2Q). Specifically, we
introduce a decorrelated regularization term to reduce the correlation between value
function approximators, which can lead to less biased estimation and lower
variance. The experimental results on a suite of MuJoCo continuous control
tasks demonstrate that our decorrelated double Q-learning can effectively
improve the performance. | [
"cs.LG",
"cs.AI",
"68T01",
"I.2.9"
] |
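As a hedged sketch of the decorrelation idea in the preceding abstract, the snippet below trains two critics and adds a penalty on the correlation between their estimates over a batch. The concrete penalty (squared Pearson correlation) and the weight lambda are assumptions; the abstract does not give the exact regularizer.

```python
# Two critics with a correlation penalty between their value estimates; the
# penalty form and lambda below are assumed, not the paper's exact term.
import torch
import torch.nn as nn

q1 = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 1))
q2 = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 1))

def correlation_penalty(a, b, eps=1e-8):
    a, b = a - a.mean(), b - b.mean()
    corr = (a * b).mean() / (a.std() * b.std() + eps)
    return corr ** 2                          # push critics toward decorrelation

s = torch.rand(64, 4)                         # batch of state-action features
target = torch.rand(64, 1)                    # stand-in TD targets
v1, v2 = q1(s), q2(s)
td_loss = nn.functional.mse_loss(v1, target) + nn.functional.mse_loss(v2, target)
loss = td_loss + 0.1 * correlation_penalty(v1, v2)   # lambda = 0.1 assumed
loss.backward()                               # gradients flow to both critics
```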