text | label
---|---|
We present two instances, L-GAE and L-VGAE, of the variational graph
auto-encoding (VGAE) family, based on separating the feature propagation
operations typically embedded in the graph convolution layers of graph learning
methods into a single linear matrix computation performed before input to a
standard auto-encoder architecture. This decoupling enables an independent,
fixed auto-encoder design that does not require an additional GCN layer for
every desired increase in the size of a node's local receptive field. Fixing
the auto-encoder enables a fairer assessment of how the size of a node's
receptive field affects the learned representations. Furthermore, fixing the
auto-encoder design often yields substantially smaller networks than their VGAE
counterparts, especially as the number of feature propagations increases. A
comparative downstream evaluation on link prediction tasks shows
state-of-the-art performance comparable to similar VGAE arrangements, despite
the considerable simplification. We also show that our methodology applies
straightforwardly to more challenging representation learning scenarios such as
spatio-temporal graph representation learning. | [
"cs.LG",
"stat.ML"
] |
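A minimal sketch of the decoupling described in the L-GAE/L-VGAE abstract above: the k-hop feature propagation is precomputed as a single linear operation, and the result is fed to a fixed auto-encoder whose design no longer depends on k. The symmetric normalization and all names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def propagate_features(A, X, k):
    """Precompute k steps of linear propagation: (D^{-1/2}(A+I)D^{-1/2})^k X."""
    A_hat = A + np.eye(A.shape[0])              # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    S = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]  # symmetric normalization
    for _ in range(k):
        X = S @ X                               # one extra hop per multiplication
    return X

# Toy graph: 4 nodes on a path, 2-dim features, 3-hop receptive field.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.random.randn(4, 2)
X_prop = propagate_features(A, X, k=3)  # then input to a fixed auto-encoder
```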
We study the robustness of object detection under the presence of missing
annotations. In this setting, the unlabeled object instances will be treated as
background, which will generate an incorrect training signal for the detector.
Interestingly, we observe that after dropping 30% of the annotations (and
labeling them as background), the performance of CNN-based object detectors
like Faster-RCNN only drops by 5% on the PASCAL VOC dataset. We provide a
detailed explanation for this result. To further bridge the performance gap, we
propose a simple yet effective solution, called Soft Sampling. Soft Sampling
re-weights the gradients of RoIs as a function of overlap with positive
instances. This ensures that the uncertain background regions are given a
smaller weight compared to the hard negatives. Extensive experiments on curated
PASCAL VOC datasets demonstrate the effectiveness of the proposed Soft Sampling
method at different annotation drop rates. Finally, we show that on
OpenImagesV3, which is a real-world dataset with missing annotations, Soft
Sampling outperforms standard detection baselines by over 3%. | [
"cs.CV"
] |
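The Soft Sampling abstract above specifies only that RoI gradients are re-weighted as a function of overlap with positive instances; below is a minimal sketch of that mechanism. The particular ramp function and its floor value are illustrative assumptions, not the paper's exact weighting.

```python
import numpy as np

def soft_sampling_weights(max_ious, floor=0.25):
    """Down-weight background RoIs as a function of their max IoU with
    positive instances: regions far from any positive (possibly unlabeled
    objects) get a small weight, while overlapping hard negatives keep a
    weight near 1. The linear ramp is an assumption; the abstract only
    requires the weight to grow with overlap."""
    return floor + (1.0 - floor) * np.clip(max_ious, 0.0, 1.0)

# max IoU of each background RoI with the positive ground-truth boxes
max_ious = np.array([0.0, 0.1, 0.45, 0.7])
w = soft_sampling_weights(max_ious)  # multiplied into each RoI's loss/gradient
```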
Solving optimization problems with unknown parameters often requires learning
a predictive model to predict the values of the unknown parameters and then
solving the problem using these values. Recent work has shown that including
the optimization problem as a layer in the model training pipeline results in
predictions of the unobserved parameters that lead to higher decision quality.
Unfortunately, this process comes at a large computational cost because the
optimization problem must be solved and differentiated through in each training
iteration; furthermore, it may also sometimes fail to improve solution quality
due to non-smoothness issues that arise when training through a complex
optimization layer. To address these shortcomings, we learn a low-dimensional
surrogate model of a large optimization problem by representing the feasible
space in terms of meta-variables, each of which is a linear combination of the
original variables. By training a low-dimensional surrogate model end-to-end,
and jointly with the predictive model, we achieve: i) a large reduction in
training and inference time; and ii) improved performance by focusing attention
on the more important variables in the optimization and learning in a smoother
space. Empirically, we demonstrate these improvements on a non-convex adversary
modeling task, a submodular recommendation task and a convex portfolio
optimization task. | [
"cs.LG",
"stat.ML"
] |
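A schematic of the meta-variable reparameterization described in the abstract above, assuming a linear surrogate map from low-dimensional meta-variables to the original decision space; the quadratic objective, dimensions, and names are illustrative, not the paper's setup.

```python
import numpy as np

# Surrogate reparameterization: each meta-variable is a linear combination of
# the original decision variables, i.e. x = P @ z, with P trained end-to-end
# jointly with the predictive model in practice.
n, m = 1000, 10                    # original vs. meta dimensionality (m << n)
P = np.random.randn(n, m) * 0.01   # learnable mixing matrix (illustrative init)

def surrogate_objective(z, c):
    """Evaluate the original objective through the low-dim surrogate.
    `c` stands in for predicted problem parameters; the quadratic form is
    only an example objective."""
    x = P @ z                      # lift meta-variables to the original space
    return c @ x + 0.5 * x @ x

z = np.zeros(m)                    # optimize over 10 variables instead of 1000
value = surrogate_objective(z, c=np.random.randn(n))
```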
Time series prediction with deep learning methods, especially long short-term
memory neural networks (LSTMs), has achieved significant success in recent
years. Although LSTMs can help capture long-term dependencies, their ability to
pay different degrees of attention to sub-window features within multiple time
steps is insufficient. To address this issue, an evolutionary attention-based
LSTM trained with competitive random search is proposed for multivariate time
series prediction. By transferring shared parameters, an evolutionary attention
learning approach is introduced to the LSTM model. Thus, as in biological
evolution, the pattern for importance-based attention sampling can be confirmed
during temporal relationship mining. To avoid being trapped in local optima, as
traditional gradient-based methods often are, a competitive random search
method inspired by evolutionary computation is proposed, which can effectively
configure the parameters in the attention layer. Experimental results
illustrate that the proposed model achieves competitive prediction performance
compared with other baseline methods. | [
"cs.LG",
"cs.NE",
"stat.ML"
] |
The recent successes of Deep Neural Networks (DNNs) in a variety of research
tasks rely heavily on large amounts of labeled samples, which may incur
considerable annotation cost in real-world applications. Fortunately, active
learning is a promising methodology for training a high-performing model with
minimal annotation cost. In the deep learning context, the critical question in
active learning is how to precisely identify the informativeness of samples for
a DNN. In this paper, inspired by the piece-wise linear interpretability of
DNNs, we introduce the linearly separable regions of samples to the problem of
active learning, and propose a novel Deep Active learning approach by Model
Interpretability (DAMI). To maximize the representativeness of the entire
unlabeled data, DAMI tries to select and label samples from the different
linearly separable regions induced by the piece-wise linear interpretability of
the DNN. We focus on Multi-Layer Perceptron (MLP) models for tabular data.
Specifically, we use the local piece-wise interpretation in the MLP as the
representation of each sample, and directly run K-Center clustering to select
and label samples. Notably, the whole DAMI process requires no hyper-parameters
to be tuned manually. To verify the effectiveness of our approach, extensive
experiments have been conducted on several tabular datasets. The experimental
results demonstrate that DAMI consistently outperforms several
state-of-the-art approaches. | [
"cs.LG",
"stat.ML"
] |
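The K-Center selection step in the DAMI abstract above admits a standard greedy 2-approximation; a minimal sketch follows, where the random representation matrix stands in for DAMI's local piece-wise interpretations and all names are illustrative.

```python
import numpy as np

def k_center_greedy(reps, k, seed=0):
    """Greedy 2-approximation to K-Center: repeatedly pick the point farthest
    from the current center set. `reps` would hold one interpretation vector
    per unlabeled sample; returned indices are the samples to label next."""
    rng = np.random.default_rng(seed)
    centers = [int(rng.integers(reps.shape[0]))]       # arbitrary first center
    dists = np.linalg.norm(reps - reps[centers[0]], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(dists))                    # farthest point so far
        centers.append(nxt)
        dists = np.minimum(dists, np.linalg.norm(reps - reps[nxt], axis=1))
    return centers

picked = k_center_greedy(np.random.randn(500, 32), k=20)
```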
Graph convolutional networks have been successful in addressing graph-based
tasks such as semi-supervised node classification. Existing methods use a
network structure defined by the user based on experimentation, with a fixed
number of layers and neurons per layer, and employ a layer-wise propagation
rule to obtain the node embeddings. Designing an automatic process to define a
problem-dependent architecture for graph convolutional networks can greatly
help to reduce the need for manual design of the structure of the model in the
training process. In this paper, we propose a method to automatically build
compact and task-specific graph convolutional networks. Experimental results on
widely used publicly available datasets show that the proposed method
outperforms related methods based on convolutional graph networks in terms of
classification performance and network compactness. | [
"cs.LG",
"stat.ML"
] |
Principal Component Analysis (PCA) has been used to study the pathogenesis of
diseases. To enhance the interpretability of classical PCA, various improved
PCA methods have been proposed to date. Among these, a typical method is the
so-called sparse PCA, which focuses on seeking sparse loadings. However, the
performance of these methods is still far from satisfactory due to their
limitation of using unsupervised learning methods; moreover, the class
ambiguity within the sample is high. To overcome this problem, this study
developed a new PCA method, which is named the Supervised Discriminative Sparse
PCA (SDSPCA). The main innovation of this method is the incorporation of
discriminative information and sparsity into the PCA model. Specifically, in
contrast to the traditional sparse PCA, which imposes sparsity on the loadings,
here, sparse components are obtained to represent the data. Furthermore, via
linear transformation, the sparse components approximate the given label
information. On the one hand, sparse components improve interpretability over
traditional PCA, while on the other hand, they have discriminative abilities
suited to classification purposes. A simple algorithm is developed
and its convergence proof is provided. The SDSPCA has been applied to common
characteristic gene selection (com-characteristic gene) and tumor
classification on multi-view biological data. The sparsity and classification
performance of the SDSPCA are empirically verified via abundant, reasonable,
and effective experiments, and the obtained results demonstrate that SDSPCA
outperforms other state-of-the-art methods. | [
"cs.LG",
"stat.ML"
] |
We present Neural Articulated Radiance Field (NARF), a novel deformable 3D
representation for articulated objects learned from images. While recent
advances in 3D implicit representation have made it possible to learn models of
complex objects, learning pose-controllable representations of articulated
objects remains a challenge, as current methods require 3D shape supervision
and are unable to render appearance. In formulating an implicit representation
of 3D articulated objects, our method considers only the rigid transformation
of the most relevant object part in solving for the radiance field at each 3D
location. In this way, the proposed method represents pose-dependent changes
without significantly increasing the computational complexity. NARF is fully
differentiable and can be trained from images with pose annotations. Moreover,
through the use of an autoencoder, it can learn appearance variations over
multiple instances of an object class. Experiments show that the proposed
method is efficient and can generalize well to novel poses. The code is
available for research purposes at https://github.com/nogu-atsu/NARF | [
"cs.CV"
] |
For embodied agents to infer representations of the underlying 3D physical
world they inhabit, they should efficiently combine multisensory cues from
numerous trials, e.g., by looking at and touching objects. Despite its
importance, multisensory 3D scene representation learning has received less
attention compared to the unimodal setting. In this paper, we propose the
Generative Multisensory Network (GMN) for learning latent representations of 3D
scenes which are partially observable through multiple sensory modalities. We
also introduce a novel method, called the Amortized Product-of-Experts, to
improve the computational efficiency and the robustness to unseen combinations
of modalities at test time. Experimental results demonstrate that the proposed
model can efficiently infer robust modality-invariant 3D-scene representations
from arbitrary combinations of modalities and perform accurate cross-modal
generation. To perform this exploration, we also develop the Multisensory
Embodied 3D-Scene Environment (MESE). | [
"cs.LG",
"stat.ML"
] |
Brain extraction is a fundamental step for most brain imaging studies. In
this paper, we investigate the problem of skull stripping and propose
complementary segmentation networks (CompNets) to accurately extract the brain
from T1-weighted MRI scans, for both normal and pathological brain images. The
proposed networks are designed in the framework of encoder-decoder networks and
have two pathways to learn features from both the brain tissue and its
complementary part located outside of the brain. The complementary pathway
extracts the features in the non-brain region and leads to a robust solution to
brain extraction from MRIs with pathologies, which do not exist in our training
dataset. We demonstrate the effectiveness of our networks by evaluating them on
the OASIS dataset, resulting in state-of-the-art performance under the
two-fold cross-validation setting. Moreover, the robustness of our networks is
verified by testing on images with introduced pathologies and by showing their
invariance to unseen brain pathologies. In addition, our complementary network
design is general and can be extended to address other image segmentation
problems with better generalization. | [
"cs.CV"
] |
In many real-world scenarios, an autonomous agent often encounters various
tasks within a single complex environment. We propose to build a graph
abstraction over the environment structure to accelerate the learning of these
tasks. Here, nodes are important points of interest (pivotal states) and edges
represent feasible traversals between them. Our approach has two stages. First,
we jointly train a latent pivotal state model and a curiosity-driven
goal-conditioned policy in a task-agnostic manner. Second, provided with the
information from the world graph, a high-level Manager quickly finds solutions
to new tasks and expresses subgoals, in reference to pivotal states, to a
low-level Worker. The Worker can then also leverage the graph to easily
traverse to the pivotal states of interest, even across long distances, and
explore non-locally. We perform a thorough ablation study to evaluate our
approach on a suite of challenging maze tasks, demonstrating significant
advantages from the proposed framework over baselines that lack world graph
knowledge in terms of performance and efficiency. | [
"cs.LG",
"stat.ML"
] |
Interpretation of Deep Neural Networks (DNNs) training as an optimal control
problem with nonlinear dynamical systems has received considerable attention
recently, yet the algorithmic development remains relatively limited. In this
work, we make an attempt along this line by reformulating the training
procedure from the trajectory optimization perspective. We first show that most
widely-used algorithms for training DNNs can be linked to the Differential
Dynamic Programming (DDP), a celebrated second-order method rooted in the
Approximate Dynamic Programming. In this vein, we propose a new class of
optimizers, the DDP Neural Optimizer (DDPNOpt), for training feedforward and
convolutional networks. DDPNOpt features layer-wise feedback policies which
improve convergence and reduce sensitivity to hyper-parameters compared to
existing methods. It outperforms other optimal-control-inspired training
methods in both convergence and complexity, and is competitive against
state-of-the-art first- and second-order methods. We also observe that DDPNOpt
has a surprising benefit in
preventing gradient vanishing. Our work opens up new avenues for principled
algorithmic design built upon the optimal control theory. | [
"cs.LG",
"cs.NE",
"math.OC"
] |
The natural association between visual observations and their corresponding
sound provides powerful self-supervisory signals for learning video
representations, which makes the ever-growing amount of online videos an
attractive source of training data. However, large portions of online videos
contain irrelevant audio-visual signals because of edited/overdubbed audio, and
models trained on such uncurated videos have been shown to learn suboptimal
representations. Therefore, existing approaches rely almost exclusively on
datasets with predetermined taxonomies of semantic concepts, where there is a
high chance of audio-visual correspondence. Unfortunately, constructing such
datasets requires labor-intensive manual annotation and/or verification, which
severely limits the utility of online videos for large-scale learning. In this
work, we present an automatic dataset curation approach based on subset
optimization where the objective is to maximize the mutual information between
audio and visual channels in videos. We demonstrate that our approach finds
videos with high audio-visual correspondence and show that self-supervised
models trained on our data achieve competitive performance compared to models
trained on existing manually curated datasets. The most significant benefit of
our approach is scalability: We release ACAV100M that contains 100 million
videos with high audio-visual correspondence, ideal for self-supervised video
representation learning. | [
"cs.CV"
] |
In recent years, significant progress has been made in solving challenging
problems using reinforcement learning. Despite its great success,
reinforcement learning still faces challenges in continuous control tasks.
Conventional gradient-based methods compute derivatives of the optimization
objective at a high computational cost, and can be inefficient, unstable, and
lacking in robustness when dealing with such tasks. Alternatively,
derivative-free methods treat the optimization process as a black box and show
robustness and stability in learning continuous control tasks, but are not
data-efficient. Combining both methods to get the best of both worlds has
attracted attention. However, most existing combinations adopt complex neural
networks (NNs) as the control policy. Deep NNs are a double-edged sword: they
can yield better performance, but also make parameter tuning and computation
difficult. To this end, in this paper we present a novel method called
FiDi-RL, which incorporates deep RL with Finite-Difference (FiDi) policy
search. FiDi-RL combines Deep Deterministic Policy Gradients (DDPG) with
Augmented Random Search (ARS) and aims at improving the data efficiency of
ARS. The empirical results show that FiDi-RL improves the performance and
stability of ARS, and provides competitive results against some existing deep
reinforcement learning methods. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
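For reference, the ARS half of the FiDi-RL combination above is a simple finite-difference update; a minimal single-step sketch in the style of basic ARS follows. The DDPG coupling is not shown, and hyper-parameter values and names are illustrative.

```python
import numpy as np

def ars_step(theta, rollout, n_dirs=8, step=0.02, noise=0.03):
    """One basic ARS-style update: probe random parameter perturbations in
    both directions, then move along the reward-weighted perturbations,
    scaled by the standard deviation of the collected returns.
    `rollout(theta)` returns the episodic return of policy parameters."""
    deltas = np.random.randn(n_dirs, *theta.shape)
    r_pos = np.array([rollout(theta + noise * d) for d in deltas])
    r_neg = np.array([rollout(theta - noise * d) for d in deltas])
    sigma_r = np.concatenate([r_pos, r_neg]).std() + 1e-8
    grad = ((r_pos - r_neg)[:, None] * deltas.reshape(n_dirs, -1)).sum(0)
    return theta + (step / (n_dirs * sigma_r)) * grad.reshape(theta.shape)

# Toy "return": maximized at theta = 1 (stand-in for an environment rollout).
theta = np.zeros(4)
for _ in range(100):
    theta = ars_step(theta, rollout=lambda t: -np.sum((t - 1.0) ** 2))
```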
Financial technology (FinTech) has drawn much attention among investors and
companies. While conventional stock analysis in FinTech targets predicting
stock prices, less effort has been made on profitable stock recommendation.
Besides,
in existing approaches on modeling time series of stock prices, the
relationships among stocks and sectors (i.e., categories of stocks) are either
neglected or pre-defined. Ignoring stock relationships will miss the
information shared between stocks while using pre-defined relationships cannot
depict the latent interactions or influence of stock prices between stocks. In
this work, we aim at recommending the top-K profitable stocks in terms of
return ratio using time series of stock prices and sector information. We
propose a novel deep learning-based model, Financial Graph Attention Networks
(FinGAT), to tackle the task under the setting that no pre-defined
relationships between stocks are given. The idea of FinGAT is three-fold.
First, we devise a hierarchical learning component to learn short-term and
long-term sequential patterns from stock time series. Second, a fully-connected
graph between stocks and a fully-connected graph between sectors are
constructed, along with graph attention networks, to learn the latent
interactions among stocks and sectors. Third, a multi-task objective is devised
to jointly recommend the profitable stocks and predict the stock movement.
Experiments conducted on Taiwan Stock, S&P 500, and NASDAQ datasets exhibit
remarkable recommendation performance of our FinGAT compared to
state-of-the-art methods. | [
"cs.LG",
"cs.CE",
"cs.IR",
"cs.SI"
] |
On-device Deep Neural Networks (DNNs) have recently gained more attention due
to the increasing computing power of the mobile devices and the number of
applications in Computer Vision (CV), Natural Language Processing (NLP), and
Internet of Things (IoT). Unfortunately, the existing efficient convolutional
neural network (CNN) architectures designed for CV tasks are not directly
applicable to NLP tasks and the tiny Recurrent Neural Network (RNN)
architectures have been designed primarily for IoT applications. In NLP
applications, although model compression has seen initial success in on-device
text classification, there are at least three major challenges yet to be
addressed: adversarial robustness, explainability, and personalization. Here we
attempt to tackle these challenges by designing a new training scheme for model
compression and adversarial robustness, including the optimization of an
explainable feature mapping objective, a knowledge distillation objective, and
an adversarial robustness objective. The resulting compressed model is
personalized using on-device private training data via fine-tuning. We perform
extensive experiments to compare our approach with both compact RNN (e.g.,
FastGRNN) and compressed RNN (e.g., PRADO) architectures in both natural and
adversarial NLP test settings. | [
"cs.LG"
] |
Image segmentation is a popular area of research in computer vision that has
many applications in automated image processing. A recent technique called
piecewise flat embeddings (PFE) has been proposed for use in image
segmentation; PFE transforms image pixel data into a lower dimensional
representation where similar pixels are pulled close together and dissimilar
pixels are pushed apart. This technique has shown promising results, but its
original formulation is not computationally feasible for large images. We
propose two improvements to the algorithm for computing PFE: first, we
reformulate portions of the algorithm to enable various linear algebra
operations to be performed in parallel; second, we propose utilizing an
iterative linear solver (preconditioned conjugate gradient) to quickly solve a
linear least-squares problem that occurs in the inner loop of a nested
iteration. With these two computational improvements, we show on a publicly
available image database that PFE can be sped up by an order of magnitude
without sacrificing segmentation performance. Our results make this technique
more practical for use on large data sets, not only for image segmentation, but
for general data clustering problems. | [
"cs.CV"
] |
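The inner-loop least-squares solve described in the PFE abstract above maps directly onto off-the-shelf iterative solvers. Below is a minimal sketch using SciPy's conjugate gradient on the normal equations with a Jacobi preconditioner; the matrix sizes, ridge term, and choice of preconditioner are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np
from scipy.sparse import random as sparse_random, diags
from scipy.sparse.linalg import cg, LinearOperator

# Inner-loop subproblem: min_x ||A x - b||^2 via the normal equations
# (A^T A) x = A^T b, solved with preconditioned conjugate gradient.
A = sparse_random(2000, 500, density=0.01, format="csr")
b = np.random.randn(2000)
AtA = (A.T @ A + 1e-6 * diags(np.ones(A.shape[1]))).tocsr()  # tiny ridge for safety
Atb = A.T @ b

# Jacobi (diagonal) preconditioner -- a stand-in for whatever preconditioner
# a production implementation would use.
d = AtA.diagonal()
M = LinearOperator(AtA.shape, matvec=lambda v: v / d)

x, info = cg(AtA, Atb, M=M, maxiter=200)  # info == 0 signals convergence
```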
In this paper, we propose a novel policy iteration method, called dynamic
policy programming (DPP), to estimate the optimal policy in the
infinite-horizon Markov decision processes. We prove the finite-iteration and
asymptotic $\ell_\infty$-norm performance-loss bounds for DPP in the presence of
approximation/estimation error. The bounds are expressed in terms of the
$\ell_\infty$-norm of the average accumulated error, as opposed to the $\ell_\infty$-norm of
the error in the case of the standard approximate value iteration (AVI) and the
approximate policy iteration (API). This suggests that DPP can achieve a better
performance than AVI and API since it averages out the simulation noise caused
by Monte-Carlo sampling throughout the learning process. We examine these
theoretical results numerically by comparing the performance of the
approximate variants of DPP with existing reinforcement learning (RL) methods
on different problem domains. Our results show that, in all cases, DPP-based
algorithms outperform other RL methods by a wide margin. | [
"cs.LG",
"cs.AI",
"cs.SY",
"math.OC",
"stat.ML"
] |
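For concreteness, one common presentation of the tabular DPP recursion updates action preferences with a Boltzmann-weighted average in place of the max; a sample-based sketch follows. Treat the exact recursion, the learning rate, and all names as assumptions rather than the paper's precise algorithm.

```python
import numpy as np

def boltzmann_avg(psi_s, eta):
    """M_eta operator: Boltzmann-weighted average of action preferences."""
    w = np.exp(eta * (psi_s - psi_s.max()))   # shift for numerical stability
    return (w / w.sum()) @ psi_s

def dpp_update(psi, s, a, r, s_next, eta=1.0, gamma=0.95, lr=0.5):
    """One sample-based preference update, following the commonly cited form
    psi(s,a) <- psi(s,a) + r + gamma * M_eta psi(s') - M_eta psi(s),
    applied here with a learning rate on the sampled increment."""
    increment = r + gamma * boltzmann_avg(psi[s_next], eta) \
                - boltzmann_avg(psi[s], eta)
    psi[s, a] += lr * increment
    return psi

psi = np.zeros((5, 2))                        # 5 states, 2 actions (toy sizes)
psi = dpp_update(psi, s=0, a=1, r=1.0, s_next=2)
```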
Importance-weighted risk minimization is a key ingredient in many machine
learning algorithms for causal inference, domain adaptation, class imbalance,
and off-policy reinforcement learning. While the effect of importance weighting
is well-characterized for low-capacity misspecified models, little is known
about how it impacts over-parameterized, deep neural networks. This work is
inspired by recent theoretical results showing that on (linearly) separable
data, deep linear networks optimized by SGD learn weight-agnostic solutions,
prompting us to ask, for realistic deep networks, for which many practical
datasets are separable, what is the effect of importance weighting? We present
the surprising finding that while importance weighting impacts models early in
training, its effect diminishes over successive epochs. Moreover, while L2
regularization and batch normalization (but not dropout) restore some of the
impact of importance weighting, they express the effect via (seemingly) the
wrong abstraction: why should practitioners tweak the L2 regularization, and by
how much, to produce the correct weighting effect? Our experiments confirm
these findings across a range of architectures and datasets. | [
"cs.LG",
"stat.ML"
] |
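The mechanism studied in the abstract above is simply a per-example rescaling of the training loss; a minimal PyTorch sketch, with the weight values being illustrative (e.g., inverse class frequencies in an imbalance setting).

```python
import torch
import torch.nn.functional as F

def importance_weighted_loss(logits, targets, weights):
    """Per-example importance-weighted cross-entropy: the weights rescale
    each example's gradient contribution, which is the mechanism whose
    long-run effect on separable data the paper investigates."""
    per_example = F.cross_entropy(logits, targets, reduction="none")
    return (weights * per_example).mean()

logits = torch.randn(8, 3, requires_grad=True)
targets = torch.randint(0, 3, (8,))
weights = torch.tensor([2.0, 1.0, 1.0, 0.5, 2.0, 1.0, 0.5, 1.0])  # illustrative
loss = importance_weighted_loss(logits, targets, weights)
loss.backward()
```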
Complex data structures such as time series are increasingly present in
modern data science problems. A fundamental question is whether two such
time series are statistically dependent. Many current approaches make
parametric assumptions on the random processes, only detect linear association,
require multiple tests, or forfeit power in high-dimensional, nonlinear
settings. Estimating the distribution of any test statistic under the null is
non-trivial, as the permutation test is invalid. This work juxtaposes distance
correlation (Dcorr) and multiscale graph correlation (MGC) from independence
testing literature and block permutation from time series analysis to address
these challenges. The proposed nonparametric procedure is valid and consistent,
building upon prior work by characterizing the geometry of the relationship,
estimating the time lag at which dependence is maximized, avoiding the need for
multiple testing, and exhibiting superior power in high-dimensional, low sample
size, nonlinear settings. Neural connectivity is analyzed via fMRI data,
revealing linear dependence of signals within the visual network and default
mode network, and nonlinear relationships in other networks. This work uncovers
a first-resort data analysis tool with open-source code available, directly
impacting a wide range of scientific disciplines. | [
"stat.ML",
"cs.LG",
"stat.ME"
] |
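The block-permutation device from the abstract above is straightforward to sketch: permute contiguous blocks of one series (preserving within-block autocorrelation that a naive permutation would destroy) to build the null distribution of any dependence statistic. The block length, the toy statistic standing in for Dcorr/MGC, and all names are illustrative.

```python
import numpy as np

def block_permutation_pvalue(stat_fn, x, y, block_len=20, n_perm=500, seed=0):
    """Permutation p-value for dependence between two time series, using
    whole-block shuffles of y to respect temporal autocorrelation."""
    rng = np.random.default_rng(seed)
    observed = stat_fn(x, y)
    n_blocks = len(y) // block_len
    blocks = y[: n_blocks * block_len].reshape(n_blocks, block_len)
    null = np.empty(n_perm)
    for i in range(n_perm):
        perm = blocks[rng.permutation(n_blocks)].ravel()
        null[i] = stat_fn(x[: len(perm)], perm)
    return (1 + np.sum(null >= observed)) / (1 + n_perm)

# Toy example: y depends on x with lag 3; |corr| stands in for Dcorr/MGC.
t = np.arange(400)
x = np.sin(t / 10) + 0.1 * np.random.randn(400)
y = np.roll(x, 3)
p = block_permutation_pvalue(lambda a, b: abs(np.corrcoef(a, b)[0, 1]), x, y)
```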
An important facet of reinforcement learning (RL) has to do with how the
agent goes about exploring the environment. Traditional exploration strategies
typically focus on efficiency and ignore safety. However, for practical
applications, ensuring safety of the agent during exploration is crucial since
performing an unsafe action or reaching an unsafe state could result in
irreversible damage to the agent. The main challenge of safe exploration is
that characterizing the unsafe states and actions is difficult for large
continuous state or action spaces and unknown environments. In this paper, we
propose a novel approach to incorporate estimations of safety to guide
exploration and policy search in deep reinforcement learning. By using a cost
function to capture trajectory-based safety, our key idea is to formulate the
state-action value function of this safety cost as a candidate Lyapunov
function and extend control-theoretic results to approximate its derivative
using online Gaussian Process (GP) estimation. We show how to use these
statistical models to guide the agent in unknown environments to obtain
high-performance control policies with provable stability certificates. | [
"cs.LG",
"cs.AI",
"cs.RO"
] |
We report, to our knowledge, the first end-to-end application of Generative
Adversarial Networks (GANs) towards the synthesis of Optical Coherence
Tomography (OCT) images of the retina. Generative models have gained recent
attention for the increasingly realistic images they can synthesize, given a
sampling of a data type. In this paper, we apply GANs to a sampling
distribution of OCTs of the retina. We observe the synthesis of realistic OCT
images depicting recognizable pathology such as macular holes, choroidal
neovascular membranes, myopic degeneration, cystoid macular edema, and central
serous retinopathy amongst others. This represents the first such report of its
kind. Potential applications of this new technology include surgical
simulation, treatment planning, disease prognostication, and accelerating the
development of new drugs and surgical procedures to treat
retinal disease. | [
"cs.CV",
"cs.LG"
] |
Recently, face super-resolution (FSR) methods either feed the whole face image
into convolutional neural networks (CNNs) or utilize extra facial priors (e.g.,
facial parsing maps, facial landmarks) to focus on facial structure, thereby
maintaining the consistency of the facial structure while restoring facial
details. However, the limited receptive fields of CNNs and inaccurate facial
priors will reduce the naturalness and fidelity of the reconstructed face. In
this paper, we propose a novel paradigm based on the self-attention mechanism
(i.e., the core of Transformer) to fully explore the representation capacity of
the facial structure feature. Specifically, we design a Transformer-CNN
aggregation network (TANet) consisting of two paths, in which one path uses
CNNs responsible for restoring fine-grained facial details while the other
utilizes a resource-friendly Transformer to capture global information by
exploiting the long-distance visual relation modeling. By aggregating the
features from the above two paths, the consistency of global facial structure
and fidelity of local facial detail restoration are strengthened
simultaneously. Experimental results of face reconstruction and recognition
verify that the proposed method can significantly outperform the
state-of-the-art methods. | [
"cs.CV"
] |
We study session-based recommendation scenarios where we want to recommend
items to users during sequential interactions to improve their long-term
utility. Optimizing a long-term metric is challenging because the learning
signal (whether the recommendations achieved their desired goals) is delayed
and confounded by other user interactions with the system. Targeting
immediately measurable proxies such as clicks can lead to suboptimal
recommendations due to misalignment with the long-term metric. We develop a new
reinforcement learning algorithm called Short Horizon Policy Improvement (SHPI)
that approximates policy-induced drift in user behavior across sessions. SHPI
is a straightforward modification of episodic RL algorithms for session-based
recommendation, that additionally gives an appropriate termination bonus in
each session. Empirical results on four recommendation tasks show that SHPI can
outperform state-of-the-art recommendation techniques like matrix factorization
with offline proxy signals, bandits with myopic online proxies, and RL
baselines with limited amounts of user interaction. | [
"cs.LG"
] |
Learning to capture long-range relations is fundamental to image/video
recognition. Existing CNN models generally rely on increasing depth to model
such relations which is highly inefficient. In this work, we propose the
"double attention block", a novel component that aggregates and propagates
informative global features from the entire spatio-temporal space of input
images/videos, enabling subsequent convolution layers to access features from
the entire space efficiently. The component is designed with a double attention
mechanism in two steps, where the first step gathers features from the entire
space into a compact set through second-order attention pooling and the second
step adaptively selects and distributes features to each location via another
attention. The proposed double attention block is easy to adopt and can be
plugged into existing deep neural networks conveniently. We conduct extensive
ablation studies and experiments on both image and video recognition tasks for
evaluating its performance. On the image recognition task, a ResNet-50 equipped
with our double attention blocks outperforms a much larger ResNet-152
architecture on the ImageNet-1k dataset with over 40% fewer parameters and
fewer FLOPs. On the action recognition task, our proposed model achieves the
state-of-the-art results on the Kinetics and UCF-101 datasets with
significantly higher efficiency than recent works. | [
"cs.CV"
] |
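The gather-then-distribute structure described in the double attention abstract above can be written as two attention-weighted matrix products over flattened positions; a NumPy sketch follows, with 1x1-conv projections reduced to plain matrix multiplies and all dimensions illustrative.

```python
import numpy as np

def softmax(z, axis):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def double_attention(X, Wa, Wb, Wv):
    """Sketch of a double attention block on flattened features X (N x C),
    N = H*W(*T) positions. Step 1 gathers global features via second-order
    attention pooling; step 2 adaptively distributes them per position."""
    A = X @ Wa                          # (N, m) feature maps
    B = softmax(X @ Wb, axis=0)         # (N, n) gathering attention over positions
    G = A.T @ B                         # (m, n) compact set of global descriptors
    V = softmax(X @ Wv, axis=1)         # (N, n) per-position selection weights
    return V @ G.T                      # (N, m) distributed global features

X = np.random.randn(64, 32)             # 64 positions, 32 channels (toy sizes)
out = double_attention(X, *(np.random.randn(32, k) for k in (16, 8, 8)))
```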
For autonomous vehicles to viably replace human drivers they must contend
with inclement weather. Falling rain and snow introduce noise in LiDAR returns
resulting in both false positive and false negative object detections. In this
article we introduce the Winter Adverse Driving dataSet (WADS) collected in the
snow belt region of Michigan's Upper Peninsula. WADS is the first multi-modal
dataset featuring dense point-wise labeled sequential LiDAR scans collected in
severe winter weather; weather that would cause an experienced driver to alter
their driving behavior. We have labelled, and will make available, over 7 GB
(3.6 billion labelled LiDAR points) out of over 26 TB of total LiDAR and camera
data collected. We also present the Dynamic Statistical Outlier Removal (DSOR)
filter, a statistical PCL-based filter capable of removing snow with a higher
recall than the state-of-the-art snow de-noising filter while being 28%
faster. Further, the DSOR filter is shown to have a lower time complexity
compared to the state of the art, resulting in improved scalability.
Our labeled dataset and DSOR filter will be made available at
https://bitbucket.org/autonomymtu/dsor_filter | [
"cs.CV",
"cs.RO"
] |
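A sketch of the DSOR idea as described in the abstract above: like PCL's statistical outlier removal, flag points whose mean k-NN distance exceeds a threshold, but scale that threshold with range so the naturally sparser far-field returns are not removed as snow. Parameter names and values are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def dsor_filter(points, k=4, s=1.0, r=0.05):
    """Dynamic Statistical Outlier Removal (illustrative sketch): a global
    SOR-style threshold on mean k-NN distance, made range-dependent."""
    tree = cKDTree(points)
    knn_dist, _ = tree.query(points, k=k + 1)       # column 0 is the point itself
    mean_knn = knn_dist[:, 1:].mean(axis=1)
    thresh = mean_knn.mean() + s * mean_knn.std()   # global SOR threshold
    ranges = np.linalg.norm(points, axis=1)         # distance from the sensor
    dynamic_thresh = np.maximum(thresh * r * ranges, 1e-6)
    return points[mean_knn < dynamic_thresh]        # keep inliers only

cloud = np.random.randn(10000, 3) * 20.0            # stand-in LiDAR scan
filtered = dsor_filter(cloud)
```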
Generative adversarial networks (GANs) provide a way to learn deep
representations without extensively annotated training data. They achieve this
through deriving backpropagation signals through a competitive process
involving a pair of networks. The representations that can be learned by GANs
may be used in a variety of applications, including image synthesis, semantic
image editing, style transfer, image super-resolution and classification. The
aim of this review paper is to provide an overview of GANs for the signal
processing community, drawing on familiar analogies and concepts where
possible. In addition to identifying different methods for training and
constructing GANs, we also point to remaining challenges in their theory and
application. | [
"cs.CV"
] |
We present a self-supervised approach to training convolutional neural
networks for dense depth estimation from monocular endoscopy data without a
priori modeling of anatomy or shading. Our method only requires sequential data
from monocular endoscopic videos and a multi-view stereo reconstruction method,
e.g. structure from motion, that supervises learning in a sparse but accurate
manner. Consequently, our method requires neither manual interaction, such as
scaling or labeling, nor patient CT in the training and application phases. We
demonstrate the performance of our method on sinus endoscopy data from two
patients and validate depth prediction quantitatively using corresponding
patient CT scans where we found submillimeter residual errors. | [
"cs.CV"
] |
Satisfying the high computation demand of modern deep learning architectures
is challenging for achieving low inference latency. The current approaches in
decreasing latency only increase parallelism within a layer. This is because
architectures typically capture a single-chain dependency pattern that prevents
efficient distribution with a higher concurrency (i.e., simultaneous execution
of one inference among devices). Such single-chain dependencies are so
widespread that they even implicitly bias recent neural architecture search
(NAS) studies. In this visionary paper, we draw attention to an entirely new space of
NAS that relaxes the single-chain dependency to provide higher concurrency and
distribution opportunities. To quantitatively compare these architectures, we
propose a score that encapsulates crucial metrics such as communication,
concurrency, and load balancing. Additionally, we propose a new generator and
transformation block that consistently deliver superior architectures compared
to current state-of-the-art methods. Finally, our preliminary results show that
these new architectures reduce the inference latency and deserve more
attention. | [
"cs.CV"
] |
3D object detection has attracted much attention thanks to the advances in
sensors and deep learning methods for point clouds. Current state-of-the-art
methods like VoteNet regress direct offset towards object centers and box
orientations with an additional Multi-Layer-Perceptron network. Both their
offset and orientation predictions are inaccurate due to the fundamental
difficulty of rotation classification. In this work, we disentangle the direct
offset into Local Canonical Coordinates (LCC), box scales and box orientations.
Only LCC and box scales are regressed while box orientations are generated by a
canonical voting scheme. Finally, an LCC-aware back-projection checking
algorithm iteratively cuts out bounding boxes from the generated vote maps,
with the elimination of false positives. Our model achieves state-of-the-art
performance on challenging large-scale datasets of real point cloud scans:
ScanNet and SceneNN, with 8.8 and 5.1 mAP improvements respectively. Code is
available on https://github.com/qq456cvb/CanonicalVoting. | [
"cs.CV"
] |
Context matters! Nevertheless, there has not been much research in exploiting
contextual information in deep neural networks. For the most part, the usage
of contextual information has been limited to recurrent neural networks.
Attention models and capsule networks are two recent ways of introducing
contextual information in non-recurrent models; however, both of these
algorithms were developed after this work started.
In this thesis, we show that contextual information can be exploited in two
fundamentally different ways: implicitly and explicitly. In the DeepScore
project, where the usage of context is very important for the recognition of
many tiny objects, we show that by carefully crafting convolutional
architectures, we can achieve state-of-the-art results, while also being able
to implicitly correctly distinguish between objects which are virtually
identical, but have different meanings based on their surrounding. In parallel,
we show that by explicitly designing algorithms (motivated by graph theory
and game theory) that take into consideration the entire structure of the
dataset, we can achieve state-of-the-art results in different topics like
semi-supervised learning and similarity learning.
To the best of our knowledge, we are the first to integrate graph-theoretical
modules, carefully crafted for the problem of similarity learning and that are
designed to consider contextual information, not only outperforming the other
models, but also gaining a speed improvement while using a smaller number of
parameters. | [
"cs.CV"
] |
Recently, transformation-based self-supervised learning has been applied to
generative adversarial networks (GANs) to mitigate the catastrophic forgetting
problem of discriminator by learning stable representations. However, the
separate self-supervised tasks in existing self-supervised GANs induce a goal
inconsistent with generative modeling, because the generator learns from
classifiers that are agnostic to the generator distribution. To address this issue,
we propose a novel self-supervised GANs framework with label augmentation,
i.e., augmenting the GAN labels (real or fake) with the self-supervised
pseudo-labels. In particular, the discriminator and the self-supervised
classifier are unified to learn a single task that predicts the augmented label
such that the discriminator/classifier is aware of the generator distribution,
while the generator tries to confuse the discriminator/classifier by optimizing
the discrepancy between the transformed real and generated distributions.
Theoretically, we prove that the generator, at the equilibrium point, converges
to replicate the data distribution. Empirically, we demonstrate that the
proposed method significantly outperforms competitive baselines on both
generative modeling and representation learning across benchmark datasets. | [
"cs.LG",
"cs.CV"
] |
Deep Learning Accelerators are prone to faults which manifest in the form of
errors in Neural Networks. Fault Tolerance in Neural Networks is crucial in
real-time safety critical applications requiring computation for long
durations. Neural Networks with high regularisation exhibit superior fault
tolerance, however, at the cost of classification accuracy. In view of the
difference in functionality, a Neural Network is modelled as two separate
networks, i.e., the Feature Extractor with an unsupervised learning objective
and the Classifier with a supervised learning objective. Traditional approaches
that train the entire network using a single supervised learning objective are
insufficient to achieve the objectives of the individual components optimally.
In this work, a novel multi-criteria objective function, combining unsupervised
training of the Feature Extractor followed by supervised tuning with Classifier
Network is proposed. The unsupervised training solves two games simultaneously
in the presence of adversary neural networks with conflicting objectives to the
Feature Extractor. The first game minimises the loss in reconstructing the
input image for indistinguishability given the features from the Extractor, in
the presence of a generative decoder. The second game solves a minimax
constraint optimisation for distributional smoothing of the feature space to
match a prior distribution, in the presence of a Discriminator network. The
resultant strongly regularised Feature Extractor is combined with the
Classifier Network for supervised fine-tuning. The proposed Adversarial Fault
Tolerant Neural Network Training is scalable to large networks and is
independent of the architecture. The evaluation on benchmarking datasets:
FashionMNIST and CIFAR10, indicates that the resultant networks have high
accuracy with superior tolerance to stuck at "0" faults compared to widely used
regularisers. | [
"cs.LG",
"cs.CR",
"cs.DC",
"cs.GT",
"stat.ML"
] |
Transposable Elements (TEs) or jumping genes are the DNA sequences that have
an intrinsic capability to move within a host genome from one genomic location
to another. Studies show that the presence of a TE within or adjacent to a
functional gene may alter its expression. TEs can also cause an increase in the
rate of mutation and can even mediate duplications and large insertions and
deletions in the genome, promoting gross genetic rearrangements. Thus, the
proper classification of the identified jumping genes is essential to
understand their genetic and evolutionary effects in the genome. While
computational methods have been developed that perform either binary
classification or multi-label classification of TEs, few studies have focused
on their hierarchical classification. The state-of-the-art machine learning
classification method utilizes a Multi-Layer Perceptron (MLP), a class of
neural network, for hierarchical classification of TEs. However, the existing
methods have limited accuracy in classifying TEs. A more effective classifier,
which can explain the role of TEs in germline and somatic evolution, is needed.
In this study, we examine the performance of a variety of machine learning (ML)
methods and, eventually, propose a robust approach for the hierarchical
classification of TEs, with higher accuracy, using Support Vector Machines
(SVM). | [
"cs.LG",
"q-bio.GN",
"stat.ML"
] |
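One standard way to realize the hierarchical SVM classification described above is a local classifier per parent node: a top-level SVM predicts the superfamily, then a per-superfamily SVM predicts the family. The two-level taxonomy, feature shapes, and labels below are illustrative, not the paper's actual hierarchy.

```python
import numpy as np
from sklearn.svm import SVC

# Stand-in TE features and a toy two-level taxonomy.
X = np.random.randn(300, 40)                       # e.g. k-mer/sequence features
super_y = np.random.randint(0, 2, 300)             # e.g. Class I vs. Class II
family_y = np.random.randint(0, 3, 300)            # families within each class

# Local-classifier-per-parent-node scheme: one root SVM, one SVM per branch.
root = SVC(kernel="rbf").fit(X, super_y)
children = {c: SVC(kernel="rbf").fit(X[super_y == c], family_y[super_y == c])
            for c in np.unique(super_y)}

def predict_hierarchy(x):
    c = root.predict(x.reshape(1, -1))[0]          # top-level decision first
    return c, children[c].predict(x.reshape(1, -1))[0]

superfamily, family = predict_hierarchy(X[0])
```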
Objective and interpretable metrics to evaluate current artificial
intelligent systems are of great importance, not only to analyze the current
state of such systems but also to objectively measure progress in the future.
In this work, we focus on the evaluation of image generation tasks. We propose
a novel approach, called Fuzzy Topology Impact (FTI), that determines both the
quality and diversity of an image set using topology representations combined
with fuzzy logic. When compared to current evaluation methods, FTI shows better
and more stable performance on multiple experiments evaluating the sensitivity
to noise, mode dropping and mode inventing. | [
"cs.CV",
"cs.LG"
] |
Deep Learning has revolutionized the fields of computer vision, natural
language understanding, speech recognition, information retrieval and more.
However, with the progressive improvements in deep learning models, their
number of parameters, latency, resources required to train, etc. have all
increased significantly. Consequently, it has become important to pay attention
to these footprint metrics of a model as well, not just its quality. We present
and motivate the problem of efficiency in deep learning, followed by a thorough
survey of the five core areas of model efficiency (spanning modeling
techniques, infrastructure, and hardware) and the seminal work there. We also
present an experiment-based guide along with code, for practitioners to
optimize their model training and deployment. We believe this is the first
comprehensive survey in the efficient deep learning space that covers the
landscape of model efficiency from modeling techniques to hardware support. Our
hope is that this survey would provide the reader with the mental model and the
necessary understanding of the field to apply generic efficiency techniques to
immediately get significant improvements, and also equip them with ideas for
further research and experimentation to achieve additional gains. | [
"cs.LG"
] |
Knowledge transferability, or transfer learning, has been widely adopted to
allow a pre-trained model in the source domain to be effectively adapted to
downstream tasks in the target domain. It is thus important to explore and
understand the factors affecting knowledge transferability. In this paper, as
the first work, we analyze and demonstrate the connections between knowledge
transferability and another important phenomenon--adversarial transferability,
\emph{i.e.}, adversarial examples generated against one model can be
transferred to attack other models. Our theoretical studies show that
adversarial transferability indicates knowledge transferability and vice versa.
Moreover, based on the theoretical insights, we propose two practical
adversarial transferability metrics to characterize this process, serving as
bidirectional indicators between adversarial and knowledge transferability. We
conduct extensive experiments for different scenarios on diverse datasets,
showing a positive correlation between adversarial transferability and
knowledge transferability. Our findings will shed light on future research
about effective knowledge transfer learning and adversarial transferability
analyses. | [
"cs.LG",
"cs.AI",
"cs.CV",
"stat.ML"
] |
We propose a video story question-answering (QA) architecture, Multimodal
Dual Attention Memory (MDAM). The key idea is to use a dual attention mechanism
with late fusion. MDAM uses self-attention to learn the latent concepts in
scene frames and captions. Given a question, MDAM uses the second attention
over these latent concepts. Multimodal fusion is performed after the dual
attention processes (late fusion). Using this processing pipeline, MDAM learns
to infer a high-level vision-language joint representation from an abstraction
of the full video content. We evaluate MDAM on PororoQA and MovieQA datasets
which have large-scale QA annotations on cartoon videos and movies,
respectively. For both datasets, MDAM achieves new state-of-the-art results
with significant margins compared to the runner-up models. We confirm the best
performance of the dual attention mechanism combined with late fusion by
ablation studies. We also perform qualitative analysis by visualizing the
inference mechanisms of MDAM. | [
"cs.CV",
"cs.AI",
"cs.CL",
"cs.MM"
] |
Cross-modal person re-identification (Re-ID) is critical for modern video
surveillance systems. The key challenge is to align inter-modality
representations according to semantic information present for a person and
ignore background information. In this work, we present AXM-Net, a novel CNN
based architecture designed for learning semantically aligned visual and
textual representations. The underlying building block consists of multiple
streams of feature maps coming from visual and textual modalities and a novel
learnable context sharing semantic alignment network. We also propose
complementary intra-modal attention learning mechanisms to focus on more
fine-grained local details in the features along with a cross-modal affinity
loss for robust feature matching. Our design is unique in its ability to
implicitly learn feature alignments from data. The entire AXM-Net can be
trained in an end-to-end manner. We report results on both person search and
cross-modal Re-ID tasks. Extensive experimentation validates the proposed
framework and demonstrates its superiority by outperforming the current
state-of-the-art methods by a significant margin. | [
"cs.CV",
"cs.LG"
] |
This work presents a reformulation of the recently proposed Wasserstein
autoencoder framework on a non-Euclidean manifold, the Poincaré ball model of
the hyperbolic space. By assuming the latent space to be hyperbolic, we can use
its intrinsic hierarchy to impose structure on the learned latent space
representations. We demonstrate the model in the visual domain to analyze some
of its properties and show competitive results on a graph link prediction task. | [
"cs.LG",
"stat.ML"
] |
Over the last few years, the phenomenon of adversarial examples ---
maliciously constructed inputs that fool trained machine learning models ---
has captured the attention of the research community, especially when the
adversary is restricted to small modifications of a correctly handled input.
Less surprisingly, image classifiers also lack human-level performance on
randomly corrupted images, such as images with additive Gaussian noise. In this
paper we provide both empirical and theoretical evidence that these are two
manifestations of the same underlying phenomenon, establishing close
connections between the adversarial robustness and corruption robustness
research programs. This suggests that improving adversarial robustness should
go hand in hand with improving performance in the presence of more general and
realistic image corruptions. Based on our results we recommend that future
adversarial defenses consider evaluating the robustness of their methods to
distributional shift with benchmarks such as Imagenet-C. | [
"cs.LG",
"cs.CV",
"stat.ML"
] |
Carton detection is an important technique in the automatic logistics system
and can be applied to many applications such as the stacking and unstacking of
cartons and the unloading of cartons from containers. However, to date there is
no public large-scale carton dataset for the research community to train and
evaluate carton detection models, which hinders the development
of carton detection. In this paper, we present a large-scale carton dataset
named Stacked Carton Dataset (SCD) with the goal of advancing the
state-of-the-art in carton detection. Images are collected from the internet
and several warehouses, and objects are labeled using per-instance
segmentation for precise localization. In total, there are 250,000 instance
masks from 16,136 images. In addition, we design a carton detector based on
RetinaNet by embedding an Offset Prediction between Classification and
Localization module (OPCL) and a Boundary Guided Supervision module (BGS).
OPCL alleviates the
imbalance problem between classification and localization quality which boosts
AP by 3.1% - 4.7% on SCD while BGS guides the detector to pay more attention to
boundary information of cartons and decouple repeated carton textures. To
demonstrate the generalization of OPCL to other datasets, we conduct extensive
experiments on MS COCO and PASCAL VOC. The improvement of AP on MS COCO and
PASCAL VOC is 1.8% - 2.2% and 3.4% - 4.3% respectively. | [
"cs.CV"
] |
Video transmission applications (e.g., conferencing) are gaining momentum,
especially in times of global health pandemic. Video signals are transmitted
over lossy channels, resulting in low-quality received signals. To restore
videos on recipient edge devices in real-time, we introduce an efficient video
restoration network, EVRNet. EVRNet efficiently allocates parameters inside the
network using alignment, differential, and fusion modules. With extensive
experiments on video restoration tasks (deblocking, denoising, and
super-resolution), we demonstrate that EVRNet delivers competitive performance
to existing methods with significantly fewer parameters and MACs. For example,
EVRNet has 260 times fewer parameters and 958 times fewer MACs than enhanced
deformable convolution-based video restoration network (EDVR) for 4 times video
super-resolution while its SSIM score is 0.018 less than EDVR. We also
evaluated the performance of EVRNet under multiple distortions on an unseen
dataset to demonstrate its ability to model variable-length sequences under
both camera and object motion. | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
Graph convolutional networks (GCNs) have recently become one of the most
powerful tools for graph analytics tasks in numerous applications, ranging from
social networks and natural language processing to bioinformatics and
chemoinformatics, thanks to their ability to capture the complex relationships
between concepts. At present, the vast majority of GCNs use a neighborhood
aggregation framework to learn a continuous and compact vector, and then
perform a pooling operation to generalize the graph embedding for the
classification task. These approaches have two disadvantages in the graph
classification task: (1) when only the largest sub-graph structure ($k$-hop
neighborhood) is used for neighborhood aggregation, a large amount of
early-stage information is lost during the graph convolution step; (2) when
simple average/sum pooling or max pooling is utilized, the characteristics of
each node and the topology between nodes are lost. In this paper, we propose a
novel framework called dual
attention graph convolutional networks (DAGCN) to address these problems. DAGCN
automatically learns the importance of neighbors at different hops using a
novel attention graph convolution layer, and then employs a second attention
component, a self-attention pooling layer, to generalize the graph
representation from the various aspects of a matrix graph embedding. The dual
attention network is trained in an end-to-end manner for the graph
classification task. We compare our model with state-of-the-art graph kernels
and other deep learning methods. The experimental results show that our
framework not only outperforms other baselines but also achieves a better rate
of convergence. | [
"cs.LG",
"stat.ML"
] |
The design of neural network architectures is an important component for
achieving state-of-the-art performance with machine learning systems across a
broad array of tasks. Much work has endeavored to design and build
architectures automatically through clever construction of a search space
paired with simple learning algorithms. Recent progress has demonstrated that
such meta-learning methods may exceed scalable human-invented architectures on
image classification tasks. An open question is the degree to which such
methods may generalize to new domains. In this work we explore the construction
of meta-learning techniques for dense image prediction focused on the tasks of
scene parsing, person-part segmentation, and semantic image segmentation.
Constructing viable search spaces in this domain is challenging because of the
multi-scale representation of visual information and the necessity to operate
on high resolution imagery. Based on a survey of techniques in dense image
prediction, we construct a recursive search space and demonstrate that even
with efficient random search, we can identify architectures that outperform
human-invented architectures and achieve state-of-the-art performance on three
dense prediction tasks including 82.7% on Cityscapes (street scene parsing),
71.3% on PASCAL-Person-Part (person-part segmentation), and 87.9% on PASCAL
VOC 2012 (semantic image segmentation). Additionally, the resulting
architecture is more computationally efficient, requiring half the parameters
and half the computational cost as previous state of the art systems. | [
"cs.CV",
"cs.LG",
"stat.ML"
] |
Image Captioning, or the automatic generation of descriptions for images, is
one of the core problems in Computer Vision and has seen considerable progress
using Deep Learning Techniques. We propose to use Inception-ResNet
Convolutional Neural Network as encoder to extract features from images,
Hierarchical Context based Word Embeddings for word representations and a Deep
Stacked Long Short Term Memory network as decoder, in addition to using Image
Data Augmentation to avoid over-fitting. For data augmentation, we use
Horizontal and Vertical Flipping in addition to Perspective Transformations on
the images. We evaluate our proposed methods with two image captioning
frameworks - Encoder-Decoder and Soft Attention. Evaluation on widely used
metrics has shown that our approach leads to considerable improvement in model
performance. | [
"cs.CV",
"cs.AI",
"cs.LG",
"cs.MM",
"cs.NE"
] |
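The augmentation pipeline described above (horizontal/vertical flips plus perspective transformations) maps directly onto torchvision's standard transforms. A hedged sketch follows; the probabilities and distortion scale are assumptions, since the abstract does not specify them.

```python
# Flip + perspective augmentations via standard torchvision transforms.
from PIL import Image
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomVerticalFlip(p=0.5),
    transforms.RandomPerspective(distortion_scale=0.4, p=0.5),  # assumed values
])

img = Image.new("RGB", (224, 224), color=(120, 60, 200))  # toy image
augmented = augment(img)
print(augmented.size)  # (224, 224)
```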
This article studies the domain adaptation problem in person
re-identification (re-ID) under a "learning via translation" framework,
consisting of two components, 1) translating the labeled images from the source
to the target domain in an unsupervised manner, 2) learning a re-ID model using
the translated images. The objective is to preserve the underlying human
identity information after image translation, so that translated images with
labels are effective for feature learning on the target domain. To this end, we
propose a similarity preserving generative adversarial network (SPGAN) and its
end-to-end trainable version, eSPGAN. While both aim at similarity
preservation, SPGAN enforces this property via heuristic constraints, whereas
eSPGAN does so by optimally facilitating re-ID model learning. More
specifically, SPGAN
separately undertakes the two components in the "learning via translation"
framework. It first preserves two types of unsupervised similarity, namely,
self-similarity of an image before and after translation, and
domain-dissimilarity of a translated source image and a target image. It then
learns a re-ID model using existing networks. In comparison, eSPGAN seamlessly
integrates image translation and re-ID model learning. During the end-to-end
training of eSPGAN, re-ID learning guides image translation to preserve the
underlying identity information of an image. Meanwhile, image translation
improves re-ID learning by providing identity-preserving training samples of
the target domain style. In the experiment, we show that identities of the fake
images generated by SPGAN and eSPGAN are well preserved. Based on this, we
report the new state-of-the-art domain adaptation results on two large-scale
person re-ID datasets. | [
"cs.CV"
] |
A rear-end collision warning system plays a great role in enhancing driving
safety. In such a system, several measures are used to estimate the danger,
and the system warns drivers to be more cautious. The processes in such a
system should be executed in real time so that enough time and distance remain
to avoid a collision with the front vehicle. To this end, in this paper a new
system is developed using a random forest classifier. To evaluate the
performance of the proposed system, vehicle trajectory data from the 100-Car
database of the Virginia Tech Transportation Institute are used, and the
methods are compared based on their accuracy and their processing time. Using
the TOPSIS multi-criteria selection method, we show that the results of the
implemented classifier are better than those of different classifiers,
including a Bayesian network, naive Bayes, an MLP neural network, a support
vector machine, nearest neighbor, rule-based methods, and a decision tree. The
presented experiments reveal that the random forest is an acceptable algorithm
for the proposed driver assistance system, with 88.4% accuracy for detecting
warning situations and 94.7% for detecting safe situations. | [
"cs.CV"
] |
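For the classifier itself, a scikit-learn random forest is the natural fit. The sketch below uses made-up stand-in trajectory features (relative speed, gap, time-to-collision) rather than the 100-Car data, so only the pipeline, not the numbers, mirrors the paper.

```python
# Warning/safe classification on toy trajectory features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))            # [rel_speed, gap, ttc] stand-ins
y = (X[:, 0] - X[:, 1] + 0.5 * X[:, 2] > 0).astype(int)  # 1 = warning (toy rule)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"accuracy: {accuracy_score(y_te, clf.predict(X_te)):.3f}")
```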
Graph Neural Network (GNN) is a popular architecture for the analysis of
chemical molecules, and it has numerous applications in material and medicinal
science. Current lines of GNNs developed for molecular analysis, however, do
not fit well on the training set, and their performance does not scale well
with the complexity of the network. In this paper, we propose an auxiliary
module to be attached to a GNN that can boost the representation power of the
model without interfering with the original GNN architecture. Our auxiliary
module can be attached to a wide variety of GNNs, including those that are used
commonly in biochemical applications. With our auxiliary architecture, the
performances of many GNNs used in practice improve more consistently, achieving
the state-of-the-art performance on popular molecular graph datasets. | [
"cs.LG",
"stat.ML"
] |
The safety constraints commonly used by existing safe reinforcement learning
(RL) methods are defined only on expectation of initial states, but allow each
certain state to be unsafe, which is unsatisfying for real-world
safety-critical tasks. In this paper, we introduce the feasible actor-critic
(FAC) algorithm, which is the first model-free constrained RL method that
considers statewise safety, e.g., safety for each initial state. We claim that
some states are inherently unsafe no matter what policy we choose, while for
other states there exist policies ensuring safety, where we say such states and
policies are feasible. By constructing a statewise Lagrange function available
on RL sampling and adopting an additional neural network to approximate the
statewise Lagrange multiplier, we manage to obtain the optimal feasible policy
which ensures safety for each feasible state and the safest possible policy for
infeasible states. Furthermore, the trained multiplier net can indicate whether
a given state is feasible or not through the statewise complementary slackness
condition. We provide theoretical guarantees that FAC outperforms previous
expectation-based constrained RL methods in terms of both constraint
satisfaction and reward optimization. Experimental results on both robot
locomotion tasks and safe exploration tasks verify the safety enhancement and
feasibility interpretation of the proposed method. | [
"cs.LG"
] |
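The statewise multiplier described above can be approximated by a small network lambda(s) >= 0 that scales a cost critic in the actor loss. The sketch below is our own simplified rendering of that idea, with invented shapes and stand-in critic values; FAC's full update rules are richer.

```python
# Statewise Lagrange multiplier network: lambda(s) >= 0 penalizes the actor
# loss wherever the (stand-in) cost critic signals a constraint violation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiplierNet(nn.Module):
    def __init__(self, state_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                                 nn.Linear(64, 1))

    def forward(self, s):
        return F.softplus(self.net(s))  # keeps lambda(s) non-negative

states = torch.rand(32, 4)
q_reward = torch.rand(32, 1)            # stand-in reward critic values
q_cost = torch.rand(32, 1) - 0.5        # > 0 means constraint violated
lam = MultiplierNet(4)(states)

actor_loss = (-q_reward + lam.detach() * q_cost).mean()   # descend on policy
multiplier_loss = -(lam * q_cost.detach()).mean()         # ascend on lambda
print(actor_loss.item(), multiplier_loss.item())
```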
Convolutional neural networks have achieved remarkable performance in many
computer vision tasks. However, CNNs tend to be biased toward low-frequency
components: they prioritize capturing low-frequency patterns, which leads them
to fail under application-scenario transformations, while the existence of
adversarial examples implies that the models are very sensitive to
high-frequency perturbations. In this paper, we introduce a new regularization
method that constrains the frequency spectra of the model's filters. Different
from band-limited training, our method assumes that the valid frequency range
is probably entangled across different layers rather than continuous, and it
trains the valid frequency range end-to-end by backpropagation. We demonstrate
the effectiveness of our regularization by (1) defending against adversarial
perturbations; (2) reducing the generalization gap across different
architectures; (3) improving generalization in transfer learning scenarios
without fine-tuning. | [
"cs.LG",
"stat.ML"
] |
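To make the idea of constraining filter spectra concrete, the snippet below implements a simplified, fixed-mask variant: it penalizes the energy of each convolution filter's 2-D spectrum outside a low-frequency corner. The paper instead learns the valid range end-to-end, so treat this purely as an illustration.

```python
# Simplified spectral regularizer for a conv filter bank (our own variant).
import torch

def spectrum_penalty(weight, keep=1):
    # weight: (out_ch, in_ch, k, k) conv filter bank
    spec = torch.fft.fft2(weight)                    # per-filter 2-D FFT
    mag = spec.abs()
    mask = torch.ones_like(mag)
    mask[..., :keep, :keep] = 0.0                    # keep lowest frequencies
    return (mag * mask).pow(2).mean()                # punish the rest

w = torch.randn(16, 3, 3, 3, requires_grad=True)
loss = spectrum_penalty(w)
loss.backward()                                      # add to any task loss
print(loss.item(), w.grad.shape)
```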
Inspired by how the human brain employs more neural pathways when increasing
the focus on a subject, we introduce a novel twin cascaded attention model that
outperforms a state-of-the-art image captioning model that was originally
implemented using one channel of attention for the visual grounding task.
Visual grounding ensures the existence of words in the caption sentence that
are grounded into a particular region in the input image. After a deep learning
model is trained on visual grounding task, the model employs the learned
patterns regarding the visual grounding and the order of objects in the caption
sentences, when generating captions. We report the results of our experiments
in three image captioning tasks on the COCO dataset. The results are reported
using standard image captioning metrics to show the improvements achieved by
our model over the previous image captioning model. The results gathered from
our experiments suggest that employing more parallel attention pathways in a
deep neural network leads to higher performance. Our implementation of Neural
Twins Talk (NTT) is publicly available at:
https://github.com/zanyarz/NeuralTwinsTalk. | [
"cs.CV"
] |
We introduce Activity Graph Transformer, an end-to-end learnable model for
temporal action localization, that receives a video as input and directly
predicts a set of action instances that appear in the video. Detecting and
localizing action instances in untrimmed videos requires reasoning over
multiple action instances in a video. The dominant paradigms in the literature
process videos temporally to either propose action regions or directly produce
frame-level detections. However, sequential processing of videos is problematic
when the action instances have non-sequential dependencies and/or non-linear
temporal ordering, such as overlapping action instances or re-occurrence of
action instances over the course of the video. In this work, we capture this
non-linear temporal structure by reasoning over the videos as non-sequential
entities in the form of graphs. We evaluate our model on challenging datasets:
THUMOS14, Charades, and EPIC-Kitchens-100. Our results show that our proposed
model outperforms the state-of-the-art by a considerable margin. | [
"cs.CV",
"cs.AI"
] |
We consider the task of Inverse Reinforcement Learning in Contextual Markov
Decision Processes (MDPs). In this setting, contexts, which define the reward
and transition kernel, are sampled from a distribution. In addition, although
the reward is a function of the context, it is not provided to the agent.
Instead, the agent observes demonstrations from an optimal policy. The goal is
to learn the reward mapping, such that the agent will act optimally even when
encountering previously unseen contexts, also known as zero-shot transfer. We
formulate this problem as a non-differentiable convex optimization problem and
propose a novel algorithm to compute its subgradients. Based on this scheme, we
analyze several methods both theoretically, where we compare the sample
complexity and scalability, and empirically. Most importantly, we show both
theoretically and empirically that our algorithms perform zero-shot transfer
(generalize to new and unseen contexts). Specifically, we present empirical
experiments in a dynamic treatment regime, where the goal is to learn a reward
function which explains the behavior of expert physicians based on recorded
data of them treating patients diagnosed with sepsis. | [
"cs.LG",
"stat.ML"
] |
In this paper, we present a semi-supervised deep quick learning framework for
instance detection and pixel-wise semantic segmentation of images in a dense
clutter of items. The framework can quickly and incrementally learn novel items
in an online manner by real-time data acquisition and generating corresponding
ground truths on its own. To learn various combinations of items, it can
synthesize cluttered scenes, in real time. The overall approach is based on the
tutor-child analogy in which a deep network (tutor) is pretrained for
class-agnostic object detection which generates labeled data for another deep
network (child). The child utilizes a customized convolutional neural network
head for the purpose of quick learning. There are broadly four key components
of the proposed framework: semi-supervised labeling, occlusion-aware clutter
synthesis, a customized convolutional neural network head, and instance
detection. The initial version of this framework was implemented during our
participation in Amazon Robotics Challenge (ARC), 2017. Our system was ranked
3rd, 4th, and 5th worldwide in the pick, stow-pick, and stow tasks,
respectively. The proposed framework is an improved version of our ARC17
system, to which novel features such as instance detection and online learning
have been added. | [
"cs.CV",
"cs.RO"
] |
Elevator button recognition is a critical function to realize the autonomous
operation of elevators. However, challenging image conditions and various image
distortions make it difficult to recognize buttons accurately. To address this
problem, we propose a novel deep learning-based approach, which aims to
autonomously correct perspective distortions of elevator button images based on
button corner detection results. First, we leverage a novel image segmentation
model and the Hough Transform method to obtain button segmentation and button
corner detection results. Then, pixel coordinates of standard button corners
are used as reference features to estimate camera motions for correcting
perspective distortions. Fifteen elevator button images are captured from
different angles of view as the dataset. The experimental results demonstrate
that our proposed approach is capable of estimating camera motions and removing
perspective distortions of elevator button images with high accuracy. | [
"cs.CV",
"cs.RO"
] |
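The correction step, estimating a transform from detected button corners to their ideal fronto-parallel positions, is a standard homography problem in OpenCV. The corner coordinates below are invented for illustration.

```python
# Perspective correction from detected corners via a homography.
import cv2
import numpy as np

detected = np.float32([[40, 60], [300, 50], [310, 400], [30, 410]])  # toy corners
ideal = np.float32([[0, 0], [320, 0], [320, 440], [0, 440]])         # target layout

H, _ = cv2.findHomography(detected, ideal, cv2.RANSAC)
img = np.zeros((480, 360, 3), np.uint8)          # stand-in button panel image
corrected = cv2.warpPerspective(img, H, (320, 440))
print(corrected.shape)  # (440, 320, 3)
```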
In this paper, we propose a new mathematical model for image processing. It
is a logarithmic one. We consider the bounded interval (-1, 1) as the set of
gray levels. Firstly, we define two operations: addition <+> and real scalar
multiplication <x>. With these operations, the set of gray levels becomes a
real vector space. Then, defining the scalar product (.|.) and the norm || .
||, we obtain a Euclidean space of gray levels. Secondly, we extend these
operations and functions for color images. We finally show the effect of
various simple operations on an image. | [
"cs.CV"
] |
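The abstract does not give the formulas for <+> and <x>. One standard construction that makes (-1, 1) a real vector space pulls back ordinary arithmetic through the atanh bijection to the reals; we assume that construction below purely for illustration, and the paper's definitions may differ.

```python
# Assumed gray-level vector space on (-1, 1) via the atanh bijection.
import math

def gadd(a, b):                 # <+> : pull back ordinary addition via atanh
    return (a + b) / (1 + a * b)

def gscale(lam, a):             # <x> : real scalar multiplication
    return math.tanh(lam * math.atanh(a))

a, b = 0.5, -0.2
print(gadd(a, b))               # stays inside (-1, 1)
print(gscale(2.0, a))           # equals gadd(a, a)
print(abs(gscale(2.0, a) - gadd(a, a)) < 1e-12)  # vector-space consistency
```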
Learning latent representations of nodes in graphs is an important and
ubiquitous task with widespread applications such as link prediction, node
classification, and graph visualization. Previous methods on graph
representation learning mainly focus on static graphs, however, many real-world
graphs are dynamic and evolve over time. In this paper, we present Dynamic
Self-Attention Network (DySAT), a novel neural architecture that operates on
dynamic graphs and learns node representations that capture both structural
properties and temporal evolutionary patterns. Specifically, DySAT computes
node representations by jointly employing self-attention layers along two
dimensions: structural neighborhood and temporal dynamics. We conduct link
prediction experiments on two classes of graphs: communication networks and
bipartite rating networks. Our experimental results show that DySAT has a
significant performance gain over several different state-of-the-art graph
embedding baselines. | [
"cs.LG",
"cs.SI",
"stat.ML"
] |
COVID-19 (coronavirus disease 2019) pandemic caused by SARS-CoV-2 has led to
a treacherous and devastating catastrophe for humanity. At the time of writing,
no specific antivirus drugs or vaccines are recommended to control infection
transmission and spread. The current diagnosis of COVID-19 is done by
Reverse-Transcription Polymer Chain Reaction (RT-PCR) testing. However, this
method is expensive, time-consuming, and not easily available in straitened
regions. An interpretable and COVID-19 diagnosis AI framework is devised and
developed based on the cough sounds features and symptoms metadata to overcome
these limitations. The proposed framework's performance was evaluated using a
medical dataset containing Symptoms and Demographic data of 30000 audio
segments, 328 cough sounds from 150 patients with four cough classes (
COVID-19, Asthma, Bronchitis, and Healthy). Experiments' results show that the
model captures the better and robust feature embedding to distinguish between
COVID-19 patient coughs and several types of non-COVID-19 coughs with higher
specificity and accuracy of 95.04 $\pm$ 0.18% and 96.83$\pm$ 0.18%
respectively, all the while maintaining interpretability. | [
"cs.LG",
"cs.SD",
"eess.AS"
] |
Unsupervised learning can leverage large-scale data sources without the need
for annotations. In this context, deep learning-based autoencoders have shown
great potential in detecting anomalies in medical images. However,
state-of-the-art anomaly scores are still based on the reconstruction error,
which is lacking in two essential respects: it ignores the model-internal
representation employed for reconstruction, and it lacks formal assertions and
comparability between samples. We address these shortcomings by proposing the
Context-encoding Variational Autoencoder (ceVAE) which combines reconstruction-
with density-based anomaly scoring. This improves the sample- as well as
pixel-wise results. In our experiments on the BraTS-2017 and ISLES-2015
segmentation benchmarks, the ceVAE achieves unsupervised ROC-AUCs of 0.95 and
0.89, respectively, thus outperforming state-of-the-art methods by a
considerable margin. | [
"cs.LG",
"stat.ML"
] |
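The combination of reconstruction- and density-based scoring can be sketched as a per-sample sum of the reconstruction error and the VAE posterior's KL term. This is a simplified stand-in for the ceVAE score (shapes and inputs are toy values), not the paper's exact formulation.

```python
# Combined reconstruction + density anomaly score (simplified sketch).
import torch

def anomaly_score(x, x_hat, mu, logvar):
    rec = ((x - x_hat) ** 2).flatten(1).sum(dim=1)                 # reconstruction
    kl = 0.5 * (mu.pow(2) + logvar.exp() - 1 - logvar).sum(dim=1)  # density term
    return rec + kl                                                 # per-sample score

x = torch.rand(4, 1, 32, 32)
x_hat = torch.rand(4, 1, 32, 32)
mu, logvar = torch.zeros(4, 16), torch.zeros(4, 16)
print(anomaly_score(x, x_hat, mu, logvar))   # higher = more anomalous
```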
We present Deep Generalized Canonical Correlation Analysis (DGCCA) -- a
method for learning nonlinear transformations of arbitrarily many views of
data, such that the resulting transformations are maximally informative of each
other. While methods for nonlinear two-view representation learning (Deep CCA,
(Andrew et al., 2013)) and linear many-view representation learning
(Generalized CCA (Horst, 1961)) exist, DGCCA is the first CCA-style multiview
representation learning technique that combines the flexibility of nonlinear
(deep) representation learning with the statistical power of incorporating
information from many independent sources, or views. We present the DGCCA
formulation as well as an efficient stochastic optimization algorithm for
solving it. We learn DGCCA representations on two distinct datasets for three
downstream tasks: phonetic transcription from acoustic and articulatory
measurements, and recommending hashtags and friends on a dataset of Twitter
users. We find that DGCCA representations soundly beat existing methods at
phonetic transcription and hashtag recommendation, and in general perform no
worse than standard linear many-view techniques. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
In many real-world scenarios, the utility of a user is derived from the
single execution of a policy. In this case, to apply multi-objective
reinforcement learning, the expected utility of the returns must be optimised.
Various scenarios exist where a user's preferences over objectives (also known
as the utility function) are unknown or difficult to specify. In such
scenarios, a set of optimal policies must be learned. However, settings where
the expected utility must be maximised have been largely overlooked by the
multi-objective reinforcement learning community and, as a consequence, a set
of optimal solutions has yet to be defined. In this paper we address this
challenge by proposing first-order stochastic dominance as a criterion to build
solution sets to maximise expected utility. We also propose a new dominance
criterion, known as expected scalarised returns (ESR) dominance, that extends
first-order stochastic dominance to allow a set of optimal policies to be
learned in practice. We then define a new solution concept called the ESR set,
which is a set of policies that are ESR dominant. Finally, we define a new
multi-objective distributional tabular reinforcement learning (MOT-DRL)
algorithm to learn the ESR set in a multi-objective multi-armed bandit setting. | [
"cs.LG",
"cs.AI"
] |
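First-order stochastic dominance itself is easy to test on empirical return samples: policy X dominates Y if X's CDF lies at or below Y's everywhere, strictly somewhere. The sketch below checks this on made-up returns.

```python
# Empirical first-order stochastic dominance check.
import numpy as np

def fsd_dominates(x, y):
    grid = np.union1d(x, y)
    cdf_x = np.searchsorted(np.sort(x), grid, side="right") / len(x)
    cdf_y = np.searchsorted(np.sort(y), grid, side="right") / len(y)
    return np.all(cdf_x <= cdf_y) and np.any(cdf_x < cdf_y)

returns_a = np.array([1.0, 2.0, 3.0, 4.0])
returns_b = np.array([0.5, 1.5, 2.5, 3.5])
print(fsd_dominates(returns_a, returns_b))  # True: a shifts mass rightward
```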
In person re-identification, extracting part-level features from person
images has been verified to be crucial. Most existing CNN-based methods only
locate the human parts coarsely, or rely on pre-trained human parsing models
and fail in locating the identifiable non-human parts (e.g., knapsack). In this
paper, we introduce an alignment scheme in Transformer architecture for the
first time and propose the Auto-Aligned Transformer (AAformer) to automatically
locate both the human parts and non-human ones at patch-level. We introduce the
"part tokens", which are learnable vectors, to extract part features in
Transformer. A part token only interacts with a local subset of patches in
self-attention and learns to be the part representation. To adaptively group
the image patches into different subsets, we design the Auto-Alignment.
Auto-Alignment employs a fast variant of the Optimal Transport algorithm to
cluster the patch embeddings online into several groups with the part tokens
as their
prototypes. We harmoniously integrate the part alignment into the
self-attention and the output part tokens can be directly used for retrieval.
Extensive experiments validate the effectiveness of part tokens and the
superiority of AAformer over various state-of-the-art methods. | [
"cs.CV"
] |
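The online clustering step can be sketched with a Sinkhorn-style balanced assignment of patch embeddings to part-token prototypes. AAformer's exact fast Optimal Transport variant may differ; the snippet below only illustrates the alternating row/column normalization idea.

```python
# Sinkhorn-style balanced assignment of patches to part prototypes.
import torch

def sinkhorn_assign(patches, prototypes, eps=0.05, iters=3):
    # patches: (N, d), prototypes: (K, d); returns soft assignment (N, K)
    logits = patches @ prototypes.t() / eps
    P = torch.softmax(logits, dim=1)
    for _ in range(iters):                 # alternate row/column balancing
        P = P / P.sum(dim=0, keepdim=True) # each prototype gets equal mass
        P = P / P.sum(dim=1, keepdim=True) # each patch sums to one
    return P

P = sinkhorn_assign(torch.randn(196, 64), torch.randn(4, 64))
print(P.shape, P.sum(dim=1)[:3])  # rows sum to ~1
```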
The paper presents a spatio-temporal wind speed forecasting algorithm using
Deep Learning (DL) and, in particular, Recurrent Neural Networks (RNNs).
Motivated by recent advances in renewable energy integration and smart grids,
we apply our proposed algorithm to wind speed forecasting. Renewable energy
resources (wind and solar) are random in nature and, thus, their integration
is facilitated by accurate short-term forecasts. In our proposed framework, we
model the spatio-temporal information by a graph whose nodes are
data-generating entities and whose edges model how these nodes interact with
each other. One of the main contributions of our work is that we obtain
forecasts for all nodes of the graph at the same time within one framework.
Results of a case study on recorded time series data from a collection of
windmills in the north-east of the U.S. show that the proposed DL-based
forecasting algorithm significantly improves the short-term forecasts compared
to a set of widely used benchmark models. | [
"cs.LG"
] |
Facial makeup transfer is a widely-used technology that aims to transfer the
makeup style from a reference face image to a non-makeup face. The existing
literature leverages the adversarial loss so that the generated faces are of
high quality and as realistic as real ones, but such methods are only able to
produce fixed
outputs. Inspired by recent advances in disentangled representation, in this
paper we propose DMT (Disentangled Makeup Transfer), a unified generative
adversarial network to achieve different scenarios of makeup transfer. Our
model contains an identity encoder as well as a makeup encoder to disentangle
the personal identity and the makeup style for arbitrary face images. Based on
the outputs of the two encoders, a decoder is employed to reconstruct the
original faces. We also apply a discriminator to distinguish real faces from
fake ones. As a result, our model can not only transfer the makeup styles from
one or more reference face images to a non-makeup face with controllable
strength, but also produce various outputs with styles sampled from a prior
distribution. Extensive experiments demonstrate that our model is superior to
the existing literature, generating high-quality results for different
scenarios
of makeup transfer. | [
"cs.CV"
] |
In this paper, we introduce a framework for segmenting instances of a common
object class by multiple active contour evolution over semantic segmentation
maps of images obtained through fully convolutional networks. The contour
evolution is cast as an energy minimization problem, where the aggregate energy
functional incorporates a data fit term, an explicit shape model, and accounts
for object overlap. Efficient solution neighborhood operators are proposed,
enabling optimization through metaheuristics such as simulated annealing. We
instantiate the proposed framework in the context of segmenting individual
fallen stems from high-resolution aerial multispectral imagery. We validated
our approach on 3 real-world scenes of varying complexity. The test plots were
situated in regions of the Bavarian Forest National Park, Germany, which
sustained a heavy bark beetle infestation. Evaluations were performed on both
the polygon and line segment level, showing that the multi-contour segmentation
can achieve up to 0.93 precision and 0.82 recall. An improvement of up to 7
percentage points (pp) in recall and 6 in precision compared to an iterative
sample consensus line segment detection was achieved. Despite the simplicity of
the applied shape parametrization, an explicit shape model incorporated into
the energy function improved the results by up to 4 pp of recall. Finally, we
show the importance of using a deep learning based semantic segmentation method
as the basis for individual stem detection. Our method is a step towards
increased accessibility of automatic fallen tree mapping, due to higher cost
efficiency of aerial imagery acquisition compared to laser scanning. The
precise fallen tree maps could be further used as a basis for plant and animal
habitat modeling, studies on carbon sequestration as well as soil quality in
forest ecosystems. | [
"cs.CV"
] |
Nowadays, deep neural networks (DNNs) have become the main instrument for
machine learning tasks within a wide range of domains, including vision, NLP,
and speech. Meanwhile, in the important case of heterogeneous tabular data,
the
advantage of DNNs over shallow counterparts remains questionable. In
particular, there is no sufficient evidence that deep learning machinery allows
constructing methods that outperform gradient boosting decision trees (GBDT),
which are often the top choice for tabular problems. In this paper, we
introduce Neural Oblivious Decision Ensembles (NODE), a new deep learning
architecture, designed to work with any tabular data. In a nutshell, the
proposed NODE architecture generalizes ensembles of oblivious decision trees,
but benefits from both end-to-end gradient-based optimization and the power of
multi-layer hierarchical representation learning. With an extensive
experimental comparison to the leading GBDT packages on a large number of
tabular datasets, we demonstrate the advantage of the proposed NODE
architecture, which outperforms the competitors on most of the tasks. We
open-source the PyTorch implementation of NODE and believe that it will become
a universal framework for machine learning on tabular data. | [
"cs.LG",
"stat.ML"
] |
The success of deep learning methods led to significant breakthroughs in 3-D
point cloud processing tasks with applications in remote sensing. Existing
methods utilize convolutions that have some limitations, as they assume a
uniform input distribution and cannot learn long-range dependencies. Recent
works have shown that adding attention in conjunction with these methods
improves performance. This raises a question: can attention layers completely
replace convolutions? This paper proposes a fully attentional model - {\em
Point Transformer}, for deriving a rich point cloud representation. The model's
shape classification and retrieval performance are evaluated on a large-scale
urban dataset - RoofN3D and a standard benchmark dataset ModelNet40. Extensive
experiments are conducted to test the model's robustness to unseen point
corruptions for analyzing its effectiveness on real datasets. The proposed
method outperforms other state-of-the-art models in the RoofN3D dataset, gives
competitive results in the ModelNet40 benchmark, and showcases high robustness
to various unseen point corruptions. Furthermore, the model is highly memory
and space efficient when compared to other methods. | [
"cs.CV"
] |
Different categories of visual stimuli activate different responses in the
human brain. These signals can be captured with EEG for utilization in
applications such as Brain-Computer Interface (BCI). However, accurate
classification of single-trial data is challenging due to low signal-to-noise
ratio of EEG. This work introduces an EEG-ConvTransformer network that is
based
on multi-headed self-attention. Unlike other transformers, the model
incorporates self-attention to capture inter-region interactions. It further
extends to adjunct convolutional filters with multi-head attention as a single
module to learn temporal patterns. Experimental results demonstrate that
EEG-ConvTransformer achieves improved classification accuracy over the
state-of-the-art techniques across five different visual stimuli classification
tasks. Finally, quantitative analysis of inter-head diversity also shows low
similarity in representational subspaces, emphasizing the implicit diversity of
multi-head attention. | [
"cs.CV"
] |
We present a benchmark suite for visual perception. The benchmark is based on
more than 250K high-resolution video frames, all annotated with ground-truth
data for both low-level and high-level vision tasks, including optical flow,
semantic instance segmentation, object detection and tracking, object-level 3D
scene layout, and visual odometry. Ground-truth data for all tasks is available
for every frame. The data was collected while driving, riding, and walking a
total of 184 kilometers in diverse ambient conditions in a realistic virtual
world. To create the benchmark, we have developed a new approach to collecting
ground-truth data from simulated worlds without access to their source code or
content. We conduct statistical analyses that show that the composition of the
scenes in the benchmark closely matches the composition of corresponding
physical environments. The realism of the collected data is further validated
via perceptual experiments. We analyze the performance of state-of-the-art
methods for multiple tasks, providing reference baselines and highlighting
challenges for future research. The supplementary video can be viewed at
https://youtu.be/T9OybWv923Y | [
"cs.CV",
"I.4.8"
] |
Panorama creation is one of the most widely deployed techniques in computer
vision. In addition to industry applications such as Google Street View, it is
also used by millions of consumers in smartphones and other cameras.
Traditionally, the problem is decomposed into three phases: registration, which
picks a single transformation of each source image to align it to the other
inputs, seam finding, which selects a source image for each pixel in the final
result, and blending, which fixes minor visual artifacts. Here, we observe that
the use of a single registration often leads to errors, especially in scenes
with significant depth variation or object motion. We propose instead the use
of multiple registrations, permitting regions of the image at different depths
to be captured with greater accuracy. MRF inference techniques naturally extend
to seam finding over multiple registrations, and we show here that their energy
functions can be readily modified with new terms that discourage duplication
and tearing, common problems that are exacerbated by the use of multiple
registrations. Our techniques are closely related to layer-based stereo, and
move image stitching closer to explicit scene modeling. Experimental evidence
demonstrates that our techniques often generate significantly better panoramas
when there is substantial motion or parallax. | [
"cs.CV"
] |
Generative Adversarial Networks (GANs) have been widely applied in different
scenarios thanks to the development of deep neural networks. The original GAN
was proposed based on the non-parametric assumption of the infinite capacity
of networks. However, it is still unknown whether GANs can generate realistic
samples without any prior information. Due to this overconfident assumption,
many issues remain unaddressed in GAN training, such as non-convergence, mode
collapse, and vanishing gradients. Regularization and normalization are common
methods of introducing prior information to stabilize training and improve
discrimination. Although a handful of regularization and normalization methods
have been proposed for GANs, to the best of our knowledge, there exists no
comprehensive survey that primarily focuses on the objectives and development
of these methods, apart from a few incomplete and limited-scope studies.
In this work, we conduct a comprehensive survey on the regularization and
normalization techniques from different perspectives of GANs training. First,
we systematically describe different perspectives of GANs training and thus
obtain the different objectives of regularization and normalization. Based on
these objectives, we propose a new taxonomy. Furthermore, we compare the
performance of the mainstream methods on different datasets and investigate the
regularization and normalization techniques that have been frequently employed
in SOTA GANs. Finally, we highlight potential future directions of research in
this domain. | [
"cs.LG",
"cs.CV",
"eess.IV"
] |
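As a concrete example of the kind of regularizer such a survey covers, the widely used WGAN-GP gradient penalty fits in a few lines; the tiny discriminator below is a stand-in.

```python
# WGAN-GP gradient penalty on interpolated real/fake samples.
import torch

def gradient_penalty(D, real, fake):
    alpha = torch.rand(real.size(0), 1, 1, 1)
    x = (alpha * real + (1 - alpha) * fake).requires_grad_(True)
    grads = torch.autograd.grad(D(x).sum(), x, create_graph=True)[0]
    return ((grads.flatten(1).norm(2, dim=1) - 1) ** 2).mean()

D = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 8 * 8, 1))
real, fake = torch.rand(4, 3, 8, 8), torch.rand(4, 3, 8, 8)
print(gradient_penalty(D, real, fake).item())  # add to the discriminator loss
```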
Colorizing a given gray-level image is an important task in the media and
advertising industry. Due to the ambiguity inherent to colorization (many
shades are often plausible), recent approaches started to explicitly model
diversity. However, one of the most obvious artifacts, structural
inconsistency, is rarely considered by existing methods which predict
chrominance independently for every pixel. To address this issue, we develop a
conditional random field based variational auto-encoder formulation which is
able to achieve diversity while taking into account structural consistency.
Moreover, we introduce a controllability mecha- nism that can incorporate
external constraints from diverse sources in- cluding a user interface.
Compared to existing baselines, we demonstrate that our method obtains more
diverse and globally consistent coloriza- tions on the LFW, LSUN-Church and
ILSVRC-2015 datasets. | [
"cs.CV",
"cs.LG"
] |
Humans reason with concepts and metaconcepts: we recognize red and green from
visual input; we also understand that they describe the same property of
objects (i.e., the color). In this paper, we propose the visual
concept-metaconcept learner (VCML) for joint learning of concepts and
metaconcepts from images and associated question-answer pairs. The key is to
exploit the bidirectional connection between visual concepts and metaconcepts.
Visual representations provide grounding cues for predicting relations between
unseen pairs of concepts. Knowing that red and green describe the same property
of objects, we generalize to the fact that cube and sphere also describe the
same property of objects, since they both categorize the shape of objects.
Meanwhile, knowledge about metaconcepts empowers visual concept learning from
limited, noisy, and even biased data. From just a few examples of purple cubes
we can understand a new color purple, which resembles the hue of the cubes
instead of the shape of them. Evaluation on both synthetic and real-world
datasets validates our claims. | [
"cs.CV",
"cs.AI",
"cs.CL",
"cs.LG",
"stat.ML"
] |
Deep learning based models have had great success in object detection, but
the state of the art models have not yet been widely applied to biological
image data. We apply for the first time an object detection model previously
used on natural images to identify cells and recognize their stages in
brightfield microscopy images of malaria-infected blood. Many micro-organisms
like malaria parasites are still studied by expert manual inspection and hand
counting. This type of object detection task is challenging due to factors like
variations in cell shape, density, and color, and uncertainty of some cell
classes. In addition, annotated data useful for training is scarce, and the
class distribution is inherently highly imbalanced due to the dominance of
uninfected red blood cells. We use Faster Region-based Convolutional Neural
Network (Faster R-CNN), one of the top performing object detection models in
recent years, pre-trained on ImageNet but fine-tuned with our data, and
compare
it to a baseline, which is based on a traditional approach consisting of cell
segmentation, extraction of several single-cell features, and classification
using random forests. To conduct our initial study, we collect and label a
dataset of 1300 fields of view consisting of around 100,000 individual cells.
We demonstrate that Faster R-CNN outperforms our baseline and put the results
in context of human performance. | [
"cs.CV"
] |
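The fine-tuning recipe is close to torchvision's standard one: load a pretrained Faster R-CNN and swap the box predictor for the cell classes. The class count below is an assumption, as the abstract does not list the exact stage taxonomy.

```python
# Pretrained Faster R-CNN with a replaced head for cell classes.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

num_classes = 5 + 1  # assumed: cell classes + background
model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

model.eval()
with torch.no_grad():
    out = model([torch.rand(3, 512, 512)])   # one toy image
print(out[0].keys())  # dict with 'boxes', 'labels', 'scores'
```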
Graph Attention Networks (GATs) are the state-of-the-art neural architecture
for representation learning with graphs. GATs learn attention functions that
assign weights to nodes so that different nodes have different influences in
the feature aggregation steps. In practice, however, induced attention
functions are prone to over-fitting due to the increasing number of parameters
and the lack of direct supervision on attention weights. GATs also suffer from
over-smoothing at the decision boundary of nodes. Here we propose a framework
to address their weaknesses via margin-based constraints on attention during
training. We first theoretically demonstrate the over-smoothing behavior of
GATs and then develop an approach using constraint on the attention weights
according to the class boundary and feature aggregation pattern. Furthermore,
to alleviate the over-fitting problem, we propose additional constraints on the
graph structure. Extensive experiments and ablation studies on common benchmark
datasets demonstrate the effectiveness of our method, which leads to
significant improvements over the previous state-of-the-art graph attention
methods on all datasets. | [
"cs.LG",
"stat.ML"
] |
Knowledge graph is a popular format for representing knowledge, with many
applications to semantic search engines, question-answering systems, and
recommender systems. Real-world knowledge graphs are usually incomplete, so
knowledge graph embedding methods, such as Canonical decomposition/Parallel
factorization (CP), DistMult, and ComplEx, have been proposed to address this
issue. These methods represent entities and relations as embedding vectors in
semantic space and predict the links between them. The embedding vectors
themselves contain rich semantic information and can be used in other
applications such as data analysis. However, mechanisms in these models and the
embedding vectors themselves vary greatly, making it difficult to understand
and compare them. Given this lack of understanding, we risk using them
ineffectively or incorrectly, particularly for complicated models, such as CP,
with two role-based embedding vectors, or the state-of-the-art ComplEx model,
with complex-valued embedding vectors. In this paper, we propose a
multi-embedding interaction mechanism as a new approach to uniting and
generalizing these models. We derive them theoretically via this mechanism and
provide empirical analyses and comparisons between them. We also propose a new
multi-embedding model based on quaternion algebra and show that it achieves
promising results using popular benchmarks. Source code is available on github
at https://github.com/tranhungnghiep/AnalyzingKGEmbeddings | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
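For reference, the published scoring functions of two of the models being compared, DistMult and ComplEx, take only a few lines each (toy random embeddings below):

```python
# DistMult and ComplEx triple-scoring functions.
import torch

d = 8
h, r, t = torch.randn(d), torch.randn(d), torch.randn(d)
distmult = (h * r * t).sum()                      # trilinear product

hc, rc, tc = (torch.randn(d, dtype=torch.cfloat) for _ in range(3))
complex_score = (hc * rc * tc.conj()).sum().real  # Re(<h, r, conj(t)>)
print(distmult.item(), complex_score.item())
```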
Reinforcement learning, which acquires a policy maximizing long-term rewards,
has been actively studied. Unfortunately, this learning type is too slow and
difficult to use in practical situations because the state-action space becomes
huge in real environments. Many studies have incorporated human knowledge into
reinforcement learning. Though human knowledge on trajectories is often used,
a human could be asked to control an AI agent, which can be difficult.
Knowledge
on subgoals may lessen this requirement because humans need only to consider a
few representative states on an optimal trajectory in their minds. The
essential factor for learning efficiency is rewards. Potential-based reward
shaping is a basic method for enriching rewards. However, it is often difficult
to incorporate subgoals for accelerating learning over potential-based reward
shaping. This is because the appropriate potentials are not intuitive for
humans. We extend potential-based reward shaping and propose a subgoal-based
reward shaping. The method makes it easier for human trainers to share their
knowledge of subgoals. To evaluate our method, we obtained a subgoal series
from participants and conducted experiments in three domains: four-rooms
(discrete states and discrete actions), pinball (continuous and discrete), and
picking (both continuous). We compared our method with a baseline
reinforcement learning algorithm and other subgoal-based methods, including
random subgoals and naive subgoal-based reward shaping. As a result, we found
that our reward shaping outperformed all other methods in learning
efficiency. | [
"cs.LG",
"cs.AI"
] |
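Potential-based shaping has the standard closed form F(s, s') = gamma * Phi(s') - Phi(s). The sketch below uses our own illustrative potential, which counts the subgoals already reached along a toy 1-D corridor; the paper's subgoal-based potentials are derived from participant-provided subgoals.

```python
# Standard potential-based reward shaping with a toy subgoal potential.
def shaped_reward(r, s, s_next, phi, gamma=0.99):
    return r + gamma * phi(s_next) - phi(s)

subgoals = [2, 5, 9]  # toy 1-D subgoal positions along a corridor

def phi(state):
    # potential grows with the number of subgoals already reached
    return float(sum(state >= g for g in subgoals))

print(shaped_reward(0.0, 4, 5, phi))  # crossing subgoal 5 yields a bonus
```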
Many control tasks exhibit similar dynamics that can be modeled as having
common latent structure. Hidden-Parameter Markov Decision Processes (HiP-MDPs)
explicitly model this structure to improve sample efficiency in multi-task
settings. However, this setting makes strong assumptions on the observability
of the state that limit its application in real-world scenarios with rich
observation spaces. In this work, we leverage ideas of common structure from
the HiP-MDP setting, and extend it to enable robust state abstractions inspired
by Block MDPs. We derive instantiations of this new framework for both
multi-task reinforcement learning (MTRL) and meta-reinforcement learning
(Meta-RL) settings. Further, we provide transfer and generalization bounds
based on task and state similarity, along with sample complexity bounds that
depend on the aggregate number of samples across tasks, rather than the number
of tasks, a significant improvement over prior work that use the same
environment assumptions. To further demonstrate the efficacy of the proposed
method, we empirically compare and show improvement over multi-task and
meta-reinforcement learning baselines. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
Transmission electron microscopy (TEM) is one of the primary tools for the
microstructural characterization of materials as well as for measuring film
thickness. However, manual determination of film thickness from TEM images is
time-consuming as well as subjective, especially when the films in question
are very thin and the need for measurement precision is very high. Such is the
case for head overcoat (HOC) thickness measurements in the magnetic hard disk
drive industry. It is therefore necessary to develop software to automatically
measure HOC thickness. In this paper, for the first time, we propose an HOC
layer segmentation method using NASNet-Large as an encoder followed by
a decoder architecture, which is one of the most commonly used architectures in
deep learning for image segmentation. To further improve segmentation results,
we are the first to propose a post-processing layer to remove irrelevant
portions in the segmentation result. To measure the thickness of the segmented
HOC layer, we propose a regressive convolutional neural network (RCNN) model as
well as orthogonal thickness calculation methods. Experimental results
demonstrate that our model achieves a higher Dice score and a lower mean
squared error, and outperforms current state-of-the-art manual measurement. | [
"cs.CV"
] |
Temporal knowledge graph (TKG) reasoning is a crucial task that has gained
increasing research interest in recent years. Most existing methods focus on
reasoning at past timestamps to complete the missing facts, and there are only
a few works that reason on known TKGs to forecast future facts. Compared with
the completion task, the forecasting task is more difficult, facing two main
challenges: (1) how to effectively model the time information to handle future
timestamps? (2) how to make inductive inference to handle previously unseen
entities that emerge over time? To address these challenges, we propose the
first reinforcement learning method for forecasting. Specifically, the agent
travels on historical knowledge graph snapshots to search for the answer. Our
method defines a relative time encoding function to capture the timespan
information, and we design a novel time-shaped reward based on Dirichlet
distribution to guide the model learning. Furthermore, we propose a novel
representation method for unseen entities to improve the inductive inference
ability of the model. We evaluate our method for this link prediction task at
future timestamps. Extensive experiments on four benchmark datasets demonstrate
substantial performance improvement meanwhile with higher explainability, less
calculation, and fewer parameters when compared with existing state-of-the-art
methods. | [
"cs.LG",
"cs.AI",
"cs.CL"
] |
Visual relations, such as "person ride bike" and "bike next to car", offer a
comprehensive scene understanding of an image, and have already shown their
great utility in connecting computer vision and natural language. However, due
to the challenging combinatorial complexity of modeling
subject-predicate-object relation triplets, very little work has been done to
localize and predict visual relations. Inspired by the recent advances in
relational representation learning of knowledge bases and convolutional object
detection networks, we propose a Visual Translation Embedding network (VTransE)
for visual relation detection. VTransE places objects in a low-dimensional
relation space where a relation can be modeled as a simple vector translation,
i.e., subject + predicate $\approx$ object. We propose a novel feature
extraction layer that enables object-relation knowledge transfer in a
fully-convolutional fashion that supports training and inference in a single
forward/backward pass. To the best of our knowledge, VTransE is the first
end-to-end relation detection network. We demonstrate the effectiveness of
VTransE over other state-of-the-art methods on two large-scale datasets: Visual
Relationship and Visual Genome. Note that even though VTransE is a purely
visual model, it is still competitive with Lu's multi-modal model with
language priors. | [
"cs.CV",
"I.4"
] |
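The core relation-space constraint, subject + predicate ≈ object, can be written directly as a translation loss; the tensors below are toy stand-ins for the projected features, and VTransE's full objective (a fully-convolutional detection network) is of course richer.

```python
# Translation embedding constraint: subject + predicate ~ object.
import torch
import torch.nn.functional as F

subj = torch.randn(32, 100)   # projected subject features (toy)
pred = torch.randn(32, 100)   # predicate (relation) embeddings (toy)
obj = torch.randn(32, 100)    # projected object features (toy)

loss = F.mse_loss(subj + pred, obj)   # one simple way to train toward s + p ~ o
print(loss.item())
```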
Weather forecasting is a long standing scientific challenge with direct
social and economic impact. The task is suitable for deep neural networks due
to vast amounts of continuously collected data and a rich spatial and temporal
structure that presents long range dependencies. We introduce MetNet, a neural
network that forecasts precipitation up to 8 hours into the future at the high
spatial resolution of 1 km$^2$ and at the temporal resolution of 2 minutes with
a latency in the order of seconds. MetNet takes as input radar and satellite
data and forecast lead time and produces a probabilistic precipitation map. The
architecture uses axial self-attention to aggregate the global context from a
large input patch corresponding to a million square kilometers. We evaluate the
performance of MetNet at various precipitation thresholds and find that MetNet
outperforms Numerical Weather Prediction at forecasts of up to 7 to 8 hours on
the scale of the continental United States. | [
"cs.LG",
"physics.ao-ph",
"stat.ML"
] |
We present Siam R-CNN, a Siamese re-detection architecture which unleashes
the full power of two-stage object detection approaches for visual object
tracking. We combine this with a novel tracklet-based dynamic programming
algorithm, which takes advantage of re-detections of both the first-frame
template and previous-frame predictions, to model the full history of both the
object to be tracked and potential distractor objects. This enables our
approach to make better tracking decisions, as well as to re-detect tracked
objects after long occlusion. Finally, we propose a novel hard example mining
strategy to improve Siam R-CNN's robustness to similar looking objects. Siam
R-CNN achieves the current best performance on ten tracking benchmarks, with
especially strong results for long-term tracking. We make our code and models
available at www.vision.rwth-aachen.de/page/siamrcnn. | [
"cs.CV"
] |
This paper highlights several properties of large urban networks that can
have an impact on machine learning methods applied to traffic signal control.
In particular, we show that the average network flow tends to be independent of
the signal control policy as density increases. This property, which so far has
remained under the radar, implies that deep reinforcement learning (DRL)
methods become ineffective when trained under congested conditions, and might
explain DRL's limited success in traffic signal control. Our results apply to
all possible grid networks thanks to a parametrization based on two network
parameters: the ratio of the expected distance between consecutive traffic
lights to the expected green time, and the turning probability at
intersections. Networks with different parameters exhibit very different
responses to traffic signal control. Notably, we found that no control (i.e.
random policy) can be an effective control strategy for a surprisingly large
family of networks. The impact of the turning probability turned out to be very
significant both for baseline and for DRL policies. It also explains the loss
of symmetry observed for these policies, which is not captured by existing
theories that rely on corridor approximations without turns. Our findings also
suggest that supervised learning methods have enormous potential, as they
require very few examples to produce excellent policies. | [
"cs.LG"
] |
Recent advancements in transfer learning have made it a promising approach
for domain adaptation via transfer of learned representations. This is
especially relevant when alternate tasks have limited samples of well-defined
and labeled data, which is common in the molecule data domain. This makes
transfer learning an ideal approach to solve molecular learning tasks. While
adversarial reprogramming has proven to be a successful method to repurpose
neural networks for alternate tasks, most works consider source and alternate
tasks within the same domain. In this work, we propose a new
algorithm, Representation Reprogramming via Dictionary Learning (R2DL), for
adversarially reprogramming pretrained language models for molecular learning
tasks, motivated by leveraging learned representations in massive
state-of-the-art language models. The adversarial program learns a linear
transformation
between a dense source model input space (language data) and a sparse target
model input space (e.g., chemical and biological molecule data) using a k-SVD
solver to approximate a sparse representation of the encoded data, via
dictionary learning. R2DL matches the baseline established by
state-of-the-art toxicity prediction models trained on domain-specific data
and outperforms the
baseline in a limited training-data setting, thereby establishing avenues for
domain-agnostic transfer learning for tasks with molecule data. | [
"cs.LG",
"q-bio.MN"
] |
Deep learning models suffer from opaqueness. For Convolutional Neural
Networks (CNNs), current research strategies for explaining models focus on the
target classes within the associated training dataset. As a result, the
understanding of hidden feature map activations is limited by the
discriminative knowledge gleaned during training. The aim of our work is to
explain and expand CNN models via the mirroring or alignment of a CNN to an
external knowledge base. This will allow us to give a semantic context or label
for each visual feature. We can match CNN feature activations to nodes in our
external knowledge base. This supports knowledge-based interpretation of the
features associated with model decisions. To demonstrate our approach, we build
two separate graphs. We use an entity alignment method to align the feature
nodes in a CNN with the nodes in a ConceptNet based knowledge graph. We then
measure the proximity of CNN graph nodes to semantically meaningful knowledge
base nodes. Our results show that in the aligned embedding space, nodes from
the knowledge graph are close to the CNN feature nodes that have similar
meanings, indicating that nodes from an external knowledge base can act as
explanatory semantic references for features in the model. We analyse a variety
of graph building methods in order to improve the results from our embedding
space. We further demonstrate that by using hierarchical relationships from our
external knowledge base, we can locate new unseen classes outside the CNN
training set in our embeddings space, based on visual feature activations. This
suggests that we can adapt our approach to identify unseen classes based on CNN
feature activations. Our demonstrated approach of aligning a CNN with an
external knowledge base paves the way to reason about and beyond the trained
model, with future adaptations to explainable models and zero-shot learning. | [
"cs.CV",
"cs.AI"
] |
In the superpixel literature, the comparison of state-of-the-art methods can
be biased by the non-robustness of some metrics to decomposition aspects, such
as the superpixel scale. Moreover, most recent decomposition methods allow to
set a shape regularity parameter, which can have a substantial impact on the
measured performances. In this paper, we introduce an evaluation framework,
that aims to unify the comparison process of superpixel methods. We investigate
the limitations of existing metrics, and propose to evaluate each of the three
core decomposition aspects: color homogeneity, respect of image objects and
shape regularity. To measure the regularity aspect, we propose a new global
regularity measure (GR), which addresses the non-robustness of state-of-the-art
metrics. We evaluate recent superpixel methods with these criteria, at several
superpixel scales and regularity levels. The proposed framework reduces the
bias in the comparison process of state-of-the-art superpixel methods. Finally,
we demonstrate that the proposed GR measure is correlated with the performances
of various applications. | [
"cs.CV"
] |
Data augmentation is a powerful technique to improve performance in
applications such as image and text classification tasks. Yet, there is little
rigorous understanding of why and how various augmentations work. In this work,
we consider a family of linear transformations and study their effects on the
ridge estimator in an over-parametrized linear regression setting. First, we
show that transformations which preserve the labels of the data can improve
estimation by enlarging the span of the training data. Second, we show that
transformations which mix data can improve estimation by playing a
regularization effect. Finally, we validate our theoretical insights on MNIST.
Based on the insights, we propose an augmentation scheme that searches over the
space of transformations by how uncertain the model is about the transformed
data. We validate our proposed scheme on image and text datasets. For example,
our method outperforms RandAugment by 1.24% on CIFAR-100 using
Wide-ResNet-28-10. Furthermore, we achieve comparable accuracy to the SoTA
Adversarial AutoAugment on CIFAR datasets. | [
"cs.LG",
"cs.AI",
"cs.CV",
"stat.ML"
] |
We propose Deep Q-Networks (DQN) with model-based exploration, an algorithm
combining both model-free and model-based approaches that explores better and
learns environments with sparse rewards more efficiently. DQN is a
general-purpose, model-free algorithm and has been proven to perform well in a
variety of tasks, including Atari 2600 games, since it was first proposed by
Mnih et al. However, like many other reinforcement learning (RL) algorithms,
DQN
suffers from poor sample efficiency when rewards are sparse in an environment.
As a result, most of the transitions stored in the replay memory have no
informative reward signal, and provide limited value to the convergence and
training of the Q-Network. However, one insight is that these transitions can
be used to learn the dynamics of the environment as a supervised learning
problem. The transitions also provide information of the distribution of
visited states. Our algorithm utilizes these two observations to perform a
one-step planning during exploration to pick an action that leads to states
least likely to be seen, thus improving the performance of exploration. We
demonstrate our agent's performance in two classic environments with sparse
rewards in OpenAI gym: Mountain Car and Lunar Lander. | [
"cs.LG",
"stat.ML"
] |
Learning from data streams is among the most vital fields of contemporary
data mining. The online analysis of information coming from those potentially
unbounded data sources allows for designing reactive up-to-date models capable
of adjusting themselves to continuous flows of data. While a plethora of
shallow methods have been proposed for simpler low-dimensional streaming
problems, almost none of them addressed the issue of learning from complex
contextual data, such as images or texts. The former is represented mainly by
adaptive decision trees that have been proven to be very efficient in streaming
scenarios. The latter has been predominantly addressed by offline deep
learning. In this work, we attempt to bridge the gap between these two worlds
and propose Adaptive Deep Forest (ADF) - a natural combination of the
successful tree-based streaming classifiers with deep forest, which represents
an interesting alternative idea for learning from contextual data. The
conducted experiments show that the deep forest approach can be effectively
transformed into an online algorithm, forming a model that outperforms all
state-of-the-art shallow adaptive classifiers, especially for high-dimensional
complex streams. | [
"cs.LG",
"I.5.0; I.2.0"
] |
Self-supervised representation learning is a critical problem in computer
vision, as it provides a way to pretrain feature extractors on large unlabeled
datasets that can be used as an initialization for more efficient and effective
training on downstream tasks. A promising approach is to use contrastive
learning to learn a latent space where features are close for similar data
samples and far apart for dissimilar ones. This approach has demonstrated
tremendous success for pretraining both image and point cloud feature
extractors, but it has been barely investigated for multi-modal RGB-D scans,
especially with the goal of facilitating high-level scene understanding. To
solve this problem, we propose contrasting "pairs of point-pixel pairs", where
positives include pairs of RGB-D points in correspondence, and negatives
include pairs where one of the two modalities has been disturbed and/or the two
RGB-D points are not in correspondence. This provides extra flexibility in
making hard negatives and helps networks to learn features from both
modalities, not just the more discriminating one of the two. Experiments show
that this proposed approach yields better performance on three large-scale
RGB-D scene understanding benchmarks (ScanNet, SUN RGB-D, and 3RScan) than
previous pretraining approaches. | [
"cs.CV"
] |
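A minimal InfoNCE-style sketch of cross-modal contrastive learning over corresponding point and pixel embeddings, where every other pairing in the batch acts as a negative; the paper's "pairs of point-pixel pairs" construction with disturbed modalities is more elaborate than this symmetric loss:

import torch
import torch.nn.functional as F

def info_nce(point_emb, pixel_emb, temperature=0.07):
    # point_emb, pixel_emb: (N, D); row i of each side corresponds.
    p = F.normalize(point_emb, dim=1)
    q = F.normalize(pixel_emb, dim=1)
    logits = p @ q.t() / temperature          # (N, N) similarity matrix
    targets = torch.arange(p.size(0), device=p.device)
    # Symmetric loss: match points to pixels and pixels to points.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))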
Automotive Cyber-Physical Systems (ACPS) have attracted a significant amount
of interest in the past few decades, while one of the most critical operations
in these systems is the perception of the environment. Deep learning and,
especially, the use of Deep Neural Networks (DNNs) provides impressive results
in analyzing and understanding complex and dynamic scenes from visual data. The
prediction horizons for those perception systems are very short and inference
must often be performed in real time, stressing the need to transform the
original large pre-trained networks into new, smaller models by utilizing Model
Compression and Acceleration (MCA) techniques. Our goal in this work is to
investigate best practices for appropriately applying novel weight sharing
techniques, optimizing the available variables and the training procedures
towards the significant acceleration of widely adopted DNNs. Extensive
evaluation studies carried out using various state-of-the-art DNN models in
object detection and tracking experiments provide details about the types of
errors that manifest after the application of weight sharing techniques,
resulting in significant acceleration gains with negligible accuracy losses. | [
"cs.CV",
"cs.LG"
] |
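One classic weight-sharing scheme such an investigation would cover is codebook quantization: cluster a layer's weights with k-means and replace each weight by its cluster centroid, so the layer stores only centroid indices plus a small codebook. A hedged sketch, not the paper's exact technique:

import numpy as np
from sklearn.cluster import KMeans

def share_weights(weights, n_clusters=16):
    # weights: any-shape float array -> (indices, codebook).
    flat = weights.reshape(-1, 1)
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(flat)
    indices = km.labels_.reshape(weights.shape).astype(np.uint8)
    codebook = km.cluster_centers_.ravel()
    return indices, codebook

def reconstruct(indices, codebook):
    return codebook[indices]          # decompression is a single table lookup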
The rapid increase in the amount of published visual data and the limited
time of users bring the demand for processing untrimmed videos to produce
shorter versions that convey the same information. Despite the remarkable
progress that has been made by summarization methods, most of them can only
select a few frames or skims, which creates visual gaps and breaks the video
context. In this paper, we present a novel methodology based on a reinforcement
learning formulation to accelerate instructional videos. Our approach can
adaptively select frames that are not relevant to convey the information
without creating gaps in the final video. Our agent is textually and visually
oriented to select which frames to remove to shrink the input video.
Additionally, we propose a novel network, called Visually-guided Document
Attention Network (VDAN), able to generate a highly discriminative embedding
space to represent both textual and visual data. Our experiments show that our
method achieves the best performance in terms of F1 Score and coverage at the
video segment level. | [
"cs.CV"
] |
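A rough sketch of the dual-encoder idea of scoring frames against the instruction text in a shared embedding space, so an agent can drop the least relevant ones; the projection layers and dimensions here are assumptions, not the paper's VDAN architecture:

import torch
import torch.nn as nn
import torch.nn.functional as F

class DualEncoder(nn.Module):
    def __init__(self, text_dim, frame_dim, joint_dim=256):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, joint_dim)
        self.frame_proj = nn.Linear(frame_dim, joint_dim)

    def forward(self, text_feat, frame_feats):
        # text_feat: (text_dim,), frame_feats: (T, frame_dim)
        t = F.normalize(self.text_proj(text_feat), dim=-1)
        f = F.normalize(self.frame_proj(frame_feats), dim=-1)
        return f @ t                   # (T,) relevance score per frame

# Frames scoring below a relevance threshold become candidates for removal.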
Advances in remote sensing technology have led to the capture of massive
amounts of data. Increased image resolution, more frequent revisit times, and
additional spectral channels have created an explosion in the amount of data
that is available to provide analyses and intelligence across domains,
including agriculture. However, the processing of this data comes with a cost
in terms of computation time and money, both of which must be considered when
the goal of an algorithm is to provide real-time intelligence to improve
efficiencies. Specifically, we seek to identify nutrient deficient areas from
remotely sensed data to alert farmers to regions that require attention;
detection of nutrient deficient areas is a key task in precision agriculture as
farmers must quickly respond to struggling areas to protect their harvests.
Past methods have focused on pixel-level classification (i.e. semantic
segmentation) of the field to achieve these tasks, often using deep learning
models with tens-of-millions of parameters. In contrast, we propose a much
lighter graph-based method to perform node-based classification. We first use
Simple Linear Iterative Clustering (SLIC) to produce superpixels across the field.
Then, to perform segmentation across the non-Euclidean domain of superpixels,
we leverage a Graph Convolutional Neural Network (GCN). This model has
four orders of magnitude fewer parameters than a CNN model and trains in a matter
of minutes. | [
"cs.CV",
"cs.LG"
] |
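A sketch of the graph-construction step, assuming scikit-image's SLIC: superpixels become nodes with mean-color features, and edges connect superpixels that touch in the image. The GCN classifier itself and the field data are beyond this snippet:

import numpy as np
from skimage.segmentation import slic

def superpixel_graph(image, n_segments=200):
    # image: (H, W, C) float array -> node features (N, C), adjacency (N, N).
    labels = slic(image, n_segments=n_segments, compactness=10.0, start_label=0)
    n = labels.max() + 1
    feats = np.stack([image[labels == i].mean(axis=0) for i in range(n)])
    adj = np.zeros((n, n), dtype=np.float32)
    # Horizontally/vertically adjacent pixels with different labels share an edge.
    for a, b in [(labels[:, :-1], labels[:, 1:]), (labels[:-1, :], labels[1:, :])]:
        mask = a != b
        adj[a[mask], b[mask]] = adj[b[mask], a[mask]] = 1.0
    return feats, adj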
Video representation learning has recently attracted attention in computer
vision due to its applications for activity and scene forecasting or
vision-based planning and control. Video prediction models often learn a latent
representation of video which is encoded from input frames and decoded back
into images. Even when conditioned on actions, purely deep learning based
architectures typically lack a physically interpretable latent space. In this
study, we use a differentiable physics engine within an action-conditional
video representation network to learn a physical latent representation. We
propose supervised and self-supervised learning methods to train our network
and identify physical properties. The latter uses spatial transformers to
decode physical states back into images. The simulation scenarios in our
experiments comprise pushing, sliding and colliding objects, for which we also
analyze the observability of the physical properties. In experiments we
demonstrate that our network can learn to encode images and identify physical
properties like mass and friction from videos and action sequences in the
simulated scenarios. We evaluate the accuracy of our supervised and
self-supervised methods and compare it with a system identification baseline
which directly learns from state trajectories. We also demonstrate the ability
of our method to predict future video frames from input images and actions. | [
"cs.CV",
"cs.LG",
"cs.RO"
] |
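A toy sketch of the principle of a differentiable physics latent: a point mass with learnable mass and friction, stepped with semi-implicit Euler so gradients from a reconstruction loss can reach the physical parameters. The paper uses a full differentiable physics engine; this only illustrates the mechanism:

import torch
import torch.nn as nn

class PointMassDynamics(nn.Module):
    def __init__(self):
        super().__init__()
        self.log_mass = nn.Parameter(torch.zeros(1))      # mass = exp(log_mass) > 0
        self.log_friction = nn.Parameter(torch.zeros(1))

    def forward(self, pos, vel, force, dt=0.05):
        mass = self.log_mass.exp()
        friction = self.log_friction.exp()
        accel = (force - friction * vel) / mass           # F = ma with viscous friction
        vel = vel + dt * accel                            # update velocity first,
        pos = pos + dt * vel                              # then position (semi-implicit Euler)
        return pos, vel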
Recently, it has been demonstrated that the performance of a deep
convolutional neural network can be effectively improved by embedding an
attention module into it. In this work, a novel lightweight and effective
attention method named Pyramid Squeeze Attention (PSA) module is proposed. By
replacing the 3x3 convolution with the PSA module in the bottleneck blocks of
the ResNet, a novel representational block named Efficient Pyramid Squeeze
Attention (EPSA) is obtained. The EPSA block can be easily added as a
plug-and-play component into a well-established backbone network, and
significant improvements on model performance can be achieved. Hence, a simple
and efficient backbone architecture named EPSANet is developed in this work by
stacking these ResNet-style EPSA blocks. Correspondingly, a stronger
multi-scale representation ability can be offered by the proposed EPSANet for
various computer vision tasks including but not limited to, image
classification, object detection, instance segmentation, etc. Without bells and
whistles, the performance of the proposed EPSANet outperforms most of the
state-of-the-art channel attention methods. Compared to SENet-50, the Top-1
accuracy is improved by 1.93% on the ImageNet dataset, and gains of +2.7 box AP
for object detection and +1.7 mask AP for instance segmentation are obtained
using Mask-RCNN on the MS-COCO dataset.
Our source code is available at: https://github.com/murufeng/EPSANet. | [
"cs.CV"
] |
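A hedged sketch of a pyramid squeeze attention block along the lines the abstract describes: split channels into branches, convolve each at a different kernel size, compute an SE-style weight per branch, and softmax the weights across branches. Branch count and reduction ratio are assumptions, not the released code:

import torch
import torch.nn as nn

class PyramidSqueezeAttention(nn.Module):
    def __init__(self, channels, kernel_sizes=(3, 5, 7, 9), reduction=4):
        super().__init__()
        self.n = len(kernel_sizes)
        c = channels // self.n                 # assumes channels divisible by branch count
        self.convs = nn.ModuleList(
            nn.Conv2d(c, c, k, padding=k // 2) for k in kernel_sizes)
        self.se = nn.ModuleList(
            nn.Sequential(nn.AdaptiveAvgPool2d(1),
                          nn.Conv2d(c, c // reduction, 1), nn.ReLU(),
                          nn.Conv2d(c // reduction, c, 1))
            for _ in kernel_sizes)

    def forward(self, x):
        chunks = x.chunk(self.n, dim=1)        # split channels into branches
        feats = [conv(c) for conv, c in zip(self.convs, chunks)]
        attn = torch.stack([se(f) for se, f in zip(self.se, feats)], dim=1)
        attn = torch.softmax(attn, dim=1)      # branches compete across scales
        out = torch.stack(feats, dim=1) * attn
        return out.flatten(1, 2)               # back to (B, C, H, W)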
Model efficiency is crucial for object detection. Most previous works rely on
either hand-crafted design or auto-search methods to obtain a static
architecture, regardless of differences among inputs. In this paper, we
introduce a new perspective on designing efficient detectors, which is
automatically generating a sample-adaptive model architecture on the fly. The
proposed method is named content-aware dynamic detectors (CADDet). It first
applies a multi-scale densely connected network with dynamic routing as the
supernet. Furthermore, we introduce a coarse-to-fine strategy tailored for
object detection to guide the learning of dynamic routing, which contains two
metrics: 1) a dynamic global budget constraint assigns data-dependent expected
budgets for individual samples; 2) a local path similarity regularization
aims to generate more diverse routing paths. With these, our method achieves
higher computational efficiency while maintaining good performance. To the best
of our knowledge, our CADDet is the first work to introduce a dynamic routing
mechanism in object detection. Experiments on the MS-COCO dataset demonstrate that
CADDet achieves 1.8 higher mAP with 10% fewer FLOPs compared with vanilla
routing strategy. Compared with the models based upon similar building blocks,
CADDet achieves a 42% FLOPs reduction with a competitive mAP. | [
"cs.CV"
] |
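A sketch of a data-dependent routing gate with a budget term, using Gumbel-softmax to keep the skip/run decision differentiable; CADDet's supernet and its two metrics are more involved than this single gated block:

import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedBlock(nn.Module):
    def __init__(self, channels, block_cost=1.0):
        super().__init__()
        self.block = nn.Conv2d(channels, channels, 3, padding=1)
        self.gate = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(channels, 2))   # logits: [skip, run]
        self.block_cost = block_cost

    def forward(self, x, tau=1.0):
        g = F.gumbel_softmax(self.gate(x), tau=tau, hard=True)  # (B, 2) one-hot
        run = g[:, 1].view(-1, 1, 1, 1)
        # Skip connection when gated off; at train time the block still runs,
        # the saving is realized at inference by checking the gate first.
        out = x + run * self.block(x)
        cost = g[:, 1].mean() * self.block_cost
        return out, cost

# Training would add e.g. lambda * (total_cost - budget).abs() to the task loss.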
There is a growing number of tasks that work directly on point clouds. As the
size of the point cloud grows, so do the computational demands of these tasks.
A possible solution is to sample the point cloud first. Classic sampling
approaches, such as farthest point sampling (FPS), do not consider the
downstream task. A recent work showed that learning a task-specific sampling
can improve results significantly. However, the proposed technique did not deal
with the non-differentiability of the sampling operation and offered a
workaround instead. We introduce a novel differentiable relaxation for point
cloud sampling that approximates sampled points as a mixture of points in the
primary input cloud. Our approximation scheme leads to consistently good
results on classification and geometry reconstruction applications. We also
show that the proposed sampling method can be used as a front to a point cloud
registration network. This is a challenging task since sampling must be
consistent across two different point clouds for a shared downstream task. In
all cases, our approach outperforms existing non-learned and learned sampling
alternatives. Our code is publicly available at
https://github.com/itailang/SampleNet. | [
"cs.CV"
] |
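A sketch of the differentiable relaxation: each "sampled" point is a soft mixture of its nearest neighbors in the input cloud, with weights from a temperature-scaled softmax over negative squared distances, so the operation stays differentiable and approaches hard selection as the temperature shrinks. Illustrative, not the released SampleNet code:

import torch

def soft_sample(query, cloud, k=8, temperature=0.1):
    # query: (M, 3) candidate points, cloud: (N, 3) input -> (M, 3) mixtures.
    d2 = torch.cdist(query, cloud) ** 2              # (M, N) squared distances
    knn_d2, knn_idx = d2.topk(k, dim=1, largest=False)
    w = torch.softmax(-knn_d2 / temperature, dim=1)  # (M, k) mixture weights
    neighbors = cloud[knn_idx]                       # (M, k, 3)
    return (w.unsqueeze(-1) * neighbors).sum(dim=1)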
We consider the problem of comparing the similarity of image sets with
variable quantity and quality of un-ordered, heterogeneous images. We use feature
restructuring to exploit the correlations of both inner- and inter-set images.
Specifically, the residual self-attention can effectively restructure the
features using the other features within a set to emphasize the discriminative
images and eliminate the redundancy. Then, a sparse/collaborative
learning-based dependency-guided representation scheme reconstructs the probe
features conditioned on the gallery features in order to adaptively align the
two sets. This enables our framework to be compatible with both verification
and open-set identification. We show that the parametric self-attention network
and non-parametric dictionary learning can be trained end-to-end by a unified
alternating optimization scheme, and that the full framework is
permutation-invariant. In the numerical experiments we conducted, our method
achieves top performance on competitive image set/video-based face recognition
and person re-identification benchmarks. | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
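A minimal sketch of residual self-attention over a set of image features, where each feature is refreshed using the others in its set and added back through a residual connection; head count and normalization are assumptions:

import torch
import torch.nn as nn

class ResidualSetAttention(nn.Module):
    def __init__(self, dim, n_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, feats):
        # feats: (B, S, D) -- a batch of sets with S images each.
        refreshed, _ = self.attn(feats, feats, feats)
        return self.norm(feats + refreshed)    # residual restructuring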