text (string, 29–3.31k chars) | label (sequence, 1–11 tags) |
---|---|
Recently,~\citet{liu:arxiv:2019} studied the rather challenging problem of
time series forecasting from the perspective of compressed sensing. They
proposed a no-learning method, named Convolution Nuclear Norm Minimization
(CNNM), and proved that CNNM can exactly recover the future part of a series
from its observed part, provided that the series is convolutionally low-rank.
While impressive, the convolutional low-rankness condition may not be satisfied
whenever the series is far from being seasonal, and is in fact brittle to the
presence of trends and dynamics. This paper addresses these issues by
integrating a learnable, orthonormal transformation into CNNM, with the purpose
of converting series with intricate structures into regular signals that are
convolutionally low-rank. We prove that the resulting model, termed
Learning-Based CNNM (LbCNNM), strictly succeeds in identifying the future part
of a series, as long as the transform of the series is convolutionally
low-rank. To learn proper transformations that may meet the required success
conditions, we devise an interpretable method based on Principal Component
Pursuit (PCP). Equipped with this learning method and some elaborate data
augmentation techniques, LbCNNM not only handles well the major components of
time series (including trends, seasonality and dynamics), but can also make use
of the forecasts provided by some other forecasting methods; this means LbCNNM
can be used as a general tool for model combination. Extensive experiments on
100,452 real-world time series from TSDL and M4 demonstrate the superior
performance of LbCNNM. | [
"cs.LG",
"cs.AI"
] |
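The convolutional low-rankness condition above can be made concrete in a few lines of NumPy. The following is a minimal sketch for intuition only (not the authors' CNNM solver; `conv_matrix` is a hypothetical helper): it builds the circular-convolution matrix of a series and shows that a purely seasonal signal is convolutionally low-rank, while adding a trend destroys that structure.

```python
import numpy as np

def conv_matrix(x):
    """Circulant (circular convolution) matrix of a 1-D series."""
    return np.stack([np.roll(x, k) for k in range(len(x))], axis=1)

t = np.arange(64)
seasonal = np.sin(2 * np.pi * t / 8)                  # period-8 series
s = np.linalg.svd(conv_matrix(seasonal), compute_uv=False)
print((s > 1e-8 * s[0]).sum())                        # effective rank: 2

trended = seasonal + 0.1 * t                          # add a linear trend
s = np.linalg.svd(conv_matrix(trended), compute_uv=False)
print((s > 1e-8 * s[0]).sum())                        # rank jumps toward 64
```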
The perceptual-based grouping process produces a hierarchical and
compositional image representation that helps both human and machine vision
systems recognize heterogeneous visual concepts. Examples can be found in the
classical hierarchical superpixel segmentation or image parsing works. However,
the grouping process is largely overlooked in modern CNN-based image
segmentation networks due to many challenges, including the inherent
incompatibility between the grid-shaped CNN feature map and the
irregular-shaped perceptual grouping hierarchy. Overcoming these challenges, we
propose a deep grouping model (DGM) that tightly marries the two types of
representations and defines a bottom-up and a top-down process for feature
exchanging. When evaluating the model on the recent Broden+ dataset for the
unified perceptual parsing task, it achieves state-of-the-art results while
having a small computational overhead compared to other context-based
segmentation models. Furthermore, the DGM has better interpretability compared
with modern CNN methods. | [
"cs.CV"
] |
Few-shot learning aims to correctly recognize query samples from unseen
classes given a limited number of support samples, often by relying on global
embeddings of images. In this paper, we propose to equip the backbone network
with an attention agent, which is trained by reinforcement learning. The policy
gradient algorithm is employed to train the agent towards adaptively localizing
the representative regions on feature maps over time. We further design a
reward function based on the prediction of the held-out data, thus helping the
attention mechanism to generalize better across the unseen classes. Extensive
experiments show that, with the help of the reinforced attention, our embedding
network can progressively generate more discriminative representations in
few-shot learning. Moreover, experiments on
the task of image classification also show the effectiveness of the proposed
design. | [
"cs.CV"
] |
3D object detection in point clouds is a challenging vision task that
benefits various applications for understanding the 3D visual world. Lots of
recent research focuses on how to exploit end-to-end trainable Hough voting for
generating object proposals. However, the current voting strategy can only
receive partial votes from the surfaces of potential objects together with
severe outlier votes from the cluttered backgrounds, which hampers full
utilization of the information from the input point clouds. Inspired by the
back-tracing strategy in the conventional Hough voting methods, in this work,
we introduce a new 3D object detection method, named Back-tracing
Representative Points Network (BRNet), which generatively back-traces the
representative points from the vote centers and also revisits complementary
seed points around these generated points, so as to better capture the fine
local structural features surrounding the potential objects from the raw point
clouds. Therefore, this bottom-up and then top-down strategy in our BRNet
enforces mutual consistency between the predicted vote centers and the raw
surface points and thus achieves more reliable and flexible object localization
and class prediction results. Our BRNet is simple but effective, which
significantly outperforms the state-of-the-art methods on two large-scale point
cloud datasets, ScanNet V2 (+7.5% in terms of mAP@0.50) and SUN RGB-D (+4.7%
in terms of mAP@0.50), while it is still lightweight and efficient. Code will be
available at https://github.com/cheng052/BRNet. | [
"cs.CV"
] |
Video-and-Language Inference is a recently proposed task for joint
video-and-language understanding. This new task requires a model to draw
inference on whether a natural language statement entails or contradicts a
given video clip. In this paper, we study how to address three critical
challenges for this task: judging the global correctness of a statement
involving multiple semantic meanings, joint reasoning over video and subtitles,
and modeling long-range relationships and complex social interactions. First,
we propose an adaptive hierarchical graph network that achieves in-depth
understanding of the video over complex interactions. Specifically, it performs
joint reasoning over video and subtitles in three hierarchies, where the graph
structure is adaptively adjusted according to the semantic structures of the
statement. Second, we introduce semantic coherence learning to explicitly
encourage the semantic coherence of the adaptive hierarchical graph network
from three hierarchies. The semantic coherence learning can further improve the
alignment between vision and linguistics, and the coherence across a sequence
of video segments. Experimental results show that our method outperforms the
baseline by a large margin. | [
"cs.CV"
] |
We introduce Procgen Benchmark, a suite of 16 procedurally generated
game-like environments designed to benchmark both sample efficiency and
generalization in reinforcement learning. We believe that the community will
benefit from increased access to high quality training environments, and we
provide detailed experimental protocols for using this benchmark. We
empirically demonstrate that diverse environment distributions are essential to
adequately train and evaluate RL agents, thereby motivating the extensive use
of procedural content generation. We then use this benchmark to investigate the
effects of scaling model size, finding that larger models significantly improve
both sample efficiency and generalization. | [
"cs.LG",
"stat.ML"
] |
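For reference, Procgen environments are exposed through the standard Gym API. The snippet below is a minimal usage sketch; the level counts are illustrative (not the paper's protocol), and the constructor arguments may differ across procgen/gym versions.

```python
import gym  # pip install procgen gym

# Train on a fixed set of levels; evaluate on the full (unseen) distribution.
train_env = gym.make("procgen:procgen-coinrun-v0",
                     num_levels=200, start_level=0,
                     distribution_mode="hard")
test_env = gym.make("procgen:procgen-coinrun-v0",
                    num_levels=0, start_level=0,      # 0 => unlimited levels
                    distribution_mode="hard")

obs = train_env.reset()
obs, reward, done, info = train_env.step(train_env.action_space.sample())
```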
The ability to accurately detect and classify objects at varying pixel sizes
in cluttered scenes is crucial to many Navy applications. However, the
detection performance of existing state-of-the-art approaches such as
convolutional neural networks (CNNs) degrades when applied to such cluttered,
multi-object detection tasks. We conjecture that spatial relationships between
objects in an image could be exploited to significantly improve detection
accuracy, an approach that had not yet been considered by any existing
techniques (to the best of our knowledge) at the time the research was
conducted. We introduce a detection and classification technique called
Spatially Related Detection with Convolutional Neural Networks (SPARCNN) that
learns and exploits a probabilistic representation of inter-object spatial
configurations within images from training sets for more effective region
proposals to use with state-of-the-art CNNs. Our empirical evaluation of
SPARCNN on the VOC 2007 dataset shows that it increases classification accuracy
by 8% when compared to a region proposal technique that does not exploit
spatial relations. More importantly, we obtained a higher performance boost of
18.8% when task difficulty in the test set is increased by including highly
obscured objects and increased image clutter. | [
"cs.CV"
] |
Graph matching finds the correspondence of nodes across two graphs and is a
basic task in graph-based machine learning. Numerous existing methods match
every node in one graph to one node in the other graph whereas two graphs
usually overlap partially in many real-world applications. In this paper, a
partial Gromov-Wasserstein learning framework is proposed for partially
matching two graphs, which fuses the partial Gromov-Wasserstein distance and
the partial Wasserstein distance as the objective and updates the partial
transport map and the node embedding in an alternating fashion. The proposed
framework transports a fraction of the probability mass and matches node pairs
with high relative similarities across the two graphs. Incorporating an
embedding learning method, heterogeneous graphs can also be matched. Numerical
experiments on both synthetic and real-world graphs demonstrate that our
framework can improve the F1 score by at least $20\%$ and often much more. | [
"cs.LG"
] |
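The partial-transport building block is available in the POT library; the sketch below illustrates only that piece, not the paper's full alternating framework (which also updates node embeddings). Graph sizes and the transported mass fraction `m` are illustrative.

```python
import numpy as np
import ot  # POT: pip install pot

rng = np.random.default_rng(0)
X1, X2 = rng.normal(size=(12, 2)), rng.normal(size=(15, 2))
C1, C2 = ot.dist(X1, X1), ot.dist(X2, X2)   # intra-graph distance matrices
p, q = ot.unif(12), ot.unif(15)             # uniform node masses

# Transport only 80% of the mass, allowing nodes to stay unmatched.
T = ot.partial.partial_gromov_wasserstein(C1, C2, p, q, m=0.8)
matches = T.argmax(axis=1)                  # soft partial correspondence
```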
Signal processing is rich in inherently continuous and often nonlinear
applications, such as spectral estimation, optical imaging, and
super-resolution microscopy, in which sparsity plays a key role in obtaining
state-of-the-art results. Coping with the infinite dimensionality and
non-convexity of these problems typically involves discretization and convex
relaxations, e.g., using atomic norms. Nevertheless, grid mismatch and other
coherence issues often lead to discretized versions of sparse signals that are
not sparse. Even if they are, recovering sparse solutions using convex
relaxations requires assumptions that may be hard to meet in practice. What is
more, problems involving nonlinear measurements remain non-convex even after
relaxing the sparsity objective. We address these issues by directly tackling
the continuous, nonlinear problem cast as a sparse functional optimization
program. We prove that when these problems are non-atomic, they have no duality
gap and can therefore be solved efficiently using duality and (stochastic)
convex optimization methods. We illustrate the wide range of applications of
this approach by formulating and solving problems from nonlinear spectral
estimation and robust classification. | [
"cs.LG",
"eess.SP",
"math.OC",
"stat.ML"
] |
Although some convolutional neural network (CNN) based super-resolution (SR)
algorithms have recently yielded good visual performance on single images, most
of them focus on perceptual quality alone and ignore the specific needs of
subsequent detection tasks. This paper proposes a simple but powerful
feature-driven super-resolution (FDSR) to improve the detection performance of
low-resolution (LR) images. First, the proposed method uses a feature-domain
prior, extracted from an existing detector backbone, to guide the HR image
reconstruction. Then, with the aligned features, FDSR updates the SR parameters
for better detection performance. Compared with some state-of-the-art SR
algorithms at a 4$\times$ scale factor, FDSR achieves better detection
performance (mAP) on the MS COCO validation and VOC2007 datasets, with good
generalization to other detection networks. | [
"cs.CV"
] |
Various autonomous or assisted driving strategies have been facilitated
through the accurate and reliable perception of the environment around a
vehicle. Among the commonly used sensors, radar has usually been considered as
a robust and cost-effective solution even in adverse driving scenarios, e.g.,
weak/strong lighting or bad weather. Instead of fusing potentially unreliable
information from all available sensors, perception from radar data alone
becomes a valuable alternative that is worth exploring. In this paper, we
propose a deep radar object detection network, named RODNet, which is
cross-supervised by a camera-radar fused algorithm without laborious annotation
efforts, to effectively detect objects from the radio frequency (RF) images in
real-time. First, the raw signals captured by millimeter-wave radars are
transformed to RF images in range-azimuth coordinates. Second, our proposed
RODNet takes a sequence of RF images as the input to predict the likelihood of
objects in the radar field of view (FoV). Two customized modules are also added
to handle multi-chirp information and object relative motion. Instead of using
human-labeled ground truth for training, the proposed RODNet is
cross-supervised by a novel 3D localization of detected objects using a
camera-radar fusion (CRF) strategy in the training stage. Finally, we propose a
method to evaluate the object detection performance of the RODNet. Since no
public dataset is available for our task, we create a new dataset, named CRUW,
which contains synchronized RGB and RF image sequences in various driving
scenarios. In intensive experiments, our proposed cross-supervised RODNet
achieves 86% average precision and 88% average recall for object detection,
demonstrating robustness to noisy scenarios under various driving
conditions. | [
"cs.CV",
"eess.SP"
] |
We explore Deep Reinforcement Learning in a parameterized action space.
Specifically, we investigate how to achieve sample-efficient end-to-end
training in these tasks. We propose a new compact architecture for the tasks
where the parameter policy is conditioned on the output of the discrete action
policy. We also propose two new methods based on the state-of-the-art
algorithms Trust Region Policy Optimization (TRPO) and Stochastic Value
Gradient (SVG) to train such an architecture. We demonstrate that these methods
outperform the state-of-the-art method, Parameterized Action DDPG, on test
domains. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
Graph Convolutional Networks (GCNs) have recently become the primary choice
for learning from graph-structured data, superseding hash fingerprints in
representing chemical compounds. However, GCNs lack the ability to take into
account the ordering of node neighbors, even when there is a geometric
interpretation of the graph vertices that provides an order based on their
spatial positions. To remedy this issue, we propose Spatial Graph Convolutional
Network (SGCN) which uses spatial features to efficiently learn from graphs
that can be naturally located in space. Our contribution is threefold: we
propose a GCN-inspired architecture which (i) leverages node positions, (ii) is
a proper generalization of both GCNs and Convolutional Neural Networks (CNNs),
and (iii) benefits from augmentation, which further improves performance and
ensures invariance with respect to the desired properties. Empirically, SGCN
outperforms state-of-the-art graph-based methods on image classification and
chemical tasks. | [
"cs.LG",
"stat.ML"
] |
Domain generalization refers to the problem where we aim to train a model on
data from a set of source domains so that the model can generalize to unseen
target domains. Naively training a model on the aggregate set of data (pooled
from all source domains) has been shown to perform suboptimally, since the
information learned by that model might be domain-specific and generalize
imperfectly to target domains. To tackle this problem, a predominant approach
is to find and learn some domain-invariant information in order to use it for
the prediction task. In this paper, we propose a theoretically grounded method
to learn a domain-invariant representation by enforcing the representation
network to be invariant under all transformation functions among domains. We
also show how to use generative adversarial networks to learn such domain
transformations to implement our method in practice. We demonstrate the
effectiveness of our method on several widely used datasets for the domain
generalization problem, on all of which we achieve competitive results with
state-of-the-art models. | [
"cs.LG"
] |
Several machine learning tasks require representing the data using only a
sparse set of interest points. An ideal detector is able to find the
corresponding interest points even if the data undergo a transformation typical
for a given domain. Since the task is of high practical interest in computer
vision, many hand-crafted solutions have been proposed. In this paper, we ask a
fundamental question: can we learn such detectors from scratch? Since it is
often unclear what points are "interesting", human labelling cannot be used to
find a truly unbiased solution. Therefore, the task requires an unsupervised
formulation. We are the first to propose such a formulation: training a neural
network to rank points in a transformation-invariant manner. Interest points
are then extracted from the top/bottom quantiles of this ranking. We validate
our approach on two tasks: standard RGB image interest point detection and
challenging cross-modal interest point detection between RGB and depth images.
We quantitatively show that our unsupervised method performs better or on-par
with baselines. | [
"cs.CV",
"cs.LG",
"cs.NE"
] |
Object detection has made great progress, driven by the development of deep
learning. Compared with the widely studied classification task, object
detection generally needs one or two orders of magnitude more FLOPs (floating
point operations) at inference time. To enable practical applications, it is
essential to explore effective runtime-accuracy trade-off schemes. Recently, a
growing number of studies have targeted object detection on
resource-constrained devices, such as YOLOv1, YOLOv2, SSD, and
MobileNetv2-SSDLite, whose accuracy on COCO test-dev is around 22-25% mAP
(mAP-20-tier). In contrast, very few studies discuss the computation-accuracy
trade-off for mAP-30-tier detection networks. In this paper, we explain why
RetinaNet offers an effective computation-accuracy trade-off for object
detection and how to build a light-weight RetinaNet. We propose to reduce FLOPs
only in computationally intensive layers and keep the other layers unchanged.
Compared with the most common approach -- input image scaling -- for the
FLOPs-accuracy trade-off, the proposed solution shows a consistently better
FLOPs-mAP trade-off curve. Quantitatively, the proposed method yields a 0.1%
mAP improvement at a 1.15x FLOPs reduction and a 0.3% mAP improvement at a 1.8x
FLOPs reduction. | [
"cs.CV"
] |
We consider the problem of ranking a set of items from pairwise comparisons
in the presence of features associated with the items. Recent works have
established that $O(n\log(n))$ samples are needed to rank well when there is no
feature information present. However, this might be sub-optimal in the presence
of associated features. We introduce a new probabilistic preference model
called feature-Bradley-Terry-Luce (f-BTL) model that generalizes the standard
BTL model to incorporate feature information. We present a new least squares
based algorithm called fBTL-LS which we show requires far fewer than
$O(n\log(n))$ pairs to obtain a good ranking -- precisely, our new sample
complexity bound is $O(\alpha\log \alpha)$, where $\alpha$ denotes the
number of `independent items' of the set; in general $\alpha \ll n$. Our
analysis is novel and makes use of tools from classical graph matching theory
to provide tighter bounds that shed light on the true complexity of the
ranking problem, capturing the item dependencies in terms of their feature
representations. This was not possible with earlier matrix completion based
tools used for this problem. We also prove an information theoretic lower bound
on the required sample complexity for recovering the underlying ranking, which
essentially shows the tightness of our proposed algorithms. The efficacy of our
proposed algorithms is validated through extensive experimental evaluations on
a variety of synthetic and real world datasets. | [
"cs.LG",
"stat.ML"
] |
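To make the role of features concrete, here is a hedged least-squares sketch in the spirit of a feature-based BTL model; it is not the paper's fBTL-LS algorithm. With item features $x_i$ and scores $x_i^\top w$, the logit of the probability that item $i$ beats item $j$ is $(x_i - x_j)^\top w$, so $w$ can be estimated by least squares from empirical logits.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 30, 5
X = rng.normal(size=(n, d))                      # item features
w_true = rng.normal(size=d)

pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
A = np.array([X[i] - X[j] for i, j in pairs])    # feature differences
p = 1 / (1 + np.exp(-A @ w_true))                # stand-in for observed
y = np.log(p / (1 - p))                          # win-rates (noise-free here)

w_est, *_ = np.linalg.lstsq(A, y, rcond=None)    # least-squares estimate
ranking = np.argsort(-(X @ w_est))               # rank items by fitted score
```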
In optical coherence tomography (OCT) volumes of the retina, the sequential
acquisition of the individual slices makes this modality prone to motion
artifacts, with misalignments between adjacent slices being the most noticeable.
Any distortion in OCT volumes can bias structural analysis and influence the
outcome of longitudinal studies. On the other hand, the presence of speckle
noise, characteristic of this imaging modality, leads to inaccuracies when
traditional registration techniques are employed. Also, the lack of a
well-defined ground truth makes supervised deep-learning techniques ill-posed
to tackle the problem. In this paper, we tackle these issues by using deep
reinforcement learning to correct inter-frame movements in an unsupervised
manner. Specifically, we use a dueling deep Q-network to train an artificial
agent to find the optimal policy, i.e. a sequence of actions, that best
improves the alignment by maximizing the sum of reward signals. Instead of
relying on the ground-truth of transformation parameters to guide the rewarding
system, for the first time, we use a combination of intensity based image
similarity metrics. Further, to avoid the agent bias towards speckle noise, we
ensure the agent can see retinal layers as part of the interacting environment.
For quantitative evaluation, we simulate the eye movement artifacts by applying
2D rigid transformations on individual B-scans. The proposed model achieves an
average of 0.985 and 0.914 for normalized mutual information and correlation
coefficient, respectively. We also compare our model with the elastix
intensity-based medical image registration approach, where significant improvement is
achieved by our model for both noisy and denoised volumes. | [
"cs.LG",
"stat.ML"
] |
Real estate appraisal refers to the process of developing an unbiased opinion
of a real property's market value, which plays a vital role in decision-making
for various players in the marketplace (e.g., real estate agents, appraisers,
lenders, and buyers). However, accurate real estate appraisal is a nontrivial
task because of three major challenges: (1) The complicated influencing
factors for property value; (2) The asynchronously spatiotemporal dependencies
among real estate transactions; (3) The diversified correlations between
residential communities. To this end, we propose a Multi-Task Hierarchical
Graph Representation Learning (MugRep) framework for accurate real estate
appraisal. Specifically, by acquiring and integrating multi-source urban data,
we first construct a rich feature set to comprehensively profile the real
estate from multiple perspectives (e.g., geographical distribution, human
mobility distribution, and resident demographics distribution). Then, an
evolving real estate transaction graph and a corresponding event graph
convolution module are proposed to incorporate asynchronously spatiotemporal
dependencies among real estate transactions. Moreover, to further incorporate
valuable knowledge from the view of residential communities, we devise a
hierarchical heterogeneous community graph convolution module to capture
diversified correlations between residential communities. Finally, an urban
district partitioned multi-task learning module is introduced to generate
differently distributed value opinions for real estate. Extensive experiments
on two real-world datasets demonstrate the effectiveness of MugRep and its
components and features. | [
"cs.LG"
] |
Graph generative models are a highly active branch of machine learning. Given
the steady development of new models of ever-increasing complexity, it is
necessary to provide a principled way to evaluate and compare them. In this
paper, we enumerate the desirable criteria for comparison metrics, discuss the
development of such metrics, and provide a comparison of their respective
expressive power. We perform a systematic evaluation of the main metrics in use
today, highlighting some of the challenges and pitfalls researchers can
inadvertently run into. We then describe a collection of suitable metrics,
give recommendations as to their practical suitability, and analyse their
behaviour on synthetically generated perturbed graphs as well as on recently
proposed graph generative models. | [
"cs.LG",
"cs.SI",
"stat.ML"
] |
The depth of a visible surface of a scene is the distance between the surface
and the sensor. Recovering depth information from two-dimensional images of a
scene is an important task in computer vision that can assist numerous
applications such as object recognition, scene interpretation, obstacle
avoidance, inspection and assembly. Various passive depth computation
techniques have been developed for computer vision applications. They can be
classified into two groups. The first group operates using just one image. The
second group requires more than one image which can be acquired using either
multiple cameras or a camera whose parameters and positioning can be changed.
This project aims to find the real depth of the object from the camera
that was used to take the photograph. An n-degree polynomial was
formulated, which maps the pixel depth of an image to the real depth. In order
to find the coefficients of the polynomial, an experiment was carried out for a
particular lens and thus, these coefficients are a unique feature of a
particular camera. The procedure explained in this report is a monocular
approach for estimating the depth of a scene. The idea involves mapping the
Pixel Depth of the object photographed in the image with the Real Depth of the
object from the camera lens with an interpolation function. In order to find
the parameters of the interpolation function, a set of lines with predefined
distance from camera is used, and then the distance of each line from the
bottom edge of the picture (as the origin line) is calculated. | [
"cs.CV",
"math-ph",
"math.MP",
"physics.comp-ph",
"physics.ed-ph",
"physics.pop-ph"
] |
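The calibration step described above reduces to a polynomial fit; a minimal sketch follows. The sample measurements are purely illustrative stand-ins for the line-distance experiment, not the report's data.

```python
import numpy as np

pixel_depth = np.array([50, 120, 210, 320, 455, 610])  # px from bottom edge
real_depth = np.array([1.0, 1.5, 2.0, 2.5, 3.0, 3.5])  # metres (measured)

coeffs = np.polyfit(pixel_depth, real_depth, deg=3)    # camera-specific fit
estimate = np.polyval(coeffs, 280)                     # depth for a new pixel
```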
Matrices satisfying the Restricted Isometry Property (RIP) play an important
role in the areas of compressed sensing and statistical learning. RIP matrices
with optimal parameters are mainly obtained via probabilistic arguments, as
explicit constructions seem hard. It is therefore interesting to ask whether a
fixed matrix can be incorporated into a construction of restricted isometries.
In this paper, we construct a new broad ensemble of random matrices with
dependent entries that satisfy the restricted isometry property. Our
construction starts with a fixed (deterministic) matrix $X$ satisfying some
simple stable rank condition, and we show that the matrix $XR$, where $R$ is a
random matrix drawn from various popular probabilistic models (including
subgaussian, sparse, low-randomness, and models satisfying a convex concentration property),
satisfies the RIP with high probability. These theorems have various
applications in signal recovery, random matrix theory, dimensionality
reduction, etc. Additionally, motivated by an application for understanding the
effectiveness of word vector embeddings popular in natural language processing
and machine learning applications, we investigate the RIP of the matrix
$XR^{(l)}$ where $R^{(l)}$ is formed by taking all possible (disregarding
order) $l$-way entrywise products of the columns of a random matrix $R$. | [
"cs.LG",
"cs.DS",
"stat.ML"
] |
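As a quick empirical illustration (not a proof) of the $A = XR$ construction: with a Gaussian $R$, the product acts as a near-isometry on sparse vectors. All dimensions are illustrative, and $X$ is drawn randomly here purely for the demo, whereas the theorems allow any fixed $X$ with a suitable stable rank.

```python
import numpy as np

rng = np.random.default_rng(0)
n, N, m, s = 400, 2000, 120, 10
X = rng.normal(size=(m, n)) / np.sqrt(m)   # stand-in for the fixed matrix
R = rng.normal(size=(n, N)) / np.sqrt(n)   # subgaussian random matrix
A = X @ R

ratios = []
for _ in range(200):                       # random s-sparse test vectors
    v = np.zeros(N)
    v[rng.choice(N, size=s, replace=False)] = rng.normal(size=s)
    ratios.append(np.linalg.norm(A @ v) / np.linalg.norm(v))
print(min(ratios), max(ratios))            # concentrated around one constant
```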
Large-scale image databases such as ImageNet have significantly advanced
image classification and other visual recognition tasks. However, most of these
datasets are constructed only for single-label and coarse object-level
classification. For real-world applications, multiple labels and fine-grained
categories are often needed, yet very few such datasets exist publicly,
especially those of large-scale and high quality. In this work, we contribute
to the community a new dataset called iMaterialist Fashion Attribute
(iFashion-Attribute) to address this problem in the fashion domain. The dataset
is constructed from over one million fashion images with a label space that
includes 8 groups of 228 fine-grained attributes in total. Each image is
annotated by experts with multiple, high-quality fashion attributes. The result
is the first known million-scale multi-label and fine-grained image dataset. We
conduct extensive experiments and provide baseline results with modern deep
Convolutional Neural Networks (CNNs). Additionally, we demonstrate that models
pre-trained on iFashion-Attribute achieve superior transfer learning
performance on fashion related tasks compared with pre-training from ImageNet
or other fashion datasets. Data is available at:
https://github.com/visipedia/imat_fashion_comp | [
"cs.CV"
] |
Conditional Generative Adversarial Networks (cGANs) are generative models
that can produce data samples ($x$) conditioned on both latent variables ($z$)
and known auxiliary information ($c$). We propose the Bidirectional cGAN
(BiCoGAN), which effectively disentangles $z$ and $c$ in the generation process
and provides an encoder that learns inverse mappings from $x$ to both $z$ and
$c$, trained jointly with the generator and the discriminator. We present
crucial techniques for training BiCoGANs, which involve an extrinsic factor
loss along with an associated dynamically-tuned importance weight. As compared
to other encoder-based cGANs, BiCoGANs encode $c$ more accurately, and utilize
$z$ and $c$ more effectively and in a more disentangled way to generate
samples. | [
"cs.LG",
"stat.ML"
] |
Graph Neural Networks (GNNs) are the first-choice methods for graph machine
learning problems thanks to their ability to learn state-of-the-art level
representations from graph-structured data. However, centralizing a massive
amount of real-world graph data for GNN training is prohibitive due to
user-side privacy concerns, regulation restrictions, and commercial
competition. Federated Learning is the de-facto standard for collaborative
training of machine learning models over many distributed edge devices without
the need for centralization. Nevertheless, training graph neural networks in a
federated setting is vaguely defined and brings statistical and systems
challenges. This work proposes SpreadGNN, a novel multi-task federated training
framework capable of operating in the presence of partial labels and absence of
a central server for the first time in the literature. SpreadGNN extends
federated multi-task learning to realistic serverless settings for GNNs, and
utilizes a novel optimization algorithm with a convergence guarantee,
Decentralized Periodic Averaging SGD (DPA-SGD), to solve decentralized
multi-task learning problems. We empirically demonstrate the efficacy of our
framework on a variety of non-I.I.D. distributed graph-level molecular property
prediction datasets with partial labels. Our results show that SpreadGNN
outperforms GNN models trained over a central server-dependent federated
learning system, even in constrained topologies. The source code is publicly
available at https://github.com/FedML-AI/SpreadGNN | [
"cs.LG"
] |
Generative adversarial networks conditioned on textual image descriptions are
capable of generating realistic-looking images. However, current methods still
struggle to generate images based on complex image captions from a
heterogeneous domain. Furthermore, quantitatively evaluating these
text-to-image models is challenging, as most evaluation metrics only judge
image quality but not the conformity between the image and its caption. To
address these challenges, we introduce a new model that explicitly models
individual objects within an image and a new evaluation metric called Semantic
Object Accuracy (SOA) that specifically evaluates images given an image
caption. The SOA uses a pre-trained object detector to evaluate if a generated
image contains objects that are mentioned in the image caption, e.g. whether an
image generated from "a car driving down the street" contains a car. We perform
a user study comparing several text-to-image models and show that our SOA
metric ranks the models the same way as humans, whereas other metrics such as
the Inception Score do not. Our evaluation also shows that models which
explicitly model objects outperform models which only model global image
characteristics. | [
"cs.CV",
"cs.LG",
"cs.NE"
] |
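An SOA-style check can be sketched with any pretrained COCO detector; the snippet below uses torchvision's Faster R-CNN as a stand-in, with an illustrative score threshold (the paper defines its own detector and evaluation protocol, and the `weights` argument name varies across torchvision versions).

```python
import torch
import torchvision

detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
detector.eval()

def contains_object(image, coco_label_id, score_thresh=0.5):
    """image: float tensor (3, H, W) scaled to [0, 1]."""
    with torch.no_grad():
        out = detector([image])[0]
    hits = (out["labels"] == coco_label_id) & (out["scores"] > score_thresh)
    return bool(hits.any())

# Caption "a car driving down the street" -> COCO category 3 ("car").
generated = torch.rand(3, 256, 256)        # stand-in for a generated image
print(contains_object(generated, coco_label_id=3))
```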
Current high-quality object detection approaches use the scheme of
salience-based object proposal methods followed by post-classification using
deep convolutional features. This spurred recent research in improving object
proposal methods. However, domain agnostic proposal generation has the
principal drawback that the proposals come unranked or with very weak ranking,
making it hard to trade-off quality for running time. This raises the more
fundamental question of whether high-quality proposal generation requires
careful engineering or can be derived just from data alone. We demonstrate that
learning-based proposal methods can effectively match the performance of
hand-engineered methods while allowing for very efficient runtime-quality
trade-offs. Using the multi-scale convolutional MultiBox (MSC-MultiBox)
approach, we substantially advance the state-of-the-art on the ILSVRC 2014
detection challenge data set, with $0.5$ mAP for a single model and $0.52$ mAP
for an ensemble of two models. MSC-MultiBox significantly improves the proposal
quality over its predecessor, the MultiBox method: AP increases from $0.42$ to
$0.53$ for the ILSVRC detection challenge. Finally, we demonstrate improved
bounding-box recall compared to Multiscale Combinatorial Grouping with fewer
proposals on the Microsoft-COCO data set. | [
"cs.CV"
] |
In this paper, we propose a broad comparison between Fully Convolutional
Networks (FCNs) and Mask Region-based Convolutional Neural Networks
(Mask-RCNNs) applied in the Salient Object Detection (SOD) context. Studies in
the SOD literature usually explore architectures based on FCNs to detect
salient regions and objects in visual scenes. However, despite the promising
results achieved, FCNs have shown issues in some challenging scenarios. Fairly
recent studies in the SOD literature proposed the use of a Mask-RCNN approach
to overcome such issues. However, there is no extensive comparison between the
two networks in the SOD literature endorsing the effectiveness of Mask-RCNNs
over FCNs when segmenting salient objects. Aiming to effectively show the
superiority of Mask-RCNNs over FCNs in the SOD context, we compare two
variations of Mask-RCNNs with two variations of FCNs on eight datasets widely
used in the literature, under four metrics. Our findings show that in this
context Mask-RCNNs achieve an improvement in the F-measure of up to 47% over
FCNs. | [
"cs.CV"
] |
Knowledge distillation (KD) is one of the most useful techniques for
light-weight neural networks. Although neural networks have a clear purpose of
embedding datasets into a low-dimensional space, the existing knowledge was
quite far from this purpose and provided only limited information. We argue
that good knowledge should be able to interpret the embedding procedure. This
paper proposes a method of generating interpretable embedding procedure (IEP)
knowledge based on principal component analysis, and distilling it based on a
message passing neural network. Experimental results show that the student
network trained by the proposed KD method improves by 2.28% on the CIFAR100
dataset, outperforming the state-of-the-art (SOTA) method.
We also demonstrate that the embedding procedure knowledge is interpretable via
visualization of the proposed KD process. The implemented code is available at
https://github.com/sseung0703/IEPKT. | [
"cs.CV",
"cs.LG"
] |
Predicting the spread and containment of COVID-19 is a challenge of utmost
importance that the broader scientific community is currently facing. One of
the main sources of difficulty is that a very limited amount of daily COVID-19
case data is available, and with few exceptions, the majority of countries are
currently in the "exponential spread stage," and thus there is scarce
information available which would enable one to predict the phase transition
between spread and containment.
In this paper, we propose a novel approach to predicting the spread of
COVID-19 based on dictionary learning and online nonnegative matrix
factorization (online NMF). The key idea is to learn dictionary patterns of
short evolution instances of the new daily cases in multiple countries at the
same time, so that their latent correlation structures are captured in the
dictionary patterns. We first learn such patterns by minibatch learning from
the entire time-series and then further adapt them to the time-series by online
NMF. As we progressively adapt and improve the learned dictionary patterns to
the more recent observations, we also use them to make one-step predictions by
the partial fitting. Lastly, by recursively applying the one-step predictions,
we can extrapolate our predictions into the near future. Our prediction results
can be directly attributed to the learned dictionary patterns due to their
interpretability. | [
"cs.LG",
"math.OC",
"stat.ML"
] |
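A simplified, offline analogue of the dictionary-based one-step prediction can be sketched as follows; the paper's method is online and minibatch-adaptive, while this version learns a plain NMF over sliding windows. The toy series, window length, and component count are illustrative.

```python
import numpy as np
from scipy.optimize import nnls
from sklearn.decomposition import NMF

series = np.abs(np.sin(np.arange(200) / 7)) + 0.1   # nonnegative toy series
k = 10                                              # window length
windows = np.stack([series[i:i + k] for i in range(len(series) - k)])

model = NMF(n_components=5, init="nndsvda", max_iter=500)
codes = model.fit_transform(windows)     # one code per window
W = model.components_                    # dictionary patterns, shape (5, k)

# One-step prediction: encode the last k-1 observations against the first
# k-1 entries of each pattern, then extrapolate the k-th entry.
code, _ = nnls(W[:, :-1].T, series[-(k - 1):])
prediction = W[:, -1] @ code
```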
Generative Adversarial Networks (GANs), a promising research direction in the
AI community, have recently attracted considerable attention due to their
ability to generate high-quality realistic data. GANs set up a competitive game
between two neural networks trained in an adversarial manner to reach a Nash
equilibrium. Despite the improvements accomplished in GANs over the last years,
several issues remain to be solved, and how to tackle them has attracted rising
research interest. This paper reviews the literature that leverages game theory
in GANs and addresses how game models can relieve specific challenges of
generative models and improve GAN performance. In particular, we first review
some preliminaries, including the basic GAN model and some game theory
background. After that, we present our taxonomy to
summarize the state-of-the-art solutions into three significant categories:
modified game model, modified architecture, and modified learning method. The
classification is based on the modifications made in the basic model by the
proposed approaches from the game-theoretic perspective. We further classify
each category into several subcategories. Following the proposed taxonomy, we
explore the main objective of each class and review the recent work in each
group. Finally, we discuss the remaining challenges in this field and present
the potential future research topics. | [
"cs.LG",
"cs.AI",
"cs.GT"
] |
Representation learning focused on disentangling the underlying factors of
variation in given data has become an important area of research in machine
learning. However, most of the studies in this area have relied on datasets
from the computer vision domain and thus, have not been readily extended to
music. In this paper, we present a new symbolic music dataset that will help
researchers working on disentanglement problems demonstrate the efficacy of
their algorithms on diverse domains. This will also provide a means for
evaluating algorithms specifically designed for music. To this end, we create a
dataset comprising 2-bar monophonic melodies, where each melody is the result
of a unique combination of nine latent factors that span ordinal, categorical,
and binary types. The dataset is large enough (approx. 1.3 million data points)
to train and test deep networks for disentanglement learning. In addition, we
present benchmarking experiments using popular unsupervised disentanglement
algorithms on this dataset and compare the results with those obtained on an
image-based dataset. | [
"cs.LG",
"cs.SD",
"eess.AS"
] |
Safety and the reduction of road traffic accidents remain important issues in
autonomous driving. Statistics show that unintended lane departure is a leading
cause of motor vehicle collisions worldwide, making lane detection a most
promising and challenging task for self-driving. Today, numerous groups are
combining deep learning techniques with computer vision to solve self-driving
problems. In this paper, a Global Convolution Networks (GCN) model is used to
address both classification and localization issues for semantic segmentation
of lanes. A color-based segmentation method is presented, and the usability of
the model is evaluated. Residual-based boundary refinement and Adam
optimization are also used to achieve state-of-the-art performance. Since
ordinary cars cannot afford on-board GPUs, and the training session for a
particular road can be shared by several cars, we propose a framework to make
the system work in the real world. We build a real-time video transfer system
to obtain video from the car, train the model on an edge server (which is
equipped with GPUs), and send the trained model back to the car. | [
"cs.CV"
] |
In this paper we seek methods to effectively detect urban micro-events. Urban
micro-events are events which occur in cities, have limited geographical
coverage and typically affect only a small group of citizens. Because of their
scale, these are difficult to identify in most data sources. However, by using
citizen sensing to gather data, detecting them becomes feasible. The data
gathered by citizen sensing is often multimodal and, as a consequence, the
information required to detect urban micro-events is distributed over multiple
modalities. This makes it essential to have a classifier capable of combining
them. In this paper we explore several methods of creating such a classifier,
including early, late, hybrid fusion and representation learning using
multimodal graphs. We evaluate performance on a real world dataset obtained
from a live citizen reporting system. We show that a multimodal approach yields
higher performance than unimodal alternatives. Furthermore, we demonstrate that
our hybrid combination of early and late fusion with multimodal embeddings
performs best in classification of urban micro-events. | [
"cs.LG",
"stat.ML"
] |
Recurrent Neural Networks (RNN) have become competitive forecasting methods,
as most notably shown in the winning method of the recent M4 competition.
However, established statistical models such as ETS and ARIMA gain their
popularity not only from their high accuracy, but they are also suitable for
non-expert users as they are robust, efficient, and automatic. In these areas,
RNNs still have a long way to go. We present an extensive empirical study and
an open-source software framework of existing RNN architectures for
forecasting, that allow us to develop guidelines and best practices for their
use. For example, we conclude that RNNs are capable of modelling seasonality
directly if the series in the dataset possess homogeneous seasonal patterns,
otherwise we recommend a deseasonalization step. Comparisons against ETS and
ARIMA demonstrate that the implemented (semi-)automatic RNN models are no
silver bullets, but they are competitive alternatives in many situations. | [
"cs.LG",
"cs.NE",
"stat.ML"
] |
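The recommended deseasonalization step can be sketched with statsmodels; the toy series and its period are illustrative, and the RNN itself is omitted.

```python
import numpy as np
from statsmodels.tsa.seasonal import seasonal_decompose

t = np.arange(240)
series = 10 + 0.05 * t + 2 * np.sin(2 * np.pi * t / 12) \
         + np.random.default_rng(0).normal(0, 0.3, t.size)

decomp = seasonal_decompose(series, period=12, model="additive")
deseasonalized = series - decomp.seasonal   # train the RNN on this signal
# ...forecast, then add the seasonal pattern back to the RNN's predictions.
```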
Users can be supported to adopt healthy behaviors, such as regular physical
activity, via relevant and timely suggestions on their mobile devices.
Recently, reinforcement learning algorithms have been found to be effective for
learning the optimal context under which to provide suggestions. However, these
algorithms are not necessarily designed for the constraints posed by mobile
health (mHealth) settings, namely that they be efficient, domain-informed, and
computationally affordable. We propose an algorithm for providing physical
activity suggestions in mHealth settings. Using domain science, we formulate a
contextual bandit algorithm which makes use of a linear mixed effects model. We
then introduce a procedure to efficiently perform hyper-parameter updating,
using far less computational resources than competing approaches. Not only is
our approach computationally efficient, it is also easily implemented with
closed-form matrix algebraic updates, and we show improvements over
state-of-the-art approaches in speed and accuracy of up to 99% and 56%,
respectively. | [
"cs.LG",
"cs.CY",
"stat.ML"
] |
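The flavor of closed-form updates can be seen in a plain Bayesian linear contextual bandit with Thompson sampling. This is a hedged sketch only: the paper's algorithm uses a richer linear mixed effects model and its own efficient hyper-parameter updating.

```python
import numpy as np

class LinearTSBandit:
    """Hedged sketch of a Bayesian linear bandit with closed-form updates."""
    def __init__(self, d, sigma2=1.0):
        self.sigma2 = sigma2
        self.B = np.eye(d)        # posterior precision (starts at the prior)
        self.f = np.zeros(d)      # accumulated x * reward / sigma^2

    def choose(self, contexts, rng):
        mu = np.linalg.solve(self.B, self.f)
        w = rng.multivariate_normal(mu, np.linalg.inv(self.B))  # TS sample
        return int(np.argmax(contexts @ w))

    def update(self, x, reward):
        self.B += np.outer(x, x) / self.sigma2   # rank-one precision update
        self.f += x * reward / self.sigma2

rng = np.random.default_rng(0)
bandit = LinearTSBandit(d=5)
contexts = rng.normal(size=(3, 5))               # one context per action
a = bandit.choose(contexts, rng)
bandit.update(contexts[a], reward=1.0)
```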
Visual place recognition is challenging in the urban environment and is
usually viewed as a large-scale image retrieval task. The intrinsic challenges
of place recognition are that confusing objects such as cars and trees
frequently occur in complex urban scenes, and that buildings with repetitive
structures may cause over-counting and the burstiness problem, degrading the
image representations. To address these problems, we present an Attention-based
Pyramid Aggregation Network (APANet), which is trained in an end-to-end manner
for place recognition. One main component of APANet, the spatial pyramid
pooling, can effectively encode the multi-size buildings containing
geo-information. The other one, the attention block, is adopted as a region
evaluator for suppressing the confusing regional features while highlighting
the discriminative ones. When testing, we further propose a simple yet
effective PCA power whitening strategy, which significantly improves the widely
used PCA whitening by reasonably limiting the impact of over-counting.
Experimental evaluations demonstrate that the proposed APANet outperforms the
state-of-the-art methods on two place recognition benchmarks, and generalizes
well on standard image retrieval datasets. | [
"cs.CV"
] |
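The idea of limiting over-counting by softening the whitening exponent can be sketched as below; this is a minimal interpretation, with `alpha` and the L2 renormalization as assumptions rather than the paper's published recipe (full PCA whitening corresponds to `alpha = 0.5`).

```python
import numpy as np

def pca_power_whiten(X, alpha=0.25, eps=1e-8):
    """Whiten with eigenvalue exponent -alpha; alpha < 0.5 softens the
    rescaling of dominant (possibly bursty) directions."""
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / (len(X) - 1)
    vals, vecs = np.linalg.eigh(cov)
    W = vecs @ np.diag((vals + eps) ** -alpha) @ vecs.T
    out = Xc @ W
    return out / np.linalg.norm(out, axis=1, keepdims=True)

descriptors = np.random.default_rng(0).normal(size=(100, 64))
whitened = pca_power_whiten(descriptors)   # ready for dot-product retrieval
```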
Dense reconstructions often contain errors that prior work has so far
minimised using high quality sensors and regularising the output. Nevertheless,
errors still persist. This paper proposes a machine learning technique to
identify errors in three dimensional (3D) meshes. Beyond simply identifying
errors, our method quantifies both the magnitude and the direction of depth
estimate errors when viewing the scene. This enables us to improve the
reconstruction accuracy.
We train a suitably deep network architecture with two 3D meshes: a
high-quality laser reconstruction, and a lower quality stereo image
reconstruction. The network predicts the amount of error in the lower quality
reconstruction with respect to the high-quality one, having viewed only the
former through its input. We evaluate our approach by correcting
two-dimensional (2D) inverse-depth images extracted from the 3D model, and show
that our method improves the quality of these depth reconstructions by up to a
relative 10% RMSE. | [
"cs.CV",
"cs.RO"
] |
Video object detection is a fundamental problem in computer vision and has a
wide spectrum of applications. Based on deep networks, video object detection
is actively studied for pushing the limits of detection speed and accuracy. To
reduce the computation cost, we sparsely sample key frames in the video and
treat the remaining frames as non-key frames; a large and deep network is used to extract
features for key frames and a tiny network is used for non-key frames. To
enhance the features of non-key frames, we propose a novel short-term feature
aggregation method to propagate the rich information in key frame features to
non-key frame features in a fast way. The fast feature aggregation is enabled
by the freely available motion cues in compressed videos. Further, key frame
features are also aggregated based on optical flow. The propagated deep
features are then integrated with the directly extracted features for object
detection. The feature extraction and feature integration parameters are
optimized in an end-to-end manner. The proposed video object detection network
is evaluated on the large-scale ImageNet VID benchmark and achieves 77.2\% mAP,
which is on-par with state-of-the-art accuracy, at the speed of 30 FPS using a
Titan X GPU. The source codes are available at
\url{https://github.com/hustvl/LSFA}. | [
"cs.CV"
] |
At present, the attention mechanism has been widely applied in deep learning
models. Structural models based on the attention mechanism can not only record
the relationships between feature positions, but can also measure the
importance of different features based on their weights. By establishing
dynamically weighted parameters for choosing relevant and irrelevant features,
the key information can be strengthened and the irrelevant information can be
weakened. Therefore, the efficiency of deep learning algorithms can be
significantly improved. Although transformers have performed very well in many
fields including reinforcement learning, there are still many problems and
applications that can be addressed with transformers in this area. Multi-Agent
Reinforcement Learning (MARL) can be viewed as a set of independent agents
trying to adapt and learn their way toward a goal. In order to emphasize the
relationships between the MDP decisions within a certain time period, we apply
a hierarchical coding method and validate its effectiveness. This paper
proposes a hierarchical transformer MADDPG based on RNNs, which we call
Hierarchical RNN-Based Transformer MADDPG (HRTMADDPG). It consists of a
lower-level encoder based on RNNs that encodes multiple step sizes in each time
sequence, and an upper, sequence-level encoder based on transformers that
learns the correlations between multiple sequences, so that we can capture the
causal relationships between sub-sequences and make HRTMADDPG more
efficient. | [
"cs.LG",
"cs.MA"
] |
Hidden Markov Models (HMMs) are one of the most fundamental and widely used
statistical tools for modeling discrete time series. In general, learning HMMs
from data is computationally hard (under cryptographic assumptions), and
practitioners typically resort to search heuristics which suffer from the usual
local optima issues. We prove that under a natural separation condition (bounds
on the smallest singular value of the HMM parameters), there is an efficient
and provably correct algorithm for learning HMMs. The sample complexity of the
algorithm does not explicitly depend on the number of distinct (discrete)
observations---it implicitly depends on this quantity through spectral
properties of the underlying HMM. This makes the algorithm particularly
applicable to settings with a large number of observations, such as those in
natural language processing where the space of observation is sometimes the
words in a language. The algorithm is also simple, employing only a singular
value decomposition and matrix multiplications. | [
"cs.LG",
"cs.AI"
] |
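The spectral intuition can be demonstrated on a toy HMM: the co-occurrence matrix of consecutive observations has rank at most the number of hidden states, so its singular values reveal that number. This hedged snippet illustrates only that single ingredient, not the full learning algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
T = np.array([[0.9, 0.1], [0.2, 0.8]])   # 2 hidden states: transition matrix
O = np.array([[0.8, 0.1, 0.1],           # 3 observations: emission rows
              [0.1, 0.1, 0.8]])

state, obs = 0, []                        # simulate a long observation run
for _ in range(100_000):
    obs.append(rng.choice(3, p=O[state]))
    state = rng.choice(2, p=T[state])
obs = np.array(obs)

P21 = np.zeros((3, 3))                    # P21[j, i] ~ Pr(x_{t+1}=j, x_t=i)
np.add.at(P21, (obs[1:], obs[:-1]), 1)
P21 /= P21.sum()

print(np.round(np.linalg.svd(P21, compute_uv=False), 4))
# two dominant singular values, third near zero => 2 hidden states
```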
Deployment of deep learning models in robotics as sensory information
extractors can be a daunting task to handle, even using generic GPU cards.
Here, we address three of its most prominent hurdles, namely, i) the adaptation
of a single model to perform multiple tasks at once (in this work, we consider
depth estimation and semantic segmentation crucial for acquiring geometric and
semantic understanding of the scene), while ii) doing it in real-time, and iii)
using asymmetric datasets with uneven numbers of annotations per each modality.
To overcome the first two issues, we adapt a recently proposed real-time
semantic segmentation network, making changes to further reduce the number of
floating point operations. To approach the third issue, we embrace a simple
solution based on hard knowledge distillation under the assumption of having
access to a powerful `teacher' network. We showcase how our system can be
easily extended to handle more tasks, and more datasets, all at once,
performing depth estimation and segmentation both indoors and outdoors with a
single model. Quantitatively, we achieve results equivalent to (or better than)
current state-of-the-art approaches with one forward pass costing just 13ms and
6.5 GFLOPs on 640x480 inputs. This efficiency allows us to directly incorporate
the raw predictions of our network into the SemanticFusion framework for dense
3D semantic reconstruction of the scene. | [
"cs.CV"
] |
Periodicity detection is a crucial step in time series tasks, including
monitoring and forecasting of metrics in many areas, such as IoT applications
and self-driving database management system. In many of these applications,
multiple periodic components exist and are often interlaced with each other.
Such dynamic and complicated periodic patterns make the accurate periodicity
detection difficult. In addition, other components in the time series, such as
trend, outliers and noises, also pose additional challenges for accurate
periodicity detection. In this paper, we propose a robust and general framework
for multiple periodicity detection. Our algorithm applies maximal overlap
discrete wavelet transform to transform the time series into multiple
temporal-frequency scales such that different periodic components can be
isolated. We rank them by wavelet variance, and then at each scale detect
single periodicity by our proposed Huber-periodogram and Huber-ACF robustly. We
rigorously prove the theoretical properties of Huber-periodogram and justify
the use of Fisher's test on Huber-periodogram for periodicity detection. To
further refine the detected periods, we compute unbiased autocorrelation
function based on Wiener-Khinchin theorem from Huber-periodogram for improved
robustness and efficiency. Experiments on synthetic and real-world datasets
show that our algorithm outperforms other popular ones for both single and
multiple periodicity detection. | [
"cs.LG",
"eess.SP",
"stat.AP",
"stat.ML"
] |
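The detect-then-refine pattern can be sketched with a plain periodogram and autocorrelation; the paper's contribution is making both steps robust (Huber-periodogram, Huber-ACF) and handling multiple interlaced periods via wavelet decomposition, none of which this toy version does.

```python
import numpy as np
from scipy.signal import periodogram

rng = np.random.default_rng(0)
t = np.arange(1000)
x = np.sin(2 * np.pi * t / 24) + 0.5 * np.sin(2 * np.pi * t / 7) \
    + 0.3 * rng.normal(size=t.size)

freqs, power = periodogram(x)
peak = np.argmax(power[1:]) + 1            # skip the zero frequency
cand = 1 / freqs[peak]                     # candidate period from spectrum

xc = x - x.mean()                          # refine on the autocorrelation
acf = np.correlate(xc, xc, mode="full")[x.size - 1:]
lo, hi = int(0.8 * cand), int(1.2 * cand) + 1
period = lo + np.argmax(acf[lo:hi])
print(round(cand, 1), period)              # ~24 for the dominant component
```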
The learning rate is one of the most important hyper-parameters for model
training and generalization. However, current hand-designed parametric learning
rate schedules offer limited flexibility and the predefined schedule may not
match the training dynamics of high dimensional and non-convex optimization
problems. In this paper, we propose a reinforcement learning based framework
that can automatically learn an adaptive learning rate schedule by leveraging
the information from past training histories. The learning rate dynamically
changes based on the current training dynamics. To validate this framework, we
conduct experiments with different neural network architectures on the
Fashion-MNIST and CIFAR10 datasets. Experimental results show that the auto-learned
learning rate controller can achieve better test results. In addition, the
trained controller network is generalizable -- able to be trained on one data
set and transferred to new problems. | [
"cs.LG",
"stat.ML"
] |
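To make the interface concrete, the controller observes recent training statistics and emits a learning-rate action each epoch. Everything below is a hypothetical stub: the paper trains the controller with reinforcement learning, whereas this sketch wires a fixed heuristic policy into the same observation/action loop.

```python
import numpy as np

actions = np.array([0.5, 1.0, 2.0])        # multiplicative LR adjustments

def controller_policy(history):
    """Stand-in for the learned controller: decay LR once the validation
    loss stops improving."""
    if len(history) >= 2 and history[-1] > history[-2] - 1e-4:
        return 0                           # halve the learning rate
    return 1                               # keep it unchanged

lr, history = 0.1, []
for epoch in range(20):
    # val_loss would come from a real training step; simulated here.
    val_loss = 1.0 / (epoch + 1) + np.random.default_rng(epoch).normal(0, 0.01)
    history.append(val_loss)
    lr *= actions[controller_policy(history)]
```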
The ability to classify objects is fundamental for robots. Besides knowledge
about their visual appearance, captured by the RGB channels, robots also
heavily need depth information to make sense of the world. While the use of deep
networks on RGB robot images has benefited from the plethora of results
obtained on databases like ImageNet, using convnets on depth images requires
mapping them into three dimensional channels. This transfer learning procedure
makes them processable by pre-trained deep architectures. Current mappings are
based on heuristic assumptions over preprocessing steps and on what depth
properties should be most preserved, resulting often in cumbersome data
visualizations, and in sub-optimal performance in terms of generality and
recognition results. Here we take an alternative route and we attempt instead
to learn an optimal colorization mapping for any given pre-trained
architecture, using as training data a reference RGB-D database. We propose a
deep network architecture, exploiting the residual paradigm, that learns how to
map depth data to three channel images. A qualitative analysis of the images
obtained with this approach clearly indicates that learning the optimal mapping
preserves the richness of depth information better than current hand-crafted
approaches. Experiments on the Washington, JHUIT-50 and BigBIRD public
benchmark databases, using CaffeNet, VGG16, GoogleNet, and ResNet50 clearly
showcase the power of our approach, with gains in performance of up to 16%
compared to state-of-the-art competitors on the depth channel only, leading to
top performance when dealing with RGB-D data. | [
"cs.CV"
] |
Although exploratory behaviors are ubiquitous in the animal kingdom, their
computational underpinnings are still largely unknown. Behavioral Psychology
has identified learning as a primary drive underlying many exploratory
behaviors. Exploration is seen as a means for an animal to gather sensory data
useful for reducing its ignorance about the environment. While related problems
have been addressed in Data Mining and Reinforcement Learning, the
computational modeling of learning-driven exploration by embodied agents is
largely unrepresented.
Here, we propose a computational theory for learning-driven exploration based
on the concept of missing information that allows an agent to identify
informative actions using Bayesian inference. We demonstrate that when
embodiment constraints are high, agents must actively coordinate their actions
to learn efficiently. Compared to earlier approaches, our exploration policy
yields more efficient learning across a range of worlds with diverse
structures. The improved learning in turn affords greater success in general
tasks including navigation and reward gathering. We conclude by discussing how
the proposed theory relates to previous information-theoretic objectives of
behavior, such as predictive information and the free energy principle, and how
it might contribute to a general theory of exploratory behavior. | [
"cs.LG"
] |
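As a concrete, much-simplified illustration of selecting informative actions via missing information, here is a bandit-style sketch with one Dirichlet belief per action: the expected information gain is the mutual information between the next observation and the parameters, which has a closed form for the Dirichlet-categorical model. The embodiment constraints and action coordination studied in the paper are not represented here.

```python
import numpy as np
from scipy.special import digamma

def expected_info_gain(alpha):
    # Mutual information I(X; theta) for X | theta ~ Cat(theta),
    # theta ~ Dir(alpha): predictive entropy minus expected entropy.
    a0 = alpha.sum()
    p = alpha / a0                                  # posterior predictive
    pred_entropy = -np.sum(p * np.log(p))           # H(E[theta])
    exp_entropy = digamma(a0 + 1) - np.sum(p * digamma(alpha + 1))
    return pred_entropy - exp_entropy               # >= 0

rng = np.random.default_rng(0)
beliefs = np.ones((3, 4))                 # uniform priors: 3 actions, 4 outcomes
true_p = rng.dirichlet(np.ones(4), size=3)
for t in range(50):
    gains = [expected_info_gain(a) for a in beliefs]
    act = int(np.argmax(gains))           # informative action selection
    outcome = rng.choice(4, p=true_p[act])
    beliefs[act, outcome] += 1            # conjugate Bayesian update
print(beliefs / beliefs.sum(axis=1, keepdims=True))
```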
Geometric scattering has recently gained recognition in graph representation
learning, and recent work has shown that integrating scattering features in
graph convolution networks (GCNs) can alleviate the typical oversmoothing of
features in node representation learning. However, scattering methods often
rely on handcrafted design, requiring careful selection of frequency bands via
a cascade of wavelet transforms, as well as an effective weight sharing scheme
to combine low-pass and band-pass information. Here, we introduce a new
attention-based architecture to produce adaptive task-driven node
representations by implicitly learning node-wise weights for combining multiple
scattering and GCN channels in the network. We show the resulting geometric
scattering attention network (GSAN) outperforms previous networks in
semi-supervised node classification, while also enabling a spectral study of
extracted information by examining node-wise attention weights. | [
"cs.LG",
"stat.ML"
] |
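The node-wise attention over parallel channels described above can be sketched as follows; this is an illustrative reduction (the scattering and GCN channels themselves are assumed precomputed, and the scoring network is a stand-in).

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Node-wise attention over C parallel channels (e.g., several
    scattering transforms plus low-pass GCN filters), each giving a
    d-dimensional node representation."""
    def __init__(self, d):
        super().__init__()
        self.score = nn.Linear(d, 1)
    def forward(self, H):                # H: [num_nodes, C, d]
        w = torch.softmax(self.score(H).squeeze(-1), dim=-1)  # [N, C]
        return torch.einsum('nc,ncd->nd', w, H)  # weighted combination

H = torch.randn(100, 5, 16)              # 100 nodes, 5 channels, 16-dim features
mix = ChannelAttention(16)
print(mix(H).shape)                      # torch.Size([100, 16])
```

The learned weights w can afterwards be inspected per node, which is what enables the spectral study mentioned in the abstract.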
Deep Reinforcement Learning has shown its ability in solving complicated
problems directly from high-dimensional observations. However, in end-to-end
settings, Reinforcement Learning algorithms are not sample-efficient and
require long training times and large quantities of data. In this work, we propose
a framework for sample-efficient Reinforcement Learning that takes advantage of
state and action representations to transform a high-dimensional problem into a
low-dimensional one. Moreover, we seek to find the optimal policy mapping
latent states to latent actions. Because the policy is now learned on abstract
representations, we use auxiliary loss functions to enforce the lifting of
this policy to the original problem domain. Results show that the novel
framework can efficiently learn low-dimensional and interpretable state and
action representations and the optimal latent policy. | [
"cs.LG",
"cs.AI"
] |
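The overall wiring of such a latent-policy agent might look as below; a minimal sketch, assuming MLP encoders/decoders and an action round-trip reconstruction as one example of the auxiliary losses that lift the latent policy back to the original domain.

```python
import torch
import torch.nn as nn

class LatentPolicyAgent(nn.Module):
    """Encodes high-dimensional states into a small latent space, picks a
    latent action there, and decodes it back to the original action space."""
    def __init__(self, s_dim=64, a_dim=8, zs=4, za=2):
        super().__init__()
        self.enc_s = nn.Sequential(nn.Linear(s_dim, 32), nn.ReLU(), nn.Linear(32, zs))
        self.policy = nn.Sequential(nn.Linear(zs, 32), nn.ReLU(), nn.Linear(32, za))
        self.dec_a = nn.Sequential(nn.Linear(za, 32), nn.ReLU(), nn.Linear(32, a_dim))
        self.enc_a = nn.Sequential(nn.Linear(a_dim, 32), nn.ReLU(), nn.Linear(32, za))
    def forward(self, s):
        return self.dec_a(self.policy(self.enc_s(s)))

agent = LatentPolicyAgent()
s, a = torch.randn(16, 64), torch.randn(16, 8)
# auxiliary loss: actions should survive a round trip through latent space
recon = agent.dec_a(agent.enc_a(a))
aux_loss = torch.nn.functional.mse_loss(recon, a)
print(agent(s).shape, aux_loss.item())
```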
The training of deep neural networks (DNNs) requires intensive computational
and storage resources. Thus, DNNs cannot be efficiently
applied to mobile phones and embedded devices, which seriously limits their
applicability in industrial applications. To address this issue, we propose a
novel encoding scheme of using {-1,+1} to decompose quantized neural networks
(QNNs) into multi-branch binary networks, which can be efficiently implemented
by bitwise operations (xnor and bitcount) to achieve model compression,
computational acceleration and resource saving. Based on our method, users can
easily achieve arbitrary encoding precisions according to their
requirements and hardware resources. The proposed mechanism is very suitable
for the use of FPGA and ASIC in terms of data storage and computation, which
provides a feasible idea for smart chips. We validate the effectiveness of our
method on both large-scale image classification tasks (e.g., ImageNet) and
object detection tasks. In particular, our method with low-bit encoding can
still achieve almost the same performance as its full-precision counterparts. | [
"cs.CV"
] |
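The core identity behind the bitwise implementation above is that for two vectors in {-1, +1}^n stored as bits, the inner product equals 2 * popcount(xnor(a, b)) - n. A small numpy sketch verifying this (bit packing into machine words is omitted):

```python
import numpy as np

def binary_dot(a_bits, b_bits):
    # Inner product of two {-1,+1} vectors stored as bits (1 -> +1, 0 -> -1):
    # dot(a, b) = 2 * popcount(xnor(a, b)) - n.
    n = a_bits.size
    xnor = ~(a_bits ^ b_bits) & 1        # 1 wherever the signs agree
    return 2 * int(xnor.sum()) - n       # bitcount, then rescale

rng = np.random.default_rng(0)
a = rng.integers(0, 2, 64, dtype=np.uint8)   # bit-encoded {-1,+1} vector
b = rng.integers(0, 2, 64, dtype=np.uint8)
signed = lambda bits: 2 * bits.astype(int) - 1
assert binary_dot(a, b) == signed(a) @ signed(b)  # matches float arithmetic
```

On FPGA/ASIC the xnor and bitcount map directly onto single hardware instructions, which is what yields the compression and acceleration claimed above.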
To improve the generalization of detectors for domain adaptive object
detection (DAOD), recent advances mainly explore aligning feature-level
distributions between the source and single-target domain, which may neglect
the impact of domain-specific information existing in the aligned features.
Towards DAOD, it is important to extract domain-invariant object
representations. To this end, in this paper, we try to disentangle
domain-invariant representations from domain-specific ones, and we
propose a novel disentanglement method based on vector decomposition. Firstly, an
extractor is devised to separate domain-invariant representations from the
input, which are used for extracting object proposals. Secondly,
domain-specific representations are introduced as the differences between the
input and domain-invariant representations. Through the difference operation,
the gap between the domain-specific and domain-invariant representations is
enlarged, which promotes domain-invariant representations to contain more
domain-irrelevant information. In the experiment, we separately evaluate our
method on the single- and compound-target cases. For the single-target case,
experimental results of four domain-shift scenes show our method obtains a
significant performance gain over baseline methods. Moreover, for the
compound-target case (i.e., the target is a compound of two different domains
without domain labels), our method outperforms baseline methods by around 4%,
which demonstrates the effectiveness of our method. | [
"cs.CV"
] |
Existing deep learning approaches for learning visual features tend to
overlearn and extract more information than what is required for the task at
hand. From a privacy preservation perspective, the input visual information is
not protected from the model, enabling the model to become more intelligent
than it is trained to be. Current approaches for suppressing additional task
learning assume the presence of ground truth labels for the tasks to be
suppressed during training time. In this research, we propose a three-fold
novel contribution: (i) a model-agnostic solution for reducing model
overlearning by suppressing all the unknown tasks, (ii) a novel metric to
measure the trust score of a trained deep learning model, and (iii) a simulated
benchmark dataset, PreserveTask, having five different fundamental image
classification tasks to study the generalization nature of models. In the first
set of experiments, we learn disentangled representations and suppress
overlearning of five popular deep learning models: VGG16, VGG19, Inception-v1,
MobileNet, and DenseNet on the PreserveTask dataset. Additionally, we show results
of our framework on the color-MNIST dataset and practical applications of face
attribute preservation on the Diversity in Faces (DiF) and IMDB-Wiki datasets. | [
"cs.CV",
"cs.LG"
] |
Learning faithful graph representations as sets of vertex embeddings has
become a fundamental intermediary step in a wide range of machine learning
applications. The quality of the embeddings is usually determined by how well
the geometry of the target space matches the structure of the data. In this
work we learn continuous representations of graphs in spaces of symmetric
matrices over C. These spaces offer a rich geometry that simultaneously admits
hyperbolic and Euclidean subspaces, and are amenable to analysis and explicit
computations. We implement an efficient method to learn embeddings and compute
distances, and develop the tools to operate with such spaces. The proposed
models are able to automatically adapt to very dissimilar arrangements without
any a priori estimates of graph features. On various datasets with very diverse
structural properties and reconstruction measures our model ties the results of
competitive baselines for geometrically pure graphs and outperforms them for
graphs with mixed geometric features, showcasing the versatility of our
approach. | [
"cs.LG",
"cs.CG",
"I.2"
] |
Multivariate time series (MTS) forecasting, which analyzes historical time series
to predict future trends, can effectively help decision-making. Complex
relations among variables in MTS, including static, dynamic, predictable, and
latent relations, have made it possible to mine more features of MTS.
Modeling complex relations is not only essential in characterizing latent
dependency and temporal dependence, but also brings great
challenges to the MTS forecasting task. However, existing methods mainly focus
on modeling certain relations among MTS variables. In this paper, we propose a
novel end-to-end deep learning model, termed Multivariate Time Series
Forecasting via Heterogeneous Graph Neural Networks (MTHetGNN). To characterize
complex relations among variables, a relation embedding module is designed in
MTHetGNN, where each variable is regarded as a graph node, and each type of
edge represents a specific static or dynamic relationship. Meanwhile, a
temporal embedding module is introduced for time series feature extraction,
which involves convolutional neural network (CNN) filters with different
perception scales. Finally, a heterogeneous graph embedding module is adopted
to handle the complex structural information generated by the two modules.
Three benchmark datasets from the real world are used to evaluate the proposed
MTHetGNN. The comprehensive experiments show that MTHetGNN achieves
state-of-the-art results in the MTS forecasting task. | [
"cs.LG",
"stat.ML"
] |
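Of the three modules above, the temporal embedding is the easiest to sketch in isolation; below is a minimal, hypothetical version using 1-D convolutions with several kernel sizes as the differing perception scales (the relation embedding and heterogeneous graph embedding modules are omitted).

```python
import torch
import torch.nn as nn

class TemporalEmbedding(nn.Module):
    """Extracts per-variable temporal features with 1-D convolutions at
    several perception scales and concatenates them into node features."""
    def __init__(self, window, kernels=(3, 5, 7), ch=8):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv1d(1, ch, k, padding=k // 2) for k in kernels)
        self.out_dim = ch * len(kernels)
    def forward(self, x):                # x: [batch, num_vars, window]
        b, v, w = x.shape
        x = x.reshape(b * v, 1, w)       # one conv stream per variable
        feats = [conv(x).mean(dim=-1) for conv in self.convs]  # pool over time
        return torch.cat(feats, dim=-1).reshape(b, v, self.out_dim)

emb = TemporalEmbedding(window=96)
x = torch.randn(4, 7, 96)                # 7 variables, 96 past steps
print(emb(x).shape)                      # torch.Size([4, 7, 24]) node features
```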
Graph convolutional networks are a new promising learning approach to deal
with data on irregular domains. They are predestined to overcome certain
limitations of conventional grid-based architectures and will enable efficient
handling of point clouds or related graphical data representations, e.g.
superpixel graphs. Learning feature extractors and classifiers on 3D point
clouds is still an underdeveloped area and is potentially restricted to
identical graph topologies. In this work, we derive a new architectural design that
combines rotationally and topologically invariant graph diffusion operators and
node-wise feature learning through 1x1 convolutions. By combining multiple
isotropic diffusion operations based on the Laplace-Beltrami operator, we can
learn an optimal linear combination of diffusion kernels for effective feature
propagation across nodes on an irregular graph. We validate our approach for
learning point descriptors as well as semantic classification on real 3D point
clouds of human poses and demonstrate an improvement from 85% to 95% in Dice
overlap with our multi-kernel approach. | [
"cs.CV"
] |
A head-mounted display (HMD) could be an important component of augmented
reality system. However, as the upper face region is seriously occluded by the
device, the user experience could be affected in applications such as
telecommunication and multi-player video games. In this paper, we first present
a novel experimental setup that consists of two near-infrared (NIR) cameras
pointing at the eye regions and one visible-light RGB camera to capture the
visible face region. The main purpose of this paper is to synthesize realistic
face images without occlusions based on the images captured by these cameras.
To this end, we propose a novel synthesis framework that contains four modules:
3D head reconstruction, face alignment and tracking, face synthesis, and eye
synthesis. In face synthesis, we propose a novel algorithm that can robustly
align and track a personalized 3D head model given a face that is severely
occluded by the HMD. In eye synthesis, in order to generate accurate eye
movements and dynamic wrinkle variations around eye regions, we propose another
novel algorithm to colorize the NIR eye images and further remove the "red eye"
effects caused by the colorization. Results show that both the hardware setup
and the system framework can robustly synthesize realistic face images in video
sequences. | [
"cs.CV"
] |
Domain adaptation aims to generalize a model from a source domain to tackle
tasks in a related but different target domain. Traditional domain adaptation
algorithms assume that enough labeled data, which are treated as prior
knowledge, are available in the source domain. However, these algorithms will be
infeasible when only a few labeled data exist in the source domain, and thus
the performance decreases significantly. To address this challenge, we propose
a Domain-invariant Graph Learning (DGL) approach for domain adaptation with
only a few labeled source samples. Firstly, DGL introduces the Nystrom method
to construct a plastic graph that shares similar geometric properties with the
target domain. Then, DGL flexibly employs the Nystrom approximation error
to measure the divergence between plastic graph and source graph to formalize
the distribution mismatch from the geometric perspective. Through minimizing
the approximation error, DGL learns a domain-invariant geometric graph to
bridge source and target domains. Finally, we integrate the learned
domain-invariant graph with the semi-supervised learning and further propose an
adaptive semi-supervised model to handle the cross-domain problems. The results
of extensive experiments on popular datasets verify the superiority of DGL,
especially when only a few labeled source samples are available. | [
"cs.CV"
] |
In many complex dynamical systems, artificial or natural, one can observe
self-organization of patterns emerging from local rules. Cellular automata,
like the Game of Life (GOL), have been widely used as abstract models enabling
the study of various aspects of self-organization and morphogenesis, such as
the emergence of spatially localized patterns. However, findings of
self-organized patterns in such models have so far relied on manual tuning of
parameters and initial states, and on the human eye to identify interesting
patterns. In this paper, we formulate the problem of automated discovery of
diverse self-organized patterns in such high-dimensional complex dynamical
systems, as well as a framework for experimentation and evaluation. Using a
continuous GOL as a testbed, we show that recent intrinsically-motivated
machine learning algorithms (POP-IMGEPs), initially developed for learning of
inverse models in robotics, can be transposed and used in this novel
application area. These algorithms combine intrinsically-motivated goal
exploration and unsupervised learning of goal space representations. Goal space
representations describe the interesting features of patterns for which diverse
variations should be discovered. In particular, we compare various approaches
to define and learn goal space representations from the perspective of
discovering diverse spatially localized patterns. Moreover, we introduce an
extension of a state-of-the-art POP-IMGEP algorithm which incrementally learns
a goal representation using a deep auto-encoder, and the use of CPPN primitives
for generating initialization parameters. We show that it is more efficient
than several baselines and equally efficient as a system pre-trained on a
hand-made database of patterns identified by human experts. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
Deep neural network models used for medical image segmentation are large
because they are trained with high-resolution three-dimensional (3D) images.
Graphics processing units (GPUs) are widely used to accelerate the trainings.
However, the memory on a GPU is not large enough to train the models. A popular
approach to tackling this problem is the patch-based method, which divides a large
image into small patches and trains the models with these small patches.
However, this method would degrade the segmentation quality if a target object
spans multiple patches. In this paper, we propose a novel approach for 3D
medical image segmentation that utilizes data-swapping, which swaps out
intermediate data from GPU memory to CPU memory to enlarge the effective GPU
memory size, for training high-resolution 3D medical images without patching.
We carefully tuned parameters in the data-swapping method to obtain the best
training performance for 3D U-Net, a widely used deep neural network model for
medical image segmentation. We applied our tuning to train 3D U-Net with
full-size images of 192 x 192 x 192 voxels in brain tumor dataset. As a result,
communication overhead, which is the most important issue, was reduced by
17.1%. Compared with the patch-based method for patches of 128 x 128 x 128
voxels, our training for full-size images achieved improvement on the mean Dice
score by 4.48% and 5.32% for detecting the whole tumor sub-region and tumor core
sub-region, respectively. The total training time was reduced from 164 hours to
47 hours, a 3.53-fold acceleration. | [
"cs.LG",
"cs.CV",
"cs.PF",
"stat.ML",
"C.4; I.2.6; I.2.10; I.4.6; I.4.9; J.4"
] |
To help understand the underlying mechanisms of neural networks (NNs),
several groups have, in recent years, studied the number of linear regions
$\ell$ of piecewise linear functions generated by deep neural networks (DNNs).
In particular, they showed that $\ell$ can grow exponentially with the number
of network parameters $p$, a property often used to explain the advantages of
DNNs over shallow NNs in approximating complicated functions. Nonetheless, a
simple dimension argument shows that DNNs cannot generate all piecewise linear
functions with $\ell$ linear regions as soon as $\ell > p$. It is thus natural
to seek to characterize specific families of functions with $\ell$ linear
regions that can be constructed by DNNs. Iterated Function Systems (IFS)
generate sequences of piecewise linear functions $F_k$ with a number of linear
regions exponential in $k$. We show that, under mild assumptions, $F_k$ can be
generated by a NN using only $\mathcal{O}(k)$ parameters. IFS are used
extensively to generate, at low computational cost, natural-looking landscape
textures in artificial images. They have also been proposed for compression of
natural images, albeit with less commercial success. The surprisingly good
performance of this fractal-based compression suggests that our visual system
may lock in, to some extent, on self-similarities in images. The combination of
this phenomenon with the capacity, demonstrated here, of DNNs to efficiently
approximate IFS may contribute to the success of DNNs, particularly striking
for image processing tasks, as well as suggest new algorithms for representing
self-similarities in images based on the DNN mechanism. | [
"cs.LG",
"cs.IT",
"math.IT",
"stat.ML"
] |
Deep learning models, such as convolutional neural networks, have long been
applied to image and multi-media tasks, particularly those with structured
data. More recently, there has been more attention to unstructured data that
can be represented via graphs. These types of data are often found in health
and medicine, social networks, and research data repositories. Graph
convolutional neural networks, which take advantage of graph-based data
representations with automatic feature extraction via convolutions, have
recently gained attention in the field of deep learning. Given the popularity of these
methods in a wide range of applications, robust uncertainty quantification is
vital. This remains a challenge for large models and unstructured datasets.
Bayesian inference provides a principled approach to uncertainty quantification
of model parameters for deep learning models. Although Bayesian inference has
been used extensively elsewhere, its application to deep learning remains
limited due to the computational requirements of the Markov Chain Monte Carlo
(MCMC) methods. Recent advances in parallel computing and advanced proposal
schemes in MCMC sampling methods have opened the path for Bayesian deep
learning. In this paper, we present Bayesian graph convolutional neural
networks that employ tempered MCMC sampling with Langevin-gradient proposal
distribution implemented via parallel computing. Our results show that the
proposed method can provide accuracy similar to advanced optimisers while
providing uncertainty quantification for key benchmark problems. | [
"cs.LG"
] |
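The Langevin-gradient proposal above is a Metropolis-Hastings scheme whose proposals drift along the (log-)posterior gradient; the asymmetric proposal density must then appear in the acceptance ratio. A minimal numpy sketch on a toy 2-D Gaussian target (the paper applies this to GCN parameters with tempering and parallel chains):

```python
import numpy as np

rng = np.random.default_rng(0)
PREC = np.array([[2.0, 0.8], [0.8, 1.0]])      # toy target precision

def log_post(theta):
    return -0.5 * theta @ PREC @ theta

def grad_log_post(theta):
    return -PREC @ theta

def langevin_mh(n_steps=5000, step=0.05, temperature=1.0):
    theta, samples = np.zeros(2), []
    for _ in range(n_steps):
        mu = theta + 0.5 * step * grad_log_post(theta)      # gradient drift
        prop = mu + np.sqrt(step) * rng.standard_normal(2)
        mu_back = prop + 0.5 * step * grad_log_post(prop)
        # log q(theta | prop) - log q(prop | theta): proposal asymmetry
        log_q = (-np.sum((theta - mu_back) ** 2)
                 + np.sum((prop - mu) ** 2)) / (2 * step)
        log_alpha = (log_post(prop) - log_post(theta)) / temperature + log_q
        if np.log(rng.uniform()) < log_alpha:
            theta = prop
        samples.append(theta)
    return np.array(samples)

s = langevin_mh()
print(s.mean(axis=0), np.cov(s.T))   # close to the target's moments
```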
We propose a novel Generative Adversarial Network (XingGAN or CrossingGAN)
for person image generation tasks, i.e., translating the pose of a given person
to a desired one. The proposed Xing generator consists of two generation
branches that model the person's appearance and shape information,
respectively. Moreover, we propose two novel blocks to effectively transfer and
update the person's shape and appearance embeddings in a crossing way to
mutually improve each other, which has not been considered by any other
existing GAN-based image generation work. Extensive experiments on two
challenging datasets, i.e., Market-1501 and DeepFashion, demonstrate that the
proposed XingGAN advances the state-of-the-art performance both in terms of
objective quantitative scores and subjective visual realness. The source code
and trained models are available at https://github.com/Ha0Tang/XingGAN. | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
Visual events are usually accompanied by sounds in our daily lives. However,
can the machines learn to correlate the visual scene and sound, as well as
localize the sound source only by observing them like humans? To investigate
its empirical learnability, in this work we first present a novel unsupervised
algorithm to address the problem of localizing sound sources in visual scenes.
In order to achieve this goal, a two-stream network structure which handles
each modality with attention mechanism is developed for sound source
localization. The network naturally reveals the localized response in the scene
without human annotation. In addition, a new sound source dataset is developed
for performance evaluation. Nevertheless, our empirical evaluation shows that
the unsupervised method generates false conclusions in some cases. We then
show that these false conclusions cannot be fixed without human prior knowledge,
due to the well-known mismatch between correlation and causality. To fix
this issue, we extend our network to the supervised and semi-supervised network
settings via a simple modification due to the general architecture of our
two-stream network. We show that the false conclusions can be effectively
corrected even with a small amount of supervision, i.e., semi-supervised setup.
Furthermore, we present the versatility of the learned audio and visual
embeddings on the cross-modal content alignment and we extend this proposed
algorithm to a new application, sound saliency based automatic camera view
panning in 360{\deg} videos. | [
"cs.CV"
] |
Real-world information networks are increasingly occurring across various
disciplines including online social networks and citation networks. These
network data are generally characterized by sparseness, nonlinearity and
heterogeneity bringing different challenges to the network analytics task to
capture inherent properties from network data. Artificial intelligence and
machine learning have been recently leveraged as powerful systems to learn
insights from network data and deal with presented challenges. As part of
machine learning techniques, graph embedding approaches are originally
conceived for graphs constructed from feature-represented datasets, such as image
datasets, in which links between nodes are explicitly defined. These traditional
approaches cannot cope with network data challenges. As a new learning
paradigm, network representation learning has been proposed to map a real-world
information network into a low-dimensional space while preserving inherent
properties of the network. In this paper, we present a systematic comprehensive
survey of network representation learning, known also as network embedding,
from birth to the current development state. Through the undertaken survey, we
provide a comprehensive view of reasons behind the emergence of network
embedding and, types of settings and models used in the network embedding
pipeline. Thus, we introduce a brief history of representation learning and of
word representation learning, the ancestor of network embedding. We also provide
formal definitions of basic concepts required to understand network
representation learning followed by a description of network embedding
pipeline. Most commonly used downstream tasks to evaluate embeddings, their
evaluation metrics and popular datasets are highlighted. Finally, we present
the open-source libraries for network embedding. | [
"cs.LG",
"cs.AI"
] |
Unbiased confidence estimates of neural networks are crucial especially for
safety-critical applications. Many methods have been developed to calibrate
biased confidence estimates. Though there is a variety of methods for
classification, the field of object detection has not been addressed yet.
Therefore, we present a novel framework to measure and calibrate biased (or
miscalibrated) confidence estimates of object detection methods. The main
difference to related work in the field of classifier calibration is that we
also use additional information of the regression output of an object detector
for calibration. Our approach allows, for the first time, to obtain calibrated
confidence estimates with respect to image location and box scale. In addition,
we propose a new measure to evaluate miscalibration of object detectors.
Finally, we show that our developed methods outperform state-of-the-art
calibration models for the task of object detection and provide reliable
confidence estimates across different locations and scales. | [
"cs.CV",
"cs.LG",
"stat.ML"
] |
Recent advancements in deep learning-based modeling of molecules promise to
accelerate in silico drug discovery. A plethora of generative models is
available, building molecules either atom-by-atom and bond-by-bond or
fragment-by-fragment. However, many drug discovery projects require a fixed
scaffold to be present in the generated molecule, and incorporating that
constraint has only recently been explored. In this work, we propose a new
graph-based model that naturally supports scaffolds as initial seed of the
generative procedure, which is possible because our model is not conditioned on
the generation history. At the same time, our generation procedure can flexibly
choose between adding individual atoms and entire fragments. We show that
training using a randomized generation order is necessary for good performance
when extending scaffolds, and that the results are further improved by
increasing the fragment vocabulary size. Our model pushes the state-of-the-art
of graph-based molecule generation, while being an order of magnitude faster to
train and sample from than existing approaches. | [
"cs.LG",
"q-bio.QM"
] |
We present GraphMix, a regularization method for Graph Neural Network based
semi-supervised object classification, whereby we propose to train a
fully-connected network jointly with the graph neural network via parameter
sharing and interpolation-based regularization. Further, we provide a
theoretical analysis of how GraphMix improves the generalization bounds of the
underlying graph neural network, without making any assumptions about the
"aggregation" layer or the depth of the graph neural networks. We
experimentally validate this analysis by applying GraphMix to various
architectures such as Graph Convolutional Networks, Graph Attention Networks
and Graph-U-Net. Despite its simplicity, we demonstrate that GraphMix can
consistently improve or closely match state-of-the-art performance using even
simpler architectures such as Graph Convolutional Networks, across three
established graph benchmarks: Cora, Citeseer and Pubmed citation network
datasets, as well as three newly proposed datasets: Cora-Full, Co-author-CS and
Co-author-Physics. | [
"cs.LG",
"stat.ML"
] |
Coloring line art images based on the colors of reference images is an
important stage in animation production, which is time-consuming and tedious.
In this paper, we propose a deep architecture to automatically color line art
videos with the same color style as the given reference images. Our framework
consists of a color transform network and a temporal constraint network. The
color transform network takes the target line art images as well as the line
art and color images of one or more reference images as input, and generates
corresponding target color images. To cope with larger differences between the
target line art image and reference color images, our architecture utilizes
non-local similarity matching to determine the region correspondences between
the target image and the reference images, which are used to transform the
local color information from the references to the target. To ensure global
color style consistency, we further incorporate Adaptive Instance Normalization
(AdaIN) with the transformation parameters obtained from a style embedding
vector that describes the global color style of the references, extracted by an
embedder. The temporal constraint network takes the reference images and the
target image together in chronological order, and learns the spatiotemporal
features through 3D convolution to ensure the temporal consistency of the
target image and the reference image. Our model can achieve even better
coloring results by fine-tuning the parameters with only a small amount of
samples when dealing with an animation of a new style. To evaluate our method,
we build a line art coloring dataset. Experiments show that our method achieves
the best performance on line art video coloring compared to the
state-of-the-art methods and other baselines. | [
"cs.CV"
] |
The Transformer architecture has revolutionized deep learning on sequential
data, becoming ubiquitous in state-of-the-art solutions for a wide variety of
applications. Yet vanilla Transformers are notoriously resource-expensive,
requiring $O(L^2)$ in serial time and memory as functions of input length $L$.
Recent works have proposed various linear self-attention mechanisms, scaling only as
$O(L)$ for serial computation. We perform a thorough analysis of recent
Transformer mechanisms with linear self-attention, Performers, in terms of
overall computational complexity. We observe a remarkable computational
flexibility: forward and backward propagation can be performed with no
approximations using sublinear memory as a function of $L$ (in addition to
negligible storage for the input sequence), at a cost of greater time
complexity in the parallel setting. In the extreme case, a Performer consumes
only $O(1)$ memory during training, and still requires $O(L)$ time. This
discovered time-memory tradeoff can be used for training or, due to complete
backward-compatibility, for fine-tuning on a low-memory device, e.g. a
smartphone or an earlier-generation GPU, thus contributing towards
decentralized and democratized deep learning. | [
"cs.LG"
] |
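For intuition on why linear self-attention admits this flexibility, here is a minimal non-causal linear-attention sketch in the style of Performers and related linear transformers: contracting over the length dimension first removes the L x L attention matrix. This does not show the paper's O(1)-memory trick, which additionally recomputes such contractions during the backward pass.

```python
import torch

def linear_attention(q, k, v, eps=1e-6):
    # With a positive feature map phi, attention becomes
    # phi(Q) (phi(K)^T V) / (phi(Q) phi(K)^T 1), computable in O(L).
    phi = lambda x: torch.nn.functional.elu(x) + 1   # positive features
    q, k = phi(q), phi(k)
    kv = torch.einsum('lbd,lbe->bde', k, v)          # contract length first
    z = torch.einsum('lbd,bd->lb', q, k.sum(dim=0))  # normalizer
    return torch.einsum('lbd,bde->lbe', q, kv) / (z.unsqueeze(-1) + eps)

L, B, d = 1024, 2, 64
q, k, v = (torch.randn(L, B, d) for _ in range(3))
print(linear_attention(q, k, v).shape)  # torch.Size([1024, 2, 64])
```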
We present a novel method to explicitly incorporate topological prior
knowledge into deep learning based segmentation, which is, to our knowledge,
the first work to do so. Our method uses the concept of persistent homology, a
tool from topological data analysis, to capture high-level topological
characteristics of segmentation results in a way which is differentiable with
respect to the pixelwise probability of being assigned to a given class. The
topological prior knowledge consists of the sequence of desired Betti numbers
of the segmentation. As a proof-of-concept we demonstrate our approach by
applying it to the problem of left-ventricle segmentation of cardiac MR images
of 500 subjects from the UK Biobank dataset, where we show that it improves
segmentation performance in terms of topological correctness without
sacrificing pixelwise accuracy. | [
"cs.CV"
] |
Video salient object detection (VSOD) is an important task in many vision
applications. Reliable VSOD requires to simultaneously exploit the information
from both the spatial domain and the temporal domain. Most of the existing
algorithms merely utilize simple fusion strategies, such as addition and
concatenation, to merge the information from different domains. Despite their
simplicity, such fusion strategies may introduce feature redundancy, and also
fail to fully exploit the relationship between multi-level features extracted
from both spatial and temporal domains. In this paper, we suggest an adaptive
local-global refinement framework for VSOD. Different from previous approaches,
we propose a local refinement architecture and a global one to refine the
simply fused features with different scopes, which can fully explore the local
dependence and the global dependence of multi-level features. In addition, to
emphasize the effective information and suppress the useless one, an adaptive
weighting mechanism is designed based on graph convolutional neural network
(GCN). We show that our weighting methodology can further exploit the feature
correlations, thus driving the network to learn more discriminative feature
representation. Extensive experimental results on public video datasets
demonstrate the superiority of our method over the existing ones. | [
"cs.CV"
] |
Boosting is one of the most successful ideas in machine learning, achieving
great practical performance with little fine-tuning. The success of boosted
classifiers is most often attributed to improvements in margins. The focus on
margin explanations was pioneered in the seminal work by Schapire et al. (1998)
and has culminated in the $k$'th margin generalization bound by Gao and Zhou
(2013), which was recently proved to be near-tight for some data distributions
(Gr{\o}nlund et al., 2019). In this work, we first demonstrate that the $k$'th
margin bound is inadequate in explaining the performance of state-of-the-art
gradient boosters. We then explain the shortcomings of the $k$'th margin bound
and prove a stronger and more refined margin-based generalization bound for
boosted classifiers that indeed succeeds in explaining the performance of
modern gradient boosters. Finally, we improve upon the recent generalization
lower bound by Gr{\o}nlund et al. (2019). | [
"cs.LG",
"stat.ML"
] |
In several reinforcement learning (RL) scenarios, mainly in security
settings, there may be adversaries trying to interfere with the reward
generating process. In this paper, we introduce Threatened Markov Decision
Processes (TMDPs), which provide a framework to support a decision maker
against a potential adversary in RL. Furthermore, we propose a level-$k$
thinking scheme resulting in a new learning framework to deal with TMDPs. After
introducing our framework and deriving theoretical results, relevant empirical
evidence is given via extensive experiments, showing the benefits of accounting
for adversaries while the agent learns. | [
"cs.LG",
"cs.AI",
"cs.CR",
"stat.ML"
] |
Modern adiabatic quantum computers (AQC) are already used to solve difficult
combinatorial optimisation problems in various domains of science. Currently,
only a few applications of AQC in computer vision have been demonstrated. We
review AQC and derive a new algorithm for correspondence problems on point sets
suitable for execution on AQC. Our algorithm has subquadratic computational
complexity for state preparation. Examples of successful transformation
estimation and point set alignment by simulated sampling are shown in the
systematic experimental evaluation. Finally, we analyse the differences in the
solutions and the corresponding energy values. | [
"cs.CV",
"cs.ET",
"quant-ph"
] |
Mutual information maximization provides an appealing formalism for learning
representations of data. In the context of reinforcement learning (RL), such
representations can accelerate learning by discarding irrelevant and redundant
information, while retaining the information necessary for control. Much of the
prior work on these methods has addressed the practical difficulties of
estimating mutual information from samples of high-dimensional observations,
while comparatively less is understood about which mutual information
objectives yield representations that are sufficient for RL from a theoretical
perspective. In this paper, we formalize the sufficiency of a state
representation for learning and representing the optimal policy, and study
several popular mutual-information based objectives through this lens.
Surprisingly, we find that two of these objectives can yield insufficient
representations given mild and common assumptions on the structure of the MDP.
We corroborate our theoretical results with empirical experiments on a
simulated game environment with visual observations. | [
"cs.LG",
"cs.AI"
] |
State-of-the-art single depth image-based 3D hand pose estimation methods are
based on dense predictions, including voxel-to-voxel predictions,
point-to-point regression, and pixel-wise estimations. Despite the good
performance, those methods have a few inherent issues, such as the poor
trade-off between accuracy and efficiency, and plain feature representation
learning with local convolutions. In this paper, a novel pixel-wise
prediction-based method is proposed to address the above issues. The key ideas
are two-fold: a) explicitly modeling the dependencies among joints and the
relations between the pixels and the joints for better local feature
representation learning; b) unifying the dense pixel-wise offset predictions
and direct joint regression for end-to-end training. Specifically, we first
propose a graph convolutional network (GCN) based joint graph reasoning module
to model the complex dependencies among joints and augment the representation
capability of each pixel. Then we densely estimate all pixels' offsets to
joints in both image plane and depth space and calculate the joints' positions
by a weighted average over all pixels' predictions, entirely discarding
complex post-processing operations. The proposed model is implemented with an
efficient 2D fully convolutional network (FCN) backbone and has only about 1.4M
parameters. Extensive experiments on multiple 3D hand pose estimation
benchmarks demonstrate that the proposed method achieves new state-of-the-art
accuracy while running very efficiently at around 110 fps on a
single NVIDIA 1080Ti GPU. | [
"cs.CV"
] |
We show that the YOLOv4 object detection neural network based on the CSP
approach, scales both up and down and is applicable to small and large networks
while maintaining optimal speed and accuracy. We propose a network scaling
approach that modifies not only the depth, width, and resolution, but also the
structure of the network. The YOLOv4-large model achieves state-of-the-art results:
55.5% AP (73.4% AP50) for the MS COCO dataset at a speed of ~16 FPS on Tesla
V100, while with the test time augmentation, YOLOv4-large achieves 56.0% AP
(73.3 AP50). To the best of our knowledge, this is currently the highest
accuracy on the COCO dataset among any published work. The YOLOv4-tiny model
achieves 22.0% AP (42.0% AP50) at a speed of 443 FPS on RTX 2080Ti, while
using TensorRT with batch size = 4 and FP16 precision, YOLOv4-tiny achieves 1774
FPS. | [
"cs.CV",
"cs.LG"
] |
Protective behavior exhibited by people with chronic pain (CP) during
physical activities is the key to understanding their physical and emotional
states. Existing automatic protective behavior detection (PBD) methods rely on
pre-segmentation of activities predefined by users. However, in real life,
people perform activities casually. Therefore, where those activities present
difficulties for people with chronic pain, technology-enabled support should be
delivered continuously and automatically adapted to activity type and
occurrence of protective behavior. Hence, to facilitate ubiquitous CP
management, it becomes critical to enable accurate PBD over continuous data. In
this paper, we propose to integrate human activity recognition (HAR) with PBD
via a novel hierarchical HAR-PBD architecture comprising graph-convolution and
long short-term memory (GC-LSTM) networks, and alleviate class imbalances using
a class-balanced focal categorical-cross-entropy (CFCC) loss. Through in-depth
evaluation of the approach using a CP patients' dataset, we show that the
leveraging of HAR, GC-LSTM networks, and CFCC loss leads to clear increase in
PBD performance against the baseline (macro F1 score of 0.81 vs. 0.66 and
precision-recall area-under-the-curve (PR-AUC) of 0.60 vs. 0.44). We conclude
by discussing possible use cases of the hierarchical architecture in CP
management and beyond. We also discuss current limitations and ways forward. | [
"cs.LG"
] |
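One possible form of the class-balanced focal categorical-cross-entropy loss mentioned above, sketched under the common "effective number of samples" weighting (1 - beta) / (1 - beta^n_c) combined with the focal term (1 - p_t)^gamma; the paper's exact formulation may differ in details such as the normalization.

```python
import torch
import torch.nn.functional as F

def cb_focal_loss(logits, targets, samples_per_class, beta=0.999, gamma=2.0):
    # Class-balanced weights from the effective number of samples.
    n_c = torch.as_tensor(samples_per_class, dtype=torch.float)
    w = (1.0 - beta) / (1.0 - beta ** n_c)
    w = w / w.sum() * len(n_c)                 # normalize to mean 1
    log_p = F.log_softmax(logits, dim=-1)
    p_t = log_p.gather(1, targets.unsqueeze(1)).squeeze(1).exp()
    focal = (1.0 - p_t) ** gamma               # down-weight easy examples
    ce = F.nll_loss(log_p, targets, reduction='none')
    return (w[targets] * focal * ce).mean()

logits = torch.randn(8, 3, requires_grad=True)
targets = torch.tensor([0, 0, 0, 0, 1, 1, 2, 2])
loss = cb_focal_loss(logits, targets, samples_per_class=[900, 90, 10])
loss.backward()
print(loss.item())
```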
Medical research is risky and expensive. Drug discovery, as an example,
requires that researchers efficiently winnow thousands of potential targets to
a small candidate set for more thorough evaluation. However, research groups
spend significant time and money to perform the experiments necessary to
determine this candidate set long before seeing intermediate results.
Hypothesis generation systems address this challenge by mining the wealth of
publicly available scientific information to predict plausible research
directions. We present AGATHA, a deep-learning hypothesis generation system
that can introduce data-driven insights earlier in the discovery process.
Through a learned ranking criteria, this system quickly prioritizes plausible
term-pairs among entity sets, allowing us to recommend new research directions.
We massively validate our system with a temporal holdout wherein we predict
connections first introduced after 2015 using data published beforehand. We
additionally explore biomedical sub-domains, and demonstrate AGATHA's
predictive capacity across the twenty most popular relationship types. This
system achieves best-in-class performance on an established benchmark, and
demonstrates high recommendation scores across subdomains. Reproducibility: All
code, experimental data, and pre-trained models are available online:
sybrandt.com/2020/agatha | [
"cs.LG",
"stat.ML"
] |
Planning is a powerful approach to control problems with known environment
dynamics. In unknown environments the agent needs to learn a model of the
system dynamics to make planning applicable. This is particularly challenging
when the underlying states are only indirectly observable through images. We
propose to learn a deep latent Gaussian process dynamics (DLGPD) model that
learns low-dimensional system dynamics from environment interactions with
visual observations. The method infers latent state representations from
observations using neural networks and models the system dynamics in the
learned latent space with Gaussian processes. All parts of the model can be
trained jointly by optimizing a lower bound on the likelihood of transitions in
image space. We evaluate the proposed approach on the pendulum swing-up task
while using the learned dynamics model for planning in latent space in order to
solve the control problem. We also demonstrate that our method can quickly
adapt a trained agent to changes in the system dynamics from just a few
rollouts. We compare our approach to a state-of-the-art purely deep learning
based method and demonstrate the advantages of combining Gaussian processes
with deep learning for data efficiency and transfer learning. | [
"cs.LG",
"cs.CV",
"stat.ML"
] |
We show that there may exist an inherent tension between the goal of
adversarial robustness and that of standard generalization. Specifically,
training robust models may not only be more resource-consuming, but also lead
to a reduction of standard accuracy. We demonstrate that this trade-off between
the standard accuracy of a model and its robustness to adversarial
perturbations provably exists in a fairly simple and natural setting. These
findings also corroborate a similar phenomenon observed empirically in more
complex settings. Further, we argue that this phenomenon is a consequence of
robust classifiers learning fundamentally different feature representations
than standard classifiers. These differences, in particular, seem to result in
unexpected benefits: the representations learned by robust models tend to align
better with salient data characteristics and human perception. | [
"stat.ML",
"cs.CV",
"cs.LG",
"cs.NE"
] |
Autoencoder-based learning has emerged as a staple for disciplining
representations in unsupervised and semi-supervised settings. This paper
analyzes a framework for improving generalization in a purely supervised
setting, where the target space is high-dimensional. We motivate and formalize
the general framework of target-embedding autoencoders (TEA) for supervised
prediction, learning intermediate latent representations jointly optimized to
be both predictable from features as well as predictive of targets---encoding
the prior that variations in targets are driven by a compact set of underlying
factors. As our theoretical contribution, we provide a guarantee of
generalization for linear TEAs by demonstrating uniform stability, interpreting
the benefit of the auxiliary reconstruction task as a form of regularization.
As our empirical contribution, we extend validation of this approach beyond
existing static classification applications to multivariate sequence
forecasting, verifying their advantage on both linear and nonlinear recurrent
architectures---thereby underscoring the further generality of this framework
beyond feedforward instantiations. | [
"stat.ML",
"cs.LG"
] |
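The target-embedding autoencoder objective above combines a target reconstruction term with a feature-to-latent alignment term; a minimal linear-ish sketch (layer sizes and the weighting `lam` are illustrative, and whether the alignment uses a detached latent is a design choice left joint here):

```python
import torch
import torch.nn as nn

class TargetEmbeddingAutoencoder(nn.Module):
    """Learns a latent code z that (a) reconstructs the high-dimensional
    target y and (b) is predictable from features x; test-time predictions
    go x -> z -> y."""
    def __init__(self, x_dim=20, y_dim=100, z_dim=8):
        super().__init__()
        self.enc = nn.Linear(y_dim, z_dim)        # target encoder
        self.dec = nn.Linear(z_dim, y_dim)        # target decoder
        self.pred = nn.Sequential(nn.Linear(x_dim, 32), nn.ReLU(),
                                  nn.Linear(32, z_dim))  # features -> latent
    def loss(self, x, y, lam=1.0):
        z = self.enc(y)
        recon = nn.functional.mse_loss(self.dec(z), y)    # auxiliary task
        align = nn.functional.mse_loss(self.pred(x), z)   # predictability
        return recon + lam * align
    def forward(self, x):                                 # test-time path
        return self.dec(self.pred(x))

tea = TargetEmbeddingAutoencoder()
x, y = torch.randn(32, 20), torch.randn(32, 100)
opt = torch.optim.Adam(tea.parameters(), lr=1e-3)
opt.zero_grad(); tea.loss(x, y).backward(); opt.step()
print(tea(x).shape)  # torch.Size([32, 100])
```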
We study the problem of transferring a sample in one domain to an analog
sample in another domain. Given two related domains, S and T, we would like to
learn a generative function G that maps an input sample from S to the domain T,
such that the output of a given function f, which accepts inputs in either
domains, would remain unchanged. Other than the function f, the training data
is unsupervised and consists of a set of samples from each domain. The Domain
Transfer Network (DTN) we present employs a compound loss function that
includes a multiclass GAN loss, an f-constancy component, and a regularizing
component that encourages G to map samples from T to themselves. We apply our
method to visual domains including digits and face images and demonstrate its
ability to generate convincing novel images of previously unseen entities,
while preserving their identity. | [
"cs.CV"
] |
Generative modeling has evolved to a notable field of machine learning. Deep
polynomial neural networks (PNNs) have demonstrated impressive results in
unsupervised image generation, where the task is to map an input vector (i.e.,
noise) to a synthesized image. However, the success of PNNs has not been
replicated in conditional generation tasks, such as super-resolution. Existing
PNNs focus on single-variable polynomial expansions, which do not fare well with
two-variable inputs, i.e., the noise variable and the conditional variable. In
this work, we introduce a general framework, called CoPE, that enables a
polynomial expansion of two input variables and captures their auto- and
cross-correlations. We exhibit how CoPE can be trivially augmented to accept an
arbitrary number of input variables. CoPE is evaluated in five tasks
(class-conditional generation, inverse problems, edges-to-image translation,
image-to-image translation, attribute-guided generation) involving eight
datasets. The thorough evaluation suggests that CoPE can be useful for tackling
diverse conditional generation tasks. | [
"cs.LG",
"cs.CV"
] |
Electronic records contain sequences of events, some of which take place all
at once in a single visit, and others that are dispersed over multiple visits,
each with a different timestamp. We postulate that fine temporal detail, e.g.,
whether a series of blood tests is completed at once or in rapid succession
should not alter predictions based on this data. Motivated by this intuition,
we propose models for analyzing sequences of multivariate clinical time series
data that are invariant to this temporal clustering. We propose an efficient
data augmentation technique that exploits the postulated temporal-clustering
invariance to regularize deep neural networks optimized for several clinical
prediction tasks. We introduce two techniques to temporally coarsen
(downsample) irregular time series: (i) grouping the data points based on
regularly-spaced timestamps; and (ii) clustering them, yielding
irregularly-spaced timestamps. Moreover, we propose a MultiResolution Ensemble
(MRE) model, improving predictive accuracy by ensembling predictions based on
inputs sequences transformed by different coarsening operators. Our experiments
show that MRE improves the mAP on the benchmark mortality prediction task from
51.53% to 53.92%. | [
"cs.LG",
"q-bio.QM",
"stat.ML"
] |
Graph Convolutional Networks (GCNs) have already demonstrated their powerful
ability to model irregular data, e.g., skeletal data in human action
recognition, providing an exciting new way to fuse rich structural information
for nodes residing in different parts of a graph. In human action recognition,
current works introduce a dynamic graph generation mechanism to better capture
the underlying semantic skeleton connections and thus improve the performance.
In this paper, we provide an orthogonal way to explore the underlying
connections. Instead of introducing an expensive dynamic graph generation
paradigm, we build a more efficient GCN on a Riemannian manifold, which we think
is a more suitable space to model the graph data, to make the extracted
representations fit the embedding matrix. Specifically, we present a novel
spatial-temporal GCN (ST-GCN) architecture which is defined via the Poincar\'e
geometry such that it is able to better model the latent anatomy of the
structured data. To further explore the optimal projection dimension in the
Riemannian space, we mix different dimensions on the manifold and provide an
efficient way to explore the dimension for each ST-GCN layer. With the
resulting architecture, we evaluate our method on the two current largest-scale 3D
datasets, i.e., NTU RGB+D and NTU RGB+D 120. The comparison results show that
the model achieves superior performance under all given evaluation
metrics with only 40\% of the model size when compared with the previous best GCN
method, which proves the effectiveness of our model. | [
"cs.CV"
] |
Scene Designer is a novel method for searching and generating images using
free-hand sketches of scene compositions; i.e. drawings that describe both the
appearance and relative positions of objects. Our core contribution is a single
unified model to learn both a cross-modal search embedding for matching
sketched compositions to images, and an object embedding for layout synthesis.
We show that a graph neural network (GNN) followed by Transformer under our
novel contrastive learning setting is required to allow learning correlations
between object type, appearance and arrangement, driving a mask generation
module that synthesises coherent scene layouts, whilst also delivering
state-of-the-art sketch-based visual search of scenes.
"cs.CV"
] |
Object detection is the identification of an object in the image along with
its localisation and classification. It has widespread applications and is a
critical component for vision based software systems. This paper seeks to
perform a rigorous survey of modern object detection algorithms that use deep
learning. As part of the survey, the topics explored include various
algorithms, quality metrics, speed/size trade-offs and training methodologies.
This paper focuses on the two types of object detection algorithms- the SSD
class of single step detectors and the Faster R-CNN class of two step
detectors. Techniques to construct detectors that are portable and fast on low
powered devices are also addressed by exploring new lightweight convolutional
base architectures. Ultimately, a rigorous review of the strengths and
weaknesses of each detector leads us to the present state of the art. | [
"cs.CV"
] |
Graph Neural Networks (GNNs), which generalize deep neural networks to
graph-structured data, have drawn considerable attention and achieved
state-of-the-art performance in numerous graph related tasks. However, existing
GNN models mainly focus on designing graph convolution operations. The graph
pooling (or downsampling) operations, that play an important role in learning
hierarchical representations, are usually overlooked. In this paper, we propose
a novel graph pooling operator, called Hierarchical Graph Pooling with
Structure Learning (HGP-SL), which can be integrated into various graph neural
network architectures. HGP-SL incorporates graph pooling and structure learning
into a unified module to generate hierarchical representations of graphs. More
specifically, the graph pooling operation adaptively selects a subset of nodes
to form an induced subgraph for the subsequent layers. To preserve the
integrity of the graph's topological information, we further introduce a structure
learning mechanism to learn a refined graph structure for the pooled graph at
each layer. By combining the HGP-SL operator with graph neural networks, we perform
graph-level representation learning with a focus on the graph classification task.
Experimental results on six widely used benchmarks demonstrate the
effectiveness of our proposed model. | [
"cs.LG",
"stat.ML"
] |
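The adaptive node selection step above is in the spirit of learned top-k pooling; a minimal dense-adjacency sketch of that step follows (HGP-SL additionally refines the induced structure with its structure learning mechanism, which is omitted here, and the sigmoid gate is one common choice for keeping the selection differentiable).

```python
import torch
import torch.nn as nn

class TopKPool(nn.Module):
    """Score nodes by a learned projection, keep the top-k, gate their
    features by the score, and restrict the adjacency to the kept nodes."""
    def __init__(self, d, ratio=0.5):
        super().__init__()
        self.p = nn.Parameter(torch.randn(d))
        self.ratio = ratio
    def forward(self, X, A):                    # X: [N, d], A: [N, N]
        score = (X @ self.p) / self.p.norm()    # projection score
        k = max(1, int(self.ratio * X.size(0)))
        idx = torch.topk(score, k).indices
        X_pool = X[idx] * torch.sigmoid(score[idx]).unsqueeze(-1)  # gate
        A_pool = A[idx][:, idx]                 # induced subgraph
        return X_pool, A_pool, idx

pool = TopKPool(d=16, ratio=0.5)
X, A = torch.randn(10, 16), (torch.rand(10, 10) > 0.7).float()
Xp, Ap, idx = pool(X, A)
print(Xp.shape, Ap.shape)  # torch.Size([5, 16]) torch.Size([5, 5])
```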
This paper presents a new approach for assembling graph neural networks based
on framelet transforms. The latter provides a multi-scale representation for
graph-structured data. We decompose an input graph into low-pass and high-pass
frequency coefficients for network training, which then defines a
framelet-based graph convolution. The framelet decomposition naturally induces
a graph pooling strategy by aggregating the graph feature into low-pass and
high-pass spectra, which considers both the feature values and geometry of the
graph data and conserves the total information. The graph neural networks with
the proposed framelet convolution and pooling achieve state-of-the-art
performance in many node and graph prediction tasks. Moreover, we propose
shrinkage as a new activation for the framelet convolution, which thresholds
high-frequency information at different scales. Compared to ReLU, shrinkage
activation improves model performance on denoising and signal compression:
noises in both node and structure can be significantly reduced by accurately
cutting off the high-pass coefficients from framelet decomposition, and the
signal can be compressed to less than half its original size with
well-preserved prediction performance. | [
"cs.LG",
"cs.AI",
"cs.NA",
"math.NA",
"68T07, 05C85, 42C40",
"I.2.4; I.2.6"
] |
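The shrinkage activation described above amounts to soft-thresholding applied to the (high-pass) framelet coefficients; a tiny sketch with an illustrative fixed threshold (in the network this threshold can differ per scale):

```python
import torch

def shrinkage(x, threshold=0.1):
    # Soft-thresholding: coefficients with magnitude below the threshold
    # (typically noise) are zeroed; larger ones are shrunk toward zero.
    return torch.sign(x) * torch.relu(x.abs() - threshold)

coeffs = torch.tensor([-0.5, -0.05, 0.0, 0.08, 0.3])
print(shrinkage(coeffs))  # approximately [-0.4, 0.0, 0.0, 0.0, 0.2]
```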
In this paper, we present a recurrent neural system named Long Short-term
Cognitive Networks (LSTCNs) as a generalization of the Short-term Cognitive
Network (STCN) model. Such a generalization is motivated by the difficulty of
forecasting very long time series efficiently. The LSTCN model can be defined
as a collection of STCN blocks, each processing a specific time patch of the
(multivariate) time series being modeled. In this neural ensemble, each block
passes information to the subsequent one in the form of weight matrices
representing the prior knowledge. As a second contribution, we propose a
deterministic learning algorithm to compute the learnable weights while
preserving the prior knowledge resulting from previous learning processes. As a
third contribution, we introduce a feature influence score as a proxy to
explain the forecasting process in multivariate time series. The simulations
using three case studies show that our neural system reports small forecasting
errors while being significantly faster than state-of-the-art recurrent models. | [
"cs.LG",
"cs.AI"
] |
This paper presents a holistic approach to saliency-guided visual attention
modeling (SVAM) for use by autonomous underwater robots. Our proposed model,
named SVAM-Net, integrates deep visual features at various scales and semantics
for effective salient object detection (SOD) in natural underwater images. The
SVAM-Net architecture is configured in a unique way to jointly accommodate
bottom-up and top-down learning within two separate branches of the network
while sharing the same encoding layers. We design dedicated spatial attention
modules (SAMs) along these learning pathways to exploit the coarse-level and
fine-level semantic features for SOD at four stages of abstraction. The
bottom-up branch performs a rough yet reasonably accurate saliency estimation
at a fast rate, whereas the deeper top-down branch incorporates a residual
refinement module (RRM) that provides fine-grained localization of the salient
objects. Extensive performance evaluation of SVAM-Net on benchmark datasets
clearly demonstrates its effectiveness for underwater SOD. We also validate
its generalization performance on data from several ocean trials, which
include test images of diverse underwater scenes and waterbodies as well as
images with unseen natural objects. Moreover, we analyze its computational
feasibility for
robotic deployments and demonstrate its utility in several important use cases
of visual attention modeling. | [
"cs.CV",
"cs.LG",
"cs.RO"
] |
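A minimal sketch of a spatial attention module gating a feature map, in the spirit of the SAMs above; the single 7x7 convolutional gate and all sizes are assumptions, not SVAM-Net's actual modules:

```python
# Spatial attention: a learned per-location mask re-weights the feature map.
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Conv2d(channels, 1, kernel_size=7, padding=3)

    def forward(self, x):
        attn = torch.sigmoid(self.gate(x))   # (B, 1, H, W) saliency-like mask
        return x * attn                      # emphasise salient spatial locations

feats = torch.randn(2, 64, 32, 32)           # stand-in coarse encoder features
out = SpatialAttention(64)(feats)
```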
Road curb detection is important for autonomous driving. It can be used to
determine road boundaries to constrain vehicles on roads, so that potential
accidents could be avoided. Most of the current methods detect road curbs
online using vehicle-mounted sensors, such as cameras or 3-D Lidars. However,
these methods usually suffer from severe occlusion issues, especially in
highly dynamic traffic environments where most of the field of view is
occupied by dynamic objects. To alleviate this issue, we detect road curbs
offline using
high-resolution aerial images in this paper. Moreover, the detected road curbs
can be used to create high-definition (HD) maps for autonomous vehicles.
Specifically, we first predict the pixel-wise segmentation map of road curbs,
and then conduct a series of post-processing steps to extract the graph
structure of road curbs. To tackle the disconnectivity issue in the
segmentation maps, we propose an innovative connectivity-preserving loss
(CP-loss) to improve the segmentation performance. The experimental results on
a public dataset demonstrate the effectiveness of our proposed loss function.
This paper is accompanied by a demonstration video and a supplementary
document, which are available at
\texttt{\url{https://sites.google.com/view/cp-loss}}. | [
"cs.CV",
"cs.RO"
] |
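A hypothetical sketch of a connectivity-aware segmentation loss in the spirit of the abstract above: a pixel-wise term plus a term comparing dilated maps, so that gaps breaking thin curb predictions are penalised. This illustrates the intent only; it is not the paper's CP-loss, whose exact form is given in the paper.

```python
# Illustrative connectivity-aware loss for thin-structure segmentation.
import torch
import torch.nn.functional as F

def connectivity_aware_loss(pred, target, alpha=0.5):
    bce = F.binary_cross_entropy(pred, target)           # per-pixel fit
    # Dilating both maps penalises gaps that break thin curb predictions:
    pred_d = F.max_pool2d(pred, 3, stride=1, padding=1)
    targ_d = F.max_pool2d(target, 3, stride=1, padding=1)
    return bce + alpha * F.l1_loss(pred_d, targ_d)

pred = torch.sigmoid(torch.randn(1, 1, 64, 64))          # probability map
target = (torch.rand(1, 1, 64, 64) > 0.9).float()        # sparse curb mask
loss = connectivity_aware_loss(pred, target)
```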
Convolutional architectures have proven extremely successful for vision
tasks. Their hard inductive biases enable sample-efficient learning, but come
at the cost of a potentially lower performance ceiling. Vision Transformers
(ViTs) rely on more flexible self-attention layers, and have recently
outperformed CNNs for image classification. However, they require costly
pre-training on large external datasets or distillation from pre-trained
convolutional networks. In this paper, we ask the following question: is it
possible to combine the strengths of these two architectures while avoiding
their respective limitations? To this end, we introduce gated positional
self-attention (GPSA), a form of positional self-attention which can be
equipped with a ``soft'' convolutional inductive bias. We initialise the GPSA
layers to mimic the locality of convolutional layers, then give each attention
head the freedom to escape locality by adjusting a gating parameter regulating
the attention paid to position versus content information. The resulting
convolutional-like ViT architecture, ConViT, outperforms the DeiT on ImageNet,
while offering a much improved sample efficiency. We further investigate the
role of locality in learning by first quantifying how it is encouraged in
vanilla self-attention layers, then analysing how it is escaped in GPSA layers.
We conclude by presenting various ablations to better understand the success of
the ConViT. Our code and models are released publicly at
https://github.com/facebookresearch/convit. | [
"cs.CV",
"cs.LG",
"stat.ML"
] |
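A minimal sketch of gating positional against content attention as described above; the shapes, the distance-based positional scores, and the scalar gate are simplifications of the actual GPSA layer:

```python
# Gated blend of content attention and a locality-biased positional attention.
import torch
import torch.nn.functional as F

def gated_attention(q, k, v, pos_scores, gate_param):
    """Blend content attention with a (learned) positional attention map."""
    content = F.softmax(q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5, dim=-1)
    positional = F.softmax(pos_scores, dim=-1)
    g = torch.sigmoid(gate_param)             # 0 -> pure content, 1 -> pure position
    return ((1 - g) * content + g * positional) @ v

n, d = 16, 32
q = k = v = torch.randn(1, n, d)
pos = -torch.cdist(torch.randn(n, 2), torch.randn(n, 2)).unsqueeze(0)  # locality prior
out = gated_attention(q, k, v, pos, gate_param=torch.tensor(2.0))      # starts local
```

Initialising the gate near 1 mimics a convolution-like local layer; each head can later "escape" locality by moving its gate toward 0.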
Inspired by the classic Sauvola local image thresholding approach, we
systematically study it from the deep neural network (DNN) perspective and
propose a new solution called SauvolaNet for degraded document binarization
(DDB). It is composed of three explainable modules, namely, Multi-Window
Sauvola (MWS), Pixelwise Window Attention (PWA), and Adaptive Sauvola Threshold
(AST). The MWS module honestly reflects the classic Sauvola but with trainable
parameters and multi-window settings. The PWA module estimates the preferred
window sizes for each pixel location. The AST module further consolidates the
outputs from MWS and PWA and predicts the final adaptive threshold for each
pixel location. As a result, SauvolaNet becomes end-to-end trainable and
significantly reduces the number of required network parameters to 40K -- it is
only 1\% of MobileNetV2. At the same time, it achieves state-of-the-art
(SoTA) performance on the DDB task -- SauvolaNet is at least comparable to, if
not better than, SoTA binarization solutions in our extensive studies on the 13
public document binarization datasets. Our source code is available at
https://github.com/Leedeng/SauvolaNet. | [
"cs.CV"
] |
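For reference, the classic Sauvola rule that SauvolaNet builds on computes a per-pixel threshold T = m(1 + k(s/R - 1)) from the local mean m and local standard deviation s. The window size, k, and R below are the usual hand-tuned values that the network replaces with learned, pixel-wise choices:

```python
# Classic Sauvola local thresholding for document binarization.
import numpy as np
from scipy.ndimage import uniform_filter

def sauvola_binarize(img, window=25, k=0.2, R=128.0):
    m = uniform_filter(img, window)                    # local mean
    s = np.sqrt(np.maximum(uniform_filter(img**2, window) - m**2, 0.0))
    T = m * (1.0 + k * (s / R - 1.0))                  # adaptive threshold
    return (img > T).astype(np.uint8)                  # 1 = pixels above threshold

img = np.random.rand(64, 64) * 255                     # stand-in grayscale page
binary = sauvola_binarize(img)
```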
The fault detection problem for closed-loop uncertain dynamical systems is
investigated in this paper using different deep-learning-based methods.
Traditional classifier-based methods do not perform well because of the
inherent difficulty of detecting system-level faults in closed-loop dynamical
systems. Specifically, the acting controller in any closed-loop dynamical
system works to reduce the effect of system-level faults. A novel
Generative-Adversarial-based deep autoencoder is designed to classify
datasets under normal and faulty operating conditions. The proposed network
performs significantly better than available classifier-based methods and,
moreover, does not require fault-labeled datasets for training. Finally, the
network's performance is tested on a high-complexity building energy system
dataset. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
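A minimal sketch of the unsupervised core of the approach above: fit an autoencoder on normal operating data and score faults by reconstruction error. The adversarial (GAN-style) training component of the paper is omitted here, and all sizes are placeholders:

```python
# Autoencoder-based anomaly scoring for fault detection (no fault labels needed).
import torch
import torch.nn as nn

ae = nn.Sequential(nn.Linear(20, 8), nn.ReLU(), nn.Linear(8, 20))
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)

normal = torch.randn(256, 20)                  # stand-in normal-operation data
for _ in range(200):
    loss = nn.functional.mse_loss(ae(normal), normal)
    opt.zero_grad(); loss.backward(); opt.step()

test = torch.randn(5, 20) + 3.0                # shifted, fault-like samples
scores = ((ae(test) - test) ** 2).mean(dim=1)  # high score -> suspected fault
```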
Graphs are widely used as a popular representation of the network structure
of connected data. Graph data can be found in a broad spectrum of application
domains such as social systems, ecosystems, biological networks, knowledge
graphs, and information systems. With the continuous penetration of artificial
intelligence technologies, graph learning (i.e., machine learning on graphs) is
gaining attention from both researchers and practitioners. Graph learning
proves effective for many tasks, such as classification, link prediction, and
matching. Generally, graph learning methods extract relevant features of graphs
by taking advantage of machine learning algorithms. In this survey, we present
a comprehensive overview of the state of the art in graph learning. Special
attention is paid to four categories of existing graph learning methods,
including graph signal processing, matrix factorization, random walk, and deep
learning. Major models and algorithms under these categories are reviewed
respectively. We examine graph learning applications in areas such as text,
images, science, knowledge graphs, and combinatorial optimization. In addition,
we discuss several promising research directions in this field. | [
"cs.LG",
"cs.AI",
"cs.SI",
"68T07",
"I.2.6"
] |
We introduce a novel framework for adversarial training where the target
distribution is annealed between the uniform distribution and the data
distribution. We posit a conjecture that learning under continuous annealing
in the nonparametric regime is stable irrespective of the divergence measure
in the objective function, and as a corollary we propose an algorithm, dubbed
$\beta$-GAN. In this framework, the fact that the initial support of the
generative network is the whole ambient space, combined with annealing, is
key to balancing the minimax game. In our experiments on synthetic data,
MNIST, and CelebA, $\beta$-GAN with a fixed annealing schedule was stable and
did not suffer from mode collapse. | [
"stat.ML",
"cs.LG"
] |
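A minimal sketch of the annealing idea above: the "real" batch shown to the discriminator is a mixture of uniform noise and data, with the noise weight annealed to zero on a fixed schedule. The mixing rule and schedule are illustrative assumptions:

```python
# Annealing the target distribution from uniform toward the data distribution.
import numpy as np

def annealed_target(data_batch, beta, rng):
    """With probability beta, replace a sample by a uniform one."""
    uniform = rng.uniform(-1.0, 1.0, size=data_batch.shape)
    mask = rng.random(len(data_batch)) < beta
    return np.where(mask[:, None], uniform, data_batch)

rng = np.random.default_rng(0)
data = rng.standard_normal((128, 2)) * 0.1 + 0.5
for step in range(1000):
    beta = max(0.0, 1.0 - step / 800)          # fixed linear annealing schedule
    real = annealed_target(data, beta, rng)    # feed `real` to the discriminator
```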
Despite the powerful feature extraction capability of Convolutional Neural
Networks, there are still some challenges in saliency detection. In this paper,
we focus on two aspects of challenges: i) Since salient objects appear in
various sizes, using single-scale convolution would not capture the right size.
Moreover, using multi-scale convolutions without considering their importance
may confuse the model. ii) Employing multi-level features helps the model use
both local and global context. However, treating all features equally results
in information redundancy. Therefore, there needs to be a mechanism to
intelligently select which features at different levels are useful. To address
the first challenge, we propose a Multi-scale Attention Guided Module. This
module not only extracts multi-scale features effectively but also gives more
attention to more discriminative feature maps corresponding to the scale of the
salient object. To address the second challenge, we propose an Attention-based
Multi-level Integrator Module to give the model the ability to assign different
weights to multi-level feature maps. Furthermore, our Sharpening Loss function
guides our network to output saliency maps with higher certainty and less
blurry salient objects, and it has far better performance than the
Cross-entropy loss. For the first time, we adopt four different backbones to
show the generalization of our method. Experiments on five challenging datasets
prove that our method achieves state-of-the-art performance. Our approach is
also fast and runs at real-time speed. | [
"cs.CV"
] |
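A hypothetical sketch of a loss rewarding confident saliency maps, in the spirit of the Sharpening Loss above: a pixel-wise term plus a penalty on probabilities near 0.5. This illustrates the intent, not the paper's exact formulation:

```python
# Illustrative sharpening penalty: discourage uncertain (near-0.5) predictions.
import torch
import torch.nn.functional as F

def sharpening_loss(pred, target, gamma=0.3):
    bce = F.binary_cross_entropy(pred, target)
    uncertainty = (pred * (1.0 - pred)).mean()     # maximal at pred = 0.5
    return bce + gamma * uncertainty

pred = torch.sigmoid(torch.randn(1, 1, 32, 32))
target = (torch.rand(1, 1, 32, 32) > 0.5).float()
loss = sharpening_loss(pred, target)
```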
The problem of low-rank matrix estimation recently received a lot of
attention due to challenging applications. A lot of work has been done on
rank-penalized methods and convex relaxation, both on the theoretical and
applied sides. However, only a few papers considered Bayesian estimation. In
this paper, we review the different types of priors placed on matrices to
favour low rank. We also prove that the resulting Bayesian estimators, under
suitable assumptions, enjoy the same optimality properties as the ones based
on penalization. | [
"stat.ML"
] |
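One standard example of a low-rank-favouring prior of the kind such reviews cover (shown here as an illustration, not a summary of the paper's full taxonomy): Gaussian priors on the factors of $M = UV^{\top}$, whose MAP estimate induces a nuclear-norm penalty via the well-known variational identity.

```latex
% Factorised Gaussian prior inducing a nuclear-norm-like penalty at the MAP.
\[
  M = U V^{\top}, \qquad
  U \in \mathbb{R}^{m \times k},\; V \in \mathbb{R}^{n \times k},
\]
\[
  \pi(U, V) \propto
  \exp\!\Big( -\tfrac{\lambda}{2}\,\big(\|U\|_F^2 + \|V\|_F^2\big) \Big),
  \qquad
  \min_{M = UV^{\top}} \tfrac{1}{2}\big(\|U\|_F^2 + \|V\|_F^2\big)
  = \|M\|_{*}.
\]
```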
Current deep learning models for classification tasks in computer vision are
trained using mini-batches. In the present article, we take advantage of the
relationships between samples in a mini-batch, using graph neural networks to
aggregate information from similar images. This helps mitigate the adverse
effects of alterations to the input images on classification performance.
Diverse experiments on image-based object and scene classification show that
this approach not only improves a classifier's performance but also increases
its robustness to image perturbations and adversarial attacks. Further, we also
show that mini-batch graph neural networks can help to alleviate the problem of
mode collapse in Generative Adversarial Networks. | [
"cs.CV",
"cs.AI"
] |
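A minimal sketch of aggregating information across a mini-batch as described above: build a cosine-similarity graph over the batch embeddings and mix each sample with its neighbours. The single propagation step and the choice of k are illustrative assumptions:

```python
# Mini-batch graph aggregation: average each sample's nearest neighbours in.
import torch
import torch.nn.functional as F

def batch_graph_aggregate(emb, k=3, alpha=0.5):
    sim = F.normalize(emb, dim=1) @ F.normalize(emb, dim=1).T   # cosine similarity
    topk = sim.topk(k + 1, dim=1).indices[:, 1:]                # skip self
    neigh = emb[topk].mean(dim=1)                               # neighbour average
    return (1 - alpha) * emb + alpha * neigh                    # mix self and graph

emb = torch.randn(32, 128)             # backbone embeddings for one mini-batch
refined = batch_graph_aggregate(emb)   # feed to the classifier head
```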