text (string, lengths 29–3.31k) | label (list, lengths 1–11)
---|---
The significant increase in world population and urbanisation has brought
several important challenges, in particular regarding the sustainability,
maintenance and planning of urban mobility. At the same time, the exponential
increase of computing capability and of available sensor and location data have
offered the potential for innovative solutions to these challenges. In this
work, we focus on the challenge of traffic forecasting and review the recent
development and application of graph neural networks (GNNs) to this problem.
GNNs are a class of deep learning methods that directly process the input as
graph data. This more directly leverages the spatial dependencies of traffic
data and exploits the advantages of deep learning, producing state-of-the-art
results. We introduce and review the emerging topic of GNNs, including their
most common variants, with a focus on their application to traffic
forecasting. We address the different ways of modelling traffic
forecasting as a (temporal) graph, the different approaches developed so far to
combine the graph and temporal learning components, as well as current
limitations and research opportunities. | [
"cs.LG",
"cs.AI"
]
|
Pedestrian trajectory prediction is an active research area, with recent works
embedding accurate models of pedestrians' social interactions and their
contextual compliance into dynamic spatial graphs. However, existing works rely
on spatial assumptions about the scene and dynamics, which makes it
significantly challenging to adapt the graph structure in unknown environments
for an online system. In addition, there is a lack of approaches for assessing
the impact of relational modeling on prediction performance. To fill this gap,
we propose the Social Trajectory Recommender-Gated Graph Recurrent Neighborhood
Network (STR-GGRNN), which uses data-driven adaptive online neighborhood
recommendation based on the contextual scene features and pedestrian visual
cues. The neighborhood recommendation is achieved by online Nonnegative Matrix
Factorization (NMF) to construct the graph adjacency matrices for predicting
the pedestrians' trajectories. Experiments based on widely-used datasets show
that our method outperforms the state-of-the-art. Our best performing model
achieves 12 cm ADE and $\sim$15 cm FDE on the ETH-UCY dataset. The proposed method
takes only 0.49 seconds when sampling a total of 20K future trajectories per
frame. | [
"cs.CV",
"cs.LG"
]
|
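The abstract above builds graph adjacency matrices from Nonnegative Matrix Factorization. As a rough illustration of that idea (not the paper's online algorithm), the sketch below factorizes a nonnegative pedestrian-feature matrix with scikit-learn's NMF and turns shared-component loadings into a binary adjacency matrix; the feature matrix, component count, and threshold are all placeholder assumptions.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
# rows: pedestrians; columns: nonnegative scene/visual-cue features (assumed)
feats = np.abs(rng.normal(size=(12, 32)))

nmf = NMF(n_components=4, init="nndsvda", max_iter=500)
W = nmf.fit_transform(feats)                      # pedestrian -> component loads
W = W / np.maximum(W.sum(axis=1, keepdims=True), 1e-12)

adj = W @ W.T                                     # shared-component affinity
adj = (adj > 0.2).astype(float)                   # arbitrary threshold -> graph
np.fill_diagonal(adj, 0)
print(adj.sum(axis=1))                            # neighborhood sizes
```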
Gradient quantization is an emerging technique in reducing communication
costs in distributed learning. Existing gradient quantization algorithms often
rely on engineering heuristics or empirical observations, lacking a systematic
approach to dynamically quantize gradients. This paper addresses this issue by
proposing a novel dynamically quantized SGD (DQ-SGD) framework, enabling us to
dynamically adjust the quantization scheme for each gradient descent step by
exploring the trade-off between communication cost and convergence error. We
derive an upper bound on the convergence error, tight in some cases, for a
restricted family of quantization schemes and loss functions. We design our
DQ-SGD algorithm by minimizing the communication cost under the convergence
error constraints. Finally, through extensive experiments on large-scale
natural language processing and computer vision tasks on AG-News, CIFAR-10, and
CIFAR-100 datasets, we demonstrate that our quantization scheme achieves better
tradeoffs between the communication cost and learning performance than other
state-of-the-art gradient quantization methods. | [
"cs.LG",
"cs.AI"
]
|
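To make the idea of dynamically quantized gradients concrete, here is a minimal sketch: an unbiased stochastic uniform quantizer plus a toy per-step bit-selection rule. The actual DQ-SGD chooses its scheme by minimizing communication cost under a convergence-error constraint; the rule below (keep the rounding variance under a fixed target) is only a simplified stand-in.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(g, bits):
    """Unbiased stochastic uniform quantization to 2**bits levels."""
    lo, hi = g.min(), g.max()
    levels = 2 ** bits - 1
    scaled = (g - lo) / max(hi - lo, 1e-12) * levels
    q = np.floor(scaled + rng.random(g.shape))      # stochastic rounding
    return lo + q / levels * (hi - lo)

def pick_bits(g, b_min=2, b_max=8, target_err=1e-4):
    """Toy dynamic rule: use just enough bits so the per-coordinate
    rounding variance (at most step**2 / 4) stays below a fixed target."""
    for b in range(b_min, b_max + 1):
        step = (g.max() - g.min()) / (2 ** b - 1)
        if step ** 2 / 4 <= target_err:
            return b
    return b_max

g = rng.standard_normal(10_000) * 0.1               # a fake gradient vector
b = pick_bits(g)
err = np.mean((quantize(g, b) - g) ** 2)
print(f"bits={b}  mean squared quantization error={err:.2e}")
```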
This paper describes a novel approach to change-point detection when the
observed high-dimensional data may have missing elements. The performance of
classical methods for change-point detection typically scales poorly with the
dimensionality of the data, so that a large number of observations must be
collected after the true change-point before it can be reliably detected.
Furthermore, missing components in the observed data handicap conventional
approaches. The proposed method addresses these challenges by modeling the
dynamic distribution underlying the data as lying close to a time-varying
low-dimensional submanifold embedded within the ambient observation space.
Specifically, streaming data is used to track a submanifold approximation,
measure deviations from this approximation, and calculate a series of
statistics of the deviations for detecting when the underlying manifold has
changed in a sharp or unexpected manner. The approach described in this paper
leverages several recent results in the field of high-dimensional data
analysis, including subspace tracking with missing data, multiscale analysis
techniques for point clouds, online optimization, and change-point detection
performance analysis. Simulations and experiments highlight the robustness and
efficacy of the proposed approach in detecting an abrupt change in an otherwise
slowly varying low-dimensional manifold. | [
"stat.ML",
"cs.LG"
]
|
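A toy version of the subspace-tracking idea in the abstract above (ignoring the missing-data machinery): track a low-dimensional subspace with scikit-learn's IncrementalPCA, monitor reconstruction residuals of each incoming batch before updating, and raise an alarm when they jump. The dimensions, noise levels, and the 5x threshold are arbitrary choices for illustration.

```python
import numpy as np
from sklearn.decomposition import IncrementalPCA

rng = np.random.default_rng(0)
d_amb, d_sub, t_change = 50, 3, 500

def stream(n, basis):
    """Samples near a d_sub-dimensional subspace of R^d_amb, plus noise."""
    z = rng.normal(size=(n, d_sub))
    return z @ basis + 0.05 * rng.normal(size=(n, d_amb))

b1 = np.linalg.qr(rng.normal(size=(d_amb, d_sub)))[0].T
b2 = np.linalg.qr(rng.normal(size=(d_amb, d_sub)))[0].T
X = np.vstack([stream(t_change, b1), stream(t_change, b2)])  # change at t=500

ipca = IncrementalPCA(n_components=d_sub)
resid, batch = [], 50
for i in range(0, len(X), batch):
    xb = X[i:i + batch]
    if i > 0:  # residual of new data w.r.t. the currently tracked subspace
        r = xb - ipca.inverse_transform(ipca.transform(xb))
        resid.extend(np.linalg.norm(r, axis=1))
    ipca.partial_fit(xb)

resid = np.array(resid)
base = resid[:200].mean()                        # running "normal" level
print("first alarm near sample", batch + int(np.argmax(resid > 5 * base)))
```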
In the Information Bottleneck (IB), when tuning the relative strength between
compression and prediction terms, how do the two terms behave, and what is
their relationship with the dataset and the learned representation? In this
paper, we set out to answer these questions by studying multiple phase
transitions in the IB objective $\text{IB}_\beta[p(z|x)] = I(X; Z) - \beta
I(Y; Z)$ defined on the encoding distribution $p(z|x)$ for input $X$, target
$Y$, and representation
$Z$, where sudden jumps of $dI(Y; Z)/d \beta$ and prediction accuracy are
observed with increasing $\beta$. We introduce a definition for IB phase
transitions as a qualitative change of the IB loss landscape, and show that the
transitions correspond to the onset of learning new classes. Using second-order
calculus of variations, we derive a formula that provides a practical condition
for IB phase transitions, and draw its connection with the Fisher information
matrix for parameterized models. We provide two perspectives to understand the
formula, revealing that each IB phase transition is finding a component of
maximum (nonlinear) correlation between $X$ and $Y$ orthogonal to the learned
representation, in close analogy with canonical-correlation analysis (CCA) in
linear settings. Based on the theory, we present an algorithm for discovering
phase transition points. Finally, we verify that our theory and algorithm
accurately predict phase transitions in categorical datasets, predict the onset
of learning new classes and class difficulty in MNIST, and predict prominent
phase transitions in CIFAR10. | [
"cs.LG",
"cs.IT",
"math.IT",
"stat.ML"
]
|
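For readers who want to see the $\text{IB}_\beta$ trade-off numerically, below is a self-contained toy: the classic self-consistent IB iteration on a random discrete $p(x, y)$, swept over $\beta$ with warm starts (deterministic annealing). Jumps in the reported $I(Y;Z)$ as $\beta$ grows are a crude discrete analogue of the phase transitions studied above; all sizes and the annealing schedule are ad hoc, and this is not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def mi(pj, eps=1e-12):
    """Mutual information (nats) of a joint-distribution table."""
    pa, pb = pj.sum(1, keepdims=True), pj.sum(0, keepdims=True)
    m = pj > eps
    return float((pj[m] * np.log(pj[m] / (pa * pb)[m])).sum())

nx, ny, nz = 8, 4, 8
px = np.full(nx, 1 / nx)                            # p(x)
py_x = rng.dirichlet(0.5 * np.ones(ny), size=nx)    # p(y|x)

pz_x = rng.dirichlet(np.ones(nz), size=nx)          # encoder p(z|x)
for beta in np.linspace(0.5, 8.0, 16):
    pz_x = 0.99 * pz_x + 0.01 / nz                  # perturb between betas
    for _ in range(300):                            # self-consistent updates
        pz = px @ pz_x
        pxz = px[:, None] * pz_x
        py_z = (py_x.T @ pxz) / np.maximum(pz, 1e-12)
        kl = (py_x * np.log(py_x + 1e-12)).sum(1, keepdims=True) \
             - py_x @ np.log(py_z + 1e-12)          # D_KL[p(y|x) || p(y|z)]
        kl -= kl.min(axis=1, keepdims=True)         # stabilize the exponent
        pz_x = pz[None, :] * np.exp(-beta * kl)
        pz_x /= pz_x.sum(axis=1, keepdims=True)
    pxz = px[:, None] * pz_x
    print(f"beta={beta:4.1f}  I(X;Z)={mi(pxz):.3f}  I(Y;Z)={mi(py_x.T @ pxz):.3f}")
```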
Self-supervised depth estimation has shown its great effectiveness in
producing high quality depth maps given only image sequences as input. However,
its performance usually drops when estimating on border areas or objects with
thin structures due to the limited depth representation ability. In this paper,
we address this problem by proposing a semantic-guided depth representation
enhancement method, which promotes both local and global depth feature
representations by leveraging rich contextual information. Instead of a single
depth network as used in conventional paradigms, we propose an extra semantic
segmentation branch to offer additional contextual features for depth
estimation. Based on this framework, we enhance the local feature
representation by sampling the point-based features located on semantic edges
and feeding them
to an individual Semantic-guided Edge Enhancement module (SEEM), which is
specifically designed for promoting depth estimation on the challenging
semantic borders. Then, we improve the global feature representation by
proposing a semantic-guided multi-level attention mechanism, which enhances the
semantic and depth features by exploring pixel-wise correlations in the
multi-level depth decoding scheme. Extensive experiments validate the distinct
superiority of our method in capturing highly accurate depth on the challenging
image areas such as semantic category borders and thin objects. Both
quantitative and qualitative experiments on KITTI show that our method
outperforms the state-of-the-art methods. | [
"cs.CV"
]
|
Unmanned Aerial Vehicles (UAVs) are of great importance to the military for
border security. The main objective of this article is to develop OpenCV-Python
code using the Haar Cascade algorithm for object and face detection. Currently,
UAVs are used for detecting and attacking infiltrated ground targets. The main
drawback of this type of UAV is that objects are sometimes not properly
detected, which can cause them to collide with the UAV. This project aims to
avoid such unwanted collisions and damage to the UAV. UAVs are also used for
surveillance, employing the Viola-Jones algorithm to detect and track humans.
This algorithm uses a cascade object detector function and a vision training
function to train the detector. The main advantage of this code is its reduced
processing time. The Python code was tested on an available database of videos
and images, and the output was verified. | [
"cs.CV"
]
|
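Since the abstract above centers on OpenCV-Python and the Haar cascade detector, here is a minimal, standard face-detection loop using the cascade file that ships with opencv-python; the camera index and drawing parameters are placeholders.

```python
import cv2

# Bundled frontal-face Haar cascade shipped with opencv-python.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)                     # or a video file path
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("detections", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```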
While deep learning methods are increasingly being applied to tasks such as
computer-aided diagnosis, these models are difficult to interpret, do not
incorporate prior domain knowledge, and are often considered as a "black-box."
The lack of model interpretability hinders them from being fully understood by
target users such as radiologists. In this paper, we present a novel
interpretable deep hierarchical semantic convolutional neural network (HSCNN)
to predict whether a given pulmonary nodule observed on a computed tomography
(CT) scan is malignant. Our network provides two levels of output: 1) low-level
radiologist semantic features, and 2) a high-level malignancy prediction score.
The low-level semantic outputs quantify the diagnostic features used by
radiologists and serve to explain how the model interprets the images in an
expert-driven manner. The information from these low-level tasks, along with
the representations learned by the convolutional layers, is then combined and
used to infer the high-level task of predicting nodule malignancy. This unified
architecture is trained by optimizing a global loss function including both
low- and high-level tasks, thereby learning all the parameters within a joint
framework. Our experimental results using the Lung Image Database Consortium
(LIDC) show that the proposed method not only produces interpretable lung
cancer predictions but also achieves significantly better results compared to
common 3D CNN approaches. | [
"cs.CV",
"cs.AI"
]
|
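A rough PyTorch sketch of the two-level output structure described above: a head that emits low-level semantic (radiologist-feature) logits and a high-level malignancy logit, trained with a single global loss. The backbone, feature dimension, the number of semantic features, and the loss weighting are assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class HSCNNHead(nn.Module):
    """Two-level head: low-level semantic feature logits plus a
    malignancy logit that also consumes the semantic predictions."""
    def __init__(self, feat_dim, n_semantic=5):
        super().__init__()
        self.semantic = nn.Linear(feat_dim, n_semantic)   # radiologist features
        self.malignancy = nn.Linear(feat_dim + n_semantic, 1)

    def forward(self, feats):
        sem_logits = self.semantic(feats)
        mal_logit = self.malignancy(
            torch.cat([feats, sem_logits.sigmoid()], dim=1))
        return sem_logits, mal_logit

def global_loss(sem_logits, mal_logit, sem_y, mal_y, lam=1.0):
    """One joint objective over both output levels (lam is a guess)."""
    bce = nn.functional.binary_cross_entropy_with_logits
    return bce(mal_logit.squeeze(1), mal_y) + lam * bce(sem_logits, sem_y)
```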
Re-ranking utilizes contextual information to optimize the initial ranking
list of person or vehicle re-identification (re-ID), which boosts the retrieval
performance at post-processing steps. This paper proposes a re-ranking network
to predict the correlations between the probe and top-ranked neighbor samples.
Specifically, all the feature embeddings of query and gallery images are
expanded and enhanced by a linear combination of their neighbors, with the
correlation prediction serving as discriminative combination weights. The
combination process is equivalent to moving independent embeddings toward the
identity centers, improving cluster compactness. For correlation prediction, we
first aggregate the contextual information for the probe's k-nearest neighbors via
the Transformer encoder. Then, we distill and refine the probe-related features
into the Contextual Memory cell via an attention mechanism. Like humans, who
retrieve images not only by considering probe images but also by memorizing the
retrieved ones, the Contextual Memory produces multi-view descriptions for each
instance. Finally, the neighbors are reconstructed with features fetched from
the Contextual Memory, and a binary classifier predicts their correlations with
the probe. Experiments on six widely-used person and vehicle re-ID benchmarks
demonstrate the effectiveness of the proposed method. Especially, our method
surpasses the state-of-the-art re-ranking approaches on large-scale datasets by
a significant margin, i.e., with an average 3.08% CMC@1 and 7.46% mAP
improvements on VERI-Wild, MSMT17, and VehicleID datasets. | [
"cs.CV"
]
|
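The feature-expansion step described above (embeddings replaced by weighted combinations of their neighbors) can be sketched in a few lines of NumPy. The paper learns the combination weights with a Transformer-based correlation predictor; here plain cosine similarities stand in for those learned weights, and k and alpha are arbitrary.

```python
import numpy as np

def expand_embeddings(feats, k=10, alpha=0.5):
    """Move each embedding toward a weighted mean of its k nearest
    neighbors (cosine similarity as naive stand-in weights)."""
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = f @ f.T
    idx = np.argsort(-sim, axis=1)[:, 1:k + 1]       # k nearest neighbors
    w = np.clip(np.take_along_axis(sim, idx, axis=1), 0, None)
    w /= np.maximum(w.sum(axis=1, keepdims=True), 1e-12)
    neigh = (feats[idx] * w[..., None]).sum(axis=1)  # weighted neighbor mean
    return (1 - alpha) * feats + alpha * neigh

feats = np.random.default_rng(0).normal(size=(100, 64))
print(expand_embeddings(feats).shape)                # (100, 64)
```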
Most generative models of audio directly generate samples in one of two
domains: time or frequency. While sufficient to express any signal, these
representations are inefficient, as they do not utilize existing knowledge of
how sound is generated and perceived. A third approach (vocoders/synthesizers)
successfully incorporates strong domain knowledge of signal processing and
perception, but has been less actively researched due to limited expressivity
and difficulty integrating with modern auto-differentiation-based machine
learning methods. In this paper, we introduce the Differentiable Digital Signal
Processing (DDSP) library, which enables direct integration of classic signal
processing elements with deep learning methods. Focusing on audio synthesis, we
achieve high-fidelity generation without the need for large autoregressive
models or adversarial losses, demonstrating that DDSP enables utilizing strong
inductive biases without losing the expressive power of neural networks.
Further, we show that combining interpretable modules permits manipulation of
each separate model component, with applications such as independent control of
pitch and loudness, realistic extrapolation to pitches not seen during
training, blind dereverberation of room acoustics, transfer of extracted room
acoustics to new environments, and transformation of timbre between disparate
sources. In short, DDSP enables an interpretable and modular approach to
generative modeling, without sacrificing the benefits of deep learning. The
library is publicly available at https://github.com/magenta/ddsp and we welcome
further contributions from the community and domain experts. | [
"cs.LG",
"cs.SD",
"eess.AS",
"eess.SP",
"stat.ML"
]
|
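To illustrate the kind of signal-processing module DDSP makes differentiable, here is a tiny NumPy harmonic (additive) synthesizer with a time-varying fundamental and per-harmonic amplitudes. It is a from-scratch sketch for intuition, not the DDSP library API; in DDSP the analogous oscillator is written with differentiable ops so gradients can flow through it.

```python
import numpy as np

def harmonic_synth(f0, amps, sr=16000):
    """Additive synthesis: a sum of harmonics of a time-varying
    fundamental f0[t] (Hz) with per-harmonic amplitudes amps[t, k]."""
    n, k = amps.shape
    phase = 2 * np.pi * np.cumsum(f0) / sr                # running phase
    harmonics = np.sin(phase[:, None] * np.arange(1, k + 1))
    alias = (f0[:, None] * np.arange(1, k + 1)) > sr / 2  # above Nyquist
    return (np.where(alias, 0.0, harmonics) * amps).sum(axis=1)

sr, dur, n_harm = 16000, 2.0, 8
n = int(sr * dur)
f0 = np.linspace(220.0, 440.0, n)                         # upward pitch glide
amps = np.outer(np.hanning(n), 1.0 / np.arange(1, n_harm + 1))
audio = harmonic_synth(f0, amps, sr)
print(audio.shape, float(np.abs(audio).max()))
```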
We propose the k-Shortest-Path (k-SP) constraint: a novel constraint on the
agent's trajectory that improves the sample efficiency in sparse-reward MDPs.
We show that any optimal policy necessarily satisfies the k-SP constraint.
Notably, the k-SP constraint prevents the policy from exploring state-action
pairs along the non-k-SP trajectories (e.g., going back and forth). However, in
practice, excluding state-action pairs may hinder the convergence of RL
algorithms. To overcome this, we propose a novel cost function that penalizes
the policy for violating the SP constraint, instead of completely excluding the
violating state-action pairs. Our numerical experiments in a tabular RL setting
demonstrate that the SP constraint can significantly reduce the policy's
trajectory space. As a result, our constraint enables more sample-efficient
learning by suppressing
redundant exploration and exploitation. Our experiments on MiniGrid, DeepMind
Lab, Atari, and Fetch show that the proposed method significantly improves
proximal policy optimization (PPO) and outperforms existing novelty-seeking
exploration methods including count-based exploration even in continuous
control tasks, indicating that it improves the sample efficiency by preventing
the agent from taking redundant actions. | [
"cs.LG",
"cs.AI",
"cs.RO"
]
|
Reinforcement learning (RL) provides a framework for learning goal-directed
policies given user-specified rewards. However, since designing rewards often
requires substantial engineering effort, we are interested in the problem of
learning without rewards, where agents must discover useful behaviors in the
absence of task-specific incentives. Intrinsic motivation is a family of
unsupervised RL techniques which develop general objectives for an RL agent to
optimize that lead to better exploration or the discovery of skills. In this
paper, we propose a new unsupervised RL technique based on an adversarial game
which pits two policies against each other to compete over the amount of
surprise an RL agent experiences. The policies each take turns controlling the
agent. The Explore policy maximizes entropy, putting the agent into surprising
or unfamiliar situations. Then, the Control policy takes over and seeks to
recover from those situations by minimizing entropy. The game harnesses the
power of multi-agent competition to drive the agent to seek out increasingly
surprising parts of the environment while learning to gain mastery over them.
We show empirically that our method leads to the emergence of complex skills by
exhibiting clear phase transitions. Furthermore, we show both theoretically
(via a latent state space coverage argument) and empirically that our method
has the potential to be applied to the exploration of stochastic,
partially-observed environments. We show that Adversarial Surprise learns more
complex behaviors, and explores more effectively than competitive baselines,
outperforming intrinsic motivation methods based on active inference,
novelty-seeking (Random Network Distillation (RND)), and multi-agent
unsupervised RL (Asymmetric Self-Play (ASP)) in MiniGrid, Atari and VizDoom
environments. | [
"cs.LG",
"cs.AI"
]
|
We propose a new network architecture, the Fractal Pyramid Networks (PFNs)
for pixel-wise prediction tasks as an alternative to the widely used
encoder-decoder structure. In the encoder-decoder structure, the input is
processed by an encoding-decoding pipeline that tries to obtain a semantic,
large-channel feature. In contrast, our proposed PFNs hold multiple
information-processing pathways and encode the information into multiple
separate small-channel features. On the task of self-supervised monocular depth
estimation, even without ImageNet pretraining, our models can compete with or
outperform the state-of-the-art methods on the KITTI dataset with far fewer
parameters. Moreover, the visual quality of the prediction is significantly
improved. The experiment of semantic segmentation provides evidence that the
PFNs can be applied to other pixel-wise prediction tasks, and demonstrates that
our models can catch more global structure information. | [
"cs.CV",
"cs.AI"
]
|
A challenge of skeleton-based action recognition is the difficulty of
classifying actions with similar motions, as well as object-related actions.
Visual cues from other streams help in that regard. However, RGB data are
sensitive to illumination conditions and thus unusable in the dark. To
alleviate this issue and still
benefit from a visual stream, we propose a modular network (FUSION) combining
skeleton and infrared data. A 2D convolutional neural network (CNN) is used as
a pose module to extract features from skeleton data. A 3D CNN is used as an
infrared module to extract visual cues from videos. Both feature vectors are
then concatenated and exploited conjointly using a multilayer perceptron (MLP).
Skeleton data also condition the infrared videos, providing a crop around the
performing subjects and thus virtually focusing the attention of the infrared
module. Ablation studies show that using networks pre-trained on other
large-scale datasets as our modules, together with data augmentation, yields
considerable improvements in action classification accuracy. The strong
contribution of
our cropping strategy is also demonstrated. We evaluate our method on the NTU
RGB+D dataset, the largest dataset for human action recognition from depth
cameras, and report state-of-the-art performances. | [
"cs.CV",
"cs.LG",
"eess.IV"
]
|
Given a vertex of interest in a network $G_1$, the vertex nomination problem
seeks to find the corresponding vertex of interest (if it exists) in a second
network $G_2$. A vertex nomination scheme produces a list of the vertices in
$G_2$, ranked according to how likely they are judged to be the corresponding
vertex of interest in $G_2$. The vertex nomination problem and related
information retrieval tasks have attracted much attention in the machine
learning literature, with numerous applications to social and biological
networks. However, the current framework has often been confined to a
comparatively small class of network models, and the concept of statistically
consistent vertex nomination schemes has been only shallowly explored. In this
paper, we extend the vertex nomination problem to a very general statistical
model of graphs. Further, drawing inspiration from the long-established
classification framework in the pattern recognition literature, we provide
definitions for the key notions of Bayes optimality and consistency in our
extended vertex nomination framework, including a derivation of the Bayes
optimal vertex nomination scheme. In addition, we prove that no universally
consistent vertex nomination schemes exist. Illustrative examples are provided
throughout. | [
"stat.ML"
]
|
The challenging problem of semi-supervised node classification has been
studied extensively. At the frontier, Graph Neural Networks (GNNs), which
update the representation of each node by aggregating information from its
neighbors, have recently attracted great interest. However, most GNNs have
shallow
layers with a limited receptive field and may not achieve satisfactory
performance especially when the number of labeled nodes is quite small. To
address this challenge, we innovatively propose a graph few-shot learning (GFL)
algorithm that incorporates prior knowledge learned from auxiliary graphs to
improve classification accuracy on the target graph. Specifically, a
transferable metric space characterized by a node embedding and a
graph-specific prototype embedding function is shared between auxiliary graphs
and the target, facilitating the transfer of structural knowledge. Extensive
experiments and ablation studies on four real-world graph datasets demonstrate
the effectiveness of our proposed model. | [
"cs.LG",
"stat.ML"
]
|
We study deep reinforcement learning (RL) algorithms with delayed rewards. In
many real-world tasks, instant rewards are often not readily accessible or even
defined immediately after the agent performs actions. In this work, we first
formally define the environment with delayed rewards and discuss the challenges
arising from the non-Markovian nature of such environments. Then, we introduce
a general off-policy RL framework with a new Q-function formulation that can
handle the delayed rewards with theoretical convergence guarantees. For
practical tasks with high dimensional state spaces, we further introduce the
HC-decomposition rule of the Q-function in our framework which naturally leads
to an approximation scheme that helps boost the training efficiency and
stability. We finally conduct extensive experiments to demonstrate the superior
performance of our algorithms over the existing work and their variants. | [
"cs.LG"
]
|
Facial action unit (AU) intensity is an index describing all visually
discernible facial movements. Most existing methods learn an intensity
estimator from limited AU data and therefore lack generalization ability
outside the dataset. In this paper, we present a framework to predict the
facial parameters
(including identity parameters and AU parameters) based on a bone-driven face
model (BDFM) under different views. The proposed framework consists of a
feature extractor, a generator, and a facial parameter regressor. The regressor
can fit the physical meaning parameters of the BDFM from a single face image
with the help of the generator, which maps the facial parameters to the
game-face images as a differentiable renderer. Besides, identity loss, loopback
loss, and adversarial loss can improve the regression results. Quantitative
evaluations are performed on two public databases, BP4D and DISFA,
demonstrating that the proposed method achieves comparable or better
performance than the state-of-the-art methods. Moreover, the qualitative
results also demonstrate the validity of our method in the wild. | [
"cs.CV"
]
|
We investigate the classification performance of K-nearest neighbors (K-NN)
and deep neural networks (DNNs) in the presence of label noise. We first show
empirically that a DNN's prediction for a given test example depends on the
labels of the training examples in its local neighborhood. This motivates us to
derive a realizable analytic expression that approximates the multi-class K-NN
classification error in the presence of label noise, which is of independent
importance. We then suggest that the expression for K-NN may serve as a
first-order approximation for the DNN error. Finally, we demonstrate
empirically the proximity of the developed expression to the observed
performance of K-NN and DNN classifiers. Our result may explain the previously
observed, surprising resistance of DNNs to some types of label noise. It also
characterizes an important factor of this resistance, showing that the more
concentrated the noise, the greater the degradation in performance. | [
"cs.LG",
"cs.CV",
"cs.NE",
"stat.ML"
]
|
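The K-NN side of the study above is easy to reproduce in miniature: inject uniform label noise into the training set and watch the test accuracy of a K-NN classifier degrade. A toy with scikit-learn; all dataset and classifier settings are arbitrary.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=4000, n_classes=3, n_informative=6,
                           random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)

for noise in (0.0, 0.2, 0.4):
    yn = ytr.copy()
    flip = rng.random(len(yn)) < noise
    yn[flip] = rng.integers(0, 3, flip.sum())      # uniform label noise
    acc = KNeighborsClassifier(n_neighbors=15).fit(Xtr, yn).score(Xte, yte)
    print(f"noise={noise:.1f}  test accuracy={acc:.3f}")
```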
The recent increase in the scale and complexity of software systems has
introduced new challenges to the time series monitoring and anomaly detection
process. A major drawback of existing anomaly detection methods is that they
lack contextual information to help stakeholders identify the cause of
anomalies. This problem, known as root cause detection, is particularly
challenging to undertake in today's complex distributed software systems since
the metrics under consideration generally have multiple internal and external
dependencies. Significant manual analysis and strong domain expertise are
required to isolate the correct cause of the problem. In this paper, we propose
a method that isolates the root cause of an anomaly by analyzing the patterns
in time series fluctuations. Our method considers the time series as
observations from an underlying process passing through a sequence of
discretized hidden states. The idea is to track the propagation of the effect
when a given problem causes unaligned but homogeneous shifts of the underlying
states. We evaluate our approach by finding the root cause of anomalies in
Zillow's clickstream data, identifying causal patterns among a set of observed
fluctuations. | [
"cs.LG",
"stat.AP",
"stat.ML"
]
|
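One simple way to realize the "discretized hidden states" from the abstract above is an HMM per metric: infer each metric's state sequence, find where the state first shifts, and nominate the earliest shifter as the root-cause candidate. This sketch uses the third-party hmmlearn package and synthetic data; the actual method is more elaborate than this ordering heuristic.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(1)
# two related metrics; the second shifts shortly after the first
m1 = np.concatenate([rng.normal(0, 1, 300), rng.normal(4, 1, 200)])
m2 = np.concatenate([rng.normal(0, 1, 320), rng.normal(4, 1, 180)])

def shift_point(series, n_states=2):
    """Fit an HMM and return the first index where the hidden state changes."""
    model = GaussianHMM(n_components=n_states, n_iter=100, random_state=0)
    states = model.fit(series.reshape(-1, 1)).predict(series.reshape(-1, 1))
    return int(np.argmax(states != states[0]))

t1, t2 = shift_point(m1), shift_point(m2)
print("metric 1 shifts at", t1, "| metric 2 shifts at", t2)
print("root-cause candidate:", "metric 1" if t1 < t2 else "metric 2")
```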
Simultaneous segmentation of multiple organs from different medical imaging
modalities is a crucial task as it can be utilized for computer-aided
diagnosis, computer-assisted surgery, and therapy planning. Thanks to the
recent advances in deep learning, several deep neural networks for medical
image segmentation have been introduced successfully for this purpose. In this
paper, we focus on learning a deep multi-organ segmentation network that labels
voxels. In particular, we examine the critical choice of a loss function in
order to handle the notorious imbalance problem that plagues both the input and
output of a learning model. The input imbalance refers to the class-imbalance
in the input training samples (i.e., small foreground objects embedded in an
abundance of background voxels, as well as organs of varying sizes). The output
imbalance refers to the imbalance between the false positives and false
negatives of the inference model. In order to tackle both types of imbalance
during training and inference, we introduce a new curriculum learning based
loss function. Specifically, we leverage Dice similarity coefficient to deter
model parameters from being held at bad local minima and at the same time
gradually learn better model parameters by penalizing for false
positives/negatives using a cross-entropy term. We evaluated the proposed loss
function on three datasets: whole-body positron emission tomography (PET) scans
with 5 target organs, magnetic resonance imaging (MRI) prostate scans, and
ultrasound echocardiography images with a single target organ, i.e., the left
ventricle. We show that a simple network architecture with the proposed
integrative loss function can outperform state-of-the-art methods and results
of the competing methods can be improved when our proposed loss is used. | [
"cs.CV"
]
|
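A hedged PyTorch sketch in the spirit of the loss described above: soft Dice dominates early to keep the model away from bad local minima, and the cross-entropy term that penalizes false positives/negatives is ramped in over training. The warm-up length and linear schedule are guesses, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def curriculum_seg_loss(logits, target, epoch, warmup=20, eps=1e-6):
    """Soft-Dice plus cross-entropy with a curriculum weight (0 -> 1).
    logits: (B, C, H, W) scores; target: (B, H, W) integer labels."""
    num_classes = logits.shape[1]
    probs = logits.softmax(dim=1)
    onehot = F.one_hot(target, num_classes).permute(0, 3, 1, 2).float()
    inter = (probs * onehot).sum(dim=(0, 2, 3))
    denom = (probs + onehot).sum(dim=(0, 2, 3))
    dice = 1 - ((2 * inter + eps) / (denom + eps)).mean()  # soft Dice loss
    ce = F.cross_entropy(logits, target)                   # FP/FN penalty
    w = min(1.0, epoch / warmup)        # curriculum: Dice first, then CE
    return (1 - w) * dice + w * ce

loss = curriculum_seg_loss(torch.randn(2, 6, 64, 64),
                           torch.randint(0, 6, (2, 64, 64)), epoch=5)
print(float(loss))
```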
We propose to jointly learn multi-view geometry and warping between views of
the same object instances for robust cross-view object detection. What makes
multi-view object instance detection difficult are strong changes in viewpoint,
lighting conditions, high similarity of neighbouring objects, and strong
variability in scale. By turning object detection and instance
re-identification in different views into a joint learning task, we are able to
incorporate both image appearance and geometric soft constraints into a single,
multi-view detection process that is learnable end-to-end. We validate our
method on a new, large data set of street-level panoramas of urban objects and
show superior performance compared to various baselines. Our contribution is
threefold: a large-scale, publicly available data set for multi-view instance
detection and re-identification; an annotation tool custom-tailored for
multi-view instance detection; and a novel, holistic multi-view instance
detection and re-identification method that jointly models geometry and
appearance across views. | [
"cs.LG",
"cs.CV",
"stat.ML"
]
|
The foundational concept of Max-Margin in machine learning is ill-posed for
output spaces with more than two labels such as in structured prediction. In
this paper, we show that the Max-Margin loss can only be consistent with the
classification task under highly restrictive assumptions on the discrete loss
measuring the error between outputs. These conditions are satisfied by
distances defined in tree graphs, for which we prove consistency, thus being
the first losses shown to be consistent for Max-Margin beyond the binary
setting. We finally address these limitations by correcting the concept of
Max-Margin and introducing the Restricted-Max-Margin, where the maximization of
the loss-augmented scores is maintained, but performed over a subset of the
original domain. The resulting loss is also a generalization of the binary
support vector machine and it is consistent under milder conditions on the
discrete loss. | [
"cs.LG",
"stat.ML"
]
|
Causal processes in biomedicine may contain cycles, evolve over time or
differ between populations. However, many graphical models cannot accommodate
these conditions. We propose to model causation using a mixture of directed
acyclic graphs (DAGs), where the joint distribution in a population follows a
DAG at any single point in time but potentially different DAGs across time. We
also introduce an algorithm called Causal Inference over Mixtures that uses
longitudinal data to infer a graph summarizing the causal relations generated
from a mixture of DAGs. Experiments demonstrate improved performance compared
to prior approaches. | [
"stat.ML",
"cs.LG",
"stat.AP"
]
|
Due to escalating healthcare costs, accurately predicting which patients will
incur high costs is an important task for payers and providers of healthcare.
High-cost claimants (HiCCs) are patients who have annual costs above
$\$250,000$ and who represent just 0.16% of the insured population but
currently account for 9% of all healthcare costs. In this study, we aimed to
develop a high-performance algorithm to predict HiCCs to inform a novel care
management system. Using health insurance claims from 48 million people,
augmented with census data, we applied machine learning to train binary
classification models to calculate the personal risk of HiCC. To train the
models, we developed a platform starting with 6,006 variables across all
clinical and demographic dimensions and constructed over one hundred candidate
models. The best model achieved an area under the receiver operating
characteristic curve of 91.2%. The model exceeds the highest published
performance (84%) and remains high for patients with no prior history of
high-cost status (89%), who have less than a full year of enrollment (87%), or
lack pharmacy claims data (88%). It attains an area under the precision-recall
curve of 23.1% and a precision of 74% at a threshold of 0.99. A care management
program enrolling 500 people with the highest HiCC risk is expected to treat
199 true HiCCs and generate a net savings of $\$7.3$ million per year. Our
results demonstrate that high-performing predictive models can be constructed
using claims data and publicly available data alone, even for rare high-cost
claimants exceeding $\$250,000$. Our model demonstrates the transformational
power of machine learning and artificial intelligence in care management, which
would allow healthcare payers and providers to introduce the next generation of
care management programs. | [
"cs.LG",
"stat.ML",
"J.3, I.2.6",
"J.3; I.2.6"
]
|
As reinforcement learning agents are tasked with solving more challenging and
diverse tasks, the ability to incorporate prior knowledge into the learning
system and to exploit reusable structure in solution space is likely to become
increasingly important. The KL-regularized expected reward objective
constitutes one possible tool to this end. It introduces an additional
component, a default or prior behavior, which can be learned alongside the
policy and as such partially transforms the reinforcement learning problem into
one of behavior modelling. In this work we consider the implications of this
framework in cases where both the policy and default behavior are augmented
with latent variables. We discuss how the resulting hierarchical structures can
be used to implement different inductive biases and how their modularity can
benefit transfer. Empirically we find that they can lead to faster learning and
transfer on a range of continuous control tasks. | [
"cs.LG",
"stat.ML"
]
|
Environmental Sound Classification (ESC) is an active research area in the
audio domain and has seen a lot of progress in the past years. However, many of
the existing approaches achieve high accuracy by relying on domain-specific
features and architectures, making it harder to benefit from advances in other
fields (e.g., the image domain). Additionally, some of the past successes have
been attributed to a discrepancy of how results are evaluated (i.e., on
unofficial splits of the UrbanSound8K (US8K) dataset), distorting the overall
progression of the field.
The contribution of this paper is twofold. First, we present a model that is
inherently compatible with mono and stereo sound inputs. Our model is based on
simple log-power Short-Time Fourier Transform (STFT) spectrograms and combines
them with several well-known approaches from the image domain (i.e., ResNet,
Siamese-like networks and attention). We investigate the influence of
cross-domain pre-training, architectural changes, and evaluate our model on
standard datasets. We find that our model outperforms all previously known
approaches in a fair comparison by achieving accuracies of 97.0 % (ESC-10),
91.5 % (ESC-50) and 84.2 % / 85.4 % (US8K mono / stereo).
Second, we provide a comprehensive overview of the actual state of the field,
by differentiating several previously reported results on the US8K dataset
between official or unofficial splits. For better reproducibility, our code
(including any re-implementations) is made available. | [
"cs.CV",
"cs.LG",
"cs.SD",
"eess.AS"
]
|
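The model input described above (log-power STFT spectrograms) can be produced in a couple of lines with librosa; for a stereo clip one would compute one such spectrogram per channel. The FFT size, hop length, and the synthetic test tone below are arbitrary.

```python
import numpy as np
import librosa

sr = 22050
t = np.linspace(0, 2.0, int(2.0 * sr), endpoint=False)
y = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)

S = librosa.stft(y, n_fft=1024, hop_length=256)   # complex spectrogram
log_power = np.log(np.abs(S) ** 2 + 1e-10)        # the model's input "image"
print(log_power.shape)                            # (freq_bins, frames)
```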
Establishing dense correspondences between a pair of images is an important
and general problem, covering geometric matching, optical flow and semantic
correspondences. While these applications share fundamental challenges, such as
large displacements, pixel-accuracy, and appearance changes, they are currently
addressed with specialized network architectures, designed for only one
particular task. This severely limits the generalization capabilities of such
networks to new scenarios, where e.g. robustness to larger displacements or
higher accuracy is required.
In this work, we propose a universal network architecture that is directly
applicable to all the aforementioned dense correspondence problems. We achieve
both high accuracy and robustness to large displacements by investigating the
combined use of global and local correlation layers. We further propose an
adaptive resolution strategy, allowing our network to operate on virtually any
input image resolution. The proposed GLU-Net achieves state-of-the-art
performance for geometric and semantic matching as well as optical flow, when
using the same network and weights. Code and trained models are available at
https://github.com/PruneTruong/GLU-Net. | [
"cs.CV"
]
|
Tabular data are ubiquitous owing to the widespread use of tables, and have
hence attracted the attention of researchers seeking to extract the underlying
information. One of the critical problems in mining tabular data is how to
understand their inherent semantic structures automatically. Existing studies
typically adopt Convolutional Neural Network (CNN) to model the spatial
information of tabular structures yet ignore more diverse relational
information between cells, such as the hierarchical and paratactic
relationships. To simultaneously extract spatial and relational information
from tables, we propose a novel neural network architecture, TabularNet. The
spatial encoder of TabularNet utilizes the row/column-level Pooling and the
Bidirectional Gated Recurrent Unit (Bi-GRU) to capture statistical information
and local positional correlation, respectively. For relational information, we
design a new graph construction method based on the WordNet tree and adopt a
Graph Convolutional Network (GCN) based encoder that focuses on the
hierarchical and paratactic relationships between cells. Our neural network
architecture can be a unified neural backbone for different understanding tasks
and utilized in a multitask scenario. We conduct extensive experiments on three
classification tasks with two real-world spreadsheet data sets, and the results
demonstrate the effectiveness of our proposed TabularNet over state-of-the-art
baselines. | [
"cs.LG"
]
|
Image compression has become an absolute necessity in today's day and age.
With the advent of the Internet era, compressing files to share with other
users is essential. Several efforts have been made to reduce file sizes
while still maintaining image quality in order to transmit files even on
limited-bandwidth connections. This paper discusses the need for the Discrete
Cosine Transform (DCT) in the compression of images in the Joint Photographic
Experts Group (JPEG) file format. Via an intensive literature study, this paper
first introduces DCT and JPEG compression. The following section discusses how
JPEG compression is implemented using the DCT. The last section concludes with
further real-world applications of DCT in image processing. | [
"cs.CV"
]
|
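A tiny worked example of the DCT step at the heart of JPEG, as discussed above: level-shift an 8x8 block, take the 2-D DCT-II, discard high-frequency coefficients (a crude stand-in for JPEG's quantization tables), and invert. Uses scipy.fft.

```python
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(0)
block = rng.integers(0, 256, (8, 8)).astype(float) - 128  # level-shifted block

coeffs = dctn(block, norm="ortho")        # 2-D DCT-II, as used in JPEG
coeffs[4:, :] = 0                         # crude "quantization": drop
coeffs[:, 4:] = 0                         # high-frequency coefficients
approx = idctn(coeffs, norm="ortho")      # inverse DCT reconstruction

print("max reconstruction error:", np.abs(block - approx).max())
```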
Breast cancer is one of the leading causes of mortality in women. Early
detection and treatment are imperative for improving survival rates, which have
steadily increased in recent years as a result of more sophisticated
computer-aided-diagnosis (CAD) systems. A critical component of breast cancer
diagnosis relies on histopathology, a laborious and highly subjective process.
Consequently, CAD systems are essential to reduce inter-rater variability and
supplement the analyses conducted by specialists. In this paper, a
transfer-learning based approach is proposed, for the task of breast histology
image classification into four tissue sub-types, namely, normal, benign,
\textit{in situ} carcinoma and invasive carcinoma. The histology images,
provided as part of the BACH 2018 grand challenge, were first normalized to
correct for color variations resulting from inconsistencies during slide
preparation. Subsequently, image patches were extracted and used to fine-tune
Google's Inception-V3 and ResNet50 convolutional neural networks (CNNs), both
pre-trained on the ImageNet database, enabling them to learn domain-specific
features, necessary to classify the histology images. The ResNet50 network
(based on residual learning) achieved a test classification accuracy of 97.50%
for four classes, outperforming the Inception-V3 network which achieved an
accuracy of 91.25%. | [
"cs.CV"
]
|
Being aware of other traffic is a prerequisite for self-driving cars to
operate in the real world. In this paper, we show how the intrinsic feature
maps of an object detection CNN can be used to uniquely identify vehicles from
a dash-cam feed. Feature maps of a pretrained `YOLO' network are used to create
700 deep integrated feature signatures (DIFS) from 20 different images of 35
vehicles from a high resolution dataset and 340 signatures from 20 different
images of 17 vehicles of a lower resolution tracking benchmark dataset. The
YOLO network was trained to classify general object categories, e.g. classify a
detected object as a `car' or `truck'. 5-Fold nearest neighbor (1NN)
classification was used on DIFS created from feature maps in the middle layers
of the network to correctly identify unique vehicles at a rate of 96.7\% for
the high-resolution data and at a rate of 86.8\% for the lower-resolution
data. We conclude that a deep neural detection network trained to distinguish
between different classes can be successfully used to identify different
instances belonging to the same class, through the creation of deep integrated
feature signatures (DIFS). | [
"cs.CV",
"cs.LG",
"stat.ML",
"I.2.6"
]
|
Timely and accurate traffic forecasting is crucial for urban traffic control and
guidance. Due to the high nonlinearity and complexity of traffic flow,
traditional methods cannot satisfy the requirements of mid-and-long term
prediction tasks and often neglect spatial and temporal dependencies. In this
paper, we propose a novel deep learning framework, Spatio-Temporal Graph
Convolutional Networks (STGCN), to tackle the time series prediction problem in
traffic domain. Instead of applying regular convolutional and recurrent units,
we formulate the problem on graphs and build the model with complete
convolutional structures, which enable much faster training speed with fewer
parameters. Experiments show that our model STGCN effectively captures
comprehensive spatio-temporal correlations through modeling multi-scale traffic
networks and consistently outperforms state-of-the-art baselines on various
real-world traffic datasets. | [
"cs.LG",
"stat.ML"
]
|
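The spatial half of a model like STGCN boils down to graph convolutions over the road network. Below is a minimal NumPy version of one symmetric-normalized graph-convolution layer on a toy 4-sensor graph; STGCN itself combines such spatial blocks (Chebyshev/graph convolutions) with gated temporal convolutions, which this sketch omits.

```python
import numpy as np

def gcn_layer(A, X, W):
    """One graph-convolution step: ReLU(D^{-1/2}(A+I)D^{-1/2} X W)."""
    A_hat = A + np.eye(A.shape[0])                  # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0)

# toy road network: 4 sensors, 2 input features (speed, flow), 8 hidden units
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
H = gcn_layer(A, rng.normal(size=(4, 2)), rng.normal(size=(2, 8)))
print(H.shape)  # (4, 8): per-sensor hidden features
```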
In this paper, we focus on estimating the 6D pose of objects in point clouds.
Although the topic has been widely studied, pose estimation in point clouds
remains a challenging problem due to the noise and occlusion. To address the
problem, a novel 3DPVNet is presented in this work, which utilizes 3D local
patches to vote for the object's 6D pose. 3DPVNet comprises three modules.
In particular, a Patch Unification (\textbf{PU}) module is first introduced to
normalize the input patch, and also create a standard local coordinate frame on
it to generate a reliable vote. We then devise a Weight-guided Neighboring
Feature Fusion (\textbf{WNFF}) module in the network, which fuses the
neighboring features to yield a semi-global feature for the center patch. WNFF
module mines the neighboring information of a local patch, such that the
representation capability to local geometric characteristics is significantly
enhanced, making the method robust to a certain level of noise. Moreover, we
present a Patch-level Voting (\textbf{PV}) module to regress transformations
and generate pose votes. After aggregating all votes from the patches and a
refinement step, the final pose of the object can be obtained. Compared to
recent voting-based methods, 3DPVNet operates at the patch level and directly
on point clouds. Therefore, 3DPVNet requires less computation than
point/pixel-level voting schemes and is robust to partial data.
Experiments on several datasets demonstrate that 3DPVNet achieves the
state-of-the-art performance, and is also robust against noise and occlusions. | [
"cs.CV"
]
|
Salient object detection in complex scenes and environments is a challenging
research topic. Most works focus on RGB-based salient object detection, which
limits its performance of real-life applications when confronted with adverse
conditions such as dark environments and complex backgrounds. Taking advantage
of both RGB and thermal infrared images has recently become a new research
direction for detecting salient objects in complex scenes, as thermal infrared
spectrum imaging provides the complementary information and has been applied to
many computer vision tasks. However, current research for RGBT salient object
detection is limited by the lack of a large-scale dataset and comprehensive
benchmark. This work contributes such an RGBT image dataset, named VT5000,
including 5000 spatially aligned RGBT image pairs with ground truth
annotations. VT5000 has 11 challenges collected in different scenes and
environments for exploring the robustness of algorithms. With this dataset, we
propose a powerful baseline approach, which extracts multi-level features
within each modality and aggregates these features of all modalities with the
attention mechanism, for accurate RGBT salient object detection. Extensive
experiments show that the proposed baseline approach outperforms the
state-of-the-art methods on the VT5000 dataset and two other public datasets.
In addition, we carry out a comprehensive analysis of different RGBT salient
object detection algorithms on the VT5000 dataset, and then draw several
valuable
conclusions and provide some potential research directions for RGBT salient
object detection. | [
"cs.CV"
]
|
We present a novel framework to learn to convert the per-pixel photometric
information at each view into spatially distinctive and view-invariant
low-level features, which can be plugged into an existing multi-view stereo
pipeline for enhanced 3D reconstruction. Both the illumination conditions
during acquisition and the subsequent per-pixel feature transform can be
jointly optimized in a differentiable fashion. Our framework automatically
adapts to and makes efficient use of the geometric information available in
different forms of input data. High-quality 3D reconstructions of a variety of
challenging objects are demonstrated on the data captured with an illumination
multiplexing device, as well as a point light. Our results compare favorably
with state-of-the-art techniques. | [
"cs.CV",
"cs.GR"
]
|
In contrast to the literature where local patterns in 3D point clouds are
captured by customized convolutional operators, in this paper we study the
problem of how to effectively and efficiently project such point clouds into a
2D image space so that traditional 2D convolutional neural networks (CNNs) such
as U-Net can be applied for segmentation. To this end, we are motivated by
graph drawing and reformulate it as an integer programming problem to learn the
topology-preserving graph-to-grid mapping for each individual point cloud. To
accelerate the computation in practice, we further propose a novel hierarchical
approximate algorithm. With the help of the Delaunay triangulation for graph
construction from point clouds and a multi-scale U-Net for segmentation, we
manage to demonstrate the state-of-the-art performance on ShapeNet and PartNet,
respectively, with significant improvement over the literature. Code is
available at https://github.com/Zhang-VISLab. | [
"cs.CV",
"cs.LG"
]
|
Symbolic equations are at the core of scientific discovery. The task of
discovering the underlying equation from a set of input-output pairs is called
symbolic regression. Traditionally, symbolic regression methods use
hand-designed strategies that do not improve with experience. In this paper, we
introduce the first symbolic regression method that leverages large scale
pre-training. We procedurally generate an unbounded set of equations, and
simultaneously pre-train a Transformer to predict the symbolic equation from a
corresponding set of input-output pairs. At test time, we query the model on a
new set of points and use its output to guide the search for the equation. We
show empirically that this approach can re-discover a set of well-known
physical equations, and that it improves over time with more data and compute. | [
"cs.LG"
]
|
Psychologists recognize Raven's Progressive Matrices as a very effective test
of general human intelligence. While many computational models have been
developed by the AI community to investigate different forms of top-down,
deliberative reasoning on the test, there has been less research on bottom-up
perceptual processes, like Gestalt image completion, that are also critical in
human test performance. In this work, we investigate how Gestalt visual
reasoning on the Raven's test can be modeled using generative image inpainting
techniques from computer vision. We demonstrate that a self-supervised
inpainting model trained only on photorealistic images of objects achieves a
score of 27/36 on the Colored Progressive Matrices, which corresponds to
average performance for nine-year-old children. We also show that models
trained on other datasets (faces, places, and textures) do not perform as well.
Our results illustrate how learning visual regularities in real-world images
can translate into successful reasoning about artificial test stimuli. On the
flip side, our results also highlight the limitations of such transfer, which
may explain why intelligence tests like the Raven's are often sensitive to
people's individual sociocultural backgrounds. | [
"cs.CV",
"cs.LG",
"eess.IV"
]
|
Actor-critic (AC) algorithms are known for their efficacy and high
performance in solving reinforcement learning problems, but they also suffer
from low sampling efficiency. An AC based policy optimization process is
iterative and needs to frequently access the agent-environment system to
evaluate and update the policy by rolling out the policy, collecting rewards
and states (i.e. samples), and learning from them. It ultimately requires a
huge number of samples to learn an optimal policy. To improve sampling
efficiency, we propose a strategy to optimize the training dataset so that it
contains significantly fewer samples collected from the AC process. The dataset
optimization consists of a best-episode-only operation, a policy
parameter-fitness model, and a genetic algorithm module. The optimal policy
network trained on the optimized training dataset exhibits superior performance
compared to many contemporary AC algorithms in controlling autonomous dynamical
systems. Evaluations on standard benchmarks show that the method improves
sampling efficiency, ensures faster convergence to optima, and is more
data-efficient than its counterparts. | [
"cs.LG",
"cs.SY",
"eess.SY"
]
|
Exploration is critical to a reinforcement learning agent's performance in
its given environment. Prior exploration methods are often based on using
heuristic auxiliary predictions to guide policy behavior, lacking a
mathematically-grounded objective with clear properties. In contrast, we recast
exploration as a problem of State Marginal Matching (SMM), where we aim to
learn a policy for which the state marginal distribution matches a given target
state distribution. The target distribution is a uniform distribution in most
cases, but can incorporate prior knowledge if available. In effect, SMM
amortizes the cost of learning to explore in a given environment. The SMM
objective can be viewed as a two-player, zero-sum game between a state density
model and a parametric policy, an idea that we use to build an algorithm for
optimizing the SMM objective. Using this formalism, we further demonstrate that
prior work approximately maximizes the SMM objective, offering an explanation
for the success of these methods. On both simulated and real-world tasks, we
demonstrate that agents that directly optimize the SMM objective explore faster
and adapt more quickly to new tasks as compared to prior exploration methods. | [
"cs.LG",
"cs.AI",
"cs.RO",
"stat.ML"
]
|
In this paper, we propose a novel way to interpret text information by
extracting visual feature representations from multiple high-resolution,
photo-realistic synthetic images generated by a text-to-image Generative
Adversarial Network (GAN) to improve the performance of image labeling.
First, we design a stacked Generative Multi-Adversarial Network (GMAN),
StackGMAN++, a modified version of the current state-of-the-art text-to-image
GAN, StackGAN++, to generate multiple synthetic images with various prior
noises conditioned on a text. Then, we extract deep visual features from the
generated synthetic images to explore the underlying visual concepts of the
text. Finally, we combine the image-level visual features, the text-level
features, and the visual features based on the synthetic images to predict
labels for images. We
conduct experiments on two benchmark datasets and the experimental results
clearly demonstrate the efficacy of our proposed approach. | [
"cs.CV"
]
|
Visual object detection is a computer vision-based artificial intelligence
(AI) technique which has many practical applications (e.g., fire hazard
monitoring). However, due to privacy concerns and the high cost of transmitting
video data, it is highly challenging to build object detection models on
centrally stored large training datasets following the current approach.
Federated learning (FL) is a promising approach to resolve this challenge.
Nevertheless, there is currently no easy-to-use tool that enables computer
vision application developers who are not experts in federated learning to
conveniently leverage this technology and apply it in their systems. In this
paper, we report FedVision - a machine learning engineering platform to support
the development of federated learning powered computer vision applications. The
platform has been deployed through a collaboration between WeBank and Extreme
Vision to help customers develop computer vision-based safety monitoring
solutions in smart city applications. Over four months of usage, it has
achieved significant efficiency improvement and cost reduction while removing
the need to transmit sensitive data for three major corporate customers. To the
best of our knowledge, this is the first real application of FL in computer
vision-based tasks. | [
"cs.LG",
"cs.CV",
"stat.ML"
]
|
Conventional deep learning based methods for object detection require a large
number of bounding box annotations for training, and such high-quality
annotated data are expensive to obtain. Few-shot object detection, which learns
to adapt to novel classes with only a few annotated examples, is very
challenging, since the fine-grained features of a novel object can easily be
overlooked with only a few data points available. In this work, aiming to fully
exploit features of annotated novel objects and to capture fine-grained
features of the query object, we
propose Dense Relation Distillation with Context-aware Aggregation (DCNet) to
tackle the few-shot detection problem. Built on the meta-learning based
framework, Dense Relation Distillation module targets at fully exploiting
support features, where support features and query feature are densely matched,
covering all spatial locations in a feed-forward fashion. The abundant use of
the guidance information endows the model with the capability to handle common
challenges such as appearance changes and occlusions. Moreover, to better
capture scale-aware features, Context-aware Aggregation module adaptively
harnesses features from different scales for a more comprehensive feature
representation. Extensive experiments illustrate that our proposed approach
achieves state-of-the-art results on PASCAL VOC and MS COCO datasets. Code will
be made available at https://github.com/hzhupku/DCNet. | [
"cs.CV"
]
|
Object-centric world models provide structured representation of the scene
and can be an important backbone in reinforcement learning and planning.
However, existing approaches suffer in partially-observable environments due to
the lack of belief states. In this paper, we propose Structured World Belief, a
model for learning and inference of object-centric belief states. Inferred by
Sequential Monte Carlo (SMC), our belief states provide multiple object-centric
scene hypotheses. To synergize the benefits of SMC particles with object
representations, we also propose a new object-centric dynamics model that
considers the inductive bias of object permanence. This enables tracking of
object states even when they are invisible for a long time. To further
facilitate object tracking in this regime, we allow our model to attend
flexibly to any spatial location in the image, a capability that was restricted
in previous models. In experiments, we show that object-centric belief provides
a more
accurate and robust performance for filtering and generation. Furthermore, we
show the efficacy of structured world belief in improving the performance of
reinforcement learning, planning and supervised reasoning. | [
"cs.LG",
"cs.AI"
]
|
Recent reinforcement learning studies extensively explore the interplay
between cooperative and competitive behaviour in mixed environments. Unlike
cooperative environments where agents strive towards a common goal, mixed
environments are notorious for the conflicts of selfish and social interests.
As a consequence, purely rational agents often struggle to achieve and maintain
cooperation. A prevalent approach to induce cooperative behaviour is to assign
additional rewards based on other agents' well-being. However, this approach
suffers from the issue of multi-agent credit assignment, which can hinder
performance. This issue is efficiently alleviated in cooperative settings by
such state-of-the-art algorithms as QMIX and COMA. Still, when applied to mixed
environments, these algorithms may result in unfair allocation of rewards. We
propose BAROCCO, an extension of these algorithms capable of balancing individual
and social incentives. The mechanism behind BAROCCO is to train two distinct
but interwoven components that jointly affect each agent's decisions. Our
meta-algorithm is compatible with both Q-learning and Actor-Critic frameworks.
We experimentally confirm the advantages over the existing methods and explore
the behavioural aspects of BAROCCO in two mixed multi-agent setups. | [
"cs.LG",
"cs.AI",
"cs.MA"
]
|
Few-shot image classification consists of two consecutive learning processes:
1) In the meta-learning stage, the model acquires a knowledge base from a set
of training classes. 2) During meta-testing, the acquired knowledge is used to
recognize unseen classes from very few examples. Inspired by the compositional
representation of objects in humans, we train a neural network architecture
that explicitly represents objects as a set of parts and their spatial
composition. In particular, during meta-learning, we train a knowledge base
that consists of a dictionary of part representations and a dictionary of part
activation maps that encode common spatial activation patterns of parts. The
elements of both dictionaries are shared among the training classes. During
meta-testing, the representation of unseen classes is learned using the part
representations and the part activation maps from the knowledge base. Finally,
an attention mechanism is used to strengthen those parts that are most
important for each category. We demonstrate the value of our compositional
learning framework for few-shot classification using miniImageNet,
tieredImageNet, CIFAR-FS, and FC100, where we achieve state-of-the-art
performance. | [
"cs.CV"
]
|
This paper considers the subject of information losses arising from the
finite datasets used in the training of neural classifiers. It proves a
relationship expressing such losses as the product of the expected total
variation of the estimated neural model and the information about the feature
space contained in the hidden representation of that model. It then bounds this
expected total variation as a function of the size of randomly sampled datasets
in a fairly general setting, and without bringing in any additional dependence
on model complexity. It ultimately obtains bounds on information losses that
are less sensitive to input compression and in general much smaller than
existing bounds. The paper then uses these bounds to explain some recent
experimental findings of information compression in neural networks which
cannot be explained by previous work. Finally, the paper shows that not only
are these bounds much smaller than existing ones, but that they also correspond
well with experiments. | [
"cs.LG",
"stat.ML"
]
|
The need for sign language recognition is increasing rapidly, especially in
the hearing-impaired community. Only a few research groups have tried to
automatically recognize sign language from video, colored gloves, and the
like. Their approaches require a valid segmentation of both the data used for
training and the data to be recognized. Recognition of a sign language image
sequence is challenging because of the variety of hand shapes and hand
motions. This paper proposes to apply a combination of image segmentation and
restoration using topological derivatives to achieve high recognition
accuracy. Image quality measures are considered here to differentiate the
methods both subjectively and objectively. Experiments show that the
additional use of restoration before segmenting the postures significantly
improves the correct rate of hand detection, and that the discrete derivatives
yield a high rate of discrimination between different static hand postures, as
well as between hand postures and the scene background. Ultimately, this
research is intended to contribute to the implementation of an automated sign
language recognition system established mainly for welfare purposes. | [
"cs.CV"
]
|
The multiplicative structure of parameters and input data in the first layer
of neural networks is explored to build a connection between the landscape of the
loss function with respect to parameters and the landscape of the model
function with respect to input data. By this connection, it is shown that flat
minima regularize the gradient of the model function, which explains the good
generalization performance of flat minima. Then, we go beyond the flatness and
consider high-order moments of the gradient noise, and show that Stochastic
Gradient Descent (SGD) tends to impose constraints on these moments by a linear
stability analysis of SGD around global minima. Together with the
multiplicative structure, we identify the Sobolev regularization effect of SGD,
i.e. SGD regularizes the Sobolev seminorms of the model function with respect
to the input data. Finally, bounds for generalization error and adversarial
robustness are provided for solutions found by SGD under assumptions of the
data distribution. | [
"cs.LG"
]
|
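The Sobolev claim in the abstract above can be made concrete. A hedged
statement of the regularized quantity, assuming the standard first-order
seminorm taken with respect to the input distribution (the paper's exact
moments, constants, and assumptions are not reproduced here):

```latex
% First-order Sobolev seminorm of the model f with respect to the input
% distribution \rho; the abstract asserts that SGD implicitly controls
% quantities of this form (constants and higher-order moments omitted).
|f|_{H^1(\rho)}^2 = \int \left\lVert \nabla_x f(x) \right\rVert_2^2 \, d\rho(x)
```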
Neuromorphic event cameras, which capture the optical changes of a scene,
have drawn increasing attention due to their high speed and low power
consumption. However, the event data are noisy, sparse, and nonuniform in the
spatial-temporal domain with an extremely high temporal resolution, making it
challenging to design backend algorithms for event-based vision. Existing
methods encode events into point-cloud-based or voxel-based representations,
but suffer from noise and/or information loss. Additionally, there is little
research that systematically studies how to handle static and dynamic scenes
with one universal design for event-based vision. This work proposes the
Aligned Event Tensor (AET) as a novel event data representation, and a neat
framework called Event Frame Net (EFN), which enables our model for event-based
vision under static and dynamic scenes. The proposed AET and EFN are evaluated
on various datasets and shown to surpass existing state-of-the-art methods by
large margins. Our method is also efficient and achieves the fastest inference
speed among competing methods. | [
"cs.CV",
"I.2.10"
]
|
Recently, it has been demonstrated that deep neural networks can
significantly improve the performance of single image super-resolution (SISR).
Numerous studies have concentrated on raising the quantitative quality of
super-resolved (SR) images. However, these methods that target PSNR
maximization usually produce blurred images at large upscaling factors. The
introduction of generative adversarial networks (GANs) can mitigate this issue
and show impressive results with synthetic high-frequency textures.
Nevertheless, these GAN-based approaches tend to add fake textures and even
artifacts to make the SR image appear to be of visually higher resolution.
In this paper, we propose a novel perceptual image super-resolution method that
progressively generates visually high-quality results by constructing a
stage-wise network. Specifically, the first phase concentrates on minimizing
pixel-wise error, and the second stage utilizes the features extracted by the
previous stage to pursue results with better structural retention. The final
stage employs fine structure features distilled by the second phase to produce
more realistic results. In this way, we can maintain pixel- and
structural-level information in the perceptual image as much as possible. It is useful to
note that the proposed method can build three types of images in a feed-forward
process. Also, we explore a new generator that adopts multi-scale hierarchical
feature fusion. Extensive experiments on benchmark datasets show that our
approach is superior to the state-of-the-art methods. Code is available at
https://github.com/Zheng222/PPON. | [
"cs.CV",
"eess.IV"
]
|
Computational aesthetics is an emerging field of research which has attracted
different research groups in the last few years. In this field, one of the main
approaches to evaluate the aesthetic quality of paintings and photographs is a
feature-based approach. Among the different features proposed to reach this
goal, color plays an important role. In this paper, we introduce a novel dataset
that consists of paintings of Western provenance from 36 well-known painters
from the 15th to the 20th century. As a first step and to assess this dataset,
using a classifier, we investigate the correlation between the subjective
scores and two widely used features that are related to color perception and
employed in different aesthetic quality assessment approaches. Results show a
classification rate of up to 73% between the color features and the subjective
scores. | [
"cs.CV"
]
|
We present DistillFlow, a knowledge distillation approach to learning optical
flow. DistillFlow trains multiple teacher models and a student model, where
challenging transformations are applied to the input of the student model to
generate hallucinated occlusions as well as less confident predictions. Then, a
self-supervised learning framework is constructed: confident predictions from
teacher models serve as annotations to guide the student model to learn
optical flow for those less confident predictions. The self-supervised learning
framework enables us to effectively learn optical flow from unlabeled data, not
only for non-occluded pixels, but also for occluded pixels. DistillFlow
achieves state-of-the-art unsupervised learning performance on both KITTI and
Sintel datasets. Our self-supervised pre-trained model also provides an
excellent initialization for supervised fine-tuning, suggesting an alternate
training paradigm in contrast to current supervised learning methods that
highly rely on pre-training on synthetic data. At the time of writing, our
fine-tuned models ranked 1st among all monocular methods on the KITTI 2015
benchmark, and outperform all published methods on the Sintel Final benchmark.
More importantly, we demonstrate the generalization capability of DistillFlow
in three aspects: framework generalization, correspondence generalization and
cross-dataset generalization. | [
"cs.CV",
"cs.AI"
]
|
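A minimal sketch of the distillation step described in the DistillFlow
abstract above: confident teacher predictions supervise the student on a
challenged input. The confidence map, threshold, and loss form are
illustrative assumptions, not the authors' exact formulation.

```python
import torch

def distillation_loss(student_flow, teacher_flow, teacher_conf, conf_thresh=0.8):
    """Confidence-masked distillation loss (a simplified sketch, not the
    authors' exact code).

    student_flow: (B, 2, H, W) flow predicted on the transformed input
    teacher_flow: (B, 2, H, W) flow predicted by a teacher on the clean input
    teacher_conf: (B, 1, H, W) per-pixel teacher confidence in [0, 1]
                  (e.g., from forward-backward consistency); assumed input
    """
    mask = (teacher_conf > conf_thresh).float()          # trust confident pixels only
    diff = (student_flow - teacher_flow.detach()).abs()  # no gradients into teacher
    return (diff * mask).sum() / (mask.sum() * 2 + 1e-8) # mean over masked pixels

# Toy usage with random tensors.
s = torch.randn(1, 2, 64, 64, requires_grad=True)
t = torch.randn(1, 2, 64, 64)
c = torch.rand(1, 1, 64, 64)
distillation_loss(s, t, c).backward()
```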
Spectral estimation (SE) aims to identify how the energy of a signal (e.g., a
time series) is distributed across different frequencies. This can become
particularly challenging when only partial and noisy observations of the signal
are available, where current methods fail to handle uncertainty appropriately.
In this context, we propose a joint probabilistic model for signals,
observations and spectra, where SE is addressed as an exact inference problem.
Assuming a Gaussian process prior over the signal, we apply Bayes' rule to find
the analytic posterior distribution of the spectrum given a set of
observations. Besides its expressiveness and natural account of spectral
uncertainty, the proposed model also provides a functional-form representation
of the power spectral density, which can be optimised efficiently. Comparison
with previous approaches, in particular against Lomb-Scargle, is addressed
theoretically and also experimentally in three different scenarios. Code and
demo available at https://github.com/GAMES-UChile/BayesianSpectralEstimation. | [
"stat.ML",
"cs.LG",
"eess.SP"
]
|
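The abstract above derives a closed-form posterior over the spectrum, which
is not reproduced here; the sketch below only illustrates the Gaussian
process signal model on irregular, noisy samples and a naive spectrum
read-out from the posterior mean, under assumed RBF-kernel hyperparameters.

```python
import numpy as np

def gp_posterior_mean(t_obs, y_obs, t_grid, ell=0.2, sigma_n=0.1):
    """GP posterior mean with an RBF kernel -- a minimal stand-in for the
    Gaussian process prior over the signal (the paper's analytic posterior
    over the spectrum itself is not reproduced)."""
    def k(a, b):
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)
    K = k(t_obs, t_obs) + sigma_n ** 2 * np.eye(len(t_obs))
    return k(t_grid, t_obs) @ np.linalg.solve(K, y_obs)

# Irregularly sampled noisy sinusoid.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 10, 60))
y = np.sin(2 * np.pi * 1.5 * t) + 0.1 * rng.standard_normal(60)

grid = np.linspace(0, 10, 1024)
mean = gp_posterior_mean(t, y, grid)
spectrum = np.abs(np.fft.rfft(mean)) ** 2          # spectrum of posterior mean
freqs = np.fft.rfftfreq(len(grid), d=grid[1] - grid[0])
print(freqs[spectrum.argmax()])                    # a peak near 1.5 Hz is expected
```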
Object detection for robot guidance is a crucial mission for autonomous
robots, and it has attracted extensive attention from researchers. However,
the changing viewpoint caused by robot movement and the limited available data
hinder research in this area. To address these matters, we propose a new
vision system for robots, the model adaptation object detection system.
Instead of using a single network to solve all problems, we make use of
different object detection neural networks to guide the robot in accordance
with various situations, with the help of a meta neural network that allocates
the object detection neural networks.
Furthermore, taking advantage of transfer learning technology and depthwise
separable convolutions, our model is easy to train and can address small
dataset problems. | [
"cs.CV",
"cs.RO"
]
|
Depth map super-resolution is a task with high practical application
requirements in the industry. Existing color-guided depth map super-resolution
methods usually necessitate an extra branch to extract high-frequency detail
information from RGB image to guide the low-resolution depth map
reconstruction. However, because there are still some differences between the
two modalities, direct information transmission in the feature dimension or
edge map dimension cannot achieve satisfactory results, and may even trigger
texture copying in areas where the structures of the RGB-D pair are
inconsistent. Inspired by multi-task learning, we propose a joint learning
network of depth map super-resolution (DSR) and monocular depth estimation
(MDE) without introducing additional supervision labels. For the interaction of
two subnetworks, we adopt a differentiated guidance strategy and design two
bridges correspondingly. One is the high-frequency attention bridge (HABdg)
designed for the feature encoding process, which learns the high-frequency
information of the MDE task to guide the DSR task. The other is the content
guidance bridge (CGBdg) designed for the depth map reconstruction process,
which provides the content guidance learned from DSR task for MDE task. The
entire network architecture is highly portable and can provide a paradigm for
associating the DSR and MDE tasks. Extensive experiments on benchmark datasets
demonstrate that our method achieves competitive performance. Our code and
models are available at https://rmcong.github.io/proj_BridgeNet.html. | [
"cs.CV"
]
|
Factorized layers--operations parameterized by products of two or more
matrices--occur in a variety of deep learning contexts, including compressed
model training, certain types of knowledge distillation, and multi-head
self-attention architectures. We study how to initialize and regularize deep
nets containing such layers, examining two simple, understudied schemes,
spectral initialization and Frobenius decay, for improving their performance.
The guiding insight is to design optimization routines for these networks that
are as close as possible to that of their well-tuned, non-decomposed
counterparts; we back this intuition with an analysis of how the initialization
and regularization schemes impact training with gradient descent, drawing on
modern attempts to understand the interplay of weight-decay and
batch-normalization. Empirically, we highlight the benefits of spectral
initialization and Frobenius decay across a variety of settings. In model
compression, we show that they enable low-rank methods to significantly
outperform both unstructured sparsity and tensor methods on the task of
training low-memory residual networks; analogs of the schemes also improve the
performance of tensor decomposition techniques. For knowledge distillation,
Frobenius decay enables a simple, overcomplete baseline that yields a compact
model from over-parameterized training without requiring retraining with or
pruning a teacher network. Finally, we show how both schemes applied to
multi-head attention lead to improved performance on both translation and
unsupervised pre-training. | [
"stat.ML",
"cs.AI",
"cs.CL",
"cs.CV",
"cs.LG"
]
|
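A sketch of the two schemes named in the abstract above, under the assumption
that spectral initialization means a truncated SVD of a conventionally
initialized weight and that Frobenius decay penalizes the norm of the
recomposed product; the paper's exact scaling conventions may differ.

```python
import torch

def spectral_init(W, rank):
    """Initialize a factorization U V^T of a (conventionally initialized)
    weight W via its truncated SVD -- the 'spectral initialization' scheme
    named in the abstract (a sketch, not the paper's code)."""
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    s = S[:rank].sqrt()
    return U[:, :rank] * s, Vh[:rank, :].T * s     # W ≈ U @ V.T

def frobenius_decay(U, V, lam=1e-4):
    """Penalize ||U V^T||_F^2 rather than ||U||_F^2 + ||V||_F^2, so the
    factorized layer is regularized like its non-decomposed counterpart."""
    return lam * (U @ V.T).pow(2).sum()

W = torch.randn(256, 128) * 0.02
U, V = spectral_init(W, rank=32)
print(torch.dist(W, U @ V.T))                      # truncation error
frobenius_decay(U.requires_grad_(), V.requires_grad_()).backward()
```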
Our goal is to recognize material categories using images and geometry
information. In many applications, such as construction management, coarse
geometry information is available. We investigate how 3D geometry (surface
normals, camera intrinsic and extrinsic parameters) can be used with 2D
features (texture and color) to improve material classification. We introduce a
new dataset, GeoMat, which is the first to provide both image and geometry data
in the form of: (i) training and testing patches that were extracted at
different scales and perspectives from real world examples of each material
category, and (ii) a large scale construction site scene that includes 160
images and over 800,000 hand labeled 3D points. Our results show that using 2D
and 3D features both jointly and independently to model materials improves
classification accuracy across multiple scales and viewing directions for both
material patches and images of a large scale construction site scene. | [
"cs.CV"
]
|
Time series and signals are attracting more attention across statistics,
machine learning and pattern recognition, as they appear widely in industry,
especially in sensor- and IoT-related research and applications; however, few
advances have been achieved in effective time series visual analytics and
interaction, due to their temporal dimensionality and complex dynamics.
Inspired by recent efforts on using network metrics to characterize time
series for classification, we present an approach to visualize time series as
complex networks based on the first-order Markov process in their temporal
ordering. In contrast to classical bar charts, line plots and other
statistics-based graphs, our approach delivers a more intuitive visualization that better preserves
both the temporal dependency and frequency structures. It provides a natural
inverse operation to map the graph back to raw signals, making it possible to
use graph statistics to characterize time series for better visual exploration
and statistical analysis. Our experimental results suggest the effectiveness on
various tasks such as pattern discovery and classification on both synthetic
and the real time series and sensor data. | [
"cs.LG",
"cs.HC"
]
|
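A minimal sketch of the time-series-to-network construction described above:
amplitude bins become nodes and first-order Markov transition counts become
weighted edges, with a sampling-based approximate inverse. The uniform
binning is an assumption; the paper may discretize differently.

```python
import numpy as np

def series_to_markov_graph(x, n_states=16):
    """Map a univariate series to a weighted directed graph whose nodes are
    amplitude bins and whose edge weights are row-normalized first-order
    Markov transition counts."""
    edges = np.linspace(x.min(), x.max(), n_states + 1)
    states = np.clip(np.digitize(x, edges) - 1, 0, n_states - 1)
    A = np.zeros((n_states, n_states))
    for s, t in zip(states[:-1], states[1:]):
        A[s, t] += 1
    return A / np.maximum(A.sum(axis=1, keepdims=True), 1)

def graph_to_series(A, start, n_steps, rng):
    """Approximate inverse: sample a state path from the transition matrix
    (recovers the frequency structure, not the exact raw signal)."""
    path = [start]
    for _ in range(n_steps - 1):
        p = A[path[-1]]
        if p.sum() == 0:                       # unseen state: uniform fallback
            p = np.full(len(A), 1 / len(A))
        path.append(rng.choice(len(A), p=p))
    return np.array(path)

rng = np.random.default_rng(0)
x = np.sin(np.linspace(0, 20 * np.pi, 2000)) + 0.05 * rng.standard_normal(2000)
A = series_to_markov_graph(x)
resampled = graph_to_series(A, start=8, n_steps=200, rng=rng)
```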
All well-known machine learning algorithms, comprising both supervised and
semi-supervised learning, work well only under a common assumption: the training
and test data follow the same distribution. When the distribution changes, most
statistical models must be reconstructed from newly collected data, which for
some applications can be costly or impossible to obtain. Therefore, it has
become necessary to develop approaches that reduce the need and the effort to
obtain new labeled samples by exploiting data that are available in related
areas, and using these further across similar fields. This has given rise to a
new machine learning framework known as transfer learning: a learning setting
inspired by the capability of a human being to extrapolate knowledge across
tasks to learn more efficiently. Despite a large amount of different transfer
learning scenarios, the main objective of this survey is to provide an overview
of the state-of-the-art theoretical results in a specific, and arguably the
most popular, sub-field of transfer learning, called domain adaptation. In this
sub-field, the data distribution is assumed to change across the training and
the test data, while the learning task remains the same. We provide a first
up-to-date description of existing results related to domain adaptation problem
that cover learning bounds based on different statistical learning frameworks. | [
"cs.LG",
"stat.ML"
]
|
Vision transformers (ViTs) process input images as sequences of patches via
self-attention; a radically different architecture than convolutional neural
networks (CNNs). This makes it interesting to study the adversarial feature
space of ViT models and their transferability. In particular, we observe that
adversarial patterns found via conventional adversarial attacks show very low
black-box transferability even for large ViT models. However, we show that this
phenomenon is only due to the sub-optimal attack procedures that do not
leverage the true representation potential of ViTs. A deep ViT is composed of
multiple blocks, with a consistent architecture comprising self-attention
and feed-forward layers, where each block is capable of independently producing
a class token. Formulating an attack using only the last class token
(conventional approach) does not directly leverage the discriminative
information stored in the earlier tokens, leading to poor adversarial
transferability of ViTs. Using the compositional nature of ViT models, we
enhance the transferability of existing attacks by introducing two novel
strategies specific to the architecture of ViT models. (i) Self-Ensemble: We
propose a method to find multiple discriminative pathways by dissecting a
single ViT model into an ensemble of networks. This allows explicitly utilizing
class-specific information at each ViT block. (ii) Token Refinement: We then
propose to refine the tokens to further enhance the discriminative capacity at
each block of ViT. Our token refinement systematically combines the class
tokens with structural information preserved within the patch tokens. An
adversarial attack, when applied to such refined tokens within the ensemble of
classifiers found in a single vision transformer, has significantly higher
transferability. | [
"cs.CV",
"cs.AI",
"cs.LG"
]
|
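A toy sketch of the Self-Ensemble idea in the abstract above: every block's
class token feeds a shared head, and an attack step is computed on the
averaged logits. Standard PyTorch encoder layers stand in for a real ViT, and
the Token Refinement step is omitted.

```python
import torch

def self_ensemble_logits(blocks, head, tokens):
    """Collect a class prediction from every block's class token and share
    one head across them (a minimal sketch of 'Self-Ensemble').

    blocks: list of transformer blocks, each mapping (B, N, D) -> (B, N, D)
    head:   shared classifier applied to the class token (position 0)
    """
    logits = []
    for blk in blocks:
        tokens = blk(tokens)
        logits.append(head(tokens[:, 0]))       # class token after this block
    return torch.stack(logits).mean(dim=0)      # ensemble over blocks

# Toy instantiation with standard PyTorch layers (stand-ins for a real ViT).
D, n_cls = 64, 10
blocks = [torch.nn.TransformerEncoderLayer(D, nhead=4, batch_first=True)
          for _ in range(4)]
head = torch.nn.Linear(D, n_cls)
x = torch.randn(2, 17, D, requires_grad=True)   # 1 class token + 16 patches
loss = torch.nn.functional.cross_entropy(
    self_ensemble_logits(blocks, head, x), torch.tensor([3, 7]))
loss.backward()
x_adv = x + 8 / 255 * x.grad.sign()             # one FGSM step on the ensemble
```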
The Transformer architecture has become a dominant choice in many domains,
such as natural language processing and computer vision. Yet, it has not
achieved competitive performance on popular leaderboards of graph-level
prediction compared to mainstream GNN variants. Therefore, it remains a mystery
how Transformers could perform well for graph representation learning. In this
paper, we solve this mystery by presenting Graphormer, which is built upon the
standard Transformer architecture, and could attain excellent results on a
broad range of graph representation learning tasks, especially on the recent
OGB Large-Scale Challenge. Our key insight for utilizing Transformers on
graphs is the necessity of effectively encoding the structural information of a
graph into the model. To this end, we propose several simple yet effective
structural encoding methods to help Graphormer better model graph-structured
data. Besides, we mathematically characterize the expressive power of
Graphormer and exhibit that with our ways of encoding the structural
information of graphs, many popular GNN variants could be covered as the
special cases of Graphormer. | [
"cs.LG",
"cs.AI"
]
|
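One of Graphormer's structural encodings can be sketched compactly: a
learnable additive attention bias indexed by the shortest-path distance
between node pairs. The clamping and unreachable-pair handling below are
simplifying assumptions.

```python
import numpy as np

def spatial_encoding_bias(adj, bias_table):
    """Shortest-path-distance attention bias: attention(i, j) receives a
    learnable scalar indexed by the hop distance between nodes i and j
    (Floyd-Warshall is used here for brevity).

    adj:        (N, N) 0/1 adjacency matrix
    bias_table: (max_dist + 1,) learnable scalars; the last entry is reserved
                for unreachable pairs (a simplification of the paper)."""
    N = len(adj)
    dist = np.where(adj > 0, 1.0, np.inf)
    np.fill_diagonal(dist, 0.0)
    for k in range(N):                            # Floyd-Warshall
        dist = np.minimum(dist, dist[:, [k]] + dist[[k], :])
    idx = np.where(np.isinf(dist), -1, np.minimum(dist, len(bias_table) - 2))
    return bias_table[idx.astype(int)]            # (N, N) additive bias

adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]])
table = np.random.randn(6)                        # would be learned in practice
bias = spatial_encoding_bias(adj, table)          # added to attention scores
```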
Feature representation in the form of spatio-spectral decomposition is one of
the robust techniques adopted in automatic handwritten character recognition
systems. In this regard, we propose a new image representation approach for
unconstrained handwritten alphanumeric characters using sparse concept coded
Tetrolets. Tetrolets, which do not use fixed dyadic square blocks for spectral
decomposition like conventional wavelets, preserve the localized variations in
handwriting by adopting tetrominoes that capture the shape geometry. The
sparse concept coding of the low-entropy Tetrolet representation is
found to extract the important hidden information (concept) for superior
pattern discrimination. Large scale experimentation using ten databases in six
different scripts (Bangla, Devanagari, Odia, English, Arabic and Telugu) has
been performed. The proposed feature representation along with standard
classifiers such as random forest, support vector machine (SVM), nearest
neighbor and modified quadratic discriminant function (MQDF) is found to
achieve state-of-the-art recognition performance in all the databases, viz.
99.40% (MNIST); 98.72% and 93.24% (IITBBS); 99.38% and 99.22% (ISI Kolkata).
The proposed OCR system is shown to perform better than other sparse based
techniques such as PCA, SparsePCA and SparseLDA, as well as better than
existing transforms (Wavelet, Slantlet and Stockwell). | [
"cs.CV",
"eess.IV"
]
|
We present BoTNet, a conceptually simple yet powerful backbone architecture
that incorporates self-attention for multiple computer vision tasks including
image classification, object detection and instance segmentation. By just
replacing the spatial convolutions with global self-attention in the final
three bottleneck blocks of a ResNet and no other changes, our approach improves
upon the baselines significantly on instance segmentation and object detection
while also reducing the parameters, with minimal overhead in latency. Through
the design of BoTNet, we also point out how ResNet bottleneck blocks with
self-attention can be viewed as Transformer blocks. Without any bells and
whistles, BoTNet achieves 44.4% Mask AP and 49.7% Box AP on the COCO Instance
Segmentation benchmark using the Mask R-CNN framework; surpassing the previous
best published single model and single scale results of ResNeSt evaluated on
the COCO validation set. Finally, we present a simple adaptation of the BoTNet
design for image classification, resulting in models that achieve a strong
performance of 84.7% top-1 accuracy on the ImageNet benchmark while being up to
1.64x faster in compute time than the popular EfficientNet models on TPU-v3
hardware. We hope our simple and effective approach will serve as a strong
baseline for future research in self-attention models for vision. | [
"cs.CV",
"cs.AI",
"cs.LG"
]
|
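A minimal sketch of the BoTNet substitution described above: the 3x3 spatial
convolution inside a ResNet bottleneck is replaced by global multi-head
self-attention. The paper's relative position encodings and exact
normalization placement are omitted.

```python
import torch
from torch import nn

class BoTBlock(nn.Module):
    """A ResNet bottleneck with the 3x3 spatial convolution swapped for
    global multi-head self-attention, as BoTNet does in its final stage
    (a sketch; relative position encodings omitted)."""
    def __init__(self, channels, heads=4):
        super().__init__()
        mid = channels // 4
        self.reduce = nn.Conv2d(channels, mid, 1)
        self.attn = nn.MultiheadAttention(mid, heads, batch_first=True)
        self.expand = nn.Conv2d(mid, channels, 1)
        self.norm = nn.BatchNorm2d(channels)

    def forward(self, x):
        b, c, h, w = x.shape
        y = self.reduce(x)
        seq = y.flatten(2).transpose(1, 2)            # (B, H*W, mid)
        y, _ = self.attn(seq, seq, seq)               # global self-attention
        y = y.transpose(1, 2).reshape(b, -1, h, w)
        return torch.relu(self.norm(self.expand(y)) + x)  # residual connection

x = torch.randn(2, 256, 14, 14)
print(BoTBlock(256)(x).shape)                         # torch.Size([2, 256, 14, 14])
```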
As camera-based documents are increasingly used, the rectification of
distorted document images becomes a need to improve the recognition
performance. In this paper, we propose a novel framework for both rectifying
distorted document image and removing background finely, by estimating
pixel-wise displacements using a fully convolutional network (FCN). The
document image is rectified by transformation according to the displacements of
pixels. The FCN is trained by regressing displacements of synthesized distorted
documents, and to control the smoothness of displacements, we propose a Local
Smooth Constraint (LSC) in regularization. Our approach is easy to implement
and consumes moderate computing resources. Experiments show that our approach
can dewarp document images effectively under various geometric distortions, and
has achieved the state-of-the-art performance in terms of local details and
overall effect. | [
"cs.CV"
]
|
A hallmark of an AI agent is to mimic human beings to understand and interact
with others. In this paper, we propose a collaborative multi-agent
reinforcement learning algorithm to learn a \emph{joint} policy through the
interactions over agents. To make a joint decision over the group, each agent
makes an initial decision and tells its policy to its neighbors. Then each
agent modifies its own policy properly based on received messages and spreads
out its plan. As this intention propagation procedure goes on, we prove that it
converges to a mean-field approximation of the joint policy with the framework
of neural embedded probabilistic inference. We evaluate our algorithm on
several large scale challenging tasks and demonstrate that it outperforms
previous state-of-the-art methods. | [
"cs.LG",
"cs.MA",
"stat.ML"
]
|
Long-tailed learning has attracted much attention recently, with the goal of
improving generalisation for tail classes. Most existing works use supervised
learning without considering the prevailing noise in the training dataset. To
move long-tailed learning towards more realistic scenarios, this work
investigates the label noise problem under long-tailed label distribution. We
first observe the negative impact of noisy labels on the performance of
existing methods, revealing the intrinsic challenges of this problem. As the
most commonly used approach to cope with noisy labels in previous literature,
we then find that the small-loss trick fails under long-tailed label
distribution. The reason is that deep neural networks cannot distinguish
correctly-labeled and mislabeled examples on tail classes. To overcome this
limitation, we establish a new prototypical noise detection method by designing
a distance-based metric that is resistant to label noise. Based on the above
findings, we propose a robust framework, \algo, that realizes noise detection
for long-tailed learning, followed by soft pseudo-labeling via both label
smoothing and diverse label guessing. Moreover, our framework can naturally
leverage semi-supervised learning algorithms to further improve the
generalisation. Extensive experiments on benchmark and real-world datasets
demonstrate the superiority of our methods over existing baselines. In
particular, our method outperforms DivideMix by 3% in test accuracy. Source
code will be released soon. | [
"cs.LG"
]
|
Object detection is a fundamental task in computer vision, requiring large
annotated datasets that are difficult to collect, as annotators need to label
objects and their bounding boxes. Thus, it is a significant challenge to use
cheaper forms of supervision effectively. Recent work has begun to explore
image captions as a source for weak supervision, but to date, in the context of
object detection, captions have only been used to infer the categories of the
objects in the image. In this work, we argue that captions contain much richer
information about the image, including attributes of objects and their
relations. Namely, the text represents a scene of the image, as described
recently in the literature. We present a method that uses the attributes in
this "textual scene graph" to train object detectors. We empirically
demonstrate that the resulting model achieves state-of-the-art results on
several challenging object detection datasets, outperforming recent approaches. | [
"cs.CV"
]
|
Anomaly detection in database management systems (DBMSs) is difficult because
of the increasing number of statistic (stat) and event metrics in big data systems.
In this paper, I propose an automatic DBMS diagnosis system that detects
anomaly periods with abnormal DB stat metrics and finds causal events in the
periods. Reconstruction error from deep autoencoder and statistical process
control approach are applied to detect time period with anomalies. Related
events are found using time series similarity measures between events and
abnormal stat metrics. After training deep autoencoder with DBMS metric data,
efficacy of anomaly detection is investigated from other DBMSs containing
anomalies. Experiment results show effectiveness of proposed model, especially,
batch temporal normalization layer. Proposed model is used for publishing
automatic DBMS diagnosis reports in order to determine DBMS configuration and
SQL tuning. | [
"stat.ML",
"cs.LG",
"stat.AP"
]
|
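A compact sketch of the pipeline in the abstract above: train an autoencoder
on normal stat metrics, then flag periods whose reconstruction error exceeds
a statistical-process-control limit. The architecture, the 3-sigma rule, and
the synthetic data are illustrative assumptions; the paper's batch temporal
normalization layer is not modeled.

```python
import torch
from torch import nn

torch.manual_seed(0)
normal = torch.randn(2000, 20)                      # 20 stat metrics, normal ops
ae = nn.Sequential(nn.Linear(20, 8), nn.ReLU(), nn.Linear(8, 20))
opt = torch.optim.Adam(ae.parameters(), lr=1e-2)
for _ in range(200):                                # train on normal data only
    opt.zero_grad()
    loss = nn.functional.mse_loss(ae(normal), normal)
    loss.backward()
    opt.step()

with torch.no_grad():
    err = (ae(normal) - normal).pow(2).mean(dim=1)
    limit = err.mean() + 3 * err.std()              # SPC-style control limit
    anomaly = torch.randn(5, 20) * 4                # out-of-distribution period
    flags = (ae(anomaly) - anomaly).pow(2).mean(dim=1) > limit
print(flags)                                        # expected: mostly True
```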
We consider a category-level perception problem, where one is given 3D sensor
data picturing an object of a given category (e.g. a car), and has to
reconstruct the pose and shape of the object despite intra-class variability
(i.e. different car models have different shapes). We consider an active shape
model, where -- for an object category -- we are given a library of potential
CAD models describing objects in that category, and we adopt a standard
formulation where pose and shape estimation are formulated as a non-convex
optimization. Our first contribution is to provide the first certifiably
optimal solver for pose and shape estimation. In particular, we show that
rotation estimation can be decoupled from the estimation of the object
translation and shape, and we demonstrate that (i) the optimal object rotation
can be computed via a tight (small-size) semidefinite relaxation, and (ii) the
translation and shape parameters can be computed in closed-form given the
rotation. Our second contribution is to add an outlier rejection layer to our
solver, hence making it robust to a large number of misdetections. Towards this
goal, we wrap our optimal solver in a robust estimation scheme based on
graduated non-convexity. To further enhance robustness to outliers, we also
develop the first graph-theoretic formulation to prune outliers in
category-level perception, which removes outliers via convex hull and maximum
clique computations; the resulting approach is robust to 70%-90% outliers. Our
third contribution is an extensive experimental evaluation. Besides providing
an ablation study on a simulated dataset and on the PASCAL3D+ dataset, we
combine our solver with a deep-learned keypoint detector, and show that the
resulting approach improves over the state of the art in vehicle pose
estimation in the ApolloScape datasets. | [
"cs.CV",
"cs.RO"
]
|
Performing machine learning on structured data is complicated by the fact
that such data does not have vectorial form. Therefore, multiple approaches
have emerged to construct vectorial representations of structured data, from
kernel and distance approaches to recurrent, recursive, and convolutional
neural networks. Recent years have seen heightened attention in this demanding
field of research and several new approaches have emerged, such as metric
learning on structured data, graph convolutional neural networks, and recurrent
decoder networks for structured data. In this contribution, we provide a
high-level overview of the state-of-the-art in representation learning and
embeddings for structured data across a wide range of machine learning fields. | [
"cs.LG",
"stat.ML"
]
|
Deep neural networks are transforming fields ranging from computer vision to
computational medicine, and we recently extended their application to the field
of phase-change heat transfer by introducing theory-trained neural networks
(TTNs) for a solidification problem \cite{TTN}. Here, we present general,
in-depth, and empirical insights into theory-training networks for learning the
solution of highly coupled differential equations. We analyze the deteriorating
effects of the oscillating loss on the ability of a network to satisfy the
equations at the training data points, measured by the final training loss, and
on the accuracy of the inferred solution. We introduce a theory-training
technique that, by leveraging regularization, eliminates those oscillations,
decreases the final training loss, and improves the accuracy of the inferred
solution, with no additional computational cost. Then, we present guidelines
that allow a systematic search for the network that has the optimal training
time and inference accuracy for a given set of equations; following these
guidelines can reduce the number of tedious training iterations in that search.
Finally, a comparison between theory-training and its rival, the conventional
method of solving differential equations using discretization, attests that
the advantages of theory-training are not necessarily limited to
high-dimensional sets of equations. The comparison also reveals a limitation of the current
theory-training framework that may limit its application in domains where
extreme accuracies are necessary. | [
"cs.LG",
"physics.app-ph"
]
|
Many current deep learning approaches make extensive use of backbone networks
pre-trained on large datasets like ImageNet, which are then fine-tuned to
perform a certain task. In remote sensing, the lack of comparable large
annotated datasets and the wide diversity of sensing platforms impede similar
developments. In order to contribute towards the availability of pre-trained
backbone networks in remote sensing, we devise a self-supervised approach for
pre-training deep neural networks. By exploiting the correspondence between
geo-tagged audio recordings and remote sensing imagery, this is done in a
completely label-free manner, eliminating the need for laborious manual
annotation. For this purpose, we introduce the SoundingEarth dataset, which
consists of co-located aerial imagery and audio samples all around the world.
Using this dataset, we then pre-train ResNet models to map samples from both
modalities into a common embedding space, which encourages the models to
understand key properties of a scene that influence both visual and auditory
appearance. To validate the usefulness of the proposed approach, we evaluate
the transfer learning performance of the resulting pre-trained weights against
weights obtained through other means. By fine-tuning the models on a number of
commonly used remote sensing datasets, we show that our approach outperforms
existing pre-training strategies for remote sensing imagery. The dataset, code
and pre-trained model weights will be available at
https://github.com/khdlr/SoundingEarth. | [
"cs.CV"
]
|
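A sketch of the cross-modal pre-training objective implied by the
SoundingEarth abstract above, with linear stubs standing in for the ResNet
encoders and a symmetric InfoNCE loss as an assumed instantiation of the
shared embedding objective.

```python
import torch
from torch import nn

img_enc, audio_enc = nn.Linear(512, 128), nn.Linear(256, 128)  # encoder stubs
img = torch.randn(16, 512)                       # image features (stand-in)
aud = torch.randn(16, 256)                       # audio features (stand-in)

zi = nn.functional.normalize(img_enc(img), dim=1)
za = nn.functional.normalize(audio_enc(aud), dim=1)
logits = zi @ za.T / 0.07                        # cosine similarity / temperature
labels = torch.arange(16)                        # i-th image matches i-th clip
loss = (nn.functional.cross_entropy(logits, labels) +
        nn.functional.cross_entropy(logits.T, labels)) / 2
loss.backward()
```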
Analyzing the handwriting generation process is an important issue and has
been tackled by various generation models, such as kinematics based models and
stochastic models. In this study, we use a reinforcement learning (RL)
framework to realize handwriting generation with a careful future planning
ability. In fact, the handwriting process of human beings is also supported by
their future planning ability; for example, the ability is necessary to
generate a closed trajectory like '0' because any shortsighted model, such as a
Markovian model, cannot generate it. For the algorithm, we employ generative
adversarial imitation learning (GAIL). Typical RL algorithms require the manual
definition of the reward function, which is crucial for controlling the
generation process. In contrast, GAIL trains the reward function along with the
other modules of the framework. In other words, through GAIL, we can understand
the reward of the handwriting generation process from handwriting examples. Our
experimental results qualitatively and quantitatively show that the learned
reward catches the trends in handwriting generation and thus GAIL is well
suited for the acquisition of handwriting behavior. | [
"cs.CV"
]
|
While most deep learning architectures are built on convolution, alternative
foundations like morphology are being explored for purposes like
interpretability and its connection to the analysis and processing of geometric
structures. The morphological hit-or-miss operation has the advantage that it
takes into account both foreground and background information when evaluating
target shape in an image. Herein, we identify limitations in existing
hit-or-miss neural definitions and we formulate an optimization problem to
learn the transform relative to deeper architectures. To this end, we model the
semantically important condition that the intersection of the hit and miss
structuring elements (SEs) should be empty and we present a way to express
Don't Care (DNC), which is important for denoting regions of an SE that are not
relevant to detecting a target pattern. Our analysis shows that convolution, in
fact, acts like a hit-miss transform through semantic interpretation of its
filter differences. On these premises, we introduce an extension that
outperforms conventional convolution on benchmark data. Quantitative
experiments are provided on synthetic and benchmark data, showing that the
direct encoding hit-or-miss transform provides better interpretability on
learned shapes consistent with objects whereas our morphologically inspired
generalized convolution yields higher classification accuracy. Last,
qualitative hit and miss filter visualizations are provided for a single
morphological layer.
"cs.CV"
]
|
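For reference, the classical binary hit-or-miss transform that the abstract
above generalizes, using SciPy's implementation; note the hit and miss
structuring elements have empty intersection, matching the condition the
paper enforces, and unconstrained positions act as Don't Care.

```python
import numpy as np
from scipy import ndimage

image = np.zeros((7, 7), dtype=int)
image[1, 1] = 1                            # an isolated pixel
image[3:6, 3:6] = 1                        # a solid block (should not fire)

hit = np.array([[0, 0, 0],
                [0, 1, 0],
                [0, 0, 0]])                # the pixel itself must be foreground
miss = np.array([[1, 1, 1],
                 [1, 0, 1],
                 [1, 1, 1]])               # all 8 neighbours must be background

out = ndimage.binary_hit_or_miss(image, structure1=hit, structure2=miss)
print(np.argwhere(out))                    # -> [[1 1]]: only the isolated pixel
```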
In-vehicle human object identification plays an important role in vision-based
automated vehicle driving systems, where objects such as pedestrians and
vehicles on roads or streets are the primary targets to protect from
driverless vehicles. A challenge is the difficulty of detecting objects while
moving under wild conditions, where illumination and image quality can vary
drastically. In this work, to address this challenge, we exploit Deep
Convolutional Generative Adversarial Networks (DCGANs) with a Single Shot
Detector (SSD) to handle wild conditions. In our work, a GAN was trained with
low-quality images to handle the challenges arising from wild conditions in
smart cities, while a cascaded SSD is employed as the object detector to work
with the GAN. We tested our approach under wild conditions using taxi driver
videos on London streets in both daylight and at night, and the tests on
in-vehicle videos demonstrate that this strategy can achieve a drastically
better detection rate under wild conditions. | [
"cs.CV",
"cs.LG",
"eess.IV"
]
|
In this paper, we study the problem of outlier arm detection in multi-armed
bandit settings, which finds plenty of applications in many high-impact domains
such as finance, healthcare, and online advertising. For this problem, a
learner aims to identify the arms whose expected rewards deviate significantly
from most of the other arms. Different from existing work, we target the
generic outlier arms or outlier arm groups whose expected rewards can be
larger, smaller, or even in between those of normal arms. To this end, we start
by providing a comprehensive definition of such generic outlier arms and
outlier arm groups. Then we propose a novel pulling algorithm named GOLD to
identify such generic outlier arms. It builds a real-time neighborhood graph
based on upper confidence bounds and catches the behavior pattern of outliers
from normal arms. We also analyze its performance from various aspects. In the
experiments conducted on both synthetic and real-world data sets, the proposed
algorithm achieves 98% accuracy while saving 83% of the exploration cost on average
compared with state-of-the-art techniques. | [
"cs.LG",
"cs.AI",
"stat.ML"
]
|
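A simplified round of UCB-guided outlier-arm screening in the spirit of the
abstract above: pull arms by UCB, then flag arms whose confidence interval
lies outside a robust band around the remaining means. The neighborhood-graph
machinery of the actual GOLD algorithm is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
true_means = np.array([0.5, 0.52, 0.48, 0.51, 0.9])   # arm 4 is the outlier
pulls, rewards = np.zeros(5), np.zeros(5)

for t in range(1, 3000):
    ucb = (rewards / np.maximum(pulls, 1)
           + np.sqrt(2 * np.log(t) / np.maximum(pulls, 1)))
    a = int(np.argmax(ucb)) if t > 5 else t - 1       # pull each arm once first
    rewards[a] += rng.normal(true_means[a], 0.1)
    pulls[a] += 1

means = rewards / pulls
radius = np.sqrt(2 * np.log(3000) / pulls)            # confidence radius
med = np.median(means)
band = 3 * np.median(np.abs(means - med))             # robust spread estimate
outliers = (means - radius > med + band) | (means + radius < med - band)
print(outliers)                                        # arm 4 flagged
```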
Graph neural networks (GNNs) are designed for semi-supervised node
classification on graphs where only a small subset of nodes have class labels.
However, under extreme cases when very few labels are available (e.g., 1
labeled node per class), GNNs suffer from severe result quality degradation.
Several existing studies make an initial effort to ease this situation, but are
still far from satisfactory.
In this paper, on few-labeled graph data, we propose an effective framework
ABN that is readily applicable to both shallow and deep GNN architectures and
significantly boosts classification accuracy. In particular, on a benchmark
dataset Cora with only 1 labeled node per class, while the classic graph
convolutional network (GCN) only has 44.6% accuracy, an immediate instantiation
of ABN over GCN achieves 62.5% accuracy; when applied to a deep architecture
DAGNN, ABN improves accuracy from 59.8% to 66.4%, which is state of the art.
ABN obtains superior performance through three main algorithmic designs.
First, it selects high-quality unlabeled nodes via an adaptive pseudo labeling
technique, so as to adaptively enhance the training process of GNNs. Second,
ABN balances the labels of the selected nodes on real-world skewed graph data
by pseudo label balancing. Finally, a negative sampling regularizer is designed
for ABN to further utilize the unlabeled nodes. The effectiveness of the three
techniques in ABN is well-validated by both theoretical and empirical analysis.
Extensive experiments, comparing 12 existing approaches on 4 benchmark
datasets, demonstrate that ABN achieves state-of-the-art performance. | [
"cs.LG",
"stat.ML"
]
|
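A sketch of ABN's first two ingredients as described above: select
high-confidence unlabeled nodes as pseudo-labeled data, then balance the
selection across classes. The confidence threshold and per-class cap are
illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(3), size=100)      # GNN softmax outputs (stand-in)
conf, pseudo = probs.max(axis=1), probs.argmax(axis=1)

selected = np.where(conf > 0.7)[0]               # confidence-gated pseudo labels
per_class = min(np.bincount(pseudo[selected], minlength=3).max(), 10)
balanced = np.concatenate([
    selected[pseudo[selected] == c][:per_class] for c in range(3)
])                                                # label-balanced pseudo set
```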
In this paper, we propose a novel convolutional neural network (CNN) for
image denoising, which uses exponential linear unit (ELU) as the activation
function. We investigate its suitability by analyzing ELU's connection with
trainable nonlinear reaction diffusion model (TNRD) and residual denoising. On
the other hand, batch normalization (BN) is indispensable for residual
denoising and convergence purpose. However, direct stacking of BN and ELU
degrades the performance of CNN. To mitigate this issue, we design an
innovative combination of activation layer and normalization layer to exploit
and leverage the ELU network, and discuss the corresponding rationale.
Moreover, inspired by the fact that minimizing total variation (TV) can be
applied to image denoising, we propose a TV regularized L2 loss to evaluate the
training effect during the iterations. Finally, we conduct extensive
experiments, showing that our model outperforms some recent and popular
approaches on Gaussian denoising with specific or randomized noise levels for
both gray and color images. | [
"cs.CV"
]
|
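A minimal sketch of the TV-regularized L2 training loss described above; the
anisotropic TV variant and the weighting are assumptions, as the paper's
exact formulation is not reproduced here.

```python
import torch

def tv_l2_loss(denoised, clean, tv_weight=1e-4):
    """L2 fidelity plus anisotropic total variation on the network output."""
    l2 = (denoised - clean).pow(2).mean()
    tv = (denoised[:, :, 1:, :] - denoised[:, :, :-1, :]).abs().mean() + \
         (denoised[:, :, :, 1:] - denoised[:, :, :, :-1]).abs().mean()
    return l2 + tv_weight * tv

pred = torch.rand(1, 1, 32, 32, requires_grad=True)
target = torch.rand(1, 1, 32, 32)
tv_l2_loss(pred, target).backward()
```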
What did it feel like to walk through a city from the past? In this work, we
describe Nostalgin (Nostalgia Engine), a method that can faithfully reconstruct
cities from historical images. Unlike existing work in city reconstruction, we
focus on the task of reconstructing 3D cities from historical images. Working
with historical image data is substantially more difficult, as there are
significantly fewer buildings available and the details of the camera
parameters which captured the images are unknown. Nostalgin can generate a city
model even if there is only a single image per facade, regardless of viewpoint
or occlusions. To achieve this, our novel architecture combines image
segmentation, rectification, and inpainting. We motivate our design decisions
with experimental analysis of individual components of our pipeline, and show
that we can improve on baselines in both speed and visual realism. We
demonstrate the efficacy of our pipeline by recreating two 1940s Manhattan city
blocks. We aim to deploy Nostalgin as an open source platform where users can
generate immersive historical experiences from their own photos. | [
"cs.CV",
"cs.CG",
"cs.LG"
]
|
Generative Adversarial Networks (GANs) are a powerful framework for deep
generative modeling. Posed as a two-player minimax problem, GANs are typically
trained end-to-end on real-valued data and can be used to train a generator of
high-dimensional and realistic images. However, a major limitation of GANs is
that training relies on passing gradients from the discriminator through the
generator via back-propagation. This makes it fundamentally difficult to train
GANs with discrete data, as generation in this case typically involves a
non-differentiable function. These difficulties extend to the reinforcement
learning setting when the action space is composed of discrete decisions. We
address these issues by reframing the GAN framework so that the generator is no
longer trained using gradients through the discriminator, but is instead
trained using a learned critic in the actor-critic framework with a Temporal
Difference (TD) objective. This is a natural fit for sequence modeling and we
use it to achieve improvements on language modeling tasks over the standard
Teacher-Forcing methods. | [
"stat.ML",
"cs.LG"
]
|
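A one-step toy sketch of the reframing described above: the discrete-token
generator is updated from a learned critic with a TD-style target instead of
back-propagating through the discriminator. All sizes, the single-step
setting, and the stand-in reward are assumptions; the paper works with full
sequences.

```python
import torch
from torch import nn

V, H = 20, 32                                     # vocab size, hidden dim
actor = nn.Linear(H, V)                           # token policy (one step)
critic = nn.Linear(H, V)                          # Q-value per token
disc_reward = lambda tok: (tok % 2 == 0).float()  # stand-in discriminator score

state = torch.randn(8, H)
dist = torch.distributions.Categorical(logits=actor(state))
tok = dist.sample()                               # discrete, non-differentiable

q = critic(state).gather(1, tok[:, None]).squeeze(1)
critic_loss = (q - disc_reward(tok)).pow(2).mean()          # TD-style target
actor_loss = -(dist.log_prob(tok) * q.detach()).mean()      # policy gradient
(critic_loss + actor_loss).backward()
```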
Graph convolutional networks are becoming indispensable for deep learning
from graph-structured data. Most of the existing graph convolutional networks
share two big shortcomings. First, they are essentially low-pass filters, thus
the potentially useful middle- and high-frequency bands of graph signals are
ignored. Second, the bandwidth of existing graph convolutional filters is
fixed. Parameters of a graph convolutional filter only transform the graph
inputs without changing the curvature of a graph convolutional filter function.
In reality, we are uncertain about whether we should retain or cut off the
frequency at a certain point unless we have expert domain knowledge. In this
paper, we propose Automatic Graph Convolutional Networks (AutoGCN) to capture
the full spectrum of graph signals and automatically update the bandwidth of
graph convolutional filters. While it is based on graph spectral theory, our
AutoGCN is also localized in space and has a spatial form. Experimental results
show that AutoGCN achieves significant improvement over baseline methods which
only work as low-pass filters. | [
"cs.LG",
"cs.AI"
]
|
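A frequency-domain sketch of the idea behind AutoGCN as described above: a
filter response g(lambda) with adjustable center and bandwidth evaluated on
the normalized-Laplacian spectrum, so the filter is not confined to low-pass.
In AutoGCN the analogous parameters are learned; no training is shown here.

```python
import numpy as np

def spectral_filter_response(adj, center, width):
    """Evaluate g(lambda) = exp(-((lambda - center) / width)^2) on the
    normalized-Laplacian spectrum and return the filtered graph operator."""
    d = adj.sum(axis=1)
    d_inv = np.where(d > 0, d ** -0.5, 0.0)
    L = np.eye(len(adj)) - d_inv[:, None] * adj * d_inv[None, :]
    lam, U = np.linalg.eigh(L)                 # eigenvalues lie in [0, 2]
    g = np.exp(-((lam - center) / width) ** 2)
    return U @ np.diag(g) @ U.T

adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 0],
                [1, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
H_low = spectral_filter_response(adj, center=0.0, width=1.0)   # low-pass
H_mid = spectral_filter_response(adj, center=1.0, width=0.3)   # mid-band
```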
Suppose an agent is in a (possibly unknown) Markov Decision Process in the
absence of a reward signal, what might we hope that an agent can efficiently
learn to do? This work studies a broad class of objectives that are defined
solely as functions of the state-visitation frequencies that are induced by how
the agent behaves. For example, one natural, intrinsically defined, objective
problem is for the agent to learn a policy which induces a distribution over
state space that is as uniform as possible, which can be measured in an
entropic sense. We provide an efficient algorithm to optimize such
intrinsically defined objectives, when given access to a black box planning
oracle (which is robust to function approximation). Furthermore, when
restricted to the tabular setting where we have sample based access to the MDP,
our proposed algorithm is provably efficient, both in terms of its sample and
computational complexities. Key to our algorithmic methodology is utilizing the
conditional gradient method (a.k.a. the Frank-Wolfe algorithm) which utilizes
an approximate MDP solver. | [
"cs.LG",
"cs.AI",
"stat.ML"
]
|
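A tabular sketch of the Frank-Wolfe scheme described above for the
max-entropy objective: each iteration treats the entropy gradient as a reward
and queries a planning oracle (exact value iteration on a toy known MDP).
Mixing policy tables directly is a simplification; the actual algorithm
maintains a mixture over policies.

```python
import numpy as np

rng = np.random.default_rng(0)
S, A, gamma = 5, 2, 0.9
P = rng.dirichlet(np.ones(S), size=(S, A))          # P[s, a] -> next-state dist

def visitation(policy, iters=500):
    d = np.full(S, 1 / S)
    for _ in range(iters):                          # stationary state distribution
        d = sum(d[s] * policy[s, a] * P[s, a] for s in range(S) for a in range(A))
    return d

def best_response(reward):
    V = np.zeros(S)
    for _ in range(200):                            # value-iteration oracle
        Q = reward[:, None] + gamma * P @ V
        V = Q.max(axis=1)
    pi = np.zeros((S, A)); pi[np.arange(S), Q.argmax(axis=1)] = 1
    return pi

mix = np.full((S, A), 1 / A)
for k in range(1, 50):
    d = visitation(mix)
    grad = -np.log(d + 1e-8) - 1                    # gradient of the entropy
    mix = (1 - 2 / (k + 2)) * mix + (2 / (k + 2)) * best_response(grad)
print(visitation(mix))                              # approaches uniform
```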
Graphs are the natural data structure to represent relational and structural
information in many domains. To cover the broad range of graph-data
applications including graph classification as well as graph generation, it is
desirable to have a general and flexible model consisting of an encoder and a
decoder that can handle graph data. Although the representative encoder-decoder
model, Transformer, shows superior performance in various tasks especially of
natural language processing, it is not immediately available for graphs due to
their non-sequential characteristics. To tackle this incompatibility, we
propose GRaph-Aware Transformer (GRAT), the first Transformer-based model which
can encode and decode whole graphs in an end-to-end fashion. GRAT features
a self-attention mechanism adaptive to the edge information and an
auto-regressive decoding mechanism based on the two-path approach consisting of
sub-graph encoding path and node-and-edge generation path for each decoding
step. We empirically evaluated GRAT on multiple setups including encoder-based
tasks such as molecule property predictions on QM9 datasets and
encoder-decoder-based tasks such as molecule graph generation in the organic
molecule synthesis domain. GRAT has shown very promising results including
state-of-the-art performance on 4 regression tasks in QM9 benchmark. | [
"cs.LG",
"cs.CL",
"stat.ML"
]
|
Vision transformer (ViT) has recently shown its strong capability to achieve
results comparable to convolutional neural networks (CNNs) on image
classification. However, vanilla ViT simply inherits the same architecture from
the natural language processing directly, which is often not optimized for
vision applications. Motivated by this, in this paper, we propose a new
architecture that adopts the pyramid structure and employs a novel
regional-to-local attention rather than global self-attention in vision
transformers. More specifically, our model first generates regional tokens and
local tokens from an image with different patch sizes, where each regional
token is associated with a set of local tokens based on the spatial location.
The regional-to-local attention includes two steps: first, the regional
self-attention extracts global information among all regional tokens, and
then the local self-attention exchanges information between one regional
token and the associated local tokens. Therefore, even though local
self-attention confines its scope to a local region, it can still receive
global information. Extensive experiments on three vision tasks, including
image classification, object detection and action recognition, show that our
approach outperforms or is on par with state-of-the-art ViT variants including
many concurrent works. Our source codes and models will be publicly available. | [
"cs.CV"
]
|
Image-based virtual try-on systems for fitting new in-shop clothes onto a
person image have attracted increasing research attention, yet remain
challenging. A desirable pipeline should not only transform the target clothes
into the most fitting shape seamlessly but also preserve well the clothes
identity in the generated image, that is, the key characteristics (e.g.
texture, logo, embroidery) that depict the original clothes. However, previous
image-conditioned generation works fail to meet these critical requirements
towards the plausible virtual try-on performance since they fail to handle
large spatial misalignment between the input image and target clothes. Prior
work explicitly tackled spatial deformation using shape context matching, but
failed to preserve clothing details due to its coarse-to-fine strategy. In this
work, we propose a new fully-learnable Characteristic-Preserving Virtual Try-On
Network(CP-VTON) for addressing all real-world challenges in this task. First,
CP-VTON learns a thin-plate spline transformation for transforming the in-shop
clothes into fitting the body shape of the target person via a new Geometric
Matching Module (GMM) rather than computing correspondences of interest points
as prior works did. Second, to alleviate boundary artifacts of warped clothes
and make the results more realistic, we employ a Try-On Module that learns a
composition mask to integrate the warped clothes and the rendered image to
ensure smoothness. Extensive experiments on a fashion dataset demonstrate our
CP-VTON achieves the state-of-the-art virtual try-on performance both
qualitatively and quantitatively. | [
"cs.CV"
]
|
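The warping family CP-VTON's Geometric Matching Module predicts is a
thin-plate spline; a self-contained numpy sketch of TPS fitting and
application follows, with the control-point correspondences given by hand
rather than regressed by a network.

```python
import numpy as np

def tps_params(src, dst):
    """Solve for thin-plate-spline coefficients mapping control points
    src -> dst (standard TPS linear system; not the paper's code)."""
    def U(r2):
        return np.where(r2 == 0, 0.0, 0.5 * r2 * np.log(r2 + 1e-12))
    n = len(src)
    K = U(((src[:, None] - src[None, :]) ** 2).sum(-1))
    Pm = np.hstack([np.ones((n, 1)), src])
    L = np.block([[K, Pm], [Pm.T, np.zeros((3, 3))]])
    Y = np.vstack([dst, np.zeros((3, 2))])
    return np.linalg.solve(L, Y)

def tps_apply(params, src, pts):
    U = lambda r2: np.where(r2 == 0, 0.0, 0.5 * r2 * np.log(r2 + 1e-12))
    K = U(((pts[:, None] - src[None, :]) ** 2).sum(-1))
    Pm = np.hstack([np.ones((len(pts), 1)), pts])
    return np.hstack([K, Pm]) @ params

src = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [0.5, 0.5]])
dst = src + np.array([[0, 0], [0, 0], [0, 0], [0, 0], [0.15, 0.1]])  # bulge
grid = np.stack(np.meshgrid(np.linspace(0, 1, 8),
                            np.linspace(0, 1, 8)), -1).reshape(-1, 2)
warped = tps_apply(tps_params(src, dst), src, grid)   # warped sampling grid
```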
Lane-level scene annotations provide invaluable data in autonomous vehicles
for trajectory planning in complex environments such as urban areas and cities.
However, obtaining such data is time-consuming and expensive since lane
annotations have to be annotated manually by humans and are as such hard to
scale to large areas. In this work, we propose a novel approach for lane
geometry estimation from bird's-eye-view images. We formulate the problem of
lane shape and lane connections estimation as a graph estimation problem where
lane anchor points are graph nodes and lane segments are graph edges. We train
a graph estimation model on multimodal bird's-eye-view data processed from the
popular NuScenes dataset and its map expansion pack. We furthermore estimate
the direction of the lane connection for each lane segment with a separate
model which results in a directed lane graph. We illustrate the performance of
our LaneGraphNet model on the challenging NuScenes dataset and provide
extensive qualitative and quantitative evaluation. Our model shows promising
performance for most evaluated urban scenes and can serve as a step towards
automated generation of HD lane annotations for autonomous driving. | [
"cs.CV",
"cs.LG",
"cs.RO"
]
|
Reinforcement learning has been applied to human movement through
physiologically-based biomechanical models to add insights into the neural
control of these movements; it is also useful in the design of prosthetics and
robotics. In this paper, we extend the use of reinforcement learning into
controlling an ocular biomechanical system to perform saccades, which is one of
the fastest eye movement systems. We describe an ocular environment and an
agent trained using Deep Deterministic Policy Gradients method to perform
saccades. The agent was able to match the desired eye position with a mean
deviation angle of 3.5±1.25 degrees. The proposed framework is a first step
towards using the capabilities of deep reinforcement learning to enhance our
understanding of ocular biomechanics. | [
"cs.LG",
"cs.AI",
"I.6.5"
]
|
Object-centric representations have recently enabled significant progress in
tackling relational reasoning tasks. By building a strong object-centric
inductive bias into neural architectures, recent efforts have improved
generalization and data efficiency of machine learning algorithms for these
problems. One problem class involving relational reasoning that still remains
under-explored is multi-agent reinforcement learning (MARL). Here we
investigate whether object-centric representations are also beneficial in the
fully cooperative MARL setting. Specifically, we study two ways of
incorporating an agent-centric inductive bias into our RL algorithm: 1.
Introducing an agent-centric attention module with explicit connections across
agents; 2. Adding an agent-centric unsupervised predictive objective (i.e. not
using action labels), to be used as an auxiliary loss for MARL, or as the basis
of a pre-training step. We evaluate these approaches on the Google Research
Football environment as well as DeepMind Lab 2D. Empirically, agent-centric
representation learning leads to the emergence of more complex cooperation
strategies between agents as well as enhanced sample efficiency and
generalization. | [
"cs.LG",
"cs.AI"
]
|
Policy optimization is a core component of reinforcement learning (RL), and
most existing RL methods directly optimize parameters of a policy based on
maximizing the expected total reward, or its surrogate. Though often achieving
encouraging empirical success, its underlying mathematical principle on {\em
policy-distribution} optimization is unclear. We place policy optimization into
the space of probability measures, and interpret it as Wasserstein gradient
flows. On the probability-measure space, under specified circumstances, policy
optimization becomes a convex problem in terms of distribution optimization. To
make optimization feasible, we develop efficient algorithms by numerically
solving the corresponding discrete gradient flows. Our technique is applicable
to several RL settings, and is related to many state-of-the-art
policy-optimization algorithms. Empirical results verify the effectiveness of
our framework, often obtaining better performance compared to related
algorithms. | [
"cs.LG",
"stat.ML"
]
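
For readers unfamiliar with the formalism, the Wasserstein gradient flow of a functional $F$ on probability measures, together with the JKO time-discretisation typically used to solve such flows numerically, reads as follows; identifying $F$ with (a surrogate of) the negative expected total reward is our reading of the setup above, not a formula quoted from the paper.

```latex
\partial_t \mu_t = \nabla \cdot \left( \mu_t \, \nabla \frac{\delta F}{\delta \mu}(\mu_t) \right),
\qquad
\mu_{k+1} = \arg\min_{\mu} \; F(\mu) + \frac{1}{2\tau} W_2^2(\mu, \mu_k).
```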
|
Recent advances in Generative Adversarial Networks (GANs) have shown
impressive results for the task of facial expression synthesis. The most
successful architecture is StarGAN, which conditions the GAN's generation
process on images of a specific domain, namely a set of images of persons
sharing the same expression. While effective, this approach can only generate
a discrete number of expressions, determined by the content of the dataset. To
address this limitation, in this paper we introduce a novel GAN conditioning
scheme based on Action Unit (AU) annotations, which describe in a continuous
manifold the anatomical facial movements defining a human expression. Our
approach allows controlling the magnitude of activation of each AU and
combining several of them. Additionally, we propose a fully unsupervised
strategy to train the model that only requires images annotated with their
activated AUs, and we exploit attention mechanisms that make our network
robust to changing backgrounds and lighting conditions. An extensive
evaluation shows that our approach goes beyond competing conditional
generators both in its capability to synthesize a much wider range of
expressions ruled by anatomically feasible muscle movements, and in its
capacity to deal with images in the wild. | [
"cs.CV"
]
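
One simple way to realise continuous AU conditioning, sketched under the assumption (common in StarGAN-style generators, though not confirmed here) that the AU vector is tiled spatially and concatenated to the input image channels:

```python
import torch
import torch.nn as nn

# Tile the continuous Action Unit activations over the spatial grid and
# concatenate them to the image channels before the generator. The AU
# count (17) and image size are illustrative.
n_aus = 17
image = torch.randn(1, 3, 128, 128)
target_aus = torch.rand(1, n_aus)  # each AU magnitude in [0, 1]

au_maps = target_aus.view(1, n_aus, 1, 1).expand(-1, -1, 128, 128)
generator_input = torch.cat([image, au_maps], dim=1)  # (1, 3 + 17, 128, 128)

first_conv = nn.Conv2d(3 + n_aus, 64, kernel_size=7, padding=3)
features = first_conv(generator_input)
```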
|
Modern machine learning systems based on neural networks have shown great
success in learning complex data patterns while being able to make good
predictions on unseen data points. However, the limited interpretability of
these systems hinders further progress and application to several domains in
the real world. This predicament is exemplified by time-consuming model
selection and the difficulties faced in predictive explainability, especially
in the presence of adversarial examples. In this paper, we take a step towards
a better understanding of neural networks by introducing a local polytope
interpolation method. The proposed deep Non-Negative Kernel (NNK) regression
interpolation framework is non-parametric, theoretically simple and
geometrically intuitive. We demonstrate instance-based explainability for deep
learning models and develop a method to identify models with good
generalization properties using leave-one-out estimation. Finally, we offer a
rationalization of adversarial and generative examples, which are inevitable
from an interpolation view of machine learning. | [
"cs.LG",
"stat.ML"
]
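
A minimal sketch of non-negative kernel interpolation on toy data: a query's label is expressed as a non-negatively weighted combination of its neighbours, with weights obtained by non-negative least squares. The Gaussian kernel, neighbourhood size, and solver choice are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(50, 8)), rng.integers(0, 2, size=50)
x_query = rng.normal(size=8)

def gaussian(a, b, sigma=1.0):
    return np.exp(-np.linalg.norm(a - b) ** 2 / (2 * sigma ** 2))

# Solve min_w ||K w - k_q||^2 subject to w >= 0 over the 10 nearest neighbours.
idx = np.argsort([np.linalg.norm(x - x_query) for x in X_train])[:10]
K = np.array([[gaussian(X_train[i], X_train[j]) for j in idx] for i in idx])
k_q = np.array([gaussian(X_train[i], x_query) for i in idx])
w, _ = nnls(K, k_q)

# Weighted label estimate from the non-negative interpolation weights.
prediction = w @ y_train[idx] / max(w.sum(), 1e-12)
print(round(float(prediction), 3))
```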
|
Intelligent diagnosis based on data-driven deep learning has become an
attractive and meaningful research field in recent years. However, in
practical application scenarios, the imbalance of time-series fault data is an
urgent problem to be solved. This paper proposes a novel deep metric learning
model in which imbalanced fault data and a quadruplet data-pair design are
considered. Based on such data pairs, a quadruplet loss function that takes
into account both the inter-class distance and the intra-class data
distribution is proposed. This quadruplet loss pays special attention to
imbalanced sample pairs. A reasonable combination of the quadruplet loss and
the softmax loss function can reduce the impact of imbalance. Experimental
results on two open-source datasets show that the proposed method can
effectively and robustly improve the performance of imbalanced fault
diagnosis. | [
"cs.LG",
"cs.AI"
]
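
For concreteness, one common form of quadruplet loss is sketched below: it augments the usual triplet term with a second term, computed between two independent negatives, that shapes the inter-class distance. The margins and the exact pair design are illustrative, not the paper's.

```python
import torch
import torch.nn.functional as F

def quadruplet_loss(anchor, positive, neg1, neg2, m1=1.0, m2=0.5):
    d_ap = F.pairwise_distance(anchor, positive)
    d_an = F.pairwise_distance(anchor, neg1)
    d_nn = F.pairwise_distance(neg1, neg2)       # two independent negatives
    triplet_term = F.relu(d_ap - d_an + m1)      # usual triplet constraint
    inter_class_term = F.relu(d_ap - d_nn + m2)  # shapes inter-class distance
    return (triplet_term + inter_class_term).mean()

emb = lambda: torch.randn(16, 128)  # stand-in embeddings for a batch
print(quadruplet_loss(emb(), emb(), emb(), emb()))
```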
|
Lateral connections in the primary visual cortex (V1) have long been
hypothesized to be responsible for several visual processing mechanisms such
as brightness induction, chromatic induction, visual discomfort and bottom-up
visual attention (also named saliency). Many computational models have been
developed to independently predict these and other visual processes, but no
computational model has been able to reproduce all of them simultaneously. In
this work we show that a biologically plausible computational model of lateral
interactions in V1 is able to simultaneously predict saliency and all the
aforementioned visual processes. Our model's (NSWAM) architecture is based on
Penacchio's neurodynamic model of lateral connections in V1. It is defined as
a network of firing-rate neurons sensitive to visual features such as
brightness, color, orientation and scale. We tested NSWAM's saliency
predictions using images from several eye-tracking datasets. We show that the
accuracy of its predictions, using shuffled metrics, is similar to that of
other state-of-the-art computational methods, particularly on synthetic
images (CAT2000-Pattern & SID4VAM), which mainly contain low-level features.
Moreover, we outperform other biologically-inspired saliency models that are
specifically designed to exclusively reproduce saliency. Hence, we show that
our biologically plausible model of lateral connections can simultaneously
explain different visual processes present in V1 (without applying any type of
training or optimization and keeping the same parametrization for all the
visual processes). This can be useful for the definition of a unified
architecture of the primary visual cortex. | [
"cs.CV"
]
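
In the spirit of such firing-rate models, the sketch below integrates a one-dimensional network with centre-surround (excitatory/inhibitory) lateral interactions; the kernel, time constant, and stimulus are toy choices and not NSWAM's parametrization.

```python
import numpy as np

n = 100
x = np.linspace(-1, 1, n)
# Centre-surround lateral kernel: narrow excitation, broad inhibition.
lateral = np.exp(-(x[:, None] - x[None, :]) ** 2 / 0.01) \
        - 0.5 * np.exp(-(x[:, None] - x[None, :]) ** 2 / 0.1)

rate = np.zeros(n)
stimulus = (np.abs(x) < 0.2).astype(float)  # a bright bar as feedforward drive
tau, dt = 10.0, 1.0
for _ in range(200):  # Euler integration of tau * dr/dt = -r + f(drive)
    drive = stimulus + lateral @ rate / n
    rate += dt / tau * (-rate + np.maximum(drive, 0.0))
```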
|
The open-ended question answering task of Text-VQA requires reading and
reasoning about local, often previously unseen, scene-text content of an image
to generate answers. In this work, we propose the generalized use of external
knowledge to augment our understanding of this scene text. We design a
framework to extract, validate, and reason with knowledge using a standard
multimodal transformer for vision language understanding tasks. Through
empirical evidence and qualitative results, we demonstrate how external
knowledge can highlight instance-only cues and thus help deal with training
data bias, improve answer entity type correctness, and detect multiword named
entities. We generate results comparable to the state-of-the-art on two
publicly available datasets, under the constraints of similar upstream OCR
systems and training data. | [
"cs.CV",
"cs.MM"
]
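
A rough sketch of the extract-validate-reason idea: retrieved knowledge embeddings are kept only if they match some OCR token, and the surviving facts are processed jointly with the scene-text tokens by a transformer layer. All names, sizes, and the similarity threshold are assumptions, not the paper's pipeline.

```python
import torch
import torch.nn as nn

d = 256
ocr_tokens = torch.randn(12, d)  # embeddings of scene-text tokens
knowledge = torch.randn(30, d)   # embeddings of retrieved facts

# Validation step: retain facts similar to at least one OCR token.
sims = torch.cosine_similarity(knowledge[:, None], ocr_tokens[None, :], dim=-1)
kept = knowledge[sims.max(dim=1).values > 0.1]

# Reasoning step: joint attention over scene text and surviving knowledge.
layer = nn.TransformerEncoderLayer(d_model=d, nhead=8, batch_first=True)
sequence = torch.cat([ocr_tokens, kept], dim=0).unsqueeze(0)
fused = layer(sequence)
```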
|
The dominant object detection approaches treat each dataset separately and
fit towards a specific domain, which cannot adapt to other domains without
extensive retraining. In this paper, we address the problem of designing a
universal object detection model that exploits diverse category granularity
from multiple domains and predicts all kinds of categories in one system.
Existing works treat this problem by integrating multiple detection branches
upon one shared backbone network. However, this paradigm overlooks the crucial
semantic correlations between multiple domains, such as the category
hierarchy, visual similarity, and linguistic relationships. To address these
drawbacks, we present a novel universal object detector called Universal-RCNN
that incorporates graph transfer learning for propagating relevant semantic
information across multiple datasets to reach semantic coherency.
Specifically, we first generate a global semantic pool by integrating the
high-level semantic representations of all categories. Then an Intra-Domain
Reasoning Module learns and propagates the sparse graph representation within
one dataset, guided by a spatial-aware GCN. Finally, an Inter-Domain Transfer
Module is proposed to
exploit diverse transfer dependencies across all domains and enhance the
regional feature representation by attending and transferring semantic contexts
globally. Extensive experiments demonstrate that the proposed method
significantly outperforms multiple-branch models and achieves the
state-of-the-art results on multiple object detection benchmarks (mAP: 49.1% on
COCO). | [
"cs.CV"
]
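
The intra-domain reasoning step can be pictured as one round of GCN-style message passing over the global semantic pool; in the sketch below, the random adjacency is a placeholder for the learned, spatially-aware category graph, and all sizes are illustrative.

```python
import torch
import torch.nn as nn

n_categories, d = 80, 256
semantic_pool = torch.randn(n_categories, d)  # one embedding per category
adj = torch.rand(n_categories, n_categories)
adj = adj / adj.sum(dim=1, keepdim=True)      # row-normalise the relation graph

W = nn.Linear(d, d, bias=False)
# Each category embedding is updated from its neighbours' embeddings.
propagated = torch.relu(W(adj @ semantic_pool))
```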
|
This paper explores the problem of reconstructing high-resolution light field
(LF) images from hybrid lenses, including a high-resolution camera surrounded
by multiple low-resolution cameras. To tackle this challenge, we propose a
novel end-to-end learning-based approach, which can comprehensively utilize the
specific characteristics of the input from two complementary and parallel
perspectives. Specifically, one module regresses a spatially consistent
intermediate estimation by learning a deep multidimensional and cross-domain
feature representation; the other one constructs another intermediate
estimation, which maintains the high-frequency textures, by propagating the
information of the high-resolution view. We finally leverage the advantages of
the two intermediate estimations via the learned attention maps, leading to the
final high-resolution LF image. Extensive experiments demonstrate the
significant superiority of our approach over state-of-the-art ones. That is,
our method not only improves the PSNR by more than 2 dB, but also preserves the
LF structure much better. To the best of our knowledge, this is the first
end-to-end deep learning method for reconstructing a high-resolution LF image
with a hybrid input. We believe our framework could potentially decrease the
cost of high-resolution LF data acquisition and also be beneficial to LF data
storage and transmission. The code is available at
https://github.com/jingjin25/LFhybridSR-Fusion. | [
"cs.CV",
"eess.IV"
]
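
The final fusion step admits a compact sketch: a small convolutional head predicts a per-pixel attention map that blends the spatially consistent estimate with the texture-preserving one. Shapes and the head architecture are illustrative assumptions, not the paper's network.

```python
import torch
import torch.nn as nn

est_consistent = torch.randn(1, 3, 256, 256)  # regression-based estimate
est_textured = torch.randn(1, 3, 256, 256)    # propagation-based estimate

# Predict a per-pixel blending weight from both estimates.
attn_head = nn.Sequential(
    nn.Conv2d(6, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
)
alpha = attn_head(torch.cat([est_consistent, est_textured], dim=1))
fused = alpha * est_consistent + (1 - alpha) * est_textured
```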
|
By interpreting the forward dynamics of the latent representation of neural
networks as an ordinary differential equation, Neural Ordinary Differential
Equation (Neural ODE) emerged as an effective framework for modeling system
dynamics in the continuous time domain. However, real-world systems often
involve external interventions that change the system dynamics, such as a
moving ball coming into contact with another ball, or a patient being
administered a particular drug. Neural ODE and a number of its recent
variants, however, are not suitable for modeling such interventions as they do
not properly model the observations and the interventions separately. In this
paper, we propose a novel neural ODE-based approach (IMODE) that properly
models the effect of external interventions by employing two ODE functions to
separately handle the observations and the interventions. Using both synthetic
and real-world time-series datasets involving interventions, our experimental
results consistently demonstrate the superiority of IMODE compared to existing
approaches. | [
"cs.LG",
"cs.NE"
]
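
A minimal sketch of the two-function idea, under the assumption that the contributions of observations and interventions to dz/dt are additive (the actual IMODE coupling may differ), using plain Euler integration:

```python
import torch
import torch.nn as nn

# Two ODE functions: one for the autonomous dynamics, one for interventions.
f_obs = nn.Sequential(nn.Linear(8, 32), nn.Tanh(), nn.Linear(32, 8))
f_int = nn.Sequential(nn.Linear(8 + 2, 32), nn.Tanh(), nn.Linear(32, 8))

z = torch.zeros(1, 8)                       # latent state
intervention = torch.tensor([[1.0, 0.0]])   # e.g. a drug dose at each step
dt = 0.1
for _ in range(50):
    dz = f_obs(z) + f_int(torch.cat([z, intervention], dim=-1))
    z = z + dt * dz                         # explicit Euler step
```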
|
Environment perception is an important task with great practical value, and
the bird view is an essential part of creating panoramas of the surrounding
environment. Due to the large gap and severe deformation between the frontal
view and the bird view, generating a bird view image from a single frontal
view is challenging. To tackle this problem, we propose BridgeGAN, a novel
generative model for bird view synthesis. First, an intermediate view, i.e.,
homography view, is introduced to bridge the large gap. Next, conditioned on
the three views (frontal view, homography view and bird view) in our task, a
multi-GAN based model is proposed to learn the challenging cross-view
translation. Extensive experiments conducted on a synthetic dataset have
demonstrated that the images generated by our model are much better than those
generated by existing methods, with more consistent global appearance and
sharper details. Ablation studies and discussions show its reliability and
robustness in some challenging cases. | [
"cs.CV"
]
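
The intermediate homography view can be obtained by a planar warp of the frontal image toward the ground plane; in the sketch below, the four point correspondences are made up for illustration and would in practice come from camera calibration.

```python
import cv2
import numpy as np

frontal = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in frontal image

# Hypothetical correspondences between road points in the frontal view (src)
# and their positions in the rectified, top-down view (dst).
src = np.float32([[200, 300], [440, 300], [620, 470], [20, 470]])
dst = np.float32([[220, 0], [420, 0], [420, 480], [220, 480]])

H = cv2.getPerspectiveTransform(src, dst)
homography_view = cv2.warpPerspective(frontal, H, (640, 480))
```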
|