text (stringlengths 29 to 3.31k) | label (sequencelengths 1 to 11)
---|---|
This paper is concerned with multi-view reinforcement learning (MVRL), which
allows for decision making when agents share common dynamics but adhere to
different observation models. We define the MVRL framework by extending
partially observable Markov decision processes (POMDPs) to support more than
one observation model and propose two solution methods through observation
augmentation and cross-view policy transfer. We empirically evaluate our method
and demonstrate its effectiveness in a variety of environments. Specifically,
we show reductions in sample complexities and computational time for acquiring
policies that handle multi-view environments. | [
"cs.LG",
"stat.ML"
] |
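The observation-augmentation solution sketched in the abstract above amounts to concatenating the per-view observations of a shared underlying state. Below is a minimal, hypothetical sketch assuming a gym-style `reset`/`step` interface whose simulator exposes its latent state; `MultiViewWrapper` and the view functions are illustrative names, not the paper's implementation.

```python
import numpy as np

class MultiViewWrapper:
    """Hypothetical sketch of observation augmentation for MVRL: the agent
    receives the concatenation of several view-specific observation models
    of the same underlying state (shared dynamics, different views)."""

    def __init__(self, env, view_fns):
        self.env = env            # simulator with shared dynamics
        self.view_fns = view_fns  # one observation model per view

    def reset(self):
        state = self.env.reset()  # assumes reset() exposes the latent state
        return self._augment(state)

    def step(self, action):
        state, reward, done, info = self.env.step(action)
        return self._augment(state), reward, done, info

    def _augment(self, state):
        # Concatenate per-view observations into one augmented vector.
        return np.concatenate([f(state) for f in self.view_fns])
```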
We present a novel learning-based approach to graph representations of road
networks employing state-of-the-art graph convolutional neural networks. Our
approach is applied to realistic road networks of 17 cities from Open Street
Map. While edge features are crucial to generate descriptive graph
representations of road networks, graph convolutional networks usually rely on
node features only. We show that the highly representative edge features can
still be integrated into such networks by applying a line graph transformation.
We also propose a method for neighborhood sampling based on a topological
neighborhood composed of both local and global neighbors. We compare the
performance of learning representations using different types of neighborhood
aggregation functions in transductive and inductive tasks and in supervised and
unsupervised learning. Furthermore, we propose a novel aggregation approach,
Graph Attention Isomorphism Network, GAIN. Our results show that GAIN
outperforms state-of-the-art methods on the road type classification problem. | [
"cs.LG",
"cs.AI",
"cs.CV"
] |
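The line graph transformation mentioned above, which turns edge features into node features consumable by a node-centric GCN, can be reproduced with NetworkX. A minimal sketch; the `lanes` attribute is a made-up stand-in for the paper's road features.

```python
import networkx as nx

# Toy road graph: nodes are intersections, edges carry road features
# (here an illustrative "lanes" attribute).
G = nx.Graph()
G.add_edge("a", "b", lanes=2)
G.add_edge("b", "c", lanes=4)
G.add_edge("c", "a", lanes=1)

# Line graph transformation: every edge of G becomes a node of L, so the
# edge features of G can be attached as node features of L and fed to a
# standard graph convolutional network that relies on node features only.
L = nx.line_graph(G)
for u, v in L.nodes():
    L.nodes[(u, v)]["lanes"] = G.edges[u, v]["lanes"]

print(list(L.nodes(data=True)))
```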
Capturing uncertainty in object detection is indispensable for safe
autonomous driving. In recent years, deep learning has become the de-facto
approach for object detection, and many probabilistic object detectors have
been proposed. However, there is no summary on uncertainty estimation in deep
object detection, and existing methods are not only built with different
network architectures and uncertainty estimation methods, but also evaluated on
different datasets with a wide range of evaluation metrics. As a result, a
comparison among methods remains challenging, as does the selection of a model
that best suits a particular application. This paper aims to alleviate this
problem by providing a review and comparative study on existing probabilistic
object detection methods for autonomous driving applications. First, we provide
an overview of generic uncertainty estimation in deep learning, and then
systematically survey existing methods and evaluation metrics for probabilistic
object detection. Next, we present a strict comparative study for probabilistic
object detection based on an image detector and three public autonomous driving
datasets. Finally, we present a discussion of the remaining challenges and
future work. Code has been made available at
https://github.com/asharakeh/pod_compare.git | [
"cs.CV",
"cs.RO"
] |
Generative adversarial networks (GANs) can generate high-quality images from
sampled latent codes. Recent works attempt to edit an image by manipulating its
underlying latent code, but rarely go beyond the basic task of attribute
adjustment. We propose the first method that enables manipulation with
multidimensional conditions such as keypoints and captions. Specifically, we
design an algorithm that searches for a new latent code that satisfies the
target condition based on the Surrogate Gradient Field (SGF) induced by an
auxiliary mapping network. For quantitative comparison, we propose a metric to
evaluate the disentanglement of manipulation methods. Thorough experimental
analysis on the facial attribute adjustment task shows that our method
outperforms state-of-the-art methods in disentanglement. We further apply our
method to tasks of various condition modalities to demonstrate that our method
can alter complex image properties such as keypoints and captions. | [
"cs.CV"
] |
Scene Text Recognition (STR), the task of recognizing text against complex
image backgrounds, is an active area of research. Current state-of-the-art
(SOTA) methods still struggle to recognize text written in arbitrary shapes. In
this paper, we introduce a novel architecture for STR, named Selective Context
ATtentional Text Recognizer (SCATTER). SCATTER utilizes a stacked block
architecture with intermediate supervision during training, which paves the way
to successfully training a deep BiLSTM encoder, thus improving the encoding of
contextual dependencies. Decoding is done using a two-step 1D attention
mechanism. The first attention step re-weights visual features from a CNN
backbone together with contextual features computed by a BiLSTM layer. The
second attention step, similar to previous papers, treats the features as a
sequence and attends to the intra-sequence relationships. Experiments show that
the proposed approach surpasses SOTA performance on irregular text recognition
benchmarks by 3.7\% on average. | [
"cs.CV"
] |
Traditional text detection methods mostly focus on quadrangle text. In this
study we propose a novel method named sliding line point regression (SLPR) in
order to detect arbitrarily shaped text in natural scenes. SLPR regresses
multiple points on the edge of a text line and then utilizes these points to
sketch the
outlines of the text. The proposed SLPR can be adapted to many object detection
architectures such as Faster R-CNN and R-FCN. Specifically, we first generate
the smallest rectangular box including the text with region proposal network
(RPN), then isometrically regress the points on the edge of text by using the
vertically and horizontally sliding lines. To make full use of information and
reduce redundancy, we infer the x-coordinate or y-coordinate of each target
point from the rectangular box position and regress only the remaining
coordinate. Accordingly, we not only reduce the number of system parameters but
also constrain the points so that they form more regular polygons. Our approach
achieved competitive results on the traditional ICDAR2015 Incidental Scene Text
benchmark and the curved text detection dataset CTW1500. | [
"cs.CV"
] |
In this paper, we propose a new variant of Linear Discriminant Analysis (LDA)
to solve multi-label classification tasks. The proposed method is based on a
probabilistic model for defining the weights of individual samples in a
weighted multi-label LDA approach. Linear Discriminant Analysis is a classical
statistical machine learning method, which aims to find a linear data
transformation increasing class discrimination in an optimal discriminant
subspace. Traditional LDA sets assumptions related to Gaussian class
distributions and single-label data annotations. To employ the LDA technique in
multi-label classification problems, we exploit intuitions coming from a
probabilistic interpretation of class saliency to redefine the between-class
and within-class scatter matrices. The saliency-based weights obtained based on
various kinds of affinity encoding prior information are used to reveal the
probability of each instance to be salient for each of its classes in the
multi-label problem at hand. The proposed Saliency-based weighted Multi-label
LDA approach is shown to lead to performance improvements in various
multi-label classification problems. | [
"cs.LG",
"stat.ML"
] |
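To make the redefined scatter matrices concrete, here is a minimal sketch of saliency-weighted between-class and within-class scatter computation in NumPy. The weight matrix `W` stands in for the paper's probabilistic saliency weights; the exact weighting scheme in the paper may differ.

```python
import numpy as np

def weighted_scatter_matrices(X, W):
    """Saliency-weighted scatter matrices for multi-label LDA (sketch).
    X: (n, d) data matrix; W: (n, c) nonnegative weights, where W[i, k]
    reflects the probability that sample i is salient for class k."""
    n, d = X.shape
    mu = X.mean(axis=0)                # global mean
    Sb = np.zeros((d, d))              # between-class scatter
    Sw = np.zeros((d, d))              # within-class scatter
    for k in range(W.shape[1]):
        w = W[:, k]
        mk = (w[:, None] * X).sum(0) / w.sum()  # weighted class mean
        Xc = X - mk
        Sw += (w[:, None] * Xc).T @ Xc
        Sb += w.sum() * np.outer(mk - mu, mk - mu)
    return Sb, Sw

# As in classical LDA, the projection is found from the generalized
# eigenproblem Sb v = lambda Sw v.
```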
Unsupervised image segmentation aims at assigning pixels with similar
features to the same cluster without annotation, which is an important task in
computer vision. Due to the lack of prior knowledge, most existing models
usually need to be trained several times to obtain suitable results. To address
this problem, we propose an unsupervised image segmentation model based on the
Mutual Mean-Teaching (MMT) framework to produce more stable results. In
addition, since the pixel labels produced by the two models are not matched, a
label alignment algorithm based on the Hungarian algorithm is proposed to match
the cluster labels. Experimental results demonstrate that the proposed model is
able to segment various types of images and achieves better performance than
the existing methods. | [
"cs.CV",
"cs.AI"
] |
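Aligning cluster labels with the Hungarian algorithm is a standard recipe and can be sketched as follows: build a co-occurrence matrix between the two models' pixel labels and solve a linear sum assignment over it. This uses `scipy.optimize.linear_sum_assignment`; the paper's variant may differ in detail.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def align_labels(labels_a, labels_b, n_clusters):
    """Remap cluster IDs of model B onto those of model A by maximizing
    pixel-level agreement (Hungarian algorithm on co-occurrence counts)."""
    cost = np.zeros((n_clusters, n_clusters), dtype=np.int64)
    for a, b in zip(labels_a.ravel(), labels_b.ravel()):
        cost[a, b] += 1                      # co-occurrence counts
    row, col = linear_sum_assignment(-cost)  # negate to maximize agreement
    mapping = dict(zip(col, row))
    return np.vectorize(mapping.get)(labels_b)
```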
Automatic search of neural architectures for various vision and natural
language tasks is becoming a prominent tool, as it allows the discovery of
high-performing structures on any dataset of interest. Nevertheless, in more
difficult domains, such as dense per-pixel classification, current automatic
approaches are limited in their scope: due to their strong reliance on
existing image classifiers, they tend to search only for a handful of
additional layers, and the discovered architectures still contain a large
number of parameters. In contrast, in this work we propose a novel solution
able to find light-weight and accurate segmentation architectures starting
from only a few blocks of a pre-trained classification network. To this end,
we progressively build up a methodology that relies on templates of sets of
operations and predicts which template should be applied at each step and how
many times, while also generating the connectivity structure and downsampling
factors. All these decisions are made by a recurrent neural network that is
rewarded based on the score of the emitted architecture on the holdout set and
trained using
reinforcement learning. One discovered architecture achieves 63.2% mean IoU on
CamVid and 67.8% on CityScapes with only 270K parameters. Pre-trained models
and the search code are available at
https://github.com/DrSleep/nas-segm-pytorch. | [
"cs.CV"
] |
There has been an increasing surge of interest in the development of advanced
Reinforcement Learning (RL) systems as intelligent approaches to learn optimal
control policies directly from smart agents' interactions with the environment.
Objectives: In a model-free RL method with continuous state-space, typically,
the value function of the states needs to be approximated. In this regard, Deep
Neural Networks (DNNs) provide an attractive modeling mechanism to approximate
the value function using sample transitions. DNN-based solutions, however,
suffer from high sensitivity to parameter selection, are prone to overfitting,
and are not very sample efficient. A Kalman-based methodology, on the other
hand, could be used as an efficient alternative. Such an approach, however,
commonly requires a-priori information about the system (such as noise
statistics) to perform efficiently. The main objective of this paper is to
address this issue. Methods: As a remedy to the aforementioned problems, this
paper proposes an innovative Multiple Model Kalman Temporal Difference (MM-KTD)
framework, which adapts the parameters of the filter using the observed states
and rewards. Moreover, an active learning method is proposed to enhance the
sampling efficiency of the system. More specifically, the estimated uncertainty
of the value functions is exploited to form the behaviour policy, leading to
more visits to less certain values and therefore improving the overall learning
sample efficiency. As a result, the proposed MM-KTD framework can learn the
optimal policy with a significantly reduced number of samples compared to its
DNN-based counterparts. Results: To evaluate the performance of the proposed
MM-KTD framework, we have performed a comprehensive set of experiments based on
three RL benchmarks. Experimental results show the superiority of the MM-KTD
framework in comparison to its state-of-the-art counterparts. | [
"cs.LG",
"cs.AI",
"eess.SP",
"stat.ML"
] |
The recently proposed budding tree is a decision tree algorithm in which every
node is part internal node and part leaf. This allows representing every
decision tree in a continuous parameter space, and therefore a budding tree can
be jointly trained with backpropagation, like a neural network. Even though
this continuity allows it to be used in hierarchical representation learning,
the learned representations are local: Activation makes a soft selection among
all root-to-leaf paths in a tree. In this work we extend the budding tree and
propose the distributed tree where the children use different and independent
splits and hence multiple paths in a tree can be traversed at the same time.
This ability to combine multiple paths gives the power of a distributed
representation, as in a traditional perceptron layer. We show that distributed
trees perform comparably or better than budding and traditional hard trees on
classification and regression tasks. | [
"cs.LG",
"stat.ML"
] |
Self-supervised monocular depth estimation methods generally suffer from
occlusion fading due to the lack of per-pixel ground-truth supervision.
Although a post-processing method was proposed by Godard et al. to reduce the
occlusion fading, the compensated results have a severe halo effect.
In this paper, we propose a novel Edge-Guided post-processing to reduce the
occlusion fading issue for self-supervised monocular depth estimation. We
further introduce Atrous Spatial Pyramid Pooling (ASPP) into the network to
reduce the computational costs and improve the inference performance. The
proposed ASPP-based network is lighter, faster, and better than current
commonly used depth estimation networks. This light-weight network only needs
8.1 million parameters and can achieve up to 40 frames per second for
$256\times512$ input in the inference stage using a single NVIDIA GTX 1080 GPU.
The proposed network also outperforms the current state-of-the-art on the KITTI
benchmarks. The ASPP-based network and Edge-Guided post-processing produce
better results than the competitors, both quantitatively and qualitatively. | [
"cs.CV"
] |
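For reference, a minimal ASPP block in the standard DeepLab style is sketched below; the channel counts and dilation rates are illustrative, and the paper's exact configuration may differ.

```python
import torch
import torch.nn as nn

class ASPP(nn.Module):
    """Minimal Atrous Spatial Pyramid Pooling block (DeepLab-style sketch):
    parallel atrous convolutions capture multi-scale context cheaply."""

    def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, kernel_size=3 if r > 1 else 1,
                      padding=r if r > 1 else 0, dilation=r)
            for r in rates
        ])
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        # Each branch sees a different receptive field; concatenating and
        # projecting fuses the scales without extra downsampling.
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))

x = torch.randn(1, 256, 32, 64)
print(ASPP(256, 64)(x).shape)  # torch.Size([1, 64, 32, 64])
```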
It is known that current graph neural networks (GNNs) are difficult to make
deep due to the problem known as over-smoothing. Multi-scale GNNs are a
promising approach for mitigating the over-smoothing problem. However, there is
little explanation from the viewpoint of learning theory of why they work
empirically. In this study, we derive the optimization and
generalization guarantees of transductive learning algorithms that include
multi-scale GNNs. Using the boosting theory, we prove the convergence of the
training error under weak learning-type conditions. By combining it with
generalization gap bounds in terms of transductive Rademacher complexity, we
derive a test error bound for a specific type of multi-scale GNN that
decreases with the number of node aggregations under some
conditions. Our results offer theoretical explanations for the effectiveness of
the multi-scale structure against the over-smoothing problem. We apply boosting
algorithms to the training of multi-scale GNNs for real-world node prediction
tasks. We confirm that its performance is comparable to that of existing GNNs,
and the
practical behaviors are consistent with theoretical observations. Code is
available at https://github.com/delta2323/GB-GNN. | [
"cs.LG",
"math.ST",
"stat.ML",
"stat.TH",
"05C99, 62M45",
"G.2.2"
] |
Many real-world systems, such as moving planets, can be considered
multi-agent dynamic systems, where objects interact with each other and
co-evolve over time. Such dynamics are usually difficult to capture, and
understanding and predicting them based on observed trajectories of objects
becomes a critical research problem in many domains. Most existing algorithms,
however, assume the observations are regularly sampled and that all the objects
can be fully observed at each sampling time, which is impractical for many
applications. In this paper, we propose to learn system dynamics from
irregularly-sampled partial observations with underlying graph structure for
the first time. To tackle the above challenge, we present LG-ODE, a latent
ordinary differential equation generative model for modeling multi-agent
dynamic systems with known graph structure. It can simultaneously learn the
embedding of high dimensional trajectories and infer continuous latent system
dynamics. Our model employs a novel encoder parameterized by a graph neural
network that can infer initial states in an unsupervised way from
irregularly-sampled partial observations of structural objects and utilizes
neuralODE to infer arbitrarily complex continuous-time latent dynamics.
Experiments on motion capture, spring system, and charged particle datasets
demonstrate the effectiveness of our approach. | [
"cs.LG",
"stat.ML"
] |
Graph embedding methods transform high-dimensional and complex graph contents
into low-dimensional representations. They are useful for a wide range of graph
analysis tasks including link prediction, node classification, recommendation
and visualization. Most existing approaches represent graph nodes as point
vectors in a low-dimensional embedding space, ignoring the uncertainty present
in real-world graphs. Furthermore, many real-world graphs are large-scale
and rich in content (e.g. node attributes). In this work, we propose GLACE, a
novel, scalable graph embedding method that preserves both graph structure and
node attributes effectively and efficiently in an end-to-end manner. GLACE
effectively models uncertainty through Gaussian embeddings, and supports
inductive inference of new nodes based on their attributes. In our
comprehensive experiments, we evaluate GLACE on real-world graphs, and the
results demonstrate that GLACE significantly outperforms state-of-the-art
embedding methods on multiple graph analysis tasks. | [
"cs.LG",
"stat.ML"
] |
Despite recent advances in representation learning in hypercomplex (HC)
space, this subject is still vastly unexplored in the context of graphs.
Motivated by the complex and quaternion algebras, which have been found in
several contexts to enable effective representation learning that inherently
incorporates a weight-sharing mechanism, we develop graph neural networks that
leverage the properties of hypercomplex feature transformation. In particular,
in our proposed class of models, the multiplication rule specifying the algebra
itself is inferred from the data during training. Given a fixed model
architecture, we present empirical evidence that our proposed model
incorporates a regularization effect, alleviating the risk of overfitting. We
also show that for fixed model capacity, our proposed method outperforms its
corresponding real-formulated GNN, providing additional confirmation for the
enhanced expressivity of HC embeddings. Finally, we test our proposed
hypercomplex GNN on several open graph benchmark datasets and show that our
models reach state-of-the-art performance while consuming a much lower memory
footprint with 70% fewer parameters. Our implementations are available at
https://github.com/bayer-science-for-a-better-life/phc-gnn. | [
"cs.LG"
] |
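The core idea, learning the multiplication rule of the algebra itself, can be sketched as a parameterized hypercomplex (PHM-style) linear layer in which the weight is a sum of Kronecker products and the algebra tensor is trained jointly with the other parameters. This is a generic sketch, not the authors' exact layer.

```python
import torch
import torch.nn as nn

class PHMLinear(nn.Module):
    """Parameterized hypercomplex linear layer (sketch). The n x n x n
    tensor A encodes the multiplication rule of the algebra and is learned
    from data; the Kronecker structure induces weight sharing, shrinking
    the parameter count roughly by a factor of n."""

    def __init__(self, n, in_f, out_f):
        super().__init__()
        assert in_f % n == 0 and out_f % n == 0
        self.A = nn.Parameter(torch.randn(n, n, n))              # algebra rule
        self.S = nn.Parameter(torch.randn(n, out_f // n, in_f // n))

    def forward(self, x):
        # W = sum_i kron(A_i, S_i): the learned algebra ties weights together.
        W = torch.stack([torch.kron(self.A[i], self.S[i])
                         for i in range(self.A.shape[0])]).sum(0)
        return x @ W.T

print(PHMLinear(4, 16, 8)(torch.randn(2, 16)).shape)  # torch.Size([2, 8])
```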
A simple and inexpensive (low-power and low-bandwidth) modification is made
to a conventional off-the-shelf color video camera, from which we recover
multiple color frames for each of the original measured frames, and each of
the recovered frames can be focused at a different depth. The recovery of
multiple frames for each measured frame is made possible via high-speed coding,
manifested via translation of a single coded aperture; the inexpensive
translation is constituted by mounting the binary code on a piezoelectric
device. To simultaneously recover depth information, a liquid lens is
modulated at high speed, via a variable voltage. Consequently, during the
aforementioned coding process, the liquid lens allows the camera to sweep the
focus through multiple depths. In addition to designing and implementing the
camera, fast recovery is achieved by an anytime algorithm exploiting the
group-sparsity of wavelet/DCT coefficients. | [
"cs.CV"
] |
With the introduction of new regulations in the European Union, the future of
Beyond Visual Line Of Sight (BVLOS) drones is set to bloom. This led to the
creation of the theBEAST project, which aims to create an autonomous security
drone, with focus on those regulations and on safety. This technical paper
describes the first steps of a module within this project, which revolves
around detecting obstacles so they can be avoided in a fail-safe landing. A
deep learning powered object detection method is the subject of our research,
and various experiments are conducted to maximize its performance, such as
comparing data augmentation techniques and comparing YOLOv3 with YOLOv5.
According to the results of the experiments, we conclude that although object
detection is a promising approach to this problem, a larger volume of data is
required for potential use in a real-life application. | [
"cs.CV"
] |
Generative Adversarial Networks (GANs) have obtained remarkable success in
many unsupervised learning tasks and unarguably, clustering is an important
unsupervised learning problem. While one can potentially exploit the
latent-space back-projection in GANs to cluster, we demonstrate that the
cluster structure is not retained in the GAN latent space.
In this paper, we propose ClusterGAN as a new mechanism for clustering using
GANs. By sampling latent variables from a mixture of one-hot encoded variables
and continuous latent variables, coupled with an inverse network (which
projects the data to the latent space) trained jointly with a clustering
specific loss, we are able to achieve clustering in the latent space. Our
results show a remarkable phenomenon that GANs can preserve latent space
interpolation across categories, even though the discriminator is never exposed
to such vectors. We compare our results with various clustering baselines and
demonstrate superior performance on both synthetic and real datasets. | [
"cs.LG",
"stat.ML"
] |
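The latent sampling scheme, a one-hot cluster code concatenated with a small-variance continuous part, can be sketched as follows; cluster count, dimensionality, and variance are illustrative values.

```python
import numpy as np

def sample_cluster_latents(batch, n_clusters=10, dim_z=30, sigma=0.1):
    """ClusterGAN-style latent sampling: continuous noise plus a one-hot
    cluster code, so the generator's latent space retains cluster structure."""
    ids = np.random.randint(n_clusters, size=batch)
    one_hot = np.eye(n_clusters)[ids]
    z_cont = sigma * np.random.randn(batch, dim_z)
    return np.concatenate([z_cont, one_hot], axis=1), ids

z, ids = sample_cluster_latents(4)
print(z.shape)  # (4, 40)
```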
Few-shot segmentation aims to train a segmentation model that can fast adapt
to novel classes with few exemplars. The conventional training paradigm is to
learn to make predictions on query images conditioned on the features from
support images. Previous methods only utilized the semantic-level prototypes of
support images as the conditional information. These methods cannot utilize all
pixel-wise support information for the query predictions, which is however
critical for the segmentation task. In this paper, we focus on utilizing
pixel-wise relationships between support and target images to facilitate the
few-shot semantic segmentation task. We design a novel Cycle-Consistent
Transformer (CyCTR) module to aggregate pixel-wise support features into query
ones. CyCTR performs cross-attention between features from different images,
i.e. support and query images. We observe that there may exist unexpected
irrelevant pixel-level support features. Directly performing cross-attention
may aggregate these features from support to query and bias the query features.
Thus, we propose using a novel cycle-consistent attention mechanism to filter
out possible harmful support features and encourage query features to attend to
the most informative pixels from support images. Experiments on all few-shot
segmentation benchmarks demonstrate that our proposed CyCTR leads to remarkable
improvement compared to previous state-of-the-art methods. Specifically, on
Pascal-$5^i$ and COCO-$20^i$ datasets, we achieve 66.6% and 45.6% mIoU for
5-shot segmentation, outperforming previous state-of-the-art by 4.6% and 7.1%
respectively. | [
"cs.CV"
] |
A heterogeneous information network (HIN) has as vertices objects of
different types and as edges the relations between objects, which are also of
various types. We study the problem of classifying objects in HINs. Most
existing methods perform poorly when given scarce labeled objects as training
sets, and methods that improve classification accuracy under such scenarios are
often computationally expensive. To address these problems, we propose ConCH, a
graph neural network model. ConCH formulates the classification problem as a
multi-task learning problem that combines semi-supervised learning with
self-supervised learning to learn from both labeled and unlabeled data. ConCH
employs meta-paths, which are sequences of object types that capture semantic
relationships between objects. ConCH co-derives object embeddings and context
embeddings via graph convolution. It also uses the attention mechanism to fuse
such embeddings. We conduct extensive experiments to evaluate the performance
of ConCH against 15 other classification methods. Our results show that ConCH
is an effective and efficient method for HIN classification. | [
"cs.LG",
"cs.SI"
] |
Network embedding aims to embed nodes into a low-dimensional space, while
capturing the network structures and properties. Although quite a few promising
network embedding methods have been proposed, most of them focus on static
networks. In fact, temporal networks, which usually evolve over time in terms
of microscopic and macroscopic dynamics, are ubiquitous. The micro-dynamics
describe the formation process of network structures in a detailed manner,
while the macro-dynamics refer to the evolution pattern of the network scale.
Both micro- and macro-dynamics are the key factors to network evolution;
however, how to elegantly capture both of them for temporal network embedding,
especially macro-dynamics, has not yet been well studied. In this paper, we
propose a novel temporal network embedding method with micro- and
macro-dynamics, named $\rm{M^2DNE}$. Specifically, for micro-dynamics, we
regard the establishments of edges as the occurrences of chronological events
and propose a temporal attention point process to capture the formation process
of network structures in a fine-grained manner. For macro-dynamics, we define a
general dynamics equation parameterized with network embeddings to capture the
inherent evolution pattern and impose constraints in a higher structural level
on network embeddings. Mutual evolutions of micro- and macro-dynamics in a
temporal network alternately affect the process of learning node embeddings.
Extensive experiments on three real-world temporal networks demonstrate that
$\rm{M^2DNE}$ significantly outperforms state-of-the-art methods not only in
traditional tasks, e.g., network reconstruction, but also in temporal
tendency-related tasks, e.g., scale prediction. | [
"cs.LG",
"cs.SI",
"stat.ML"
] |
First-order stochastic optimization methods are currently the most widely
used class of methods for training deep neural networks. However, the choice of
the optimizer has become an ad-hoc rule that can significantly affect the
performance. For instance, SGD with momentum (SGD+M) is typically used in
computer vision (CV) and Adam is used for training transformer models for
Natural Language Processing (NLP). Using the wrong method can lead to
significant performance degradation. Inspired by the dual averaging algorithm,
we propose Modernized Dual Averaging (MDA), an optimizer that is able to
perform as well as SGD+M in CV and as Adam in NLP. Our method is not adaptive
and is significantly simpler than Adam. We show that MDA induces a decaying
uncentered $L_2$-regularization compared to vanilla SGD+M and hypothesize that
this may explain why it works on NLP problems where SGD+M fails. | [
"cs.LG",
"math.OC",
"stat.ML"
] |
Object detection is an important yet challenging task in video understanding
and analysis, where one major challenge lies in the proper balance between two
conflicting factors: detection accuracy and detection speed. In this paper,
we propose a new adaptive patch-of-interest composition approach for boosting
both the accuracy and speed for object detection. The proposed approach first
extracts patches in a video frame which have the potential to include
objects-of-interest. Then, an adaptive composition process is introduced to
compose the extracted patches into an optimal number of sub-frames for object
detection. With this process, we are able to maintain the resolution of the
original frame during object detection (for guaranteeing the accuracy), while
minimizing the number of inputs in detection (for boosting the speed).
Experimental results on various datasets demonstrate the effectiveness of the
proposed approach. | [
"cs.CV"
] |
Modern tasks in reinforcement learning have large state and action spaces. To
deal with them efficiently, one often uses predefined feature mapping to
represent states and actions in a low-dimensional space. In this paper, we
study reinforcement learning for discounted Markov Decision Processes (MDPs),
where the transition kernel can be parameterized as a linear function of
certain feature mapping. We propose a novel algorithm that makes use of the
feature mapping and obtains a $\tilde O(d\sqrt{T}/(1-\gamma)^2)$ regret, where
$d$ is the dimension of the feature space, $T$ is the time horizon and $\gamma$
is the discount factor of the MDP. To the best of our knowledge, this is the
first polynomial regret bound without accessing the generative model or making
strong assumptions such as ergodicity of the MDP. By constructing a special
class of MDPs, we also show that for any algorithm, the regret is lower
bounded by $\Omega(d\sqrt{T}/(1-\gamma)^{1.5})$. Our upper and lower bound
results together suggest that the proposed reinforcement learning algorithm is
near-optimal up to a $(1-\gamma)^{-0.5}$ factor. | [
"cs.LG",
"cs.AI",
"math.OC",
"stat.ML"
] |
Autonomous Micro Aerial Vehicles (MAVs) have gained tremendous attention in
recent years. Autonomous indoor flight requires a dense depth map for navigable
space detection, which is the fundamental component of autonomous navigation.
In this paper, we address the problem of reconstructing dense depth while a
drone is hovering (small camera motion) in indoor scenes, using already
estimated cameras and a sparse point cloud obtained from vSLAM. We start by
segmenting the scene based on sudden depth variations using sparse 3D points,
and introduce patch-based local plane fitting via energy minimization, which
combines photometric consistency and co-planarity with neighbouring patches.
The method also incorporates a plane sweep technique for image segments having
almost no sparse points for initialization. Experiments show that the proposed
method produces better depth for indoor scenes under artificial lighting and in
low-textured environments compared to earlier small-motion literature. | [
"cs.CV"
] |
When forecasting time series with a hierarchical structure, the existing
state of the art is to forecast each time series independently, and, in a
post-treatment step, to reconcile the time series in a way that respects the
hierarchy (Hyndman et al., 2011; Wickramasuriya et al., 2018). We propose a new
loss function that can be incorporated into any maximum likelihood objective
with hierarchical data, resulting in reconciled estimates with confidence
intervals that correctly account for additional uncertainty due to imperfect
reconciliation. We evaluate our method using a non-linear model and synthetic
data on a counterfactual forecasting problem, where we have access to the
ground truth and contemporaneous covariates, and show that we largely improve
over the existing state-of-the-art method. | [
"stat.ML",
"cs.LG"
] |
Image captioning is a widely known problem in the area of AI. Caption
generation from floor plan images has applications in indoor path planning,
real estate, and providing architectural solutions. Several methods have been
explored in the literature for generating captions or semi-structured
descriptions from floor plan images. Since a caption alone is insufficient to
capture fine-grained details, researchers have also proposed generating
descriptive paragraphs from images. However, these descriptions have a rigid
structure and lack
flexibility, making it difficult to use them in real-time scenarios. This paper
offers two models, Description Synthesis from Image Cue (DSIC) and Transformer
Based Description Generation (TBDG), for the floor plan image to text
generation to fill the gaps in existing methods. These two models take
advantage of modern deep neural networks for visual feature extraction and text
generation. The difference between both models is in the way they take input
from the floor plan image. The DSIC model takes only visual features
automatically extracted by a deep neural network, while the TBDG model also
learns textual captions extracted from input floor plan images along with
paragraphs. The specific keywords generated in TBDG, and their grounding in
paragraphs, make it more robust on general floor plan images. Experiments were
carried out on a
large-scale publicly available dataset and compared with state-of-the-art
techniques to show the proposed model's superiority. | [
"cs.CV"
] |
We introduce the SE(3)-Transformer, a variant of the self-attention module
for 3D point clouds and graphs, which is equivariant under continuous 3D
roto-translations. Equivariance is important to ensure stable and predictable
performance in the presence of nuisance transformations of the data input. A
positive corollary of equivariance is increased weight-tying within the model.
The SE(3)-Transformer leverages the benefits of self-attention to operate on
large point clouds and graphs with varying numbers of points, while
guaranteeing
SE(3)-equivariance for robustness. We evaluate our model on a toy N-body
particle simulation dataset, showcasing the robustness of the predictions under
rotations of the input. We further achieve competitive performance on two
real-world datasets, ScanObjectNN and QM9. In all cases, our model outperforms
a strong, non-equivariant attention baseline and an equivariant model without
attention. | [
"cs.LG",
"stat.ML"
] |
We describe an open-source simulator that creates sensor irradiance and
sensor images of typical automotive scenes in urban settings. The purpose of
the system is to support camera design and testing for automotive applications.
The user can specify scene parameters (e.g., scene type, road type, traffic
density, time of day) to assemble a large number of random scenes from graphics
assets stored in a database. The sensor irradiance is generated using
quantitative computer graphics methods, and the sensor images are created using
image systems sensor simulation. The synthetic sensor images have pixel level
annotations; hence, they can be used to train and evaluate neural networks for
imaging tasks, such as object detection and classification. The end-to-end
simulation system supports quantitative assessment, from scene to camera to
network accuracy, for automotive applications. | [
"cs.CV"
] |
Doors are important landmarks for indoor mobile robot navigation and also
assist blind people in independently accessing unfamiliar buildings. Most
existing door detection algorithms are limited to familiar environments because
of restricted assumptions about color, texture and shape. In this paper we
propose a novel approach which employs feature-based classification and uses
the Kohonen Self-Organizing Map (SOM) for door detection. Generic and stable
features that significantly increase performance are used to train the SOM:
concavity, bottom-edge intensity profile and door
edges. To validate the robustness and generalizability of our method, we
collected a large dataset of real world door images from a variety of
environments and different lighting conditions. The algorithm achieves more
than 95% detection accuracy, which demonstrates that our door detection method
is generic
and robust with variations of color, texture, occlusions, lighting condition,
scales, and viewpoints. | [
"cs.CV"
] |
Confocal laser endomicroscopy (CLE) is a novel imaging modality that provides
in vivo histological cross-sections of examined tissue. Recently, attempts have
been made to develop miniaturized in vivo imaging devices, specifically
confocal laser microscopes, for both clinical and research applications.
However, current implementations of miniature CLE components, such as confocal
lenses, compromise image resolution, signal-to-noise ratio, or both, which
negatively impacts the utility of in vivo imaging. In this work, we demonstrate
that software-based techniques can be used to recover lost information due to
endomicroscopy hardware miniaturization and reconstruct images of higher
resolution. Particularly, a densely connected convolutional neural network is
used to reconstruct a high-resolution CLE image from a low-resolution input. In
the proposed network, each layer is directly connected to all subsequent
layers, which results in an effective combination of low-level and high-level
features and efficient information flow throughout the network. To train and
evaluate our network, we use a dataset of 181 high-resolution CLE images. Both
quantitative and qualitative results indicate the superiority of the proposed
network compared to traditional interpolation techniques and competing
learning-based methods. This work demonstrates that software-based
super-resolution is a viable approach to compensate for loss of resolution due
to endoscopic hardware miniaturization. | [
"cs.CV"
] |
Traditional object recognition approaches apply feature extraction, part
deformation handling, occlusion handling and classification sequentially,
treating them as independent of each other. Ouyang and Wang proposed a model
for jointly learning all of the mentioned processes using one deep neural
network. We utilized and adapted their toolbox in order to apply it to car
detection scenarios where it had not been tested. Creating a single deep
architecture from these components improves the interaction between them and
can enhance the performance of the whole system. We believe that the approach
can be used as a general-purpose object detection toolbox. We tested the
algorithm on the UIUC car dataset and achieved an outstanding result: the
accuracy of our method was 97%, while previously reported results showed an
accuracy of up to 91%. We strongly believe that an experiment on a larger
dataset can show the advantage of using deep models over shallow ones. | [
"cs.CV"
] |
Recently, deep reinforcement learning (DRL) methods have achieved impressive
performance on tasks in a variety of domains. However, neural network policies
produced with DRL methods are not human-interpretable and often have difficulty
generalizing to novel scenarios. To address these issues, prior works explore
learning programmatic policies that are more interpretable and structured for
generalization. Yet, these works either employ limited policy representations
(e.g. decision trees, state machines, or predefined program templates) or
require stronger supervision (e.g. input/output state pairs or expert
demonstrations). We present a framework that instead learns to synthesize a
program, which details the procedure to solve a task in a flexible and
expressive manner, solely from reward signals. To alleviate the difficulty of
learning to compose programs to induce the desired agent behavior from scratch,
we propose to first learn a program embedding space that continuously
parameterizes diverse behaviors in an unsupervised manner and then search over
the learned program embedding space to yield a program that maximizes the
return for a given task. Experimental results demonstrate that the proposed
framework not only learns to reliably synthesize task-solving programs but also
outperforms DRL and program synthesis baselines while producing interpretable
and more generalizable policies. We also justify the necessity of the proposed
two-stage learning scheme as well as analyze various methods for learning the
program embedding. | [
"cs.LG",
"cs.AI",
"cs.PL"
] |
With the dramatic increase of dimensions in the data representation,
extracting latent low-dimensional features becomes of the utmost importance for
efficient classification. Aiming at the problems of unclear margin
representation and difficulty in revealing the data manifold structure in most
of the existing linear discriminant methods, we propose a new discriminant
feature extraction framework, namely Robust Locality-Aware Regression (RLAR).
In our model, we introduce a retargeted regression to perform the marginal
representation learning adaptively instead of using the general average
inter-class margin. Besides, we formulate a new strategy for enhancing the
local intra-class compactness of the data manifold, which can achieve the joint
learning of locality-aware graph structure and desirable projection matrix. To
alleviate the disturbance of outliers and prevent overfitting, we measure the
regression term and locality-aware term together with the regularization term
by the L2,1 norm. Further, forcing row sparsity on the projection matrix
through the L2,1 norm couples feature selection with feature
extraction. Then, we derive an effective iterative algorithm for solving the
proposed model. The experimental results over a range of UCI data sets and
other benchmark databases demonstrate that the proposed RLAR outperforms some
state-of-the-art approaches. | [
"cs.LG",
"stat.ML"
] |
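The L2,1 norm that drives the row sparsity is simply the sum of the L2 norms of the rows, so penalizing it zeroes out whole rows of the projection matrix, which is exactly what couples feature selection with feature extraction. A small sketch:

```python
import numpy as np

def l21_norm(W):
    """L2,1 norm: sum of the L2 norms of the rows of W. Minimizing it
    drives entire rows to zero, i.e. discards whole input features."""
    return np.linalg.norm(W, axis=1).sum()

W = np.array([[3.0, 4.0], [0.0, 0.0], [1.0, 0.0]])
print(l21_norm(W))  # 5.0 + 0.0 + 1.0 = 6.0
```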
Accurate depth estimation from images is a fundamental task in many
applications including scene understanding and reconstruction. Existing
solutions for depth estimation often produce blurry approximations of low
resolution. This paper presents a convolutional neural network for computing a
high-resolution depth map given a single RGB image with the help of transfer
learning. Following a standard encoder-decoder architecture, we leverage
features extracted using high performing pre-trained networks when initializing
our encoder along with augmentation and training strategies that lead to more
accurate results. We show how, even for a very simple decoder, our method is
able to achieve detailed high-resolution depth maps. Our network, with fewer
parameters and training iterations, outperforms the state of the art on two
datasets and also produces qualitatively better results that capture object
boundaries more faithfully. Code and corresponding pre-trained weights are made
publicly available. | [
"cs.CV"
] |
Insurance companies must manage millions of claims per year. While most of
these claims are non-fraudulent, fraud detection is core for insurance
companies. The ultimate goal is a predictive model to single out the fraudulent
claims and pay out the non-fraudulent ones immediately. Modern machine learning
methods are well suited for this kind of problem. Health care claims often have
a data structure that is hierarchical and of variable length. We propose one
model based on piecewise feed forward neural networks (deep learning) and
another model based on self-attention neural networks for the task of claim
management. We show that the proposed methods outperform bag-of-words based
models, hand designed features, and models based on convolutional neural
networks, on a data set of two million health care claims. The proposed
self-attention method performs the best. | [
"cs.LG",
"econ.EM",
"stat.ML"
] |
Driven by recent vision and graphics applications such as image segmentation
and object recognition, computing pixel-accurate saliency values to uniformly
highlight foreground objects becomes increasingly important. In this paper, we
propose a unified framework called PISA, which stands for Pixelwise Image
Saliency Aggregating various bottom-up cues and priors. It generates spatially
coherent yet detail-preserving, pixel-accurate and fine-grained saliency, and
overcomes the limitations of previous methods, which rely on homogeneous
superpixel-based and color-only treatment. PISA aggregates multiple saliency
cues in a global context such as complementary color and structure contrast
measures with their spatial priors in the image domain. The saliency confidence
is further jointly modeled with a neighborhood consistence constraint into an
energy minimization formulation, in which each pixel will be evaluated with
multiple hypothetical saliency levels. Instead of using global discrete
optimization methods, we employ the cost-volume filtering technique to solve
our formulation, assigning the saliency levels smoothly while preserving the
edge-aware structure details. In addition, a faster version of PISA is
developed using a gradient-driven image sub-sampling strategy to greatly
improve the runtime efficiency while keeping comparable detection accuracy.
Extensive experiments on a number of public datasets suggest that PISA
convincingly outperforms other state-of-the-art approaches. In addition, with
this work we also create a new dataset containing $800$ commodity images for
evaluating saliency detection. The dataset and source code of PISA can be
downloaded at http://vision.sysu.edu.cn/project/PISA/ | [
"cs.CV",
"68U10"
] |
Yield and its prediction is one of the most important tasks in grapevine
breeding purposes and vineyard management. Commonly, this trait is estimated
manually right before harvest by extrapolation, which is mostly
labor-intensive, destructive and inaccurate. In the present study an automated
image-based workflow was developed quantifying inflorescences and single
flowers in unprepared field images of grapevines, i.e. no artificial background
or light was applied. It is a novel approach for non-invasive, inexpensive and
objective phenotyping with high-throughput.
First, image regions depicting inflorescences were identified and localized.
This was done by segmenting the images into the classes "inflorescence" and
"non-inflorescence" using a Fully Convolutional Network (FCN). Efficient image
segmentation hereby is the most challenging step regarding the small geometry
and dense distribution of flowers (several hundred flowers per inflorescence),
similar color of all plant organs in the fore- and background as well as the
circumstance that only approximately 5% of an image show inflorescences. The
trained FCN achieved a mean Intersection Over Union (IOU) of 87.6% on the test
data set. Finally, individual flowers were extracted from the
"inflorescence"-areas using Circular Hough Transform. The flower extraction
achieved a recall of 80.3% and a precision of 70.7% using the segmentation
derived by the trained FCN model.
Summarized, the presented approach is a promising strategy for predicting
yield potential automatically at the earliest stage of grapevine development,
applicable to objective monitoring and evaluation of breeding material, genetic
repositories or commercial vineyards. | [
"cs.CV"
] |
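Flower extraction via the Circular Hough Transform can be sketched with OpenCV; the file name and parameter values below are illustrative, not the study's settings.

```python
import cv2

# Detect roughly circular single flowers inside an "inflorescence" crop
# using the Circular Hough Transform (parameters are illustrative).
img = cv2.imread("inflorescence_crop.png", cv2.IMREAD_GRAYSCALE)
img = cv2.medianBlur(img, 5)  # suppress noise before the transform
circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1, minDist=8,
                           param1=80, param2=20, minRadius=3, maxRadius=15)
if circles is not None:
    print(f"detected {circles.shape[1]} candidate flowers")
```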
Our work explores temporal self-supervision for GAN-based video generation
tasks. While adversarial training successfully yields generative models for a
variety of areas, temporal relationships in the generated data are much less
explored. Natural temporal changes are crucial for sequential generation tasks,
e.g. video super-resolution and unpaired video translation. For the former,
state-of-the-art methods often favor simpler norm losses such as $L^2$ over
adversarial training. However, their averaging nature easily leads to
temporally smooth results with an undesirable lack of spatial detail. For
unpaired video translation, existing approaches modify the generator networks
to form spatio-temporal cycle consistencies. In contrast, we focus on improving
learning objectives and propose a temporally self-supervised algorithm. For
both tasks, we show that temporal adversarial learning is key to achieving
temporally coherent solutions without sacrificing spatial detail. We also
propose a novel Ping-Pong loss to improve the long-term temporal consistency.
It effectively prevents recurrent networks from accumulating artifacts
temporally without suppressing detailed features. Additionally, we propose a
first set of metrics to quantitatively evaluate the accuracy as well as the
perceptual quality of the temporal evolution. A series of user studies confirm
the rankings computed with these metrics. Code, data, models, and results are
provided at https://github.com/thunil/TecoGAN. The project page
https://ge.in.tum.de/publications/2019-tecogan-chu/ contains supplemental
materials. | [
"cs.CV",
"cs.LG"
] |
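The Ping-Pong idea, generating a sequence forward and then backward over the same inputs and penalizing disagreement between the two passes, can be sketched as follows; the exact weighting in the paper may differ.

```python
import torch

def ping_pong_loss(frames_fwd, frames_bwd):
    """Sketch of a Ping-Pong loss: frames_fwd and frames_bwd are lists of
    (B, C, H, W) frames from the forward and (re-ordered) backward passes
    over the same inputs. Matching them penalizes recurrent artifacts that
    would otherwise accumulate over time."""
    return sum(torch.mean(torch.abs(f - b))
               for f, b in zip(frames_fwd, frames_bwd)) / len(frames_fwd)
```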
We introduce a new measure to evaluate the transferability of representations
learned by classifiers. Our measure, the Log Expected Empirical Prediction
(LEEP), is simple and easy to compute: when given a classifier trained on a
source data set, it only requires running the target data set through this
classifier once. We analyze the properties of LEEP theoretically and
demonstrate its effectiveness empirically. Our analysis shows that LEEP can
predict the performance and convergence speed of both transfer and
meta-transfer learning methods, even for small or imbalanced data. Moreover,
LEEP outperforms recently proposed transferability measures such as negative
conditional entropy and H scores. Notably, when transferring from ImageNet to
CIFAR100, LEEP can achieve up to 30% improvement compared to the best competing
method in terms of the correlations with actual transfer accuracy. | [
"cs.LG",
"cs.CV",
"stat.ML"
] |
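Because LEEP needs only one pass of the target data through the source classifier, it can be computed in a few lines. A sketch of the computation as commonly defined: form the empirical joint over (target label, source prediction), convert it to a conditional, and average the log expected prediction of the true labels.

```python
import numpy as np

def leep(source_probs, target_labels, n_target_classes):
    """Log Expected Empirical Prediction (sketch).
    source_probs: (n, z) source-label distributions predicted on the target
    data; target_labels: (n,) integer target labels."""
    n = source_probs.shape[0]
    # Empirical joint P(y, z) over target label y and source label z.
    joint = np.zeros((n_target_classes, source_probs.shape[1]))
    for theta, y in zip(source_probs, target_labels):
        joint[y] += theta / n
    cond = joint / joint.sum(axis=0, keepdims=True)  # P(y | z)
    eep = source_probs @ cond.T                      # expected prediction
    return np.mean(np.log(eep[np.arange(n), target_labels]))
```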
Spectral clustering has become one of the most popular algorithms in data
clustering and community detection. We study the performance of classical
two-step spectral clustering via the graph Laplacian to learn the stochastic
block model. Our aim is to answer the following question: when is spectral
clustering via the graph Laplacian able to achieve strong consistency, i.e.,
the exact recovery of the underlying hidden communities? Our work provides an
entrywise analysis (an $\ell_{\infty}$-norm perturbation bound) of the Fiedler
eigenvector of both the unnormalized and the normalized Laplacian associated
with the adjacency matrix sampled from the stochastic block model. We prove
that spectral clustering is able to achieve exact recovery of the planted
community structure under conditions that match the information-theoretic
limits. | [
"stat.ML",
"cs.LG",
"cs.SI"
] |
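For two communities, the two-step spectral clustering analyzed here reduces to thresholding the Fiedler eigenvector of the graph Laplacian. A minimal sketch with the unnormalized Laplacian:

```python
import numpy as np

def spectral_bipartition(A):
    """Two-step spectral clustering via the unnormalized graph Laplacian:
    compute the Fiedler eigenvector (second-smallest eigenvalue) and
    recover the two communities from its signs."""
    L = np.diag(A.sum(axis=1)) - A     # unnormalized Laplacian
    vals, vecs = np.linalg.eigh(L)     # eigenvalues in ascending order
    fiedler = vecs[:, 1]
    return (fiedler > 0).astype(int)   # community assignment
```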
Visual object tracking is a fundamental and time-critical vision task. Recent
years have seen many shallow tracking methods based on real-time pixel-based
correlation filters, as well as deep methods that have top performance but need
a high-end GPU. In this paper, we learn to improve the speed of deep trackers
without losing accuracy. Our fundamental insight is to take an adaptive
approach, where easy frames are processed with cheap features (such as pixel
values), while challenging frames are processed with invariant but expensive
deep features. We formulate the adaptive tracking problem as a decision-making
process, and learn an agent to decide whether to locate objects with high
confidence on an early layer, or continue processing subsequent layers of a
network. This significantly reduces the feed-forward cost for easy frames with
distinct or slow-moving objects. We train the agent offline in a reinforcement
learning fashion, and further demonstrate that learning all deep layers (so as
to provide good features for adaptive tracking) can lead to near real-time
average tracking speed of 23 fps on a single CPU while achieving
state-of-the-art performance. Perhaps most tellingly, our approach provides a
100X speedup for almost 50% of the time, indicating the power of an adaptive
approach. | [
"cs.CV"
] |
The chest X-ray is one of the most commonly accessible radiological
examinations for screening and diagnosis of many lung diseases. A tremendous
number of X-ray imaging studies accompanied by radiological reports are
accumulated and stored in many modern hospitals' Picture Archiving and
Communication Systems (PACS). On the other hand, it is still an open question
how this type of hospital-size knowledge database containing invaluable imaging
informatics (i.e., loosely labeled) can be used to facilitate the data-hungry
deep learning paradigms in building truly large-scale high precision
computer-aided diagnosis (CAD) systems.
In this paper, we present a new chest X-ray database, namely "ChestX-ray8",
which comprises 108,948 frontal-view X-ray images of 32,717 unique patients
with the text-mined eight disease image labels (where each image can have
multi-labels), from the associated radiological reports using natural language
processing. Importantly, we demonstrate that these commonly occurring thoracic
diseases can be detected and even spatially-located via a unified
weakly-supervised multi-label image classification and disease localization
framework, which is validated using our proposed dataset. Although the initial
quantitative results are promising as reported, deep convolutional neural
network based "reading chest X-rays" (i.e., recognizing and locating the common
disease patterns trained with only image-level labels) remains a strenuous task
for fully-automated high precision CAD systems. Data download link:
https://nihcc.app.box.com/v/ChestXray-NIHCC | [
"cs.CV",
"cs.CL"
] |
Graph-structured data such as social networks, functional brain networks,
gene regulatory networks, communications networks have brought the interest in
generalizing deep learning techniques to graph domains. In this paper, we are
interested in designing neural networks for graphs of variable size in order
to solve learning problems such as vertex classification, graph classification,
graph regression, and graph generative tasks. Most existing works have focused
on recurrent neural networks (RNNs) to learn meaningful representations of
graphs, and more recently new convolutional neural networks (ConvNets) have
been introduced. In this work, we want to compare rigorously these two
fundamental families of architectures to solve graph learning tasks. We review
existing graph RNN and ConvNet architectures, and propose natural extensions of
LSTMs and ConvNets to graphs of arbitrary size. Then, we design a set of
analytically controlled experiments on two basic graph problems, i.e. subgraph
matching and graph clustering, to test the different architectures. Numerical
results show that the proposed graph ConvNets are 3-17% more accurate and
1.5-4x faster than graph RNNs. Graph ConvNets are also 36% more accurate than
variational (non-learning) techniques. Finally, the most effective graph
ConvNet architecture uses gated edges and residuality. Residuality plays an
essential role in learning multi-layer architectures, providing a 10% gain in
performance. | [
"cs.LG",
"stat.ML"
] |
Detection and recognition of facial images is an intricate problem which has
garnered much attention in recent years due to its ever-increasing applications
in numerous fields, and finding a robust solution to it remains a challenge.
Its scope extends to security, commercial and law enforcement applications.
More than a decade of research on this subject has brought about remarkable
developments, with applications such as human-computer interaction, biometric
analysis and content-based coding of images, videos and surveillance: a trivial
task for the brain, but cumbersome to imitate artificially. The commonalities
among faces do pose a problem on various grounds, but features such as skin
color and gender differentiate one person from another. In this paper, facial
detection is carried out using the Viola-Jones algorithm, and face recognition
is done using a Back Propagation Neural Network (BPNN). | [
"cs.CV"
] |
In this paper, we consider hybrid parallelism -- a paradigm that employs both
Data Parallelism (DP) and Model Parallelism (MP) -- to scale distributed
training of large recommendation models. We propose a compression framework
called Dynamic Communication Thresholding (DCT) for communication-efficient
hybrid training. DCT filters the entities to be communicated across the network
through a simple hard-thresholding function, allowing only the most relevant
information to pass through. For communication-efficient DP, DCT compresses the
parameter gradients sent to the parameter server during model synchronization.
The threshold is updated only once every few thousand iterations to reduce the
computational overhead of compression. For communication-efficient MP, DCT
incorporates a novel technique to compress the activations and gradients sent
across the network during the forward and backward propagation, respectively.
This is done by identifying and updating only the most relevant neurons of the
neural network for each training sample in the data. We evaluate DCT on
publicly available natural language processing and recommender models and
datasets, as well as recommendation systems used in production at Facebook. DCT
reduces communication by at least $100\times$ and $20\times$ during DP and MP,
respectively. The algorithm has been deployed in production, and it improves
end-to-end training time for a state-of-the-art industrial recommender model by
37\%, without any loss in performance. | [
"cs.LG",
"cs.DC",
"stat.ML"
] |
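A minimal sketch of the hard-thresholding compression step named in the DCT abstract above. The function names and the sparse (indices, values) wire format are assumptions; the actual system compresses gradients and activations inside a distributed training stack:

```python
import torch

def hard_threshold_compress(grad: torch.Tensor, threshold: float):
    """Keep only entries whose magnitude exceeds the threshold;
    only (indices, values) would be sent over the network."""
    mask = grad.abs() > threshold
    idx = mask.nonzero(as_tuple=False)  # positions of retained entries
    vals = grad[mask]                   # retained values, same ordering
    return idx, vals

def decompress(idx, vals, shape):
    """Reconstruct a dense tensor with zeros at dropped positions."""
    out = torch.zeros(shape)
    out[tuple(idx.t())] = vals
    return out

g = torch.randn(4, 8)
idx, vals = hard_threshold_compress(g, threshold=1.0)
g_hat = decompress(idx, vals, g.shape)
```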
Neural networks (NN) are considered as black-boxes due to the lack of
explainability and transparency of their decisions. This significantly hampers
their deployment in environments where explainability is essential along with
the accuracy of the system. Recently, significant efforts have been made for
the interpretability of these deep networks with the aim to open up the
black-box. However, most of these approaches are specifically developed for
visual modalities. In addition, the interpretations provided by these systems
require expert knowledge and understanding for intelligibility. This indicates
a vital gap between the explainability provided by the systems and the novice
user. To bridge this gap, we present a novel framework, the Time-Series
eXplanation (TSXplain) system, which produces a natural language based
explanation of the decision taken by a NN. It uses the extracted statistical
features to describe the decision of a NN, merging the deep learning world with
that of statistics. The two-level explanation provides ample description of the
decision made by the network to aid an expert as well as a novice user alike.
Our survey and reliability assessment test confirm that the generated
explanations are meaningful and correct. We believe that generating natural
language based descriptions of the network's decisions is a big step towards
opening up the black-box. | [
"cs.LG",
"cs.AI",
"cs.CL"
] |
The claims data, containing medical codes, services information, and incurred
expenditure, can be a good resource for estimating an individual's health
condition and medical risk level. In this study, we developed Transformer-based
Multimodal AutoEncoder (TMAE), an unsupervised learning framework that can
learn efficient patient representation by encoding meaningful information from
the claims data. TMAE is motivated by the practical needs in healthcare to
stratify patients into different risk levels for improving care delivery and
management. Compared to previous approaches, TMAE is able to 1) model
inpatient, outpatient, and medication claims collectively, 2) handle irregular
time intervals between medical events, 3) alleviate the sparsity issue of the
rare medical codes, and 4) incorporate medical expenditure information. We
trained TMAE using a real-world pediatric claims dataset containing more than
600,000 patients and compared its performance with various approaches in two
clustering tasks. Experimental results demonstrate that TMAE has superior
performance compared to all baselines. Multiple downstream applications are
also conducted to illustrate the effectiveness of our framework. The promising
results confirm that the TMAE framework is scalable to large claims data and is
able to generate efficient patient embeddings for risk stratification and
analysis. | [
"cs.LG",
"cs.AI"
] |
When a human asks questions online, or when a conversational virtual agent
asks humans questions, questions that trigger emotions or contain details may
be more likely to get responses or answers. We explore how to automatically
rewrite natural language questions to improve the response rate from people. In
particular, a new Visual Question Rewriting (VQR) task is introduced to
explore how visual information can be used to improve the new questions. A
dataset containing around 4K triples of bland questions, attractive questions,
and images is collected. We develop baseline sequence-to-sequence models
and more advanced transformer-based models, which take a bland question and a
related image as input and output a rewritten question that is expected to be
more attractive. Offline experiments and Mechanical Turk-based evaluations show
that it is possible to rewrite bland questions in a more detailed and
attractive way to increase the response rate, and images can be helpful. | [
"cs.CV",
"cs.AI",
"cs.LG",
"I.2.10; I.2.7"
] |
The manpower scheduling problem is a critical combinatorial
optimization problem. Researching solutions to scheduling problems can improve
the efficiency of companies, hospitals, and other work units. This paper
proposes a new model combined with deep learning to solve the multi-shift
manpower scheduling problem based on existing research. This model first
optimizes the objective function value under the current constraints to find an
initial employee arrangement plan. It then uses a scheduling table generation
algorithm to obtain the scheduling result in a short time. Moreover, its most
prominent feature is a time-series-based neural network training method for
solving long-term, long-period scheduling tasks and obtaining the manpower
arrangement. The
selection criteria of the neural network and the training process are also
described in this paper. We demonstrate that our model can make precise
forecasts based on the improved neural networks. This paper also discusses
the challenges of the neural network training process and reports enlightening
results once the arrangement plan is obtained. Our research shows that neural
networks and deep learning strategies have the potential to solve similar
problems effectively. | [
"cs.LG"
] |
Visual correspondence is a fundamental building block on the way to building
assistive tools for hand-drawn animation. However, while a large body of work
has focused on learning visual correspondences at the pixel-level, few
approaches have emerged to learn correspondence at the level of line enclosures
(segments) that naturally occur in hand-drawn animation. Exploiting this
structure in animation has numerous benefits: it avoids the intractable memory
complexity of attending to individual pixels in high resolution images and
enables the use of real-world animation datasets that contain correspondence
information at the level of per-segment colors. To that end, we propose the
Animation Transformer (AnT) which uses a transformer-based architecture to
learn the spatial and visual relationships between segments across a sequence
of images. AnT enables practical ML-assisted colorization for professional
animation workflows and is publicly accessible as a creative tool in Cadmium. | [
"cs.CV",
"cs.AI",
"cs.GR"
] |
Automated Computer Aided diagnostic tools can be used for the early detection
of glaucoma to prevent irreversible vision loss. In this work, we present a
Multi-task Convolutional Neural Network (CNN) that jointly segments the Optic
Disc (OD), Optic Cup (OC) and predicts the presence of glaucoma in color fundus
images. The CNN utilizes a combination of image appearance features and
structural features obtained from the OD-OC segmentation to obtain a robust
prediction. The use of fewer network parameters and the sharing of the CNN
features for multiple related tasks ensures the good generalizability of the
architecture, allowing it to be trained on small training sets. The
cross-testing performance of the proposed method on an independent validation
set acquired using a different camera and image resolution was found to be good
with an average Dice score of 0.92 for OD, 0.84 for OC, and an AUC of 0.95 on the
task of glaucoma classification illustrating its potential as a mass screening
tool for the early detection of glaucoma. | [
"cs.CV",
"cs.LG"
] |
Reinforcement learning is a promising paradigm for solving sequential
decision-making problems, but low data efficiency and weak generalization
across tasks are bottlenecks in real-world applications. Model-based meta
reinforcement learning addresses these issues by learning dynamics and
leveraging knowledge from prior experience. In this paper, we take a closer
look at this framework, and propose a new Thompson-sampling based approach that
consists of a new model to identify task dynamics together with an amortized
policy optimization step. We show that our model, called a graph structured
surrogate model (GSSM), outperforms state-of-the-art methods in predicting
environment dynamics. Additionally, our approach is able to obtain high
returns, while allowing fast execution during deployment by avoiding test time
policy gradient optimization. | [
"cs.LG"
] |
Spectral clustering is a popular method for community detection in network
graphs: starting from a matrix representation of the graph, the nodes are
clustered on a low dimensional projection obtained from a truncated spectral
decomposition of the matrix. Estimating correctly the number of communities and
the dimension of the reduced latent space is critical for good performance of
spectral clustering algorithms. Furthermore, many real-world graphs, such as
enterprise computer networks studied in cyber-security applications, often
display heterogeneous within-community degree distributions. Such heterogeneous
degree distributions are usually not well captured by standard spectral
clustering algorithms. In this article, a novel spectral clustering algorithm
is proposed for community detection under the degree-corrected stochastic
blockmodel. The proposed method is based on a transformation of the spectral
embedding to spherical coordinates, and a novel modelling assumption in the
transformed space. The method allows for simultaneous and automated selection
of the number of communities and the latent dimension for spectral embeddings
of graphs with uneven node degrees. Results show improved performance over
competing methods in representing computer networks. | [
"stat.ML",
"cs.LG"
] |
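A minimal sketch of the spherical-coordinates transformation of a spectral embedding described above. The eigendecomposition-based embedding and the specific angle convention are assumptions; the paper's model in the transformed space is not reproduced here:

```python
import numpy as np

def to_spherical(X):
    """Map each row of the (n, d) spectral embedding to spherical
    coordinates (r, theta_1, ..., theta_{d-1}); the angle convention
    here is one common choice, assumed for the sketch."""
    n, d = X.shape
    r = np.linalg.norm(X, axis=1)
    theta = np.zeros((n, d - 1))
    for k in range(d - 1):
        tail = np.linalg.norm(X[:, k:], axis=1)  # norm of trailing coords
        theta[:, k] = np.arccos(np.clip(X[:, k] / np.maximum(tail, 1e-12),
                                        -1.0, 1.0))
    return r, theta

A = np.random.rand(50, 50); A = (A + A.T) / 2   # toy symmetric "adjacency"
vals, vecs = np.linalg.eigh(A)
X = vecs[:, -5:] * vals[-5:]                    # 5-dim spectral embedding
r, theta = to_spherical(X)                      # cluster on theta, not X
```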
Imaging systems are increasingly used as input to convolutional neural
networks (CNN) for object detection; we would like to design cameras that are
optimized for this purpose. It is impractical to build different cameras and
then acquire and label the necessary data for every potential camera design;
creating software simulations of the camera in context (soft prototyping) is
the only realistic approach. We implemented soft-prototyping tools that can
quantitatively simulate image radiance and camera designs to create realistic
images that are input to a convolutional neural network for car detection. We
used these methods to quantify the effect that critical hardware components
(pixel size), sensor control (exposure algorithms) and image processing (gamma
and demosaicing algorithms) have upon average precision of car detection. We
quantify (a) the relationship between pixel size and the ability to detect cars
at different distances, (b) the penalty for choosing a poor exposure duration,
and (c) the ability of the CNN to perform car detection for a variety of
post-acquisition processing algorithms. These results show that the optimal
choices for car detection are not constrained by the same metrics used for
image quality in consumer photography. It is better to evaluate camera designs
for CNN applications using soft prototyping with task-specific metrics rather
than consumer photography metrics. | [
"cs.CV"
] |
Robust point cloud registration in real-time is an important prerequisite for
many mapping and localization algorithms. Traditional methods like ICP tend to
fail without good initialization, with insufficient overlap, or in the presence of
dynamic objects. Modern deep learning based registration approaches present
much better results, but suffer from a heavy run-time. We overcome these
drawbacks by introducing StickyPillars, a fast, accurate and extremely robust
deep middle-end 3D feature matching method on point clouds. It uses graph
neural networks and performs context aggregation on sparse 3D key-points with
the aid of transformer based multi-head self and cross-attention. The network
output is used as the cost for an optimal transport problem whose solution
yields the final matching probabilities. The system does not rely on hand
crafted feature descriptors or heuristic matching strategies. We present
state-of-the-art accuracy results on the registration problem demonstrated on
the KITTI dataset while being four times faster than leading deep methods.
Furthermore, we integrate our matching system into a LiDAR odometry pipeline
yielding the most accurate results on the KITTI odometry dataset. Finally, we
demonstrate robustness on KITTI odometry. Our method remains stable in accuracy
where state-of-the-art procedures fail on frame drops and higher speeds. | [
"cs.CV"
] |
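A generic sketch of the optimal-transport matching head mentioned in the StickyPillars abstract above: Sinkhorn normalization turns a score matrix into soft assignment probabilities. The square-matrix setting and the absence of dustbin rows/columns for unmatched points are simplifying assumptions:

```python
import torch

def sinkhorn(scores, n_iters=50):
    """Alternate row/column normalization in log space to obtain an
    (approximately) doubly-stochastic matching matrix."""
    log_P = scores
    for _ in range(n_iters):
        log_P = log_P - torch.logsumexp(log_P, dim=1, keepdim=True)  # rows
        log_P = log_P - torch.logsumexp(log_P, dim=0, keepdim=True)  # cols
    return log_P.exp()

scores = torch.randn(6, 6)   # network output: pairwise matching scores
P = sinkhorn(scores)         # soft matching probabilities between keypoints
```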
We study active object tracking, where a tracker takes visual observations
(i.e., frame sequences) as input and produces the corresponding camera control
signals as output (e.g., move forward, turn left, etc.). Conventional methods
tackle tracking and camera control tasks separately, and the resulting system
is difficult to tune jointly. These methods also require significant human
efforts for image labeling and expensive trial-and-error system tuning in the
real world. To address these issues, we propose, in this paper, an end-to-end
solution via deep reinforcement learning. A ConvNet-LSTM function approximator
is adopted for the direct frame-to-action prediction. We further propose an
environment augmentation technique and a customized reward function, which are
crucial for successful training. The tracker trained in simulators (ViZDoom and
Unreal Engine) demonstrates good generalization behaviors in the case of unseen
object moving paths, unseen object appearances, unseen backgrounds, and
distracting objects. The system is robust and can restore tracking after
occasional loss of the target being tracked. We also find that the tracking
ability, obtained solely from simulators, can potentially transfer to
real-world scenarios. We demonstrate successful examples of such transfer, via
experiments over the VOT dataset and the deployment of a real-world robot using
the proposed active tracker trained in simulation. | [
"cs.CV"
] |
The current understanding of deep neural networks can only partially explain
how input structure, network parameters and optimization algorithms jointly
contribute to achieve the strong generalization power that is typically
observed in many real-world applications. In order to improve the comprehension
and interpretability of deep neural networks, we here introduce a novel
theoretical framework based on the compositional structure of piecewise linear
activation functions. By defining a directed acyclic graph representing the
composition of activation patterns through the network layers, it is possible
to characterize the instances of the input data with respect to both the
predicted label and the specific (linear) transformation used to perform
predictions. Preliminary tests on the MNIST dataset show that our method can
group input instances with regard to their similarity in the internal
representation of the neural network, providing an intuitive measure of input
complexity. | [
"cs.LG",
"stat.ML"
] |
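A minimal sketch of the activation-pattern idea from the abstract above: two inputs with identical ReLU on/off patterns are processed by the same composed linear map, so patterns can be used to group instances. The simple feed-forward architecture is an assumption:

```python
import torch
import torch.nn as nn

def activation_pattern(model, x):
    """Record the on/off pattern of every ReLU along the forward pass."""
    pattern = []
    h = x
    for layer in model:
        h = layer(h)
        if isinstance(layer, nn.ReLU):
            pattern.append((h > 0).flatten())
    return torch.cat(pattern)

net = nn.Sequential(nn.Linear(784, 128), nn.ReLU(),
                    nn.Linear(128, 64), nn.ReLU(),
                    nn.Linear(64, 10))
x1, x2 = torch.randn(1, 784), torch.randn(1, 784)
# inputs sharing a pattern fall in the same linear region of the network
same = torch.equal(activation_pattern(net, x1), activation_pattern(net, x2))
```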
We present a Bayesian approach to identify optimal transformations that map
model input points to low dimensional latent variables. The "projection"
mapping consists of an orthonormal matrix that is considered a priori unknown
and needs to be inferred jointly with the Gaussian process (GP) parameters,
conditioned on the
available training data. The proposed Bayesian inference scheme relies on a
two-step iterative algorithm that samples from the marginal posteriors of the
GP parameters and the projection matrix respectively, both using Markov Chain
Monte Carlo (MCMC) sampling. In order to take into account the orthogonality
constraints imposed on the orthonormal projection matrix, a Geodesic Monte
Carlo sampling algorithm is employed, that is suitable for exploiting
probability measures on manifolds. We extend the proposed framework to
multi-fidelity GP models, including the scenario of training multiple
outputs together. We validate our framework on three synthetic problems with a
known lower-dimensional subspace. The benefits of our proposed framework are
illustrated on the computationally challenging three-dimensional aerodynamic
optimization of a last-stage blade for an industrial gas turbine, where we
study the effect of an 85-dimensional airfoil shape parameterization on two
output quantities of interest, specifically on the aerodynamic efficiency and
the degree of reaction. | [
"stat.ML",
"cs.LG",
"stat.CO"
] |
Transformer models have achieved great progress on computer vision tasks
recently. The rapid development of vision transformers is mainly driven by
their high representational ability for extracting informative features from
input images. However, the mainstream transformer models are designed with deep
architectures, and the feature diversity will be continuously reduced as the
depth increases, i.e., feature collapse. In this paper, we theoretically
analyze the feature collapse phenomenon and study the relationship between
shortcuts and feature diversity in these transformer models. Then, we present
an augmented shortcut scheme, which inserts additional paths with learnable
parameters in parallel on the original shortcuts. To save the computational
costs, we further explore an efficient approach that uses the block-circulant
projection to implement augmented shortcuts. Extensive experiments conducted on
benchmark datasets demonstrate the effectiveness of the proposed method, which
brings an accuracy increase of about 1% for state-of-the-art visual transformers
without noticeably increasing their parameters and FLOPs. | [
"cs.CV",
"cs.LG"
] |
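A minimal sketch of an augmented shortcut as described above: learnable paths added in parallel with the identity shortcut to preserve feature diversity. The use of plain linear maps (instead of the block-circulant projection mentioned in the abstract) and the shapes are assumptions:

```python
import torch
import torch.nn as nn

class AugmentedShortcut(nn.Module):
    """Sketch: output = block_out + x + sum_i T_i(x), where T_i are
    learnable parallel paths alongside the identity shortcut."""
    def __init__(self, dim, num_paths=2):
        super().__init__()
        self.paths = nn.ModuleList(nn.Linear(dim, dim)
                                   for _ in range(num_paths))

    def forward(self, x, block_out):
        # x: input tokens (B, N, dim); block_out: attention/MLP output
        aug = sum(path(x) for path in self.paths)
        return block_out + x + aug

x = torch.randn(2, 16, 64)
block_out = torch.randn(2, 16, 64)
y = AugmentedShortcut(64)(x, block_out)
```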
Inducing causal relationships from observations is a classic problem in
machine learning. Most work in causality starts from the premise that the
causal variables themselves are observed. However, for AI agents such as robots
trying to make sense of their environment, the only observables are low-level
variables like pixels in images. To generalize well, an agent must induce
high-level variables, particularly those which are causal or are affected by
causal variables. A central goal for AI and causality is thus the joint
discovery of abstract representations and causal structure. However, we note
that existing environments for studying causal induction are poorly suited for
this objective because they have complicated task-specific causal graphs which
are impossible to manipulate parametrically (e.g., number of nodes, sparsity,
causal chain length, etc.). In this work, our goal is to facilitate research in
learning representations of high-level variables as well as causal structures
among them. In order to systematically probe the ability of methods to identify
these variables and structures, we design a suite of benchmarking RL
environments. We evaluate various representation learning algorithms from the
literature and find that explicitly incorporating structure and modularity in
models can help causal induction in model-based reinforcement learning. | [
"stat.ML",
"cs.LG"
] |
Deep neural networks often degrade significantly when training data suffer
from class imbalance problems. Existing approaches, e.g., re-sampling and
re-weighting, commonly address this issue by rearranging the label distribution
of training data to train the networks fitting well to the implicit balanced
label distribution. However, most of them hinder the representative ability of
learned features due to insufficient use of intra/inter-sample information of
training data. To address this issue, we propose meta feature modulator (MFM),
a meta-learning framework to model the difference between the long-tailed
training data and the balanced meta data from the perspective of representation
learning. Concretely, we employ learnable hyper-parameters (dubbed modulation
parameters) to adaptively scale and shift the intermediate features of
classification networks, and the modulation parameters are optimized together
with the classification network parameters guided by a small amount of balanced
meta data. We further design a modulator network to guide the generation of the
modulation parameters, and such a meta-learner can be readily adapted to train
the classification network on other long-tailed datasets. Extensive experiments
on benchmark vision datasets substantiate the superiority of our approach on
long-tailed recognition tasks beyond other state-of-the-art methods. | [
"cs.CV"
] |
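A minimal sketch of the feature modulation mechanism in the MFM abstract above: learnable per-channel scale and shift applied to intermediate features, which would be meta-optimized on a small balanced set. The parameter shapes and the CNN feature layout are assumptions:

```python
import torch
import torch.nn as nn

class FeatureModulator(nn.Module):
    """Adaptively scale and shift intermediate CNN features; in the full
    method these parameters are guided by balanced meta data."""
    def __init__(self, channels):
        super().__init__()
        self.scale = nn.Parameter(torch.ones(1, channels, 1, 1))
        self.shift = nn.Parameter(torch.zeros(1, channels, 1, 1))

    def forward(self, feat):
        return feat * self.scale + self.shift

feat = torch.randn(8, 32, 14, 14)      # intermediate features (B, C, H, W)
modulated = FeatureModulator(32)(feat)
```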
In this work, we address the problem of 3D object detection from point cloud
data in real time. For autonomous vehicles to work, it is very important for
the perception component to detect the real world objects with both high
accuracy and fast inference. We propose a novel neural network architecture
along with the training and optimization details for detecting 3D objects in
point cloud data. We compare the results with different backbone architectures
including the standard ones like VGG, ResNet, Inception with our backbone. Also
we present the optimization and ablation studies including designing an
efficient anchor. We use the Kitti 3D Birds Eye View dataset for benchmarking
and validating our results. Our work surpasses the state of the art in this
domain in terms of both average precision and speed, running at > 30 FPS. This
makes it a feasible option for deployment in real-time applications, including
self-driving cars. | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
Learning the embedding space, where semantically similar objects are located
close together and dissimilar objects far apart, is a cornerstone of many
computer vision applications. Existing approaches usually learn a single metric
in the embedding space for all available data points, which may have a very
complex non-uniform distribution with different notions of similarity between
objects, e.g. appearance, shape, color or semantic meaning. Approaches for
learning a single distance metric often struggle to encode all different types
of relationships and do not generalize well. In this work, we propose a novel
easy-to-implement divide and conquer approach for deep metric learning, which
significantly improves the state-of-the-art performance of metric learning. Our
approach utilizes the embedding space more efficiently by jointly splitting the
embedding space and data into $K$ smaller sub-problems. It divides both, the
data and the embedding space into $K$ subsets and learns $K$ separate distance
metrics in the non-overlapping subspaces of the embedding space, defined by
groups of neurons in the embedding layer of the neural network. The proposed
approach increases the convergence speed and improves generalization since the
complexity of each sub-problem is reduced compared to the original one. We show
that our approach outperforms the state-of-the-art by a large margin in
retrieval, clustering and re-identification tasks on CUB200-2011, CARS196,
Stanford Online Products, In-shop Clothes and PKU VehicleID datasets. | [
"cs.CV",
"cs.LG"
] |
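A minimal sketch of the divide-and-conquer split from the abstract above: the embedding is partitioned into K non-overlapping subspaces, each trained as its own metric learner. Equal-sized chunks and the training comment are assumptions:

```python
import torch

def split_embedding(z, K):
    """Split a (B, d) embedding into K non-overlapping sub-embeddings,
    one per sub-problem."""
    return torch.chunk(z, K, dim=1)

z = torch.randn(32, 128)            # embedding layer output
subspaces = split_embedding(z, K=4)
# each (32, 32) chunk gets its own metric loss on its own data subset
# during training; all chunks are concatenated again at test time
```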
The rise of digitization of cultural documents offers large-scale contents,
opening the road for development of AI systems in order to preserve, search,
and deliver cultural heritage. To organize such cultural content also means to
classify them, a task that is very familiar to modern computer science.
Contextual information is often the key to structure such real world data, and
we propose to use it in form of a knowledge graph. Such a knowledge graph,
combined with content analysis, enhances the notion of proximity between
artworks, improving performance in classification tasks. In this
paper, we propose a novel use of a knowledge graph, that is constructed on
annotated data and pseudo-labeled data. With label propagation, we boost
artwork classification by training a model using a graph convolutional network,
relying on the relationships between entities of the knowledge graph. Following
a transductive learning framework, our experiments show that relying on a
knowledge graph modeling the relations between labeled data and unlabeled data
allows us to achieve state-of-the-art results on multiple classification tasks on
a dataset of paintings, and on a dataset of Buddha statues. Additionally, we
show state-of-the-art results for the difficult case of dealing with unbalanced
data, with the limitation of disregarding classes with extremely low degrees in
the knowledge graph. | [
"cs.LG",
"cs.CV"
] |
Several popular graph embedding techniques for representation learning and
dimensionality reduction rely on performing computationally expensive
eigendecompositions to derive a nonlinear transformation of the input data
space. The resulting eigenvectors encode the embedding coordinates for the
training samples only, and so the embedding of novel data samples requires
further costly computation. In this paper, we present a method for the
out-of-sample extension of graph embeddings using deep neural networks (DNN) to
parametrically approximate these nonlinear maps. Compared with traditional
nonparametric out-of-sample extension methods, we demonstrate that the DNNs can
generalize with equal or better fidelity and require orders of magnitude less
computation at test time. Moreover, we find that unsupervised pretraining of
the DNNs improves optimization for larger network sizes, thus removing
sensitivity to model selection. | [
"stat.ML",
"cs.LG",
"cs.NE",
"stat.ME"
] |
Graph Neural Networks (GNNs) have demonstrated superior performance in
learning node representations for various graph inference tasks. However,
learning over graph data can raise privacy concerns when nodes represent people
or human-related variables that involve sensitive or personal information.
While numerous techniques have been proposed for privacy-preserving deep
learning over non-relational data, there is less work addressing the privacy
issues pertained to applying deep learning algorithms on graphs. In this paper,
we study the problem of node data privacy, where graph nodes have potentially
sensitive data that is kept private, but they could be beneficial for a central
server for training a GNN over the graph. To address this problem, we develop a
privacy-preserving, architecture-agnostic GNN learning algorithm with formal
privacy guarantees based on Local Differential Privacy (LDP). Specifically, we
propose an LDP encoder and an unbiased rectifier, by which the server can
communicate with the graph nodes to privately collect their data and
approximate the GNN's first layer. To further reduce the effect of the injected
noise, we propose to prepend a simple graph convolution layer, called KProp,
which is based on the multi-hop aggregation of the nodes' features acting as a
denoising mechanism. Finally, we propose a robust training framework, in which
we benefit from KProp's denoising capability to increase the accuracy of
inference in the presence of noisy labels. Extensive experiments conducted over
real-world datasets demonstrate that our method can maintain a satisfying level
of accuracy with low privacy loss. | [
"cs.LG",
"cs.CR",
"stat.ML"
] |
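A minimal sketch of the KProp denoising layer described above: multi-hop mean aggregation of (LDP-perturbed) node features before the GNN. The row-normalized dense adjacency and mean aggregation are assumptions; the paper's exact aggregation may differ:

```python
import torch

def kprop(x, adj, k):
    """Average node features over neighbors k times, so per-node LDP
    noise is smoothed out before the GNN's first layer."""
    deg = adj.sum(1, keepdim=True).clamp(min=1)
    P = adj / deg                 # row-stochastic propagation matrix
    for _ in range(k):
        x = P @ x                 # one hop of mean aggregation
    return x

adj = (torch.rand(10, 10) > 0.7).float()
x_noisy = torch.randn(10, 8)      # stand-in for LDP-perturbed features
x_denoised = kprop(x_noisy, adj, k=2)
```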
Several new properties of the weighted Hilbert transform are obtained. If
$\mu = 0$, two Plancherel-like equations and the isotropic properties are
derived. For real $\mu$, a coerciveness property is derived and two iterative
sequences are constructed to find the inversion. The proposed iterative
sequences are applicable to the case of a pure imaginary constant $\mu = i\eta$
with $|\eta| < \pi/4$. For $\mu = 0.0$ and $3.0$, we present computer simulation
results using the Chebyshev series representation of the finite Hilbert
transform. The results in this paper are useful for the half scan in several
imaging applications. | [
"cs.LG",
"stat.ML"
] |
Semantic segmentation for lightweight object parsing is a very challenging
task, because both accuracy and efficiency (e.g., execution speed, memory
footprint or computational complexity) should all be taken into account.
However, most previous works pay too much attention to one side, either
accuracy or speed, and ignore the other, which poses a great limitation on the
actual demands of intelligent devices. To tackle this dilemma, we propose a
novel lightweight architecture named Context-Integrated and Feature-Refined
Network (CIFReNet). The core components of CIFReNet are the Long-skip
Refinement Module (LRM) and the Multi-scale Context Integration Module (MCIM).
The LRM is designed to ease the propagation of spatial information between
low-level and high-level stages. Furthermore, channel attention mechanism is
introduced into the process of long-skip learning to boost the quality of
low-level feature refinement. Meanwhile, the MCIM consists of three cascaded
Dense Semantic Pyramid (DSP) blocks with image-level features, which is
presented to encode multiple context information and enlarge the field of view.
Specifically, the proposed DSP block exploits a dense feature sampling strategy
to enhance the information representations without significantly increasing the
computation cost. Comprehensive experiments are conducted on three benchmark
datasets for object parsing including Cityscapes, CamVid, and Helen. As
indicated, the proposed method reaches a better trade-off between accuracy and
efficiency compared with the other state-of-the-art methods. | [
"cs.CV"
] |
In this paper the use of Random Sprays Retinex (RSR) algorithm for global
illumination estimation is proposed and its feasibility tested. Like other
algorithms based on the Retinex model, RSR also provides local illumination
estimation and brightness adjustment for each pixel and it is faster than other
path-wise Retinex algorithms. As the assumption of the uniform illumination
holds in many cases, it should be possible to use the mean of local
illumination estimations of RSR as a global illumination estimation for images
with (assumed) uniform illumination, which also allows the accuracy to be easily
measured. We therefore propose a method for global illumination
estimation based on local RSR results. To the best of our knowledge, this is the
first time that the RSR algorithm has been used to obtain a global illumination
estimation. For our tests, we use a publicly available color constancy image
database. The results are presented and discussed, and it turns out that the
proposed method outperforms many existing unsupervised color constancy
algorithms. The source code is available at
http://www.fer.unizg.hr/ipg/resources/color_constancy/. | [
"cs.CV"
] |
We propose a new embedding method, named Quantile-Quantile Embedding (QQE),
for distribution transformation and manifold embedding with the ability to
choose the embedding distribution. QQE, which uses the concept of
quantile-quantile plot from visual statistical tests, can transform the
distribution of data to any theoretical desired distribution or empirical
reference sample. Moreover, QQE gives the user a choice of embedding
distribution in embedding the manifold of data into the low dimensional
embedding space. It can also be used for modifying the embedding distribution
of other dimensionality reduction methods, such as PCA, t-SNE, and deep metric
learning, for better representation or visualization of data. We propose QQE in
both unsupervised and supervised forms. QQE can also transform a distribution
to either an exact reference distribution or its shape. We show that QQE allows
for better discrimination of classes in some cases. Our experiments on
different synthetic and image datasets show the effectiveness of the proposed
embedding method. | [
"stat.ML",
"cs.CV",
"cs.LG",
"stat.CO"
] |
The spatial attention mechanism captures long-range dependencies by
aggregating global contextual information to each query location, which is
beneficial for semantic segmentation. In this paper, we present a sparse
spatial attention network (SSANet) to improve the efficiency of the spatial
attention mechanism without sacrificing the performance. Specifically, a sparse
non-local (SNL) block is proposed to sample a subset of key and value elements
for each query element to capture long-range relations adaptively and generate
a sparse affinity matrix to aggregate contextual information efficiently.
Experimental results show that the proposed approach outperforms other context
aggregation methods and achieves state-of-the-art performance on the
Cityscapes, PASCAL Context and ADE20K datasets. | [
"cs.CV"
] |
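A minimal sketch of the sparse non-local idea from the SSANet abstract above: each query attends to only a sampled subset of key/value positions instead of all of them. Random sampling is a stand-in assumption here; the paper samples keys adaptively per query:

```python
import torch

def sparse_attention(q, k, v, num_samples):
    """Attend to a sampled subset of keys/values, reducing the affinity
    matrix from (N, N) to (N, num_samples)."""
    N, d = k.shape
    idx = torch.randperm(N)[:num_samples]        # sampled key positions
    attn = torch.softmax(q @ k[idx].t() / d ** 0.5, dim=-1)
    return attn @ v[idx]

q = torch.randn(100, 32)
k = torch.randn(100, 32)
v = torch.randn(100, 32)
out = sparse_attention(q, k, v, num_samples=16)  # (100, 32) context
```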
Arbitrary-oriented objects exist widely in natural scenes, and thus the
oriented object detection has received extensive attention in recent years. The
mainstream rotation detectors use oriented bounding boxes (OBB) or
quadrilateral bounding boxes (QBB) to represent the rotating objects. However,
these methods suffer from the representation ambiguity for oriented object
definition, which leads to suboptimal regression optimization and the
inconsistency between the loss metric and the localization accuracy of the
predictions. In this paper, we propose a Representation Invariance Loss (RIL)
to optimize the bounding box regression for the rotating objects. Specifically,
RIL treats multiple representations of an oriented object as multiple
equivalent local minima, and hence transforms bounding box regression into an
adaptive matching process with these local minima. Then, the Hungarian matching
algorithm is adopted to obtain the optimal regression strategy. We also propose
a normalized rotation loss to alleviate the weak correlation between different
variables and their unbalanced loss contribution in OBB representation.
Extensive experiments on remote sensing datasets and scene text datasets show
that our method achieves consistent and substantial improvement. The source
code and trained models are available at https://github.com/ming71/RIDet. | [
"cs.CV"
] |
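A minimal sketch of the representation-invariance idea from the RIL abstract above: the equivalent parameterizations of one oriented box are treated as equally valid targets, and the loss is taken against the closest one. Using a simple per-box minimum of L1 distances is an assumption; the full method matches many predictions to many representations with the Hungarian algorithm and a normalized rotation loss:

```python
import numpy as np

def best_representation_loss(pred, equivalent_targets):
    """Loss against whichever equivalent encoding of the ground-truth
    oriented box is closest to the prediction (one local minimum each)."""
    costs = np.abs(equivalent_targets - pred).sum(axis=1)
    return costs.min()

# (cx, cy, w, h, angle): the same box under two equivalent encodings,
# obtained by swapping w/h and rotating the angle by pi/2
pred = np.array([10.0, 10.0, 4.0, 2.0, 0.10])
targets = np.array([[10, 10, 4, 2, 0.15],
                    [10, 10, 2, 4, 0.15 - np.pi / 2]])
loss = best_representation_loss(pred, targets)
```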
Logic optimization is an NP-hard problem commonly approached through
hand-engineered heuristics. We propose to combine graph convolutional networks
with reinforcement learning and a novel, scalable node embedding method to
learn which local transforms should be applied to the logic graph. We show that
this method achieves a similar size reduction as ABC on smaller circuits and
outperforms it by 1.5-1.75x on larger random graphs. | [
"cs.LG"
] |
Current deep learning techniques for style transfer would not be optimal for
design support since their "one-shot" transfer does not fit exploratory design
processes. To overcome this gap, we propose parametric transcription, which
transcribes an end-to-end style transfer effect into parameter values of
specific transformations available in an existing content editing tool. With
this approach, users can imitate the style of a reference sample in the tool
that they are familiar with and thus can easily continue further exploration by
manipulating the parameters. To enable this, we introduce a framework that
utilizes an existing pretrained model for style transfer to calculate a
perceptual style distance to the reference sample and uses black-box
optimization to find the parameters that minimize this distance. Our
experiments with various third-party tools, such as Instagram and Blender, show
that our framework can effectively leverage deep learning techniques for
computational design support. | [
"cs.LG",
"cs.CV",
"cs.HC"
] |
Graph matching aims to establish correspondences between vertices of graphs
such that both the node and edge attributes agree. Various learning-based
methods were recently proposed for finding correspondences between image key
points based on deep graph matching formulations. While these approaches mainly
focus on learning node and edge attributes, they completely ignore the 3D
geometry of the underlying 3D objects depicted in the 2D images. We fill this
gap by proposing a trainable framework that takes advantage of graph neural
networks for learning a deformable 3D geometry model from inhomogeneous image
collections, i.e. a set of images that depict different instances of objects
from the same category. Experimentally we demonstrate that our method
outperforms recent learning-based approaches for graph matching considering
both accuracy and cycle-consistency error, while in addition obtaining the
underlying 3D geometry of the objects depicted in the 2D images. | [
"cs.CV",
"cs.LG"
] |
Time series classification has received great attention over the past decade
with a wide range of methods focusing on predictive performance by exploiting
various types of temporal features. Nonetheless, little emphasis has been
placed on interpretability and explainability. In this paper, we formulate the
novel problem of explainable time series tweaking, where, given a time series
and an opaque classifier that provides a particular classification decision for
the time series, we want to find the minimum number of changes to be performed
to the given time series so that the classifier changes its decision to another
class. We show that the problem is NP-hard, and focus on two instantiations of
the problem, which we refer to as reversible and irreversible time series
tweaking. The classifier under investigation is the random shapelet forest
classifier. Moreover, we propose two algorithmic solutions for the two problems
along with simple optimizations, as well as a baseline solution using the
nearest neighbor classifier. An extensive experimental evaluation on a variety
of real datasets demonstrates the usefulness and effectiveness of our problem
formulation and solutions. | [
"cs.LG",
"stat.ML"
] |
Recognition tasks, such as object recognition and keypoint estimation, have
seen widespread adoption in recent years. Most state-of-the-art methods for
these tasks use deep networks that are computationally expensive and have huge
memory footprints. This makes it exceedingly difficult to deploy these systems
on low power embedded devices. Hence, the importance of decreasing the storage
requirements and the amount of computation in such models is paramount. The
recently proposed Lottery Ticket Hypothesis (LTH) states that deep neural
networks trained on large datasets contain smaller subnetworks that achieve on
par performance as the dense networks. In this work, we perform the first
empirical study investigating LTH for model pruning in the context of object
detection, instance segmentation, and keypoint estimation. Our studies reveal
that lottery tickets obtained from ImageNet pretraining do not transfer well to
the downstream tasks. We provide guidance on how to find lottery tickets with
up to 80% overall sparsity on different sub-tasks without incurring any drop in
the performance. Finally, we analyse the behavior of trained tickets with
respect to various task attributes such as object size, frequency, and
difficulty of detection. | [
"cs.CV",
"cs.LG"
] |
Vehicle re-identification (Re-ID) is to retrieve images of the same vehicle
across different cameras. Two key challenges lie in the subtle inter-instance
discrepancy caused by near-duplicate identities and the large intra-instance
variance caused by different views. Since the holistic appearance suffers from
viewpoint variation and distortion, part-level feature learning has been
introduced to enhance vehicle description. However, existing approaches to
localize and amplify significant parts often fail to handle spatial
misalignment as well as occlusion and require expensive annotations. In this
paper, we propose a weakly supervised Part-Mentored Attention Network (PMANet)
composed of a Part Attention Network (PANet) for vehicle part localization with
self-attention and a Part-Mentored Network (PMNet) for mentoring the global and
local feature aggregation. Firstly, PANet is introduced to predict a foreground
mask and pinpoint $K$ prominent vehicle parts only with weak identity
supervision. Secondly, we propose a PMNet to learn global and part-level
features with multi-scale attention and aggregate them in $K$ main-partial
tasks via part transfer. Like humans who first differentiate objects with
general information and then observe salient parts for more detailed clues,
PANet and PMNet construct a two-stage attention structure to perform a
coarse-to-fine search among identities. Finally, we address this Re-ID issue as
a multi-task problem, including global feature learning, identity
classification, and part transfer. We adopt Homoscedastic Uncertainty to learn
the optimal weighing of different losses. Comprehensive experiments are
conducted on two benchmark datasets. Our approach outperforms recent
state-of-the-art methods by 2.63% on average in CMC@1 on VehicleID and 2.2% in
mAP on VeRi776. Results on occluded test sets also demonstrate the
generalization ability of PMANet. | [
"cs.CV"
] |
The nonlinear vector autoregressive (NVAR) model provides an appealing
framework to analyze multivariate time series obtained from a nonlinear
dynamical system. However, the innovation (or error), which plays a key role by
driving the dynamics, is almost always assumed to be additive. Additivity
greatly limits the generality of the model, hindering analysis of general NVAR
processes which have nonlinear interactions between the innovations. Here, we
propose a new general framework called independent innovation analysis (IIA),
which estimates the innovations from completely general NVAR. We assume mutual
independence of the innovations as well as their modulation by an auxiliary
variable (which is often taken as the time index and simply interpreted as
nonstationarity). We show that IIA guarantees the identifiability of the
innovations with arbitrary nonlinearities, up to a permutation and
component-wise invertible nonlinearities. We also propose three estimation
frameworks depending on the type of the auxiliary variable. We thus provide the
first rigorous identifiability result for general NVAR, as well as very general
tools for learning such models. | [
"stat.ML",
"cs.LG"
] |
In this paper, sample-aware policy entropy regularization is proposed to
enhance the conventional policy entropy regularization for better exploration.
Exploiting the sample distribution obtainable from the replay buffer, the
proposed sample-aware entropy regularization maximizes the entropy of the
weighted sum of the policy action distribution and the sample action
distribution from the replay buffer for sample-efficient exploration. A
practical algorithm named diversity actor-critic (DAC) is developed by applying
policy iteration to the objective function with the proposed sample-aware
entropy regularization. Numerical results show that DAC significantly
outperforms existing recent algorithms for reinforcement learning. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
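A minimal sketch of the sample-aware entropy term described in the DAC abstract above, for the discrete-action case: the entropy of the weighted sum of the policy's action distribution and the replay-buffer action distribution. The fixed mixture weight and discrete setting are simplifying assumptions:

```python
import torch

def sample_aware_entropy(pi_probs, buffer_probs, alpha=0.5):
    """Entropy of the alpha-weighted mixture of policy and buffer
    action distributions; maximized as an exploration bonus."""
    mix = alpha * pi_probs + (1 - alpha) * buffer_probs
    return -(mix * mix.clamp(min=1e-12).log()).sum(-1)

pi = torch.softmax(torch.randn(4), dim=0)   # policy over 4 actions
buf = torch.tensor([0.4, 0.3, 0.2, 0.1])    # empirical buffer distribution
bonus = sample_aware_entropy(pi, buf)       # added to the RL objective
```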
Neural Tangent Kernel (NTK) theory is widely used to study the dynamics of
infinitely-wide deep neural networks (DNNs) under gradient descent. But do the
results for infinitely-wide networks give us hints about the behavior of real
finite-width ones? In this paper, we study empirically when NTK theory is valid
in practice for fully-connected ReLU and sigmoid DNNs. We find out that whether
a network is in the NTK regime depends on the hyperparameters of random
initialization and the network's depth. In particular, NTK theory does not
explain the behavior of sufficiently deep networks initialized so that their
gradients explode as they propagate through the network's layers: the kernel is
random at initialization and changes significantly during training in this
case, contrary to NTK theory. On the other hand, in the case of vanishing
gradients, DNNs are in the NTK regime but rapidly become untrainable with
depth. We also describe a framework to study generalization properties of DNNs,
in particular the variance of network's output function, by means of NTK theory
and discuss its limits. | [
"cs.LG",
"stat.ML"
] |
Markov switching models (MSMs) are probabilistic models that employ multiple
sets of parameters to describe different dynamic regimes that a time series may
exhibit at different periods of time. The switching mechanism between regimes
is controlled by unobserved random variables that form a first-order Markov
chain. Explicit-duration MSMs contain additional variables that explicitly
model the distribution of time spent in each regime. This makes it possible to define
duration distributions of any form, but also to impose complex dependence
between the observations and to reset the dynamics to initial conditions.
Models that focus on the first two properties are most commonly known as hidden
semi-Markov models or segment models, whilst models that focus on the third
property are most commonly known as changepoint models or reset models. In this
monograph, we provide a description of explicit-duration modelling by
categorizing the different approaches into three groups, which differ in
encoding in the explicit-duration variables different information about regime
change/reset boundaries. The approaches are described using the formalism of
graphical models, which allows one to graphically represent and assess statistical
dependence and therefore to easily describe the structure of complex models and
derive inference routines. The presentation is intended to be pedagogical,
focusing on providing a characterization of the three groups in terms of model
structure constraints and inference properties. The monograph is supplemented
with a software package that contains most of the models and examples
described. The material presented should be useful to both researchers wishing
to learn about these models and researchers wishing to develop them further. | [
"stat.ML",
"cs.LG"
] |
To ensure safety in automated driving, the correct perception of the
situation inside the car is as important as its environment. Thus, seat
occupancy detection and classification of detected instances play an important
role in interior sensing. By the knowledge of the seat occupancy status, it is
possible to, e.g., automate the airbag deployment control. Furthermore, the
presence of a driver, which is necessary for partially automated driving cars
at the automation levels two to four can be verified. In this work, we compare
different statistical methods from the field of image segmentation to approach
the problem of background-foreground segmentation in camera based interior
sensing. In the recent years, several methods based on different techniques
have been developed and applied to images or videos from different
applications. The peculiarity of the given scenarios of interior sensing is,
that the foreground instances and the background both contain static as well as
dynamic elements. In data considered in this work, even the camera position is
not completely fixed. We review and benchmark three different methods,
i.e., Gaussian Mixture Models (GMM), Morphological Snakes and a deep neural
network, namely a Mask R-CNN. In particular, the limitations of the classical
methods, GMM and Morphological Snakes, for interior sensing are shown.
Furthermore, it turns out that it is possible to overcome these limitations by
deep learning, e.g., using a Mask R-CNN. Although only a small amount of ground
truth data was available for training, we enabled the Mask R-CNN to produce
high quality background-foreground masks via transfer learning. Moreover, we
demonstrate that certain augmentation as well as pre- and post-processing
methods further enhance the performance of the investigated methods. | [
"cs.CV",
"cs.LG"
] |
Learning from image-text data has demonstrated recent success for many
recognition tasks, yet is currently limited to visual features or individual
visual concepts such as objects. In this paper, we propose one of the first
methods that learn from image-sentence pairs to extract a graphical
representation of localized objects and their relationships within an image,
known as scene graph. To bridge the gap between images and texts, we leverage
an off-the-shelf object detector to identify and localize object instances,
match labels of detected regions to concepts parsed from captions, and thus
create "pseudo" labels for learning scene graph. Further, we design a
Transformer-based model to predict these "pseudo" labels via a masked token
prediction task. Learning from only image-sentence pairs, our model achieves
30% relative gain over a latest method trained with human-annotated unlocalized
scene graphs. Our model also shows strong results for weakly and fully
supervised scene graph generation. In addition, we explore an open-vocabulary
setting for detecting scene graphs, and present the first result for open-set
scene graph generation. Our code is available at
https://github.com/YiwuZhong/SGG_from_NLS. | [
"cs.CV"
] |
Common horizontal bounding box (HBB)-based methods are not capable of
accurately locating slender ship targets with arbitrary orientations in
synthetic aperture radar (SAR) images. Therefore, in recent years, methods
based on oriented bounding box (OBB) have gradually received attention from
researchers. However, most of the recently proposed deep learning-based methods
for OBB detection encounter the boundary discontinuity problem in angle or key
point regression. In order to alleviate this problem, researchers propose to
introduce some manually set parameters or extra network branches for
distinguishing the boundary cases, which makes training more difficult and leads
to performance degradation. In this paper, in order to solve the boundary
discontinuity problem in OBB regression, we propose to detect SAR ships by
learning polar encodings. The encoding scheme uses a group of vectors pointing
from the center of the ship target to the boundary points to represent an OBB.
The boundary discontinuity problem is avoided by training and inference
directly according to the polar encodings. In addition, we propose an
Intersection over Union (IOU)-weighted regression loss, which further guides the training
of polar encodings through the IOU metric and improves the detection
performance. Experiments on the Rotating SAR Ship Detection Dataset (RSSDD)
show that the proposed method can achieve better detection performance over
other comparison algorithms and other OBB encoding schemes, demonstrating the
effectiveness of our method. | [
"cs.CV"
] |
In image captioning where fluency is an important factor in evaluation, e.g.,
$n$-gram metrics, sequential models are commonly used; however, sequential
models generally result in overgeneralized expressions that lack the details
that may be present in an input image. Inspired by the idea of the
compositional neural module networks in the visual question answering task, we
introduce a hierarchical framework for image captioning that explores both
compositionality and sequentiality of natural language. Our algorithm learns to
compose a detail-rich sentence by selectively attending to different modules
corresponding to unique aspects of each object detected in an input image to
include specific descriptions such as counts and color. In a set of experiments
on the MSCOCO dataset, the proposed model outperforms a state-of-the-art model
across multiple evaluation metrics and, more importantly, presents visually
interpretable results. Furthermore, the breakdown of subcategories $f$-scores
of the SPICE metric and human evaluation on Amazon Mechanical Turk show that
our compositional module networks effectively generate accurate and detailed
captions. | [
"cs.CV",
"cs.CL",
"cs.LG"
] |
A neural network regularizer (e.g., weight decay) boosts performance by
explicitly penalizing the complexity of a network. In this paper, we penalize
inferior network activations -- feature embeddings -- which in turn regularize
the network's weights implicitly. We propose singular value maximization
(SVMax) to learn a more uniform feature embedding. The SVMax regularizer
supports both supervised and unsupervised learning. Our formulation mitigates
model collapse and enables larger learning rates. We evaluate the SVMax
regularizer using both retrieval and generative adversarial networks. We
leverage a synthetic mixture of Gaussians dataset to evaluate SVMax in an
unsupervised setting. For retrieval networks, SVMax achieves significant
improvement margins across various ranking losses. Code available at
https://bit.ly/3jNkgDt | [
"cs.CV",
"cs.LG"
] |
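A minimal sketch of the SVMax regularizer described above: the mean singular value of the batch embedding matrix is maximized (i.e., its negative is added to the main loss) to encourage a more uniform embedding. The weighting coefficient and the placeholder main loss are assumptions:

```python
import torch
import torch.nn.functional as F

def svmax_regularizer(embeddings):
    """Negative mean singular value of the (batch, dim) embedding
    matrix; minimizing it maximizes the mean singular value."""
    s = torch.linalg.svdvals(embeddings)
    return -s.mean()

z = F.normalize(torch.randn(64, 128), dim=1)       # batch of embeddings
main_loss = torch.tensor(0.0, requires_grad=True)  # placeholder ranking loss
total = main_loss + 0.1 * svmax_regularizer(z)     # weight is an assumption
```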
Image segmentation is a technique for partitioning an image into
distinct classes. Many possible solutions may be available for segmenting
an image into a certain number of classes, each with a different quality of
segmentation. In our proposed method, a multilevel thresholding technique is
used for image segmentation, and a new Cuckoo Search (CS) approach is used to
select the optimal threshold values. In other words, the algorithm is used
to reach the best solution from the initial random threshold values, and a
correlation function is used to evaluate the quality of a solution. Finally,
MSE and PSNR are measured to assess the segmentation quality. | [
"cs.CV"
] |
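A minimal sketch of scoring candidate multilevel thresholds, as in the abstract above. The Otsu-style between-class variance objective and the crude random search are stand-in assumptions: the paper uses a correlation function and the Cuckoo Search update instead:

```python
import numpy as np

def threshold_quality(image, thresholds):
    """Score a set of thresholds by between-class variance
    (a stand-in for the paper's correlation function)."""
    edges = [0, *sorted(thresholds), 256]
    total = image.size
    score = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        pix = image[(image >= lo) & (image < hi)]
        if pix.size:
            score += pix.size / total * pix.mean() ** 2
    return score

img = np.random.randint(0, 256, (64, 64))
# crude random-search stand-in for the Cuckoo Search update rule
best = max((np.random.randint(1, 255, 2) for _ in range(200)),
           key=lambda t: threshold_quality(img, t))
```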
The pre-training on the graph neural network model can learn the general
features of large-scale networks or networks of the same type by
self-supervised methods, which allows the model to work even when node labels
are missing. However, the existing pre-training methods do not take network
evolution into consideration. This paper proposes a pre-training method on
dynamic graph neural networks (PT-DGNN), which uses dynamic attributed graph
generation tasks to simultaneously learn the structure, semantics, and
evolution features of the graph. The method includes two steps: 1) dynamic
sub-graph sampling, and 2) pre-training with dynamic attributed graph
generation task. Comparative experiments on three realistic dynamic network
datasets show that the proposed method achieves the best results on the link
prediction fine-tuning task. | [
"cs.LG",
"cs.SI"
] |
Advances in reinforcement learning (RL) have resulted in recent breakthroughs
in the application of artificial intelligence (AI) across many different
domains. An emerging landscape of development environments is making powerful
RL techniques more accessible for a growing community of researchers. However,
most existing frameworks do not directly address the problem of learning in
complex operating environments, such as dense urban settings or defense-related
scenarios, that incorporate distributed, heterogeneous teams of agents. To help
enable AI research for this important class of applications, we introduce the
AI Arena: a scalable framework with flexible abstractions for distributed
multi-agent reinforcement learning. The AI Arena extends the OpenAI Gym
interface to allow greater flexibility in learning control policies across
multiple agents with heterogeneous learning strategies and localized views of
the environment. To illustrate the utility of our framework, we present
experimental results that demonstrate performance gains due to a distributed
multi-agent learning approach over commonly-used RL techniques in several
different learning environments. | [
"cs.LG",
"cs.AI",
"cs.MA"
] |
Set-based person re-identification (SReID) is a matching problem that aims to
verify whether two sets are of the same identity (ID). Existing SReID models
typically generate a feature representation per image and aggregate them to
represent the set as a single embedding. However, they can easily be perturbed
by noise -- perceptually/semantically low-quality images -- which is inevitable
due to imperfect tracking/detection systems, or overfit to trivial images. In
this work, we present a novel and simple solution to this problem based on
ID-aware quality that measures the perceptual and semantic quality of images
guided by their ID information. Specifically, we propose an ID-aware Embedding
that consists of two key components: (1) Feature learning attention that aims
to learn robust image embeddings by focusing on 'medium' hard images. This way
it can prevent overfitting to trivial images, and alleviate the influence of
outliers. (2) Feature fusion attention is to fuse image embeddings in the set
to obtain the set-level embedding. It ignores noisy information and pays more
attention to discriminative images to aggregate more discriminative
information. Experimental results on four datasets show that our method
outperforms state-of-the-art approaches despite the simplicity of our approach. | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
The goal of self-supervised learning from images is to construct image
representations that are semantically meaningful via pretext tasks that do not
require semantic annotations for a large training set of images. Many pretext
tasks lead to representations that are covariant with image transformations. We
argue that, instead, semantic representations ought to be invariant under such
transformations. Specifically, we develop Pretext-Invariant Representation
Learning (PIRL, pronounced as "pearl") that learns invariant representations
based on pretext tasks. We use PIRL with a commonly used pretext task that
involves solving jigsaw puzzles. We find that PIRL substantially improves the
semantic quality of the learned image representations. Our approach sets a new
state-of-the-art in self-supervised learning from images on several popular
benchmarks for self-supervised learning. Despite being unsupervised, PIRL
outperforms supervised pre-training in learning image representations for
object detection. Altogether, our results demonstrate the potential of
self-supervised learning of image representations with good invariance
properties. | [
"cs.CV",
"cs.LG"
] |
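The core objective can be sketched as a contrastive loss that pulls an image's embedding toward the embedding of its transformed (e.g., jigsaw-permuted) version and pushes other images away. This is an illustrative NCE-style loss, not PIRL's full memory-bank formulation.

```python
# A minimal pretext-invariant contrastive loss: positives are an image and
# its transformed version; other images in the batch act as negatives.
import torch
import torch.nn.functional as F

def invariance_nce_loss(z_img, z_transformed, temperature=0.1):
    """z_img, z_transformed: (batch, dim) embeddings of images and of their
    transformed versions; matching rows are positives, the rest negatives."""
    z_img = F.normalize(z_img, dim=1)
    z_t = F.normalize(z_transformed, dim=1)
    logits = z_img @ z_t.t() / temperature            # (batch, batch)
    targets = torch.arange(z_img.size(0), device=z_img.device)
    return F.cross_entropy(logits, targets)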
Deep reinforcement learning for high dimensional, hierarchical control tasks
usually requires the use of complex neural networks as functional
approximators, which can lead to inefficiency, instability and even divergence
in the training process. Here, we introduce stacked deep Q learning (SDQL), a
flexible, modularized deep reinforcement learning architecture that enables
stable and efficient learning of optimal control policies for tasks consisting
of multiple linear stages. SDQL exploits the linear stage
structure by approximating the Q function via a collection of deep Q
sub-networks stacking along an axis marking the stage-wise progress of the
whole task. By back-propagating the learned state values from later stages to
earlier stages, all sub-networks co-adapt to maximize the total reward of the
whole task, although each sub-network is responsible for learning optimal
control policy for its own stage. This modularized architecture offers
considerable flexibility in terms of environment and policy modeling, as it
allows choices of different state spaces, action spaces, reward structures, and
Q networks for each stage. Further, the backward stage-wise training procedure
of SDQL offers additional transparency, stability, and flexibility to the
training process, thus facilitating model fine-tuning and hyper-parameter
search. We demonstrate that SDQL is capable of learning competitive strategies
for problems with characteristics of high-dimensional state space,
heterogeneous action space (both discrete and continuous), multiple scales, and
sparse and delayed rewards. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
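A minimal sketch of the stacked-Q idea: one Q sub-network per linear stage, trained backward so that stage k bootstraps from the learned values of stage k+1. Network sizes and the skeleton of the training loop are illustrative assumptions, not the paper's exact setup.

```python
# One Q sub-network per stage; later stages are fitted first and their
# learned values are propagated backward as bootstrap targets.
import torch
import torch.nn as nn

def make_q_net(state_dim, num_actions):
    return nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                         nn.Linear(64, num_actions))

def td_target(reward, next_state, next_q_net, gamma=0.99):
    # state value propagated backward from the next stage's sub-network
    with torch.no_grad():
        return reward + gamma * next_q_net(next_state).max(dim=1).values

# each stage may use a different state space, action space, and network
q_nets = [make_q_net(state_dim=8, num_actions=4) for _ in range(3)]

# backward stage-wise training: fit the last stage first, then use its
# values as bootstrap targets for the stage before it, and so on
for stage in reversed(range(3)):
    next_net = q_nets[stage + 1] if stage + 1 < 3 else None
    # ... sample transitions for this stage and regress q_nets[stage]
    # toward td_target(...) (or toward raw terminal rewards when
    # next_net is None, i.e., for the final stage)
```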
The model reduction problem that eases the computation costs and latency of
complex deep learning architectures has received an increasing number of
investigations owing to its importance in model deployment. One promising
method is knowledge distillation (KD), which creates a fast-to-execute student
model to mimic a large teacher network. In this paper, we propose a method,
called KDFM (Knowledge Distillation with Feature Maps), which improves the
effectiveness of KD by learning the feature maps from the teacher network. Two
major techniques used in KDFM are shared classifier and generative adversarial
network. Experimental results show that KDFM can use a four-layer CNN to mimic
DenseNet-40 and use MobileNet to mimic DenseNet-100. Both student networks have
less than 1% accuracy loss compared to their teacher models on the CIFAR-100
dataset. The student networks are 2-6 times faster than their teacher models
for inference, and the model size of MobileNet is less than half of
DenseNet-100's. | [
"cs.LG",
"cs.AI",
"cs.CV"
] |
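For reference, below is a minimal sketch of the standard knowledge-distillation loss (temperature-softened teacher targets plus hard labels). KDFM additionally matches feature maps via a shared classifier and an adversarial term, which are omitted here; the hyper-parameter values are illustrative assumptions.

```python
# Standard KD loss: KL divergence between temperature-softened student and
# teacher distributions, mixed with the usual cross-entropy on hard labels.
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)  # rescale gradients by T^2
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```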
Sleep disorder diagnosis relies on the analysis of polysomnography (PSG)
records. As a preliminary step of this examination, sleep stages are
systematically determined. In practice, sleep stage classification relies on
the visual inspection of 30-second epochs of polysomnography signals. Numerous
automatic approaches have been developed to replace this tedious and expensive
task. Although these methods demonstrated better performance than human sleep
experts on specific datasets, they remain largely unused in sleep clinics. The
main reason is that each sleep clinic uses a specific PSG montage that most
automatic approaches cannot handle out-of-the-box. Moreover, even when the PSG
montage is compatible, publications have shown that automatic approaches
perform poorly on unseen data with different demographics. To address these
issues, we introduce RobustSleepNet, a deep learning model for automatic sleep
stage classification able to handle arbitrary PSG montages. We trained and
evaluated this model in a leave-one-out-dataset fashion on a large corpus of 8
heterogeneous sleep staging datasets to make it robust to demographic changes.
When evaluated on an unseen dataset, RobustSleepNet reaches 97% of the F1 of a
model explicitly trained on this dataset. Hence, RobustSleepNet unlocks the
possibility to perform high-quality out-of-the-box automatic sleep staging with
any clinical setup. We further show that finetuning RobustSleepNet, using a
part of the unseen dataset, increases the F1 by 2% when compared to a model
trained specifically for this dataset. Therefore, finetuning might be used to
reach a state-of-the-art level of performance on a specific population. | [
"stat.ML",
"cs.LG",
"eess.SP"
] |
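One plausible way to handle arbitrary montages is to encode each channel independently and pool across channels with attention, so the model accepts any number of input channels. The sketch below illustrates that mechanism; it is an assumption for illustration, not necessarily RobustSleepNet's exact architecture.

```python
# Attention pooling over a variable number of channel features, yielding a
# montage-independent epoch embedding.
import torch
import torch.nn as nn

class ChannelAttentionPool(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, channel_feats):  # (num_channels, dim), any channel count
        weights = torch.softmax(self.score(channel_feats), dim=0)
        return (weights * channel_feats).sum(dim=0)  # (dim,)

pool = ChannelAttentionPool(dim=64)
epoch_embedding = pool(torch.randn(5, 64))  # works for a 5-channel montage
epoch_embedding = pool(torch.randn(9, 64))  # ...and equally for 9 channels
```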
In many applications of computer graphics, art and design, it is desirable
for a user to provide intuitive non-image input, such as text, sketch, stroke,
graph or layout, and have a computer system automatically generate
photo-realistic images that adhere to the input content. While classic works
that allow such automatic image content generation have followed a framework of
image retrieval and composition, recent advances in deep generative models such
as generative adversarial networks (GANs), variational autoencoders (VAEs), and
flow-based methods have enabled more powerful and versatile image generation
tasks. This paper reviews recent works for image synthesis given intuitive user
input, covering advances in input versatility, image generation methodology,
benchmark datasets, and evaluation metrics. This motivates new perspectives on
input representation and interactivity, cross-pollination between major image
generation paradigms, and evaluation and comparison of generation methods. | [
"cs.CV",
"cs.GR",
"cs.LG"
] |
Biological systems perceive the world by simultaneously processing
high-dimensional inputs from modalities as diverse as vision, audition, touch,
proprioception, etc. The perception models used in deep learning on the other
hand are designed for individual modalities, often relying on domain-specific
assumptions such as the local grid structures exploited by virtually all
existing vision models. These priors introduce helpful inductive biases, but
also lock models to individual modalities. In this paper we introduce the
Perceiver - a model that builds upon Transformers and hence makes few
architectural assumptions about the relationship between its inputs, but that
also scales to hundreds of thousands of inputs, like ConvNets. The model
leverages an asymmetric attention mechanism to iteratively distill inputs into
a tight latent bottleneck, allowing it to scale to handle very large inputs. We
show that this architecture is competitive with or outperforms strong,
specialized models on classification tasks across various modalities: images,
point clouds, audio, video, and video+audio. The Perceiver obtains performance
comparable to ResNet-50 and ViT on ImageNet without 2D convolutions by directly
attending to 50,000 pixels. It is also competitive in all modalities in
AudioSet. | [
"cs.CV",
"cs.AI",
"cs.LG",
"cs.SD",
"eess.AS"
] |
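The asymmetric attention can be sketched as follows: a small learned latent array cross-attends to a very large input array, so cost scales with (num_latents x num_inputs) rather than quadratically in the input size. Dimensions and the single-layer structure are illustrative assumptions.

```python
# A learned latent array queries the (potentially huge) input array; the
# output is a fixed-size latent bottleneck regardless of input length.
import torch
import torch.nn as nn

class LatentCrossAttention(nn.Module):
    def __init__(self, num_latents=64, dim=128):
        super().__init__()
        self.latents = nn.Parameter(torch.randn(num_latents, dim))
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)

    def forward(self, inputs):  # inputs: (batch, num_inputs, dim), e.g. 50k pixels
        q = self.latents.unsqueeze(0).expand(inputs.size(0), -1, -1)
        out, _ = self.attn(q, inputs, inputs)  # queries = latents; keys/values = inputs
        return out                             # (batch, num_latents, dim) bottleneck
```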
The attention mechanism can refine the extracted feature maps and boost the
classification performance of the deep network, which has become an essential
technique in computer vision and natural language processing. However, the
memory and computational costs of the dot-product attention mechanism increase
quadratically with the spatio-temporal size of the input. Such growth hinders
the usage of attention mechanisms considerably in application scenarios with
large-scale inputs. In this Letter, we propose a Linear Attention Mechanism
(LAM) to address this issue, which approximates dot-product attention at
linear computational cost. Such a design makes the incorporation of attention
mechanisms into deep networks much more flexible and
versatile. Based on the proposed LAM, we re-factor the skip connections in the
raw U-Net and design a Multi-stage Attention ResU-Net (MAResU-Net) for semantic
segmentation from fine-resolution remote sensing images. Experiments conducted
on the Vaihingen dataset demonstrated the effectiveness and efficiency of our
MAResU-Net. Open-source code is available at
https://github.com/lironui/Multistage-Attention-ResU-Net. | [
"cs.CV"
] |
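The linearization trick can be sketched with a kernel feature map phi: softmax(QK^T)V is approximated by phi(Q)(phi(K)^T V), and computing the K^T V product first makes the cost linear in sequence length N. The choice phi(x) = elu(x) + 1 is a common convention in kernelized linear attention and is an assumption here, not necessarily LAM's exact map.

```python
# Kernelized linear attention: O(N) in sequence length because the
# (dim x dim) key-value summary is computed before applying the queries.
import torch
import torch.nn.functional as F

def linear_attention(Q, K, V, eps=1e-6):
    """Q, K, V: (batch, N, dim); returns (batch, N, dim) in O(N) memory/time."""
    phi_q = F.elu(Q) + 1                           # non-negative feature map
    phi_k = F.elu(K) + 1
    kv = torch.einsum("bnd,bne->bde", phi_k, V)    # (batch, dim, dim) summary
    z = 1.0 / (torch.einsum("bnd,bd->bn", phi_q, phi_k.sum(dim=1)) + eps)
    return torch.einsum("bnd,bde,bn->bne", phi_q, kv, z)
```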