text | label
---|---
This paper aims to construct a lightweight object detector that takes a depth
image and a color image from a stereo camera as input. Specifically, by
extending the middle of the YOLOv3 network architecture to 3D, the detector can
produce outputs along the depth direction. In addition, Intersection over Union
(IoU) in 3D space is introduced to evaluate the accuracy of the region
extraction results. In the field of deep learning, object detectors that use
distance information as input are actively studied for automated driving
applications. However, conventional detectors have large network structures
that impair real-time performance. The effectiveness of the detector
constructed as described above is verified using datasets. The experiments show
that the proposed model can output 3D bounding boxes and detect people whose
bodies are partially occluded. Further, the model runs at 44.35 fps. | [
"cs.CV"
] |
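The 3D IoU introduced in the abstract above generalizes the usual 2D overlap measure to volumes. Below is a minimal sketch for axis-aligned boxes, assuming a hypothetical (cx, cy, cz, w, h, d) center-size encoding; the abstract does not specify the exact box parameterization.

```python
import numpy as np

def iou_3d(box_a, box_b):
    """Axis-aligned 3D IoU for boxes encoded as (cx, cy, cz, w, h, d)."""
    a_min = np.array(box_a[:3]) - np.array(box_a[3:]) / 2.0
    a_max = np.array(box_a[:3]) + np.array(box_a[3:]) / 2.0
    b_min = np.array(box_b[:3]) - np.array(box_b[3:]) / 2.0
    b_max = np.array(box_b[:3]) + np.array(box_b[3:]) / 2.0

    # Per-axis overlap, clipped at zero when the boxes are disjoint.
    overlap = np.clip(np.minimum(a_max, b_max) - np.maximum(a_min, b_min), 0, None)
    inter = overlap.prod()
    vol_a, vol_b = np.prod(box_a[3:]), np.prod(box_b[3:])
    return inter / (vol_a + vol_b - inter)

# Two unit-offset 2x2x2 boxes share half their extent along x: IoU = 4/12 = 1/3.
print(iou_3d((0, 0, 0, 2, 2, 2), (1, 0, 0, 2, 2, 2)))
```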
We present a new algorithm for single camera 3D reconstruction, or 3D input
for human-computer interfaces, based on precise tracking of an elongated
object, such as a pen, having a pattern of colored bands. To configure the
system, the user provides no more than one labelled image of a handmade
pointer, measurements of its colored bands, and the camera's pinhole projection
matrix. Other systems are of much higher cost and complexity, requiring
combinations of multiple cameras, stereocameras, and pointers with sensors and
lights. Instead of relying on information from multiple devices, we examine our
single view more closely, integrating geometric and appearance constraints to
robustly track the pointer in the presence of occlusion and distractor objects.
By probing objects of known geometry with the pointer, we demonstrate
acceptable accuracy of 3D localization. | [
"cs.CV"
] |
In this paper, we present an implicit feature pyramid network (i-FPN) for
object detection. Existing FPNs stack several cross-scale blocks to obtain a
large receptive field. We propose instead to use an implicit function, recently
introduced in the deep equilibrium model (DEQ), to model the transformation of
FPN, and we develop a residual-like iteration to update the hidden states
efficiently. Experimental results on the MS COCO dataset show that i-FPN can
significantly boost
detection performance compared to baseline detectors with ResNet-50-FPN: +3.4,
+3.2, +3.5, +4.2, +3.2 mAP on RetinaNet, Faster-RCNN, FCOS, ATSS and
AutoAssign, respectively. | [
"cs.CV"
] |
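The residual-like iteration mentioned above can be pictured as a generic fixed-point solver: the hidden state is repeatedly refined until it stops changing. In the sketch below, a toy contraction stands in for the cross-scale FPN transformation; it illustrates the DEQ-style update, not the authors' implementation.

```python
import numpy as np

def fixed_point(f, x, h0, n_iters=50, tol=1e-6):
    """Iterate the residual-like update h <- h + f(h, x) toward equilibrium."""
    h = h0
    for _ in range(n_iters):
        h_next = h + f(h, x)
        if np.linalg.norm(h_next - h) < tol:
            break  # (approximate) equilibrium reached
        h = h_next
    return h

# Toy stand-in for the FPN transformation: f(h, x) = 0.5 * (x - h)
# vanishes exactly at the equilibrium h* = x.
x = np.array([1.0, 2.0, 3.0])
h_star = fixed_point(lambda h, x: 0.5 * (x - h), x, np.zeros(3))
print(h_star)  # approximately [1. 2. 3.]
```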
Recent progress in style transfer on images has focused on improving the
quality of stylized images and speed of methods. However, real-time methods are
highly unstable resulting in visible flickering when applied to videos. In this
work we characterize the instability of these methods by examining the solution
set of the style transfer objective. We show that the trace of the Gram matrix
representing style is inversely related to the stability of the method. Then,
we present a recurrent convolutional network for real-time video style transfer
which incorporates a temporal consistency loss and overcomes the instability of
prior methods. Our networks can be applied at any resolution, do not require
optical flow at test time, and produce high quality, temporally consistent
stylized videos in real-time. | [
"cs.CV"
] |
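The style representation referenced above is the Gram matrix of a layer's features; the abstract ties its trace to the stability of the stylization. A small sketch, assuming a (C, H, W) feature map and the common 1/(H*W) normalization:

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a (C, H, W) feature map: G[i, j] = <F_i, F_j> / (H * W)."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (h * w)

feat = np.random.rand(8, 16, 16)  # stand-in for CNN features of a style image
G = gram_matrix(feat)
# Per the abstract, styles whose Gram matrix has a larger trace tend to be
# more stable under real-time stylization.
print(G.shape, np.trace(G))
```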
In this paper, we study deep generative models for effective unsupervised
learning. We propose VGAN, which works by minimizing a variational lower bound
of the negative log likelihood (NLL) of an energy based model (EBM), where the
model density $p(\mathbf{x})$ is approximated by a variational distribution
$q(\mathbf{x})$ that is easy to sample from. The training of VGAN takes a two
step procedure: given $p(\mathbf{x})$, $q(\mathbf{x})$ is updated to maximize
the lower bound; $p(\mathbf{x})$ is then updated one step with samples drawn
from $q(\mathbf{x})$ to decrease the lower bound. VGAN is inspired by the
generative adversarial networks (GANs), where $p(\mathbf{x})$ corresponds to
the discriminator and $q(\mathbf{x})$ corresponds to the generator, but with
several notable differences. We hence name our model variational GANs (VGANs).
VGAN provides a practical solution to training deep EBMs in high dimensional
space by eliminating the need for MCMC sampling. From this view, we are also
able to identify causes of the difficulty of training GANs and propose viable
solutions. \footnote{Experimental code is available at
https://github.com/Shuangfei/vgan} | [
"cs.LG"
] |
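The two-step procedure above alternates between updating $q$ and $p$. The skeleton below mirrors that structure in runnable form; the update functions and the sampler interface are placeholders standing in for gradient steps on the variational lower bound, not the released code.

```python
# Structural sketch of the VGAN alternating updates (placeholder callbacks).

def train_vgan(p_model, q_model, data_loader, n_epochs, update_q, update_p):
    for _ in range(n_epochs):
        for x in data_loader:
            # Step 1: with p fixed, update q to MAXIMIZE the variational
            # lower bound (q plays the role of the GAN generator).
            update_q(q_model, p_model)
            # Step 2: update p one step on samples drawn from q to DECREASE
            # the lower bound (p plays the role of the discriminator/EBM).
            x_fake = q_model.sample(len(x))
            update_p(p_model, x, x_fake)
    return p_model, q_model
```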
The rapid development of deep learning (DL) has driven single image
super-resolution (SR) into a new era. However, in most existing DL based image
SR networks, the information flows are solely feedforward, and the high-level
features cannot be fully explored. In this paper, we propose the gated multiple
feedback network (GMFN) for accurate image SR, in which the representations of
low-level features are efficiently enriched by rerouting multiple high-level
features. We cascade multiple residual dense blocks (RDBs) and recurrently
unfold them across time. The multiple feedback connections between two
adjacent time steps in the proposed GMFN exploit multiple high-level features
captured under large receptive fields to refine the low-level features that
lack sufficient contextual information. The elaborately designed gated feedback
module (GFM) efficiently selects and further enhances useful information from
the multiple rerouted high-level features, and then refines the low-level
features with the
enhanced high-level information. Extensive experiments demonstrate the
superiority of our proposed GMFN against state-of-the-art SR methods in terms
of both quantitative metrics and visual quality. Code is available at
https://github.com/liqilei/GMFN. | [
"cs.CV"
] |
Learning sparse feature representations is a useful instrument for solving an
unsupervised learning problem. In this paper, we present three labeled
handwritten digit datasets, collectively called n-MNIST. Then, we propose a
novel framework for the classification of handwritten digits that learns sparse
representations using probabilistic quadtrees and Deep Belief Nets. On the
MNIST and n-MNIST datasets, our framework shows promising results and
significantly outperforms traditional Deep Belief Networks. | [
"cs.CV"
] |
Recently, significant progress has been made in single-view depth estimation
thanks to increasingly large and diverse depth datasets. However, these
datasets are largely limited to specific application domains (e.g. indoor,
autonomous driving) or static in-the-wild scenes due to hardware constraints or
technical limitations of 3D reconstruction. In this paper, we introduce DynOcc,
the first depth dataset consisting of dynamic in-the-wild scenes. Our
approach leverages the occlusion cues in these dynamic scenes to infer depth
relationships between points of selected video frames. To achieve accurate
occlusion detection and depth order estimation, we employ a novel occlusion
boundary detection, filtering and thinning scheme followed by a robust
foreground/background classification method. In total, our DynOcc dataset
contains 22M depth pairs from 91K frames of a diverse set of videos. Using
our dataset we achieve state-of-the-art results measured in weighted human
disagreement rate (WHDR). We also show that depth maps inferred by models
trained with DynOcc preserve sharper depth boundaries. | [
"cs.CV"
] |
Visualizing features in deep neural networks (DNNs) can help us understand
their computations. Many previous studies aimed to visualize the selectivity of
individual units by finding meaningful images that maximize their activation.
However, comparably little attention has been paid to visualizing the image
transformations to which units in DNNs are invariant. Here we propose a method to
discover invariances in the responses of hidden layer units of deep neural
networks. Our approach is based on simultaneously searching for a batch of
images that strongly activate a unit while at the same time being as distinct
from each other as possible. We find that even early convolutional layers in
VGG-19 exhibit various forms of response invariance: near-perfect phase
invariance in some units and invariance to local diffeomorphic transformations
in others. At the same time, we uncover representational differences with
ResNet-50 in its corresponding layers. We conclude that invariance
transformations are a major computational component learned by DNNs and we
provide a systematic method to study them. | [
"cs.CV"
] |
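The search described above optimizes a whole batch of images to activate one unit strongly while staying mutually distinct. A toy PyTorch sketch, where the image mean stands in for a hidden unit's response and mean pairwise L2 distance stands in for the diversity term (both stand-ins are assumptions; the exact choices may differ):

```python
import torch

torch.manual_seed(0)

def unit(x):
    # Stand-in "hidden unit": the mean response over pixels and channels.
    return x.mean(dim=(1, 2, 3))

imgs = torch.randn(4, 3, 8, 8, requires_grad=True)
opt = torch.optim.Adam([imgs], lr=0.1)

for _ in range(100):
    activation = unit(imgs).mean()               # all images excite the unit
    flat = imgs.flatten(1)
    diversity = torch.cdist(flat, flat).mean()   # ...while staying distinct
    loss = -(activation + 0.1 * diversity)
    opt.zero_grad()
    loss.backward()
    opt.step()

print(unit(imgs))  # four strong activations reached by four different images
```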
Mobility datasets are fundamental for evaluating algorithms pertaining to
geographic information systems and facilitating experimental reproducibility.
However, privacy implications restrict sharing such datasets, as even aggregated
location data is vulnerable to membership inference attacks. Current synthetic
mobility dataset generators attempt to superficially match a priori modeled
mobility characteristics which do not accurately reflect the real-world
characteristics. Modeling human mobility to generate synthetic yet semantically
and statistically realistic trajectories is therefore crucial for publishing
trajectory datasets having satisfactory utility level while preserving user
privacy. Specifically, long-range dependencies inherent to human mobility are
challenging to capture with both discriminative and generative models. In this
paper, we benchmark the performance of recurrent neural architectures (RNNs),
generative adversarial networks (GANs) and nonparametric copulas to generate
synthetic mobility traces. We evaluate the generated trajectories with respect
to their geographic and semantic similarity, circadian rhythms, long-range
dependencies, and training and generation time. We also include two-sample tests to
assess statistical similarity between the observed and simulated distributions,
and we analyze the privacy tradeoffs with respect to membership inference and
location-sequence attacks. | [
"cs.LG",
"stat.ML"
] |
Recently, Semi-Supervised Learning (SSL) has shown much promise in leveraging
unlabeled data while being provided with very few labels. In this paper, we
show that ignoring the labels altogether for whole epochs intermittently during
training can significantly improve performance in the small sample regime. More
specifically, we propose to train a network on two tasks jointly. The primary
classification task is exposed to both the unlabeled and the scarcely annotated
data, whereas the secondary task seeks to cluster the data without any labels.
As opposed to hand-crafted pretext tasks frequently used in self-supervision,
our clustering phase utilizes the same classification network and head in an
attempt to relax the primary task and propagate the information from the labels
without overfitting them. On top of that, the self-supervised technique of
classifying image rotations is incorporated during the unsupervised learning
phase to stabilize training. We demonstrate our method's efficacy in boosting
several state-of-the-art SSL algorithms, significantly improving their results
and reducing running time in various standard semi-supervised benchmarks,
including 92.6% accuracy on CIFAR-10 and 96.9% on SVHN, using only 4 labels per
class in each task. We also notably improve the results in the extreme cases of
1, 2, and 3 labels per class, and show that the features learned by our model are
more meaningful for separating the data. | [
"cs.CV",
"cs.LG"
] |
Short-term road traffic prediction (STTP) is one of the most important
modules in Intelligent Transportation Systems (ITS). However, network-level
STTP remains challenging due to the difficulties in both modeling the
diverse traffic patterns and tackling high-dimensional time series with low
latency. Therefore, in this paper a framework incorporating a deep clustering
(DeepCluster) module is developed for STTP in large-scale networks. The
DeepCluster module is proposed to supervise the representation learning in a
visualized way from the large unlabeled dataset. More specifically, to fully
exploit the traffic periodicity, the raw series is first split into a number of
sub-series for triplet generation. Convolutional neural networks (CNNs)
with triplet loss are utilized to extract shape features by transforming
the series into visual images. The shape-based representations are then used
for road segment clustering. Thereafter, motivated by the fact that the road
segments in a group have similar patterns, a model sharing strategy is further
proposed to build recurrent NN (RNN)-based predictions through a group-based
model (GM), instead of an individual-based model (IM) in which one model is
built for one road exclusively. Our framework can not only significantly reduce the
number of models and cost, but also increase the number of training data and
the diversity of samples. In the end, we evaluate the proposed framework over
the network of Liuli Bridge in Beijing. Experimental results show that the
DeepCluster can effectively cluster the road segments and GM can achieve
comparable performance against the IM with fewer models. | [
"cs.LG",
"stat.ML"
] |
Shadow removal is an essential task for scene understanding. Many studies
consider only matching the image contents, which often causes two types of
ghosts: color inconsistencies in shadow regions or artifacts on shadow
boundaries. In this paper, we tackle these issues in two ways. First, to
carefully learn the border artifacts-free image, we propose a novel network
structure named the dual hierarchically aggregation network~(DHAN). It contains
a series of growing dilated convolutions as the backbone without any
downsampling, and we hierarchically aggregate multi-context features for
attention and prediction, respectively. Second, we argue that training on a
limited dataset restricts the textural understanding of the network, which
leads to color inconsistencies in shadow regions. Currently, the largest
dataset contains 2k+ shadow/shadow-free image pairs. However, it has only 0.1k+
unique scenes since many samples share exactly the same background with
different shadow positions. Thus, we design a shadow matting generative
adversarial network~(SMGAN) to synthesize realistic shadow mattings from a
given shadow mask and shadow-free image. With the help of novel masks or
scenes, we enhance the current datasets using synthesized shadow images.
Experiments show that our DHAN can erase the shadows and produce high-quality
ghost-free images. After training on the synthesized and real datasets, our
network outperforms other state-of-the-art methods by a large margin. The code
is available at: http://github.com/vinthony/ghost-free-shadow-removal/ | [
"cs.CV"
] |
The multi-shot pedestrian re-identification problem is at the core of
surveillance video analysis. It matches two tracks of pedestrians from
different cameras. In contrast to existing works that aggregate single-frame
features with time series models such as recurrent neural networks, in this
paper we propose an interpretable reinforcement learning based approach to this
problem. Particularly, we train an agent to verify a pair of images at each
time step. The agent can choose to output the result (same or different) or
request another pair of images to verify (unsure). In this way, our model
implicitly learns the difficulty of image pairs and postpones the decision when
it has not accumulated enough evidence. Moreover, by adjusting the reward for
the unsure action, we can easily trade off between speed and accuracy. On three
open benchmarks, our method is competitive with the state-of-the-art
methods while using only 3% to 6% of the images. These promising results demonstrate
that our method is favorable in both efficiency and performance. | [
"cs.CV",
"cs.NE"
] |
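The speed/accuracy trade-off above is governed entirely by the reward attached to the unsure action. A minimal sketch with illustrative (assumed) values:

```python
def reward(action, pair_is_same):
    """Reward for one verification step. The 'unsure' action requests
    another image pair; its small negative reward is the knob that trades
    speed against accuracy. All values here are illustrative assumptions."""
    if action == "unsure":
        return -0.1  # mild cost: postponing uses more pairs (slower)
    correct = (action == "same") == pair_is_same
    return 1.0 if correct else -1.0

# Making 'unsure' cheaper encourages gathering more evidence (accuracy up,
# speed down); making it costlier does the opposite.
print(reward("unsure", True), reward("same", True), reward("different", True))
```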
We present a learning-based method for 6 DoF pose estimation of rigid objects
in point cloud data. Many recent learning-based approaches use primarily RGB
information for detecting objects, in some cases with an added refinement step
using depth data. Our method consumes unordered point sets with/without RGB
information, from initial detection to the final transformation estimation
stage. This allows us to achieve accurate pose estimates, in some cases
surpassing state-of-the-art methods trained on the same data. | [
"cs.CV"
] |
In this work we propose a batch Bayesian optimization method for
combinatorial problems on permutations, which is well suited for expensive cost
functions on permutations. We introduce LAW, a new efficient batch acquisition
method based on the determinantal point process, using an acquisition weighted
kernel. Relying on multiple parallel evaluations, LAW accelerates the search
for the optimal permutation. We provide a regret analysis for our method to
gain insight into its theoretical properties. We then apply the framework to
permutation problems, which have so far received little attention in the
Bayesian Optimization literature, despite their practical importance. We call
this method LAW2ORDER. We evaluate the method on several standard combinatorial
problems involving permutations such as quadratic assignment, flowshop
scheduling and the traveling salesman, as well as on a structure learning task. | [
"stat.ML",
"cs.LG"
] |
Despite the recent progress of generative adversarial networks (GANs) at
synthesizing photo-realistic images, producing complex urban scenes remains a
challenging problem. Previous works break down scene generation into two
consecutive phases: unconditional semantic layout synthesis and image synthesis
conditioned on layouts. In this work, we propose to condition layout generation
as well for higher semantic control: given a vector of class proportions, we
generate layouts with matching composition. To this end, we introduce a
conditional framework with novel architecture designs and learning objectives,
which effectively accommodates class proportions to guide the scene generation
process. The proposed architecture also allows partial layout editing with
interesting applications. Thanks to the semantic control, we can produce
layouts close to the real distribution, helping enhance the whole scene
generation process. On different metrics and urban scene benchmarks, our models
outperform existing baselines. Moreover, we demonstrate the merit of our
approach for data augmentation: semantic segmenters trained on real
layout-image pairs along with additional ones generated by our approach
outperform models only trained on real pairs. | [
"cs.CV"
] |
Mobile robots that manipulate their environments require high-accuracy scene
understanding at close range. Typically this understanding is achieved with
RGBD cameras, but the evaluation process for selecting an appropriate RGBD
camera for the application is minimally quantitative. Limited
manufacturer-published metrics do not translate to observed quality in
real-world cluttered environments, since quality is application-specific. To
bridge the gap, we present a method for quantitatively measuring depth quality
using a set of extendable 3D printed fixtures that approximate real-world
conditions. By framing depth quality as point cloud density and root mean
square error (RMSE) from a known geometry, we present a method that is
extendable by other system integrators for custom environments. We show a
comparison of 3 cameras and present a case study for camera selection, provide
reference meshes and analysis code, and discuss further extensions. | [
"cs.CV",
"cs.RO"
] |
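The abstract above frames depth quality as point cloud density plus RMSE from a known geometry. A minimal sketch, assuming the simplest possible fixture, a flat plane at a known height, so that point-to-surface error reduces to the z residual:

```python
import numpy as np

def rmse_to_plane(points, plane_z=0.0):
    """RMSE of measured points against a known planar fixture at z = plane_z."""
    residuals = points[:, 2] - plane_z
    return np.sqrt(np.mean(residuals ** 2))

# Simulated depth measurements of a 0.2 m x 0.2 m flat target with 2 mm noise.
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(-0.1, 0.1, (1000, 2)),
                       rng.normal(0.0, 0.002, 1000)])
print(f"RMSE: {rmse_to_plane(pts) * 1000:.2f} mm")       # close to 2 mm
print(f"density: {len(pts) / (0.2 * 0.2):.0f} pts/m^2")  # the other metric
```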
Generative Adversarial Networks (GANs) have revolutionized image synthesis
through many applications like face generation, photograph editing, and image
super-resolution. Image synthesis using GANs has predominantly been uni-modal,
with few approaches that can synthesize images from text or other data modes.
Text-to-image synthesis, especially text-to-face synthesis, has promising use
cases such as robust face generation from eyewitness accounts and augmentation
of the reading experience with visual cues. However, only a couple of datasets
provide consolidated face data and textual descriptions for text-to-face
synthesis. Moreover, these textual annotations are not very extensive or
descriptive, which reduces the diversity of the faces generated from them. This paper
empirically proves that increasing the number of facial attributes in each
textual description helps GANs generate more diverse and real-looking faces. To
prove this, we propose a new methodology that focuses on using structured
textual descriptions. We also consolidate a Multi-Attributed and Structured
Text-to-face (MAST) dataset consisting of high-quality images with structured
textual annotations and make it available to researchers to experiment and
build upon. Lastly, we report benchmark Fréchet Inception Distance (FID),
Facial Semantic Similarity (FSS), and Facial Semantic Distance (FSD) scores for
the MAST dataset. | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
This paper proposes a simple yet effective method for human action
recognition in video. The proposed method separately extracts local appearance
and motion features using state-of-the-art three-dimensional convolutional
neural networks from sampled snippets of a video. These local features are then
concatenated to form global representations, which are then used to train a
linear SVM to perform action classification using the full context of the
video, as opposed to the partial context used in previous works. The videos
undergo two simple proposed preprocessing techniques: optical flow scaling and
crop filling. We perform an extensive evaluation on three common benchmark
datasets to empirically show the benefit of the SVM and the two preprocessing
steps. | [
"cs.CV",
"cs.LG",
"stat.ML"
] |
Fine-grained visual classification (FGVC) aims to classify sub-classes of
objects in the same super-class (e.g., species of birds, models of cars). For
the FGVC tasks, the essential solution is to find discriminative subtle
information about the target in local regions. Traditional FGVC models prefer
to use refined features, i.e., high-level semantic information, for
recognition, and rarely use low-level information. However, it turns out that
low-level information, which contains rich detail, also helps to improve
performance. Therefore, in this paper, we propose a cross-layer
navigation convolutional neural network for feature fusion. First, the feature
maps extracted by the backbone network are fed into a convolutional long
short-term memory model sequentially from high-level to low-level to perform
feature aggregation. Then, attention mechanisms are used after feature fusion
to extract spatial and channel information while linking the high-level
semantic information and the low-level texture features, which can better
locate the discriminative regions for FGVC. In the experiments, three
commonly used FGVC datasets, CUB-200-2011, Stanford-Cars, and FGVC-Aircraft,
are used for evaluation, and we demonstrate the superiority of the proposed
method by comparing it with other FGVC methods. | [
"cs.CV",
"14J60 (Primary) 14F05, 14J26 (Secondary)",
"F.2.2; I.2.7"
] |
There is a growing desire in the field of reinforcement learning (and machine
learning in general) to move from black-box models toward more "interpretable
AI." We improve interpretability of reinforcement learning by increasing the
utility of decision tree policies learned via reinforcement learning. These
policies consist of a decision tree over the state space, which requires fewer
parameters to express than traditional policy representations. Existing methods
for creating decision tree policies via reinforcement learning focus on
accurately representing an action-value function during training, but this
leads to much larger trees than would otherwise be required. To address this
shortcoming, we propose a novel algorithm which only increases tree size when
the estimated discounted future reward of the overall policy would increase by
a sufficient amount. Through evaluation in a simulated environment, we show
that its performance is comparable or superior to traditional tree-based
approaches and that it yields a more succinct policy. Additionally, we discuss
tuning parameters to control the tradeoff between optimizing for smaller tree
size and optimizing for overall reward. | [
"cs.LG",
"cs.AI",
"cs.RO"
] |
We introduce Dynamic Deep Neural Networks (D2NN), a new type of feed-forward
deep neural network that allows selective execution. Given an input, only a
subset of D2NN neurons are executed, and the particular subset is determined by
the D2NN itself. By pruning unnecessary computation depending on input, D2NNs
provide a way to improve computational efficiency. To achieve dynamic selective
execution, a D2NN augments a feed-forward deep neural network (directed acyclic
graph of differentiable modules) with controller modules. Each controller
module is a sub-network whose output is a decision that controls whether other
modules can execute. A D2NN is trained end to end. Both regular and controller
modules in a D2NN are learnable and are jointly trained to optimize both
accuracy and efficiency. Such training is achieved by integrating
backpropagation with reinforcement learning. With extensive experiments of
various D2NN architectures on image classification tasks, we demonstrate that
D2NNs are general and flexible, and can effectively optimize
accuracy-efficiency trade-offs. | [
"cs.LG",
"stat.ML"
] |
We introduce a new multi-dimensional nonlinear embedding -- Piecewise Flat
Embedding (PFE) -- for image segmentation. Based on the theory of sparse signal
recovery, piecewise flat embedding with diverse channels attempts to recover a
piecewise constant image representation with sparse region boundaries and
sparse cluster value scattering. The resultant piecewise flat embedding
exhibits interesting properties such as suppressing slowly varying signals, and
offers an image representation with higher region identifiability which is
desirable for image segmentation or high-level semantic analysis tasks. We
formulate our embedding as a variant of the Laplacian Eigenmap embedding with
an $L_{1,p} (0<p\leq1)$ regularization term to promote sparse solutions. First,
we devise a two-stage numerical algorithm based on Bregman iterations to
compute $L_{1,1}$-regularized piecewise flat embeddings. We further generalize
this algorithm through iterative reweighting to solve the general
$L_{1,p}$-regularized problem. To demonstrate its efficacy, we integrate PFE
into two existing image segmentation frameworks, segmentation based on
clustering and hierarchical segmentation based on contour detection.
Experiments on four major benchmark datasets, BSDS500, MSRC, Stanford
Background Dataset, and PASCAL Context, show that segmentation algorithms
incorporating our embedding achieve significantly improved results. | [
"cs.CV",
"eess.IV",
"stat.ML"
] |
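The iterative reweighting mentioned above is a general device for reducing an $L_{1,p}$ problem to a sequence of weighted $L_1$ problems. The one-dimensional toy below shows the idea on a separable denoising objective; the actual embedding problem also involves the graph Laplacian and orthogonality constraints, which are omitted here.

```python
import numpy as np

def soft_threshold(y, t):
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

def reweighted_lp_denoise(y, lam=0.5, p=0.5, n_iters=20, eps=1e-3):
    """Approximately solve min_x 0.5 * ||x - y||^2 + lam * sum_i |x_i|^p by
    iterative reweighting: each round is a weighted L1 problem whose
    solution is a (weighted) soft-thresholding step."""
    x = y.copy()
    for _ in range(n_iters):
        w = (np.abs(x) + eps) ** (p - 1.0)  # weights from the current iterate
        x = soft_threshold(y, lam * w)      # closed-form weighted L1 step
    return x

y = np.array([2.0, 0.4, -1.5, 0.05, 0.0])
print(reweighted_lp_denoise(y))  # small entries are driven exactly to zero
```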
How can humans distinguish between general categories of objects? Are the
subcategories of living things visually distinctive? In a number of
semantic-category deficits, patients are good at making broad categorizations
but are unable to remember fine and specific details. It is well accepted
that general information about concepts is more robust to damage related to
semantic memory. Results from patients with semantic memory disorders
demonstrate the loss of the ability to recognize subcategories. While bottom-up
feature construction has been studied in detail, little attention has been paid
to the top-down approach and the type of features that could account for
general categorization. In this paper, we show that the broad categories of
animals and plants are visually distinguishable without processing textural
information. To this end, we utilize shape descriptors with an additional phase
of feature
learning. The results are evaluated with both supervised and unsupervised
learning mechanisms. The obtained results demonstrate that global encoding of
visual appearance of objects accounts for high discrimination between animal
and plant object categories. | [
"cs.CV",
"cs.AI"
] |
Graphs have been widely used in data mining and machine learning due to their
unique representation of real-world objects and their interactions. As graphs
become larger and larger nowadays, it is common to see their subgraphs
separately collected and stored in multiple local systems. Therefore, it is
natural to consider the subgraph federated learning setting, where each local
system holds a small subgraph whose distribution may be biased relative to that
of the whole graph. Subgraph federated learning thus aims to collaboratively
train a powerful and generalizable graph mining model without directly sharing
graph data. In this work, for the novel yet realistic setting of
subgraph federated learning, we propose two major techniques: (1) FedSage,
which trains a GraphSage model based on FedAvg to integrate node features, link
structures, and task labels on multiple local subgraphs; (2) FedSage+, which
trains a missing neighbor generator alongside FedSage to deal with missing links
across local subgraphs. Empirical results on four real-world graph datasets
with synthesized subgraph federated learning settings demonstrate the
effectiveness and efficiency of our proposed techniques. At the same time, we
provide consistent theoretical implications regarding their generalization
ability on the global graph. | [
"cs.LG",
"cs.SI"
] |
We consider the problem of segmenting image regions given a natural language
phrase, and study it on a novel dataset of 77,262 images and 345,486
phrase-region pairs. Our dataset is collected on top of the Visual Genome
dataset and uses the existing annotations to generate a challenging set of
referring phrases for which the corresponding regions are manually annotated.
Phrases in our dataset correspond to multiple regions and describe a large
number of object and stuff categories as well as their attributes such as
color, shape, parts, and relationships with other entities in the image. Our
experiments show that the scale and diversity of concepts in our dataset poses
significant challenges to the existing state-of-the-art. We systematically
handle the long-tail nature of these concepts and present a modular approach to
combine category, attribute, and relationship cues that outperforms existing
approaches. | [
"cs.CV"
] |
Localizing functional regions of objects or affordances is an important
aspect of scene understanding. In this work, we cast the problem of affordance
segmentation as that of semantic image segmentation. In order to explore
various levels of supervision, we introduce a pixel-annotated affordance
dataset of 3090 images containing 9916 object instances with rich contextual
information in terms of human-object interactions. We use a deep convolutional
neural network within an expectation maximization framework to take advantage
of weakly labeled data like image level annotations or keypoint annotations. We
show that a further reduction in supervision is possible with a minimal loss in
performance when human pose is used as context. | [
"cs.CV"
] |
Comma.ai's approach to Artificial Intelligence for self-driving cars is based
on an agent that learns to clone driver behaviors and plans maneuvers by
simulating future events on the road. This paper illustrates one of our
research approaches for driving simulation: one where we learn to simulate.
Here we investigate variational autoencoders with classical and learned cost
functions using generative adversarial networks for embedding road frames.
Afterwards, we learn a transition model in the embedded space using action
conditioned Recurrent Neural Networks. We show that our approach can keep
predicting realistic looking video for several frames despite the transition
model being optimized without a cost function in the pixel space. | [
"cs.LG",
"stat.ML"
] |
Parametric causal modelling techniques rarely provide functionality for
counterfactual estimation, often at the expense of modelling complexity. Since
causal estimations depend on the family of functions used to model the data,
simplistic models could entail imprecise characterizations of the generative
mechanism, and, consequently, unreliable results. This limits their
applicability to real-life datasets, with non-linear relationships and high
interaction between variables. We propose Deep Causal Graphs, an abstract
specification of the required functionality for a neural network to model
causal distributions, and provide a model that satisfies this contract:
Normalizing Causal Flows. We demonstrate its expressive power in modelling
complex interactions and showcase applications of the method to machine
learning explainability and fairness, using true causal counterfactuals. | [
"stat.ML",
"cs.LG",
"cs.NE"
] |
Graph convolutional neural networks (GCNs) embed nodes in a graph into
Euclidean space, which has been shown to incur a large distortion when
embedding real-world graphs with scale-free or hierarchical structure.
Hyperbolic geometry offers an exciting alternative, as it enables embeddings
with much smaller distortion. However, extending GCNs to hyperbolic geometry
presents several unique challenges because it is not clear how to define neural
network operations, such as feature transformation and aggregation, in
hyperbolic space. Furthermore, since input features are often Euclidean, it is
unclear how to transform the features into hyperbolic embeddings with the right
amount of curvature. Here we propose Hyperbolic Graph Convolutional Neural
Network (HGCN), the first inductive hyperbolic GCN that leverages both the
expressiveness of GCNs and hyperbolic geometry to learn inductive node
representations for hierarchical and scale-free graphs. We derive GCN
operations in the hyperboloid model of hyperbolic space and map Euclidean input
features to embeddings in hyperbolic spaces with different trainable curvature
at each layer. Experiments demonstrate that HGCN learns embeddings that
preserve hierarchical structure, and leads to improved performance when
compared to Euclidean analogs, even with very low dimensional embeddings:
compared to state-of-the-art GCNs, HGCN achieves an error reduction of up to
63.1% in ROC AUC for link prediction and of up to 47.5% in F1 score for node
classification, also improving the state of the art on the Pubmed dataset. | [
"cs.LG",
"stat.ML"
] |
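Mapping Euclidean input features into hyperbolic space, as described above, can be done with the exponential map at the hyperboloid origin. A fixed-curvature sketch (HGCN additionally learns a trainable curvature per layer, which this simplification drops):

```python
import numpy as np

def expmap_origin(v):
    """Exponential map at the hyperboloid origin o = (1, 0, ..., 0) with
    curvature -1: lifts a Euclidean feature v, viewed as a tangent vector
    at o, onto the hyperboloid {x : -x0^2 + sum_i xi^2 = -1, x0 > 0}."""
    norm = np.linalg.norm(v)
    if norm < 1e-12:
        return np.concatenate([[1.0], np.zeros_like(v)])
    return np.concatenate([[np.cosh(norm)], np.sinh(norm) * v / norm])

x = expmap_origin(np.array([0.3, -0.2]))
print(x)
# Verify the hyperboloid constraint under the Minkowski inner product.
print(-x[0] ** 2 + np.sum(x[1:] ** 2))  # approximately -1.0
```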
In this paper, we tackle the problem of colorization of grayscale videos to
reduce bandwidth usage. For this task, we use some colored keyframes as
reference images from the colored version of the grayscale video. We propose a
model that extracts keyframes from a colored video and trains a Convolutional
Neural Network from scratch on these colored frames. Through the extracted
keyframes we get a good knowledge of the colors that have been used in the
video which helps us in colorizing the grayscale version of the video
efficiently. An application of the technique that we propose in this paper, is
in saving bandwidth while sending raw colored videos that have not gone through
any compression. A raw colored video takes up around three times more memory
than its grayscale version. We can exploit this fact and send a grayscale
video along with our trained model instead of a colored video. Later in this
paper we show how this technique can reduce bandwidth usage by up to a factor
of three when transmitting raw colored videos. | [
"cs.CV"
] |
In this paper, we propose spatial propagation networks for learning the
affinity matrix for vision tasks. We show that by constructing a row/column
linear propagation model, the spatially varying transformation matrix exactly
constitutes an affinity matrix that models dense, global pairwise relationships
of an image. Specifically, we develop a three-way connection for the linear
propagation model, which (a) formulates a sparse transformation matrix, where
all elements can be the output from a deep CNN, but (b) results in a dense
affinity matrix that effectively models any task-specific pairwise similarity
matrix. Instead of designing the similarity kernels according to image features
of two points, we can directly output all the similarities in a purely
data-driven manner. The spatial propagation network is a generic framework that
can be applied to many affinity-related tasks, including but not limited to
image matting, segmentation and colorization, to name a few. Essentially, the
model can learn semantically-aware affinity values for high-level vision tasks
due to the powerful learning capability of the deep neural network classifier.
We validate the framework on the task of refinement for image segmentation
boundaries. Experiments on the HELEN face parsing and PASCAL VOC-2012 semantic
segmentation tasks show that the spatial propagation network provides a
general, effective and efficient solution for generating high-quality
segmentation results. | [
"cs.CV",
"cs.LG"
] |
Softening labels of training datasets with respect to data representations
has been frequently used to improve the training of deep neural networks
(DNNs). While such a practice has been studied as a way to leverage privileged
information about the distribution of the data, a well-trained learner with
soft classification outputs should be first obtained as a prior to generate
such privileged information. To solve this chicken-and-egg problem, we propose
the COLAM framework, which Co-Learns DNNs and soft labels through Alternating
Minimization of two objectives - (a) the training loss subject to soft labels
and (b) the objective to learn improved soft labels - in one end-to-end
training procedure. We performed extensive experiments to compare our proposed
method with a series of baselines. The experiment results show that COLAM
achieves improved performance on many tasks with better testing classification
accuracy. We also provide both qualitative and quantitative analyses that
explain why COLAM works well. | [
"cs.LG",
"stat.ML"
] |
Improving the aesthetic quality of images is challenging and eagerly sought
after by the public. To address this problem, most existing algorithms are
based on
supervised learning methods to learn an automatic photo enhancer for paired
data, which consists of low-quality photos and corresponding expert-retouched
versions. However, the style and characteristics of photos retouched by experts
may not meet the needs or preferences of general users. In this paper, we
present an unsupervised image enhancement generative adversarial network
(UEGAN), which learns the corresponding image-to-image mapping from a set of
images with desired characteristics in an unsupervised manner, rather than
learning on a large number of paired images. The proposed model is based on a
single deep GAN which embeds modulation and attention mechanisms to capture
richer global and local features. Based on the proposed model, we introduce two
losses to deal with unsupervised image enhancement: (1) a fidelity loss,
defined as an L2 regularization in the feature domain of a pre-trained
VGG network, to ensure that the enhanced image preserves the content of the
input image, and (2) a quality loss, formulated as a relativistic hinge
adversarial loss, to endow the input image with the desired characteristics.
Both quantitative and qualitative results show that the proposed model
effectively improves the aesthetic quality of images. Our code is available at:
https://github.com/eezkni/UEGAN. | [
"cs.CV"
] |
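The fidelity loss described above is an L2 distance between feature maps of the input and the enhanced image. In the sketch below, a small frozen convolutional stack stands in for the pre-trained VGG so the example runs without downloading weights; that substitution is an assumption for illustration only.

```python
import torch
import torch.nn as nn

class FidelityLoss(nn.Module):
    """L2 loss in the feature domain of a fixed network (stand-in for VGG)."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        for p in self.features.parameters():
            p.requires_grad_(False)  # the feature extractor stays frozen

    def forward(self, enhanced, original):
        return torch.mean((self.features(enhanced) - self.features(original)) ** 2)

loss_fn = FidelityLoss()
x, y = torch.rand(2, 3, 32, 32), torch.rand(2, 3, 32, 32)
print(loss_fn(y, x))  # penalizes content drift between input and output
```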
Convolutional neural network (CNN) based architectures, such as Mask R-CNN,
constitute the state of the art in object detection and segmentation. Recently,
these methods have been extended for model-based segmentation where the network
outputs the parameters of a geometric model (e.g. an ellipse) directly. This
work considers objects whose three-dimensional models can be represented as
ellipsoids. We present a variant of Mask R-CNN for estimating the parameters of
ellipsoidal objects by segmenting each object and accurately regressing the
parameters of projection ellipses. We show that model regression is sensitive
to the underlying occlusion scenario and that prediction quality for each
object needs to be characterized individually for accurate 3D object
estimation. We present a novel ellipse regression loss which can learn the
offset parameters with their uncertainties and quantify the overall geometric
quality of detection for each ellipse. These values, in turn, allow us to fuse
multi-view detections to obtain 3D ellipsoid parameters in a principled
fashion. The experiments on both synthetic and real datasets quantitatively
demonstrate the high accuracy of our proposed method in estimating 3D objects
under heavy occlusions compared to previous state-of-the-art methods. | [
"cs.CV",
"cs.RO"
] |
Recent advances in object detection are mainly driven by deep learning with
large-scale detection benchmarks. However, the fully-annotated training set is
often limited for a target detection task, which may deteriorate the
performance of deep detectors. To address this challenge, we propose a novel
low-shot transfer detector (LSTD) in this paper, where we leverage rich
source-domain knowledge to construct an effective target-domain detector with
very few training examples. The main contributions are described as follows.
First, we design a flexible deep architecture of LSTD to alleviate transfer
difficulties in low-shot detection. This architecture can integrate the
advantages of both SSD and Faster RCNN in a unified deep framework. Second, we
introduce a novel regularized transfer learning framework for low-shot
detection, where the transfer knowledge (TK) and background depression (BD)
regularizations are proposed to leverage object knowledge respectively from
source and target domains, in order to further enhance fine-tuning with a few
target images. Finally, we examine our LSTD on a number of challenging low-shot
detection experiments, where LSTD outperforms other state-of-the-art
approaches. The results demonstrate that LSTD is a preferable deep detector for
low-shot scenarios. | [
"cs.CV"
] |
In order to tackle the difficulty associated with the ill-posed nature of the
image registration problem, researchers use regularization to constrain the
solution space. For most learning-based registration approaches, the
regularization usually has a fixed weight and only constrains the spatial
transformation. This convention has two limitations: (1) The regularization
strength of a specific image pair should be associated with the content of the
images, thus the ``one value fits all'' scheme is not ideal; (2) Only spatially
regularizing the transformation (but overlooking the temporal consistency of
different estimations) may not be the best strategy to cope with the
ill-posedness. In this study, we propose a mean-teacher based registration
framework. This framework incorporates an additional \textit{temporal
regularization} term by encouraging the teacher model's temporal ensemble
prediction to be consistent with that of the student model. At each training
step, it also automatically adjusts the weights of the \textit{spatial
regularization} and the \textit{temporal regularization} by taking into account
the transformation uncertainty and appearance uncertainty derived from the
perturbed teacher model. We perform experiments on multi- and uni-modal
registration tasks, and the results show that our strategy outperforms the
traditional and learning-based benchmark methods. | [
"cs.CV",
"eess.IV"
] |
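The perturbed teacher in the framework above is typically maintained as an exponential moving average of the student, the standard mean-teacher construction. A minimal sketch, with an illustrative decay value:

```python
import torch

@torch.no_grad()
def update_teacher(teacher, student, alpha=0.99):
    """Mean-teacher step: teacher weights become an exponential moving
    average of student weights (alpha is an illustrative decay)."""
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(alpha).add_(s_param, alpha=1.0 - alpha)

student = torch.nn.Linear(4, 2)
teacher = torch.nn.Linear(4, 2)
teacher.load_state_dict(student.state_dict())  # start from identical weights
update_teacher(teacher, student)               # called once per training step
```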
Deep neural network approaches have demonstrated high performance in object
recognition (CNN) and detection (Faster-RCNN) tasks, but experiments have shown
that such architectures are vulnerable to adversarial attacks (FFF, UAP): low
amplitude perturbations, barely perceptible by the human eye, can lead to a
drastic reduction in labeling performance. This article proposes a new context
module, called \textit{Transformer-Encoder Detector Module}, that can be
applied to an object detector to (i) improve the labeling of object instances;
and (ii) improve the detector's robustness to adversarial attacks. The proposed
model achieves mAP, F1, and average AUC scores up to 13\% higher than the
baseline Faster-RCNN detector, and an mAP score 8 points higher on images
subjected to FFF or UAP attacks, thanks to the inclusion of both contextual and
visual features extracted from the scene and encoded into the model.
These results demonstrate that a simple ad-hoc context module can improve the
reliability of object detectors significantly. | [
"cs.CV"
] |
Recent advances in depth imaging sensors provide easy access to depth images
synchronized with color, called RGB-D images. In this paper, we propose an
unsupervised method for indoor RGB-D image segmentation and analysis. We
consider a statistical image generation model based on the color and geometry
of the scene. Our method consists of a joint color-spatial-directional
clustering method followed by a statistical planar region merging method. We
evaluate our method on the NYU depth database and compare it with existing
unsupervised RGB-D segmentation methods. Results show that it is comparable
with state-of-the-art methods while requiring less computation time. Moreover,
it opens interesting perspectives to fuse color and geometry in an unsupervised
manner. | [
"cs.CV"
] |
Autonomous and semi-autonomous systems for safety-critical applications
require rigorous testing before deployment. Due to the complexity of these
systems, formal verification may be impossible and real-world testing may be
dangerous during development. Therefore, simulation-based techniques have been
developed that treat the system under test as a black box during testing.
Safety validation tasks include finding disturbances to the system that cause
it to fail (falsification), finding the most-likely failure, and estimating the
probability that the system fails. Motivated by the prevalence of
safety-critical artificial intelligence, this work provides a survey of
state-of-the-art safety validation techniques with a focus on applied
algorithms and their modifications for the safety validation problem. We
present and discuss algorithms in the domains of optimization, path planning,
reinforcement learning, and importance sampling. Problem decomposition
techniques are presented to help scale algorithms to large state spaces, and a
brief overview of safety-critical applications is given, including autonomous
vehicles and aircraft collision avoidance systems. Finally, we present a survey
of existing academic and commercially available safety validation tools. | [
"cs.LG",
"cs.AI",
"cs.SY",
"eess.SY",
"stat.ML"
] |
Time series data is prevalent in a wide variety of real-world applications
and it calls for trustworthy and explainable models for people to understand
and fully trust decisions made by AI solutions. We consider the problem of
building explainable classifiers from multi-variate time series data. A key
criterion to understand such predictive models involves elucidating and
quantifying the contribution of time varying input variables to the
classification. Hence, we introduce a novel, modular, convolution-based feature
extraction and attention mechanism that simultaneously identifies the variables
as well as time intervals which determine the classifier output. We present
results of extensive experiments with several benchmark data sets that show
that the proposed method outperforms the state-of-the-art baseline methods on
the multi-variate time series classification task. The results of our case studies
demonstrate that the variables and time intervals identified by the proposed
method make sense relative to available domain knowledge. | [
"cs.LG"
] |
The industrial machine learning pipeline requires iterating on model
features, training and deploying models, and monitoring deployed models at
scale. Feature stores were developed to manage and standardize the engineer's
workflow in this end-to-end pipeline, focusing on traditional tabular feature
data. In recent years, however, model development has shifted towards using
self-supervised pretrained embeddings as model features. Managing these
embeddings and the downstream systems that use them introduces new challenges
with respect to managing embedding training data, measuring embedding quality,
and monitoring downstream models that use embeddings. These challenges are
largely unaddressed in standard feature stores. Our goal in this tutorial is to
introduce the feature store system and discuss the challenges and current
solutions to managing these new embedding-centric pipelines. | [
"cs.LG",
"cs.DB"
] |
Object recognition techniques using convolutional neural networks (CNN) have
achieved great success. However, state-of-the-art object detection methods
still perform poorly on large vocabulary and long-tailed datasets, e.g. LVIS.
In this work, we analyze this problem from a novel perspective: each positive
sample of one category can be seen as a negative sample for other categories,
making the tail categories receive more discouraging gradients. Based on this
observation, we
propose a simple but effective loss, named equalization loss, to tackle the
problem of long-tailed rare categories by simply ignoring those gradients for
rare categories. The equalization loss protects the learning of rare categories
from being at a disadvantage during the network parameter updating. Thus the
model is capable of learning better discriminative features for objects of rare
classes. Without any bells and whistles, our method achieves AP gains of 4.1%
and 4.8% for the rare and common categories on the challenging LVIS benchmark,
compared to the Mask R-CNN baseline. With the utilization of the effective
equalization loss, we finally won the 1st place in the LVIS Challenge 2019.
Code has been made available at: https://github.com/tztztztztz/eql.detectron2 | [
"cs.CV"
] |
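The core idea above, protecting rare categories from the discouraging gradients of negative samples, can be sketched as a masked binary cross-entropy. This simplified version drops details such as the paper's frequency threshold and its foreground-proposal rule:

```python
import torch
import torch.nn.functional as F

def equalization_loss(logits, targets, is_rare):
    """Per-category BCE where the negative-sample term is ignored for rare
    categories (`is_rare` is a bool mask derived from category frequency).
    A simplified sketch, not the paper's full formulation."""
    per_class = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    # Zero weight where the sample is a negative for a rare category.
    w = 1.0 - (targets == 0).float() * is_rare.float()
    return (w * per_class).sum() / logits.shape[0]

logits = torch.randn(4, 6)  # 4 proposals, 6 categories
targets = F.one_hot(torch.tensor([0, 2, 5, 2]), 6).float()
is_rare = torch.tensor([False, False, True, False, False, True])
print(equalization_loss(logits, targets, is_rare))
```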
Smart and agile drones are fast becoming ubiquitous at the edge of the cloud.
The usage of these drones is constrained by their limited power and compute
capability. In this paper, we present a Transfer Learning (TL) based approach
to reduce on-board computation required to train a deep neural network for
autonomous navigation via Deep Reinforcement Learning for a target algorithmic
performance. A library of 3D realistic meta-environments is manually designed
using Unreal Gaming Engine and the network is trained end-to-end. These trained
meta-weights are then used as initializers to the network in a test environment
and fine-tuned for the last few fully connected layers. Variation in drone
dynamics and environmental characteristics is carried out to show robustness of
the approach. Using the NVIDIA GPU profiler, we show that energy consumption
and training latency are reduced by 3.7x and 1.8x, respectively, without
significant degradation of performance in terms of average distance traveled
before a crash, i.e., Mean Safe Flight (MSF). The approach is also tested in a
real environment using a DJI Tello drone, and similar results were observed. | [
"cs.LG",
"stat.ML"
] |
Semi-supervised learning has received a lot of recent attention as it
alleviates the need for large amounts of labelled data, which can often be
expensive, require expert knowledge, and be time-consuming to collect. Recent
developments in deep semi-supervised classification have reached unprecedented
performance and the gap between supervised and semi-supervised learning is
ever-decreasing. This improvement in performance has been based on the
inclusion of numerous technical tricks, strong augmentation techniques and
costly optimisation schemes with multi-term loss functions. We propose a new
framework, LaplaceNet, for deep semi-supervised classification that has a
greatly reduced model complexity. We utilise a hybrid energy-neural network
where graph based pseudo-labels, generated by minimising the graphical
Laplacian, are used to iteratively improve a neural-network backbone. Our model
outperforms state-of-the-art methods for deep semi-supervised classification,
over several benchmark datasets. Furthermore, we consider the application of
strong-augmentations to neural networks theoretically and justify the use of a
multi-sampling approach for semi-supervised learning. We demonstrate, through
rigorous experimentation, that a multi-sampling augmentation approach improves
generalisation and reduces the sensitivity of the network to augmentation. | [
"cs.LG"
] |
The task of medical image segmentation commonly involves an image
reconstruction step to convert acquired raw data to images before any analysis.
However, noise, artifacts, and loss of information due to the reconstruction
process are almost inevitable, which compromises the final performance of
segmentation. We present a novel learning framework that performs magnetic
resonance brain image segmentation directly from k-space data. The end-to-end
framework consists of a unique task-driven attention module that recurrently
utilizes intermediate segmentation estimation to facilitate image-domain
feature extraction from the raw data, thus closely bridging the reconstruction
and the segmentation tasks. In addition, to address the challenge of manual
labeling, we introduce a novel workflow to generate labeled training data for
segmentation by exploiting imaging modality simulators and digital phantoms.
Extensive experimental results show that the proposed method outperforms
several state-of-the-art methods. | [
"cs.CV"
] |
There has been renewed recent interest in developing effective lower bounds
for Dynamic Time Warping (DTW) distance between time series. These have many
applications in time series indexing, clustering, forecasting, regression and
classification. One of the key time series classification algorithms, the
nearest neighbor algorithm with DTW distance (NN-DTW) is very expensive to
compute, due to the quadratic complexity of DTW. Lower bound search can speed
up NN-DTW substantially. An effective and tight lower bound quickly prunes off
unpromising nearest neighbor candidates from the search space and minimises the
number of costly DTW computations. The speedup provided by lower bound
search becomes increasingly critical as training set size increases. Different
lower bounds provide different trade-offs between computation time and
tightness. Most existing lower bounds interact with DTW warping window sizes.
They are very tight and effective at smaller warping window sizes, but become
looser as the warping window increases, thus reducing the pruning effectiveness
for NN-DTW. In this work, we present a new class of lower bounds that are
tighter than the popular Keogh lower bound, while requiring similar computation
time. Our new lower bounds take advantage of the DTW boundary condition,
monotonicity and continuity constraints to create a tighter lower bound. Of
particular significance, they remain relatively tight even for large windows. A
single parameter to these new lower bounds controls the speed-tightness
trade-off. We demonstrate that these new lower bounds provide an exceptional
balance between computation time and tightness for the NN-DTW time series
classification task, resulting in greatly improved efficiency for NN-DTW lower
bound search. | [
"cs.LG",
"stat.ML"
] |
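For reference, the sketch below implements LB_Keogh, the popular lower bound that the new bounds above are designed to tighten (the new bounds themselves are not reproduced here). It assumes equal-length series and a Sakoe-Chiba warping window:

```python
import numpy as np

def lb_keogh(query, candidate, window):
    """LB_Keogh lower bound on DTW(query, candidate): sum the squared
    distances from each candidate point to the query's warping envelope."""
    n = len(query)
    lb = 0.0
    for i, c in enumerate(candidate):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        u, l = query[lo:hi].max(), query[lo:hi].min()  # envelope around q_i
        if c > u:
            lb += (c - u) ** 2
        elif c < l:
            lb += (c - l) ** 2
    return np.sqrt(lb)

q = np.array([0.0, 1.0, 2.0, 1.0, 0.0])
c = np.array([0.0, 3.0, 2.0, 1.0, 0.0])
print(lb_keogh(q, c, window=1))  # 1.0: only one point leaves the envelope
```

If this bound already exceeds the best DTW distance found so far, the costly DTW computation for that candidate can be skipped entirely.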
This paper presents a reinforcement learning approach to synthesizing
task-driven control policies for robotic systems equipped with rich sensory
modalities (e.g., vision or depth). Standard reinforcement learning algorithms
typically produce policies that tightly couple control actions to the entirety
of the system's state and rich sensor observations. As a consequence, the
resulting policies can often be sensitive to changes in task-irrelevant
portions of the state or observations (e.g., changing background colors). In
contrast, the approach we present here learns to create a task-driven
representation that is used to compute control actions. Formally, this is
achieved by deriving a policy gradient-style algorithm that creates an
information bottleneck between the states and the task-driven representation;
this constrains actions to only depend on task-relevant information. We
demonstrate our approach in a thorough set of simulation results on multiple
examples including a grasping task that utilizes depth images and a
ball-catching task that utilizes RGB images. Comparisons with a standard policy
gradient approach demonstrate that the task-driven policies produced by our
algorithm are often significantly more robust to sensor noise and
task-irrelevant changes in the environment. | [
"cs.LG",
"cs.RO",
"math.OC",
"stat.ML"
] |
The highly popular game Flappy Bird has been used to train agents with many
algorithms. Some of these studies trained on the raw pixel values of the game
and some on hand-crafted attributes. In this study, the model was trained on
raw game images that it had not seen before. The trained model learned, through
reinforcement, when to make which decision: the reward or penalty at the end of
each step was fed back to the model until training was complete. The Flappy
Bird game was trained with two reinforcement learning algorithms, Deep
Q-Network (DQN) and Asynchronous Advantage Actor-Critic (A3C). | [
"cs.LG",
"cs.NE"
] |
We address the problem of unsupervised classification of players in a team
sport according to their team affiliation, when jersey colours and design are
not known a priori. We adopt a contrastive learning approach in which an
embedding network learns to maximize the distance between representations of
players on different teams relative to players on the same team, in a purely
unsupervised fashion, without any labelled data. We evaluate the approach using
a new hockey dataset and find that it outperforms prior unsupervised approaches
by a substantial margin, particularly for real-time application when only a
small number of frames are available for unsupervised learning before team
assignments must be made. Remarkably, we show that our contrastive method
achieves 94% accuracy after unsupervised training on only a single frame, with
accuracy rising to 97% within 500 frames (17 seconds of game time). We further
demonstrate how accurate team classification allows accurate team-conditional
heat maps of player positioning to be computed. | [
"cs.CV"
] |
We propose a novel attention gate (AG) model for medical imaging that
automatically learns to focus on target structures of varying shapes and sizes.
Models trained with AGs implicitly learn to suppress irrelevant regions in an
input image while highlighting salient features useful for a specific task.
This enables us to eliminate the necessity of using explicit external
tissue/organ localisation modules of cascaded convolutional neural networks
(CNNs). AGs can be easily integrated into standard CNN architectures such as
the U-Net model with minimal computational overhead while increasing the model
sensitivity and prediction accuracy. The proposed Attention U-Net architecture
is evaluated on two large CT abdominal datasets for multi-class image
segmentation. Experimental results show that AGs consistently improve the
prediction performance of U-Net across different datasets and training sizes
while preserving computational efficiency. The code for the proposed
architecture is publicly available. | [
"cs.CV"
] |
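As a rough illustration of the gating idea, the following PyTorch sketch shows
an additive attention gate in the usual Attention U-Net style; channel sizes
are illustrative and the resampling of the gating signal is omitted, so this
should not be read as the authors' released implementation.

import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    # Learns coefficients in [0, 1] that suppress irrelevant regions of
    # the skip-connection features x, given a coarser gating signal g
    # (assumed here to have the same spatial size as x).
    def __init__(self, in_ch, gate_ch, inter_ch):
        super().__init__()
        self.theta_x = nn.Conv2d(in_ch, inter_ch, kernel_size=1)
        self.phi_g = nn.Conv2d(gate_ch, inter_ch, kernel_size=1)
        self.psi = nn.Conv2d(inter_ch, 1, kernel_size=1)

    def forward(self, x, g):
        a = torch.relu(self.theta_x(x) + self.phi_g(g))
        alpha = torch.sigmoid(self.psi(a))  # attention coefficients
        return x * alpha                    # highlight salient features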
The ability to determine what parts of objects and surfaces people touch as
they go about their daily lives would be useful in understanding how the
COVID-19 virus spreads. Determining whether a person has touched an object or
surface using visual data (images or videos) is a hard problem. Computer
vision 3D reconstruction approaches project objects and the human body from the
2D image domain to 3D and perform 3D space intersection directly. However, this
solution would not meet the accuracy requirement in applications due to
projection error. Another standard approach is to train a neural network to
infer touch actions from the collected visual data. This strategy would require
significant amounts of training data to generalize over scale and viewpoint
variations. A different approach to this problem is to identify whether a
person has touched a defined object. In this work, we show that the solution to
this problem can be straightforward. Specifically, we show that the contact
between an object and a static surface can be identified by projecting the
object onto the static surface through two different viewpoints and analyzing
their 2D intersection. The object contacts the surface when the projected
points are close to each other; we call this cross-view projection consistency.
Instead of doing 3D scene reconstruction or transfer learning from deep
networks, a mapping from the surface in the two camera views to the surface
space is the only requirement. For planar space, this mapping is the Homography
transformation. This simple method can be easily adapted to real-life
applications. In this paper, we apply our method to office occupancy
detection for studying the COVID-19 transmission pattern from an office desk in
a meeting room using the contact information. | [
"cs.CV",
"eess.IV"
] |
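The cross-view projection consistency test can be sketched in a few lines; the
sketch below assumes the two camera-to-surface homographies H1 and H2 are
known and the surface is planar, and the pixel tolerance is an illustrative
parameter, not a value from the paper.

import numpy as np

def to_surface(H, pt):
    # Map an image point (x, y) to the planar surface via homography H.
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]

def touches_surface(H1, H2, pt_view1, pt_view2, tol=5.0):
    # Project the same object point from two views onto the surface;
    # contact is declared when the two projections nearly coincide.
    q1, q2 = to_surface(H1, pt_view1), to_surface(H2, pt_view2)
    return np.linalg.norm(q1 - q2) < tol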
A wide range of reinforcement learning (RL) problems - including robustness,
transfer learning, unsupervised RL, and emergent complexity - require
specifying a distribution of tasks or environments in which a policy will be
trained. However, creating a useful distribution of environments is error
prone, and takes a significant amount of developer time and effort. We propose
Unsupervised Environment Design (UED) as an alternative paradigm, where
developers provide environments with unknown parameters, and these parameters
are used to automatically produce a distribution over valid, solvable
environments. Existing approaches to automatically generating environments
suffer from common failure modes: domain randomization cannot generate
structure or adapt the difficulty of the environment to the agent's learning
progress, and minimax adversarial training leads to worst-case environments
that are often unsolvable. To generate structured, solvable environments for
our protagonist agent, we introduce a second, antagonist agent that is allied
with the environment-generating adversary. The adversary is motivated to
generate environments which maximize regret, defined as the difference between
the protagonist and antagonist agent's return. We call our technique
Protagonist Antagonist Induced Regret Environment Design (PAIRED). Our
experiments demonstrate that PAIRED produces a natural curriculum of
increasingly complex environments, and PAIRED agents achieve higher zero-shot
transfer performance when tested in highly novel environments. | [
"cs.LG",
"cs.AI",
"cs.MA"
] |
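A highly simplified sketch of the training loop implied by the abstract is
given below; all object names and update interfaces are illustrative, and only
the regret definition (antagonist return minus protagonist return, which the
adversary maximizes) is taken from the text.

def paired_step(adversary, protagonist, antagonist, make_env, rollout):
    # The adversary proposes environment parameters; the resulting
    # environment is solvable whenever the allied antagonist solves it.
    env = make_env(adversary.propose())
    regret = rollout(antagonist, env) - rollout(protagonist, env)
    adversary.update(reward=regret)   # maximizes regret
    protagonist.train_on(env)         # learns to shrink the gap
    antagonist.train_on(env)          # maximizes its own return
    return regret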
This paper presents a modular lightweight network model for detecting road
objects, such as cars, pedestrians and cyclists, especially when they are far
away from the camera and their sizes are small. Great advances have been made
in deep networks, but small-object detection is still a challenging task. In
order to solve this problem, the majority of existing methods utilize
complicated networks or larger image sizes, which generally leads to higher
computation cost. The proposed network model is referred to as the modular
feature fusion detector (MFFD); it uses a fast and efficient network
architecture for detecting small objects. The contribution lies in the
following aspects: 1) Two base modules have been designed for efficient
computation: the Front module reduces the information loss from raw input
images, and the Tinier module decreases model size and computation cost while
ensuring detection accuracy. 2) By stacking the base modules, we design a
context feature fusion framework for multi-scale object detection. 3) The
proposed method is efficient in terms of model size and computation cost,
making it applicable for resource-limited devices, such as embedded systems
for advanced driver assistance systems (ADAS). Comparisons with
state-of-the-art methods on the challenging KITTI dataset reveal the
superiority of the proposed method. In particular, 100 fps can be achieved on
embedded GPUs such as the Jetson TX2. | [
"cs.CV"
] |
Graph Neural Networks (GNNs) have demonstrated superior performance in many
challenging applications, including few-shot learning tasks. Despite their
powerful capacity to learn and generalize from few samples, GNNs usually
suffer from severe over-fitting and over-smoothing as the model becomes deep,
which limits their scalability. In this work, we propose a novel
Attentive GNN to tackle these challenges, by incorporating a triple-attention
mechanism, i.e. node self-attention, neighborhood attention, and layer memory
attention. We explain why the proposed attentive modules can improve GNN for
few-shot learning with theoretical analysis and illustrations. Extensive
experiments show that the proposed Attentive GNN model achieves promising
results compared to state-of-the-art GNN- and CNN-based methods for few-shot
learning tasks on the mini-ImageNet and tiered-ImageNet benchmarks, under
ConvNet-4 and ResNet-based backbones with both inductive and transductive
settings. The codes will be made publicly available. | [
"cs.LG",
"stat.ML"
] |
Reasoning about graphs evolving over time is a challenging concept in many
domains, such as bioinformatics, physics, and social networks. We consider a
common case in which edges can be short term interactions (e.g., messaging) or
long term structural connections (e.g., friendship). In practice, long term
edges are often specified by humans. Human-specified edges can be both
expensive to produce and suboptimal for the downstream task. To alleviate these
issues, we propose a model based on temporal point processes and variational
autoencoders that learns to infer temporal attention between nodes by observing
node communication. As temporal attention drives between-node feature
propagation, using the dynamics of node interactions to learn this key
component provides more flexibility while simultaneously avoiding issues
associated with human-specified edges. We also propose a bilinear
transformation layer for pairs of node features instead of concatenation,
typically used in prior work, and demonstrate its superior performance in all
cases. In experiments on two datasets in the dynamic link prediction task, our
model often outperforms the baseline model that requires a human-specified
graph. Moreover, our learned attention is semantically interpretable and infers
connections similar to actual graphs. | [
"stat.ML",
"cs.AI",
"cs.LG"
] |
In this paper, we propose a neuro-symbolic framework called weighted Signal
Temporal Logic Neural Network (wSTL-NN) that combines the characteristics of
neural networks and temporal logics. Weighted Signal Temporal Logic (wSTL)
formulas are recursively composed of subformulas that are combined using
logical and temporal operators. The quantitative semantics of wSTL is defined
such that the quantitative satisfaction of subformulas with higher weights has
more influence on the quantitative satisfaction of the overall wSTL formula. In
the wSTL-NN, each neuron corresponds to a wSTL subformula, and its output
corresponds to the quantitative satisfaction of the formula. We use wSTL-NN to
represent wSTL formulas as features to classify time series data. STL features
are more explainable than those used in classical methods. The wSTL-NN is
end-to-end differentiable, which allows learning of wSTL formulas to be done
using back-propagation. To reduce the number of weights, we introduce two
techniques to sparsify the wSTL-NN. We apply our framework to an occupancy
detection time-series dataset to learn a classifier that predicts the occupancy
status of an office room. | [
"cs.LG",
"cs.NE"
] |
In ordinary Dimensionality Reduction (DR), each data instance in an
m-dimensional space (original space) is mapped to one point in a d-dimensional
space (visual space), preserving as much as possible distances and/or
neighborhood relationships. Despite their popularity, even for simple datasets,
the existing DR techniques may unavoidably produce misleading visual
representations. The problem is not with the existing solutions but with the
problem formulation. For a two-dimensional visual space, if data instances are
not co-planar or do not lie on a 2D manifold, there is no solution for the
problem, and the possible approximations usually result in layouts with
inaccuracies in the distance preservation and overlapped neighborhoods. In this
paper, we elaborate on the concept of Multi-point Dimensionality Reduction
where each data instance can be mapped to possibly more than one point in the
visual space by providing the first general solution to it as a step toward
mitigating this issue. By duplicating points, background information is added
to the visual representation making local neighborhoods in the visual space
more faithful to the original space. Our solution, named Red Gray Plus, is
built upon and extends a combination of ordinary DR and graph drawing
techniques. We show not only that Multi-point Dimensionality Reduction is a
potential direction for improving the reliability of DR layouts, but also that
our initial solution to the problem quantitatively outperforms popular
ordinary DR methods. | [
"cs.CV"
] |
Graph neural networks (GNNs) achieve remarkable success in graph-based
semi-supervised node classification, leveraging the information from
neighboring nodes to improve the representation learning of the target node. The
success of GNNs at node classification depends on the assumption that connected
nodes tend to have the same label. However, such an assumption does not always
work, limiting the performance of GNNs at node classification. In this paper,
we propose a label-consistency based graph neural network (LC-GNN), leveraging
node pairs unconnected but with the same labels to enlarge the receptive field
of nodes in GNNs. Experiments on benchmark datasets demonstrate the proposed
LC-GNN outperforms traditional GNNs in graph-based semi-supervised node
classification. We further show the superiority of LC-GNN in sparse scenarios
with only a handful of labeled nodes. | [
"cs.LG",
"cs.SI",
"stat.ML"
] |
Reasoning is an important ability that we learn from a very early age. Yet,
reasoning is extremely hard for algorithms. Despite impressive recent progress
that has been reported on tasks that necessitate reasoning, such as visual
question answering and visual dialog, models often exploit biases in datasets.
To develop models with better reasoning abilities, recently, the new visual
commonsense reasoning (VCR) task has been introduced. Not only do models have
to answer questions, but they also have to provide a reason for the given
answer. The proposed baseline achieved compelling results, leveraging a
meticulously designed model composed of LSTM modules and attention nets. Here
we show that a much simpler model obtained by ablating and pruning the existing
intricate baseline can perform better with half the number of trainable
parameters. By associating visual features with attribute information and
better text-to-image grounding, we obtain further improvements for our simpler
and effective baseline, TAB-VCR. We show that this approach results in a 5.3%,
4.4% and 6.5% absolute improvement over the previous state-of-the-art on
question answering, answer justification and holistic VCR. | [
"cs.CV",
"cs.CL",
"cs.LG"
] |
Network representation learning, as an approach to learn low dimensional
representations of vertices, has attracted considerable research attention
recently. It has been proven extremely useful in many machine learning tasks
over large graphs. Most existing methods focus on learning the structural
representations of vertices in a static network, but cannot guarantee an
accurate and efficient embedding in a dynamic network scenario. To address this
issue, we present an efficient incremental skip-gram algorithm with negative
sampling for dynamic network embedding, and provide a set of theoretical
analyses to characterize the performance guarantee. Specifically, we first
partition a dynamic network into the updated, including addition/deletion of
links and vertices, and the retained networks over time. Then we factorize the
objective function of network embedding into the added, vanished and retained
parts of the network. Next we provide a new stochastic gradient-based method,
guided by the partitions of the network, to update the nodes and the parameter
vectors. The proposed algorithm is proven to yield an objective function value
with a bounded difference to that of the original objective function.
Experimental results show that our proposal can significantly reduce the
training time while preserving comparable performance. We also demonstrate
the correctness of the theoretical analysis and the practical usefulness of the
dynamic network embedding. We perform extensive experiments on multiple
real-world large network datasets over multi-label classification and link
prediction tasks to evaluate the effectiveness and efficiency of the proposed
framework, and up to 22 times speedup has been achieved. | [
"cs.LG",
"cs.SI",
"stat.ML"
] |
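For concreteness, one skip-gram-with-negative-sampling gradient step is
sketched below in NumPy; the incremental algorithm in the abstract restricts
such updates (and their negative samples) to the added and vanished parts of
the network, which this fragment does not show.

import numpy as np

def sgns_update(u, v, label, lr=0.025):
    # u: target-vertex vector, v: context-vertex vector,
    # label: 1 for an observed pair, 0 for a negative sample.
    score = 1.0 / (1.0 + np.exp(-(u @ v)))  # sigmoid of the inner product
    g = lr * (label - score)
    du, dv = g * v, g * u                   # compute both gradients first
    u += du
    v += dv
    return u, v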
Deep reinforcement learning has recently made significant progress in solving
computer games and robotic control tasks. A known problem, though, is that
policies overfit to the training environment and may not avoid rare,
catastrophic events such as automotive accidents. A classical technique for
improving the robustness of reinforcement learning algorithms is to train on a
set of randomized environments, but this approach only guards against common
situations. Recently, robust adversarial reinforcement learning (RARL) was
developed, which allows efficient applications of random and systematic
perturbations by a trained adversary. A limitation of RARL is that only the
expected control objective is optimized; there is no explicit modeling or
optimization of risk. Thus the agents do not consider the probability of
catastrophic events (i.e., those inducing abnormally large negative reward),
except through their effect on the expected objective. In this paper we
introduce risk-averse robust adversarial reinforcement learning (RARARL), using
a risk-averse protagonist and a risk-seeking adversary. We test our approach on
a self-driving vehicle controller. We use an ensemble of policy networks to
model risk as the variance of value functions. We show through experiments that
a risk-averse agent is better equipped to handle a risk-seeking adversary, and
experiences substantially fewer crashes compared to agents trained without an
adversary. | [
"cs.LG",
"cs.AI",
"cs.RO"
] |
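The ensemble-variance risk model lends itself to a one-line value adjustment;
the sketch below uses PyTorch, and the trade-off coefficient is an assumed
hyperparameter.

import torch

def risk_adjusted_values(q_values, risk_coef=1.0, risk_averse=True):
    # q_values: (ensemble_size, num_actions) estimates from the ensemble.
    # Risk is modeled as the variance of the value functions, per the
    # abstract; the sign of the penalty separates the two agents.
    mean, var = q_values.mean(dim=0), q_values.var(dim=0)
    sign = -1.0 if risk_averse else 1.0  # protagonist vs. risk-seeking adversary
    return mean + sign * risk_coef * var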
Suppose that one particular block in a stochastic block model is of interest,
but block labels are only observed for a few of the vertices in the network.
Utilizing a graph realized from the model and the observed block labels, the
vertex nomination task is to order the vertices with unobserved block labels
into a ranked nomination list with the goal of having an abundance of
interesting vertices near the top of the list. There are vertex nomination
schemes in the literature, including the optimally precise canonical nomination
scheme $\mathcal{L}^C$ and the consistent spectral partitioning nomination
scheme $\mathcal{L}^P$. While the canonical nomination scheme $\mathcal{L}^C$
is provably optimally precise, it is computationally intractable, being
impractical to implement even on modestly sized graphs. With this in mind, an
approximation of the canonical scheme, denoted the canonical sampling
nomination scheme $\mathcal{L}^{CS}$, is introduced; $\mathcal{L}^{CS}$
relies on a scalable, Markov chain Monte Carlo-based approximation of
$\mathcal{L}^{C}$, and converges to $\mathcal{L}^{C}$ as the amount of sampling
goes to infinity. The spectral partitioning nomination scheme is also extended
to the extended spectral partitioning nomination scheme $\mathcal{L}^{EP}$,
which introduces a novel semisupervised clustering framework to improve upon
the precision of $\mathcal{L}^P$. Real-data and simulation experiments are
employed to illustrate the precision of these vertex nomination schemes, as
well as their empirical computational complexity. Keywords: vertex nomination,
Markov chain Monte Carlo, spectral partitioning, Mclust. MSC[2010]: 60J22,
65C40, 62H30, 62H25 | [
"stat.ML"
] |
The success of minimax learning problems of generative adversarial networks
(GANs) has been observed to depend on the minimax optimization algorithm used
for their training. This dependence is commonly attributed to the convergence
speed and robustness properties of the underlying optimization algorithm. In
this paper, we show that the optimization algorithm also plays a key role in
the generalization performance of the trained minimax model. To this end, we
analyze the generalization properties of standard gradient descent ascent (GDA)
and proximal point method (PPM) algorithms through the lens of algorithmic
stability under both convex concave and non-convex non-concave minimax
settings. While the GDA algorithm is not guaranteed to have a vanishing excess
risk in convex concave problems, we show the PPM algorithm enjoys a bounded
excess risk in the same setup. For non-convex non-concave problems, we compare
the generalization performance of stochastic GDA and GDmax algorithms where the
latter fully solves the maximization subproblem at every iteration. Our
generalization analysis suggests the superiority of GDA provided that the
minimization and maximization subproblems are solved simultaneously with
similar learning rates. We discuss several numerical results indicating the
role of optimization algorithms in the generalization of the learned minimax
models. | [
"cs.LG",
"math.OC",
"stat.ML"
] |
Distributed learning has become a hot research topic, due to its wide
application in cluster-based large-scale learning, federated learning, edge
computing and so on. Most distributed learning methods assume no error or
attack on the workers. However, many unexpected cases, such as communication
error and even malicious attack, may happen in real applications. Hence,
Byzantine learning (BL), which refers to distributed learning with attack or
error, has recently attracted much attention. Most existing BL methods are
synchronous, which will result in slow convergence when there exist
heterogeneous workers. Furthermore, in some applications like federated
learning and edge computing, synchronization cannot even be performed most of
the time due to the online workers (clients or edge servers). Hence,
asynchronous BL (ABL) is more general and practical than synchronous BL (SBL).
To the best of our knowledge, there exist only two ABL methods. One of them
cannot resist malicious attacks. The other needs to store some training
instances on the server, which leads to a privacy leakage problem. In this paper, we
propose a novel method, called buffered asynchronous stochastic gradient
descent (BASGD), for BL. BASGD is an asynchronous method. Furthermore, BASGD
has no need to store any training instances on the server, and hence can
preserve privacy in ABL. BASGD is theoretically proven to be able to resist
error and malicious attacks. Moreover, BASGD has a similar
theoretical convergence rate to that of vanilla asynchronous SGD (ASGD), with
an extra constant variance. Empirical results show that BASGD can significantly
outperform vanilla ASGD and other ABL baselines, when there exists error or
attack on workers. | [
"cs.LG",
"stat.ML"
] |
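A minimal sketch of the buffering idea follows; the worker-to-buffer mapping
and the coordinate-wise median aggregate are plausible choices for
illustration, not necessarily the exact rules in BASGD.

import torch

class BufferedAggregator:
    def __init__(self, num_buffers, dim):
        self.sums = torch.zeros(num_buffers, dim)
        self.counts = torch.zeros(num_buffers)

    def push(self, worker_id, grad):
        # Asynchronously arriving gradients accumulate into buffers;
        # no training instances ever reach the server.
        b = worker_id % len(self.counts)
        self.sums[b] += grad
        self.counts[b] += 1

    def ready(self):
        return bool((self.counts > 0).all())

    def aggregate(self):
        # A robust aggregate over buffer means resists corrupted workers;
        # call only when ready() is True.
        means = self.sums / self.counts.unsqueeze(1)
        agg = means.median(dim=0).values
        self.sums.zero_()
        self.counts.zero_()
        return agg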
For a machine learning task, lacking sufficient samples means the trained
model has low confidence in approaching the ground-truth function. Since
generative adversarial networks (GANs) were proposed, we have seen hope for
small-sample data augmentation (DA) with realistic fake data, and many works
have validated the viability of GAN-based DA. Although most of these works
report that higher accuracy can be achieved using GAN-based DA, some
researchers stress that fake data generated by a GAN carries an inherent bias.
In this paper, we explore when this bias is low enough not to hurt
performance. We set up experiments to characterize the bias in different
GAN-based DA settings and, from the results, design a pipeline to inspect
whether a specific dataset can be efficiently augmented with GAN-based DA.
Finally, based on our attempts to reduce the bias, we propose advice for
mitigating bias in GAN-based DA applications. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
Neural networks are known to be vulnerable to carefully crafted adversarial
examples, and these malicious samples often transfer, i.e., they maintain their
effectiveness even against other models. While great effort has been devoted
to the transferability of adversarial examples, surprisingly, less attention has been
paid to its impact on real-world deep learning deployment. In this paper, we
investigate the transferability of adversarial examples across a wide range of
real-world computer vision tasks, including image classification, explicit
content detection, optical character recognition (OCR), and object detection.
It represents the cybercriminal's situation where an ensemble of different
detection mechanisms need to be evaded all at once. We propose a practical
attack that overcomes existing attacks' limitation of requiring task-specific
loss functions by targeting the `dispersion' of an internal feature map. We report
evaluation on four different computer vision tasks provided by Google Cloud
Vision APIs to show how our approach outperforms existing attacks by degrading
the performance of multiple CV tasks by a large margin with only modest
perturbations. | [
"cs.LG",
"cs.CR"
] |
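The task-agnostic objective can be sketched in a few lines; here, features
stands for any callable returning an internal feature map of a surrogate
model, and the step size and perturbation budget are illustrative.

import torch

def dispersion_attack_step(x_adv, features, x_clean, step=1e-2, eps=8/255):
    # Reducing the standard deviation ('dispersion') of an internal
    # feature map degrades many downstream CV tasks at once, with no
    # task-specific loss function.
    x_adv = x_adv.clone().detach().requires_grad_(True)
    loss = features(x_adv).std()
    loss.backward()
    with torch.no_grad():
        x_adv = x_adv - step * x_adv.grad.sign()
        x_adv = x_clean + (x_adv - x_clean).clamp(-eps, eps)  # modest budget
    return x_adv.clamp(0, 1).detach()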
This is a brief technical note to clarify the state of lower bounds on regret
for reinforcement learning. In particular, this paper:
- Reproduces a lower bound on regret for reinforcement learning, similar to
the result of Theorem 5 in the journal UCRL2 paper (Jaksch et al 2010).
- Clarifies that the proposed proof of Theorem 6 in the REGAL paper (Bartlett
and Tewari 2009) does not hold using the standard techniques without further
work. We suggest that this result should instead be considered a conjecture as
it has no rigorous proof.
- Suggests that the conjectured lower bound given by (Bartlett and Tewari
2009) is incorrect and, in fact, it is possible to improve the scaling of the
upper bound to match the weaker lower bounds presented in this paper.
We hope that this note serves to clarify existing results in the field of
reinforcement learning and provides interesting motivation for future work. | [
"stat.ML",
"cs.LG"
] |
We present an improved method for symbolic regression that seeks to fit data
to formulas that are Pareto-optimal, in the sense of having the best accuracy
for a given complexity. It improves on the previous state-of-the-art by
typically being orders of magnitude more robust to noise and bad data, and
also by discovering many formulas that stumped previous methods. We develop a
method for discovering generalized symmetries (arbitrary modularity in the
computational graph of a formula) from gradient properties of a neural network
fit. We use normalizing flows to generalize our symbolic regression method to
probability distributions from which we only have samples, and employ
statistical hypothesis testing to accelerate robust brute-force search. | [
"cs.LG",
"cs.AI",
"cs.IT",
"math.IT",
"physics.comp-ph",
"stat.ML"
] |
Deep learning methods have made great progress in image restoration
with specific metrics (e.g., PSNR, SSIM). However, the perceptual quality of
the restored image is relatively subjective, and it is necessary for users to
control the reconstruction result according to personal preferences or image
characteristics, which cannot be done using existing deterministic networks.
This motivates us to carefully design a unified interactive framework for
general image restoration tasks. Under this framework, users can control
continuous transition of different objectives, e.g., the perception-distortion
trade-off of image super-resolution, the trade-off between noise reduction and
detail preservation. We achieve this goal by controlling the latent features of
the designed network. To be specific, our proposed framework, named
Controllable Feature Space Network (CFSNet), consists of two entangled branches
based on different objectives. Our framework can adaptively learn the coupling
coefficients of different layers and channels, which provides finer control of
the restored image quality. Experiments on several typical image restoration
tasks fully validate the effective benefits of the proposed method. Code is
available at https://github.com/qibao77/CFSNet. | [
"cs.CV"
] |
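The controllable coupling can be pictured as below; the mapping from the
user's control scalar to per-channel coefficients is an illustrative stand-in
for the learned coupling in CFSNet, not the released implementation.

import torch
import torch.nn as nn

class CoupledBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.to_coef = nn.Linear(1, channels)  # control scalar -> coefficients

    def forward(self, f_main, f_tune, alpha):
        # alpha in [0, 1] selects the objective trade-off (e.g., noise
        # reduction vs. detail preservation); the coefficients blend the
        # latent features of the two branches per channel.
        coef = torch.sigmoid(self.to_coef(alpha.view(1, 1))).view(1, -1, 1, 1)
        return coef * f_main + (1.0 - coef) * f_tune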
We show how an ensemble of $Q^*$-functions can be leveraged for more
effective exploration in deep reinforcement learning. We build on well
established algorithms from the bandit setting, and adapt them to the
$Q$-learning setting. We propose an exploration strategy based on
upper-confidence bounds (UCB). Our experiments show significant gains on the
Atari benchmark. | [
"cs.LG",
"stat.ML"
] |
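The strategy amounts to a one-line action rule; the bonus weight in this
sketch is an assumed hyperparameter.

import torch

def ucb_action(q_ensemble, state, beta=1.0):
    # Stack per-member Q-value estimates, shape (ensemble, num_actions),
    # and act greedily on mean plus an ensemble-spread exploration bonus.
    qs = torch.stack([q(state) for q in q_ensemble])
    score = qs.mean(dim=0) + beta * qs.std(dim=0)
    return int(score.argmax())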
Person Re-Identification is an important problem in computer vision-based
surveillance applications, in which one attempts to identify the same person
across surveillance photographs from a variety of nearby zones. At
present, the majority of Person re-ID techniques are based on Convolutional
Neural Networks (CNNs), but Vision Transformers are beginning to displace pure
CNNs for a variety of object recognition tasks. The primary output of a vision
transformer is a global classification token, but vision transformers also
yield local tokens which contain additional information about local regions of
the image. Techniques to make use of these local tokens to improve
classification accuracy are an active area of research. We propose a novel
Locally Aware Transformer (LA-Transformer) that employs a Parts-based
Convolution Baseline (PCB)-inspired strategy for aggregating globally enhanced
local classification tokens into an ensemble of $\sqrt{N}$ classifiers, where
$N$ is the number of patches. An additional novelty is that we incorporate
blockwise fine-tuning which further improves re-ID accuracy. LA-Transformer
with blockwise fine-tuning achieves rank-1 accuracy of $98.27 \%$ with standard
deviation of $0.13$ on the Market-1501 and $98.7\%$ with standard deviation of
$0.2$ on the CUHK03 dataset respectively, outperforming all other
state-of-the-art published methods at the time of writing. | [
"cs.CV"
] |
Recognizing wild faces is extremely hard as they appear with all kinds of
variations. Traditional methods either train with specifically annotated
variation data from target domains, or by introducing unlabeled target
variation data to adapt from the training data. Instead, we propose a universal
representation learning framework that can deal with larger variation unseen in
the given training data without leveraging target domain knowledge. We firstly
synthesize training data alongside some semantically meaningful variations,
such as low resolution, occlusion and head pose. However, directly feeding the
augmented data for training will not converge well as the newly introduced
samples are mostly hard examples. We propose to split the feature embedding
into multiple sub-embeddings, and associate different confidence values for
each sub-embedding to smooth the training procedure. The sub-embeddings are
further decorrelated by regularizing variation classification loss and
variation adversarial loss on different partitions of them. Experiments show
that our method achieves top performance on general face recognition datasets
such as LFW and MegaFace, while performing significantly better on extreme benchmarks such
as TinyFace and IJB-S. | [
"cs.CV"
] |
We propose a new algorithm for color transfer between images that have
perceptually similar semantic structures. We aim to achieve a more accurate
color transfer that leverages semantically-meaningful dense correspondence
between images. To accomplish this, our algorithm uses neural representations
for matching. Additionally, the color transfer should be spatially variant and
globally coherent. Therefore, our algorithm optimizes a local linear model for
color transfer satisfying both local and global constraints. Our proposed
approach jointly optimizes matching and color transfer, adopting a
coarse-to-fine strategy. The proposed method can be successfully extended from
one-to-one to one-to-many color transfer. The latter further addresses the
problem of mismatching elements of the input image. We validate our proposed
method by testing it on a large variety of image content. | [
"cs.CV"
] |
Temporal point processes (TPP) are probabilistic generative models for
continuous-time event sequences. Neural TPPs combine the fundamental ideas from
point process literature with deep learning approaches, thus enabling
construction of flexible and efficient models. The topic of neural TPPs has
attracted significant attention in recent years, leading to the development
of numerous new architectures and applications for this class of models. In
this review paper we aim to consolidate the existing body of knowledge on
neural TPPs. Specifically, we focus on important design choices and general
principles for defining neural TPP models. Next, we provide an overview of
application areas commonly considered in the literature. We conclude this
survey with the list of open challenges and important directions for future
work in the field of neural TPPs. | [
"cs.LG"
] |
Natural language explanations of deep neural network decisions provide an
intuitive way for an AI agent to articulate a reasoning process. Current textual
explanations learn to discuss class discriminative features in an image.
However, it is also helpful to understand which attributes might change a
classification decision if present in an image (e.g., "This is not a Scarlet
Tanager because it does not have black wings.") We call such textual
explanations counterfactual explanations, and propose an intuitive method to
generate counterfactual explanations by inspecting which evidence in an input
is missing, but might contribute to a different classification decision if
present in the image. To demonstrate our method we consider a fine-grained
image classification task in which we take as input an image and a
counterfactual class and output text which explains why the image does not
belong to a counterfactual class. We then analyze our generated counterfactual
explanations both qualitatively and quantitatively using proposed automatic
metrics. | [
"cs.CV"
] |
With the advent of state-of-the-art, nature-inspired, pure attention based
models, i.e. transformers, and their success in natural language processing
(NLP), their extension to machine vision (MV) tasks was inevitable and much
anticipated. Subsequently, vision transformers (ViTs) were introduced and are
posing quite a challenge to established deep learning based machine vision
techniques. However, pure attention based models/architectures like
transformers require huge amounts of data, long training times and large
computational resources. Some recent works suggest that combinations of these
two varied fields can build systems that have the advantages of both.
Accordingly, this state-of-the-art survey paper is introduced, which will
hopefully help readers obtain useful information about this interesting and
promising research area. A gentle introduction to attention mechanisms is
given, followed by a discussion of the popular attention based deep
architectures. Subsequently, the major categories of the intersection of
attention mechanisms and deep learning for machine vision (MV) are discussed.
Afterwards, the major algorithms, issues and trends within the scope of the
paper are discussed. | [
"cs.CV",
"cs.AI",
"cs.LG"
] |
An Axial Shifted MLP architecture (AS-MLP) is proposed in this paper.
Different from MLP-Mixer, where the global spatial feature is encoded for the
information flow through matrix transposition and one token-mixing MLP, we pay
more attention to the local features communication. By axially shifting
channels of the feature map, AS-MLP is able to obtain the information flow from
different axial directions, which captures the local dependencies. Such an
operation enables us to utilize a pure MLP architecture to achieve the same
local receptive field as CNN-like architecture. We can also design the
receptive field size and dilation of AS-MLP blocks, etc., just like designing
those of convolution kernels. With the proposed AS-MLP architecture, our model
obtains 83.3% Top-1 accuracy with 88M parameters and 15.2 GFLOPs on the
ImageNet-1K dataset. Such a simple yet effective architecture outperforms all
MLP-based architectures and achieves competitive performance compared to the
transformer-based architectures (e.g., Swin Transformer) even with slightly
lower FLOPs. In addition, AS-MLP is also the first MLP-based architecture to be
applied to downstream tasks (e.g., object detection and semantic
segmentation). The experimental results are also impressive. Our proposed
AS-MLP obtains 51.5 mAP on the COCO validation set and 49.5 MS mIoU on the
ADE20K dataset, which is competitive compared to the transformer-based
architectures. Code is available at https://github.com/svip-lab/AS-MLP. | [
"cs.CV"
] |
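The axial shift operation itself is simple to sketch with torch.roll; the
group count and offsets here are illustrative rather than the paper's exact
configuration.

import torch

def axial_shift(x, dim, shift_size=3):
    # x: (B, C, H, W). Split channels into groups and shift each group
    # along one spatial axis (dim=2 vertical, dim=3 horizontal), so the
    # following channel-mixing MLP sees a local neighborhood, giving a
    # CNN-like local receptive field from a pure MLP.
    chunks = x.chunk(shift_size, dim=1)
    offsets = range(-(shift_size // 2), shift_size // 2 + 1)
    shifted = [torch.roll(c, o, dims=dim) for c, o in zip(chunks, offsets)]
    return torch.cat(shifted, dim=1)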
In many trajectory-based applications, it is necessary to map raw GPS
trajectories onto road networks in digital maps, which is commonly referred to
as a map-matching process. While most previous map-matching methods have
focused on using rule-based algorithms to deal with the map-matching problems,
in this paper, we consider the map-matching task from the data perspective,
proposing a deep learning-based map-matching model. We build a
Transformer-based map-matching model with a transfer learning approach. We
generate synthetic trajectory data to pre-train the Transformer model and then
fine-tune the model with a limited number of ground-truth data to minimize the
model development cost and reduce the real-to-virtual gap. Three metrics
(Average Hamming Distance, F-score, and BLEU) at two levels (point and segment
level) are used to evaluate the model performance. The results indicate that
the proposed model outperforms existing models. Furthermore, we use the
attention weights of the Transformer to plot the map-matching process and find
how the model matches the road segments correctly. | [
"cs.LG"
] |
Deep Neural Networks (DNNs) are known to be strong predictors, but their
prediction strategies can rarely be understood. With recent advances in
Explainable Artificial Intelligence, approaches are available to explore the
reasoning behind those complex models' predictions. One class of approaches are
post-hoc attribution methods, among which Layer-wise Relevance Propagation
(LRP) shows high performance. However, the attempt at understanding a DNN's
reasoning often stops at the attributions obtained for individual samples in
input space, leaving the potential for deeper quantitative analyses untouched.
As a manual analysis without the right tools is often unnecessarily labor
intensive, we introduce three software packages targeted at scientists to
explore model reasoning using attribution approaches and beyond: (1) Zennit - a
highly customizable and intuitive attribution framework implementing LRP and
related approaches in PyTorch, (2) CoRelAy - a framework to easily and quickly
construct quantitative analysis pipelines for dataset-wide analyses of
explanations, and (3) ViRelAy - a web-application to interactively explore
data, attributions, and analysis results. | [
"cs.LG"
] |
Recognizing every person's action in a crowded and cluttered environment is a
challenging task. In this paper, we propose a real-time action recognition
method, Action4D, which gives reliable and accurate results in real-world
settings. We propose to tackle the action recognition problem using a holistic
4D "scan" of a cluttered scene to include every detail about the people and
environment. Recognizing multiple people's actions in the cluttered 4D
representation is a new problem. In this paper, we propose novel methods to
solve this problem. We propose a new method to track people in 4D, which can
reliably detect and follow each person in real time. We propose a new deep
neural network, the Action4D-Net, to recognize the action of each tracked
person. The Action4D-Net's novel structure uses both the global feature and the
focused attention to achieve state-of-the-art result. Our real-time method is
invariant to camera view angles, resistant to clutter and able to handle crowds.
The experimental results show that the proposed method is fast, reliable and
accurate. Our method paves the way to action recognition in the real-world
applications and is ready to be deployed to enable smart homes, smart factories
and smart stores. | [
"cs.CV"
] |
The rapid growth of ride-hailing platforms has created a highly competitive
market where businesses struggle to make profits, demanding the need for better
operational strategies. However, real-world experiments are risky and expensive
for these platforms as they deal with millions of users daily. Thus, a need
arises for a simulated environment where they can predict users' reactions to
changes in the platform-specific parameters such as trip fares and incentives.
Building such a simulation is challenging, as these platforms exist within
dynamic environments where thousands of users regularly interact with one
another. This paper presents a framework to mimic and predict user,
specifically driver, behaviors in ride-hailing services. We use a data-driven
hybrid reinforcement learning and imitation learning approach for this. First,
the agent utilizes behavioral cloning to mimic driver behavior using a
real-world data set. Next, reinforcement learning is applied on top of the
pre-trained agents in a simulated environment, to allow them to adapt to
changes in the platform. Our framework provides an ideal playground for
ride-hailing platforms to experiment with platform-specific parameters to
predict drivers' behavioral patterns. | [
"cs.LG",
"cs.AI"
] |
Learning multi-modal representations is an essential step towards real-world
robotic applications, and various multi-modal fusion models have been developed
for this purpose. However, we observe that existing models, whose objectives
are mostly based on joint training, often suffer from learning inferior
representations of each modality. We name this problem Modality Failure, and
hypothesize that the imbalance of modalities and the implicit bias of common
objectives in fusion methods prevent the encoders of each modality from sufficient
feature learning. To this end, we propose a new multi-modal learning method,
Uni-Modal Teacher, which combines the fusion objective and uni-modal
distillation to tackle the modality failure problem. We show that our method
not only drastically improves the representation of each modality, but also
improves the overall multi-modal task performance. Our method can be
effectively generalized to most multi-modal fusion approaches. We achieve more
than 3% improvement on the VGGSound audio-visual classification task, as well
as improving performance on the NYU depth V2 RGB-D image segmentation task. | [
"cs.LG"
] |
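A rough sketch of combining the fusion objective with uni-modal distillation
is shown below; the distillation weight and the use of an MSE feature match
are assumptions for illustration.

import torch.nn.functional as F

def uni_modal_teacher_loss(fused_logits, labels, student_feats, teacher_feats, lam=1.0):
    # Joint fusion loss plus per-modality distillation toward features of
    # uni-modally pre-trained teachers, so no encoder is starved by
    # imbalanced joint training.
    loss = F.cross_entropy(fused_logits, labels)
    for s, t in zip(student_feats, teacher_feats):
        loss = loss + lam * F.mse_loss(s, t.detach())
    return loss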
Unsupervised representation learning has succeeded with excellent results in
many applications. It is an especially powerful tool to learn a good
representation of environments with partial or noisy observations. In partially
observable domains it is important for the representation to encode a belief
state, a sufficient statistic of the observations seen so far. In this paper,
we investigate whether it is possible to learn such a belief representation
using modern neural architectures. Specifically, we focus on one-step frame
prediction and two variants of contrastive predictive coding (CPC) as the
objective functions to learn the representations. To evaluate these learned
representations, we test how well they can predict various pieces of
information about the underlying state of the environment, e.g., position of
the agent in a 3D maze. We show that all three methods are able to learn belief
representations of the environment: they encode not only the state information,
but also its uncertainty, a crucial aspect of belief states. We also find that
for CPC multi-step predictions and action-conditioning are critical for
accurate belief representations in visually complex environments. The ability
of neural representations to capture the belief information has the potential
to spur new advances for learning and planning in partially observable domains,
where leveraging uncertainty is essential for optimal decision making. | [
"cs.LG",
"stat.ML"
] |
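For reference, a one-step CPC objective can be written as an InfoNCE
classification over a batch; the bilinear scoring matrix is the usual choice
and is an assumption here.

import torch
import torch.nn.functional as F

def info_nce(context, future, W):
    # context, future: (B, D) embeddings; W: (D, D) bilinear map.
    # Each context must identify its own future among the batch.
    logits = context @ W @ future.t()       # (B, B) similarity scores
    labels = torch.arange(context.size(0))  # positives on the diagonal
    return F.cross_entropy(logits, labels)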
Deeply-learned planning methods are often based on learning representations
that are optimized for unrelated tasks. For example, they might be trained on
reconstructing the environment. These representations are then combined with
predictor functions for simulating rollouts to navigate the environment. We
find this principle of learning representations unsatisfying and propose to
learn them such that they are directly optimized for the task at hand: to be
maximally predictable for the predictor function. This results in
representations that are by design optimal for the downstream task of planning,
where the learned predictor function is used as a forward model.
To this end, we propose a new way of jointly learning this representation
along with the prediction function, a system we dub Latent Representation
Prediction Network (LARP). The prediction function is used as a forward model
for search on a graph in a viewpoint-matching task and the representation
learned to maximize predictability is found to outperform a pre-trained
representation. Our approach is shown to be more sample-efficient than standard
reinforcement learning methods and our learned representation transfers
successfully to dissimilar objects. | [
"cs.LG",
"cs.AI",
"cs.NE",
"stat.ML"
] |
Despite their success, generative adversarial networks (GANs) still suffer from
mode collapse, namely the generator can only map latent variables to a partial
set of modes of the target distribution. In this paper, we analyze and try to
regularize this issue with an independent and identically distributed (IID)
sampling perspective and emphasize that holding the IID property for generation
in target space (i.e. real data) can naturally avoid mode collapse. This is
based on the basic IID assumption for real data in machine learning. However,
though the source samples $\mathbf{z}$ obey IID, the target generation
$G(\mathbf{z})$ may not necessarily be IID. Based on this observation, we
provide a new loss to encourage the closeness between the inverse source from
generation, and a standard Gaussian distribution in the latent space, as a way
of regularizing the generation to be IID. The logic is that the inverse samples
back from target data should also be IID for source distribution. Experiments
on both synthetic and real-world data show the superiority and robustness of
our model. | [
"cs.LG"
] |
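One simple way to realize the proposed regularizer is to match the first two
moments of the inverted latents to a standard Gaussian, as sketched below; the
paper's exact loss may differ.

import torch

def iid_regularizer(z_inverse):
    # z_inverse: latents recovered by inverting the generator on target
    # data; push them toward N(0, I) so generation behaves IID in the
    # target space.
    mean = z_inverse.mean(dim=0)
    var = z_inverse.var(dim=0)
    return (mean ** 2).sum() + ((var - 1.0) ** 2).sum()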
While Transformer architectures have shown remarkable success, they are bound
to the computation of all pairwise interactions of input elements and thus
suffer from limited scalability. Recent work has been successful by avoiding
the computation of the complete attention matrix, yet leads to problems down
the line. The absence of an explicit attention matrix makes the inclusion of
inductive biases relying on relative interactions between elements more
challenging. An extremely powerful inductive bias is translational
equivariance, which has been conjectured to be responsible for much of the
success of Convolutional Neural Networks on image recognition tasks. In this
work we show how translational equivariance can be implemented in efficient
Transformers based on kernelizable attention - Performers. Our experiments
highlight that the devised approach significantly improves robustness of
Performers to shifts of input images compared to their naive application. This
represents an important step on the path of replacing Convolutional Neural
Networks with more expressive Transformer architectures and will help to
improve sample efficiency and robustness in this realm. | [
"cs.LG",
"cs.CV"
] |
Design of cyber-physical systems (CPSs) is a challenging task that involves
searching over a large search space of various CPS configurations and possible
values of components composing the system. Hence, there is a need for
sample-efficient CPS design space exploration to select the system architecture
and component values that meet the target system requirements. We address this
challenge by formulating CPS design as a multi-objective optimization problem
and propose DISPATCH, a two-step methodology for sample-efficient search over
the design space. First, we use a genetic algorithm to search over discrete
choices of system component values for architecture search and component
selection or only component selection and terminate the algorithm even before
meeting the system requirements, thus yielding a coarse design. In the second
step, we use an inverse design to search over a continuous space to fine-tune
the component values and meet the diverse set of system requirements. We use a
neural network as a surrogate function for the inverse design of the system.
The neural network, converted into a mixed-integer linear program, is used for
active learning to sample component values efficiently in a continuous search
space. We illustrate the efficacy of DISPATCH on electrical circuit benchmarks:
two-stage and three-stage transimpedance amplifiers. Simulation results show
that the proposed methodology improves sample efficiency by 5-14x compared to a
prior synthesis method that relies on reinforcement learning. It also
synthesizes circuits with the best performance (highest bandwidth/lowest area)
compared to designs synthesized using reinforcement learning, Bayesian
optimization, or humans. | [
"cs.LG",
"cs.NE",
"cs.SY",
"eess.SY"
] |
Table extraction is an important but still unsolved problem. In this paper,
we introduce a flexible end-to-end table extraction system. We develop two
rule-based algorithms that perform the complete table recognition process and
support the most frequent table formats found in the scientific literature.
Moreover, to incorporate the extraction of semantic information into the table
recognition process, we develop a graph-based table interpretation method. We
conduct extensive experiments on the challenging table recognition benchmarks
ICDAR 2013 and ICDAR 2019. Our table recognition approach achieves results
competitive with state-of-the-art approaches. Moreover, our complete
information extraction system exhibited a high F1 score of 0.7380, demonstrating the
utility of our approach. | [
"cs.CV"
] |
Policy optimization is a widely-used method in reinforcement learning. Due to
its local-search nature, however, theoretical guarantees on global optimality
often rely on extra assumptions on the Markov Decision Processes (MDPs) that
bypass the challenge of global exploration. To eliminate the need of such
assumptions, in this work, we develop a general solution that adds dilated
bonuses to the policy update to facilitate global exploration. To showcase the
power and generality of this technique, we apply it to several episodic MDP
settings with adversarial losses and bandit feedback, improving and
generalizing the state-of-the-art. Specifically, in the tabular case, we obtain
$\widetilde{\mathcal{O}}(\sqrt{T})$ regret where $T$ is the number of episodes,
improving the $\widetilde{\mathcal{O}}({T}^{2/3})$ regret bound by Shani et al.
(2020). When the number of states is infinite, under the assumption that the
state-action values are linear in some low-dimensional features, we obtain
$\widetilde{\mathcal{O}}({T}^{2/3})$ regret with the help of a simulator,
matching the result of Neu and Olkhovskaya (2020) while importantly removing
the need of an exploratory policy that their algorithm requires. When a
simulator is unavailable, we further consider a linear MDP setting and obtain
$\widetilde{\mathcal{O}}({T}^{14/15})$ regret, which is the first result for
linear MDPs with adversarial losses and bandit feedback. | [
"cs.LG",
"stat.ML"
] |
Bird's Eye View (BEV) is a popular representation for processing 3D point
clouds, and by its nature is fundamentally sparse. Motivated by the
computational limitations of mobile robot platforms, we take a fast,
high-performance BEV 3D object detector - PointPillars - and modify its
backbone to maintain and exploit this input sparsity, leading to decreased
runtimes. We present results on KITTI, a canonical 3D detection dataset, and
Matterport-Chair, a novel Matterport3D-derived chair detection dataset from
scenes in real furnished homes. We evaluate runtime characteristics using a
desktop GPU, an embedded ML accelerator, and a robot CPU, demonstrating that
our method results in significant runtime decreases (2x or more) for embedded
systems with only a modest decrease in detection quality. Our work represents a
new approach for practitioners to optimize models for embedded systems by
maintaining and exploiting input sparsity throughout their entire pipeline to
reduce runtime and resource usage while preserving detection performance. All
models, weights, experimental configurations, and datasets used are publicly
available. | [
"cs.CV",
"cs.LG"
] |
We introduce Video Transformer (VidTr) with separable-attention for video
classification. Compared with commonly used 3D networks, VidTr is able to
aggregate spatio-temporal information via stacked attentions and provide better
performance with higher efficiency. We first introduce the vanilla video
transformer and show that transformer module is able to perform spatio-temporal
modeling from raw pixels, but with heavy memory usage. We then present VidTr
which reduces the memory cost by 3.3$\times$ while keeping the same
performance. To further compact the model, we propose the standard deviation
based topK pooling attention, which reduces the computation by dropping
non-informative features. VidTr achieves state-of-the-art performance on five
commonly used datasets with lower computational requirements, showing both the
efficiency and effectiveness of our design. Finally, error analysis and
visualization show that VidTr is especially good at predicting actions that
require long-term temporal reasoning. The code and pre-trained weights will be
released. | [
"cs.CV"
] |
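The standard-deviation based topK pooling can be sketched directly; the token
shapes follow the usual layout and are assumptions here.

import torch

def std_topk_pool(tokens, k):
    # tokens: (B, N, D). Keep the k tokens whose features vary the most,
    # dropping non-informative ones to reduce computation.
    scores = tokens.std(dim=-1)                          # (B, N)
    idx = scores.topk(k, dim=1).indices                  # (B, k)
    idx = idx.unsqueeze(-1).expand(-1, -1, tokens.size(-1))
    return tokens.gather(1, idx)                         # (B, k, D)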
Program synthesis from natural language (NL) is practical for humans and,
once technically feasible, would significantly facilitate software development
and revolutionize end-user programming. We present SAPS, an end-to-end neural
network capable of mapping relatively complex, multi-sentence NL specifications
to snippets of executable code. The proposed architecture relies exclusively on
neural components, and is trained on abstract syntax trees, combined with a
pretrained word embedding and a bi-directional multi-layer LSTM for processing
of word sequences. The decoder features a doubly-recurrent LSTM, for which we
propose novel signal propagation schemes and soft attention mechanism. When
applied to a large dataset of problems proposed in a previous study, SAPS
performs on par with or better than the method proposed there, producing
correct programs in over 92% of cases. In contrast to other methods, it does
not require post-processing of the resulting programs, and uses a
fixed-dimensional latent representation as the only interface between the NL
analyzer and the source code generator. | [
"cs.LG",
"cs.AI",
"cs.PL",
"stat.ML"
] |
Despite success on a wide range of problems related to vision, generative
adversarial networks (GANs) often suffer from inferior performance due to
unstable training, especially for text generation. To solve this issue, we
propose a new variational GAN training framework which enjoys superior training
stability. Our approach is inspired by a connection of GANs and reinforcement
learning under a variational perspective. The connection leads to (1)
probability ratio clipping that regularizes generator training to prevent
excessively large updates, and (2) a sample re-weighting mechanism that
improves discriminator training by downplaying bad-quality fake samples.
Moreover, our variational GAN framework can provably overcome the training
issue in many GANs that an optimal discriminator cannot provide any informative
gradient to training generator. By plugging the training approach in diverse
state-of-the-art GAN architectures, we obtain significantly improved
performance over a range of tasks, including text generation, text style
transfer, and image generation. | [
"cs.LG",
"cs.CL",
"stat.ML"
] |
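Probability ratio clipping can be sketched in the familiar PPO style; the clip
range and the advantage signal (e.g., a discriminator-derived reward) are
illustrative.

import torch

def clipped_generator_loss(logp_new, logp_old, advantage, eps=0.2):
    # Cap how far the generator distribution moves in one update to
    # prevent excessively large steps that destabilize GAN training.
    ratio = torch.exp(logp_new - logp_old)
    unclipped = ratio * advantage
    clipped = ratio.clamp(1.0 - eps, 1.0 + eps) * advantage
    return -torch.min(unclipped, clipped).mean()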
Training a classifier under fairness constraints has received increasing
attention in the machine learning community thanks to moral, legal, and
business reasons. However, several recent works addressing algorithmic fairness
have only focused on simple models such as logistic regression or support
vector machines due to non-convex and non-differentiable fairness criteria
across protected groups, such as race or gender. Neural networks, the most
widely used models for classification nowadays, are thus precluded and lack
theoretical guarantees. This paper aims to fill this missing but crucial part
of the literature on algorithmic fairness for neural networks. In particular,
we show that overparameterized neural networks can meet the fairness
constraints. The key ingredient of building a fair neural network classifier is
establishing no-regret analysis for neural networks in the overparameterization
regime, which may be of independent interest in the online learning of neural
networks and related applications. | [
"stat.ML",
"cs.LG",
"math.OC"
] |
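To make the setting concrete, fairness constraints are typically relaxed into differentiable penalties when training neural networks; the demographic-parity gap below is one generic example of such a penalty, not this paper's formulation, and it assumes both groups appear in the batch.

import torch
import torch.nn.functional as F

def demographic_parity_penalty(logits, group):
    """Penalize the gap in mean positive-prediction rate between two
    protected groups (a soft relaxation of demographic parity)."""
    p = torch.sigmoid(logits)
    return (p[group == 0].mean() - p[group == 1].mean()).abs()

logits = torch.randn(32, requires_grad=True)
group = torch.randint(0, 2, (32,))             # protected attribute
targets = torch.randint(0, 2, (32,)).float()
loss = F.binary_cross_entropy_with_logits(logits, targets) \
       + 1.0 * demographic_parity_penalty(logits, group)
loss.backward()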
Large-scale recommender models find the most relevant items from huge catalogs,
and they play a critical role in modern search and recommendation systems. To
model the input space with large-vocab categorical features, a typical
recommender model learns a joint embedding space through neural networks for
both queries and items from user feedback data. However, with millions to
billions of items in the corpus, users tend to provide feedback for a very
small set of them, causing a power-law distribution. This makes the feedback
data for long-tail items extremely sparse.
Inspired by the recent success in self-supervised representation learning
research in both computer vision and natural language understanding, we propose
a multi-task self-supervised learning (SSL) framework for large-scale item
recommendations. The framework is designed to tackle the label sparsity problem
by learning better latent relationships among item features. Specifically, SSL
improves item representation learning and also serves as additional
regularization to improve generalization. Furthermore, we propose a novel data
augmentation method that utilizes feature correlations within the proposed
framework.
We evaluate our framework using two real-world datasets with 500M and 1B
training examples respectively. Our results demonstrate the effectiveness of
SSL regularization and show its superior performance over the state-of-the-art
regularization techniques. We have also launched the proposed
techniques to a web-scale commercial app-to-app recommendation system, with
significant improvements in top-tier business metrics demonstrated in A/B
experiments on live traffic. Our online results also verify our hypothesis that
our framework indeed improves model performance even more on slices that lack
supervision. | [
"cs.LG",
"cs.IR",
"stat.ML"
] |
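A minimal sketch of the SSL regularization idea above: two randomly feature-dropped views of the same items are encoded and aligned with an InfoNCE loss over in-batch negatives. The paper's augmentation exploits feature correlations; plain random dropout is used here for brevity, and all dimensions are assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

def ssl_item_loss(encoder, item_feats, drop=0.2, tau=0.1):
    """Contrastive auxiliary loss: matching views of the same item are
    positives, all other items in the batch serve as negatives."""
    def augment(x):
        return x * (torch.rand_like(x) > drop).float()

    z1 = F.normalize(encoder(augment(item_feats)), dim=-1)
    z2 = F.normalize(encoder(augment(item_feats)), dim=-1)
    logits = z1 @ z2.t() / tau                 # (B, B) similarities
    labels = torch.arange(z1.size(0))          # positives on the diagonal
    return F.cross_entropy(logits, labels)

encoder = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 32))
ssl_item_loss(encoder, torch.randn(16, 64)).backward()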
Saliency maps have been shown to be both useful and misleading for explaining
model predictions, especially in the context of images. In this paper, we
perform sanity checks for the text modality and show that the conclusions made for
images do not directly transfer to text. We also analyze the effects of the
input multiplier in certain saliency maps using similarity scores,
max-sensitivity and infidelity evaluation metrics. Our observations reveal that
the input multiplier carries the input's structural patterns in explanation maps,
thus leading to similar results regardless of the choice of model parameters.
We also show that the smoothness of a Neural Network (NN) function can affect
the quality of saliency-based explanations. Our investigations reveal that
replacing ReLUs with Softplus and MaxPool with smoother variants such as
LogSumExp (LSE) can lead to explanations that are more reliable based on the
infidelity evaluation metric. | [
"cs.LG",
"cs.AI"
] |
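The smoothing substitutions mentioned above, ReLU to Softplus and MaxPool to LogSumExp, can be sketched as follows; the pooling temperature t and the network layout are illustrative assumptions.

import torch
import torch.nn as nn

class LogSumExpPool2d(nn.Module):
    """Smooth replacement for MaxPool2d: (1/t) * log sum exp(t * x) over
    each pooling window, approaching max pooling as t grows."""
    def __init__(self, kernel_size, t=10.0):
        super().__init__()
        self.k, self.t = kernel_size, t

    def forward(self, x):
        b, c, h, w = x.shape
        patches = x.unfold(2, self.k, self.k).unfold(3, self.k, self.k)
        patches = patches.contiguous().view(b, c, h // self.k, w // self.k, -1)
        return torch.logsumexp(self.t * patches, dim=-1) / self.t

net = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1),
                    nn.Softplus(),            # in place of ReLU
                    LogSumExpPool2d(2))       # in place of MaxPool2d(2)
print(net(torch.randn(1, 3, 32, 32)).shape)   # torch.Size([1, 8, 16, 16])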
Main characters in images are the most important humans that catch the
viewer's attention at first glance, and they are emphasized by properties such
as size, position, color saturation, and sharpness of focus. Identifying the
main character in images plays an important role in traditional photographic
studies and media analysis, but the task is performed manually and can be slow
and laborious. Furthermore, the selection of main characters can sometimes be
subjective. In this paper, we analyze the feasibility of automatically solving
the main character recognition task needed for photographic studies and propose
a method for identifying the main characters. The proposed method uses machine
learning based human pose estimation along with traditional computer vision
approaches for this task. We approach the task as a binary classification
problem where each detected human is classified either as a main character or
not. To evaluate both the subjectivity of the task and the performance of our
method, we collected a dataset of 300 varying images from multiple sources and
asked five people, a photographic researcher and four others, to
annotate the main characters. Our analysis showed a relatively high agreement
between different annotators. The proposed method achieved a promising F1 score
of 0.83 on the full image set and 0.96 on a subset evaluated as most clear and
important cases by the photographic researcher. | [
"cs.CV",
"cs.LG"
] |
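A minimal sketch of the binary classification setup described above, using hand-picked emphasis cues (size, position, saturation, sharpness) and a logistic regression classifier; the features, values, and classifier choice are illustrative assumptions rather than the exact pipeline.

import numpy as np
from sklearn.linear_model import LogisticRegression

# per detected person: [relative bbox area, distance of bbox center from
# image center, mean color saturation, sharpness (variance of Laplacian)]
X = np.array([[0.30, 0.10, 0.70, 0.90],   # large, central, sharp
              [0.05, 0.45, 0.30, 0.20],   # small, peripheral, blurry
              [0.25, 0.15, 0.65, 0.80],
              [0.04, 0.40, 0.25, 0.15]])
y = np.array([1, 0, 1, 0])                # 1 = main character

clf = LogisticRegression().fit(X, y)
print(clf.predict([[0.28, 0.12, 0.68, 0.85]]))  # resembles the positives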
Deep reinforcement learning is successful in decision making for
sophisticated games such as Atari and Go. However, real-world decision
making often requires reasoning with partial information extracted from complex
visual observations. This paper presents Discriminative Particle Filter
Reinforcement Learning (DPFRL), a new reinforcement learning framework for
complex partial observations. DPFRL encodes a differentiable particle filter in
the neural network policy for explicit reasoning with partial observations over
time. The particle filter maintains a belief using a learned discriminative
update, which is trained end-to-end for decision making. We show that using the
discriminative update instead of standard generative models results in
significantly improved performance, especially for tasks with complex visual
observations, because it circumvents the difficulty of modeling complex
observations that are irrelevant to decision making. In addition, to extract
features from the particle belief, we propose a new type of belief feature
based on the moment generating function. DPFRL outperforms state-of-the-art
POMDP RL models in Flickering Atari Games, an existing POMDP RL benchmark, and
in Natural Flickering Atari Games, a new, more challenging POMDP RL benchmark
introduced in this paper. Further, DPFRL performs well for visual navigation
with real-world data in the Habitat environment. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
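The moment generating function belief feature mentioned above can be sketched as m_j = sum_i w_i * exp(x_i . v_j) over particles x_i with weights w_i, evaluated at a set of learnable vectors v_j; the shapes below are assumptions.

import torch

def mgf_belief_features(particles, weights, v):
    """MGF features of a particle belief.

    particles: (B, K, D) particle states
    weights:   (B, K)    normalized particle weights
    v:         (J, D)    evaluation points of the MGF
    """
    proj = particles @ v.t()                              # (B, K, J)
    return (weights.unsqueeze(-1) * proj.exp()).sum(dim=1)  # (B, J)

B, K, D, J = 2, 32, 8, 4
particles = torch.randn(B, K, D)
weights = torch.softmax(torch.randn(B, K), dim=-1)
v = 0.1 * torch.randn(J, D, requires_grad=True)  # learnable in practice
print(mgf_belief_features(particles, weights, v).shape)  # torch.Size([2, 4])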