text (string, length 29–3.31k) | label (sequence, length 1–11)
---|---|
Open-set domain adaptation (OSDA) considers that the target domain contains
samples from novel categories unobserved in the external source domain.
Unfortunately, existing OSDA methods typically ignore the need for information
about unseen categories and simply recognize them as an "unknown" set without
further explanation. This motivates us to understand the unknown
categories more specifically by exploring the underlying structures and
recovering their interpretable semantic attributes. In this paper, we propose a
novel framework to accurately identify the seen categories in target domain,
and effectively recover the semantic attributes for unseen categories.
Specifically, structure preserving partial alignment is developed to recognize
the seen categories through domain-invariant feature learning. Attribute
propagation over visual graph is designed to smoothly transit attributes from
seen to unseen categories via visual-semantic mapping. Moreover, two new
cross-domain benchmarks are constructed to evaluate the proposed framework on
this novel and practical challenge. Experimental results on open-set
recognition and
semantic recovery demonstrate the superiority of the proposed method over other
compared baselines. | [
"cs.CV"
] |
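As a rough illustration of the attribute-propagation idea in the abstract above, the following is a minimal sketch in NumPy, assuming a precomputed visual affinity matrix `W` and label-propagation-style clamping of seen-class attributes; the paper's actual visual-semantic mapping may differ:

```python
# Hedged sketch: propagate semantic attributes over a visual graph.
# W, A_init, alpha, and the clamping scheme are illustrative assumptions,
# not the paper's exact formulation.
import numpy as np

def propagate_attributes(W, A_init, seen_mask, alpha=0.9, n_iters=50):
    """W: (N, N) nonnegative visual affinities between samples.
    A_init: (N, K) initial attribute vectors (zeros for unseen samples).
    seen_mask: (N,) bool flags marking seen-category samples."""
    S = W / (W.sum(axis=1, keepdims=True) + 1e-8)  # row-stochastic transitions
    A = A_init.copy()
    for _ in range(n_iters):
        A = alpha * (S @ A) + (1 - alpha) * A_init  # smooth toward neighbors
        A[seen_mask] = A_init[seen_mask]            # clamp seen attributes
    return A  # rows for unseen samples now carry recovered attributes
```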
Representation learning methods for heterogeneous networks produce a
low-dimensional vector embedding for each node that is typically fixed for all
tasks involving the node. Many of the existing methods focus on obtaining a
static vector representation for a node in a way that is agnostic to the
downstream application where it is being used. In practice, however, downstream
tasks such as link prediction require specific contextual information that can
be extracted from the subgraphs related to the nodes provided as input to the
task. To tackle this challenge, we develop SLiCE, a framework bridging static
representation learning methods using global information from the entire graph
with localized attention driven mechanisms to learn contextual node
representations. We first pre-train our model in a self-supervised manner by
introducing higher-order semantic associations and masking nodes, and then
fine-tune our model for a specific link prediction task. Instead of training
node representations by aggregating information from all semantic neighbors
connected via metapaths, we automatically learn the composition of different
metapaths that characterize the context for a specific task without the need
for any pre-defined metapaths. SLiCE significantly outperforms both static and
contextual embedding learning methods on several publicly available benchmark
network datasets. We also interpret the semantic association matrix and
demonstrate its utility and relevance in making successful link predictions
between
heterogeneous nodes in the network. | [
"cs.LG",
"cs.SI",
"stat.ML"
] |
Policy gradient methods are an attractive approach to multi-agent
reinforcement learning problems due to their convergence properties and
robustness in partially observable scenarios. However, there is a significant
performance gap between state-of-the-art policy gradient and value-based
methods on the popular StarCraft Multi-Agent Challenge (SMAC) benchmark. In
this paper, we introduce semi-on-policy (SOP) training as an effective and
computationally efficient way to address the sample inefficiency of on-policy
policy gradient methods. We enhance two state-of-the-art policy gradient
algorithms with SOP training, demonstrating significant performance
improvements. Furthermore, we show that our methods perform as well or better
than state-of-the-art value-based methods on a variety of SMAC tasks. | [
"cs.LG",
"cs.MA"
] |
This paper presents a model architecture for encoding the representations of
part-whole hierarchies in images in the form of a graph. The idea is to divide the
image into patches of different levels and then treat all of these patches as
nodes for a fully connected graph. A dynamic feature extraction module is used
to extract feature representations from these patches in each graph iteration.
This enables us to learn a rich graph representation of the image that
encompasses the inherent part-whole hierarchical information. Utilizing proper
self-supervised training techniques, such a model can be trained as a
general-purpose vision encoder, which can then be used for various
vision-related downstream tasks (e.g., image classification, object detection,
image captioning, etc.). | [
"cs.CV",
"cs.AI",
"cs.LG"
] |
We show that denoising of 3D point clouds can be learned unsupervised,
directly from noisy 3D point cloud data only. This is achieved by extending
recent ideas from learning of unsupervised image denoisers to unstructured 3D
point clouds. Unsupervised image denoisers operate under the assumption that a
noisy pixel observation is a random realization of a distribution around a
clean pixel value, which allows appropriate learning on this distribution to
eventually converge to the correct value. Regrettably, this assumption is not
valid for unstructured points: 3D point clouds are subject to total noise,
i.e., deviations in all coordinates, with no reliable pixel grid. Thus, an
observation can be the realization of an entire manifold of clean 3D points,
which makes a na\"ive extension of unsupervised image denoisers to 3D point
clouds impractical. Overcoming this, we introduce a spatial prior term that
steers convergence to the unique closest of the many possible modes on a
manifold. Our results demonstrate unsupervised denoising performance similar to
that of supervised learning with clean data when given enough training
examples, without requiring any pairs of noisy and clean training data. | [
"cs.CV",
"cs.GR"
] |
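To make the role of the spatial prior concrete, here is a hedged PyTorch sketch of an unsupervised denoising objective: a data term that regresses toward other noisy observations plus a prior term pulling the prediction toward the closest mode. All names are illustrative; the paper's loss may be formulated differently.

```python
# Hedged sketch: unsupervised point-cloud denoising loss with a spatial prior.
# The data term alone would average over the whole manifold of clean points;
# the prior term steers convergence toward the mode closest to the input.
import torch

def denoise_loss(pred, noisy_targets, noisy_input, prior_weight=0.1):
    """pred: (N, 3) predicted clean points.
    noisy_targets: (N, K, 3) nearby noisy samples used as stochastic targets.
    noisy_input: (N, 3) the original noisy points."""
    data_term = (pred.unsqueeze(1) - noisy_targets).pow(2).sum(-1).mean()
    prior_term = (pred - noisy_input).pow(2).sum(-1).mean()
    return data_term + prior_weight * prior_term
```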
Deepfakes are computer manipulated videos where the face of an individual has
been replaced with that of another. Software for creating such forgeries is
easy to use and ever more popular, causing serious threats to personal
reputation and public security. The quality of classifiers for detecting
deepfakes has improved with the release of ever-larger datasets, but the
understanding of why a particular video has been labelled as fake has not kept
pace.
In this work we develop, extend and compare white-box, black-box and
model-specific techniques for explaining the labelling of real and fake videos.
In particular, we adapt SHAP, GradCAM and self-attention models to the task of
explaining the predictions of state-of-the-art detectors based on EfficientNet,
trained on the Deepfake Detection Challenge (DFDC) dataset. We compare the
obtained explanations, proposing metrics to quantify their visual features and
desirable characteristics, and also perform a user survey collecting users'
opinions regarding the usefulness of the explainers. | [
"cs.CV"
] |
Interpretable classification models are built with the purpose of providing a
comprehensible description of the decision logic to an external oversight
agent. When considered in isolation, a decision tree, a set of classification
rules, or a linear model, are widely recognized as human-interpretable.
However, such models are generated as part of a larger analytical process. Bias
in data collection and preparation, or in the model's construction, may
severely affect the accountability of the design process. We conduct an
experimental
study of the stability of interpretable models with respect to feature
selection, instance selection, and model selection. Our conclusions should
raise the scientific community's awareness of the need for a stability impact
assessment of interpretable models. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
The explosion in workload complexity and the recent slow-down in Moore's law
scaling call for new approaches towards efficient computing. Researchers are
now beginning to use recent advances in machine learning in software
optimizations, augmenting or replacing traditional heuristics and data
structures. However, the space of machine learning for computer hardware
architecture is only lightly explored. In this paper, we demonstrate the
potential of deep learning to address the von Neumann bottleneck of memory
performance. We focus on the critical problem of learning memory access
patterns, with the goal of constructing accurate and efficient memory
prefetchers. We relate contemporary prefetching strategies to n-gram models in
natural language processing, and show how recurrent neural networks can serve
as a drop-in replacement. On a suite of challenging benchmark datasets, we find
that neural networks consistently demonstrate superior performance in terms of
precision and recall. This work represents the first step towards practical
neural-network based prefetching, and opens a wide range of exciting directions
for machine learning in computer architecture research. | [
"cs.LG",
"stat.ML"
] |
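The n-gram-to-RNN analogy above can be sketched as next-token prediction over quantized address deltas. A minimal PyTorch sketch; the delta vocabulary and layer sizes are illustrative assumptions, not the paper's configuration:

```python
# Hedged sketch: an LSTM that predicts the next memory-access delta from a
# history of deltas, the sequence-modeling view of prefetching.
import torch
import torch.nn as nn

class DeltaLSTMPrefetcher(nn.Module):
    def __init__(self, num_deltas=50_000, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(num_deltas, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_deltas)

    def forward(self, delta_ids):            # (batch, seq) of quantized deltas
        h, _ = self.lstm(self.embed(delta_ids))
        return self.head(h)                  # logits over the next delta

# Training would apply cross-entropy between logits[:, :-1] and
# delta_ids[:, 1:], exactly as in language modeling.
```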
We present a method that learns a spatiotemporal neural irradiance field for
dynamic scenes from a single video. Our learned representation enables
free-viewpoint rendering of the input video. Our method builds upon recent
advances in implicit representations. Learning a spatiotemporal irradiance
field from a single video poses significant challenges because the video
contains only one observation of the scene at any point in time. The 3D
geometry of a scene can be legitimately represented in numerous ways since
varying geometry (motion) can be explained with varying appearance and vice
versa. We address this ambiguity by constraining the time-varying geometry of
our dynamic scene representation using the scene depth estimated from video
depth estimation methods, aggregating contents from individual frames into a
single global representation. We provide an extensive quantitative evaluation
and demonstrate compelling free-viewpoint rendering results. | [
"cs.CV"
] |
In this paper, we study the convergence of generative adversarial networks
(GANs) from the perspective of the informativeness of the gradient of the
optimal discriminative function. We show that GANs without restriction on the
discriminative function space commonly suffer from the problem that the
gradient produced by the discriminator is uninformative to guide the generator.
By contrast, Wasserstein GAN (WGAN), where the discriminative function is
restricted to 1-Lipschitz, does not suffer from such a gradient
uninformativeness problem. We further show in the paper that the model with a
compact dual form of Wasserstein distance, where the Lipschitz condition is
relaxed, may also theoretically suffer from this issue. This implies the
importance of the Lipschitz condition and motivates us to study the general
formulation of GANs with Lipschitz constraint, which leads to a new family of
GANs that we call Lipschitz GANs (LGANs). We show that LGANs guarantee the
existence and uniqueness of the optimal discriminative function as well as the
existence of a unique Nash equilibrium. We prove that LGANs are generally
capable of eliminating the gradient uninformativeness problem. According to our
empirical analysis, LGANs are more stable and generate consistently higher
quality samples compared with WGAN. | [
"cs.LG",
"cs.CV",
"stat.ML"
] |
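For reference, the Lipschitz-constrained discriminator objective at the heart of this line of work can be written as follows. This is a sketch of the WGAN-style form that LGANs generalize, not the paper's exact family of losses:

```latex
% Sketch: f is the discriminative function, G the generator, and
% \|f\|_{L} \le 1 denotes the 1-Lipschitz constraint on f.
\max_{\,\|f\|_{L}\le 1}\;
  \mathbb{E}_{x\sim P_{\mathrm{data}}}\big[f(x)\big]
  \;-\; \mathbb{E}_{z\sim P_{z}}\big[f(G(z))\big]
```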
One solution for depth imaging of a moving scene is to project a static
pattern onto the object and use just a single image for reconstruction. However,
if the motion of the object is too fast with respect to the exposure time of
the image sensor, patterns on the captured image are blurred and reconstruction
fails. In this paper, we impose multiple projection patterns into each single
captured image to realize temporal super resolution of the depth image
sequences. With our method, multiple patterns are projected onto the object at
a higher frame rate than is possible with a camera. In this case, the observed
pattern
varies depending on the depth and motion of the object, so we can extract
temporal information of the scene from each single image. The decoding process
is realized using a learning-based approach where no geometric calibration is
needed. Experiments confirm the effectiveness of our method where sequential
shapes are reconstructed from a single image. Both quantitative evaluations and
comparisons with recent techniques were also conducted. | [
"cs.CV"
] |
Sparse codes in neuroscience have been suggested to offer certain
computational advantages over other neural representations of sensory data. To
explore this viewpoint, a sparse code is used to represent natural images in an
optimal control task solved with neuro-dynamic programming, and its
computational properties are investigated. The central finding is that when
feature inputs to a linear network are correlated, an over-complete sparse code
increases the memory capacity of the network in an efficient manner beyond that
possible for any complete code with the same-sized input, and also increases
the speed of learning the network weights. A complete sparse code is found to
maximise the memory capacity of a linear network by decorrelating its feature
inputs to transform the design matrix of the least-squares problem to one of
full rank. It also conditions the Hessian matrix of the least-squares problem,
thereby increasing the rate of convergence to the optimal network weights.
Other types of decorrelating codes would also achieve this. However, an
over-complete sparse code is found to be approximately decorrelated, extracting
a larger number of approximately decorrelated features from the same-sized
input, allowing it to efficiently increase memory capacity beyond that possible
for any complete code: a 2.25 times over-complete sparse code is shown to at
least double memory capacity compared with a complete sparse code using the
same input. This is used in sequential learning to store a potentially large
number of optimal control tasks in the network, while catastrophic forgetting
is avoided using a partitioned representation, yielding a cost-to-go function
approximator that generalizes over the states in each partition. Sparse code
advantages over dense codes and local codes are also discussed. | [
"cs.LG",
"stat.ML"
] |
Rotation detection is a challenging task due to the difficulties of locating
the multi-angle objects and separating them effectively from the background.
Though considerable progress has been made, for practical settings there still
exist challenges for rotating objects with large aspect ratios, dense
distributions, and extreme category imbalance. In this paper, we propose an
end-to-end refined single-stage rotation detector for fast and accurate object
detection by using a progressive regression approach from coarse to fine
granularity. Considering the shortcoming of feature misalignment in existing
refined single-stage detector, we design a feature refinement module to improve
detection performance by getting more accurate features. The key idea of
feature refinement module is to re-encode the position information of the
current refined bounding box to the corresponding feature points through
pixel-wise feature interpolation to realize feature reconstruction and
alignment. For more accurate rotation estimation, an approximate SkewIoU loss
is proposed to solve the problem that the calculation of SkewIoU is not
differentiable. Experiments on three popular remote sensing public datasets
DOTA,
HRSC2016, UCAS-AOD as well as one scene text dataset ICDAR2015 show the
effectiveness of our approach. TensorFlow and PyTorch implementations are
available at https://github.com/Thinklab-SJTU/R3Det_Tensorflow and
https://github.com/SJTU-Thinklab-Det/r3det-on-mmdetection, and R3Det is also
integrated in our open source rotation detection benchmark:
https://github.com/yangxue0827/RotationDetection. | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
A model must adapt itself to generalize to new and different data during
testing. In this setting of fully test-time adaptation, the model has only the
test data and its own parameters. We propose to adapt by test entropy
minimization (tent): we optimize the model for confidence as measured by the
entropy of its predictions. Our method estimates normalization statistics and
optimizes channel-wise affine transformations to update online on each batch.
Tent reduces generalization error for image classification on corrupted
ImageNet and CIFAR-10/100 and reaches a new state-of-the-art error on
ImageNet-C. Tent handles source-free domain adaptation on digit recognition
from SVHN to MNIST/MNIST-M/USPS, on semantic segmentation from GTA to
Cityscapes, and on the VisDA-C benchmark. These results are achieved in one
epoch of test-time optimization without altering training. | [
"cs.LG",
"cs.CV",
"stat.ML"
] |
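Tent's recipe above (entropy minimization over normalization statistics and channel-wise affine parameters) is compact enough to sketch directly. A minimal PyTorch sketch, assuming a classifier with BatchNorm2d layers; function names and the learning rate are illustrative, not the paper's exact configuration:

```python
# Hedged sketch of fully test-time adaptation by entropy minimization.
import torch
import torch.nn as nn

def configure_for_tent(model: nn.Module):
    """Freeze all weights except BatchNorm affine parameters."""
    model.train()                        # BN normalizes with batch statistics
    for p in model.parameters():
        p.requires_grad_(False)
    params = []
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            m.weight.requires_grad_(True)
            m.bias.requires_grad_(True)
            m.track_running_stats = False        # re-estimate per test batch
            m.running_mean, m.running_var = None, None
            params += [m.weight, m.bias]
    return params

def tent_step(model, x, optimizer):
    """One online step: minimize the entropy of the model's predictions."""
    logits = model(x)
    log_probs = logits.log_softmax(dim=1)
    entropy = -(log_probs.exp() * log_probs).sum(dim=1).mean()
    optimizer.zero_grad()
    entropy.backward()
    optimizer.step()
    return logits.detach()
```

A typical loop would build `torch.optim.SGD(configure_for_tent(model), lr=1e-3)` once and call `tent_step` on each incoming test batch, so adaptation happens online without altering training.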
Machine Learning (ML) models are increasingly deployed in the wild to perform
a wide range of tasks. In this work, we ask to what extent an adversary can
steal the functionality of such "victim" models based solely on blackbox
interactions: image in, predictions out. In contrast to prior work, we present
an adversary lacking knowledge of train/test data used by the model, its
internals, and semantics over model outputs. We formulate model functionality
stealing as a two-step approach: (i) querying a set of input images to the
blackbox model to obtain predictions; and (ii) training a "knockoff" with
queried image-prediction pairs. We make multiple remarkable observations: (a)
querying random images from a different distribution than that of the blackbox
training data results in a well-performing knockoff; (b) this is possible even
when the knockoff is represented using a different architecture; and (c) our
reinforcement learning approach additionally improves query sample efficiency
in certain settings and provides performance gains. We validate model
functionality stealing on a range of datasets and tasks, as well as on a
popular image analysis API where we create a reasonable knockoff for as little
as $30. | [
"cs.CV",
"cs.CR",
"cs.LG"
] |
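The two-step procedure above can be sketched as black-box distillation. A minimal PyTorch sketch where `blackbox`, `student`, and `transfer_loader` are placeholder names, and KL-distillation is one plausible training choice rather than the paper's exact detail:

```python
# Hedged sketch of the knockoff procedure: (i) query the black box on a
# transfer set, (ii) distill its predictions into a student model.
import torch
import torch.nn.functional as F

def train_knockoff(blackbox, student, transfer_loader, optimizer, epochs=10):
    for _ in range(epochs):
        for x, _ in transfer_loader:              # labels are never used
            with torch.no_grad():
                # step (i): query; assumes the black box returns logits
                # (if it returns probabilities, drop the softmax)
                teacher_probs = blackbox(x).softmax(dim=1)
            # step (ii): train the knockoff on image-prediction pairs
            student_logp = student(x).log_softmax(dim=1)
            loss = F.kl_div(student_logp, teacher_probs,
                            reduction='batchmean')
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```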
We use large amounts of unlabeled video to learn models for visual tracking
without manual human supervision. We leverage the natural temporal coherency of
color to create a model that learns to colorize gray-scale videos by copying
colors from a reference frame. Quantitative and qualitative experiments suggest
that this task causes the model to automatically learn to track visual regions.
Although the model is trained without any ground-truth labels, our method
learns to track well enough to outperform the latest methods based on optical
flow. Moreover, our results suggest that failures to track are correlated with
failures to colorize, indicating that advancing video colorization may further
improve self-supervised visual tracking. | [
"cs.CV",
"cs.GR",
"cs.LG",
"cs.MM",
"cs.RO"
] |
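The copy-colors mechanism described above amounts to a soft pointer: each gray-scale target location attends over reference locations and copies a similarity-weighted mixture of their colors. A hedged PyTorch sketch; shapes and the temperature are illustrative assumptions:

```python
# Hedged sketch: colorize a target frame by pointing into a reference frame.
import torch

def copy_colors(f_ref, f_tgt, c_ref, temperature=0.07):
    """f_ref, f_tgt: (N, C, H, W) features from gray-scale frames.
    c_ref: (N, 3, H, W) colors of the reference frame."""
    n, c, h, w = f_ref.shape
    fr = f_ref.flatten(2)                    # (N, C, HW_ref)
    ft = f_tgt.flatten(2)                    # (N, C, HW_tgt)
    sim = torch.einsum('nci,ncj->nij', fr, ft) / temperature
    attn = sim.softmax(dim=1)                # pointer over reference locations
    cr = c_ref.flatten(2)                    # (N, 3, HW_ref)
    c_tgt = torch.einsum('nci,nij->ncj', cr, attn)
    return c_tgt.view(n, 3, h, w)            # predicted target colors
```

At test time, the same attention map serves as a tracker: it says where each target pixel "came from" in the reference frame.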
Most existing point cloud instance and semantic segmentation methods rely
heavily on strong supervision signals, which require point-level labels for
every point in the scene. However, such strong supervision incurs large
annotation costs, raising the need for efficient annotation. In this paper, we
discover that the locations of instances matter for 3D scene segmentation. By
fully exploiting location information, we design a weakly
supervised point cloud segmentation algorithm that only requires clicking on
one point per instance to indicate its location for annotation. With
over-segmentation for pre-processing, we extend these location annotations into
segments as seg-level labels. We further design a segment grouping network
(SegGroup) to generate pseudo point-level labels under seg-level labels by
hierarchically grouping the unlabeled segments into the relevant nearby labeled
segments, so that existing point-level supervised segmentation models can
directly consume these pseudo labels for training. Experimental results show
that our seg-level supervised method (SegGroup) achieves comparable results
with the fully annotated point-level supervised methods. Moreover, it also
outperforms the recent weakly supervised methods given a fixed annotation
budget. | [
"cs.CV"
] |
Measuring the similarity between two objects is the core operation of existing
cluster analyses for grouping similar objects into clusters. Cluster analyses
have been applied to a number of applications, including image segmentation,
social network analysis, and computational biology. This paper introduces a new
similarity measure called point-set kernel which computes the similarity
between an object and a sample of objects generated from an unknown
distribution. The proposed clustering procedure utilizes this new measure to
characterize both the typical point of every cluster and the cluster grown from
the typical point. We show that the new clustering procedure is both effective
and efficient, such that it can deal with large-scale datasets. In contrast,
existing clustering algorithms are either efficient or effective, but not both;
and even efficient ones have difficulty dealing with large-scale datasets
without
special hardware. We show that the proposed algorithm is more effective and
runs orders of magnitude faster than the state-of-the-art density-peak
clustering and scalable kernel k-means clustering when applied to datasets of
millions of data points, on commonly used computing machines. | [
"cs.LG",
"stat.ML"
] |
The purpose of this study was to investigate the use of deep learning for
coniferous/deciduous classification of individual trees from airborne LiDAR
data. To enable efficient processing by a deep convolutional neural network
(CNN), we designed two discrete representations using leaf-off and leaf-on
LiDAR data: a digital surface model with four channels (DSMx4) and a set of
four 2D views (4x2D). A training dataset of labeled tree crowns was generated
via segmentation of tree crowns, followed by co-registration with field data.
Potential mislabels due to GPS error or tree leaning were corrected using a
statistical ensemble filtering procedure. Because the training data was heavily
unbalanced (~8% conifers), we trained an ensemble of CNNs on random balanced
sub-samples of augmented data (180 rotational variations per instance). The
4x2D representation yielded similar classification accuracies to the DSMx4
representation (~82% coniferous and ~90% deciduous) while converging faster.
The data augmentation improved the classification accuracies, but more real
training instances (especially coniferous) would likely result in much stronger
improvements. Leaf-off LiDAR data were the primary source of useful
information, which is likely due to the perennial nature of coniferous foliage.
LiDAR intensity values also proved to be useful, but normalization yielded no
significant improvements. Lastly, the classification accuracies of overstory
trees (~90%) were more balanced than those of understory trees (~90% deciduous
and ~65% coniferous), which is likely due to the incomplete capture of
understory tree crowns via airborne LiDAR. Automatic derivation of optimal
features via deep learning provides the opportunity for remarkable improvements
in prediction tasks where the captured data are not friendly to the human
visual system, and where human-designed features are therefore likely
sub-optimal. | [
"cs.LG",
"cs.CV"
] |
Shadow removal is still a challenging task due to its inherent
background-dependent and spatial-variant properties, leading to unknown and
diverse shadow patterns. Even powerful state-of-the-art deep neural networks
can hardly recover a traceless, shadow-removed background. This paper proposes
a
new solution for this task by formulating it as an exposure fusion problem to
address the challenges. Intuitively, we can first estimate multiple
over-exposure images w.r.t. the input image to let the shadow regions in these
images have the same color as the shadow-free areas in the input image. Then, we
fuse the original input with the over-exposure images to generate the final
shadow-free counterpart. Nevertheless, the spatial-variant property of the
shadow requires the fusion to be sufficiently `smart', that is, it should
automatically select proper over-exposure pixels from different images to make
the final output natural. To address this challenge, we propose the
shadow-aware FusionNet that takes the shadow image as input to generate fusion
weight maps across all the over-exposure images. Moreover, we propose the
boundary-aware RefineNet to eliminate the remaining shadow trace further. We
conduct extensive experiments on the ISTD, ISTD+, and SRD datasets to validate
our method's effectiveness and show better performance in shadow regions and
comparable performance in non-shadow regions over the state-of-the-art methods.
We release the model and code at
https://github.com/tsingqguo/exposure-fusion-shadow-removal. | [
"cs.CV"
] |
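The fusion step itself reduces to a per-pixel convex combination of the input and its over-exposed variants. A minimal PyTorch sketch; the weight-prediction network is a placeholder and the exact FusionNet architecture is not reproduced here:

```python
# Hedged sketch of the exposure-fusion step for shadow removal.
import torch

def fuse_exposures(images, weights):
    """images: (N, M, 3, H, W) the input plus M-1 over-exposed versions.
    weights: (N, M, 1, H, W) nonnegative maps that sum to 1 over M."""
    return (weights * images).sum(dim=1)      # (N, 3, H, W) shadow-free output

# weights = fusion_net(shadow_image).softmax(dim=1) would enforce the
# per-pixel convex combination across the M candidate exposures, letting the
# network "smartly" pick proper over-exposure pixels at each location.
```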
Deep neural networks (DNNs) have been quite successful in solving many
complex learning problems. However, DNNs tend to have a large number of
learning parameters, leading to a large memory and computation requirement. In
this paper, we propose a model compression framework for efficient training and
inference of deep neural networks on embedded systems. Our framework provides
data structures and kernels for OpenCL-based parallel forward and backward
computation in a compressed form. In particular, our method learns sparse
representations of parameters using $\ell_1$-based sparse coding while
training, storing them in compressed sparse matrices. Unlike previous works,
our method does not require a pre-trained model as input and
therefore can be more versatile for different application environments. Even
though the use of $\ell_1$-based sparse coding for model compression is not
new, we show that it can be far more effective than previously reported when we
use proximal point algorithms and the technique of debiasing. Our experiments
show that our method can produce minimal learning models suitable for small
embedded devices. | [
"cs.LG",
"stat.ML"
] |
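The two ingredients highlighted above, a proximal point method for the $\ell_1$ problem and a debiasing step, can be sketched in a few lines. A NumPy illustration using ISTA with soft-thresholding and a least-squares refit on the support; this is a generic sketch under assumed notation (dictionary `D`, signal `y`), not the framework's OpenCL kernels:

```python
# Hedged sketch: l1 sparse coding via proximal gradient (ISTA) plus debiasing.
import numpy as np

def soft_threshold(x, t):
    """prox of t * ||.||_1: shrink each entry toward zero by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(D, y, lam, n_iters=200):
    """Minimize 0.5 * ||D @ z - y||^2 + lam * ||z||_1."""
    L = np.linalg.norm(D, 2) ** 2       # Lipschitz constant of the gradient
    z = np.zeros(D.shape[1])
    for _ in range(n_iters):
        grad = D.T @ (D @ z - y)
        z = soft_threshold(z - grad / L, lam / L)
    return z

def debias(D, y, z):
    """Refit the nonzero coefficients by least squares (debiasing)."""
    support = np.flatnonzero(z)
    out = np.zeros_like(z)
    if support.size:
        out[support] = np.linalg.lstsq(D[:, support], y, rcond=None)[0]
    return out
```

Debiasing removes the shrinkage bias that soft-thresholding introduces on the retained coefficients, which is one reason the combination can outperform plain $\ell_1$ training.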
Group activity recognition is a crucial yet challenging problem, whose core
lies in fully exploring spatial-temporal interactions among individuals and
generating reasonable group representations. However, previous methods either
model spatial and temporal information separately, or directly aggregate
individual features to form group features. To address these issues, we propose
a novel group activity recognition network termed GroupFormer. It captures
spatial-temporal contextual information jointly to augment the individual and
group representations effectively with a clustered spatial-temporal
transformer. Specifically, our GroupFormer has three appealing advantages: (1)
A tailored Transformer variant, the Clustered Spatial-Temporal Transformer, is
proposed to enhance the individual representation and group representation. (2)
It models the spatial and temporal dependencies integrally and utilizes
decoders to bridge the spatial and temporal information. (3)
A clustered attention mechanism is utilized to dynamically divide individuals
into multiple clusters for better learning activity-aware semantic
representations. Moreover, experimental results show that the proposed
framework outperforms state-of-the-art methods on the Volleyball dataset and
Collective Activity dataset. Code is available at
https://github.com/xueyee/GroupFormer. | [
"cs.CV"
] |
Recent deep learning approaches for representation learning on graphs follow
a neighborhood aggregation procedure. We analyze some important properties of
these models and propose a strategy to overcome their limitations. In
particular, the
range of "neighboring" nodes that a node's representation draws from strongly
depends on the graph structure, analogous to the spread of a random walk. To
adapt to local neighborhood properties and tasks, we explore an architecture --
jumping knowledge (JK) networks -- that flexibly leverages, for each node,
different neighborhood ranges to enable better structure-aware representation.
In a number of experiments on social, bioinformatics and citation networks, we
demonstrate that our model achieves state-of-the-art performance. Furthermore,
combining the JK framework with models like Graph Convolutional Networks,
GraphSAGE and Graph Attention Networks consistently improves those models'
performance. | [
"cs.LG",
"cs.AI",
"cs.CV",
"stat.ML"
] |
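The core JK idea, letting each node draw on representations from different neighborhood ranges, can be sketched as a simple layer-wise aggregation. Minimal PyTorch sketches of two of the paper's aggregators (concatenation and element-wise max); the LSTM-attention variant is omitted for brevity:

```python
# Hedged sketch of jumping-knowledge aggregation over per-layer GNN outputs.
import torch

def jk_concat(layer_outputs):
    """layer_outputs: list of (N, d) tensors, one per GNN layer."""
    return torch.cat(layer_outputs, dim=-1)       # (N, num_layers * d)

def jk_max(layer_outputs):
    """Element-wise max lets each node pick, per feature, the most
    informative neighborhood range."""
    return torch.stack(layer_outputs, dim=0).max(dim=0).values  # (N, d)
```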
Incorporating the side information of a text corpus, i.e., authors, time
stamps, and emotional tags, into traditional text mining models has gained
significant interest in the areas of information retrieval, statistical natural
language processing, and machine learning. One branch of these works is the
so-called Author Topic Model (ATM), which incorporates the authors' interests
as side information into the classical topic model. However, the existing ATM
needs to predefine the number of topics, which is difficult and inappropriate
in many real-world settings. In this paper, we propose an Infinite Author Topic
(IAT) model to resolve this issue. Instead of assigning a discrete probability
over a fixed number of topics, we use a stochastic process to determine the
number
of topics from the data itself. To be specific, we extend a gamma-negative
binomial process to three levels in order to capture the
author-document-keyword hierarchical structure. Furthermore, each document is
assigned a mixed gamma process that accounts for the multiple authors'
contributions to the document. An efficient Gibbs sampling inference
algorithm with each conditional distribution being closed-form is developed for
the IAT model. Experiments on several real-world datasets show the capabilities
of our IAT model to learn the hidden topics, authors' interests on these topics
and the number of topics simultaneously. | [
"stat.ML",
"cs.IR",
"cs.LG"
] |
In recent years, the use of object proposals as a preprocessing step for
object detection has become an effective way to improve computational
efficiency. Good object proposal methods should have a high object detection
recall rate and low computational cost, as well as good localization quality
and repeatability. However, it is difficult for current advanced algorithms to
balance all of these criteria well. To address this problem, we propose a
class-independent object proposal algorithm BIHL. It combines the advantages of
window scoring and superpixel merging, which not only improves the localization
quality but also speeds up the computational efficiency. The experimental
results on the VOC2007 dataset show that with an IoU threshold of 0.5 and a
budget of 10,000 proposals, our method achieves the highest detection recall
and a mean average best overlap of 79.5%, while running nearly three times
faster than the current fastest method. Moreover, among the methods that remain
repeatable under various disturbances, ours achieves the highest average
repeatability. | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
The advent of the Internet era has led to an explosive growth in the
Electronic Health Records (EHR) in the past decades. The EHR data can be
regarded as a collection of clinical events, including laboratory results,
medication records, physiological indicators, etc, which can be used for
clinical outcome prediction tasks to support constructions of intelligent
health systems. Learning patient representation from these clinical events for
the clinical outcome prediction is an important but challenging step. Most
related studies transform EHR data of a patient into a sequence of clinical
events in temporal order and then use sequential models to learn patient
representations for outcome prediction. However, clinical event sequences
contain thousands of event types and complex temporal dependencies. We further
observe that clinical events occurring within a short period are not
constrained by any temporal order, whereas events over the long term are
influenced by temporal dependencies. The multi-scale temporal property makes it
difficult for
traditional sequential models to capture the short-term co-occurrence and the
long-term temporal dependencies in clinical event sequences. In response to the
above challenges, this paper proposes a Multi-level Representation Model (MRM).
MRM first uses a sparse attention mechanism to model the short-term
co-occurrence, then uses interval-based event pooling to remove redundant
information and reduce sequence length, and finally predicts clinical outcomes
through Long Short-Term Memory (LSTM). Experiments on real-world datasets
indicate that our proposed model largely improves the performance of clinical
outcome prediction tasks using EHR data. | [
"cs.LG",
"cs.AI"
] |
Time series analysis is used to understand and predict dynamic processes,
including evolving demands in business, weather, markets, and biological
rhythms. Exponential smoothing is used in all these domains to obtain simple
interpretable models of time series and to forecast future values. Despite its
popularity, exponential smoothing fails dramatically in the presence of
outliers, large amounts of noise, or when the underlying time series changes.
We propose a flexible model for time series analysis, using exponential
smoothing cells for overlapping time windows. The approach can detect and
remove outliers, denoise data, fill in missing observations, and provide
meaningful forecasts in challenging situations. In contrast to classic
exponential smoothing, which solves a nonconvex optimization problem over the
smoothing parameters and initial state, the proposed approach requires solving
a single structured convex optimization problem. Recent developments in
efficient convex optimization of large-scale dynamic models make the approach
tractable. We illustrate new capabilities using synthetic examples, and then
use the approach to analyze and forecast noisy real-world time series. Code for
the approach and experiments is publicly available. | [
"stat.ML",
"62F35, 65K10, 49M15"
] |
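As a rough analogue of the structured convex formulation, consider fitting a smoothed series with a robust Huber data term and a curvature penalty, solved as a single convex program. A CVXPY sketch under assumed notation; it illustrates the convexification idea, not the paper's exact exponential-smoothing-cell model:

```python
# Hedged sketch: robust smoothing as one structured convex program.
import cvxpy as cp
import numpy as np

def robust_smooth(y: np.ndarray, lam: float = 10.0):
    """y: observed series. Returns a smoothed series that downweights
    outliers (Huber) and penalizes curvature (second differences)."""
    s = cp.Variable(len(y))
    data_term = cp.sum(cp.huber(y - s, M=1.0))          # robust to outliers
    smooth_term = lam * cp.sum_squares(cp.diff(s, 2))   # penalize curvature
    cp.Problem(cp.Minimize(data_term + smooth_term)).solve()
    return s.value
```

Because the whole fit is one convex problem, there is no nonconvex search over smoothing parameters and initial state, which mirrors the tractability argument made in the abstract.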
Graph neural networks (GNNs) have been widely used in representation learning
on graphs and achieved state-of-the-art performance in tasks such as node
classification and link prediction. However, most existing GNNs are designed to
learn node representations on fixed and homogeneous graphs. The limitations
especially become problematic when learning representations on a misspecified
graph or a heterogeneous graph that consists of various types of nodes and
edges. In this paper, we propose Graph Transformer Networks (GTNs) that are
capable of generating new graph structures, which involve identifying useful
connections between unconnected nodes on the original graph, while learning
effective node representation on the new graphs in an end-to-end fashion. Graph
Transformer layer, a core layer of GTNs, learns a soft selection of edge types
and composite relations for generating useful multi-hop connections, so-called
meta-paths. Our experiments show that GTNs learn new graph structures, based on
data and tasks without domain knowledge, and yield powerful node representation
via convolution on the new graphs. Without domain-specific graph preprocessing,
GTNs achieved the best performance in all three benchmark node classification
tasks against the state-of-the-art methods that require pre-defined meta-paths
from domain knowledge. | [
"cs.LG",
"cs.SI",
"stat.ML"
] |
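The Graph Transformer layer's core operation, a soft selection over edge-type adjacencies whose matrix products form meta-path adjacencies, can be sketched compactly. A hedged PyTorch sketch with illustrative names:

```python
# Hedged sketch of soft edge-type selection, the core of a GTN layer.
import torch
import torch.nn as nn

class SoftEdgeSelect(nn.Module):
    def __init__(self, num_edge_types):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(num_edge_types))

    def forward(self, adjs):                         # adjs: (T, N, N)
        w = self.logits.softmax(dim=0)               # soft choice of types
        return (w.view(-1, 1, 1) * adjs).sum(dim=0)  # blended (N, N) adjacency

def metapath_adjacency(adjs, select_a, select_b):
    """A 2-hop meta-path adjacency as the product of two soft selections;
    stacking more selections yields longer learned meta-paths."""
    return select_a(adjs) @ select_b(adjs)
```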
This paper presents a deep learning-based de-homogenization method for
structural compliance minimization. By using a convolutional neural network to
parameterize the mapping from a set of lamination parameters on a coarse mesh
to a one-scale design on a fine mesh, we avoid solving the least square
problems associated with traditional de-homogenization approaches and save time
correspondingly. To train the neural network, a two-step custom loss function
has been developed which ensures a periodic output field that follows the local
lamination orientations. A key feature of the proposed method is that the
training is carried out without any use of or reference to the underlying
structural optimization problem, which renders the proposed method robust and
insensitive with respect to domain size, boundary conditions, and loading. A
post-processing procedure utilizing a distance transform on the output field
skeleton is used to project the desired lamination widths onto the output field
while ensuring a predefined minimum length-scale and volume fraction. To
demonstrate that the deep learning approach has excellent generalization
properties, numerical examples are shown for several different load and
boundary conditions. For an appropriate choice of parameters, the
de-homogenized designs perform within $7-25\%$ of the homogenization-based
solution at a fraction of the computational cost. With several options for
further improvements, the scheme may provide the basis for future interactive
high-resolution topology optimization. | [
"cs.LG",
"cs.CV",
"J.6; I.4.9; I.2.6"
] |
Most successful self-supervised learning methods are trained to align the
representations of two independent views from the data. State-of-the-art
methods in video are inspired by image techniques, where these two views are
similarly extracted by cropping and augmenting the resulting crop. However,
these methods miss a crucial element in the video domain: time. We introduce
BraVe, a self-supervised learning framework for video. In BraVe, one of the
views has access to a narrow temporal window of the video while the other view
has broad access to the video content. Our models learn to generalise from
the narrow view to the general content of the video. Furthermore, BraVe
processes the views with different backbones, enabling the use of alternative
augmentations or modalities in the broad view, such as optical flow, randomly
convolved RGB frames, audio or their combinations. We demonstrate that BraVe
achieves state-of-the-art results in self-supervised representation learning on
standard video and audio classification benchmarks including UCF101, HMDB51,
Kinetics, ESC-50 and AudioSet. | [
"cs.CV"
] |
Deep networks are increasingly being applied to problems involving image
synthesis, e.g., generating images from textual descriptions and reconstructing
an input image from a compact representation. Supervised training of
image-synthesis networks typically uses a pixel-wise loss (PL) to indicate the
mismatch between a generated image and its corresponding target image. We
propose instead to use a loss function that is better calibrated to human
perceptual judgments of image quality: the multiscale structural-similarity
score (MS-SSIM). Because MS-SSIM is differentiable, it is easily incorporated
into gradient-descent learning. We compare the consequences of using MS-SSIM
versus PL loss on training deterministic and stochastic autoencoders. For three
different architectures, we collected human judgments of the quality of image
reconstructions. Observers reliably prefer images synthesized by
MS-SSIM-optimized models over those synthesized by PL-optimized models, for two
distinct PL measures ($\ell_1$ and $\ell_2$ distances). We also explore the
effect of training objective on image encoding and analyze conditions under
which perceptually-optimized representations yield better performance on image
classification. Finally, we demonstrate the superiority of
perceptually-optimized networks for super-resolution imaging. Just as computer
vision has advanced through the use of convolutional architectures that mimic
the structure of the mammalian visual system, we argue that significant
additional advances can be made in modeling images through the use of training
objectives that are well aligned to characteristics of human perception. | [
"cs.LG",
"cs.CV"
] |
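A differentiable structural-similarity loss drops straight into gradient-descent training. For brevity, the following PyTorch sketch implements single-scale SSIM with a uniform window rather than the full MS-SSIM used in the paper; the constants follow the common SSIM defaults:

```python
# Hedged sketch: single-scale SSIM as a perceptual training loss.
import torch
import torch.nn.functional as F

def ssim_loss(x, y, window=11, c1=0.01**2, c2=0.03**2):
    """x, y: (N, C, H, W) images in [0, 1]. Returns 1 - mean SSIM."""
    pad = window // 2
    mu_x = F.avg_pool2d(x, window, 1, pad)
    mu_y = F.avg_pool2d(y, window, 1, pad)
    var_x = F.avg_pool2d(x * x, window, 1, pad) - mu_x ** 2
    var_y = F.avg_pool2d(y * y, window, 1, pad) - mu_y ** 2
    cov = F.avg_pool2d(x * y, window, 1, pad) - mu_x * mu_y
    ssim = ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
    return 1.0 - ssim.mean()
```

Every operation is differentiable, so the loss can replace $\ell_1$ or $\ell_2$ reconstruction terms in an autoencoder with no other changes to the training loop.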
Every day around the world, interminable terabytes of data are being captured
for surveillance purposes. A typical 1-2MP CCTV camera generates around 7-12GB
of data per day. Frame-by-frame processing of such an enormous amount of data
requires hefty computational resources. In recent years, compressive sensing
approaches have shown impressive results in signal processing by reducing the
sampling bandwidth. Different sampling mechanisms were developed to incorporate
compressive sensing in image and video acquisition. Pixel-wise coded exposure
is one among the promising sensing paradigms for capturing videos in the
compressed domain, which was also realized into an all-CMOS sensor
\cite{Xiong2017}. Though cameras that perform compressive sensing save a lot of
bandwidth at the time of sampling and minimize the memory required to store
videos, we cannot do much in terms of processing until the videos are
reconstructed to the original frames. But, the reconstruction of
compressive-sensed (CS) videos still takes a lot of time and is also
computationally expensive. In this work, we show that object detection and
localization are possible directly on the CS frames (easily up to 20x
compression). To our knowledge, this is the first time that the problem of
object detection and localization on CS frames has been attempted. Hence, we
also created a dataset for training in the CS domain. The proposed model
achieves a good accuracy of 46.27\% mAP (mean average precision) with an
inference time of 23 ms directly on the compressed frames (approx. 20
original-domain frames), enabling real-time inference, which was verified on an
NVIDIA TX2 embedded board. Our framework will significantly reduce
the communication bandwidth, and thus reduction in power as the video
compression will be done at the image sensor processing core. | [
"cs.CV",
"eess.IV"
] |
Temporal-Difference (TD) learning with nonlinear smooth function
approximation for policy evaluation has achieved great success in modern
reinforcement learning. It is shown that such a problem can be reformulated as
a stochastic nonconvex-strongly-concave optimization problem, which is
challenging, as the naive stochastic gradient descent-ascent algorithm suffers
from slow convergence. Existing approaches for this problem are based on
two-timescale or double-loop stochastic gradient algorithms, which may also
require sampling large-batch data. However, in practice, a single-timescale
single-loop stochastic algorithm is preferred due to its simplicity and also
because its step-size is easier to tune. In this paper, we propose two
single-timescale single-loop algorithms which require only one data point each
step. Our first algorithm implements momentum updates on both primal and dual
variables achieving an $O(\varepsilon^{-4})$ sample complexity, which shows the
important role of momentum in obtaining a single-timescale algorithm. Our
second algorithm improves upon the first one by applying variance reduction on
top of momentum, which matches the best known $O(\varepsilon^{-3})$ sample
complexity in existing works. Furthermore, our variance-reduction algorithm
does not require a large-batch checkpoint. Moreover, our theoretical results
for both algorithms are expressed in a tighter form of simultaneous primal and
dual side convergence. | [
"cs.LG",
"math.OC",
"stat.ML"
] |
In this work, we investigate whether state-of-the-art object detection
systems have equitable predictive performance on pedestrians with different
skin tones. This work is motivated by many recent examples of ML and vision
systems displaying higher error rates for certain demographic groups than
others. We annotate an existing large scale dataset which contains pedestrians,
BDD100K, with Fitzpatrick skin tones in ranges [1-3] or [4-6]. We then provide
an in-depth comparative analysis of performance between these two skin tone
groupings, finding that neither time of day nor occlusion explain this
behavior, suggesting this disparity is not merely the result of pedestrians in
the 4-6 range appearing in more difficult scenes for detection. We investigate
to what extent time of day, occlusion, and reweighting the supervised loss
during training affect this predictive bias. | [
"cs.CV",
"cs.LG",
"stat.ML"
] |
We propose a new modeling approach that is a generalization of generative and
discriminative models. The core idea is to use an implicit parameterization of
a joint probability distribution by specifying only the conditional
distributions. The proposed scheme combines the advantages of both worlds -- it
can use powerful complex discriminative models as its parts, having at the same
time better generalization capabilities. We thoroughly evaluate the proposed
method for a simple classification task with artificial data and illustrate its
advantages for real-world scenarios on a semantic image segmentation problem. | [
"cs.LG"
] |
Graph-structured data are an integral part of many application domains,
including chemoinformatics, computational biology, neuroimaging, and social
network analysis. Over the last two decades, numerous graph kernels, i.e.
kernel functions between graphs, have been proposed to solve the problem of
assessing the similarity between graphs, thereby making it possible to perform
predictions in both classification and regression settings. This manuscript
provides a review of existing graph kernels, their applications, software and
data resources, and an empirical comparison of state-of-the-art graph kernels. | [
"cs.LG",
"stat.ML"
] |
Exploration remains a central challenge for reinforcement learning (RL).
Virtually all existing methods share the feature of a monolithic behaviour
policy that changes only gradually (at best). In contrast, the exploratory
behaviours of animals and humans exhibit a rich diversity, namely including
forms of switching between modes. This paper presents an initial study of
mode-switching, non-monolithic exploration for RL. We investigate different
modes to switch between, at what timescales it makes sense to switch, and what
signals make for good switching triggers. We also propose practical algorithmic
components that make the switching mechanism adaptive and robust, which enables
flexibility without an accompanying hyper-parameter-tuning burden. Finally, we
report a promising and detailed analysis on Atari, using two-mode exploration
and switching at sub-episodic time-scales. | [
"cs.LG",
"cs.AI",
"68T05",
"I.2.6"
] |
In this paper, we propose a refined scene text detector with a \textit{novel}
Feature Enhancement Network (FEN) for Region Proposal and Text Detection
Refinement. Retrospectively, both region proposal with \textit{only} $3\times
3$ sliding-window features and text detection refinement with \textit{single
scale} high-level features are insufficient, especially for smaller scene text.
Therefore, we design a new FEN network with \textit{task-specific},
\textit{low} and \textit{high} level semantic features fusion to improve the
performance of text detection. Besides, since \textit{unitary}
position-sensitive RoI pooling in general object detection is unreasonable for
variable text regions, an \textit{adaptively weighted} position-sensitive RoI
pooling layer is devised for further enhancing the detecting accuracy. To
tackle the \textit{sample-imbalance} problem during the refinement stage, we
also propose an effective \textit{positives mining} strategy for efficiently
training our network. Experiments on ICDAR 2011 and 2013 robust text detection
benchmarks demonstrate that our method can achieve state-of-the-art results,
outperforming all reported methods in terms of F-measure. | [
"cs.CV"
] |
In recent years, dock-less shared bikes have spread widely across many cities
in China, facilitating people's lives. However, at the same time, this
also raises many problems about dock-less shared bike management due to the
mismatching between demands and real distribution of bikes. Before deploying
dock-less shared bikes in a city, companies need to make a plan for dispatching
bikes from places having excessive bikes to locations with high demands for
providing better services. In this paper, we study the problem of inferring
fine-grained bike demands anywhere in a new city before the deployment of
bikes. This problem is challenging because a new city lacks training data and
bike demands vary across both place and time. To solve the problem, we provide
various methods to extract discriminative features from multi-source geographic
data, such as POI, road networks and nighttime light, for each place. We
utilize correlation Principal Component Analysis (coPCA) to deal with extracted
features of both old city and new city to realize distribution adaption. Then,
we adopt a discrete wavelet transform (DWT) based model to mine daily patterns
for each place from fine-grained bike demand. We propose an attention based
local CNN model, \textbf{ALCNN}, to infer the daily patterns with latent
features from coPCA with multiple CNNs for modeling the influence of
neighboring places. In addition, ALCNN merges latent features from multiple
CNNs and can select a suitable size for the influenced regions. Extensive
experiments on
real-life datasets show that the proposed approach outperforms competitive
methods. | [
"cs.LG",
"stat.ML"
] |
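The DWT step for mining daily patterns can be sketched in a few lines, assuming the PyWavelets package; the wavelet choice and decomposition level here are illustrative assumptions:

```python
# Hedged sketch: summarize a place's daily demand pattern with a discrete
# wavelet transform (multi-resolution coefficients as features).
import numpy as np
import pywt

def daily_pattern_features(series: np.ndarray, wavelet='db4', level=3):
    """series: 1D array of fine-grained bike demand for one place."""
    coeffs = pywt.wavedec(series, wavelet, level=level)
    return np.concatenate(coeffs)   # approximation + detail coefficients
```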
Neural Architecture Search (NAS) is a collection of methods to craft the way
neural networks are built. Current NAS methods are far from ab initio and
automatic, as they use manual backbone architectures or micro building blocks
(cells), which have had minor breakthroughs in performance compared to random
baselines. They also involve a significant manual expert effort in various
components of the NAS pipeline. This raises a natural question - Are the
current NAS methods still heavily dependent on manual effort in the search
space design and wiring, as was done when building models before the advent
of NAS? In this paper, instead of merely chasing slight improvements over
state-of-the-art (SOTA) performance, we revisit the fundamental approach to NAS
and propose a novel approach called ReNAS that can search for the complete
neural network without much human effort and is a step closer towards
AutoML-nirvana. Our method starts from a complete graph mapped to a neural
network and searches for the connections and operations by balancing the
exploration and exploitation of the search space. The results are on-par with
the SOTA performance with methods that leverage handcrafted blocks. We believe
that this approach may lead to newer NAS strategies for a variety of network
types. | [
"cs.LG"
] |
Learning node representations is a crucial task with a plethora of
interdisciplinary applications. Nevertheless, as the size of the networks
increases, most widely used models face computational challenges to scale to
large networks. While there is a recent effort towards designing algorithms
that solely deal with scalability issues, most of them behave poorly in terms
of accuracy on downstream tasks. In this paper, we aim at studying models that
balance the trade-off between efficiency and accuracy. In particular, we
propose ${\rm N{\small ode}S{\small ig}}$, a scalable embedding model that
computes binary node representations. ${\rm N{\small ode}S{\small ig}}$
exploits random walk diffusion probabilities via stable random projection
hashing, towards efficiently computing embeddings in the Hamming space. Our
extensive experimental evaluation on various graphs has demonstrated that the
proposed model achieves a good balance between accuracy and efficiency compared
to well-known baseline models on two downstream tasks. | [
"cs.LG",
"cs.SI",
"stat.ML"
] |
The objective of meta-learning is to exploit the knowledge obtained from
observed tasks to improve adaptation to unseen tasks. As such, meta-learners
are able to generalize better when they are trained with a larger number of
observed tasks and with a larger amount of data per task. Given the amount of
resources that are needed, it is generally difficult to expect the tasks, their
respective data, and the necessary computational capacity to be available at a
single central location. It is more natural to encounter situations where these
resources are spread across several agents connected by some graph topology.
The formalism of meta-learning is actually well-suited to this decentralized
setting, where the learner would be able to benefit from information and
computational power spread across the agents. Motivated by this observation, in
this work, we propose a cooperative fully-decentralized multi-agent
meta-learning algorithm, referred to as Diffusion-based MAML or Dif-MAML.
Decentralized optimization algorithms are superior to centralized
implementations in terms of scalability, avoidance of communication
bottlenecks, and privacy guarantees. The work provides a detailed theoretical
analysis to show that the proposed strategy allows a collection of agents to
attain agreement at a linear rate and to converge to a stationary point of the
aggregate MAML objective even in non-convex environments. Simulation results
illustrate the theoretical findings and the superior performance relative to
the traditional non-cooperative setting. | [
"cs.LG",
"cs.MA"
] |
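Diffusion strategies follow a combine-and-adapt pattern: each agent mixes its neighbors' parameters using combination weights and takes a local meta-gradient step. A schematic Python sketch where the MAML meta-gradient is abstracted as a callable; all names are illustrative, and whether adaptation precedes or follows combination is a detail that varies across diffusion variants:

```python
# Schematic sketch of one diffusion iteration over a graph of agents.
# params: dict agent -> parameter vector (np.ndarray); neighbors[a] includes
# a itself; combine_weights[(a, b)] are nonnegative and sum to 1 over b for
# each a; meta_grad(a, w) abstracts agent a's local MAML meta-gradient.
import numpy as np

def diffusion_step(params, neighbors, combine_weights, meta_grad, lr=0.01):
    mixed = {
        a: sum(combine_weights[a, b] * params[b] for b in neighbors[a])
        for a in params
    }
    return {a: mixed[a] - lr * meta_grad(a, mixed[a]) for a in params}
```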
3D point-clouds and 2D images are different visual representations of the
physical world. While human vision can understand both representations,
computer vision models designed for 2D image and 3D point-cloud understanding
are quite different. Our paper investigates the potential for transferability
between these two representations by empirically examining whether this
approach works, what factors affect the transfer performance, and how to make
it work even better. We discovered that we can indeed use the same neural net
model architectures to understand both images and point-clouds. Moreover, we
can transfer pretrained weights from image models to point-cloud models with
minimal effort. Specifically, based on a 2D ConvNet pretrained on an image
dataset, we can transfer the image model to a point-cloud model by
\textit{inflating} 2D convolutional filters to 3D then finetuning its input,
output, and optionally normalization layers. The transferred model can achieve
competitive performance on 3D point-cloud classification, indoor and driving
scene segmentation, even beating a wide range of point-cloud models that adopt
task-specific architectures and use a variety of tricks. | [
"cs.CV",
"cs.AI",
"cs.RO"
] |
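The inflation trick itself is mechanical: repeat each 2D kernel along the new depth axis and rescale. A minimal PyTorch sketch for a single convolution layer; padding and stride handling are simplified assumptions:

```python
# Hedged sketch: inflate a pretrained 2D convolution into a 3D convolution.
import torch
import torch.nn as nn

def inflate_conv2d(conv2d: nn.Conv2d, depth: int) -> nn.Conv3d:
    """Copy a 2D kernel into a 3D kernel by repeating it along depth."""
    conv3d = nn.Conv3d(
        conv2d.in_channels, conv2d.out_channels,
        kernel_size=(depth, *conv2d.kernel_size),
        stride=(1, *conv2d.stride),
        padding=(depth // 2, *conv2d.padding),
        bias=conv2d.bias is not None,
    )
    with torch.no_grad():
        w2d = conv2d.weight                       # (out, in, kH, kW)
        conv3d.weight.copy_(
            w2d.unsqueeze(2).repeat(1, 1, depth, 1, 1) / depth)
        if conv2d.bias is not None:
            conv3d.bias.copy_(conv2d.bias)
    return conv3d
```

Dividing by `depth` keeps activation magnitudes comparable to the 2D model when the input is constant along the depth axis, after which the inflated model is finetuned as the abstract describes.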
Blind face restoration usually relies on facial priors, such as facial
geometry prior or reference prior, to restore realistic and faithful details.
However, very low-quality inputs cannot offer accurate geometric prior while
high-quality references are inaccessible, limiting the applicability in
real-world scenarios. In this work, we propose GFP-GAN that leverages rich and
diverse priors encapsulated in a pretrained face GAN for blind face
restoration. This Generative Facial Prior (GFP) is incorporated into the face
restoration process via novel channel-split spatial feature transform layers,
which allow our method to achieve a good balance of realness and fidelity.
Thanks to the powerful generative facial prior and delicate designs, our
GFP-GAN could jointly restore facial details and enhance colors with just a
single forward pass, while GAN inversion methods require expensive
image-specific optimization at inference. Extensive experiments show that our
method achieves superior performance to prior art on both synthetic and
real-world datasets. | [
"cs.CV"
] |
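A channel-split spatial feature transform can be sketched as follows: half the channels pass through unchanged (preserving fidelity) while the other half is spatially modulated by scale and shift maps predicted from the GAN-prior features (adding realness). A hedged PyTorch sketch with illustrative layer sizes; the paper's exact layer design may differ:

```python
# Hedged sketch of a channel-split spatial feature transform (CS-SFT).
import torch
import torch.nn as nn

class ChannelSplitSFT(nn.Module):
    def __init__(self, channels, prior_channels):
        super().__init__()
        half = channels // 2            # assumes an even channel count
        self.scale = nn.Conv2d(prior_channels, half, 3, padding=1)
        self.shift = nn.Conv2d(prior_channels, half, 3, padding=1)

    def forward(self, feat, prior_feat):
        identity, modulated = feat.chunk(2, dim=1)   # split channels in half
        out = modulated * self.scale(prior_feat) + self.shift(prior_feat)
        return torch.cat([identity, out], dim=1)     # fidelity + realness
```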
We explore the effect of introducing prior information into the intermediate
level of neural networks for a learning task on which all the state-of-the-art
machine learning algorithms tested failed to learn. We motivate our work from
the hypothesis that humans learn such intermediate concepts from other
individuals via a form of supervision or guidance using a curriculum. The
experiments we have conducted provide positive evidence in favor of this
hypothesis. In our experiments, a two-tiered MLP architecture is trained on a
dataset with 64x64 binary inputs images, each image with three sprites. The
final task is to decide whether all the sprites are the same or one of them is
different. Sprites are pentomino tetris shapes and they are placed in an image
with different locations using scaling and rotation transformations. The first
part of the two-tiered MLP is pre-trained with intermediate-level targets being
the presence of sprites at each location, while the second part takes the
output of the first part as input and predicts the final task's target binary
event. The two-tiered MLP architecture, with a few tens of thousands of
examples, was able to learn the task perfectly, whereas all other algorithms
(including unsupervised pre-training, as well as traditional algorithms like
SVMs, decision trees, and boosting) perform no better than chance. We
hypothesize that the
optimization difficulty involved when the intermediate pre-training is not
performed is due to the {\em composition} of two highly non-linear tasks. Our
findings are also consistent with hypotheses on cultural learning inspired by
observations of optimization difficulties in deep learning, presumably
because of effective local minima. | [
"cs.LG",
"cs.CV",
"cs.NE",
"stat.ML"
] |
Early detection of preventable diseases is important for better disease
management, improved interventions, and more efficient health-care resource
allocation. Various machine learning approaches have been developed to utilize
information in Electronic Health Record (EHR) for this task. The majority of
previous attempts, however, focus on structured fields and lose the vast amount
of information in the unstructured notes. In this work we propose a general
multi-task framework for disease onset prediction that combines both free-text
medical notes and structured information. We compare performance of different
deep learning architectures including CNN, LSTM and hierarchical models. In
contrast to traditional text-based prediction models, our approach does not
require disease specific feature engineering, and can handle negations and
numerical values that exist in the text. Our results on a cohort of about 1
million patients show that models using text outperform models using just
structured data, and that models capable of using numerical values and
negations in the text, in addition to the raw text, further improve performance.
Additionally, we compare different visualization methods for medical
professionals to interpret model predictions. | [
"cs.LG",
"cs.CL",
"stat.ML"
] |
If George Box's aphorism "All models are wrong" continues to be true in
data analysis, particularly when analyzing real-world data, then we should
annotate this wisdom with visible and explainable data-driven patterns. Such
annotations can critically shed invaluable light on validity as well as
limitations of statistical modeling as a data analysis approach. In an effort
to avoid holding our real data to potentially unattainable or even unrealistic
theoretical structures, we propose to utilize the data analysis paradigm called
Categorical Exploratory Data Analysis (CEDA). We illustrate the merits of this
proposal with two real-world data sets from the perspective of goodness-of-fit.
In both data sets, the Normal distribution's bell shape seemingly fits rather
well at first glance. We apply CEDA to bring out where and how each data set fits
or deviates from the model shape via several important distributional aspects.
We also demonstrate that CEDA affords a version of tree-based p-value, and
compare it with p-values based on traditional statistical approaches. Throughout our
data analysis, we invest computational effort in graphic displays that
illuminate the advantages of using CEDA as a primary way of doing data analysis in
Data Science education. | [
"stat.ML",
"cs.LG",
"stat.CO",
"stat.ME"
] |
We introduce a novel principle for self-supervised feature learning based on
the discrimination of specific transformations of an image. We argue that the
generalization capability of learned features depends on what image
neighborhood size is sufficient to discriminate different image
transformations: the larger the required neighborhood size, the more global
the image statistics that the feature can describe. An accurate description of
global image statistics allows us to better represent the shape and configuration
of objects and their context, which ultimately generalizes better to new tasks
such as object classification and detection. This suggests a criterion to
choose and design image transformations. Based on this criterion, we introduce
a novel image transformation that we call limited context inpainting (LCI).
This transformation inpaints an image patch conditioned only on a small
rectangular pixel boundary (the limited context). Because of the limited
boundary information, the inpainter can learn to match local pixel statistics,
but is unlikely to match the global statistics of the image. We claim that the
same principle can be used to justify the performance of transformations such
as image rotations and warping. Indeed, we demonstrate experimentally that
learning to discriminate transformations such as LCI, image warping and
rotations, yields features with state-of-the-art generalization capabilities on
several datasets such as Pascal VOC, STL-10, CelebA, and ImageNet. Remarkably,
our trained features achieve a performance on Places on par with features
trained through supervised learning with ImageNet labels. | [
"cs.CV"
] |
Exponential families are widely used in machine learning; they include many
distributions in continuous and discrete domains (e.g., Gaussian, Dirichlet,
Poisson, and categorical distributions via the softmax transformation).
Distributions in each of these families have fixed support. In contrast, for
finite domains, there has been recent work on sparse alternatives to softmax
(e.g. sparsemax and alpha-entmax), which have varying support, being able to
assign zero probability to irrelevant categories. This paper expands that work
in two directions: first, we extend alpha-entmax to continuous domains,
revealing a link with Tsallis statistics and deformed exponential families.
Second, we introduce continuous-domain attention mechanisms, deriving efficient
gradient backpropagation algorithms for alpha in {1,2}. Experiments on
attention-based text classification, machine translation, and visual question
answering illustrate the use of continuous attention in 1D and 2D, showing that
it allows attending to time intervals and compact regions. | [
"cs.LG",
"cs.CL",
"cs.CV",
"stat.ML"
] |
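As background for the sparse alternatives to softmax named above, here is a small NumPy sketch of finite-domain sparsemax (alpha-entmax with alpha = 2), which projects a score vector onto the probability simplex and can return exact zeros; the paper's contribution extends this family to continuous domains.

```python
import numpy as np

def sparsemax(z: np.ndarray) -> np.ndarray:
    """Sparsemax: Euclidean projection of the score vector z onto the
    probability simplex. Unlike softmax, it can assign exactly zero
    probability to low-scoring (irrelevant) categories."""
    z_sorted = np.sort(z)[::-1]
    cumsum = np.cumsum(z_sorted)
    k = np.arange(1, len(z) + 1)
    # Largest k such that 1 + k * z_(k) > cumulative sum of the top-k scores.
    support = k[1 + k * z_sorted > cumsum]
    k_z = support[-1]
    tau = (cumsum[k_z - 1] - 1) / k_z
    return np.maximum(z - tau, 0.0)

print(sparsemax(np.array([2.0, 1.0, -1.0])))  # -> [1. 0. 0.]
```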
We present an approach to depth estimation that fuses information from a
stereo pair with sparse range measurements derived from a LIDAR sensor or a
range camera. The goal of this work is to exploit the complementary strengths
of the two sensor modalities, the accurate but sparse range measurements and
the ambiguous but dense stereo information. These two sources are effectively
and efficiently fused by combining ideas from anisotropic diffusion and
semi-global matching.
We evaluate our approach on the KITTI 2015 and Middlebury 2014 datasets,
using randomly sampled ground truth range measurements as our sparse depth
input. We achieve significant performance improvements with a small fraction of
range measurements on both datasets. We also provide qualitative results from
our platform using the PMDTec Monstar sensor. Our entire pipeline runs on an
NVIDIA TX-2 platform at 5Hz on 1280x1024 stereo images with 128 disparity
levels. | [
"cs.CV"
] |
Learning disentangled representations leads to interpretable models and
facilitates data generation with style transfer, which has been extensively
studied on static data such as images in an unsupervised learning framework.
However, only a few works have explored unsupervised disentangled sequential
representation learning due to challenges of generating sequential data. In
this paper, we propose recurrent Wasserstein Autoencoder (R-WAE), a new
framework for generative modeling of sequential data. R-WAE disentangles the
representation of an input sequence into static and dynamic factors (i.e.,
time-invariant and time-varying parts). Our theoretical analysis shows that
R-WAE minimizes an upper bound of a penalized form of the Wasserstein distance
between the model distribution and the sequential data distribution, and simultaneously
maximizes the mutual information between input data and the different disentangled
latent factors. This is superior to (recurrent) VAEs, which do
not explicitly enforce mutual information maximization between input data and
disentangled latent representations. When the number of actions in sequential
data is available as weak supervision information, R-WAE is extended to learn a
categorical latent representation of actions to improve its disentanglement.
Experiments on a variety of datasets show that our models outperform other
baselines with the same settings in terms of disentanglement and unconditional
video generation both quantitatively and qualitatively. | [
"cs.LG",
"cs.AI"
] |
Depth estimation from a single image represents a very exciting challenge in
computer vision. While other image-based depth sensing techniques leverage on
the geometry between different viewpoints (e.g., stereo or structure from
motion), the lack of these cues within a single image renders the monocular
depth estimation task ill-posed. For inference, state-of-the-art
encoder-decoder architectures for monocular depth estimation rely on effective
feature representations learned at training time. For unsupervised training of
these models, geometry has been effectively exploited through suitable image
warping losses computed from views acquired by a stereo rig or a moving camera.
In this paper, we take a further step forward, showing that learning semantic
information from images also enables effective improvements in monocular depth
estimation. In particular, by leveraging semantically labeled images
together with unsupervised signals gained by geometry through an image warping
loss, we propose a deep learning approach aimed at joint semantic segmentation
and depth estimation. Our overall learning framework is semi-supervised, as we
deploy groundtruth data only in the semantic domain. At training time, our
network learns a common feature representation for both tasks and a novel
cross-task loss function is proposed. The experimental findings show how
jointly tackling depth prediction and semantic segmentation improves
depth estimation accuracy. In particular, on the KITTI dataset our network
outperforms state-of-the-art methods for monocular depth estimation. | [
"cs.CV"
] |
Vehicle Re-Identification aims to find images of the same vehicle from various
views in the cross-camera scenario. The main challenges of this task are the
large intra-instance distance caused by different views and the subtle
inter-instance discrepancy caused by similar vehicles. In this paper, we
propose a parsing-based view-aware embedding network (PVEN) to achieve the
view-aware feature alignment and enhancement for vehicle ReID. First, we
introduce a parsing network to parse a vehicle into four different views, and
then align the features by mask average pooling. Such alignment provides a
fine-grained representation of the vehicle. Second, in order to enhance the
view-aware features, we design a common-visible attention to focus on the
common visible views, which not only shortens the distance among
intra-instances, but also enlarges the discrepancy of inter-instances. The PVEN
helps capture the stable discriminative information of vehicles under different
views. The experiments conducted on three datasets show that our model
outperforms state-of-the-art methods by a large margin. | [
"cs.CV"
] |
Convolutional Neural Networks (CNNs) have been widely applied in the realm of
computer vision. However, given the fact that CNN models are translation
invariant, they are not aware of the coordinate information of each pixel. Thus
the generalization ability of CNN will be limited since the coordinate
information is crucial for a model to learn affine transformations which
directly operate on the coordinates of each pixel. In this project, we propose
a simple approach to incorporate the coordinate information to the CNN model
through coordinate embedding. Our approach does not change the downstream model
architecture and can be easily applied to pre-trained models for tasks
like object detection. Our experiments on the German Traffic Sign Detection
Benchmark show that our approach not only significantly improves model
performance but also yields better robustness with respect to affine
transformations. | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
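The abstract does not spell out the form of the coordinate embedding; one minimal realization, in the spirit of the well-known CoordConv idea, appends normalized x/y coordinate channels to the feature map so downstream convolutions can condition on pixel position. A sketch (the [-1, 1] normalization is an assumption):

```python
import torch

def append_coordinate_channels(x: torch.Tensor) -> torch.Tensor:
    """Append normalized (x, y) coordinate channels to a (N, C, H, W)
    feature map, giving downstream convolutions explicit access to pixel
    positions. Normalizing to [-1, 1] is a common choice assumed here."""
    n, _, h, w = x.shape
    ys = torch.linspace(-1.0, 1.0, h, device=x.device)
    xs = torch.linspace(-1.0, 1.0, w, device=x.device)
    grid_y, grid_x = torch.meshgrid(ys, xs, indexing="ij")
    coords = torch.stack([grid_x, grid_y]).unsqueeze(0).expand(n, -1, -1, -1)
    return torch.cat([x, coords], dim=1)  # (N, C + 2, H, W)
```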
Policy gradient and actor-critic algorithms form the basis of many commonly
used training techniques in deep reinforcement learning. Using these algorithms
in multiagent environments poses problems such as nonstationarity and
instability. In this paper, we first demonstrate that standard softmax-based
policy gradient can be prone to poor performance in the presence of even the
most benign nonstationarity. By contrast, it is known that the replicator
dynamics, a well-studied model from evolutionary game theory, eliminates
dominated strategies and exhibits convergence of the time-averaged trajectories
to interior Nash equilibria in zero-sum games. Thus, using the replicator
dynamics as a foundation, we derive an elegant one-line change to policy
gradient methods that simply bypasses the gradient step through the softmax,
yielding a new algorithm titled Neural Replicator Dynamics (NeuRD). NeuRD
reduces to the exponential weights/Hedge algorithm in the single-state
all-actions case. Additionally, NeuRD has formal equivalence to softmax
counterfactual regret minimization, which guarantees convergence in the
sequential tabular case. Importantly, our algorithm provides a straightforward
way of extending the replicator dynamics to the function approximation setting.
Empirical results show that NeuRD quickly adapts to nonstationarities,
outperforming policy gradient significantly in both tabular and function
approximation settings, when evaluated on the standard imperfect information
benchmarks of Kuhn Poker, Leduc Poker, and Goofspiel. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
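The "one-line change" is easiest to see in the tabular all-actions case. In softmax policy gradient, each logit's update carries a pi(a) factor from the softmax Jacobian; as we read the abstract, NeuRD drops that factor and moves logits by the advantage directly, so updates for low-probability actions do not vanish. A tiny NumPy sketch of the two updates (a hedged illustration, not the paper's code):

```python
import numpy as np

def softmax(y):
    e = np.exp(y - y.max())
    return e / e.sum()

def pg_update(y, q, lr=0.1):
    """Tabular all-actions softmax policy gradient: each logit moves by
    pi(a) * advantage(a), i.e. the gradient passes through the softmax."""
    pi = softmax(y)
    adv = q - pi @ q
    return y + lr * pi * adv

def neurd_update(y, q, lr=0.1):
    """Replicator-style update: the same rule with the softmax Jacobian
    factor pi(a) removed -- logits move by the advantage itself."""
    pi = softmax(y)
    adv = q - pi @ q
    return y + lr * adv

y = np.zeros(3)
q = np.array([1.0, 0.0, -1.0])
print(pg_update(y, q), neurd_update(y, q))
```

Because dominated actions keep receiving full-magnitude logit updates under the replicator-style rule, the policy can keep adapting when the environment or opponent shifts, which is consistent with the adaptivity the abstract reports.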
Image segmentation of touching objects plays a key role in providing accurate
classification for computer vision technologies. A new line-profile-based
image segmentation algorithm has been developed to provide a robust and
accurate segmentation of a group of touching corn kernels. The performance of the
line-profile-based algorithm has been compared to a watershed-based image
segmentation algorithm. Both algorithms are tested on three different patterns
of images, which are isolated corns, single-lines, and random distributed
formations. The experimental results show that the algorithm can segment a
large number of touching corn kernels efficiently and accurately. | [
"cs.CV"
] |
Generating a novel and optimized molecule with desired chemical properties is
an essential part of the drug discovery process. Failure to meet one of the
required properties can frequently lead to failure in a clinical test which is
costly. In addition, optimizing these multiple properties is a challenging task
because the optimization of one property is prone to changing other properties.
In this paper, we pose this multi-property optimization problem as a sequence
translation process and propose a new optimized molecule generator model based
on the Transformer with two constraint networks: property prediction and
similarity prediction. We further improve the model by incorporating score
predictions from these constraint networks in a modified beam search algorithm.
The experiments demonstrate that our proposed model outperforms
state-of-the-art models by a significant margin for optimizing multiple
properties simultaneously. | [
"cs.LG",
"cs.AI"
] |
In a real world environment, person re-identification (Re-ID) is a
challenging task due to variations in lighting conditions, viewing angles, pose
and occlusions. Despite recent performance gains, current person Re-ID
algorithms still suffer heavily when encountering these variations. To address
this problem, we propose a semantic consistency and identity mapping
multi-component generative adversarial network (SC-IMGAN) which provides style
adaptation from one to many domains. To ensure that transformed images are as
realistic as possible, we propose novel identity mapping and semantic
consistency losses to maintain identity across the diverse domains. For the
Re-ID task, we propose a joint verification-identification quartet network
which is trained with generated and real images, followed by an effective
quartet loss for verification. Our proposed method outperforms state-of-the-art
techniques on six challenging person Re-ID datasets: CUHK01, CUHK03, VIPeR,
PRID2011, iLIDS and Market-1501. | [
"cs.CV",
"cs.AI",
"cs.LG"
] |
Most existing text-to-image generation methods adopt a multi-stage modular
architecture which has three significant problems: 1) Training multiple
networks increases the run time and affects the convergence and stability of
the generative model; 2) These approaches ignore the quality of early-stage
generator images; 3) Many discriminators need to be trained. To this end, we
propose the Dual Attention Generative Adversarial Network (DTGAN) which can
synthesize high-quality and semantically consistent images only employing a
single generator/discriminator pair. The proposed model introduces
channel-aware and pixel-aware attention modules that can guide the generator to
focus on text-relevant channels and pixels based on the global sentence vector
and to fine-tune original feature maps using attention weights. Also,
Conditional Adaptive Instance-Layer Normalization (CAdaILN) is presented to
help our attention modules flexibly control the amount of change in shape and
texture by the input natural-language description. Furthermore, a new type of
visual loss is utilized to enhance the image resolution by ensuring vivid shape
and perceptually uniform color distributions of generated images. Experimental
results on benchmark datasets demonstrate the superiority of our proposed
method compared to the state-of-the-art models with a multi-stage framework.
Visualization of the attention maps shows that the channel-aware attention
module is able to localize the discriminative regions, while the pixel-aware
attention module has the ability to capture the globally visual contents for
the generation of an image. | [
"cs.CV"
] |
There has been tremendous research progress in estimating the depth of a
scene from a monocular camera image. Existing methods for single-image depth
prediction are exclusively based on deep neural networks, and their training
can be unsupervised using stereo image pairs, supervised using LiDAR point
clouds, or semi-supervised using both stereo and LiDAR. In general,
semi-supervised training is preferred as it does not suffer from the weaknesses
of either supervised training, resulting from the difference between the cameras'
and the LiDAR's fields of view, or unsupervised training, resulting from the poor
depth accuracy that can be recovered from a stereo pair. In this paper, we
present our research in single image depth prediction using semi-supervised
training that outperforms the state-of-the-art. We achieve this through a loss
function that explicitly exploits left-right consistency in a stereo
reconstruction, which has not been adopted in previous semi-supervised
training. In addition, we describe the correct use of ground truth depth
derived from LiDAR that can significantly reduce prediction error. The
performance of our depth prediction model is evaluated on popular datasets, and
the importance of each aspect of our semi-supervised training approach is
demonstrated through experimental results. Our deep neural network model has
been made publicly available. | [
"cs.CV",
"cs.AI",
"cs.RO"
] |
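The left-right consistency loss is not written out in this abstract; a standard formulation (in the spirit of Godard et al.'s monocular work, assumed here purely for illustration) warps the right disparity map to the left view using the left disparity and penalizes the residual:

```python
import torch
import torch.nn.functional as F

def lr_consistency_loss(disp_l: torch.Tensor, disp_r: torch.Tensor) -> torch.Tensor:
    """Left-right disparity consistency, a common formulation assumed here:
    sample the right disparity at the locations the left disparity points
    to, and penalize |d_L(x, y) - d_R(x - d_L(x, y), y)|.

    disp_l, disp_r: (N, 1, H, W) disparities normalized by image width.
    """
    n, _, h, w = disp_l.shape
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h, device=disp_l.device),
        torch.linspace(-1, 1, w, device=disp_l.device),
        indexing="ij",
    )
    base = torch.stack([xs, ys], dim=-1).expand(n, -1, -1, -1)  # (N, H, W, 2)
    # Shift the horizontal sampling coordinate left by 2 * disparity
    # (factor 2 because grid_sample coordinates span [-1, 1]).
    shift = torch.zeros_like(base)
    shift[..., 0] = 2.0 * disp_l.squeeze(1)
    warped_r = F.grid_sample(disp_r, base - shift, align_corners=True)
    return (disp_l - warped_r).abs().mean()
```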
At present, adversarial attacks are designed in a task-specific fashion.
However, for downstream computer vision tasks such as image captioning, image
segmentation etc., the current deep learning systems use an image classifier
like VGG16, ResNet50, Inception-v3 etc. as a feature extractor. Keeping this in
mind, we propose Mimic and Fool, a task agnostic adversarial attack. Given a
feature extractor, the proposed attack finds an adversarial image which can
mimic the image feature of the original image. This ensures that the two images
give the same (or similar) output regardless of the task. We randomly select
1000 MSCOCO validation images for experimentation. We perform experiments on
two image captioning models, Show and Tell, Show Attend and Tell and one VQA
model, namely, end-to-end neural module network (N2NMN). The proposed attack
achieves success rates of 74.0%, 81.0% and 87.1% for Show and Tell, Show Attend
and Tell, and N2NMN, respectively. We also propose a slight modification to our
attack to generate natural-looking adversarial images. In addition, we
show the applicability of the proposed attack to invertible architectures.
Since Mimic and Fool only requires information about the feature extractor of
the model, it can be considered as a gray-box attack. | [
"cs.CV"
] |
This paper addresses the task of estimating the 6 degrees of freedom pose of
a known 3D object from depth information represented by a point cloud. Deep
features learned by convolutional neural networks from color information have
been the dominant features to be used for inferring object poses, while depth
information receives much less attention. However, depth information contains
rich geometric information of the object shape, which is important for
inferring the object pose. We use depth information represented by point clouds
as the input to both deep networks and geometry-based pose refinement and use
separate networks for rotation and translation regression. We argue that the
axis-angle representation is a suitable rotation representation for deep
learning, and use a geodesic loss function for rotation regression. Ablation
studies show that these design choices outperform alternatives such as the
quaternion representation and L2 loss, or regressing translation and rotation
with the same network. Our simple yet effective approach clearly outperforms
state-of-the-art methods on the YCB-video dataset. The implementation and
trained model are available at: https://github.com/GeeeG/CloudPose. | [
"cs.CV"
] |
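To make the rotation design choices above concrete, here is a small PyTorch sketch of the axis-angle representation (via Rodrigues' formula) together with a geodesic loss, i.e. the angle of the relative rotation recovered from its trace; the paper's exact parameterization and loss details may differ.

```python
import torch

def axis_angle_to_matrix(v: torch.Tensor) -> torch.Tensor:
    """Rodrigues' formula: map axis-angle vectors v (B, 3), whose norm is
    the rotation angle, to rotation matrices (B, 3, 3)."""
    theta = v.norm(dim=1, keepdim=True).clamp(min=1e-8)
    axis = v / theta
    x, y, z = axis.unbind(dim=1)
    zero = torch.zeros_like(x)
    # Skew-symmetric cross-product matrices, one per batch element.
    K = torch.stack([zero, -z, y, z, zero, -x, -y, x, zero], dim=1).view(-1, 3, 3)
    eye = torch.eye(3, device=v.device).expand(len(v), 3, 3)
    s = torch.sin(theta).view(-1, 1, 1)
    c = torch.cos(theta).view(-1, 1, 1)
    return eye + s * K + (1 - c) * (K @ K)

def geodesic_loss(R_pred: torch.Tensor, R_gt: torch.Tensor) -> torch.Tensor:
    """Geodesic distance on SO(3): the angle of the relative rotation
    R_pred^T R_gt, recovered from its trace (clamped for stability)."""
    rel = R_pred.transpose(1, 2) @ R_gt
    trace = rel.diagonal(dim1=1, dim2=2).sum(dim=1)
    cos = ((trace - 1) / 2).clamp(-1 + 1e-7, 1 - 1e-7)
    return torch.acos(cos).mean()
```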
High-efficiency point cloud 3D object detection operated on embedded systems
is important for many robotics applications including autonomous driving. Most
previous works try to solve it using anchor-based detection methods which come
with two drawbacks: post-processing is relatively complex and computationally
expensive; tuning anchor parameters is tricky. We are the first to address
these drawbacks with an anchor-free and Non-Maximum Suppression-free one-stage
detector called AFDet. The entire AFDet can be processed efficiently on a CNN
accelerator or a GPU with the simplified post-processing. Without bells and
whistles, our proposed AFDet performs competitively with other one-stage
anchor-based methods on KITTI validation set and Waymo Open Dataset validation
set. | [
"cs.CV"
] |
Knowledge about the hidden factors that determine particular system dynamics
is crucial for both explaining them and pursuing goal-directed interventions.
Inferring these factors from time series data without supervision remains an
open challenge. Here, we focus on spatiotemporal processes, including wave
propagation and weather dynamics, for which we assume that universal causes
(e.g. physics) apply throughout space and time. A recently introduced
DIstributed SpatioTemporal graph Artificial Neural network Architecture
(DISTANA) is used and enhanced to learn such processes, requiring fewer
parameters and achieving significantly more accurate predictions compared to
temporal convolutional neural networks and other related approaches. We show
that DISTANA, when combined with a retrospective latent state inference
principle called active tuning, can reliably derive location-respective hidden
causal factors. In a current weather prediction benchmark, DISTANA infers our
planet's land-sea mask solely by observing temperature dynamics and, meanwhile,
uses the self-inferred information to improve its own future temperature
predictions. | [
"cs.LG",
"stat.ML"
] |
Sequence prediction models can be learned from example sequences with a
variety of training algorithms. Maximum likelihood learning is simple and
efficient, yet can suffer from compounding error at test time. Reinforcement
learning such as policy gradient addresses the issue but can have prohibitively
poor exploration efficiency. A rich set of other algorithms such as RAML, SPG,
and data noising, have also been developed from different perspectives. This
paper establishes a formal connection between these algorithms. We present a
generalized entropy regularized policy optimization formulation, and show that
the apparently distinct algorithms can all be reformulated as special instances
of the framework, with the only difference being the configurations of a reward
function and a couple of hyperparameters. The unified interpretation offers a
systematic view of the varying properties of exploration and learning
efficiency. Besides, inspired from the framework, we present a new algorithm
that dynamically interpolates among the family of algorithms for scheduled
sequence model learning. Experiments on machine translation, text
summarization, and game imitation learning demonstrate the superiority of the
proposed algorithm. | [
"cs.LG",
"cs.AI",
"cs.CL",
"stat.ML"
] |
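For readers who want the shape of such a formulation, a generic entropy- and KL-regularized objective over an auxiliary distribution $q$ reads as follows; this is our hedged reconstruction of the family the abstract describes, not necessarily the paper's exact notation:

$$\mathcal{L}(q, \theta) \;=\; \mathbb{E}_{q(y)}\big[R(y)\big] \;-\; \alpha\,\mathrm{KL}\big(q(y)\,\|\,p_\theta(y)\big) \;+\; \beta\,\mathcal{H}(q)$$

Per the abstract, different configurations of the reward $R$ and a couple of hyperparameters (here $\alpha$ and $\beta$) would then recover maximum likelihood, RAML, SPG, data noising, and policy gradient as special cases.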
Detection of small objects in large swaths of imagery is one of the primary
problems in satellite imagery analytics. While object detection in ground-based
imagery has benefited from research into new deep learning approaches,
transitioning such technology to overhead imagery is nontrivial. Among the
challenges is the sheer number of pixels and geographic extent per image: a
single DigitalGlobe satellite image encompasses >64 km2 and over 250 million
pixels. Another challenge is that objects of interest are minuscule (often only
~10 pixels in extent), which complicates traditional computer vision
techniques. To address these issues, we propose a pipeline (You Only Look
Twice, or YOLT) that evaluates satellite images of arbitrary size at a rate of
>0.5 km2/s. The proposed approach can rapidly detect objects of vastly
different scales with relatively little training data over multiple sensors. We
evaluate large test images at native resolution, and yield scores of F1 > 0.8
for vehicle localization. We further explore resolution and object size
requirements by systematically testing the pipeline at decreasing resolution,
and conclude that objects only ~5 pixels in size can still be localized with
high confidence. Code is available at https://github.com/CosmiQ/yolt. | [
"cs.CV"
] |
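Running detection on images of arbitrary size at native resolution generally relies on overlapping tiling and mapping detections back to global coordinates; the sketch below shows such a windowing step. The window size and overlap fraction are illustrative assumptions, not values from the paper.

```python
def sliding_windows(width: int, height: int, window: int = 416, overlap: float = 0.15):
    """Yield (x, y) tile origins covering a large image with overlapping
    windows, so objects cut by one tile boundary appear whole in a
    neighboring tile; detections are mapped back by adding the offsets."""
    stride = max(1, int(window * (1 - overlap)))
    xs = list(range(0, max(width - window, 0) + 1, stride))
    ys = list(range(0, max(height - window, 0) + 1, stride))
    # Ensure the right and bottom borders are always covered.
    if xs[-1] != max(width - window, 0):
        xs.append(max(width - window, 0))
    if ys[-1] != max(height - window, 0):
        ys.append(max(height - window, 0))
    for y in ys:
        for x in xs:
            yield x, y

tiles = list(sliding_windows(2000, 1500))  # crop each tile, detect, offset boxes
```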
In many multi-agent spatiotemporal systems, the agents are under the
influence of shared, unobserved variables (e.g., the play a team is executing
in a game of basketball). As a result, the trajectories of the agents are often
statistically dependent at any given time step; however, almost universally,
multi-agent models implicitly assume the agents' trajectories are statistically
independent at each time step. In this paper, we introduce baller2vec++, a
multi-entity Transformer that can effectively model coordinated agents.
Specifically, baller2vec++ applies a specially designed self-attention mask to
a mixture of location and "look-ahead" trajectory sequences to learn the
distributions of statistically dependent agent trajectories. We show that,
unlike baller2vec (baller2vec++'s predecessor), baller2vec++ can learn to
emulate the behavior of perfectly coordinated agents in a simulated toy
dataset. Additionally, when modeling the trajectories of professional
basketball players, baller2vec++ outperforms baller2vec by a wide margin. | [
"cs.LG",
"cs.MA"
] |
Partial domain adaptation aims to transfer knowledge from a label-rich source
domain to a label-scarce target domain which relaxes the fully shared label
space assumption across different domains. In this more general and practical
scenario, a major challenge is how to select source instances in the shared
classes across different domains for positive transfer. To address this issue,
we propose a Domain Adversarial Reinforcement Learning (DARL) framework to
automatically select source instances in the shared classes for circumventing
negative transfer as well as to simultaneously learn transferable features
between domains by reducing the domain shift. Specifically, in this framework,
we employ deep Q-learning to learn policies for an agent to make selection
decisions by approximating the action-value function. Moreover, domain
adversarial learning is introduced to learn domain-invariant features for the
selected source instances by the agent and the target instances, and also to
determine rewards for the agent based on how relevant the selected source
instances are to the target domain. Experiments on several benchmark datasets
demonstrate the superior performance of our DARL method over existing
state-of-the-art methods for partial domain adaptation. | [
"cs.LG",
"cs.CV",
"stat.ML"
] |
Deep convolutional neural networks demonstrate impressive results in the
super-resolution domain. A series of studies concentrate on improving peak
signal-to-noise ratio (PSNR) by using much deeper layers, which are not friendly
to constrained resources. Pursuing a trade-off between the restoration capacity
and the simplicity of models is still non-trivial. Recent contributions are
struggling to manually maximize this balance, while our work achieves the same
goal automatically with neural architecture search. Specifically, we handle
super-resolution with a multi-objective approach. We also propose an elastic
search tactic at both micro and macro level, based on a hybrid controller that
profits from evolutionary computation and reinforcement learning. Quantitative
experiments help us to draw a conclusion that our generated models dominate
most of the state-of-the-art methods with respect to the individual FLOPS. | [
"cs.CV",
"cs.LG"
] |
Graph data widely exist in many high-impact applications. Inspired by the
success of deep learning in grid-structured data, graph neural network models
have been proposed to learn powerful node-level or graph-level representation.
However, most of the existing graph neural networks suffer from the following
limitations: (1) there is limited analysis regarding the graph convolution
properties, such as seed-oriented, degree-aware and order-free; (2) the node's
degree-specific graph structure is not explicitly expressed in graph
convolution for distinguishing structure-aware node neighborhoods; (3) the
theoretical explanation regarding the graph-level pooling schemes is unclear.
To address these problems, we propose a generic degree-specific graph neural
network named DEMO-Net motivated by Weisfeiler-Lehman graph isomorphism test
that recursively identifies 1-hop neighborhood structures. In order to
explicitly capture the graph topology integrated with node attributes, we argue
that graph convolution should have three properties: seed-oriented,
degree-aware, order-free. To this end, we propose multi-task graph convolution
where each task represents node representation learning for nodes with a
specific degree value, thus leading to preserving the degree-specific graph
structure. In particular, we design two multi-task learning methods:
degree-specific weight and hashing functions for graph convolution. In
addition, we propose a novel graph-level pooling/readout scheme for learning
graph representation provably lying in a degree-specific Hilbert kernel space.
The experimental results on several node and graph classification benchmark
data sets demonstrate the effectiveness and efficiency of our proposed DEMO-Net
over state-of-the-art graph neural network models. | [
"cs.LG",
"stat.ML"
] |
Generative modeling of set-structured data, such as point clouds, requires
reasoning over local and global structures at various scales. However, adopting
multi-scale frameworks for ordinary sequential data to a set-structured data is
nontrivial as it should be invariant to the permutation of its elements. In
this paper, we propose SetVAE, a hierarchical variational autoencoder for sets.
Motivated by recent progress in set encoding, we build SetVAE upon attentive
modules that first partition the set and project the partition back to the
original cardinality. Exploiting this module, our hierarchical VAE learns
latent variables at multiple scales, capturing coarse-to-fine dependency of the
set elements while achieving permutation invariance. We evaluate our model on
point cloud generation task and achieve competitive performance to the prior
arts with substantially smaller model capacity. We qualitatively demonstrate
that our model generalizes to unseen set sizes and learns interesting subset
relations without supervision. Our implementation is available at
https://github.com/jw9730/setvae. | [
"cs.LG",
"cs.CV"
] |
Reservoir computers are powerful tools for chaotic time series prediction.
They can be trained to approximate phase space flows and can thus both predict
future values to a high accuracy, as well as reconstruct the general properties
of a chaotic attractor without requiring a model. In this work, we show that
the ability to learn the dynamics of a complex system can be extended to
systems with co-existing attractors, here a 4-dimensional extension of the
well-known Lorenz chaotic system. We demonstrate that a reservoir computer can
infer entirely unexplored parts of the phase space: a properly trained
reservoir computer can predict the existence of attractors that were never
approached during training and therefore are labelled as unseen. We provide
examples where attractor inference is achieved after training solely on a
single noisy trajectory. | [
"cs.LG",
"nlin.AO"
] |
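For readers unfamiliar with reservoir computing, a minimal echo state network makes the setup concrete: a fixed random recurrent reservoir is driven by the input sequence, and only a linear readout is trained, here by ridge regression. Sizes, scalings, and the spectral-radius convention are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_reservoir(n_in: int, n_res: int = 500, spectral_radius: float = 0.9,
                   input_scale: float = 0.5):
    """Fixed random input and recurrent weights; W is rescaled so its
    spectral radius is below 1 (a common echo-state condition)."""
    W_in = input_scale * rng.uniform(-1, 1, (n_res, n_in))
    W = rng.uniform(-0.5, 0.5, (n_res, n_res))
    W *= spectral_radius / max(abs(np.linalg.eigvals(W)))
    return W_in, W

def run_reservoir(W_in, W, inputs):
    """Drive the fixed reservoir with an input sequence; collect states."""
    states = np.zeros((len(inputs), W.shape[0]))
    x = np.zeros(W.shape[0])
    for t, u in enumerate(inputs):
        x = np.tanh(W_in @ u + W @ x)
        states[t] = x
    return states

def train_readout(states, targets, ridge=1e-6):
    """Only the linear readout is trained, via ridge regression.
    Predictions are then `states @ W_out`."""
    A = states.T @ states + ridge * np.eye(states.shape[1])
    return np.linalg.solve(A, states.T @ targets)
```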
Nonlinear independent component analysis (ICA) is a general framework for
unsupervised representation learning, and aimed at recovering the latent
variables in data. Recent practical methods perform nonlinear ICA by solving a
series of classification problems based on logistic regression. However, it is
well-known that logistic regression is vulnerable to outliers, and thus the
performance can be strongly weakened by outliers. In this paper, we first
theoretically analyze nonlinear ICA models in the presence of outliers. Our
analysis implies that estimation in nonlinear ICA can be seriously hampered
when outliers exist on the tails of the (noncontaminated) target density, which
happens in a typical case of contamination by outliers. We develop two robust
nonlinear ICA methods based on the {\gamma}-divergence, which is a robust
alternative to the KL-divergence in logistic regression. The proposed methods
are shown to have desired robustness properties in the context of nonlinear
ICA. We also experimentally demonstrate that the proposed methods are very
robust and outperform existing methods in the presence of outliers. Finally,
the proposed method is applied to ICA-based causal discovery and shown to find
a plausible causal relationship on fMRI data. | [
"cs.LG",
"stat.ML"
] |
Implementing systems based on Machine Learning to detect fraud and other
Non-Technical Losses (NTL) is challenging: the data available is biased, and
the algorithms currently used are black-boxes that cannot be either easily
trusted or understood by stakeholders. This work explains our human-in-the-loop
approach to mitigate these problems in a real system that uses a supervised
model to detect Non-Technical Losses (NTL) for an international utility company
from Spain. This approach exploits human knowledge (e.g. from the data
scientists or the company's stakeholders) and the information provided by
explanatory methods to guide the system during the training process. This
simple, efficient method that can be easily implemented in other industrial
projects is tested in a real dataset and the results show that the derived
prediction model is better in terms of accuracy, interpretability, robustness
and flexibility. | [
"cs.LG",
"stat.ML"
] |
Can one learn to diagnose COVID-19 under extreme minimal supervision? Since
the outbreak of the novel COVID-19 there has been a rush for developing
Artificial Intelligence techniques for expert-level disease identification on
Chest X-ray data. In particular, the use of deep supervised learning has become
the go-to paradigm. However, the performance of such models is heavily
dependent on the availability of a large and representative labelled dataset.
Creating such a dataset is an expensive and time-consuming task, and it
especially imposes a great challenge for a novel disease. Semi-supervised
learning has shown the ability to match the incredible performance of
supervised models whilst requiring a small fraction of the labelled examples.
This makes the semi-supervised paradigm an attractive option for identifying
COVID-19. In this work, we introduce a graph based deep semi-supervised
framework for classifying COVID-19 from chest X-rays. Our framework introduces
an optimisation model for graph diffusion that reinforces the natural relation
among the tiny labelled set and the vast unlabelled data. We then connect the
diffusion prediction output as pseudo-labels that are used in an iterative
scheme in a deep net. We demonstrate, through our experiments, that our model
is able to outperform the current leading supervised model with a tiny fraction
of the labelled examples. Finally, we provide attention maps to accommodate the
radiologist's mental model, better fitting their perceptual and cognitive
abilities. These visualisations aim to assist the radiologist in judging
whether the diagnosis is correct or not and, in consequence, to accelerate the
decision. | [
"cs.LG",
"cs.CV",
"stat.ML"
] |
Graph-based causal discovery methods aim to capture conditional
independencies consistent with the observed data and differentiate causal
relationships from indirect or induced ones. Successful construction of
graphical models of data depends on the assumption of causal sufficiency: that
is, that all confounding variables are measured. When this assumption is not
met, learned graphical structures may become arbitrarily incorrect and effects
implied by such models may be wrongly attributed, carry the wrong magnitude, or
mis-represent direction of correlation. Wide application of graphical models to
increasingly less curated "big data" draws renewed attention to the unobserved
confounder problem.
We present a novel method that aims to control for the latent space when
estimating a DAG by iteratively deriving proxies for the latent space from the
residuals of the inferred model. Under mild assumptions, our method improves
structural inference of Gaussian graphical models and enhances identifiability
of the causal effect. In addition, when the model is being used to predict
outcomes, it un-confounds the coefficients on the parents of the outcomes and
leads to improved predictive performance when the out-of-sample regime is very
different from the training data. We show that any improvement of prediction of
an outcome is intrinsically capped and cannot rise beyond a certain limit as
compared to the confounded model. We extend our methodology beyond GGMs to
ordinal variables and nonlinear cases. Our R package provides both PCA and
autoencoder implementations of the methodology, suitable for GGMs with some
guarantees and for better performance in general cases but without such
guarantees. | [
"stat.ML",
"cs.LG",
"q-bio.QM"
] |
Adversarial training has been the topic of dozens of studies and a leading
method for defending against adversarial attacks. Yet, it remains largely
unknown (a) how adversarially-robust ImageNet classifiers (R classifiers)
generalize to out-of-distribution examples; and (b) how their generalization
capability relates to their hidden representations. In this paper, we perform a
thorough, systematic study to answer these two questions across AlexNet,
GoogLeNet, and ResNet-50 architectures. We found that while standard ImageNet
classifiers have a strong texture bias, their R counterparts rely heavily on
shapes. Remarkably, adversarial training induces three simplicity biases into
hidden neurons in the process of 'robustifying' the network. That is, each
convolutional neuron in R networks often changes to detecting (1) pixel-wise
smoother patterns i.e. a mechanism that blocks high-frequency noise from
passing through the network; (2) more lower-level features i.e. textures and
colors (instead of objects); and (3) fewer types of inputs. Our findings reveal
the interesting mechanisms that made networks more adversarially robust and
also explain some recent findings e.g. why R networks benefit from much larger
capacity (Xie and Yuille, 2020) and can act as a strong image prior in image
synthesis (Santurkar et al., 2019). | [
"cs.CV",
"cs.LG"
] |
We propose a novel and principled hybrid CNN+CRF model for stereo estimation.
Our model allows us to exploit the advantages of both convolutional neural
networks (CNNs) and conditional random fields (CRFs) in a unified approach.
The CNNs compute expressive features for matching and distinctive color edges,
which in turn are used to compute the unary and binary costs of the CRF. For
inference, we apply a recently proposed highly parallel dual block descent
algorithm which only needs a small fixed number of iterations to compute a
high-quality approximate minimizer. As the main contribution of the paper, we
propose a theoretically sound method based on the structured output support
vector machine (SSVM) to train the hybrid CNN+CRF model on large-scale data
end-to-end. Our trained models perform very well despite the fact that we are
using shallow CNNs and do not apply any kind of post-processing to the final
output of the CRF. We evaluate our combined models on challenging stereo
benchmarks such as Middlebury 2014 and Kitti 2015 and also investigate the
performance of each individual component. | [
"cs.CV"
] |
Prognostics and Health Management (PHM) is an emerging engineering discipline
which is concerned with the analysis and prediction of equipment health and
performance. One of the key challenges in PHM is to accurately predict
impending failures in the equipment. In recent years, solutions for failure
prediction have evolved from building complex physical models to the use of
machine learning algorithms that leverage the data generated by the equipment.
However, failure prediction problems pose a set of unique challenges that make
direct application of traditional classification and prediction algorithms
impractical. These challenges include the highly imbalanced training data, the
extremely high cost of collecting more failure samples, and the complexity of
the failure patterns. Traditional oversampling techniques will not be able to
capture such complexity and accordingly result in overfitting the training
data. This paper addresses these challenges by proposing a novel algorithm for
failure prediction using Generative Adversarial Networks (GAN-FP). GAN-FP first
utilizes two GAN networks to simultaneously generate training samples and build
an inference network that can be used to predict failures for new samples.
GAN-FP first adopts an infoGAN to generate realistic failure and non-failure
samples, and initializes the weights of the first few layers of the inference
network. The inference network is then tuned by optimizing a weighted loss
objective using only real failure and non-failure samples. The inference
network is further tuned using a second GAN whose purpose is to guarantee the
consistency between the generated samples and corresponding labels. GAN-FP can
be used for other imbalanced classification problems as well. | [
"cs.LG",
"cs.CV",
"stat.ML"
] |
Within the past decade, the rise of applications based on artificial
intelligence (AI) in general and machine learning (ML) in specific has led to
many significant contributions within different domains. The applications range
from robotics over medical diagnoses up to autonomous driving. However, nearly
all applications rely on training data. If this data consists of 3D images,
it is of utmost importance that the labeling is as accurate as possible to
ensure high-quality outcomes of the ML models. Labeling in the 3D space is
mostly manual work performed by expert workers, where they draw 3D bounding
boxes around target objects the ML model should later automatically identify,
e.g., pedestrians for autonomous driving or cancer cells within radiography.
While a small range of recent 3D labeling tools exist, they all share three
major shortcomings: (i) they are specified for autonomous driving applications,
(ii) they lack convenience and comfort functions, and (iii) they have high
dependencies and little flexibility in data format. Therefore, we propose a
novel labeling tool for 3D object detection in point clouds to address these
shortcomings. | [
"cs.CV",
"cs.LG"
] |
Deep generative models seek to recover the process with which the observed
data was generated. They may be used to synthesize new samples or to
subsequently extract representations. Successful approaches in the domain of
images are driven by several core inductive biases. However, a bias to account
for the compositional way in which humans structure a visual scene in terms of
objects has frequently been overlooked. In this work, we investigate object
compositionality as an inductive bias for Generative Adversarial Networks
(GANs). We present a minimal modification of a standard generator to
incorporate this inductive bias and find that it reliably learns to generate
images as compositions of objects. Using this general design as a backbone, we
then propose two useful extensions to incorporate dependencies among objects
and background. We extensively evaluate our approach on several multi-object
image datasets and highlight the merits of incorporating structure for
representation learning purposes. In particular, we find that our structured
GANs are better at generating multi-object images that are more faithful to the
reference distribution. More so, we demonstrate how, by leveraging the
structure of the learned generative process, one can `invert' the learned
generative model to perform unsupervised instance segmentation. On the
challenging CLEVR dataset, it is shown how our approach is able to improve over
other recent purely unsupervised object-centric approaches to image generation. | [
"cs.CV",
"cs.NE",
"I.2.6"
] |
In natural scenes and documents, we can find the correlation between a text
and its color. For instance, the word, "hot", is often printed in red, while
"cold" is often in blue. This correlation can be thought of as a feature that
represents the semantic difference between the words. Based on this
observation, we propose the idea of using text color for word embeddings. While
text-only word embeddings (e.g. word2vec) have been extremely successful, they
often represent antonyms as similar since they are often interchangeable in
sentences. In this paper, we try two tasks to verify the usefulness of text
color in understanding the meanings of words, especially in identifying
synonyms and antonyms. First, we quantify the color distribution of words from
the book cover images and analyze the correlation between the color and meaning
of the word. Second, we try to retrain word embeddings with the color
distribution of words as a constraint. By observing the changes in the word
embeddings of synonyms and antonyms before and after re-training, we aim to
understand the kind of words that have positive or negative effects in their
word embeddings when incorporating text color information. | [
"cs.CV",
"cs.CL",
"cs.LG"
] |
Handwritten mathematical expression recognition (HMER) is an important
research direction in handwriting recognition. The performance of HMER suffers
from the two-dimensional structure of mathematical expressions (MEs). To
address this issue, in this paper, we propose a high-performance HMER model
with scale augmentation and drop attention. Specifically, tackling ME with
unstable scale in both horizontal and vertical directions, scale augmentation
improves the performance of the model on MEs of various scales. An
attention-based encoder-decoder network is used for extracting features and
generating predictions. In addition, drop attention is proposed to further
improve performance when the attention distribution of the decoder is not
precise. Compared with previous methods, our method achieves state-of-the-art
performance on two public datasets of CROHME 2014 and CROHME 2016. | [
"cs.CV"
] |
This paper introduces the Sylvester graphical lasso (SyGlasso) that captures
multiway dependencies present in tensor-valued data. The model is based on the
Sylvester equation that defines a generative model. The proposed model
complements the tensor graphical lasso (Greenewald et al., 2019) that imposes a
Kronecker sum model for the inverse covariance matrix by providing an
alternative Kronecker sum model that is generative and interpretable. A
nodewise regression approach is adopted for estimating the conditional
independence relationships among variables. The statistical convergence of the
method is established, and empirical studies are provided to demonstrate the
recovery of meaningful conditional dependency graphs. We apply the SyGlasso to
an electroencephalography (EEG) study to compare the brain connectivity of
alcoholic and nonalcoholic subjects. We demonstrate that our model can
simultaneously estimate both the brain connectivity and its temporal
dependencies. | [
"stat.ML",
"cs.LG",
"stat.ME"
] |
Automated medical report generation in spine radiology, i.e., taking spinal
medical images and directly creating radiologist-level diagnosis reports to
support clinical decision making, is a novel yet fundamental study in the
domain of artificial intelligence in healthcare. However, it is incredibly
challenging because it is an extremely complicated task that involves visual
perception and high-level reasoning processes. In this paper, we propose the
neural-symbolic learning (NSL) framework that performs human-like learning by
unifying deep neural learning and symbolic logical reasoning for the spinal
medical report generation. Generally speaking, the NSL framework firstly
employs deep neural learning to imitate human visual perception for detecting
abnormalities of target spinal structures. Concretely, we design an adversarial
graph network that interpolates a symbolic graph reasoning module into a
generative adversarial network through embedding prior domain knowledge,
achieving semantic segmentation of spinal structures with high complexity and
variability. NSL secondly conducts human-like symbolic logical reasoning that
realizes unsupervised causal effect analysis of detected entities of
abnormalities through meta-interpretive learning. NSL finally fills these
discoveries of target diseases into a unified template, successfully achieving
a comprehensive medical report generation. When employed on a real-world
clinical dataset, a series of empirical studies demonstrate its capacity on
spinal medical report generation as well as show that our algorithm remarkably
exceeds existing methods in the detection of spinal structures. These indicate
its potential as a clinical tool that contributes to computer-aided diagnosis. | [
"cs.CV",
"cs.AI",
"cs.LG",
"eess.IV"
] |
An efficient spatial regularization method using superpixel segmentation and
graph Laplacian regularization is proposed for sparse hyperspectral unmixing.
Since spectrally similar pixels are likely to be found in a homogeneous
region, we use a superpixel segmentation algorithm to extract the homogeneous
regions while respecting the image boundaries. We first extract the homogeneous
regions, which are called superpixels; then a weighted graph is constructed within
each superpixel by selecting the $K$-nearest pixels. Each node in
the graph represents the spectrum of a pixel and edges connect the similar
pixels inside the superpixel. The spatial similarity is investigated using
graph Laplacian regularization. Sparsity regularization for abundance matrix is
provided using a weighted sparsity promoting norm. Experimental results on
simulated and real data sets show the superiority of the proposed algorithm
over the well-known algorithms in the literature. | [
"cs.CV",
"eess.IV"
] |
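Concretely, within each superpixel a $K$-nearest-neighbor graph over pixel spectra yields a weight matrix W, and the graph Laplacian L = D - W turns spatial similarity into a quadratic penalty on the abundance matrix. A small NumPy sketch, assuming Gaussian edge weights (a common choice, not necessarily the paper's):

```python
import numpy as np

def superpixel_laplacian(spectra: np.ndarray, k: int = 5, sigma: float = 1.0):
    """Build a KNN-graph Laplacian over the pixels of one superpixel.

    spectra: (n_pixels, n_bands) spectra of the pixels in the superpixel.
    Returns L = D - W with Gaussian weights on the k nearest neighbors.
    """
    n = len(spectra)
    d2 = ((spectra[:, None, :] - spectra[None, :, :]) ** 2).sum(-1)
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[1 : k + 1]  # skip self
        W[i, nbrs] = np.exp(-d2[i, nbrs] / (2 * sigma**2))
    W = np.maximum(W, W.T)  # symmetrize
    return np.diag(W.sum(1)) - W

def laplacian_penalty(X: np.ndarray, L: np.ndarray) -> float:
    """Quadratic smoothness penalty tr(X L X^T): small when spectrally
    similar pixels (strong graph edges) get similar abundance vectors.
    X: (n_endmembers, n_pixels) abundance matrix for the superpixel."""
    return float(np.trace(X @ L @ X.T))
```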
Vision-and-language navigation requires an agent to navigate through a real
3D environment following natural language instructions. Despite significant
advances, few previous works are able to fully utilize the strong
correspondence between the visual and textual sequences. Meanwhile, due to the
lack of intermediate supervision, the agent's performance at following each
part of the instruction cannot be assessed during navigation. In this work, we
focus on the granularity of the visual and language sequences as well as the
traceability of agents through the completion of an instruction. We provide
agents with fine-grained annotations during training and find that they are
able to follow the instruction better and have a higher chance of reaching the
target at test time. We enrich the benchmark dataset Room-to-Room (R2R) with
sub-instructions and their corresponding paths. To make use of this data, we
propose effective sub-instruction attention and shifting modules that select
and attend to a single sub-instruction at each time-step. We implement our
sub-instruction modules in four state-of-the-art agents, compare with their
baseline models, and show that our proposed method improves the performance of
all four agents.
We release the Fine-Grained R2R dataset (FGR2R) and the code at
https://github.com/YicongHong/Fine-Grained-R2R. | [
"cs.CV"
] |
Attribute-based person search is in significant demand for applications where
no detected query images are available, such as identifying a criminal from a
witness description. However, the task itself is quite challenging because there is a huge
modality gap between images and physical descriptions of attributes. Often,
there may also be a large number of unseen categories (attribute combinations).
The current state-of-the-art methods either focus on learning better
cross-modal embeddings by mining only seen data, or they explicitly use
generative adversarial networks (GANs) to synthesize unseen features. The
former tends to produce poor embeddings due to insufficient data, while the
latter does not preserve intra-class compactness during generation. In this
paper, we present a symbiotic adversarial learning framework, called SAL. Two
GANs sit at the base of the framework in a symbiotic learning scheme: one
synthesizes features of unseen classes/categories, while the other optimizes
the embedding and performs the cross-modal alignment on the common embedding
space. Specifically, two different types of generative adversarial networks
learn collaboratively throughout the training process and the interactions
between the two mutually benefit each other. Extensive evaluations show SAL's
superiority over nine state-of-the-art methods with two challenging pedestrian
benchmarks, PETA and Market-1501. The code is publicly available at:
https://github.com/ycao5602/SAL . | [
"cs.CV"
] |
Machine Learning (ML) adoption in the enterprise requires simpler and more
efficient software infrastructure---the bespoke solutions typical in large web
companies are simply untenable. Model scoring, the process of obtaining
predictions from a trained model over new data, is a primary contributor to
infrastructure complexity and cost as models are trained once but used many
times. In this paper we propose HUMMINGBIRD, a novel approach to model scoring,
which compiles featurization operators and traditional ML models (e.g.,
decision trees) into a small set of tensor operations. This approach inherently
reduces infrastructure complexity and directly leverages existing investments
in Neural Network compilers and runtimes to generate efficient computations for
both CPU and hardware accelerators. Our performance results are intriguing:
despite replacing imperative computations (e.g., tree traversals) with tensor
computation abstractions, HUMMINGBIRD is competitive and often outperforms
hand-crafted kernels on micro-benchmarks on both CPU and GPU, while enabling
seamless end-to-end acceleration of ML pipelines. We have released HUMMINGBIRD
as open source. | [
"cs.LG"
] |
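The core idea of compiling a traditional model into tensor operations can be illustrated on a toy decision tree: evaluate every internal-node comparison for the whole batch at once, then match the resulting sign patterns against each leaf's path conditions with a matrix product. This is a simplified sketch of a GEMM-style strategy, not HUMMINGBIRD's actual implementation.

```python
import numpy as np

# A toy tree, given as parallel arrays over internal nodes:
#   node i tests  x[feature[i]] <= threshold[i]
feature = np.array([0, 1])        # node 0 tests x0, node 1 tests x1
threshold = np.array([0.5, 0.5])
# Each leaf's path: +1 = "must go left (test true)", -1 = "must go right",
# 0 = node not on the path. Rows: leaves, columns: internal nodes.
paths = np.array([
    [+1, +1],   # leaf 0: x0 <= .5 and x1 <= .5
    [+1, -1],   # leaf 1: x0 <= .5 and x1 >  .5
    [-1,  0],   # leaf 2: x0 >  .5
])
path_len = (paths != 0).sum(axis=1)
leaf_values = np.array([0.0, 1.0, 2.0])

def predict(X: np.ndarray) -> np.ndarray:
    """Evaluate the tree for a whole batch using tensor ops only."""
    # (batch, nodes): outcome of every comparison, encoded as +1 / -1.
    s = np.where(X[:, feature] <= threshold, 1, -1)
    # A leaf is reached iff all conditions on its path match, i.e. the
    # signed agreement count equals the path length.
    hits = (s @ paths.T) == path_len
    return hits.astype(float) @ leaf_values

X = np.array([[0.2, 0.9], [0.8, 0.1]])
print(predict(X))  # -> [1. 2.]
```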
Humans rely on properties of the materials that make up objects to guide our
interactions with them. Grasping smooth materials, for example, requires care,
and softness is an ideal property for fabric used in bedding. Even when these
properties are not visual (e.g. softness is a physical property), we may still
infer their presence visually. We refer to such material properties as visual
material attributes. Recognizing these attributes in images can contribute
valuable information for general scene understanding and material recognition.
Unlike well-known object and scene attributes, visual material attributes are
local properties with no fixed shape or spatial extent. We show that given a
set of images annotated with known material attributes, we may accurately
recognize the attributes from small local image patches. Obtaining such
annotations in a consistent fashion at scale, however, is challenging. To
address this, we introduce a method that allows us to probe the human visual
perception of materials by asking simple yes/no questions comparing pairs of
image patches. This provides sufficient weak supervision to build a set of
attributes and associated classifiers that, while unnamed, serve the same
function as the named attributes we use to describe materials. Doing so allows
us to recognize visual material attributes without resorting to exhaustive
manual annotation of a fixed set of named attributes. Furthermore, we show that
this method may be integrated in the end-to-end learning of a material
classification CNN to simultaneously recognize materials and discover their
visual attributes. Our experimental results show that visual material
attributes, whether named or automatically discovered, provide a useful
intermediate representation for known material categories themselves as well as
a basis for transfer learning when recognizing previously-unseen categories. | [
"cs.CV"
] |
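
A hedged sketch of how the pairwise yes/no supervision could train unnamed attribute predictors: patches compared by annotators supervise the similarity of their predicted attribute vectors. The network, similarity measure, and loss below are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class PatchAttributeNet(nn.Module):
    """Small CNN over local patches; outputs per-patch scores for a fixed
    number of unnamed attributes."""
    def __init__(self, n_attrs=16):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.attr_head = nn.Linear(64, n_attrs)

    def forward(self, patch):
        return torch.sigmoid(self.attr_head(self.backbone(patch)))

def pairwise_loss(model, patch_a, patch_b, same):
    """`same` is 1 when annotators answered yes for the pair, else 0.
    Sigmoid outputs are nonnegative, so cosine similarity lies in [0, 1]
    and can be matched directly against the yes/no answer."""
    sim = torch.cosine_similarity(model(patch_a), model(patch_b))
    return nn.functional.binary_cross_entropy(sim, same.float())
```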
Low-light image enhancement (LLIE) is a pervasive yet challenging problem,
since: 1) low-light measurements may vary due to different imaging conditions
in practice; 2) images can be enlightened subjectively according to diverse
preferences by each individual. To tackle these two challenges, this paper
presents a novel deep reinforcement learning based method, dubbed ReLLIE, for
customized low-light enhancement. ReLLIE models LLIE as a Markov decision
process, i.e., estimating the pixel-wise image-specific curves sequentially and
recurrently. Given the reward computed from a set of carefully crafted
non-reference loss functions, a lightweight network is proposed to estimate the
curves for enlightening a low-light image input. As ReLLIE learns a policy
instead of one-one image translation, it can handle various low-light
measurements and provide customized enhanced outputs by flexibly applying the
policy a different number of times. Furthermore, ReLLIE can enhance real-world images with
hybrid corruptions, e.g., noise, by using a plug-and-play denoiser easily.
Extensive experiments on various benchmarks demonstrate the advantages of
ReLLIE over state-of-the-art methods. | [
"cs.CV"
] |
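
A minimal sketch of the recurrent pixel-wise curve adjustment that a ReLLIE-style policy drives. The quadratic curve LE(x) = x + a*x*(1-x) is borrowed from the Zero-DCE family of enhancement curves; treating the per-step curve map as an RL action is the assumption illustrated here.

```python
import torch
import torch.nn as nn

class PolicyNet(nn.Module):
    """Lightweight policy: state = current image, action = a per-pixel curve
    parameter map in [-1, 1] (one channel per color)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh())

    def forward(self, img):
        return self.net(img)

def enhance(img, policy, steps=8):
    """Apply the learned curve recurrently; varying `steps` yields milder or
    stronger enhancement, which enables customized outputs."""
    x = img
    for _ in range(steps):
        a = policy(x)                # action: pixel-wise curve parameters
        x = x + a * x * (1.0 - x)    # quadratic enhancement curve, per pixel
        x = x.clamp(0.0, 1.0)
    return x
```

During training, the reward from the non-reference losses would score each step, and a standard policy-gradient update would train the network; only the inference-time curve application is sketched here.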
Salient instance segmentation is a new challenging task that has received
widespread attention in the saliency detection area. The new generation of
saliency detection provides a strong theoretical and technical basis for video
surveillance. Due to the limited scale of existing datasets and the high cost of
mask annotation, additional sources of supervision are urgently needed to train
a well-performing salient instance model. In this paper, we aim to train a
novel salient instance segmentation framework with inexact supervision, without
resorting to laborious labeling. To this end, we present a cyclic global
context salient instance segmentation network (CGCNet), which is supervised by
the combination of salient regions and bounding boxes from the ready-made
salient object detection datasets. To locate salient instances more accurately,
a global feature refining layer is proposed that dilates the features of the
region of interest (ROI) to the global context in a scene. Meanwhile, a
label updating scheme is embedded in the proposed framework to update the
coarse-grained labels for the next iteration. Experimental results demonstrate
that the proposed end-to-end framework trained with inexact supervision can
be competitive with existing fully supervised salient instance segmentation
methods. Without bells and whistles, our proposed method achieves a mask AP of
58.3% on the Dataset1K test set, outperforming mainstream
state-of-the-art methods. | [
"cs.CV",
"cs.LG"
] |
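
A hedged sketch of the label updating idea from the abstract above: predicted instance masks that agree with the ready-made salient regions and bounding boxes are promoted to pseudo-labels for the next training round. The agreement criteria and thresholds below are assumptions.

```python
import numpy as np

def update_labels(pred_masks, saliency, boxes, overlap_thresh=0.5):
    """pred_masks: (K, H, W) binary (bool); saliency: (H, W) binary salient
    region; boxes: (K, 4) as (x1, y1, x2, y2). Returns refined masks to use
    as coarse-grained labels in the next iteration."""
    kept = []
    for mask, (x1, y1, x2, y2) in zip(pred_masks, boxes.astype(int)):
        box_mask = np.zeros_like(mask)
        box_mask[y1:y2, x1:x2] = 1
        # fraction of the predicted mask covered by the salient region / box
        inter_sal = (mask & saliency).sum() / max(mask.sum(), 1)
        inter_box = (mask & box_mask).sum() / max(mask.sum(), 1)
        if inter_sal >= overlap_thresh and inter_box >= overlap_thresh:
            # restrict the refined label to the salient region inside the box
            kept.append(mask & saliency & box_mask)
    return kept
```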
Molecular dynamics simulations produce data with complex nonlinear dynamics.
If the timestep behavior of such a dynamic system can be represented by a
linear operator, future states can be inferred directly without expensive
simulations. The use of an autoencoder in combination with a physical timestep
operator allows both the relevant structural characteristics of the molecular
graphs and the underlying physics of the system to be isolated during the
training process. In this work, we develop a pipeline for establishing
graph-structured representations of time-series volumetric data from molecular
dynamics simulations. We then train an autoencoder to find nonlinear mappings
to a latent space where future timesteps can be predicted through application
of a linear operator trained in tandem with the autoencoder. Increasing the
dimensionality of the autoencoder output is shown to improve the accuracy of
the physical timestep operator. | [
"cs.LG",
"physics.chem-ph"
] |
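
A minimal sketch of an autoencoder trained in tandem with a linear latent timestep operator, as the abstract describes (a Koopman-style setup). Layer sizes and the loss weighting are placeholders.

```python
import torch
import torch.nn as nn

class LatentStepper(nn.Module):
    def __init__(self, in_dim=1024, latent_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                     nn.Linear(256, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                     nn.Linear(256, in_dim))
        # the physical timestep operator: a single linear map in latent space
        self.K = nn.Linear(latent_dim, latent_dim, bias=False)

    def forward(self, x_t):
        z_t = self.encoder(x_t)
        return self.decoder(z_t), self.decoder(self.K(z_t))

def loss_fn(model, x_t, x_next):
    recon, pred_next = model(x_t)
    # reconstruction keeps the latent space faithful; prediction forces the
    # latent dynamics to be (approximately) linear under K
    return nn.functional.mse_loss(recon, x_t) + \
           nn.functional.mse_loss(pred_next, x_next)
```

Once trained, future states are inferred by applying K repeatedly in latent space and decoding, with no further simulation.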
A temporal point process is a mathematical model for a time series of
discrete events, which covers various applications. Recently, recurrent neural
network (RNN) based models have been developed for point processes and have
been found effective. RNN based models usually assume a specific functional
form for the time course of the intensity function of a point process (e.g.,
exponentially decreasing or increasing with the time since the most recent
event). However, such an assumption can restrict the expressive power of the
model. We herein propose a novel RNN based model in which the time course of
the intensity function is represented in a general manner. In our approach, we
first model the integral of the intensity function using a feedforward neural
network and then obtain the intensity function as its derivative. This approach
enables us to both obtain a flexible model of the intensity function and
exactly evaluate the log-likelihood function, which contains the integral of
the intensity function, without any numerical approximations. Our model
achieves competitive or superior performance compared to previous
state-of-the-art methods on both synthetic and real datasets. | [
"cs.LG",
"stat.ML"
] |
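
The key construction, modeling the integral of the intensity and recovering the intensity by differentiation, can be sketched as follows. The monotone network and the history encoding are simplified assumptions; the exact per-interval log-likelihood log lambda(tau) - Phi(tau) follows once Phi(0) = 0 is enforced.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CumulativeIntensity(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.w_tau = nn.Parameter(torch.randn(1, hidden))
        self.h_proj = nn.Linear(hidden, hidden)
        self.w_out = nn.Parameter(torch.randn(hidden, 1))

    def forward(self, tau, h):
        # softplus keeps the tau-path weights positive, so Phi is
        # nondecreasing in tau (a valid cumulative intensity)
        z = torch.tanh(tau @ F.softplus(self.w_tau) + self.h_proj(h))
        return F.softplus(z @ F.softplus(self.w_out)).squeeze(-1)

def interval_log_likelihood(Phi, tau, h):
    """Exact log-density of the next event time: log lambda(tau) - Phi(tau),
    with lambda = dPhi/dtau obtained by automatic differentiation."""
    tau = tau.clone().requires_grad_(True)
    cum = Phi(tau, h) - Phi(torch.zeros_like(tau), h)  # enforce Phi(0) = 0
    lam = torch.autograd.grad(cum.sum(), tau, create_graph=True)[0]
    return torch.log(lam.squeeze(-1) + 1e-12) - cum

# usage with dummy history embeddings (stand-in for an RNN over past events):
h = torch.randn(8, 32)   # hidden states summarizing event histories
tau = torch.rand(8, 1)   # elapsed times since the most recent events
nll = -interval_log_likelihood(CumulativeIntensity(), tau, h).mean()
```

Because the integral term appears in closed form, the log-likelihood needs no numerical quadrature, which is the advantage the abstract highlights.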
Braille has empowered the visually challenged community to read and write. But
at the same time, it has created a gap due to the widespread inability of
non-Braille users to understand Braille scripts. This gap has fuelled researchers to
propose Optical Braille Recognition techniques to convert Braille documents to
natural language. The main motivation of this work is to bridge the
communication gap at academic institutions by translating personal documents of
blind students. This has been accomplished by proposing an economical and
effective technique which digitizes Braille documents using a smartphone
camera. For any given Braille image, a dot detection mechanism based on Hough
transform is proposed which is invariant to skewness, noise and other
deterrents. The detected dots are then clustered into Braille cells using
distance-based clustering algorithm. In succession, the standard physical
parameters of each Braille cell are estimated for feature extraction and
classification as natural language characters. The comprehensive evaluation of
this technique on the proposed dataset of 54 Braille scripts has yielded an
accuracy of 98.71%. | [
"cs.CV"
] |
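
A rough pipeline sketch for the described digitization: Hough-based dot detection followed by distance-based clustering of dots into cells. OpenCV's circle Hough transform stands in for the proposed skew- and noise-invariant detector, and all parameter values (radii, thresholds, cell pitch) are guesses that would need tuning to real smartphone captures.

```python
import cv2
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def detect_dots(image_path):
    """Return (N, 2) array of detected dot centers."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    gray = cv2.medianBlur(gray, 5)  # suppress noise before the transform
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=10,
                               param1=80, param2=15, minRadius=2, maxRadius=8)
    return circles[0, :, :2] if circles is not None else np.empty((0, 2))

def cluster_into_cells(dots, cell_pitch=25.0):
    """Group dot centers into Braille cells by spatial proximity; cell_pitch
    is the assumed maximum intra-cell dot distance in pixels."""
    if len(dots) < 2:
        return np.zeros(len(dots), dtype=int)
    Z = linkage(dots, method='single')
    return fcluster(Z, t=cell_pitch, criterion='distance')
```

Each cluster's dot layout would then be matched against the standard six-dot cell geometry to classify it as a natural-language character.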
We introduce a unified probabilistic framework for solving sequential
decision making problems ranging from Bayesian optimisation to contextual
bandits and reinforcement learning. This is accomplished by a probabilistic
model-based approach that explains observed data while capturing predictive
uncertainty during the decision making process. Crucially, this probabilistic
model is chosen to be a Meta-Learning system that allows learning from a
distribution of related problems, enabling data-efficient adaptation to a
target task. As a suitable instantiation of this framework, we explore the use
of Neural processes due to statistical and computational desiderata. We apply
our framework to a broad range of problem domains, such as control problems,
recommender systems and adversarial attacks on RL agents, demonstrating an
efficient and general black-box learning approach. | [
"stat.ML",
"cs.LG"
] |
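
A high-level sketch of the decision loop the framework describes: a probabilistic model supplies predictive distributions over action outcomes, and sampling from that uncertainty (Thompson sampling) drives action selection. The simple Gaussian model below is a stand-in for a meta-learned Neural Process.

```python
import numpy as np

class GaussianModel:
    """Stand-in for a meta-learned probabilistic model: tracks a predictive
    mean and variance for each action's reward."""
    def __init__(self, n_actions):
        self.mean = np.zeros(n_actions)
        self.var = np.ones(n_actions)
        self.counts = np.zeros(n_actions)

    def sample(self):
        return np.random.normal(self.mean, np.sqrt(self.var))

    def update(self, action, reward):
        self.counts[action] += 1
        n = self.counts[action]
        self.mean[action] += (reward - self.mean[action]) / n
        self.var[action] = 1.0 / n  # predictive uncertainty shrinks with data

def run_bandit(env_reward, n_actions=5, steps=200):
    model = GaussianModel(n_actions)
    for _ in range(steps):
        action = int(np.argmax(model.sample()))  # Thompson sampling
        model.update(action, env_reward(action))
    return model.mean
```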
We propose an approach to learning with graph-structured data in the problem
domain of graph classification. In particular, we present a novel type of
readout operation to aggregate node features into a graph-level representation.
To this end, we leverage persistent homology computed via a real-valued,
learnable, filter function. We establish the theoretical foundation for
differentiating through the persistent homology computation. Empirically, we
show that this type of readout operation compares favorably to previous
techniques, especially when the graph connectivity structure is informative for
the learning problem. | [
"cs.LG",
"math.AT",
"stat.ML"
] |
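
The readout can be sketched for the 0-dimensional case: a learnable filter assigns each node a scalar, and union-find over the resulting sublevel-set filtration yields (birth, death) pairs that summarize the graph. The sketch computes the barcode only; the paper's contribution of differentiating through this computation is not reproduced here.

```python
import torch

def zero_dim_persistence(node_values, edges):
    """node_values: (N,) scalar filtration values from a learnable filter;
    edges: list of (u, v) pairs. Vertices enter at their value, edges at the
    max of their endpoints; returns the finite (birth, death) pairs."""
    parent = list(range(len(node_values)))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    edge_vals = [max(node_values[u].item(), node_values[v].item())
                 for u, v in edges]
    pairs = []
    for val, (u, v) in sorted(zip(edge_vals, edges)):
        ru, rv = find(u), find(v)
        if ru == rv:
            continue  # the edge closes a cycle: no 0-dim event
        # elder rule: the younger component (larger birth value) dies at val
        if node_values[ru] < node_values[rv]:
            ru, rv = rv, ru              # ru is now the younger root
        pairs.append((node_values[ru].item(), val))
        parent[ru] = rv                  # merge younger into older
    return pairs

# toy usage: a path graph with filter values [0.1, 0.9, 0.2]
print(zero_dim_persistence(torch.tensor([0.1, 0.9, 0.2]), [(0, 1), (1, 2)]))
```

When the graph's connectivity is informative, these pairs (vectorized by a differentiable layer in the paper's full construction) give a graph-level representation that a classifier can consume.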
We present MRGAN, a multi-rooted adversarial network which generates
part-disentangled 3D point-cloud shapes without part-based shape supervision.
The network fuses multiple branches of tree-structured graph convolution layers
which produce point clouds, with learnable constant inputs at the tree roots.
Each branch learns to grow a different shape part, offering control over the
shape generation at the part level. Our network encourages disentangled
generation of semantic parts via two key ingredients: a root-mixing training
strategy which helps decorrelate the different branches to facilitate
disentanglement, and a set of loss terms designed with part disentanglement and
shape semantics in mind. Of these, a novel convexity loss incentivizes the
generation of parts that are more convex, as semantic parts tend to be. In
addition, a root-dropping loss further ensures that each root seeds a single
part, preventing the degeneration or over-growth of the point-producing
branches. We evaluate the performance of our network on a number of 3D shape
classes, and offer qualitative and quantitative comparisons to previous works
and baseline approaches. We demonstrate the controllability offered by our
part-disentangled generation through two applications for shape modeling: part
mixing and individual part variation, without receiving segmented shapes as
input. | [
"cs.CV",
"cs.LG"
] |
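
A skeleton of the multi-rooted generator idea: each branch grows one shape part from its own learnable constant root plus shared noise, and the part point clouds are concatenated. Plain MLP branches stand in for the paper's tree-structured graph convolutions, and all sizes are illustrative.

```python
import torch
import torch.nn as nn

class MultiRootGenerator(nn.Module):
    def __init__(self, n_roots=4, noise_dim=64, pts_per_part=256):
        super().__init__()
        self.pts = pts_per_part
        # one learnable constant input per tree root / shape part
        self.roots = nn.Parameter(torch.randn(n_roots, 32))
        self.branches = nn.ModuleList([
            nn.Sequential(nn.Linear(32 + noise_dim, 256), nn.ReLU(),
                          nn.Linear(256, pts_per_part * 3))
            for _ in range(n_roots)])

    def forward(self, z):
        parts = []
        for root, branch in zip(self.roots, self.branches):
            inp = torch.cat([root.expand(z.size(0), -1), z], dim=1)
            parts.append(branch(inp).view(-1, self.pts, 3))
        # each slice of pts_per_part points belongs to one part, so parts can
        # be swapped or varied individually for shape modeling
        return torch.cat(parts, dim=1)  # (B, n_roots * pts_per_part, 3)
```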
Machine learning applications such as finance and medicine demand accurate
and justifiable predictions, barring most deep learning methods from use. In
response, previous work combines decision trees with deep learning, yielding
models that (1) sacrifice interpretability for accuracy or (2) sacrifice
accuracy for interpretability. We forgo this dilemma by jointly improving
accuracy and interpretability using Neural-Backed Decision Trees (NBDTs). NBDTs
replace a neural network's final linear layer with a differentiable sequence of
decisions and a surrogate loss. This forces the model to learn high-level
concepts and lessens reliance on highly-uncertain decisions, yielding (1)
accuracy: NBDTs match or outperform modern neural networks on CIFAR and
ImageNet, and generalize better to unseen classes by up to 16%. Furthermore, our
surrogate loss improves the original model's accuracy by up to 2%. NBDTs also
afford (2) interpretability: improving human trust by clearly identifying model
mistakes and assisting in dataset debugging. Code and pretrained NBDTs are at
https://github.com/alvinwan/neural-backed-decision-trees. | [
"cs.CV",
"cs.LG",
"cs.NE"
] |
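
The NBDT inference rule can be sketched by inducing a shallow hierarchy over the final linear layer's class vectors and taking products of soft decisions along each path. The two-level hierarchy and the clustering used to build it below are simplifications of the paper's induced hierarchy.

```python
import torch
from scipy.cluster.hierarchy import linkage, fcluster

def build_hierarchy(fc_weight, n_groups=2):
    """Cluster the final layer's class vectors (rows) into internal nodes."""
    Z = linkage(fc_weight.detach().numpy(), method='ward')
    return fcluster(Z, t=n_groups, criterion='maxclust')

def nbdt_predict(feat, fc_weight, groups):
    """feat: (B, D) penultimate features; fc_weight: (C, D) final layer.
    Class probability = P(root picks group) * P(group picks class)."""
    group_ids = sorted(set(groups.tolist()))
    idx_per_group = [torch.tensor([i for i, g in enumerate(groups) if g == gid])
                     for gid in group_ids]
    # internal-node representatives: mean of the child class vectors
    node_vecs = torch.stack([fc_weight[idx].mean(0) for idx in idx_per_group])
    p_root = torch.softmax(feat @ node_vecs.t(), dim=1)  # root decision
    logits = feat @ fc_weight.t()
    probs = torch.zeros_like(logits)
    for k, idx in enumerate(idx_per_group):
        p_leaf = torch.softmax(logits[:, idx], dim=1)    # decision in node k
        probs[:, idx] = p_root[:, k:k + 1] * p_leaf      # path probability
    return probs

# toy usage with a random "trained" final layer
fc = torch.randn(10, 64)
groups = build_hierarchy(fc)
out = nbdt_predict(torch.randn(4, 64), fc, groups)
print(out.sum(dim=1))  # each row sums to 1: a distribution over classes
```

Inspecting which intermediate decision went wrong for a misclassified sample is what makes the tree path interpretable.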
Egocentric video recognition is a natural testbed for diverse interaction
reasoning. Due to the large action vocabulary in egocentric video datasets,
recent studies usually utilize a two-branch structure for action recognition,
i.e., one branch for verb classification and the other branch for noun
classification. However, correlation studies between the verb and the noun
branches have been largely ignored. Besides, the two branches fail to exploit
local features due to the absence of a position-aware attention mechanism. In
this paper, we propose a novel Symbiotic Attention framework leveraging
Privileged information (SAP) for egocentric video recognition. Finer
position-aware object detection features can facilitate the understanding of
the actor's interaction with the object. We introduce these features in action
recognition and regard them as privileged information. Our framework enables
mutual communication among the verb branch, the noun branch, and the privileged
information. This communication process not only injects local details into
global features but also exploits implicit guidance about the spatio-temporal
position of an on-going action. We introduce novel symbiotic attention (SA) to
enable effective communication. It first normalizes the detection-guided
features on one branch to underline the action-relevant information from the
other branch. SA adaptively enhances the interactions among the three sources.
To further catalyze this communication, spatial relations are uncovered for the
selection of the most action-relevant information. It identifies the most valuable
and discriminative feature for classification. We validate the effectiveness of
our SAP quantitatively and qualitatively. Notably, it achieves the
state-of-the-art on two large-scale egocentric video datasets. | [
"cs.CV"
] |
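
A loose sketch of the cross-source communication, in the spirit of symbiotic attention: detection (privileged) features are first gated against one branch to underline action-relevant information, then attended to by the other branch so local details are injected into its global feature. This is a generic attention formulation, not the paper's exact SA operator.

```python
import torch
import torch.nn.functional as F

def symbiotic_attention(branch_q, branch_kv, det_feats):
    """branch_q: (B, D) global feature of one branch (e.g., verb);
    branch_kv: (B, D) the other branch (e.g., noun);
    det_feats: (B, N, D) position-aware object detection features."""
    # gate detection features by their agreement with the other branch,
    # underlining the action-relevant local information
    gate = torch.sigmoid(torch.einsum('bnd,bd->bn', det_feats, branch_kv))
    guided = det_feats * gate.unsqueeze(-1)
    # attend from the query branch to the guided local features
    attn = F.softmax(torch.einsum('bd,bnd->bn', branch_q, guided), dim=1)
    local = torch.einsum('bn,bnd->bd', attn, guided)
    return branch_q + local  # inject local details into the global feature
```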