text | label
---|---|
Superpixels are a useful representation to reduce the complexity of image
data. However, to combine superpixels with convolutional neural networks (CNNs)
in an end-to-end fashion, one requires extra models to generate superpixels and
special operations such as graph convolution. In this paper, we propose a way
to implicitly integrate a superpixel scheme into CNNs, which makes it easy to
use superpixels with CNNs in an end-to-end fashion. Our proposed method
hierarchically groups pixels at downsampling layers and generates superpixels.
Our method can be plugged into many existing architectures without a change in
their feed-forward path, because it does not use superpixels in the
feed-forward path but instead uses them, in place of bilinear upsampling, to
recover the lost resolution. As a result, our method preserves detailed information
such as object boundaries in the form of superpixels even when the model
contains downsampling layers. We evaluate our method on several tasks such as
semantic segmentation, superpixel segmentation, and monocular depth estimation,
and confirm that it speeds up modern architectures and/or improves their
prediction accuracy in these tasks. | [
"cs.CV"
] |
Following the recent advances in deep networks, object detection and tracking
algorithms with deep learning backbones have been improved significantly;
however, this rapid development has created the need for large amounts of
annotated labels. Even though the details of the semi-automatic annotation
processes behind most of these datasets are not known precisely, especially for
video annotations, some automated labeling process is usually employed.
Unfortunately, such approaches might result in erroneous annotations. In this
work, different types of annotation errors for the object detection problem are
simulated, and the performance of a popular state-of-the-art object detector,
YOLOv3, with erroneous annotations during the training and testing stages is
examined. Moreover, some inevitable annotation errors in the CVPR-2020 Anti-UAV
Challenge dataset are also examined in this manner, and a solution to correct
such annotation errors of this valuable dataset is proposed. | [
"cs.CV"
] |
Zoonosis refers to the transmission of infectious diseases from animals to
humans. The increasing incidence of zoonoses causes great losses of both human
and animal lives and also has a social and economic impact. This motivates the
development of a system that can predict the future number of zoonosis
occurrences in humans. This paper analyses and presents the use of the Seasonal
Autoregressive Integrated Moving Average (SARIMA) method for developing a
forecasting model able to predict the number of human zoonosis incidences. The
dataset for model development was a time series of human tuberculosis
occurrences in the United States, comprising fourteen years of monthly data
obtained from a study published by the Centers for Disease Control and
Prevention (CDC). Several trial SARIMA models were compared to obtain the most
appropriate one, and diagnostic tests were used to determine model validity.
The results showed that SARIMA(9,0,14)(12,1,24)12 is the best-fitting model. In
terms of accuracy, the selected model achieved a Theil's U value of 0.062,
implying that the model is highly accurate and a close fit, and indicating the
capability of the final model to closely represent, and make predictions based
on, the historical tuberculosis dataset. | [
"cs.LG",
"q-bio.QM"
] |
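As a rough, hedged illustration of the workflow in the abstract above (not the authors' code), the snippet below fits a seasonal ARIMA model to a synthetic stand-in for the monthly CDC tuberculosis series using statsmodels; the small (1,0,1)(1,1,1,12) order is chosen for speed and is not the paper's SARIMA(9,0,14)(12,1,24)12:

```python
# Minimal SARIMA forecasting sketch; `counts` is synthetic monthly data
# standing in for the 14-year CDC tuberculosis series.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)
months = pd.date_range("2000-01", periods=14 * 12, freq="MS")
seasonal = 10 * np.sin(np.arange(len(months)) * 2 * np.pi / 12)
counts = pd.Series(100 + seasonal + rng.normal(0, 3, len(months)), index=months)

model = SARIMAX(counts, order=(1, 0, 1), seasonal_order=(1, 1, 1, 12))
result = model.fit(disp=False)
print("AIC:", result.aic)                   # used to compare trial models
forecast = result.get_forecast(steps=12)    # next 12 months of incidence
print(forecast.predicted_mean.head())
```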
Adapting the idea of training CartPole with a Deep Q-learning agent, we obtain
a promising result that prevents the pole from falling down. The capacity of
reinforcement learning (RL) to learn from the interaction between the
environment and the agent provides an optimal control strategy. In this paper,
we aim to solve the classic pendulum swing-up problem, i.e., driving the
learned pendulum into an upright, balanced position. The Deep Deterministic
Policy Gradient algorithm is introduced to operate over the continuous action
domain of this problem. Salient results for the optimal pendulum are
demonstrated by an increasing average return, a decreasing loss, and a live
video in the code section. | [
"stat.ML",
"cs.LG"
] |
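For concreteness, here is a compact, self-contained DDPG sketch for the pendulum swing-up task. It is our illustrative reconstruction (gymnasium's Pendulum-v1, small MLPs, soft target updates), not the paper's implementation, and all hyperparameters are arbitrary:

```python
# Deterministic actor + Q-critic with target networks (DDPG), toy scale.
import copy, random
import numpy as np
import torch, torch.nn as nn
import gymnasium as gym

env = gym.make("Pendulum-v1")
obs_dim, act_dim = env.observation_space.shape[0], env.action_space.shape[0]
act_max = float(env.action_space.high[0])

def mlp(inp, out, act=None):
    layers = [nn.Linear(inp, 64), nn.ReLU(), nn.Linear(64, out)]
    if act: layers.append(act)
    return nn.Sequential(*layers)

actor = mlp(obs_dim, act_dim, nn.Tanh())          # outputs in [-1, 1]
critic = mlp(obs_dim + act_dim, 1)                # Q(s, a)
actor_t, critic_t = copy.deepcopy(actor), copy.deepcopy(critic)
opt_a = torch.optim.Adam(actor.parameters(), lr=1e-3)
opt_c = torch.optim.Adam(critic.parameters(), lr=1e-3)
buffer, gamma, tau = [], 0.99, 0.005

obs, _ = env.reset(seed=0)
for step in range(5000):
    with torch.no_grad():
        a = actor(torch.as_tensor(obs, dtype=torch.float32)).numpy()
    a = np.clip(act_max * a + np.random.normal(0, 0.1, act_dim), -act_max, act_max)
    nobs, r, term, trunc, _ = env.step(a)
    buffer.append((obs, a, r, nobs, float(term)))
    obs = nobs if not (term or trunc) else env.reset()[0]
    if len(buffer) < 256:
        continue
    batch = random.sample(buffer, 128)
    s, a_b, r_b, s2, d = [torch.as_tensor(np.array(x), dtype=torch.float32)
                          for x in zip(*batch)]
    with torch.no_grad():                          # Bellman target from target nets
        q2 = critic_t(torch.cat([s2, act_max * actor_t(s2)], 1)).squeeze(-1)
        y = r_b + gamma * (1 - d) * q2
    q = critic(torch.cat([s, a_b], 1)).squeeze(-1)
    loss_c = ((q - y) ** 2).mean()                 # critic: TD error
    opt_c.zero_grad(); loss_c.backward(); opt_c.step()
    loss_a = -critic(torch.cat([s, act_max * actor(s)], 1)).mean()  # actor: max Q
    opt_a.zero_grad(); loss_a.backward(); opt_a.step()
    for p, pt in zip(list(actor.parameters()) + list(critic.parameters()),
                     list(actor_t.parameters()) + list(critic_t.parameters())):
        pt.data.mul_(1 - tau).add_(tau * p.data)   # soft target update
```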
We have analyzed manufacturing data from several different semiconductor
manufacturing plants, using decision tree induction software called Q-YIELD.
The software generates rules for predicting when a given product should be
rejected. The rules are intended to help the process engineers improve the
yield of the product, by helping them to discover the causes of rejection.
Experience with Q-YIELD has taught us the importance of data engineering --
preprocessing the data to enable or facilitate decision tree induction. This
paper discusses some of the data engineering problems we have encountered with
semiconductor manufacturing data. The paper deals with two broad classes of
problems: engineering the features in a feature vector representation and
engineering the definition of the target concept (the classes). Manufacturing
process data present special problems for feature engineering, since the data
have multiple levels of granularity (detail, resolution). Engineering the
target concept is important, due to our focus on understanding the past, as
opposed to the more common focus in machine learning on predicting the future. | [
"cs.LG",
"cs.CE",
"cs.CV",
"I.2.6; I.5.2; I.5.4; J.2"
] |
Detecting manipulated facial images and videos is an increasingly important
topic in digital media forensics. As advanced face synthesis and manipulation
methods are made available, new types of fake face representations are being
created which have raised significant concerns for their use in social media.
Hence, it is crucial to detect manipulated face images and localize manipulated
regions. Instead of simply using multi-task learning to simultaneously detect
manipulated images and predict the manipulated mask (regions), we propose to
utilize an attention mechanism to process and improve the feature maps for the
classification task. The learned attention maps highlight the informative
regions to further improve the binary classification (genuine face v. fake
face), and also visualize the manipulated regions. To enable our study of
manipulated face detection and localization, we collect a large-scale database
that contains numerous types of facial forgeries. With this dataset, we perform
a thorough analysis of data-driven fake face detection. We show that the use of
an attention mechanism improves facial forgery detection and manipulated region
localization. | [
"cs.CV"
] |
Camera and lidar are important sensor modalities for robotics in general and
self-driving cars in particular. The sensors provide complementary information
offering an opportunity for tight sensor-fusion. Surprisingly, lidar-only
methods outperform fusion methods on the main benchmark datasets, suggesting a
gap in the literature. In this work, we propose PointPainting: a sequential
fusion method to fill this gap. PointPainting works by projecting lidar points
into the output of an image-only semantic segmentation network and appending
the class scores to each point. The appended (painted) point cloud can then be
fed to any lidar-only method. Experiments show large improvements on three
different state-of-the-art methods, Point-RCNN, VoxelNet, and PointPillars, on
the KITTI and nuScenes datasets. The painted version of PointRCNN represents a
new state of the art on the KITTI leaderboard for the bird's-eye view detection
task. In ablation studies, we examine how the effect of painting depends on the
quality and format of the semantic segmentation output, and demonstrate how
latency can be minimized through pipelining. | [
"cs.CV",
"cs.LG",
"eess.IV",
"stat.ML"
] |
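The core decoration step lends itself to a short sketch. The following is our reading of the sequential fusion described above, with assumed shapes and a given lidar-to-camera projection matrix `P` (calibration details vary by dataset):

```python
# Sketch of the PointPainting decoration step: project each lidar point into
# the image, look up per-pixel class scores from a segmentation network, and
# append ("paint") those scores onto the point features.
import numpy as np

def paint_points(points, seg_scores, P):
    """points: (N, 4) lidar points [x, y, z, reflectance]
    seg_scores: (H, W, C) softmax output of an image segmentation network
    P: (3, 4) lidar-to-image projection matrix (calibration, assumed given)
    returns: (M, 4 + C) painted points that fall inside the image."""
    H, W, C = seg_scores.shape
    xyz1 = np.hstack([points[:, :3], np.ones((len(points), 1))])  # homogeneous
    uvw = xyz1 @ P.T                                 # project to image plane
    uv = uvw[:, :2] / uvw[:, 2:3]                    # perspective divide
    u, v = uv[:, 0].astype(int), uv[:, 1].astype(int)
    valid = (uvw[:, 2] > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    scores = seg_scores[v[valid], u[valid]]          # (M, C) class scores
    return np.hstack([points[valid], scores])        # painted point cloud

# The painted cloud can then be fed to any lidar-only detector unchanged.
```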
Clustering is an unsupervised learning method that constitutes a cornerstone
of an intelligent data analysis process. It is used for the exploration of
inter-relationships among a collection of patterns, by organizing them into
homogeneous clusters. Clustering has been applied to a variety of tasks in the
field of Information Retrieval (IR) and has become one of the most active areas
of research and development. Clustering attempts to discover the set of
meaningful groups in which those within each group are more closely related to
one another than to those assigned to different groups. The resulting clusters
can provide a structure for organizing large bodies of text for efficient
browsing and searching. A wide variety of clustering algorithms has been
intensively studied for the clustering problem. Among the most common and
effective of these are the iterative optimization clustering algorithms, which
have demonstrated reasonable performance, e.g., the Expectation Maximization
(EM) algorithm and its variants, and the well-known k-means algorithm. This
paper presents an analysis of how partition-based clustering techniques (EM,
k-means, and k*-means) perform on the heartspect dataset with respect to the
following measures: purity, entropy, CPU time, cluster-wise analysis, mean
value analysis, and inter-cluster distance. Finally, the paper provides
experimental results for five clusters, strengthening the conclusion that the
cluster quality of the EM algorithm is far better than that of the k-means and
k*-means algorithms. | [
"cs.LG",
"cs.IR"
] |
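For reference, two of the evaluation measures listed above can be computed from cluster assignments and ground-truth class labels as follows; this is a generic sketch of the standard definitions, not the paper's exact evaluation code:

```python
# Purity and cluster entropy for integer cluster/class label arrays.
import numpy as np
from scipy.stats import entropy as shannon_entropy

def purity(clusters, classes):
    """Fraction of points assigned to the majority class of their cluster."""
    total, correct = len(clusters), 0
    for c in np.unique(clusters):
        members = classes[clusters == c]
        correct += np.bincount(members).max()        # majority class count
    return correct / total

def cluster_entropy(clusters, classes):
    """Size-weighted average Shannon entropy of class labels per cluster."""
    total, ent = len(clusters), 0.0
    for c in np.unique(clusters):
        members = classes[clusters == c]
        p = np.bincount(members) / len(members)
        ent += len(members) / total * shannon_entropy(p, base=2)
    return ent
```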
This paper presents an unsupervised method to learn a neural network, namely
an explainer, to interpret a pre-trained convolutional neural network (CNN),
i.e., the explainer uses interpretable visual concepts to explain features in
middle conv-layers of a CNN. Given feature maps of a conv-layer of the CNN, the
explainer performs like an auto-encoder, which decomposes the feature maps into
object-part features. The object-part features are learned to reconstruct CNN
features without much loss of information. We can consider the disentangled
representations of object parts a paraphrase of CNN features, which help people
understand the knowledge encoded by the CNN. More crucially, we learn the
explainer via knowledge distillation without using any annotations of object
parts or textures for supervision. In experiments, our method was widely used
to interpret features of different benchmark CNNs, and explainers significantly
boosted the feature interpretability without hurting the discrimination power
of the CNNs. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
This paper presents a method for future localization: to predict a set of
plausible trajectories of ego-motion given a depth image. We predict paths
avoiding obstacles, between objects, even paths turning around a corner into
space behind objects. As a byproduct of the predicted trajectories of
ego-motion, we discover in the image the empty space occluded by foreground
objects. We use no image-based features such as semantic labeling/segmentation
or object detection/recognition for this algorithm. Inspired by proxemics, we
represent the space around a person using an EgoSpace map, akin to an
illustrated tourist map, that measures a likelihood of occlusion at the
egocentric coordinate system. A future trajectory of ego-motion is modeled by a
linear combination of compact trajectory bases allowing us to constrain the
predicted trajectory. We learn the relationship between the EgoSpace map and
trajectory from the EgoMotion dataset providing in-situ measurements of the
future trajectory. A cost function that takes into account partial occlusion
due to foreground objects is minimized to predict a trajectory. This cost
function generates a trajectory that passes through the occluded space, which
allows us to discover the empty space behind the foreground objects. We
quantitatively evaluate our method to show its predictive validity and apply it
to various real-world scenes including walking, shopping, and social interactions. | [
"cs.CV"
] |
Although reinforcement learning has been successfully applied in many domains
in recent years, we still lack agents that can systematically generalize. While
relational inductive biases that fit a task can improve generalization of RL
agents, these biases are commonly hard-coded directly in the agent's neural
architecture. In this work, we show that we can incorporate relational
inductive biases, encoded in the form of relational graphs, into agents. Based
on this insight, we propose Grid-to-Graph (GTG), a mapping from grid structures
to relational graphs that carry useful spatial relational inductive biases when
processed through a Relational Graph Convolution Network (R-GCN). We show that,
with GTG, R-GCNs generalize better, both in-distribution and
out-of-distribution, than baselines based on Convolutional Neural
Networks and Neural Logic Machines on challenging procedurally generated
environments and MinAtar. Furthermore, we show that GTG produces agents that
can jointly reason over observations and environment dynamics encoded in
knowledge bases. | [
"cs.LG"
] |
Nowadays, deep neural networks are widely used in mission-critical systems
such as healthcare, self-driving vehicles, and the military, which have a
direct impact on human lives. However, the black-box nature of deep neural
networks challenges their use in mission-critical applications, raising ethical
and judicial concerns that induce a lack of trust. Explainable Artificial Intelligence
(XAI) is a field of Artificial Intelligence (AI) that promotes a set of tools,
techniques, and algorithms that can generate high-quality interpretable,
intuitive, human-understandable explanations of AI decisions. In addition to
providing a holistic view of the current XAI landscape in deep learning, this
paper provides mathematical summaries of seminal work. We start by proposing a
taxonomy and categorizing the XAI techniques based on their scope of
explanations, methodology behind the algorithms, and explanation level or usage
which helps build trustworthy, interpretable, and self-explanatory deep
learning models. We then describe the main principles used in XAI research and
present the historical timeline for landmark studies in XAI from 2007 to 2020.
After explaining each category of algorithms and approaches in detail, we then
evaluate the explanation maps generated by eight XAI algorithms on image data,
discuss the limitations of this approach, and provide potential future
directions to improve XAI evaluation. | [
"cs.CV",
"cs.AI",
"cs.LG"
] |
In this paper, we address the incremental classifier learning problem, which
suffers from catastrophic forgetting. The main reason for catastrophic
forgetting is that the past data are not available during learning. Typical
approaches keep some exemplars for the past classes and use distillation
regularization to retain the classification capability on the past classes and
balance the past and new classes. However, there are four main problems with
these approaches. First, the loss function is not efficient for classification.
Second, there is an imbalance problem between the past and new classes. Third,
the size of the pre-selected exemplar set is usually limited, and the exemplars
might not be distinguishable from unseen new classes. Fourth, the exemplars may
not be allowed to be kept for a long time due to privacy regulations. To
address these problems, we propose (a) a new loss function combining the
cross-entropy loss and distillation loss, (b) a simple way to estimate and
remove the imbalance between the old and new classes, and (c) using Generative Adversarial Networks
(GANs) to generate historical data and select representative exemplars during
generation. We believe that the data generated by GANs have much less privacy
issues than real images because GANs do not directly copy any real image
patches. We evaluate the proposed method on CIFAR-100, Flower-102, and
MS-Celeb-1M-Base datasets and extensive experiments demonstrate the
effectiveness of our method. | [
"cs.CV"
] |
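For contribution (a), a common way to combine the two losses is shown below; this is the standard hedged formulation, and the paper's exact weighting may differ:

$$
\mathcal{L} \;=\; (1-\lambda)\, \mathcal{L}_{\mathrm{CE}}\!\left(y, \sigma(z_s)\right) \;+\; \lambda\, T^2\, \mathrm{KL}\!\left(\sigma(z_t / T)\,\|\,\sigma(z_s / T)\right),
$$

where $z_s$ and $z_t$ are the student and (old-model) teacher logits, $\sigma$ is the softmax, $T$ is the distillation temperature, and $\lambda$ balances learning new classes against retaining old ones.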
We present a new learning-based method for multi-frame depth estimation from
a color video, which is a fundamental problem in scene understanding, robot
navigation or handheld 3D reconstruction. While recent learning-based methods
estimate depth at high accuracy, 3D point clouds exported from their depth maps
often fail to preserve important geometric features (e.g., corners, edges,
planes) of man-made scenes. Widely used pixel-wise depth errors do not
specifically penalize inconsistency on these features. These inaccuracies are
particularly severe when subsequent depth reconstructions are accumulated in an
attempt to scan a full environment containing man-made objects with such
features. Our depth estimation algorithm therefore introduces a Combined Normal
Map (CNM) constraint, which is designed to better preserve high-curvature
features and global planar regions. In order to further improve the depth
estimation accuracy, we introduce a new occlusion-aware strategy that
aggregates initial depth predictions from multiple adjacent views into one
final depth map and one occlusion probability map for the current reference
view. Our method outperforms the state-of-the-art in terms of depth estimation
accuracy, and preserves essential geometric features of man-made indoor scenes
much better than other algorithms. | [
"cs.CV"
] |
Using the raw data from consumer-level RGB-D cameras as input, we propose a
deep-learning based approach to efficiently generate RGB-D images with
completed information in high resolution. To process the input images in low
resolution with missing regions, new operators for adaptive convolution are
introduced in our deep-learning network that consists of three cascaded modules
-- the completion module, the refinement module and the super-resolution
module. The completion module is based on an encoder-decoder architecture,
where the features of the raw input RGB-D image are automatically extracted by the
encoding layers of a deep neural-network. The decoding layers are applied to
reconstruct the completed depth map, which is followed by a refinement module
to sharpen the boundary of different regions. For the super-resolution module,
we generate RGB-D images in high resolution by multiple layers for feature
extraction and a layer for up-sampling. Benefiting from the adaptive
convolution operators newly proposed in this paper, our results outperform
existing deep-learning based approaches for RGB-D image completion and super-resolution.
As an end-to-end approach, high fidelity RGB-D images can be generated
efficiently at the rate of around 21 frames per second. | [
"cs.CV",
"cs.RO",
"eess.IV"
] |
Spatial-temporal feature learning is of vital importance for video emotion
recognition. Previous deep network structures often focused on macro-motion
which extends over long time scales, e.g., on the order of seconds. We believe
integrating structures capturing information about both micro- and macro-motion
will benefit emotion prediction, because humans perceive both micro- and
macro-expressions. In this paper, we propose to combine micro- and macro-motion
features to improve video emotion recognition with a two-stream recurrent
network, named MIMAMO (Micro-Macro-Motion) Net. Specifically, smaller and
shorter micro-motions are analyzed by a two-stream network, while larger and
more sustained macro-motions can be well captured by a subsequent recurrent
network. Assigning specific interpretations to the roles of different parts of
the network enables us to make choices of parameters based on prior knowledge:
choices that turn out to be optimal. One of the important innovations in our
model is the use of interframe phase differences rather than optical flow as
input to the temporal stream. Compared with the optical flow, phase differences
require less computation and are more robust to illumination changes. Our
proposed network achieves state-of-the-art performance on two video emotion
datasets, the OMG emotion dataset and the Aff-Wild dataset. The most
significant gains are for arousal prediction, for which motion information is
intuitively more informative. Source code is available at
https://github.com/wtomin/MIMAMO-Net. | [
"cs.CV"
] |
Recent deep-learning based Super-Resolution (SR) methods have achieved
remarkable performance on images with known degradation. However, these methods
always fail in real-world scenes, since Low-Resolution (LR) images produced by
an idealized degradation (e.g., bicubic down-sampling) deviate from the real
source domain. The domain gap between such LR images and real-world images can
be observed clearly in the frequency density, which inspires us to explicitly
narrow the undesired gap caused by the incorrect degradation. From this point of view, we
design a novel Frequency Consistent Adaptation (FCA) that ensures the frequency
domain consistency when applying existing SR methods to the real scene. We
estimate degradation kernels from unsupervised images and generate the
corresponding LR images. To provide useful gradient information for kernel
estimation, we propose Frequency Density Comparator (FDC) by distinguishing the
frequency density of images on different scales. Based on the domain-consistent
LR-HR pairs, we train easily implemented Convolutional Neural Network (CNN) SR
models. Extensive experiments show that the proposed FCA improves the
performance of SR models in real-world settings, achieving state-of-the-art
results with high fidelity and plausible perception, thus providing a novel and
effective framework for real-world SR applications. | [
"cs.CV"
] |
Deep learning based object detectors are commonly deployed on mobile devices
to solve a variety of tasks. For maximum accuracy, each detector is usually
trained to solve one single specific task, and comes with a completely
independent set of parameters. While this guarantees high performance, it is
also highly inefficient, as each model has to be separately downloaded and
stored. In this paper we address the question: can task-specific detectors be
trained and represented as a shared set of weights, plus a very small set of
additional weights for each task? The main contributions of this paper are the
following: 1) we perform the first systematic study of parameter-efficient
transfer learning techniques for object detection problems; 2) we propose a
technique to learn a model patch with a size that is dependent on the
difficulty of the task to be learned, and validate our approach on 10 different
object detection tasks. Our approach achieves similar accuracy as previously
proposed approaches, while being significantly more compact. | [
"cs.CV"
] |
Computing the discrepancy between time series of variable sizes is
notoriously challenging. While dynamic time warping (DTW) is popularly used for
this purpose, it is not differentiable everywhere and is known to lead to bad
local optima when used as a "loss". Soft-DTW addresses these issues, but it is
not a positive definite divergence: due to the bias introduced by entropic
regularization, it can be negative and it is not minimized when the time series
are equal. We propose in this paper a new divergence, dubbed soft-DTW
divergence, which aims to correct these issues. We study its properties; in
particular, under conditions on the ground cost, we show that it is a valid
divergence: it is non-negative and minimized if and only if the two time series
are equal. We also propose a new "sharp" variant by further removing entropic
bias. We showcase our divergences on time series averaging and demonstrate
significant accuracy improvements compared to both DTW and soft-DTW on 84 time
series classification datasets. | [
"cs.LG",
"stat.ML"
] |
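In our notation, the proposed debiasing subtracts the self-comparison terms, in the same spirit as Sinkhorn divergences:

$$
D_\gamma(\mathbf{x}, \mathbf{y}) \;=\; \mathrm{SDTW}_\gamma(\mathbf{x}, \mathbf{y}) \;-\; \tfrac{1}{2}\,\mathrm{SDTW}_\gamma(\mathbf{x}, \mathbf{x}) \;-\; \tfrac{1}{2}\,\mathrm{SDTW}_\gamma(\mathbf{y}, \mathbf{y}),
$$

so that $D_\gamma(\mathbf{x}, \mathbf{x}) = 0$ and, under the stated conditions on the ground cost, $D_\gamma(\mathbf{x}, \mathbf{y}) \ge 0$ with equality if and only if $\mathbf{x} = \mathbf{y}$.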
Dirichlet processes (DP) are widely applied in Bayesian nonparametric
modeling. However, in their basic form they do not directly integrate
dependency information among data arising from space and time. In this paper,
we propose location dependent Dirichlet processes (LDDP) which incorporate
nonparametric Gaussian processes in the DP modeling framework to model such
dependencies. We develop the LDDP in the context of mixture modeling, and
develop a mean field variational inference algorithm for this mixture model.
The effectiveness of the proposed modeling framework is shown on an image
segmentation task. | [
"stat.ML",
"cs.LG"
] |
In order to perform network analysis tasks, representations that capture the
most relevant information in the graph structure are needed. However, existing
methods do not learn representations that can be interpreted in a
straightforward way and that are robust to perturbations to the graph
structure. In this work, we address these two limitations by proposing
node2coords, a representation learning algorithm for graphs, which learns
simultaneously a low-dimensional space and coordinates for the nodes in that
space. The patterns that span the low dimensional space reveal the graph's most
important structural information. The coordinates of the nodes reveal the
proximity of their local structure to the graph structural patterns. In order
to measure this proximity by taking into account the underlying graph, we
propose to use Wasserstein distances. We introduce an autoencoder that employs
a linear layer in the encoder and a novel Wasserstein barycentric layer at the
decoder. Node connectivity descriptors, that capture the local structure of the
nodes, are passed through the encoder to learn the small set of graph
structural patterns. In the decoder, the node connectivity descriptors are
reconstructed as Wasserstein barycenters of the graph structural patterns. The
optimal weights for the barycenter representation of a node's connectivity
descriptor correspond to the coordinates of that node in the low-dimensional
space. Experimental results demonstrate that the representations learned with
node2coords are interpretable, lead to node embeddings that are stable to
perturbations of the graph structure and achieve competitive or superior
results compared to state-of-the-art methods in node classification. | [
"cs.LG",
"stat.ML"
] |
Imitation learning trains a policy by mimicking expert demonstrations.
Various imitation methods have been proposed and empirically evaluated;
meanwhile, their theoretical understanding requires further study. In this
paper, we first analyze the value gap between the expert policy and imitated policies
by two imitation methods, behavioral cloning and generative adversarial
imitation. The results support that generative adversarial imitation can reduce
the compounding errors compared to behavioral cloning, and thus has a better
sample complexity. Note that by considering the environment transition model
as a dual agent, imitation learning can also be used to learn the environment
model. Therefore, based on the bounds of imitating policies, we further analyze
the performance of imitating environments. The results show that environment
models can be more effectively imitated by generative adversarial imitation
than behavioral cloning, suggesting a novel application of adversarial
imitation for model-based reinforcement learning. We hope these results could
inspire future advances in imitation learning and model-based reinforcement
learning. | [
"cs.LG"
] |
Convolutional neural networks (CNNs) have emerged as the state-of-the-art in
multiple vision tasks, including depth estimation. However, memory and
computing power requirements remain challenges to be tackled in these models.
Monocular depth estimation has significant use in robotics and virtual reality
that requires deployment on low-end devices. Training a small model from
scratch results in a significant drop in accuracy and it does not benefit from
pre-trained large models. Motivated by the literature of model pruning, we
propose a lightweight monocular depth model obtained from a large trained
model. This is achieved by removing the least important features with a novel
joint end-to-end filter pruning. We propose to learn a binary mask for each
filter to decide whether to drop the filter or not. These masks are trained
jointly to exploit relations between filters at different layers as well as
redundancy within the same layer. We show that we can achieve around a 5x
compression rate with a small drop in accuracy on the KITTI driving dataset. We
also show that masking can improve accuracy over the baseline with fewer
parameters, even without enforcing compression loss. | [
"cs.CV"
] |
Efficiently finding similar segments or motifs in time series data is a
fundamental task that, due to the ubiquity of these data, is present in a wide
range of domains and situations. Because of this, countless solutions have been
devised but, to date, none of them seems to be fully satisfactory and flexible.
In this article, we propose an innovative standpoint and present a solution
coming from it: an anytime multimodal optimization algorithm for time series
motif discovery based on particle swarms. By considering data from a variety of
domains, we show that this solution is extremely competitive when compared to
the state-of-the-art, obtaining comparable motifs in considerably less time
using minimal memory. In addition, we show that it is robust to different
implementation choices and see that it offers an unprecedented degree of
flexibility with regard to the task. All these qualities make the presented
solution stand out as one of the most prominent candidates for motif discovery
in long time series streams. Besides, we believe the proposed standpoint can be
exploited in further time series analysis and mining tasks, widening the scope
of research and potentially yielding novel effective solutions. | [
"cs.LG",
"cs.NE"
] |
Deep learning models have gained great popularity in statistical modeling
because they lead to very competitive regression models, often outperforming
classical statistical models such as generalized linear models. The
disadvantage of deep learning models is that their solutions are difficult to
interpret and explain, and variable selection is not easily possible because
deep learning models solve feature engineering and variable selection
internally in a nontransparent way. Inspired by the appealing structure of
generalized linear models, we propose a new network architecture that shares
similar features as generalized linear models, but provides superior predictive
power benefiting from the art of representation learning. This new architecture
allows for variable selection of tabular data and for interpretation of the
calibrated deep learning model; in fact, our approach provides an additive
decomposition in the spirit of Shapley values and integrated gradients. | [
"cs.LG",
"cs.AI",
"q-fin.ST",
"stat.AP",
"stat.ML",
"62, 68"
] |
This paper provides a unifying view of a wide range of problems of interest
in machine learning by framing them as the minimization of functionals defined
on the space of probability measures. In particular, we show that generative
adversarial networks, variational inference, and actor-critic methods in
reinforcement learning can all be seen through the lens of our framework. We
then discuss a generic optimization algorithm for our formulation, called
probability functional descent (PFD), and show how this algorithm recovers
existing methods developed independently in the settings mentioned earlier. | [
"cs.LG",
"stat.ML"
] |
We assume data independently sampled from a mixture distribution on the unit
ball of the D-dimensional Euclidean space with K+1 components: the first
component is a uniform distribution on that ball representing outliers and the
other K components are uniform distributions along K d-dimensional linear
subspaces restricted to that ball. We study both the simultaneous recovery of
all K underlying subspaces and the recovery of the best l0 subspace (i.e., with
largest number of points) by minimizing the lp-averaged distances of data
points from d-dimensional subspaces of the D-dimensional space. Unlike other lp
minimization problems, this minimization is non-convex for all p>0 and thus
requires different methods for its analysis. We show that if 0<p <= 1, then
both all underlying subspaces and the best l0 subspace can be precisely
recovered by lp minimization with overwhelming probability. This result extends
to additive homoscedastic uniform noise around the subspaces (i.e., uniform
distribution in a strip around them), yielding near recovery with an error
proportional to the noise level. On the other hand, if K>1 and p>1, then we
show that both all underlying subspaces and the best l0 subspace cannot be
recovered and even nearly recovered. Further relaxations are also discussed. We
use the results of this paper for partially justifying recent effective
algorithms for modeling data by mixtures of multiple subspaces as well as for
discussing the effect of using variants of lp minimizations in RANSAC-type
strategies for single subspace recovery. | [
"stat.ML"
] |
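Written out in our notation, the lp-averaged objective minimized above is

$$
\min_{L_1, \dots, L_K \in G(D, d)} \; \sum_{x \in \mathcal{X}} \; \min_{1 \le k \le K} \operatorname{dist}^p(x, L_k),
$$

where $G(D, d)$ denotes the set of $d$-dimensional linear subspaces of $\mathbb{R}^D$ and $\operatorname{dist}(x, L_k)$ is the Euclidean distance from the data point $x$ to $L_k$. Both the inner minimum over subspaces and the non-convex domain $G(D, d)$ contribute to the non-convexity of the problem for all $p > 0$.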
Federated learning is emerging as a machine learning technique that trains a
model across multiple decentralized parties. It is renowned for preserving
privacy as the data never leaves the computational devices, and recent
approaches further enhance its privacy by encrypting the transferred messages.
However, we found that despite these efforts, federated learning
remains privacy-threatening, due to its interactive nature across different
parties. In this paper, we analyze the privacy threats in industrial-level
federated learning frameworks with secure computation, and reveal such threats
widely exist in typical machine learning models such as linear regression,
logistic regression and decision tree. For the linear and logistic regression,
we show through theoretical analysis that it is possible for the attacker to
invert the entire private input of the victim, given very little information.
For the decision tree model, we launch an attack to infer the range of the
victim's private inputs. All attacks are evaluated on popular federated learning
frameworks and real-world datasets. | [
"cs.LG",
"cs.AI",
"cs.CR"
] |
In Computer Vision, edge detection is one of the favored approaches for
feature and object detection in images, since it provides information about
objects' boundaries. Other region-based approaches use probabilistic
analysis such as clustering and Markov random fields, but those methods cannot
be used to analyze edges and their interaction. In fact, only image
segmentation can produce regions based on edges, but it requires thresholding
by simply separating the regions into binary in-out information. Hence, there
is currently a gap between edge-based and region-based algorithms, since edges
cannot be used to study the properties of a region and vice versa. The
objective of this paper is to present a novel spatial probability analysis that
allows determining the probability of inclusion inside a set of partial
contours (strokes). To answer this objective, we developed a new approach that
uses electromagnetic convolutions and repulsion optimization to compute the
required probabilities. Hence, it becomes possible to generate a continuous
space of probability based only on the edge information, thus bridging the gap
between the edge-based methods and the region-based methods. The developed
method is consistent with the fundamental properties of inclusion probabilities
and its results are validated by comparing an image with the probability-based
estimation given by our algorithm. The method can also be generalized to take
into consideration the intensity of the edges or to be used for 3D shapes. This
is the first documented method that allows computing a space of probability
based on interacting edges, which opens the path to broader applications such
as image segmentation and contour completion. | [
"cs.CV",
"cs.NA",
"math.NA"
] |
The competition "Predicting Generalization in Deep Learning (PGDL)" aims to
provide a platform for rigorous study of generalization of deep learning models
and offer insight into the progress of understanding and explaining these
models. This report presents the solution that was submitted by the user
\emph{smeznar}, which achieved eighth place in the competition. In the
proposed approach, we create simple metrics and find their best combination
with automatic testing on the provided dataset, exploring how combinations of
various properties of the input neural network architectures can be used for
the prediction of their generalization. | [
"cs.LG",
"stat.ML"
] |
Deep networks for visual recognition are known to leverage "easy to
recognise" portions of objects such as faces and distinctive texture patterns.
The lack of a holistic understanding of objects may increase fragility and
overfitting. In recent years, several papers have proposed to address this
issue by means of occlusions as a form of data augmentation. However, successes
have been limited to tasks such as weak localization and model interpretation,
but no benefit was demonstrated on image classification on large-scale
datasets. In this paper, we show that, by using a simple technique based on
batch augmentation, occlusions as data augmentation can result in better
performance on ImageNet for high-capacity models (e.g., ResNet50). We also show
that varying amounts of occlusions used during training can be used to study
the robustness of different neural network architectures. | [
"cs.CV",
"cs.LG"
] |
The demand for probabilistic time series forecasting has recently arisen in
various dynamic system scenarios, for example, system identification and
prognostics and health management of machines. To this end, we combine advances
in both deep generative models and state space models (SSM) to come up with a
novel, data-driven deep probabilistic sequence model. Specifically, we
follow the popular encoder-decoder generative structure to build the recurrent
neural networks (RNN) assisted variational sequence model on an augmented
recurrent input space, which could induce rich stochastic sequence dependency.
Besides, in order to alleviate the issue of inconsistency between training and
predicting as well as improving the mining of dynamic patterns, we (i) propose
using a hybrid output as the input at the next time step, which brings training and
predicting into alignment; and (ii) further devise a generalized
auto-regressive strategy that encodes all the historical dependencies at
current time step. Thereafter, we first investigate the methodological
characteristics of the proposed deep probabilistic sequence model on toy cases,
and then comprehensively demonstrate the superiority of our model against
existing deep probabilistic SSM models through extensive numerical experiments
on eight system identification benchmarks from various dynamic systems.
Finally, we apply our sequence model to a real-world centrifugal compressor
sensor data forecasting problem, and again verify its outstanding performance
by quantifying the time series predictive distribution. | [
"cs.LG",
"stat.ML"
] |
In this paper, we propose a novel end-to-end trainable Video Question
Answering (VideoQA) framework with three major components: 1) a new
heterogeneous memory which can effectively learn global context information
from appearance and motion features; 2) a redesigned question memory which
helps understand the complex semantics of questions and highlights queried
subjects; and 3) a new multimodal fusion layer which performs multi-step
reasoning by attending to relevant visual and textual hints with self-updated
attention. Our VideoQA model first generates the global context-aware visual
and textual features respectively by interacting current inputs with memory
contents. After that, it makes the attentional fusion of the multimodal visual
and textual representations to infer the correct answer. Multiple cycles of
reasoning can be made to iteratively refine attention weights of the multimodal
data and improve the final representation of the QA pair. Experimental results
demonstrate our approach achieves state-of-the-art performance on four VideoQA
benchmark datasets. | [
"cs.CV"
] |
Representation learning promises to unlock deep learning for the long tail of
vision tasks without expensive labelled datasets. Yet, the absence of a unified
evaluation for general visual representations hinders progress. Popular
protocols are often too constrained (linear classification), limited in
diversity (ImageNet, CIFAR, Pascal-VOC), or only weakly related to
representation quality (ELBO, reconstruction error). We present the Visual Task
Adaptation Benchmark (VTAB), which defines good representations as those that
adapt to diverse, unseen tasks with few examples. With VTAB, we conduct a
large-scale study of many popular publicly-available representation learning
algorithms. We carefully control confounders such as architecture and tuning
budget. We address questions like: How effective are ImageNet representations
beyond standard natural datasets? How do representations trained via generative
and discriminative models compare? To what extent can self-supervision replace
labels? And, how close are we to general visual representations? | [
"cs.CV",
"cs.LG",
"stat.ML"
] |
A Hidden Markov Model (HMM) combined with a Gaussian Process (GP) emission can
be effectively used to estimate the hidden state from a sequence of complex
input-output relational observations. Especially when the spectral mixture (SM)
kernel is used for the GP emission, we call this model a hybrid HMM-GPSM. This
model can effectively model sequences of time-series data. However, because of
the large number of parameters in the SM kernel, this model cannot be trained
effectively on a large volume of data having (1) long sequences of state
transitions and (2) a large number of time-series data points in each sequence.
This paper proposes a scalable learning method for the HMM-GPSM. To effectively
train the model on long sequences, the proposed method employs a Stochastic
Variational Inference (SVI) approach. Also, to effectively process the large
number of data points in each time series, we approximate the SM kernel using
Reparametrized Random Fourier Features (R-RFF). The combination of these two
techniques significantly reduces the training time. We validate the proposed
learning method in terms of its hidden-state estimation accuracy and
computation time using large-scale synthetic and real datasets with missing values. | [
"cs.LG",
"stat.ML"
] |
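The R-RFF approximation builds on the classic random Fourier feature construction, sketched below for an RBF kernel (whose spectral density is a single Gaussian); the spectral mixture kernel in the abstract would instead sample frequencies from a Gaussian mixture, and the authors' reparametrization is omitted here:

```python
# Random Fourier features: phi(x) @ phi(y) approximates a stationary kernel,
# with frequencies W sampled from the kernel's spectral density (Bochner).
import numpy as np

def rff_features(X, num_feats, lengthscale, rng):
    """phi(x) such that phi(x) @ phi(y) ~ exp(-||x - y||^2 / (2 l^2))."""
    d = X.shape[1]
    W = rng.normal(0, 1.0 / lengthscale, size=(d, num_feats))  # spectral samples
    b = rng.uniform(0, 2 * np.pi, size=num_feats)
    return np.sqrt(2.0 / num_feats) * np.cos(X @ W + b)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
phi = rff_features(X, num_feats=500, lengthscale=1.0, rng=rng)
K_approx = phi @ phi.T                       # approximates the exact kernel
sq = ((X[:, None] - X[None]) ** 2).sum(-1)
K_exact = np.exp(-sq / 2.0)
print(np.abs(K_approx - K_exact).max())      # small approximation error
```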
Knowledge distillation is a generalized logits matching technique for model
compression. Their equivalence was previously established under the conditions
of $\textit{infinity temperature}$ and $\textit{zero-mean normalization}$. In
this paper, we prove that with only $\textit{infinity temperature}$, the effect
of knowledge distillation equals logits matching with an extra regularization.
Furthermore, we reveal that an additional weaker condition --
$\textit{equal-mean initialization}$ rather than the original
$\textit{zero-mean normalization}$ already suffices to set up the equivalence.
The key to our proof is the observation that in modern neural networks with the
cross-entropy loss and softmax activation, the mean of back-propagated gradient
on logits always keeps zero. | [
"cs.LG",
"cs.AI"
] |
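The zero-mean gradient fact invoked here can be verified in one line of standard softmax/cross-entropy calculus (our notation): with logits $z$, probabilities $p_i = e^{z_i} / \sum_j e^{z_j}$, and any normalized target $y$ (one-hot or a teacher's soft distribution),

$$
\frac{\partial \mathcal{L}}{\partial z_i} = p_i - y_i, \qquad \sum_i \frac{\partial \mathcal{L}}{\partial z_i} = \sum_i p_i - \sum_i y_i = 1 - 1 = 0,
$$

so the back-propagated gradient on the logits always sums, and hence averages, to zero.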
SegBlocks reduces the computational cost of existing neural networks, by
dynamically adjusting the processing resolution of image regions based on their
complexity. Our method splits an image into blocks and downsamples blocks of
low complexity, reducing the number of operations and memory consumption. A
lightweight policy network, selecting the complex regions, is trained using
reinforcement learning. In addition, we introduce several modules implemented
in CUDA to process images in blocks. Most importantly, our novel BlockPad
module prevents the feature discontinuities at block borders from which
existing methods suffer, while keeping memory consumption under control. Our experiments on
Cityscapes and Mapillary Vistas semantic segmentation show that dynamically
processing images offers a better accuracy versus complexity trade-off compared
to static baselines of similar complexity. For instance, our method reduces the
number of floating-point operations of SwiftNet-RN18 by 60% and increases the
inference speed by 50%, with only a 0.3% decrease in mIoU accuracy on Cityscapes. | [
"cs.CV"
] |
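The block splitting and adaptive downsampling can be mimicked in a few lines of PyTorch; this toy sketch replaces the learned policy network with a cheap gradient-magnitude proxy and ignores the CUDA BlockPad module entirely:

```python
# Split an image into blocks, score each block's complexity, and downsample
# the low-complexity blocks (illustrative stand-in for the learned policy).
import torch
import torch.nn.functional as F

def split_and_downsample(img, block=32, keep_ratio=0.5):
    """img: (C, H, W). Returns high-res blocks and downsampled low-res blocks."""
    C, H, W = img.shape
    blocks = (img.unfold(1, block, block).unfold(2, block, block)
                 .permute(1, 2, 0, 3, 4).reshape(-1, C, block, block))
    # complexity proxy: mean absolute horizontal gradient per block
    complexity = (blocks[..., :, 1:] - blocks[..., :, :-1]).abs().mean((1, 2, 3))
    k = int(keep_ratio * len(blocks))
    keep = complexity.topk(k).indices                 # complex: full resolution
    drop = complexity.topk(len(blocks) - k, largest=False).indices
    low = F.interpolate(blocks[drop], scale_factor=0.5, mode="bilinear",
                        align_corners=False)          # simple: half resolution
    return blocks[keep], low
```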
Knowledge distillation is a strategy for training a student network under the
guidance of the soft output of a teacher network. It has been a successful
method of model compression and knowledge transfer. However, knowledge
distillation currently lacks a convincing theoretical understanding. On the
other hand, recent findings on the neural tangent kernel enable us to approximate a wide neural
network with a linear model of the network's random features. In this paper, we
theoretically analyze the knowledge distillation of a wide neural network.
First we provide a transfer risk bound for the linearized model of the network.
Then we propose a metric of the task's training difficulty, called data
inefficiency. Based on this metric, we show that for a perfect teacher, a high
ratio of teacher's soft labels can be beneficial. Finally, for the case of
imperfect teacher, we find that hard labels can correct teacher's wrong
prediction, which explains the practice of mixing hard and soft labels. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
This paper presents a novel approach for job shop scheduling problems using
deep reinforcement learning. To account for the complexity of the production
environment, we employ graph neural networks to model the various relations
within production environments. Furthermore, we cast the JSSP as a distributed
optimization problem in which learning agents are individually assigned to
resources which allows for higher flexibility with respect to changing
production environments. The proposed distributed RL agents used to optimize
production schedules for single resources run together with a
co-simulation framework of the production environment to obtain the required
amount of data. The approach is applied to a multi-robot environment and a
complex production scheduling benchmark environment. The initial results
underline the applicability and performance of the proposed method. | [
"cs.LG",
"stat.ML"
] |
Knowledge graph (KG) embedding is well-known in learning representations of
KGs. Many models have been proposed to learn the interactions between entities
and relations of the triplets. However, long-term information among multiple
triplets is also important for KGs. In this work, based on the relational paths,
which are composed of a sequence of triplets, we define the Interstellar as a
recurrent neural architecture search problem for the short-term and long-term
information along the paths. First, we analyze the difficulty of using a
unified model to work as the Interstellar. Then, we propose to search for
recurrent architecture as the Interstellar for different KG tasks. A case study
on synthetic data illustrates the importance of the defined search problem.
Experiments on real datasets demonstrate the effectiveness of the searched
models and the efficiency of the proposed hybrid-search algorithm. | [
"cs.LG",
"stat.ML"
] |
Content-based image retrieval aims to find similar images in a large-scale
dataset given a query image. Generally, the similarity between the
representative features of the query image and the dataset images is used to
rank the images for retrieval. In the early days, various hand-designed feature
descriptors were investigated based on visual cues such as color, texture, and
shape that represent images. However, over the past decade deep learning has
emerged as a dominant alternative to hand-designed feature engineering; it
learns features automatically from data. This paper presents
a comprehensive survey of deep learning based developments in the past decade
for content based image retrieval. The categorization of existing
state-of-the-art methods from different perspectives is also performed for
greater understanding of the progress. The taxonomy used in this survey covers
different supervision, different networks, different descriptor types, and
different retrieval types. A performance analysis is also performed using the
state-of-the-art methods. The insights are also presented for the benefit of
the researchers to observe the progress and to make the best choices. The
survey presented in this paper will help in further research progress in image
retrieval using deep learning. | [
"cs.CV",
"cs.AI",
"cs.MM"
] |
Autonomous agents need large repertoires of skills to act reasonably on new
tasks that they have not seen before. However, acquiring these skills using
only a stream of high-dimensional, unstructured, and unlabeled observations is
a tricky challenge for any autonomous agent. Previous methods have used
variational autoencoders to encode a scene into a low-dimensional vector that
can be used as a goal for an agent to discover new skills. Nevertheless, in
compositional/multi-object environments it is difficult to disentangle all the
factors of variation into such a fixed-length representation of the whole
scene. We propose to use object-centric representations as a modular and
structured observation space, which is learned with a compositional generative
world model. We show that the structure in the representations in combination
with goal-conditioned attention policies helps the autonomous agent to discover
and learn useful skills. These skills can be further combined to address
compositional tasks like the manipulation of several different objects. | [
"cs.LG"
] |
Object detection and classification using video is necessary for intelligent
planning and navigation on a mobile robot. However, current methods can be too
slow or not sufficient for distinguishing multiple classes. Techniques that
rely on binary (foreground/background) labels incorrectly identify areas with
multiple overlapping objects as a single segment. We propose two Hierarchical
Markov Random Field models in an effort to distinguish connected objects using
tiered, binary label sets. Near-realtime performance has been achieved using
efficient optimization methods that run at up to 11 frames per second on a
dual-core 2.2 GHz processor. Evaluation of both models is done using footage taken
from a robot obstacle course at the 2010 Intelligent Ground Vehicle
Competition. | [
"cs.CV"
] |
An approach to the time-accurate prediction of chaotic solutions is to learn
temporal patterns from data. Echo State Networks (ESNs), which are a
class of Reservoir Computing, can accurately predict the chaotic dynamics well
beyond the predictability time. Existing studies, however, also showed that
small changes in the hyperparameters may markedly affect the network's
performance. The aim of this paper is to assess and improve the robustness of
Echo State Networks for the time-accurate prediction of chaotic solutions. The
goal is three-fold. First, we investigate the robustness of routinely used
validation strategies. Second, we propose the Recycle Validation, and the
chaotic versions of existing validation strategies, to specifically tackle the
forecasting of chaotic systems. Third, we compare Bayesian optimization with
the traditional Grid Search for optimal hyperparameter selection. Numerical
tests are performed on two prototypical nonlinear systems that have both
chaotic and quasiperiodic solutions. Both model-free and model-informed Echo
State Networks are analysed. By comparing the network's robustness in learning
chaotic versus quasiperiodic solutions, we highlight fundamental challenges in
learning chaotic solutions. The proposed validation strategies, which are based
on the dynamical systems properties of chaotic time series, are shown to
outperform the state-of-the-art validation strategies. Because the strategies
are principled, being based on chaos theory concepts such as the Lyapunov time,
they can be applied to other Recurrent Neural Network architectures with little
modification. This work opens up new possibilities for the robust design and
application of Echo State Networks, and Recurrent Neural Networks, to the
time-accurate prediction of chaotic systems. | [
"cs.LG"
] |
The ubiquity of deep neural networks (DNNs), cloud-based training, and
transfer learning is giving rise to a new cybersecurity frontier in which
insecure DNNs have `structural malware' (i.e., compromised weights and
activation pathways). In particular, DNNs can be designed to have backdoors
that allow an adversary to easily and reliably fool an image classifier by
adding a pattern of pixels called a trigger. It is generally difficult to
detect backdoors, and existing detection methods are computationally expensive
and require extensive resources (e.g., access to the training data). Here, we
propose a rapid feature-generation technique that quantifies the robustness of
a DNN, `fingerprints' its nonlinearity, and allows us to detect backdoors (if
present). Our approach involves studying how a DNN responds to noise-infused
images with varying noise intensity, which we summarize with titration curves.
We find that DNNs with backdoors are more sensitive to input noise and respond
in a characteristic way that reveals the backdoor and where it leads (its
`target'). Our empirical results demonstrate that we can accurately detect
backdoors with high confidence orders-of-magnitude faster than existing
approaches (seconds versus hours). | [
"cs.LG",
"stat.ML"
] |
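Our reading of the titration procedure admits a short sketch: perturb inputs with increasing noise and record how often the model's prediction flips. Everything here (function names, the flip-rate summary) is illustrative rather than the authors' exact feature set:

```python
# Titration-curve sketch: prediction flip rate versus input noise level.
# `model` is any image classifier returning logits (e.g., a torchvision net).
import torch

@torch.no_grad()
def titration_curve(model, images, noise_levels, trials=8):
    """Return, per noise level, the fraction of predictions flipped by noise."""
    model.eval()
    base = model(images).argmax(dim=1)
    curve = []
    for sigma in noise_levels:
        flipped = 0.0
        for _ in range(trials):
            noisy = images + sigma * torch.randn_like(images)
            flipped += (model(noisy).argmax(dim=1) != base).float().mean().item()
        curve.append(flipped / trials)
    return curve  # plot against noise_levels to "fingerprint" the DNN
```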
Knowledge Distillation (KD) has been used in image classification for model
compression. However, few studies apply this technique to single-stage object
detectors. Focal loss shows that the accumulated errors of easily-classified
samples dominate the overall loss in the training process. This problem is also
encountered when applying KD in the detection task. For KD, the teacher-defined
hard samples are far more important than any others. We propose ADL to address
this issue by adaptively mimicking the teacher's logits, with more attention
paid on two types of hard samples: hard-to-learn samples predicted by teacher
with low certainty and hard-to-mimic samples with a large gap between the
teacher's and the student's prediction. ADL enlarges the distillation loss for
hard-to-learn and hard-to-mimic samples and reduces distillation loss for the
dominant easy samples, enabling distillation to work on a single-stage
detector for the first time, even when the student and the teacher are identical.
Besides, ADL is effective in both the supervised setting and the
semi-supervised setting, even when the labeled data and unlabeled data are from
different distributions. For distillation on unlabeled data, ADL achieves
better performance than existing data distillation which simply utilizes hard
targets, making the student detector surpass its teacher. On the COCO database,
semi-supervised adaptive distillation (SAD) enables a student detector with a
ResNet-50 backbone to surpass its teacher with a ResNet-101 backbone, while the
student has half of the teacher's computational complexity. The code is
available at https://github.com/Tangshitao/Semi-supervised-Adaptive-Distillation | [
"cs.CV"
] |
In this paper, we propose a novel pooling layer for graph neural networks
based on maximizing the mutual information between the pooled graph and the
input graph. Since the maximum mutual information is difficult to compute, we
employ the Shannon capacity of a graph as an inductive bias to our pooling
method. More precisely, we show that the input graph to the pooling layer can
be viewed as a representation of a noisy communication channel. For such a
channel, sending the symbols belonging to an independent set of the graph
yields a reliable and error-free transmission of information. We show that
reaching the maximum mutual information is equivalent to finding a maximum
weight independent set of the graph where the weights convey entropy contents.
Through this communication theoretic standpoint, we provide a distinct
perspective for posing the problem of graph pooling as maximizing the
information transmission rate across a noisy communication channel, implemented
by a graph neural network. We evaluate our method, referred to as Maximum
Entropy Weighted Independent Set Pooling (MEWISPool), on graph classification
tasks and the combinatorial optimization problem of the maximum independent
set. Empirical results demonstrate that our method achieves state-of-the-art
or competitive results on graph classification tasks and the
maximum independent set problem in several benchmark datasets. | [
"cs.LG",
"cs.AI",
"cs.IT",
"cs.NE",
"math.IT"
] |
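To make the combinatorial core concrete, here is a classical greedy heuristic for a maximum-weight independent set with entropy-based node weights. It is a non-learned stand-in for the selection that MEWISPool realizes with a graph neural network; the weighting scheme is an assumption.

```python
# Greedy maximum-weight independent set with entropy node weights: a
# non-learned stand-in for the selection step that MEWISPool implements with
# a graph neural network. Assumes integer node ids 0..n-1.
import networkx as nx
import numpy as np

def entropy_weights(features):
    # Normalize each node's non-negative feature vector into a distribution
    # and score it by Shannon entropy (assumed weighting).
    p = np.abs(features)
    p = p / (p.sum(axis=1, keepdims=True) + 1e-8)
    return -(p * np.log(p + 1e-8)).sum(axis=1)

def greedy_mwis(G, weights):
    # Repeatedly pick the node with the best weight/(degree + 1) ratio and
    # remove it together with its neighbours.
    H = G.copy()
    chosen = []
    while H.number_of_nodes() > 0:
        v = max(H.nodes, key=lambda u: weights[u] / (H.degree[u] + 1))
        chosen.append(v)
        H.remove_nodes_from(list(H.neighbors(v)) + [v])
    return chosen
```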
Existing methods for single-image raindrop removal either have poor
robustness or suffer from heavy parameter burdens. In this paper, we propose a new
Adjacent Aggregation Network (A^2Net) with lightweight architectures to remove
raindrops from single images. Instead of directly cascading convolutional
layers, we design an adjacent aggregation architecture that better fuses
features to generate rich representations, leading to high-quality image
reconstruction. To further simplify the learning process, we utilize
problem-specific knowledge to force the network to focus on the luminance channel
in the YUV color space instead of all RGB channels. By combining adjacent
aggregating operation with color space transformation, the proposed A^2Net can
achieve state-of-the-art performance on raindrop removal with a significant
reduction in parameters. | [
"cs.CV"
] |
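The color-space trick above is easy to sketch: restore only the luminance channel and pass chrominance through. `restore_luminance` is a placeholder for any learned model; the conversions use standard OpenCV calls.

```python
# Minimal sketch of the luminance-only processing described above.
# `restore_luminance` stands in for a learned raindrop-removal network and
# must return an array with the same shape and dtype as its input.
import cv2

def remove_raindrops_luminance_only(rgb, restore_luminance):
    yuv = cv2.cvtColor(rgb, cv2.COLOR_RGB2YUV)
    y, u, v = cv2.split(yuv)
    y_clean = restore_luminance(y)          # network sees 1 channel, not 3
    yuv_clean = cv2.merge([y_clean, u, v])  # chrominance passes through
    return cv2.cvtColor(yuv_clean, cv2.COLOR_YUV2RGB)
```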
Localizing natural language phrases in images is a challenging problem that
requires joint understanding of both the textual and visual modalities. In the
unsupervised setting, the lack of supervisory signals exacerbates this difficulty.
In this paper, we propose a novel framework for unsupervised visual grounding
which uses concept learning as a proxy task to obtain self-supervision. The
simple intuition behind this idea is to encourage the model to localize to
regions which can explain some semantic property in the data, in our case, the
property being the presence of a concept in a set of images. We present
thorough quantitative and qualitative experiments to demonstrate the efficacy
of our approach and show a 5.6% improvement over the current state of the art
on the Visual Genome dataset, a 5.8% improvement on the ReferItGame dataset, and
performance comparable to the state of the art on the Flickr30k dataset. | [
"cs.CV"
] |
This paper reports Deep LOGISMOS approach to 3D tumor segmentation by
incorporating boundary information derived from deep contextual learning into
LOGISMOS - layered optimal graph image segmentation of multiple objects and
surfaces. Accurate and reliable tumor segmentation is essential to tumor growth
analysis and treatment selection. A fully convolutional network (FCN), UNet, is
first trained using three adjacent 2D patches centered at the tumor, providing
contextual UNet segmentation and probability map for each 2D patch. The UNet
segmentation is then refined by Gaussian Mixture Model (GMM) and morphological
operations. The refined UNet segmentation is used to provide the initial shape
boundary to build a segmentation graph. The cost for each node of the graph is
determined by the UNet probability maps. Finally, a max-flow algorithm is
employed to find the globally optimal solution thus obtaining the final
segmentation. For evaluation, we applied the method to pancreatic tumor
segmentation on a dataset of 51 CT scans, among which 30 scans were used for
training and 21 for testing. With Deep LOGISMOS, DICE Similarity Coefficient
(DSC) and Relative Volume Difference (RVD) reached 83.2+-7.8% and 18.6+-17.4%
respectively; both are significantly improved (p<0.05) compared with contextual
UNet and/or LOGISMOS alone. | [
"cs.CV"
] |
Recent advances in Generative Adversarial Networks (GANs) have shown
impressive results for the task of facial expression synthesis. The most
successful architecture is StarGAN, which conditions the GAN's generation
process on images of
a specific domain, namely a set of images of persons sharing the same
expression. While effective, this approach can only generate a discrete number
of expressions, determined by the content of the dataset. To address this
limitation, in this paper, we introduce a novel GAN conditioning scheme based
on Action Units (AU) annotations, which describes in a continuous manifold the
anatomical facial movements defining a human expression. Our approach allows
controlling the magnitude of activation of each AU and combining several of them.
Additionally, we propose a fully unsupervised strategy to train the model, which
only requires images annotated with their activated AUs, and exploit attention
mechanisms that make our network robust to changing backgrounds and lighting
conditions. Extensive evaluation shows that our approach goes beyond competing
conditional generators both in its capability to synthesize a much wider range
of expressions governed by anatomically feasible muscle movements and in its
capacity to deal with images in the wild. | [
"cs.CV"
] |
In a number of situations, collecting a function value for every data point
may be prohibitively expensive, and random sampling ignores any structure in
the underlying data. We introduce a scalable optimization algorithm with no
correction steps (in contrast to Frank-Wolfe and its variants), a variant of
gradient ascent for coreset selection in graphs, which greedily selects a
weighted subset of vertices that are deemed most important to sample. Our
algorithm estimates the mean of the function by taking a weighted sum only at
these vertices, and we provably bound the estimation error in terms of the
location and weights of the selected vertices in the graph. In addition, we
consider the case where nodes have different selection costs and provide bounds
on the quality of the low-cost selected coresets. We demonstrate the benefits
of our algorithm on semi-supervised node classification with graph
convolutional neural networks, point clouds, and structured graphs, as well as
sensor placement where the cost of placing sensors depends on the location of
the placement. We also elucidate that the empirical convergence of our proposed
method is faster than random selection and various clustering methods while
still respecting sensor placement cost. The paper concludes with validation of
the developed algorithm on both synthetic and real datasets, demonstrating that
it outperforms the current state of the art. | [
"cs.LG",
"stat.ML"
] |
Recent advances in digital imaging have transformed computer vision and
machine learning into new tools for analyzing pathology images. This trend
could automate some of the tasks in diagnostic pathology and alleviate the
pathologist's workload. The final step of any cancer diagnosis procedure is
performed by the expert pathologist. These experts use microscopes with high
level of optical magnification to observe minute characteristics of the tissue
acquired through biopsy and fixed on glass slides. Switching between different
magnifications, and finding the magnification level at which they identify the
presence or absence of malignant tissues is important. As the majority of
pathologists still use light microscopy rather than digital scanners, in many
instances a camera mounted on the microscope is used to capture snapshots from
significant field-of-views. Repositories of such snapshots usually do not
contain the magnification information. In this paper, we extract deep features
of the images available on TCGA dataset with known magnification to train a
classifier for magnification recognition. We compared the results with LBP, a
well-known handcrafted feature extraction method. The proposed approach
achieved a mean accuracy of 96% when a multi-layer perceptron was trained as a
classifier. | [
"cs.CV",
"I.4.9"
] |
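The classification stage of the pipeline above is standard; a skeletal scikit-learn version could look as follows. The precomputed feature and label files are assumed placeholders, not the paper's artifacts.

```python
# Skeletal version of the magnification-recognition stage: precomputed deep
# features in, multi-layer perceptron out. File names and the feature
# extractor are assumed placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

features = np.load("deep_features.npy")               # (n_images, d), assumed
magnifications = np.load("magnification_labels.npy")  # e.g. 5x/10x/20x/40x

X_tr, X_te, y_tr, y_te = train_test_split(
    features, magnifications, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(256,), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
print("magnification accuracy:", clf.score(X_te, y_te))
```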
Recent efforts have shown promising results for person re-identification by
designing part-based architectures to allow a neural network to learn
discriminative representations from semantically coherent parts. Some efforts
use soft attention to reallocate distant outliers to their most similar parts,
while others adjust part granularity to incorporate more distant positions for
learning the relationships. Others seek to generalize part-based methods by
introducing a dropout mechanism on consecutive regions of the feature map to
enhance distant region relationships. However, only a few prior efforts model the
distant or non-local positions of the feature map directly for the person re-ID
task. In this paper, we propose a novel attention mechanism to directly model
long-range relationships via second-order feature statistics. When combined
with a generalized DropBlock module, our method performs equally to or better
than state-of-the-art results for mainstream person re-identification datasets,
including Market1501, CUHK03, and DukeMTMC-reID. | [
"cs.CV",
"cs.AI",
"cs.LG"
] |
Transformer neural networks have achieved state-of-the-art results for
unstructured data such as text and images but their adoption for
graph-structured data has been limited. This is partly due to the difficulty of
incorporating complex structural information in the basic transformer
framework. We propose a simple yet powerful extension to the transformer -
residual edge channels. The resultant framework, which we call Edge-augmented
Graph Transformer (EGT), can directly accept, process and output structural
information as well as node information. It allows us to use global
self-attention, the key element of transformers, directly for graphs and comes
with the benefit of long-range interaction among nodes. Moreover, the edge
channels allow the structural information to evolve from layer to layer, and
prediction tasks on edges/links can be performed directly from the output
embeddings of these channels. In addition, we introduce a generalized
positional encoding scheme for graphs based on Singular Value Decomposition
which can improve the performance of EGT. Our framework, which relies on global
node feature aggregation, achieves better performance compared to
Convolutional/Message-Passing Graph Neural Networks, which rely on local
feature aggregation within a neighborhood. We verify the performance of EGT in
a supervised learning setting on a wide range of experiments on benchmark
datasets. Our findings indicate that convolutional aggregation is not an
essential inductive bias for graphs and global self-attention can serve as a
flexible and adaptive alternative. | [
"cs.LG"
] |
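A toy single-head version of edge-augmented attention clarifies the mechanism: edge channels add a bias to the attention logits and receive a residual update each layer. The shapes and the update rule below are our assumptions, not the paper's exact specification.

```python
# Toy single-head sketch of edge-augmented attention: edge channels bias the
# attention logits and are updated residually. The exact EGT update rule and
# gating are not reproduced here.
import torch

def edge_augmented_attention(x, e, wq, wk, wv, we):
    # x: (n, d) node features; e: (n, n, de) edge channels; we: (de, 1).
    q, k, v = x @ wq, x @ wk, x @ wv
    bias = (e @ we).squeeze(-1)                     # (n, n) additive bias
    logits = (q @ k.t()) / q.shape[-1] ** 0.5 + bias
    attn = torch.softmax(logits, dim=-1)
    x_out = attn @ v                                # updated node features
    e_out = e + attn.unsqueeze(-1)                  # residual edge update
    return x_out, e_out
```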
Image demosaicking and denoising are two key steps in the color image
production pipeline. The classical processing sequence consists of applying
denoising first, and then demosaicking. However, this sequence leads to
oversmoothing and an unpleasant checkerboard effect. Moreover, it is very
difficult to change this order, because once the image is demosaicked, the
statistical properties of the noise will be changed dramatically. This is
extremely challenging for traditional denoising models that strongly rely on
statistical assumptions. In this paper, we tackle this challenging problem by
inverting the traditional CFA processing pipeline: we first demosaick and then
denoise. In the first stage, we design a
demosaicking algorithm that combines traditional methods and a convolutional
neural network (CNN) to reconstruct a full color image ignoring the noise. To
improve the performance of image demosaicking, we modify an Inception
architecture to fuse information from the R, G, and B channels. This stage
retains all known information, which is key to obtaining pleasing final
results. After demosaicking, we get a noisy full-color image and use another
CNN to learn its residual demosaicking noise (including artifacts), which
allows us to obtain a restored full-color image. Our proposed algorithm completely
avoids the checkerboard effect and retains more image detail. Furthermore, it
can handle very high noise levels, whereas the performance of other CNN-based
methods is rather limited for noise levels above 20. Experimental results show
clearly that our method outperforms state-of-the-art methods both
quantitatively as well as in terms of visual quality. | [
"cs.CV"
] |
In this work we build a unifying framework to interpolate between
density-driven and geometry-based algorithms for data clustering, and
specifically, to connect the mean shift algorithm with spectral clustering at
discrete and continuum levels. We seek this connection through the introduction
of Fokker-Planck equations on data graphs. Besides introducing new forms of
mean shift algorithms on graphs, we provide new theoretical insights on the
behavior of the family of diffusion maps in the large sample limit as well as
provide new connections between diffusion maps and mean shift dynamics on a
fixed graph. Several numerical examples illustrate our theoretical findings and
highlight the benefits of interpolating density-driven and geometry-based
clustering algorithms. | [
"stat.ML",
"cs.LG",
"math.AP",
"62G20, 62H30, 60J27, 60J25, 35Q84, 58J35, 58J90, 28A33"
] |
The recent proliferation of richly structured probabilistic models raises the
question of how to automatically determine an appropriate model for a dataset.
We investigate this question for a space of matrix decomposition models which
can express a variety of widely used models from unsupervised learning. To
enable model selection, we organize these models into a context-free grammar
which generates a wide variety of structures through the compositional
application of a few simple rules. We use our grammar to generically and
efficiently infer latent components and estimate predictive likelihood for
nearly 2500 structures using a small toolbox of reusable algorithms. Using a
greedy search over our grammar, we automatically choose the decomposition
structure from raw data by evaluating only a small fraction of all models. The
proposed method typically finds the correct structure for synthetic data and
backs off gracefully to simpler models under heavy noise. It learns sensible
structures for datasets as diverse as image patches, motion capture, 20
Questions, and U.S. Senate votes, all using exactly the same code. | [
"cs.LG",
"stat.ML"
] |
A fundamental question in the theory of reinforcement learning is: suppose
the optimal $Q$-function lies in the linear span of a given $d$ dimensional
feature mapping, is sample-efficient reinforcement learning (RL) possible? The
recent and remarkable result of Weisz et al. (2020) resolved this question in
the negative, providing an exponential (in $d$) sample size lower bound, which
holds even if the agent has access to a generative model of the environment.
One may hope that this information theoretic barrier for RL can be circumvented
by further supposing an even more favorable assumption: there exists a
\emph{constant suboptimality gap} between the optimal $Q$-value of the best
action and that of the second-best action (for all states). The hope is that
having a large suboptimality gap would permit easier identification of optimal
actions themselves, thus making the problem tractable; indeed, provided the
agent has access to a generative model, sample-efficient RL is in fact possible
with the addition of this more favorable assumption.
This work focuses on this question in the standard online reinforcement
learning setting, where our main result resolves this question in the negative:
our hardness result shows that an exponential sample complexity lower bound
still holds even if a constant suboptimality gap is assumed in addition to
having a linearly realizable optimal $Q$-function. Perhaps surprisingly, this
implies an exponential separation between the online RL setting and the
generative model setting. Complementing our negative hardness result, we give
two positive results showing that provably sample-efficient RL is possible
either under an additional low-variance assumption or under a novel
hypercontractivity assumption (both implicitly place stronger conditions on the
underlying dynamics model). | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
Image segmentation has long been a basic problem in computer vision.
Depth-wise Layering is a kind of segmentation that slices an image in a
depth-wise sequence unlike the conventional image segmentation problems dealing
with surface-wise decomposition. The proposed Depth-wise Layering technique
uses a single depth image of a static scene to slice it into multiple layers.
The technique employs a thresholding approach to segment rows of the dense
depth map into smaller partitions called Line-Segments in this paper. Then, it
uses the line-segment labelling method to identify the number of objects and layers
of the scene independently. The final stage is to link objects of the scene to
their respective object-layers. We evaluate the efficiency of the proposed
technique by applying it to many images along with their dense depth maps.
The experiments show promising layering results. | [
"cs.CV",
"eess.IV"
] |
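The row-wise thresholding step lends itself to a small sketch: start a new line-segment whenever the depth jumps across adjacent columns. The threshold value and labelling scheme are assumptions, not taken from the paper.

```python
# Rough sketch of the row-wise line-segment step: split each row of a dense
# depth map wherever the depth jumps by more than a threshold. Threshold and
# labelling scheme are assumed.
import numpy as np

def row_line_segments(depth, jump_thresh=0.1):
    labels = np.zeros(depth.shape, dtype=np.int32)
    next_label = 1
    for r in range(depth.shape[0]):
        labels[r, 0] = next_label
        for c in range(1, depth.shape[1]):
            if abs(depth[r, c] - depth[r, c - 1]) > jump_thresh:
                next_label += 1          # depth discontinuity: new segment
            labels[r, c] = next_label
        next_label += 1                  # rows are labelled independently
    return labels
```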
Supervised training of a deep neural network aims to "teach" the network to
mimic human visual perception that is represented by image-and-label pairs in
the training data. Superpixelized (SP) images are visually perceivable to
humans, but a conventionally trained deep learning model often performs poorly
when working on SP images. To better mimic human visual perception, we think it
is desirable for the deep learning model to be able to perceive not only raw
images but also SP images. In this paper, we propose a new superpixel-based
data augmentation (SPDA) method for training deep learning models for
biomedical image segmentation. Our method applies a superpixel generation
scheme to all the original training images to generate superpixelized images.
The SP images thus obtained are then jointly used with the original training
images to train a deep learning model. Our experiments of SPDA on four
biomedical image datasets show that SPDA is effective and can consistently
improve the performance of state-of-the-art fully convolutional networks for
biomedical image segmentation in 2D and 3D images. Additional studies also
demonstrate that SPDA can practically reduce the generalization gap. | [
"cs.CV",
"cs.AI",
"cs.LG"
] |
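The augmentation step itself is simple to sketch with scikit-image: compute SLIC superpixels and paint each with its mean color, then train on both the original and the SP image. The segment count is an assumed hyperparameter.

```python
# Minimal sketch of the superpixelization used for augmentation: SLIC
# segments, each replaced by its mean color. n_segments is an assumed
# hyperparameter, not the paper's setting.
import numpy as np
from skimage.segmentation import slic

def superpixelize(image, n_segments=200):
    segments = slic(image, n_segments=n_segments, start_label=0)
    sp_image = np.zeros_like(image, dtype=np.float64)
    for label in np.unique(segments):
        mask = segments == label
        sp_image[mask] = image[mask].mean(axis=0)  # mean color per superpixel
    return sp_image.astype(image.dtype)
```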
How do we learn an object detector that is invariant to occlusions and
deformations? Our current solution is to use a data-driven strategy -- collect
large-scale datasets which have object instances under different conditions.
The hope is that the final classifier can use these examples to learn
invariances. But is it really possible to see all the occlusions in a dataset?
We argue that like categories, occlusions and object deformations also follow a
long-tailed distribution. Some occlusions and deformations are so rare that they hardly
happen; yet we want to learn a model invariant to such occurrences. In this
paper, we propose an alternative solution. We propose to learn an adversarial
network that generates examples with occlusions and deformations. The goal of
the adversary is to generate examples that are difficult for the object
detector to classify. In our framework both the original detector and adversary
are learned in a joint manner. Our experimental results indicate a 2.3% mAP
boost on VOC07 and a 2.6% mAP boost on VOC2012 object detection challenge
compared to the Fast-RCNN pipeline. We also release the code for this paper. | [
"cs.CV"
] |
We propose a policy improvement algorithm for Reinforcement Learning (RL)
which is called Rerouted Behavior Improvement (RBI). RBI is designed to take
into account the evaluation errors of the $Q$-function. Such errors are common in
RL when learning the $Q$-value from finite past experience data. Greedy
policies or even constrained policy optimization algorithms which ignore these
errors may suffer from an improvement penalty (i.e. a negative policy
improvement). To minimize the improvement penalty, the RBI idea is to attenuate
rapid policy changes of low probability actions which were less frequently
sampled. This approach is shown to avoid catastrophic performance degradation
and reduce regret when learning from a batch of past experience. Through a
two-armed bandit with Gaussian distributed rewards example, we show that it
also increases data efficiency when the optimal action has a high variance. We
evaluate RBI in two tasks in the Atari Learning Environment: (1) learning from
observations of multiple behavior policies and (2) iterative RL. Our results
demonstrate the advantage of RBI over greedy policies and other constrained
policy optimization algorithms as a safe learning approach and as a general
data efficient learning algorithm. An anonymous Github repository of our RBI
implementation is found at https://github.com/eladsar/rbi. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
Anomaly detection, finding patterns that substantially deviate from those
seen previously, is one of the fundamental problems of artificial intelligence.
Recently, classification-based methods were shown to achieve superior results
on this task. In this work, we present a unifying view and propose an open-set
method, GOAD, to relax current generalization assumptions. Furthermore, we
extend the applicability of transformation-based methods to non-image data
using random affine transformations. Our method is shown to obtain
state-of-the-art accuracy and is applicable to broad data types. The strong
performance of our method is extensively validated on multiple datasets from
different domains. | [
"cs.LG",
"cs.CV",
"stat.ML"
] |
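The extension to non-image data can be sketched in a few lines: fix a bank of random affine maps, apply all of them to the training data, and train a classifier to recognize which map produced each sample; low classification confidence at test time then signals an anomaly. The dimensions below are illustrative.

```python
# Sketch of random affine transformations for tabular anomaly detection, in
# the spirit of GOAD: each transformation is a fixed random (W, b); a
# classifier is then trained to identify which one was applied.
import numpy as np

rng = np.random.default_rng(0)
n_transforms, d_in, d_out = 32, 100, 64
transforms = [(rng.normal(size=(d_in, d_out)), rng.normal(size=d_out))
              for _ in range(n_transforms)]

def transform_dataset(X):
    """Return (transformed samples, transformation labels) for training."""
    views = [X @ W + b for W, b in transforms]
    labels = np.repeat(np.arange(n_transforms), len(X))
    return np.concatenate(views, axis=0), labels
```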
In this work, we consider the regret minimization problem for reinforcement
learning in latent Markov Decision Processes (LMDP). In an LMDP, an MDP is
randomly drawn from a set of $M$ possible MDPs at the beginning of the
interaction, but the identity of the chosen MDP is not revealed to the agent.
We first show that a general instance of LMDPs requires at least
$\Omega((SA)^M)$ episodes to even approximate the optimal policy. Then, we
consider sufficient assumptions under which learning good policies requires
polynomial number of episodes. We show that the key link is a notion of
separation between the MDP system dynamics. With sufficient separation, we
provide an efficient algorithm with local guarantee, {\it i.e.,} providing a
sublinear regret guarantee when we are given a good initialization. Finally, if
we are given standard statistical sufficiency assumptions common in the
Predictive State Representation (PSR) literature (e.g., Boots et al.) and a
reachability assumption, we show that the need for initialization can be
removed. | [
"cs.LG"
] |
This paper proposes a Genetic Algorithm based segmentation method that can
automatically segment gray-scale images. The proposed method mainly consists of
spatial unsupervised grayscale image segmentation that divides an image into
regions. The aim of this algorithm is to produce precise segmentation of images
using intensity information along with neighborhood relationships. In this
paper, Fuzzy Hopfield Neural Network (FHNN) clustering helps generate the
population of the genetic algorithm, which thereby automatically segments the
image. This technique is a powerful method for image segmentation and works for
both single and multiple-feature data with spatial information. A validity
index is utilized to provide a robust technique for finding the optimum number
of components in an image. Experimental results show that the algorithm
generates good-quality segmented images. | [
"cs.CV"
] |
In this paper, we address the problem of generating person images conditioned
on both pose and appearance information. Specifically, given an image xa of a
person and a target pose P(xb), extracted from a different image xb, we
synthesize a new image of that person in pose P(xb), while preserving the
visual details in xa. In order to deal with pixel-to-pixel misalignments caused
by the pose differences between P(xa) and P(xb), we introduce deformable skip
connections in the generator of our Generative Adversarial Network. Moreover, a
nearest-neighbour loss is proposed instead of the common L1 and L2 losses in
order to match the details of the generated image with the target image.
Quantitative and qualitative results, using common datasets and protocols
recently proposed for this task, show that our approach is competitive with
respect to the state of the art. Moreover, we conduct an extensive evaluation
using off-the-shelf person re-identification (Re-ID) systems trained with
person-generation-based augmented data, which is one of the most important
applications of this task. Our experiments show that our Deformable GANs can
significantly boost the Re-ID accuracy and are even better than
data-augmentation methods specifically trained using Re-ID losses. | [
"cs.CV"
] |
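The nearest-neighbour loss can be sketched at the pixel level: compare each generated pixel with the best-matching pixel inside a small window of the target, which tolerates small misalignments. The pixel-level formulation and window size here are simplifying assumptions.

```python
# Toy pixel-level sketch of a nearest-neighbour reconstruction loss: each
# generated pixel matches the closest pixel in a small target neighbourhood.
# A simplification of the loss described above, not the paper's exact form.
import torch
import torch.nn.functional as F

def nearest_neighbour_loss(generated, target, window=3):
    # generated, target: (B, C, H, W)
    pad = window // 2
    patches = F.unfold(target, kernel_size=window, padding=pad)  # (B, C*w*w, H*W)
    B, C, H, W = generated.shape
    patches = patches.view(B, C, window * window, H * W)
    gen = generated.view(B, C, 1, H * W)
    dist = (gen - patches).abs().sum(dim=1)       # L1 over channels
    return dist.min(dim=1).values.mean()          # best match per pixel
```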
Recently, progress in learning-by-synthesis has made it possible to train
models on synthetic images, which can effectively reduce the cost of human and
material resources. However, because the distribution of synthetic images
differs from that of real images, the desired performance cannot be achieved.
To solve this problem, the previous method learned a model to improve the
realism of the synthetic images. Different from previous methods, this paper
tries to purify real images by extracting discriminative and robust features
to convert outdoor real images into indoor synthetic images. In this paper, we
first introduce the segmentation masks to construct RGB-mask pairs as inputs,
then we design a mask-guided style transfer network to learn style features
separately from the attention and background regions and learn content
features from the full and attention regions. Moreover, we propose a novel
region-level task-guided loss to restrain the features learnt from style and
content. Experiments were performed using mixed (qualitative and
quantitative) methods to demonstrate the possibility of purifying real images
in complex directions. We evaluate the proposed method on various public
datasets, including LPW, COCO and MPIIGaze. Experimental results show that the
proposed method is effective and achieves the state-of-the-art results. | [
"cs.CV"
] |
Point cloud upsampling with deep learning has attracted considerable effort
in the past few years. Recent supervised deep learning methods are restricted
by the size of the training data and are limited in their coverage of all
shapes of point clouds. Besides, acquiring such an amount of data is
unrealistic, and such networks generally perform less well than expected on
unseen records. In this paper, we present an unsupervised approach to upsample
point clouds internally, referred to as "Zero Shot" Point Cloud Upsampling
(ZSPU), at the holistic level. Our approach is solely based on the internal information
provided by a particular point cloud without patching in both self-training and
testing phases. This single-stream design significantly reduces the training
time of the upsampling task, by learning the relation between low-resolution
(LR) point clouds and their high (original) resolution (HR) counterparts. This
association will provide super-resolution (SR) outputs when original point
clouds are loaded as input. We demonstrate competitive performance on benchmark
point cloud datasets when compared to other upsampling methods. Furthermore,
ZSPU achieves superior qualitative results on shapes with complex local details
or high curvatures. | [
"cs.CV"
] |
An essential task in predictive maintenance is the prediction of the
Remaining Useful Life (RUL) through the analysis of multivariate time series.
Using the sliding window method, Convolutional Neural Network (CNN) and
conventional Recurrent Neural Network (RNN) approaches have produced impressive
results on this matter, due to their ability to learn optimized features.
However, sequence information is only partially modeled by CNN approaches. Due
to the flattening mechanism in conventional RNNs, such as Long Short-Term
Memory (LSTM) networks, the temporal information within the window is not fully preserved. To
exploit the multi-level temporal information, many approaches are proposed
which combine CNN and RNN models. In this work, we propose a new LSTM variant
called embedded convolutional LSTM (ECLSTM). In ECLSTM a group of different 1D
convolutions is embedded into the LSTM structure. Through this, the temporal
information is preserved between and within windows. Since the hyper-parameters
of models require careful tuning, we also propose an automated prediction
framework based on the Bayesian optimization with hyperband optimizer, which
allows for efficient optimization of the network architecture. Finally, we show
the superiority of our proposed ECLSTM approach over the state-of-the-art
approaches on several widely used benchmark data sets for RUL Estimation. | [
"cs.LG",
"stat.ML"
] |
Existing attention mechanisms are trained to attend to individual items in a
collection (the memory) with a predefined, fixed granularity, e.g., a word
token or an image grid. We propose area attention: a way to attend to areas in
the memory, where each area contains a group of items that are structurally
adjacent, e.g., spatially for a 2D memory such as images, or temporally for a
1D memory such as natural language sentences. Importantly, the shape and the
size of an area are dynamically determined via learning, which enables a model
to attend to information with varying granularity. Area attention can easily
work with existing model architectures such as multi-head attention for
simultaneously attending to multiple areas in the memory. We evaluate area
attention on two tasks: neural machine translation (both character and
token-level) and image captioning, and improve upon strong (state-of-the-art)
baselines in all the cases. These improvements are obtainable with a basic form
of area attention that is parameter free. | [
"cs.LG",
"cs.AI",
"cs.CL",
"stat.ML"
] |
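A minimal 1D version of the parameter-free form helps fix ideas: enumerate all contiguous areas up to a maximum width, pool a key and a value for each, and attend over the enlarged memory. Using mean-pooled keys and sum-pooled values is our reading of the basic variant.

```python
# Minimal 1D sketch of parameter-free area attention: pool keys/values over
# every contiguous area up to max_width, then attend over the enlarged
# memory. Mean keys / sum values are assumed pooling choices.
import torch

def area_memory(keys, values, max_width=3):
    # keys, values: (n, d); areas of widths 1..max_width.
    area_k, area_v = [], []
    n = keys.shape[0]
    for w in range(1, max_width + 1):
        for i in range(n - w + 1):
            area_k.append(keys[i:i + w].mean(dim=0))
            area_v.append(values[i:i + w].sum(dim=0))
    return torch.stack(area_k), torch.stack(area_v)

def area_attention(query, keys, values, max_width=3):
    k, v = area_memory(keys, values, max_width)
    attn = torch.softmax(query @ k.t() / k.shape[-1] ** 0.5, dim=-1)
    return attn @ v  # attended output, one row per query
```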
This is a detailed tutorial paper which explains the Principal Component
Analysis (PCA), Supervised PCA (SPCA), kernel PCA, and kernel SPCA. We start
with projection, PCA with eigen-decomposition, PCA with one and multiple
projection directions, properties of the projection matrix, reconstruction
error minimization, and we connect to auto-encoder. Then, PCA with singular
value decomposition, dual PCA, and kernel PCA are covered. SPCA using both
scoring and Hilbert-Schmidt independence criterion are explained. Kernel SPCA
using both direct and dual approaches are then introduced. We cover all cases
of projection and reconstruction of training and out-of-sample data. Finally,
some simulations are provided on Frey and AT&T face datasets for verifying the
theory in practice. | [
"stat.ML",
"cs.LG"
] |
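As a worked companion to the tutorial's SVD section, a compact PCA implementation: center the data, take the top-k right singular vectors, project, and reconstruct. The synthetic data is used purely for demonstration.

```python
# Compact PCA via SVD, matching the tutorial's setup: centering, projection
# onto the top-k right singular vectors, and reconstruction.
import numpy as np

def pca_svd(X, k):
    mu = X.mean(axis=0)
    Xc = X - mu
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = Vt[:k].T                  # (d, k) projection matrix
    Z = Xc @ P                    # scores (projected data)
    X_hat = Z @ P.T + mu          # reconstruction from k components
    return Z, X_hat

X = np.random.randn(200, 10)
Z, X_hat = pca_svd(X, k=3)
print("reconstruction error:", np.linalg.norm(X - X_hat))
```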
The current gold standard for human activity recognition (HAR) is based on
the use of cameras. However, the poor scalability of camera systems renders
them impractical in pursuit of the goal of wider adoption of HAR in mobile
computing contexts. Consequently, researchers instead rely on wearable sensors
and in particular inertial sensors. A particularly prevalent wearable is the
smart watch which due to its integrated inertial and optical sensing
capabilities holds great potential for realising better HAR in a non-obtrusive
way. This paper seeks to simplify the wearable approach to HAR through
determining if the wrist-mounted optical sensor alone typically found in a
smartwatch or similar device can be used as a useful source of data for
activity recognition. The approach has the potential to eliminate the need for
the inertial sensing element which would in turn reduce the cost of and
complexity of smartwatches and fitness trackers. This could potentially
commoditise the hardware requirements for HAR while retaining the functionality
of both heart rate monitoring and activity capture all from a single optical
sensor. Our approach relies on the adoption of machine vision for activity
recognition based on suitably scaled plots of the optical signals. We take this
approach so as to produce classifications that are easily explainable and
interpretable by non-technical users. More specifically, images of
photoplethysmography signal time series are used to retrain the penultimate
layer of a convolutional neural network which has initially been trained on the
ImageNet database. We then use the 2048 dimensional features from the
penultimate layer as input to a support vector machine. Results from the
experiment yielded an average classification accuracy of 92.3%. This result
outperforms that of an optical and inertial sensor combined (78%) and
illustrates the capability of HAR systems using... | [
"cs.CV",
"cs.LG",
"K.3.8"
] |
Today, the optimal performance of existing noise-suppression algorithms, both
data-driven and those based on classic statistical methods, is range-bound to
specific levels of instantaneous input signal-to-noise ratios. In this paper,
we present a new approach to improve the adaptivity of such algorithms enabling
them to perform robustly across a wide range of input signal and noise types.
Our methodology is based on the dynamic control of algorithmic parameters via
reinforcement learning. Specifically, we model the noise-suppression module as
a black box, requiring no knowledge of the algorithmic mechanics except a
simple feedback from the output. We utilize this feedback as the reward signal
for a reinforcement-learning agent that learns a policy to adapt the
algorithmic parameters for every incoming audio frame (16 ms of data). Our
preliminary results show that such a control mechanism can substantially
increase the overall performance of the underlying noise-suppression algorithm;
42% and 16% improvements in output SNR and MSE, respectively, when compared to
no adaptivity. | [
"cs.LG"
] |
Medical image segmentation - the prerequisite of numerous clinical needs -
has been significantly prospered by recent advances in convolutional neural
networks (CNNs). However, it exhibits general limitations on modeling explicit
long-range relation, and existing cures, resorting to building deep encoders
along with aggressive downsampling operations, lead to redundant deepened
networks and loss of localized details. Hence, the segmentation task awaits a
better solution to improve the efficiency of modeling global contexts while
maintaining a strong grasp of low-level details. In this paper, we propose a
novel parallel-in-branch architecture, TransFuse, to address this challenge.
TransFuse combines Transformers and CNNs in a parallel style, where both global
dependency and low-level spatial details can be efficiently captured in a much
shallower manner. Besides, a novel fusion technique - BiFusion module is
created to efficiently fuse the multi-level features from both branches.
Extensive experiments demonstrate that TransFuse achieves new
state-of-the-art results on both 2D and 3D medical image sets including polyp,
skin lesion, hip, and prostate segmentation, with significant parameter
decrease and inference speed improvement. | [
"cs.CV",
"cs.AI"
] |
Deep neural networks based purely on attention have been successful across
several domains, relying on minimal architectural priors from the designer. In
Human Action Recognition (HAR), attention mechanisms have been primarily
adopted on top of standard convolutional or recurrent layers, improving the
overall generalization capability. In this work, we introduce Action
Transformer (AcT), a simple, fully self-attentional architecture that
consistently outperforms more elaborated networks that mix convolutional,
recurrent, and attentive layers. In order to limit computational and energy
demands, building on previous human action recognition research, the proposed
approach exploits 2D pose representations over small temporal windows,
providing a low latency solution for accurate and effective real-time
performance. Moreover, we open-source MPOSE2021, a new large-scale dataset, as
an attempt to build a formal training and evaluation benchmark for real-time
short-time human action recognition. Extensive experimentation on MPOSE2021
with our proposed methodology and several previous architectural solutions
proves the effectiveness of the AcT model and poses the base for future work on
HAR. | [
"cs.CV",
"cs.LG"
] |
Bottom-Up Hidden Tree Markov Model is a highly expressive model for
tree-structured data. Unfortunately, it cannot be used in practice due to the
intractable size of its state-transition matrix. We propose a new approximation
which relies on the Tucker factorisation of tensors. The probabilistic
interpretation of such approximation allows us to define a new probabilistic
model for tree-structured data. Hence, we define the new approximated model and
we derive its learning algorithm. Then, we empirically assess the effective
power of the new model evaluating it on two different tasks. In both cases, our
model outperforms the other approximated model known in the literature. | [
"cs.LG",
"stat.ML"
] |
The theoretical analysis of deep neural networks (DNN) is arguably among the
most challenging research directions in machine learning (ML) right now, as it
requires scientists to lay novel statistical learning foundations to
explain their behaviour in practice. While some success has been achieved
recently in this endeavour, the question on whether DNNs can be analyzed using
the tools from other scientific fields outside the ML community has not
received the attention it may well have deserved. In this paper, we explore the
interplay between DNNs and game theory (GT), and show how one can benefit from
the classic readily available results from the latter when analyzing the
former. In particular, we consider the widely studied class of congestion
games, and illustrate their intrinsic relatedness to both linear and non-linear
DNNs and to the properties of their loss surface. Beyond retrieving the
state-of-the-art results from the literature, we argue that our work provides a
very promising novel tool for analyzing the DNNs and support this claim by
proposing concrete open problems that can advance significantly our
understanding of DNNs when solved. | [
"cs.LG",
"cs.GT",
"stat.ML"
] |
By assigning each relationship a single label, current approaches formulate
the relationship detection as a classification problem. Under this formulation,
predicate categories are treated as completely different classes. However,
different from the object labels where different classes have explicit
boundaries, predicates usually have overlaps in their semantic meanings. For
example, sit\_on and stand\_on have common meanings in vertical relationships
but different details of how these two objects are vertically placed. In order
to leverage the inherent structures of the predicate categories, we propose to
first build the language hierarchy and then utilize the Hierarchy Guided
Feature Learning (HGFL) strategy to learn better region features of both the
coarse-grained level and the fine-grained level. Besides, we also propose the
Hierarchy Guided Module (HGM) to utilize the coarse-grained level to guide the
learning of fine-grained level features. Experiments show that the proposed
simple yet effective method can improve several state-of-the-art baselines by a
large margin (up to $33\%$ relative gain) in terms of Recall@50 on the task of
Scene Graph Generation in different datasets. | [
"cs.CV",
"cs.CL"
] |
Within months of birth, children develop meaningful expectations about the
world around them. How much of this early knowledge can be explained through
generic learning mechanisms applied to sensory data, and how much of it
requires more substantive innate inductive biases? Addressing this fundamental
question in its full generality is currently infeasible, but we can hope to
make real progress in more narrowly defined domains, such as the development of
high-level visual categories, thanks to improvements in data collecting
technology and recent progress in deep learning. In this paper, our goal is
precisely to achieve such progress by utilizing modern self-supervised deep
learning methods and a recent longitudinal, egocentric video dataset recorded
from the perspective of three young children (Sullivan et al., 2020). Our
results demonstrate the emergence of powerful, high-level visual
representations from developmentally realistic natural videos using generic
self-supervised learning objectives. | [
"cs.CV",
"cs.LG",
"cs.NE"
] |
We investigate the relationship between the frequency spectrum of image data
and the generalization behavior of convolutional neural networks (CNN). We
first notice CNN's ability in capturing the high-frequency components of
images. These high-frequency components are almost imperceptible to a human.
Thus the observation leads to multiple hypotheses that are related to the
generalization behaviors of CNN, including a potential explanation for
adversarial examples, a discussion of CNN's trade-off between robustness and
accuracy, and some evidence in understanding training heuristics. | [
"cs.CV",
"cs.LG"
] |
Although automatic emotion recognition from facial expressions and speech has
made remarkable progress, emotion recognition from body gestures has not been
thoroughly explored. People often use a variety of body language to express
emotions, and it is difficult to enumerate all emotional body gestures and
collect enough samples for each category. Therefore, recognizing new emotional
body gestures is critical for better understanding human emotions. However, the
existing methods fail to accurately determine which emotional state a new body
gesture belongs to. In order to solve this problem, we introduce a Generalized
Zero-Shot Learning (GZSL) framework, which consists of three branches to infer
the emotional state of the new body gestures with only their semantic
descriptions. The first branch is a Prototype-Based Detector (PBD) which is
used to determine whether a sample belongs to a seen body gesture category and
obtain the prediction results of the samples from the seen categories. The
second branch is a Stacked AutoEncoder (StAE) with manifold regularization,
which utilizes semantic representations to predict samples from unseen
categories. Note that both of the above branches are for body gesture
recognition. We further add an emotion classifier with a softmax layer as the
third branch in order to better learn the feature representations for this
emotion classification task. The input features for these three branches are
learned by a shared feature extraction network, i.e., a Bidirectional Long
Short-Term Memory Networks (BLSTM) with a self-attention module. We treat these
three branches as subtasks and use multi-task learning strategies for joint
training. The performance of our framework on an emotion recognition dataset is
significantly superior to the traditional method of emotion classification and
state-of-the-art zero-shot learning methods. | [
"cs.CV"
] |
Markov chain Monte Carlo (MCMC) algorithms are ubiquitous in probability
theory in general and in machine learning in particular. A Markov chain is
devised so that its stationary distribution is some probability distribution of
interest. Then one samples from the given distribution by running the Markov
chain for a "long time" until it appears to be stationary and then collects the
sample. However these chains are often very complex and there are no
theoretical guarantees that stationarity is actually reached. In this paper we
study the Gibbs sampler of the posterior distribution of a very simple case of
Latent Dirichlet Allocation, the arguably most well known Bayesian unsupervised
learning model for text generation and text classification. It is shown that
when the corpus consists of two long documents of equal length $m$ and the
vocabulary consists of only two different words, the mixing time is at most of
order $m^2\log m$ (which corresponds to $m\log m$ rounds over the corpus). It
will be apparent from our analysis that it seems very likely that the mixing
time is not much worse in the more relevant case when the number of documents
and the size of the vocabulary are also large as long as each word is
represented a large number of times in each document, even though the computations
involved may be intractable. | [
"cs.LG",
"stat.ML",
"G.3"
] |
The increasing amount of available data, computing power, and the constant
pursuit for higher performance results in the growing complexity of predictive
models. Their black-box nature leads to the opaqueness debt phenomenon, inflicting
increased risks of discrimination, lack of reproducibility, and deflated
performance due to data drift. To manage these risks, good MLOps practices ask
for better validation of model performance and fairness, higher explainability,
and continuous monitoring. The necessity of deeper model transparency appears
not only from scientific and social domains, but also emerging laws and
regulations on artificial intelligence. To facilitate the development of
responsible machine learning models, we showcase dalex, a Python package which
implements the model-agnostic interface for interactive model exploration. It
adopts the design crafted through the development of various tools for
responsible machine learning; thus, it aims at the unification of the existing
solutions. This library's source code and documentation are available under
open license at https://python.drwhy.ai/. | [
"cs.LG",
"cs.HC",
"cs.SE",
"stat.ML"
] |
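A short usage sketch of the interface follows; the method names match the Explainer API documented at https://python.drwhy.ai/, while the model and data here are arbitrary stand-ins.

```python
# Usage sketch of the dalex Explainer interface on an arbitrary model; see
# https://python.drwhy.ai/ for the authoritative API documentation.
import dalex as dx
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer(as_frame=True)
X, y = data.data, data.target
model = RandomForestClassifier(random_state=0).fit(X, y)

explainer = dx.Explainer(model, X, y, label="rf")
explainer.model_performance()                 # validation of performance
explainer.model_parts().plot()                # permutation importance
explainer.predict_parts(X.iloc[[0]]).plot()   # local break-down explanation
```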
Combining different models is a widely used paradigm in machine learning
applications. While the most common approach is to form an ensemble of models
and average their individual predictions, this approach is often rendered
infeasible by given resource constraints in terms of memory and computation,
which grow linearly with the number of models. We present a layer-wise model
fusion algorithm for neural networks that utilizes optimal transport to (soft-)
align neurons across the models before averaging their associated parameters.
We show that this can successfully yield "one-shot" knowledge transfer (i.e.,
without requiring any retraining) between neural networks trained on
heterogeneous non-i.i.d. data. In both i.i.d. and non-i.i.d. settings, we
illustrate that our approach significantly outperforms vanilla averaging, as
well as how it can serve as an efficient replacement for the ensemble with
moderate fine-tuning, for standard convolutional networks (like VGG11),
residual networks (like ResNet18), and multi-layer perceptrons on CIFAR10,
CIFAR100, and MNIST. Finally, our approach also provides a principled way to
combine the parameters of neural networks with different widths, and we explore
its application for model compression. The code is available at the following
link, https://github.com/sidak/otfusion. | [
"cs.LG",
"stat.ML"
] |
In representation learning and non-linear dimension reduction, there is huge
interest in learning 'disentangled' latent variables, where each
sub-coordinate almost uniquely controls a facet of the observed data. While
many regularization approaches have been proposed on variational autoencoders,
heuristic tuning is required to balance between disentanglement and loss in
reconstruction accuracy -- due to the unsupervised nature, there is no
principled way to find an optimal weight for regularization. Motivated to
completely bypass regularization, we consider a projection strategy: modifying
the canonical Gaussian encoder, we add a layer of scaling and rotation to the
Gaussian mean, such that the marginal correlations among latent sub-coordinates
become exactly zero. This achieves a theoretically maximal disentanglement, as
guaranteed by zero cross-correlation between one latent sub-coordinate and the
observed varying with the rest. Unlike regularizations, the extra projection
layer does not impact the flexibility of the previous encoder layers, leading
to almost no loss in expressiveness. This approach is simple to implement in
practice. Our numerical experiments demonstrate very good performance, with no
tuning required. | [
"stat.ML",
"cs.AI",
"cs.LG"
] |
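The projection idea can be illustrated with a batch-level whitening of the encoder means: a rotation and scaling that zeroes the marginal correlations among latent sub-coordinates. The exact layer parameterization in the paper may differ from this sketch.

```python
# Illustrative batch-level decorrelation of Gaussian encoder means via a
# ZCA-style rotation and scaling; the paper's layer parameterization may
# differ from this sketch.
import torch

def decorrelate(mu, eps=1e-5):
    mu_c = mu - mu.mean(dim=0, keepdim=True)
    cov = mu_c.t() @ mu_c / (mu.shape[0] - 1)
    eigval, eigvec = torch.linalg.eigh(cov)
    # Rotate, rescale, rotate back: cross-correlations become exactly zero.
    W = eigvec @ torch.diag((eigval + eps).rsqrt()) @ eigvec.t()
    return mu_c @ W
```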
Shape instantiation which predicts the 3D shape of a dynamic target from one
or more 2D images is important for real-time intra-operative navigation.
Previously, a general shape instantiation framework was proposed with manual
image segmentation to generate a 2D Statistical Shape Model (SSM) and with
Kernel Partial Least Square Regression (KPLSR) to learn the relationship
between the 2D and 3D SSM for 3D shape prediction. In this paper, the two-stage
shape instantiation is improved to be one-stage. PointOutNet with 19
convolutional layers and three fully-connected layers is used as the network
structure and Chamfer distance is used as the loss function to predict the 3D
target point cloud from a single 2D image. With the proposed one-stage shape
instantiation algorithm, a spontaneous image-to-point cloud training and
inference can be achieved. A dataset from 27 Right Ventricle (RV) subjects,
comprising 609 experiments, was used to validate the proposed one-stage shape
instantiation algorithm. An average point cloud-to-point cloud (PC-to-PC) error
of 1.72mm has been achieved, which is comparable to the PLSR-based (1.42mm) and
KPLSR-based (1.31mm) two-stage shape instantiation algorithm. | [
"cs.CV",
"cs.LG"
] |
Over the last few years, traffic data has been exploding and the
transportation discipline has entered the era of big data. This brings new
opportunities for data-driven analysis, but it also challenges
traditional analytic methods. This paper proposes a new Divide-and-Combine
based approach to K-means clustering of activity-travel behavior time series
using features that are derived using tools in Time Series Analysis and
Topological Data Analysis. Clustering data from five waves of the National
Household Travel Survey ranging from 1990 to 2017 suggests that activity-travel
patterns of individuals over the last three decades can be grouped into three
clusters. Results also provide evidence in support of recent claims about
differences in activity-travel patterns of different survey cohorts. The
proposed method is generally applicable and is not limited only to
activity-travel behavior analysis in transportation studies. Driving behavior,
travel mode choice, household vehicle ownership, when being characterized as
categorical time series, can all be analyzed using the proposed method. | [
"stat.ML",
"cs.LG",
"stat.AP"
] |
With the rapid growth of the online fashion market, demand for effective
fashion recommendation systems has never been greater. In fashion
recommendation, the ability to find items that go well with a few other items
based on style is
more important than picking a single item based on the user's entire purchase
history. Since the same user may have purchased dress suits in one month and
casual denims in another, it is impossible to learn the latent style features
of those items using only the user ratings. If we were able to represent the
style features of fashion items in a reasonable way, we would be able to
recommend new items that conform to some small subset of pre-purchased items
that make up a coherent style set. We propose Style2Vec, a vector
representation model for fashion items. Based on the intuition of
distributional semantics used in word embeddings, Style2Vec learns the
representation of a fashion item using other items in matching outfits as
context. Two different convolutional neural networks are trained to maximize
the probability of item co-occurrences. For evaluation, a fashion analogy test
is conducted to show that the resulting representation connotes diverse fashion
related semantics like shapes, colors, patterns and even latent styles. We also
perform style classification using Style2Vec features and show that our method
outperforms other baselines. | [
"cs.CV"
] |
Cutting and pasting image segments feels intuitive: the choice of source
templates gives artists flexibility in recombining existing source material.
Formally, this process takes an image set as input and outputs a collage of the
set elements. Such selection from sets of source templates does not fit easily
in classical convolutional neural models requiring inputs of fixed size.
Inspired by advances in attention and set-input machine learning, we present a
novel architecture that can generate in one forward pass image collages of
source templates using set-structured representations. This paper has the
following contributions: (i) a novel framework for image generation called
Memory Attentive Generation of Image Collages (MAGIC) which gives artists new
ways to create digital collages; (ii) from the machine-learning perspective, we
show a novel Generative Adversarial Networks (GAN) architecture that uses
Set-Transformer layers and set-pooling to blend sets of random image samples -
a hybrid non-parametric approach. | [
"cs.CV",
"cs.LG",
"eess.IV",
"stat.ML"
] |
We propose TAL-Net, an improved approach to temporal action localization in
video that is inspired by the Faster R-CNN object detection framework. TAL-Net
addresses three key shortcomings of existing approaches: (1) we improve
receptive field alignment using a multi-scale architecture that can accommodate
extreme variation in action durations; (2) we better exploit the temporal
context of actions for both proposal generation and action classification by
appropriately extending receptive fields; and (3) we explicitly consider
multi-stream feature fusion and demonstrate that fusing motion late is
important. We achieve state-of-the-art performance for both action proposal and
localization on THUMOS'14 detection benchmark and competitive performance on
ActivityNet challenge. | [
"cs.CV"
] |
Generative Adversarial Networks (GAN) have shown great promise in tasks like
synthetic image generation, image inpainting, style transfer, and anomaly
detection. However, generating discrete data is a challenge. This work presents
an adversarial training based correlated discrete data (CDD) generation model.
It also details an approach for conditional CDD generation. The results of our
approach are presented on two datasets: job-seeking candidates' skill sets (a
private dataset) and MNIST (a public dataset). From quantitative and qualitative
analysis of these results, we show that our model, by leveraging the inherent
correlation in the data, performs better than an existing model that
overlooks correlation. | [
"cs.LG",
"stat.ML"
] |
We investigate the capability of a transformer pretrained on natural language
to generalize to other modalities with minimal finetuning -- in particular,
without finetuning of the self-attention and feedforward layers of the residual
blocks. We consider such a model, which we call a Frozen Pretrained Transformer
(FPT), and study finetuning it on a variety of sequence classification tasks
spanning numerical computation, vision, and protein fold prediction. In
contrast to prior works which investigate finetuning on the same modality as
the pretraining dataset, we show that pretraining on natural language can
improve performance and compute efficiency on non-language downstream tasks.
Additionally, we perform an analysis of the architecture, comparing the
performance of a randomly initialized transformer to a randomly initialized LSTM. Combining the
two insights, we find language-pretrained transformers can obtain strong
performance on a variety of non-language tasks. | [
"cs.LG",
"cs.AI"
] |
With the recent progress in Generative Adversarial Networks (GANs), it is
imperative for media and visual forensics to develop detectors which can
identify and attribute images to the model generating them. Existing works have
shown to attribute images to their corresponding GAN sources with high
accuracy. However, these works are limited to a closed set scenario, failing to
generalize to GANs unseen during train time and are therefore, not scalable
with a steady influx of new GANs. We present an iterative algorithm for
discovering images generated from previously unseen GANs by exploiting the fact
that all GANs leave distinct fingerprints on their generated images. Our
algorithm consists of multiple components including network training,
out-of-distribution detection, clustering, merge and refine steps. Through
extensive experiments, we show that our algorithm discovers unseen GANs with
high accuracy and also generalizes to GANs trained on unseen real datasets. We
additionally apply our algorithm to attribution and discovery of GANs in an
online fashion as well as to the more standard task of real/fake detection. Our
experiments demonstrate the effectiveness of our approach to discover new GANs
and can be used in an open-world setup. | [
"cs.CV",
"cs.LG"
] |
Graph distance metric learning serves as the foundation for many graph
learning problems, e.g., graph clustering, graph classification and graph
matching. Existing research works on graph distance metric (or graph kernels)
learning fail to maintain the basic properties of such metrics, e.g.,
non-negative, identity of indiscernibles, symmetry and triangle inequality,
respectively. In this paper, we will introduce a new graph neural network based
distance metric learning approach, namely GB-DISTANCE (GRAPH-BERT based
Neural Distance). Solely based on the attention mechanism, GB-DISTANCE can
learn graph instance representations effectively based on a pre-trained
GRAPH-BERT model. Different from the existing supervised/unsupervised metrics,
GB-DISTANCE can be learned effectively in a semi-supervised manner. In
addition, GB-DISTANCE can also maintain the distance metric basic properties
mentioned above. Extensive experiments have been done on several benchmark
graph datasets, and the results demonstrate that GB-DISTANCE can outperform the existing baseline methods, especially the recent graph neural network based graph metrics, by a significant margin in computing graph distances. | [
"cs.LG",
"cs.NE",
"stat.ML"
] |
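One way to see how the four metric properties can hold by construction: define the distance as a norm of the difference between learned graph embeddings. The sketch below assumes any encoder mapping graphs to vectors (a pretrained GRAPH-BERT in the paper); identity of indiscernibles then holds only up to encoder collisions, the usual pseudometric caveat.

```python
import torch
import torch.nn as nn

class EmbeddingDistance(nn.Module):
    """d(G1, G2) = ||h(G1) - h(G2)||. Non-negativity, symmetry, and the
    triangle inequality are inherited from the norm; d(G, G) = 0 always."""
    def __init__(self, encoder):
        super().__init__()
        self.encoder = encoder  # assumed: maps a batch of graphs to (batch, d)

    def forward(self, g1, g2):
        return torch.norm(self.encoder(g1) - self.encoder(g2), dim=-1)

enc = nn.Linear(16, 8)  # toy stand-in for a pretrained graph encoder
dist = EmbeddingDistance(enc)
g1, g2 = torch.randn(4, 16), torch.randn(4, 16)
assert torch.allclose(dist(g1, g2), dist(g2, g1))  # symmetry holds exactly
```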
Urban areas consume over two-thirds of the world's energy and account for
more than 70 percent of global CO2 emissions. As stated in IPCC's Global
Warming of 1.5C report, achieving carbon neutrality by 2050 requires a scalable
approach that can be applied in a global context. Conventional methods of
collecting data on energy use and emissions of buildings are extremely
expensive and require specialized geometry information that not all cities have
readily available. High-quality building footprint generation from satellite
images can accelerate this predictive process and empower municipal
decision-making at scale. However, previous deep learning-based approaches use
supplemental data such as point cloud data, building height information, and
multi-band imagery, which have limited availability and are difficult to produce. In this paper, we propose a modified DeeplabV3+ module with a Dilated
ResNet backbone to generate masks of building footprints from only
three-channel RGB satellite imagery. Furthermore, we introduce an F-Beta
measure in our objective function to help the model account for skewed class
distributions. In addition to an F-Beta objective function, we incorporate an
exponentially weighted boundary loss and use a cross-dataset training strategy
to further increase the quality of predictions. As a result, we achieve
state-of-the-art performance across three standard benchmarks and demonstrate
that our RGB-only method is agnostic to the scale, resolution, and urban
density of satellite imagery. | [
"cs.CV"
] |
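A hedged sketch of the F-Beta term in the objective: a differentiable 1 − F_beta computed from soft predictions, where beta > 1 emphasizes recall of the rare footprint class. The beta value and the weighting against the boundary loss are assumptions, not the paper's reported settings.

```python
import torch

def f_beta_loss(logits, target, beta=2.0, eps=1e-7):
    """Differentiable (1 - F_beta) for binary masks of shape (B, 1, H, W).
    F_beta = (1 + b^2) TP / ((1 + b^2) TP + b^2 FN + FP), with soft counts."""
    prob = torch.sigmoid(logits)
    tp = (prob * target).sum(dim=(1, 2, 3))
    fp = (prob * (1 - target)).sum(dim=(1, 2, 3))
    fn = ((1 - prob) * target).sum(dim=(1, 2, 3))
    b2 = beta ** 2
    f_beta = (1 + b2) * tp / ((1 + b2) * tp + b2 * fn + fp + eps)
    return 1 - f_beta.mean()

logits = torch.randn(2, 1, 64, 64)
target = (torch.rand(2, 1, 64, 64) > 0.9).float()  # skewed class distribution
print(f_beta_loss(logits, target))
```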
Backpropagation and the chain rule of derivatives have been prominent;
however, the total derivative rule has not enjoyed the same amount of
attention. In this work we show how the total derivative rule leads to an
intuitive visual framework for creating gradient estimators on graphical
models. In particular, previous "policy gradient theorems" are easily derived.
We derive new gradient estimators based on density estimation, as well as a
likelihood ratio gradient, which "jumps" to an intermediate node, not directly
to the objective function. We evaluate our methods on model-based policy
gradient algorithms, achieve good performance, and present evidence towards
demystifying the success of the popular PILCO algorithm. | [
"cs.LG",
"cs.AI",
"cs.NE",
"stat.ML"
] |
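The likelihood-ratio gradient the abstract refers to is, in its simplest form, the score-function (REINFORCE) estimator. The sketch below checks it on a one-dimensional Gaussian; the paper's "jumping" variant targets an estimated intermediate node rather than the end objective, which this toy example does not cover.

```python
import torch

def score_function_gradient(theta, f, n_samples=100_000):
    """Estimates d/d theta of E_{x ~ N(theta, 1)}[f(x)] via
    E[f(x) * d log p(x; theta) / d theta] = E[f(x) * (x - theta)]."""
    x = theta + torch.randn(n_samples)
    return (f(x) * (x - theta)).mean()

# E[x^2] = theta^2 + 1, so the true gradient at theta = 1.5 is 3.0.
print(score_function_gradient(torch.tensor(1.5), lambda x: x ** 2))
```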
Autonomous vehicles must balance a complex set of objectives. There is no
consensus on how they should do so, nor on a model for specifying a desired
driving behavior. We created a dataset to help address some of these questions
in a limited operating domain. The data consists of 92 traffic scenarios, with
multiple ways of traversing each scenario. Multiple annotators expressed their
preference between pairs of scenario traversals. We used the data to compare an
instance of a rulebook, carefully hand-crafted independently of the dataset,
with several interpretable machine learning models such as Bayesian networks,
decision trees, and logistic regression trained on the dataset. To compare
driving behavior, these models use scores indicating by how much different
scenario traversals violate each of 14 driving rules. The rules are
interpretable and designed by subject-matter experts. First, we found that
these rules were enough for these models to achieve a high classification
accuracy on the dataset. Second, we found that the rulebook provides high
interpretability without excessively sacrificing performance. Third, the data
pointed to possible improvements in the rulebook and the rules, and to
potential new rules. Fourth, we explored the interpretability vs. performance trade-off by also training non-interpretable models such as a random forest.
Finally, we make the dataset publicly available to encourage a discussion from
the wider community on behavior specification for AVs. Please find it at
github.com/bassam-motional/Reasonable-Crowd. | [
"cs.LG",
"cs.RO"
] |
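A sketch of how one of the interpretable models might consume the rule-violation scores: logistic regression on the difference of the 14-dimensional score vectors of a traversal pair, whose coefficients read directly as per-rule importances. The data here is synthetic; only the 14-rule structure comes from the abstract.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_pairs, n_rules = 200, 14  # 14 interpretable driving rules

# Hypothetical violation scores for each traversal in an annotated pair.
v_a, v_b = rng.random((n_pairs, n_rules)), rng.random((n_pairs, n_rules))
true_w = rng.random(n_rules)                          # pretend rule weights
prefer_a = (v_a @ true_w < v_b @ true_w).astype(int)  # prefer fewer violations

# Learn annotator preference from the score difference; the coefficients
# indicate how strongly each rule drives the preference.
clf = LogisticRegression().fit(v_b - v_a, prefer_a)
print(clf.score(v_b - v_a, prefer_a))
```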
Time series classification (TSC) is home to a number of algorithm groups that
utilise different kinds of discriminatory patterns. One of these groups
describes classifiers that predict using phase-dependent intervals. The time
series forest (TSF) classifier is one of the most well known interval methods,
and has demonstrated strong performance as well as relative speed in training
and predictions. However, recent advances in other approaches have left TSF
behind. TSF originally summarises intervals using three simple summary
statistics. The `catch22' feature set of 22 time series features was recently
proposed to aid time series analysis through a concise set of diverse and
informative descriptive characteristics. We propose combining TSF and catch22
to form a new classifier, the Canonical Interval Forest (CIF). We outline
additional enhancements to the training procedure, and extend the classifier to
include multivariate classification capabilities. We demonstrate a large and
significant improvement in accuracy over both TSF and catch22, and show it to
be on par with top performers from other algorithmic classes. By upgrading the
interval-based component from TSF to CIF, we also demonstrate a significant
improvement in the hierarchical vote collective of transformation-based
ensembles (HIVE-COTE) that combines different time series representations.
HIVE-COTE using CIF is significantly more accurate on the UCR archive than any
other classifier we are aware of and represents a new state of the art for TSC. | [
"cs.LG",
"eess.SP"
] |
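A condensed sketch of the interval idea behind TSF and CIF: each tree sees random phase-dependent intervals summarized by per-interval features. Simple statistics stand in for the catch22 set here, and a full CIF additionally ensembles many such trees, each with its own intervals and feature subsample.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def interval_features(X, intervals, feature_fns):
    """Summarize every (start, end) interval of each series with each
    feature function; CIF swaps these simple stats for catch22."""
    cols = [fn(X[:, s:e], axis=1) for (s, e) in intervals for fn in feature_fns]
    return np.column_stack(cols)

def fit_interval_tree(X, y, n_intervals=8, rng=np.random.default_rng(0)):
    # Random phase-dependent intervals of length >= 3, as in TSF/CIF.
    starts = rng.integers(0, X.shape[1] - 3, size=n_intervals)
    intervals = [(s, rng.integers(s + 3, X.shape[1] + 1)) for s in starts]
    feats = interval_features(X, intervals, [np.mean, np.std, np.ptp])
    return DecisionTreeClassifier().fit(feats, y), intervals

X = np.random.default_rng(1).random((30, 50))  # 30 toy series of length 50
tree, intervals = fit_interval_tree(X, np.repeat([0, 1], 15))
```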
We study the problem of adaptive control of a high dimensional linear
quadratic (LQ) system. Previous work established the asymptotic convergence to
an optimal controller for various adaptive control schemes. More recently, for
the average cost LQ problem, a regret bound of ${O}(\sqrt{T})$ was shown, apart from logarithmic factors. However, this bound scales exponentially with $p$, the dimension of the state space. In this work we consider the case where the matrices describing the dynamics of the LQ system are sparse and their dimensions are large. We present an adaptive control scheme that achieves a regret bound of ${O}(p \sqrt{T})$, apart from logarithmic factors. In particular, our algorithm has an average cost of $(1+\epsilon)$ times the optimum cost after $T = \mathrm{polylog}(p)\, O(1/\epsilon^2)$. This is in comparison to previous work on dense dynamics, where the algorithm requires time that scales exponentially with the dimension in order to achieve a regret of $\epsilon$ times the optimal cost.
We believe that our result has prominent applications in the emerging area of
computational advertising, in particular targeted online advertising and
advertising in social networks. | [
"stat.ML",
"cs.LG",
"math.OC"
] |
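The sparsity assumption suggests the identification step can use l1-regularized regression, which is plausibly how the sample complexity avoids scaling with $p$; the sketch below shows only that estimation idea, not the paper's full adaptive control scheme.

```python
import numpy as np
from sklearn.linear_model import Lasso

def estimate_sparse_dynamics(X, U, alpha=0.1):
    """With x_{t+1} = A x_t + B u_t + w_t and sparse (A, B), an l1-penalized
    regression per state coordinate can recover the dynamics from far fewer
    samples than dense least squares. X: (T, p) states, U: (T, m) inputs."""
    Z = np.hstack([X[:-1], U[:-1]])  # regressors [x_t, u_t]
    coefs = np.stack([Lasso(alpha=alpha).fit(Z, X[1:, i]).coef_
                      for i in range(X.shape[1])])
    p = X.shape[1]
    return coefs[:, :p], coefs[:, p:]  # estimated A and B
```

A certainty-equivalent controller computed from these estimates, re-estimated as data accrues, is the standard template such adaptive schemes follow.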