text (string, lengths 29-3.31k) | label (sequence, lengths 1-11) |
---|---|
Training a deep neural network with a small amount of data is a challenging
problem because the network is vulnerable to overfitting, yet collecting many
samples is often impractical. Transfer learning is a cost-effective solution to
this problem: by using a source model trained on a large-scale dataset, the
target model can alleviate the overfitting that originates from the lack of
training data. Relying on the generalization ability of the source model,
several methods propose to use the source knowledge throughout the whole
training procedure. However, this is likely to restrict the potential of the
target model, and some knowledge transferred from the source can interfere with
training. To improve the generalization performance of the target model with
few training samples, we propose a regularization method called sample-based
regularization (SBR), which does not rely on the source's knowledge during
training. With SBR, we suggest a new training framework for transfer learning.
Experimental results show that our framework outperforms existing methods in
various configurations. | [
"cs.LG",
"stat.ML"
] |
Mutual information is widely applied to learn latent representations of
observations, whilst its implications for classification neural networks remain
to be better explained. We show that optimising the parameters of
classification neural networks with softmax cross-entropy is equivalent to
maximising the mutual information between inputs and labels under the balanced
data assumption. Through experiments on synthetic and real datasets, we show
that softmax cross-entropy can estimate mutual information approximately. When
applied to image classification, this relation helps approximate the point-wise
mutual information between an input image and a label without modifying the
network structure. To this end, we propose infoCAM, informative class
activation map, which highlights regions of the input image that are the most
relevant to a given label based on differences in information. The activation
map helps localise the target object in an input image. Through experiments on
the semi-supervised object localisation task with two real-world datasets, we
evaluate the effectiveness of our information-theoretic approach. | [
"cs.LG",
"cs.CV",
"stat.ML"
] |
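As a quick illustration of the relation claimed in the mutual-information abstract above: with K balanced classes, H(Y) = log K, and the softmax cross-entropy estimates H(Y|X), so I(X;Y) = H(Y) - H(Y|X) can be approximated as log K minus the average cross-entropy. The sketch below assumes generic classifier logits; the random inputs are placeholders, not the paper's experiments.

```python
import numpy as np

def mi_estimate_from_softmax_ce(logits, labels):
    """Approximate I(X; Y) as H(Y) - CE, assuming balanced labels.

    logits: (n, K) pre-softmax scores from any classifier (placeholder here).
    labels: (n,) integer class labels in [0, K).
    """
    n, k = logits.shape
    # Numerically stable log-softmax.
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    ce = -log_probs[np.arange(n), labels].mean()   # estimates H(Y|X)
    return np.log(k) - ce                          # H(Y) = log K for balanced labels

# Toy usage with random logits standing in for a trained network's outputs.
rng = np.random.default_rng(0)
print(mi_estimate_from_softmax_ce(rng.normal(size=(1000, 10)),
                                  rng.integers(0, 10, size=1000)))
```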
We present Tesla-Rapture, a gesture recognition interface for point clouds
generated by mmWave radars. State-of-the-art gesture recognition models are
either too resource-consuming or not sufficiently accurate for integration into
real-life scenarios on wearable or constrained equipment such as IoT devices
(e.g. Raspberry Pi), XR hardware (e.g. HoloLens), or smartphones. To tackle
this issue, we developed Tesla, a Message Passing Neural Network (MPNN) graph
convolution approach for mmWave radar point clouds. The model outperforms the
state of the art on two datasets in terms of accuracy while reducing the
computational complexity and, hence, the execution time. In particular, the
approach is able to predict a gesture almost 8 times faster than the most
accurate competitor. Our performance evaluation in different scenarios
(environments, angles, distances) shows that Tesla generalizes well and
improves the accuracy up to 20% in challenging scenarios like a through-wall
setting and sensing at extreme angles. Utilizing Tesla, we develop
Tesla-Rapture, a real-time implementation using a mmWave radar on a Raspberry
Pi 4, and evaluate its accuracy and time complexity. We also publish the source
code, the trained models, and the implementation of the model for embedded
devices. | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
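A minimal sketch of a generic message-passing layer over a k-nearest-neighbour graph built from point coordinates, in the spirit of the MPNN graph convolution mentioned in the Tesla abstract above; the layer sizes, neighbourhood size and aggregation are assumptions, not the paper's Tesla architecture.

```python
import torch
import torch.nn as nn

class PointMessagePassing(nn.Module):
    """One generic message-passing step on a point cloud graph (sketch)."""

    def __init__(self, in_dim, out_dim, k=8):
        super().__init__()
        self.k = k
        # The message MLP sees the neighbour feature and the relative position.
        self.message = nn.Sequential(nn.Linear(in_dim + 3, out_dim), nn.ReLU())
        self.update = nn.Linear(in_dim + out_dim, out_dim)

    def forward(self, xyz, feats):
        # xyz: (N, 3) point coordinates, feats: (N, in_dim) point features.
        dists = torch.cdist(xyz, xyz)                                 # pairwise distances
        idx = dists.topk(self.k + 1, largest=False).indices[:, 1:]    # k neighbours, self excluded
        rel_pos = xyz[idx] - xyz.unsqueeze(1)                         # (N, k, 3)
        msg = self.message(torch.cat([feats[idx], rel_pos], dim=-1))
        agg = msg.max(dim=1).values                                   # max-aggregate messages
        return self.update(torch.cat([feats, agg], dim=-1))

# Toy usage on a random 64-point cloud with 4-dimensional point features.
layer = PointMessagePassing(in_dim=4, out_dim=16)
print(layer(torch.rand(64, 3), torch.rand(64, 4)).shape)  # torch.Size([64, 16])
```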
A natural approach to analyzing interaction data of the form
"what-connects-to-what-when" is to create a time series (or rather a sequence)
of graphs through temporal discretization (bandwidth selection) and spatial
discretization (vertex contraction). Such discretization, together with
non-negative factorization techniques, can be useful for obtaining a clustering
of graphs. Motivating applications for clustering graphs (as opposed to
clustering vertices) can be found in neuroscience and in social network
analysis, and such clustering can also be used to enhance community detection
(i.e., vertex clustering) by conditioning on the cluster labels. In this paper, we
formulate a problem of clustering of graphs as a model selection problem. Our
approach involves information criteria, non-negative matrix factorization and
singular value thresholding, and we illustrate our techniques using real and
simulated data. | [
"stat.ML"
] |
Real data often come with multiple modalities or from multiple heterogeneous
sources, thus forming so-called multi-view data, which has received increasing
attention in machine learning. Multi-view clustering (MVC) has become an
important paradigm for such data. In real-world applications, some views often
suffer from missing instances. Clustering on such multi-view datasets is called
incomplete multi-view clustering (IMC) and is quite challenging. To date, though
many approaches have been developed, most of them are offline and have high
computational and memory costs, especially for large-scale datasets. To address
this problem, in this paper we propose a One-Pass Incomplete Multi-view
Clustering framework (OPIMC). With the help of regularized matrix factorization
and weighted matrix factorization, OPIMC can deal with this problem relatively
easily. Unlike the only existing online IMC method, OPIMC can directly obtain
clustering results and effectively determine the termination of the iteration
process by introducing two global statistics. Finally, extensive
experiments conducted on four real datasets demonstrate the efficiency and
effectiveness of the proposed OPIMC method. | [
"cs.LG",
"stat.ML"
] |
To ensure the security of the general public, crime prevention is one of the
highest priorities for any government. An accurate crime prediction model can
help the government and law enforcement agencies prevent violence, detect
criminals in advance, allocate government resources, and recognize problems
that cause crime. To construct any future-oriented tool, it is essential to
examine and understand crime patterns as early as possible. In this paper, I
analyzed a real-world crime and accident dataset of Denver County, USA, from
January 2014 to May 2019, which contains 478,578 incidents. This project aims
to predict and highlight the trends of occurrence that will, in return,
support law enforcement agencies and the government in discovering preventive
measures from the prediction rates. At first, I apply several statistical
analyses supported by several data visualization approaches. Then, I implement
various classification algorithms such as Random Forest, Decision Tree,
AdaBoost Classifier, Extra Tree Classifier, Linear Discriminant Analysis,
K-Neighbors Classifiers, and 4 Ensemble Models to classify 15 different classes
of crimes. The outcomes are captured using two popular test methods: train-test
split and k-fold cross-validation. Moreover, to evaluate the performance
thoroughly, I also utilize precision, recall, F1-score, Mean Squared Error
(MSE), ROC curves, and paired t-tests. Except for the AdaBoost classifier, most
of the algorithms exhibit satisfactory accuracy. Random Forest, Decision Tree,
and Ensemble Models 1, 3, and 4 even produce more than 90% accuracy. Among all
the approaches, Ensemble Model 4 presented superior results on every
evaluation basis. This study could be useful for raising people's awareness
regarding the locations of occurrences and for assisting security agencies in
predicting future outbreaks of violence in a specific area within a particular time. | [
"cs.LG",
"cs.CY",
"stat.ML"
] |
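A minimal sketch of the evaluation protocol described in the crime-prediction abstract above (a hold-out train-test split plus k-fold cross-validation with a Random Forest); the feature matrix and labels below are placeholders, not the Denver dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.metrics import classification_report

# Placeholder data standing in for engineered incident features and 15 crime classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 12))
y = rng.integers(0, 15, size=2000)

# Protocol 1: hold-out train-test split.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te), zero_division=0))

# Protocol 2: k-fold cross-validation on the full set.
scores = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5)
print("5-fold accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```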
Illicit drug trafficking via social media sites such as Instagram has become
a severe problem, thus drawing a great deal of attention from law enforcement
and public health agencies. How to identify illicit drug dealers from social
media data has remained a technical challenge due to the following reasons. On
the one hand, the available data are limited because of privacy concerns with
crawling social media sites; on the other hand, the diversity of drug dealing
patterns makes it difficult to reliably distinguish drug dealers from common
drug users. Unlike existing methods that focus on posting-based detection, we
propose to tackle the problem of illicit drug dealer identification by
constructing a large-scale multimodal dataset named Identifying Drug Dealers on
Instagram (IDDIG). In total, nearly 4,000 user accounts, of which over 1,400
belong to drug dealers, have been collected from Instagram with multiple data
sources, including post comments, post images, homepage bio, and homepage images. We
then design a quadruple-based multimodal fusion method to combine the multiple
data sources associated with each user account for drug dealer identification.
Experimental results on the constructed IDDIG dataset demonstrate the
effectiveness of the proposed method in identifying drug dealers (almost 95%
accuracy). Moreover, we have developed a hashtag-based community detection
technique for discovering evolving patterns, especially those related to
geography and drug types. | [
"cs.LG",
"eess.IV"
] |
In this paper, we present a new explainability formalism designed to explain
how each input variable of a test set impacts the predictions of machine
learning models. Hence, we propose a group explainability formalism for trained
machine learning decision rules, based on their response to the variability of
the input variables distribution. In order to emphasize the impact of each
input variable, this formalism uses an information theory framework that
quantifies the influence of all input-output observations based on entropic
projections. This is thus the first unified and model-agnostic formalism
enabling data scientists to interpret the dependence between the input
variables, their impact on the prediction errors, and their influence on the
output predictions. Convergence rates of the entropic projections are provided
in the large sample case. Most importantly, we prove that computing an
explanation in our framework has a low algorithmic complexity, making it
scalable to real-life large datasets. We illustrate our strategy by explaining
complex decision rules learned by using XGBoost, Random Forest or Deep Neural
Network classifiers on various datasets such as Adult income, MNIST and CelebA.
We finally clarify how it differs from the explainability strategies
\textit{LIME} and \textit{SHAP}, which are based on single observations. Results
can be reproduced using the freely distributed Python toolbox available at
https://gems-ai.com. | [
"stat.ML",
"cs.LG"
] |
The self-attention mechanism has recently achieved impressive advances in the
Natural Language Processing (NLP) and image processing domains. Its permutation-invariance
property makes it ideally suited to point cloud processing.
Inspired by this remarkable success, we propose an end-to-end architecture,
dubbed Cross-Level Cross-Scale Cross-Attention Network (CLCSCANet), for point
cloud representation learning. First, a point-wise feature pyramid module is
introduced to hierarchically extract features from different scales or
resolutions. Then a cross-level cross-attention is designed to model long-range
inter-level and intra-level dependencies. Finally, we develop a cross-scale
cross-attention module to capture interactions between-and-within scales for
representation enhancement. Comprehensive experimental evaluation shows that,
compared with state-of-the-art approaches, our network obtains competitive
performance on challenging 3D object classification and point cloud
segmentation tasks. | [
"cs.CV",
"cs.MM"
] |
Semantic embeddings have advanced the state of the art for countless natural
language processing tasks, and various extensions to multimodal domains, such
as visual-semantic embeddings, have been proposed. While the power of
visual-semantic embeddings comes from the distillation and enrichment of
information through machine learning, their inner workings are poorly
understood and there is a shortage of analysis tools. To address this problem,
we generalize the notion of probing tasks to the visual-semantic case. To this
end, we (i) discuss the formalization of probing tasks for embeddings of
image-caption pairs, (ii) define three concrete probing tasks within our
general framework, (iii) train classifiers to probe for those properties, and
(iv) compare various state-of-the-art embeddings under the lens of the proposed
probing tasks. Our experiments reveal up to a 12% increase in accuracy for
visual-semantic embeddings compared to the corresponding unimodal embeddings,
which suggests that the text and image dimensions represented in the former do
complement each other. | [
"cs.LG",
"cs.CL",
"cs.CV"
] |
We consider the problem of sequential graph topology change-point detection
from graph signals. We assume that signals on the nodes of the graph are
regularized by the underlying graph structure via a graph filtering model,
which we then leverage to distill the graph topology change-point detection
problem to a subspace detection problem. We demonstrate how prior information
on the spectral signature of the post-change graph can be incorporated to
implicitly denoise the observed sequential data, thus leading to a natural
CUSUM-based algorithm for change-point detection. Numerical experiments
illustrate the performance of our proposed approach, particularly underscoring
the benefits of (potentially noisy) prior information. | [
"stat.ML",
"cs.LG",
"eess.SP"
] |
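A minimal sketch of a one-sided CUSUM detector of the kind mentioned in the change-point abstract above, applied to a generic scalar detection statistic; the drift and threshold values are illustrative assumptions, and the paper's subspace-detection statistic itself is not reproduced here.

```python
import numpy as np

def cusum_detector(stats, drift=0.5, threshold=8.0):
    """Return the first index where the cumulative-sum statistic exceeds the threshold.

    stats: 1D array of per-sample detection statistics (larger after the change).
    drift: allowance subtracted at each step to suppress pre-change accumulation.
    """
    s = 0.0
    for t, x in enumerate(stats):
        s = max(0.0, s + x - drift)   # one-sided CUSUM recursion
        if s > threshold:
            return t                  # declared change point
    return None

# Toy stream: the mean shifts from 0 to 1.5 at t = 100.
rng = np.random.default_rng(0)
stream = np.concatenate([rng.normal(0.0, 1.0, 100), rng.normal(1.5, 1.0, 100)])
print(cusum_detector(stream))
```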
The mismatch between the source and target noisy corpora severely
hinders the practical use of machine-learning-based voice activity detection
(VAD). In this paper, we try to address this problem from the transfer learning
perspective. Transfer learning tries to find a common learning machine or a
common feature subspace that is shared by both the source corpus and the target
corpus. The denoising deep neural network is used as the learning machine.
Three transfer techniques, which aim to learn common feature representations,
are used for analysis. Experimental results demonstrate the effectiveness of
the transfer learning schemes on the mismatch problem. | [
"cs.LG"
] |
In recent years, with the advent of massive computational power and the
availability of huge amounts of data, deep neural networks have enabled the
exploration of uncharted areas in several domains. At times, however, they
under-perform due to insufficient data, poor data quality, data that does not
cover the domain broadly, etc. Knowledge-based systems leverage expert
knowledge for making decisions and taking suitable actions. Such systems retain
interpretability in the decision-making process. This paper focuses on
exploring techniques to integrate expert knowledge into deep neural networks
for sequence-to-sequence and time series models to improve their performance
and interpretability. | [
"cs.LG",
"cs.AI",
"cs.CL"
] |
In recent years, cross-spectral iris recognition has emerged as a promising
biometric approach to establish the identity of individuals. However, matching
iris images acquired at different spectral bands (i.e., matching a visible
(VIS) iris probe to a gallery of near-infrared (NIR) iris images or vice versa)
shows a significant performance degradation when compared to intraband NIR
matching. Hence, in this paper, we have investigated a range of deep
convolutional generative adversarial network (DCGAN) architectures to further
improve the accuracy of cross-spectral iris recognition methods. Moreover,
unlike the existing works in the literature, we introduce a resolution
difference into the classical cross-spectral matching problem domain. We have
developed two different techniques using the conditional generative adversarial
network (cGAN) as a backbone architecture for cross-spectral iris matching. In
the first approach, we simultaneously address the cross-resolution and
cross-spectral matching problem by training a cGAN that jointly translates
cross-resolution as well as cross-spectral tasks to the same resolution and
within the same spectrum. In the second approach, we design a coupled
generative adversarial network (cpGAN) architecture consisting of a pair of
cGAN modules that project the VIS and NIR iris images into a low-dimensional
embedding domain to ensure maximum pairwise similarity between the feature
vectors from the two iris modalities of the same subject. | [
"cs.CV"
] |
We propose a deep representation of appearance, i.e., the relation of color,
surface orientation, viewer position, material and illumination. Previous
approaches have used deep learning to extract classic appearance
representations relating to reflectance model parameters (e.g., Phong)
or illumination (e.g., HDR environment maps). We suggest to directly represent
appearance itself as a network we call a Deep Appearance Map (DAM). This is a 4D
generalization over 2D reflectance maps, which held the view direction fixed.
First, we show how a DAM can be learned from images or video frames and later
be used to synthesize appearance, given new surface orientations and viewer
positions. Second, we demonstrate how another network can be used to map from
an image or video frames to a DAM network to reproduce this appearance, without
using a lengthy optimization such as stochastic gradient descent
(learning-to-learn). Finally, we show the example of an appearance
estimation-and-segmentation task, mapping from an image showing multiple
materials to multiple deep appearance maps. | [
"cs.CV",
"cs.GR"
] |
Candlesticks are graphical representations of price movements for a given
period. Traders can discover the trend of an asset by looking at candlestick
patterns. Although deep convolutional neural networks have achieved great
success in recognizing candlestick patterns, their reasoning remains hidden
inside a black box, so traders cannot be sure what the model has learned. In
this contribution, we provide a framework to explain how the learned model
determines specific candlestick patterns in time series. Based on local-search
adversarial attacks, we show that the learned model perceives candlestick
patterns in a way similar to a human trader. | [
"cs.LG",
"cs.CV",
"stat.ML"
] |
In this paper, we propose a transformer based approach for visual grounding.
Unlike previous proposal-and-rank frameworks that rely heavily on pretrained
object detectors or proposal-free frameworks that upgrade an off-the-shelf
one-stage detector by fusing textual embeddings, our approach is built on top
of a transformer encoder-decoder and is independent of any pretrained detectors
or word embedding models. Termed VGTR -- Visual Grounding with TRansformers,
our approach is designed to learn semantic-discriminative visual features under
the guidance of the textual description without harming their localization
ability. This information flow enables our VGTR to capture context-level
semantics of both the vision and language modalities, allowing it to aggregate
the accurate visual cues implied by the description and locate the object
instance of interest. Experiments show that our method outperforms
state-of-the-art proposal-free approaches by a considerable margin on five
benchmarks while maintaining fast inference speed. | [
"cs.CV"
] |
This paper presents a new baseline for visual question answering task. Given
an image and a question in natural language, our model produces accurate
answers according to the content of the image. Our model, while being
architecturally simple and relatively small in terms of trainable parameters,
sets a new state of the art on both the unbalanced and balanced VQA benchmarks. On
VQA 1.0 open ended challenge, our model achieves 64.6% accuracy on the
test-standard set without using additional data, an improvement of 0.4% over
state of the art, and on newly released VQA 2.0, our model scores 59.7% on
validation set outperforming best previously reported results by 0.5%. The
results presented in this paper are especially interesting because very similar
models have been tried before but significantly lower performance was
reported. In light of the new results, we hope to see more meaningful research
on visual question answering in the future. | [
"cs.CV"
] |
Recent work has proposed the concept of backdoor attacks on deep neural
networks (DNNs), where misbehaviors are hidden inside "normal" models, only to
be triggered by very specific inputs. In practice, however, these attacks are
difficult to perform and highly constrained by sharing of models through
transfer learning. Adversaries have a small window during which they must
compromise the student model before it is deployed. In this paper, we describe
a significantly more powerful variant of the backdoor attack, latent backdoors,
where hidden rules can be embedded in a single "Teacher" model, and
automatically inherited by all "Student" models through the transfer learning
process. We show that latent backdoors can be quite effective in a variety of
application contexts, and validate its practicality through real-world attacks
against traffic sign recognition, iris identification of lab volunteers, and
facial recognition of public figures (politicians). Finally, we evaluate 4
potential defenses, and find that only one is effective in disrupting latent
backdoors, but might incur a cost in classification accuracy as tradeoff. | [
"cs.LG",
"cs.CR"
] |
Temporal-difference and Q-learning play a key role in deep reinforcement
learning, where they are empowered by expressive nonlinear function
approximators such as neural networks. At the core of their empirical successes
is the learned feature representation, which embeds rich observations, e.g.,
images and texts, into the latent space that encodes semantic structures.
Meanwhile, the evolution of such a feature representation is crucial to the
convergence of temporal-difference and Q-learning.
In particular, temporal-difference learning converges when the function
approximator is linear in a feature representation, which is fixed throughout
learning, and possibly diverges otherwise. We aim to answer the following
questions: When the function approximator is a neural network, how does the
associated feature representation evolve? If it converges, does it converge to
the optimal one?
We prove that, utilizing an overparameterized two-layer neural network,
temporal-difference and Q-learning globally minimize the mean-squared projected
Bellman error at a sublinear rate. Moreover, the associated feature
representation converges to the optimal one, generalizing the previous analysis
of Cai et al. (2019) in the neural tangent kernel regime, where the associated
feature representation stabilizes at the initial one. The key to our analysis
is a mean-field perspective, which connects the evolution of a
finite-dimensional parameter to its limiting counterpart over an
infinite-dimensional Wasserstein space. Our analysis generalizes to soft
Q-learning, which is further connected to policy gradient. | [
"cs.LG",
"math.OC",
"stat.ML"
] |
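A minimal sketch of semi-gradient TD(0) with a small two-layer network as the value-function approximator, illustrating the kind of temporal-difference update analysed in the abstract above; the toy transition, network width and step size are placeholder assumptions, and this is not the paper's mean-field analysis.

```python
import torch
import torch.nn as nn

# Two-layer value network V(s) for a 4-dimensional toy state.
value_net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 1))
optim = torch.optim.SGD(value_net.parameters(), lr=1e-2)
gamma = 0.99

def td0_step(s, r, s_next, done):
    """One semi-gradient TD(0) update: w <- w + lr * (r + gamma*V(s') - V(s)) * grad V(s)."""
    v = value_net(s)
    with torch.no_grad():                       # the bootstrap target is not differentiated
        target = r + gamma * value_net(s_next) * (1.0 - done)
    loss = 0.5 * (target - v).pow(2).mean()     # its gradient is the semi-gradient TD update
    optim.zero_grad()
    loss.backward()
    optim.step()
    return loss.item()

# Toy transition sampled at random in place of a real environment.
s, s_next = torch.rand(1, 4), torch.rand(1, 4)
print(td0_step(s, torch.tensor([[1.0]]), s_next, torch.tensor([[0.0]])))
```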
Modern deep learning models have achieved great success in predictive
accuracy for many data modalities. However, their application to many
real-world tasks is restricted by poor uncertainty estimates, such as
overconfidence on out-of-distribution (OOD) data and ungraceful failing under
distributional shift. Previous benchmarks have found that ensembles of neural
networks (NNs) are typically the best calibrated models on OOD data. Inspired
by this, we leverage recent theoretical advances that characterize the
function-space prior of an ensemble of infinitely-wide NNs as a Gaussian
process, termed the neural network Gaussian process (NNGP). We use the NNGP
with a softmax link function to build a probabilistic model for multi-class
classification and marginalize over the latent Gaussian outputs to sample from
the posterior. This gives us a better understanding of the implicit prior NNs
place on function space and allows a direct comparison of the calibration of
the NNGP and its finite-width analogue. We also examine the calibration of
previous approaches to classification with the NNGP, which treat classification
problems as regression to the one-hot labels. In this case the Bayesian
posterior is exact, and we compare several heuristics to generate a categorical
distribution over classes. We find these methods are well calibrated under
distributional shift. Finally, we consider an infinite-width final layer in
conjunction with a pre-trained embedding. This replicates the important
practical use case of transfer learning and allows scaling to significantly
larger datasets. As well as achieving competitive predictive accuracy, this
approach is better calibrated than its finite width analogue. | [
"stat.ML",
"cs.LG"
] |
Large-scale pretraining of visual representations has led to state-of-the-art
performance on a range of benchmark computer vision tasks, yet the benefits of
these techniques at extreme scale in complex production systems have been
relatively unexplored. We consider the case of a popular visual discovery
product, where these representations are trained with multi-task learning, from
use-case specific visual understanding (e.g. skin tone classification) to
general representation learning for all visual content (e.g. embeddings for
retrieval). In this work, we describe how we (1) generate a dataset with over a
billion images via large weakly-supervised pretraining to improve the
performance of these visual representations, and (2) leverage Transformers to
replace the traditional convolutional backbone, with insights into both system
and performance improvements, especially at 1B+ image scale. To support this
backbone model, we detail a systematic approach to deriving weakly-supervised
image annotations from heterogeneous text signals, demonstrating the benefits of
clustering techniques to handle the long-tail distribution of image labels.
Through a comprehensive study of offline and online evaluation, we show that
large-scale Transformer-based pretraining provides significant benefits to
industry computer vision applications. The model is deployed in a production
visual shopping system, with 36% improvement in top-1 relevance and 23%
improvement in click-through volume. We conduct extensive experiments to better
understand the empirical relationships between Transformer-based architectures,
dataset scale, and the performance of production vision systems. | [
"cs.CV",
"cs.AI",
"cs.LG"
] |
Current deep learning models for classification tasks in computer vision are
trained using mini-batches. In the present article, we take advantage of the
relationships between samples in a mini-batch, using graph neural networks to
aggregate information from similar images. This helps mitigate the adverse
effects of alterations to the input images on classification performance.
Diverse experiments on image-based object and scene classification show that
this approach not only improves a classifier's performance but also increases
its robustness to image perturbations and adversarial attacks. Further, we also
show that mini-batch graph neural networks can help to alleviate the problem of
mode collapse in Generative Adversarial Networks. | [
"cs.CV",
"cs.AI"
] |
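A minimal sketch of aggregating mini-batch features over a similarity graph, in the spirit of the mini-batch graph neural network described above; the k-nearest-neighbour construction, row normalisation and feature dimensions are assumptions rather than the paper's exact design.

```python
import torch
import torch.nn.functional as F

def batch_graph_aggregate(feats, k=4):
    """Mix each sample's features with those of its k most similar batch neighbours.

    feats: (B, D) per-image features from any backbone.
    """
    normed = F.normalize(feats, dim=1)
    sim = normed @ normed.t()                                    # cosine similarities
    idx = sim.topk(k + 1, dim=1).indices                         # self + k neighbours
    adj = torch.zeros_like(sim).scatter_(1, idx, 1.0)            # binary adjacency
    adj = adj / adj.sum(dim=1, keepdim=True)                     # row-normalise
    return adj @ feats                                           # one aggregation step

# Toy usage: 32 images with 128-dimensional backbone features.
feats = torch.randn(32, 128)
print(batch_graph_aggregate(feats).shape)  # torch.Size([32, 128])
```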
In many practical applications, such as fraud detection, credit risk modeling
or medical decision making, classification models for assigning instances to a
predefined set of classes are required to be both precise and interpretable.
Linear modeling methods such as logistic regression are often
adopted, since they offer an acceptable balance between precision and
interpretability. Linear methods, however, are not well equipped to handle
categorical predictors with high-cardinality or to exploit non-linear relations
in the data. As a solution, data preprocessing methods such as
weight-of-evidence are typically used for transforming the predictors. The
binning procedure that underlies the weight-of-evidence approach, however, has
been little researched and typically relies on ad-hoc or expert driven
procedures. The objective in this paper, therefore, is to propose a formalized,
data-driven and powerful method.
To this end, we explore the discretization of continuous variables through
the binning of spline functions, which allows for capturing non-linear effects
in the predictor variables and yields highly interpretable predictors taking
only a small number of discrete values. Moreover, we extend upon the
weight-of-evidence approach and propose to estimate the proportions using
shrinkage estimators. Together, this offers an improved ability to exploit both
non-linear and categorical predictors for achieving increased classification
precision, while maintaining interpretability of the resulting model and
decreasing the risk of overfitting.
We present the results of a series of experiments in a fraud detection
setting, which illustrate the effectiveness of the presented approach. We
facilitate reproduction of the presented results and adoption of the proposed
approaches by providing both the dataset and the code for implementing the
experiments and the presented approach. | [
"stat.ML",
"cs.LG"
] |
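A minimal sketch of the classic weight-of-evidence transformation that the abstract above builds on, for a binary target; the quantile binning and the optional shrinkage toward the overall event rate are rough illustrations of the idea, not the paper's spline-based binning or shrinkage estimator.

```python
import numpy as np
import pandas as pd

def weight_of_evidence(x, y, n_bins=5, shrinkage=0.0):
    """Per-bin WoE = ln((event share in bin) / (non-event share in bin)) for a binary target y."""
    bins = pd.qcut(x, q=n_bins, duplicates="drop")   # simple quantile binning
    overall = y.mean()
    table = {}
    for b, grp in y.groupby(bins, observed=True):
        # Shrink the within-bin event rate toward the overall rate (crude regularisation).
        rate = (grp.mean() * len(grp) + shrinkage * overall) / (len(grp) + shrinkage)
        rate = float(np.clip(rate, 1e-6, 1 - 1e-6))
        table[b] = np.log((rate / overall) / ((1 - rate) / (1 - overall)))
    return table

# Toy usage with a synthetic binary (fraud-like) target.
rng = np.random.default_rng(0)
x = pd.Series(rng.normal(size=5000))
y = pd.Series((rng.random(5000) < 1 / (1 + np.exp(-2 * x))).astype(int))
print(weight_of_evidence(x, y, n_bins=5, shrinkage=20.0))
```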
In this paper we introduce a new approach to computing hidden features of
sampled vector fields. The basic idea is to convert the vector field data to a
graph structure and use tools designed for automatic, unsupervised analysis of
graphs. Using a few data sets we show that the collected features of the vector
fields are correlated with the dynamics known for the analytic models that
generate the data. In particular, the method may be useful in the analysis of
data sets where the analytic model is poorly understood or not known. | [
"cs.LG",
"math.AT",
"math.DS"
] |
With the recent surge in the research of vision transformers, they have
demonstrated remarkable potential for various challenging computer vision
applications, such as image recognition, point cloud classification as well as
video understanding. In this paper, we present empirical results for training a
stronger video vision transformer on the EPIC-KITCHENS-100 Action Recognition
dataset. Specifically, we explore training techniques for video vision
transformers, such as augmentations, resolutions as well as initialization,
etc. With our training recipe, a single ViViT model achieves 47.4% on the
validation set of the EPIC-KITCHENS-100 dataset, outperforming what
is reported in the original paper by 3.4%. We found that video transformers are
especially good at predicting the noun in the verb-noun action prediction task.
This makes the overall action prediction accuracy of video transformers notably
higher than that of convolutional ones. Surprisingly, even the best video transformers
underperform the convolutional networks on the verb prediction. Therefore, we
combine the video vision transformers and some of the convolutional video
networks and present our solution to the EPIC-KITCHENS-100 Action Recognition
competition. | [
"cs.CV"
] |
Ferrograph image segmentation is of significance for obtaining features of
wear particles. However, wear particles usually overlap in the form of debris
chains, which makes it challenging to segment wear debris. An overlapping
wear particle segmentation network (OWPSNet) is proposed in this study to
segment the overlapped debris chains. The proposed deep learning model includes
three parts: a region segmentation network, an edge detection network and a
feature refine module. The region segmentation network is an improved U-shaped
network, and it is applied to separate the wear debris from the background of
the ferrograph image. The edge detection network is used to detect the edges of
wear particles. Then, the feature refine module combines low-level features and
high-level semantic features to obtain the final results. In order to solve the
problem of sample imbalance, we propose a square dice loss function to
optimize the model. Finally, extensive experiments have been carried out on a
ferrograph image dataset. The results show that the proposed model is capable
of separating overlapping wear particles. Moreover, the proposed square dice
loss function improves the segmentation results, especially along wear
particle edges. | [
"cs.CV"
] |
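A minimal sketch of a soft Dice loss for binary segmentation masks, of the family the square dice loss above belongs to; squaring the terms in the denominator is one common variant, and whether it matches the paper's exact formulation is an assumption.

```python
import torch

def soft_dice_loss(pred, target, eps=1e-6, squared=True):
    """Soft Dice loss: 1 - 2*|P∩T| / (|P| + |T|), optionally with squared denominator terms.

    pred: (B, H, W) probabilities in [0, 1]; target: (B, H, W) binary masks.
    """
    pred = pred.flatten(1)
    target = target.flatten(1)
    inter = (pred * target).sum(dim=1)
    if squared:                         # "square" variant: sums of squares in the denominator
        denom = (pred ** 2).sum(dim=1) + (target ** 2).sum(dim=1)
    else:
        denom = pred.sum(dim=1) + target.sum(dim=1)
    dice = (2 * inter + eps) / (denom + eps)
    return (1 - dice).mean()

# Toy usage with random predictions and masks.
pred = torch.rand(2, 64, 64)
mask = (torch.rand(2, 64, 64) > 0.5).float()
print(soft_dice_loss(pred, mask).item())
```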
Building a scalable machine learning system for unsupervised anomaly
detection via representation learning is highly desirable. One of the prevalent
methods is using a reconstruction error from variational autoencoder (VAE) via
maximizing the evidence lower bound. We revisit VAE from the perspective of
information theory to provide some theoretical foundations on using the
reconstruction error, and finally arrive at a simpler and more effective model
for anomaly detection. In addition, to enhance the effectiveness of detecting
anomalies, we incorporate a practical model uncertainty measure into the
metric. We show empirically the competitive performance of our approach on
benchmark datasets. | [
"cs.LG",
"cs.IT",
"math.IT",
"stat.ML"
] |
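A minimal sketch of a reconstruction-error anomaly score with a simple uncertainty term, in the spirit of the VAE-based abstract above; the encoder and decoder are assumed to be already trained callables, and averaging reconstruction errors over several latent samples (with their spread as an uncertainty proxy) is an illustrative choice, not the paper's exact metric.

```python
import torch

def anomaly_score(x, encoder, decoder, n_samples=8, beta=1.0):
    """Score = mean reconstruction error over latent samples + beta * its std (uncertainty proxy).

    encoder(x) is assumed to return (mu, logvar); decoder(z) returns a reconstruction of x.
    """
    mu, logvar = encoder(x)
    errs = []
    for _ in range(n_samples):
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()     # reparameterised latent sample
        recon = decoder(z)
        errs.append(((recon - x) ** 2).flatten(1).mean(dim=1))   # per-example squared error
    errs = torch.stack(errs)                                     # (n_samples, batch)
    return errs.mean(dim=0) + beta * errs.std(dim=0)

# Toy usage with linear stand-ins for a trained encoder/decoder.
enc_lin = torch.nn.Linear(32, 8)
dec_lin = torch.nn.Linear(8, 32)
encoder = lambda x: (enc_lin(x), torch.zeros_like(enc_lin(x)))
decoder = lambda z: dec_lin(z)
print(anomaly_score(torch.randn(4, 32), encoder, decoder))
```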
This paper proposes a novel domain adaptation algorithm to handle the
challenges posed by the satellite and aerial imagery, and demonstrates its
effectiveness on the built-up region segmentation problem. Built-up area
estimation is an important component in understanding the human impact on the
environment, the effect of public policy, and general urban population
analysis. The diverse nature of aerial and satellite imagery and lack of
labeled data covering this diversity makes machine learning algorithms
difficult to generalize for such tasks, especially across multiple domains. On
the other hand, due to the lack of strong spatial context and structure, in
comparison to the ground imagery, the application of existing unsupervised
domain adaptation methods results in the sub-optimal adaptation. We thoroughly
study the limitations of existing domain adaptation methods and propose a
weakly-supervised adaptation strategy where we assume image-level labels are
available for the target domain. More specifically, we design a built-up area
segmentation network (as encoder-decoder), with an image classification head
added to guide the adaptation. The devised system is able to address the
problem of visual differences in multiple satellite and aerial imagery
datasets, ranging from high resolution (HR) to very high resolution (VHR). A
realistic and challenging HR dataset is created by hand-tagging 73.4 sq. km
of Rwanda, capturing a variety of built-up structures over different terrain.
The developed dataset is spatially rich compared to existing datasets and
covers diverse built-up scenarios including built-up areas in forests and
deserts, mud houses, tin, and colored rooftops. Extensive experiments are
performed by adapting from a single-source domain to segment the target
domain. We achieve high gains in IoU, ranging from 11.6% to 52%, over the
existing state-of-the-art methods. | [
"cs.CV",
"eess.IV"
] |
Traffic signs are essential map features globally in the era of autonomous
driving and smart cities. To develop accurate and robust algorithms for traffic
sign detection and classification, a large-scale and diverse benchmark dataset
is required. In this paper, we introduce a traffic sign benchmark dataset of
100K street-level images around the world that encapsulates diverse scenes,
wide coverage of geographical locations, and varying weather and lighting
conditions and covers more than 300 manually annotated traffic sign classes.
The dataset includes 52K images that are fully annotated and 48K images that
are partially annotated. This is the largest and most diverse traffic sign
dataset consisting of images from all over the world with fine-grained annotations
of traffic sign classes. We have run extensive experiments to establish strong
baselines for both the detection and the classification tasks. In addition, we
have verified that the diversity of this dataset enables effective transfer
learning for existing large-scale benchmark datasets on traffic sign detection
and classification. The dataset is freely available for academic research:
https://www.mapillary.com/dataset/trafficsign. | [
"cs.CV"
] |
Reinforcement Learning (RL) with sparse rewards is a major challenge. We
propose \emph{Hindsight Trust Region Policy Optimization} (HTRPO), a new RL
algorithm that extends the highly successful TRPO algorithm with
\emph{hindsight} to tackle the challenge of sparse rewards. Hindsight refers to
the algorithm's ability to learn from information across goals, including ones
not intended for the current task. HTRPO leverages two main ideas. It
introduces QKL, a quadratic approximation to the KL divergence constraint on
the trust region, leading to reduced variance in KL divergence estimation and
improved stability in the policy update. It also presents Hindsight Goal
Filtering (HGF) to select conducive hindsight goals. In experiments, we
evaluate HTRPO in various sparse reward tasks, including simple benchmarks,
image-based Atari games, and simulated robot control. Ablation studies indicate
that QKL and HGF contribute greatly to learning stability and high performance.
Comparison results show that in all tasks, HTRPO consistently outperforms both
TRPO and HPG, a state-of-the-art algorithm for RL with sparse rewards. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
Bayesian optimization has demonstrated impressive success in finding the
optimum input x* and output f* = f(x*) = max f(x) of a black-box function f. In
some applications, however, the optimum output f* is known in advance and the
goal is to find the corresponding optimum input x*. In this paper, we consider
a new setting in BO in which the knowledge of the optimum output f* is
available. Our goal is to exploit the knowledge about f* to search for the
input x* efficiently. To achieve this goal, we first transform the Gaussian
process surrogate using the information about the optimum output. Then, we
propose two acquisition functions, called confidence bound minimization and
expected regret minimization. We show that our approaches work intuitively and
give quantitatively better performance than standard BO methods. We
demonstrate real applications in tuning a deep reinforcement learning algorithm
on the CartPole problem and XGBoost on the Skin Segmentation dataset, in which the
optimum values are publicly available. | [
"stat.ML",
"cs.LG"
] |
Model compression aims to deploy deep neural networks (DNN) to mobile devices
with limited computing power and storage resource. However, most of the
existing model compression methods rely on manually defined rules, which
require domain expertise. In this paper, we propose an Auto Graph
encoder-decoder Model Compression (AGMC) method that combines graph neural
networks (GNN) and reinforcement learning (RL) to find the best compression
policy. We model the target DNN as a graph and use GNN to learn the embeddings
of the DNN automatically. In our experiments, we first compared our method with
rule-based DNN embedding methods to show the graph auto encoder-decoder's
effectiveness. Our learning-based DNN embedding achieved better performance and
a higher compression ratio with fewer search steps. Moreover, we evaluated the
AGMC on CIFAR-10 and ILSVRC-2012 datasets and compared handcrafted and
learning-based model compression approaches. Our method outperformed
handcrafted and learning-based methods on ResNet-56 with 3.6% and 1.8% higher
accuracy, respectively. Furthermore, we achieved a higher compression ratio
than state-of-the-art methods on MobileNet-V2 with just 0.93% accuracy loss. | [
"cs.CV"
] |
In this work, we present the Text Conditioned Auxiliary Classifier Generative
Adversarial Network (TAC-GAN), a text-to-image Generative Adversarial Network
(GAN) for synthesizing images from their text descriptions. Former approaches
have tried to condition the generative process on textual data, but combining
this with class information, which is known to diversify the generated samples
and improve their structural coherence, has not been explored. We trained the
presented TAC-GAN model on the Oxford-102 dataset of flowers, and evaluated the
discriminability of the generated images with Inception-Score, as well as their
diversity using the Multi-Scale Structural Similarity Index (MS-SSIM). Our
approach outperforms the state-of-the-art models, i.e., its inception score is
3.45, corresponding to a relative increase of 7.8% compared to the recently
introduced StackGan. A comparison of the mean MS-SSIM scores of the training
and generated samples per class shows that our approach is able to generate
highly diverse images with an average MS-SSIM of 0.14 over all generated
classes. | [
"cs.CV"
] |
Purpose: Image classification is perhaps the most fundamental task in imaging
AI. However, labeling images is time-consuming and tedious. We have recently
demonstrated that reinforcement learning (RL) can classify 2D slices of MRI
brain images with high accuracy. Here we make two important steps toward
speeding up image classification: Firstly, we automatically extract class labels
from the clinical reports. Secondly, we extend our prior 2D classification work
to fully 3D image volumes from our institution. Hence, we proceed as follows:
in Part 1, we extract labels from reports automatically using the SBERT natural
language processing approach. Then, in Part 2, we use these labels with RL to
train a classification Deep-Q Network (DQN) for 3D image volumes.
Methods: For Part 1, we trained SBERT with 90 radiology report impressions.
We then used the trained SBERT to predict class labels for use in Part 2. In
Part 2, we applied multi-step image classification to allow for combined Deep-Q
learning using 3D convolutions and TD(0) Q learning. We trained on a set of 90
images. We tested on a separate set of 61 images, again using the classes
predicted from patient reports by the trained SBERT in Part 1. For comparison,
we also trained and tested a supervised deep learning classification network on
the same set of training and testing images using the same labels.
Results: Part 1: Upon training with the corpus of radiology reports, the
SBERT model had 100% accuracy for both normal and metastasis-containing scans.
Part 2: Then, using these labels, whereas the supervised approach quickly
overfit the training data and, as expected, performed poorly on the testing set
(66% accuracy, just over random guessing), the reinforcement learning approach
achieved an accuracy of 92%. The results were found to be statistically
significant, with a p-value of 3.1 x 10^-5. | [
"cs.CV",
"cs.AI"
] |
We address the problem of localisation of objects as bounding boxes in images
and videos with weak labels. This weakly supervised object localisation problem
has been tackled in the past using discriminative models where each object
class is localised independently from other classes. In this paper, a novel
framework based on Bayesian joint topic modelling is proposed, which differs
significantly from the existing ones in that: (1) All foreground object classes
are modelled jointly in a single generative model that encodes multiple object
co-existence so that "explaining away" inference can resolve ambiguity and lead
to better learning and localisation. (2) Image backgrounds are shared across
classes to better learn varying surroundings and "push out" objects of
interest. (3) Our model can be learned with a mixture of weakly labelled and
unlabelled data, allowing the large volume of unlabelled images on the Internet
to be exploited for learning. Moreover, the Bayesian formulation enables the
exploitation of various types of prior knowledge to compensate for the limited
supervision offered by weakly labelled data, as well as Bayesian domain
adaptation for transfer learning. Extensive experiments on the PASCAL VOC,
ImageNet and YouTube-Object videos datasets demonstrate the effectiveness of
our Bayesian joint model for weakly supervised object localisation. | [
"cs.CV"
] |
We present a versatile formulation of the convolution operation that we term
a "mapped convolution." The standard convolution operation implicitly samples
the pixel grid and computes a weighted sum. Our mapped convolution decouples
these two components, freeing the operation from the confines of the image grid
and allowing the kernel to process any type of structured data. As a test case,
we demonstrate its use by applying it to dense inference on spherical data. We
perform an in-depth study of existing spherical image convolution methods and
propose an improved sampling method for equirectangular images. Then, we
discuss the impact of data discretization when deriving a sampling function,
highlighting drawbacks of the cube map representation for spherical data.
Finally, we illustrate how mapped convolutions enable us to convolve directly
on a mesh by projecting the spherical image onto a geodesic grid and training
on the textured mesh. This method exceeds the state of the art for spherical
depth estimation by nearly 17%. Our findings suggest that mapped convolutions
can be instrumental in expanding the application scope of convolutional neural
networks. | [
"cs.CV"
] |
Humans explain inter-object relationships with semantic labels that
demonstrate a high-level understanding required to perform complex
Vision-Language tasks such as Visual Question Answering (VQA). However,
existing VQA models represent relationships as a combination of object-level
visual features which constrain a model to express interactions between objects
in a single domain, while the model is trying to solve a multi-modal task. In
this paper, we propose a general purpose semantic relationship parser which
generates a semantic feature vector for each subject-predicate-object triplet
in an image, and a Mutual and Self Attention (MSA) mechanism that learns to
identify relationship triplets that are important to answer the given question.
To motivate the significance of semantic relationships, we show an oracle
setting with ground-truth relationship triplets, where our model achieves a
~25% accuracy gain over the closest state-of-the-art model on the challenging
GQA dataset. Further, with our semantic parser, we show that our model
outperforms other comparable approaches on VQA and GQA datasets. | [
"cs.CV",
"cs.AI"
] |
Although object detection has reached a milestone thanks to the great success
of deep learning, scale variation remains the key challenge. Integrating
multi-level features has been proposed to alleviate this problem, as in the
classic Feature Pyramid Network (FPN) and its improvements. However, the
specifically designed feature integration modules of these methods may not have
the optimal architecture for feature fusion. Moreover, these models have fixed
architectures and data flow paths when fed with various samples, so they cannot
adjust to each kind of data. To overcome the above limitations, we propose a
Dynamic Sample-Individualized Connector (DSIC) for multi-scale object
detection. It dynamically adjusts network connections to fit different samples.
In particular, DSIC consists of two components: an Intra-scale Selection Gate
(ISG) and a Cross-scale Selection Gate (CSG). ISG adaptively extracts
multi-level features from the backbone as the input of feature integration. CSG
automatically activates informative data flow paths based on the multi-level
features. Furthermore, these two components are both plug-and-play and can be
embedded in any backbone. Experimental results demonstrate that the proposed
method outperforms the state-of-the-art methods. | [
"cs.CV"
] |
In the problem of domain generalization (DG), there are labeled training data
sets from several related prediction problems, and the goal is to make accurate
predictions on future unlabeled data sets that are not known to the learner.
This problem arises in several applications where data distributions fluctuate
because of environmental, technical, or other sources of variation. We
introduce a formal framework for DG, and argue that it can be viewed as a kind
of supervised learning problem by augmenting the original feature space with
the marginal distribution of feature vectors. While our framework has several
connections to conventional analysis of supervised learning algorithms, several
unique aspects of DG require new methods of analysis.
This work lays the learning theoretic foundations of domain generalization,
building on our earlier conference paper where the problem of DG was introduced
(Blanchard et al., 2011). We present two formal models of data generation,
corresponding notions of risk, and distribution-free generalization error
analysis. By focusing our attention on kernel methods, we also provide more
quantitative results and a universally consistent algorithm. An efficient
implementation is provided for this algorithm, which is experimentally compared
to a pooling strategy on one synthetic and three real-world data sets. | [
"stat.ML"
] |
Semantic segmentation is an important component in the perception systems of
autonomous vehicles. In this work, we adopt recent advances in both image and
point cloud segmentation to achieve a better accuracy in the task of segmenting
LiDAR scans. KPRNet improves the convolutional neural network architecture of
2D projection methods and utilizes KPConv to replace the commonly used
post-processing techniques with a learnable point-wise component which allows
us to obtain more accurate 3D labels. With these improvements our model
outperforms the current best method on the SemanticKITTI benchmark, reaching an
mIoU of 63.1. | [
"cs.CV",
"cs.LG"
] |
While many deep learning methods have seen significant success in tackling
the problem of domain adaptation and few-shot learning separately, far fewer
methods are able to jointly tackle both problems in Cross-Domain Few-Shot
Learning (CD-FSL). This problem is exacerbated under sharp domain shifts that
typify common computer vision applications. In this paper, we present a novel,
flexible and effective method to address the CD-FSL problem. Our method, called
Score-based Meta Transfer-Learning (SB-MTL), combines transfer-learning and
meta-learning by using a MAML-optimized feature encoder and a score-based Graph
Neural Network. First, we have a feature encoder with specific layers designed
to be fine-tuned. To do so, we apply a first-order MAML algorithm to find good
initializations. Second, instead of directly taking the classification scores
after fine-tuning, we interpret the scores as coordinates by mapping the
pre-softmax classification scores onto a metric space. Subsequently, we apply a
Graph Neural Network to propagate label information from the support set to the
query set in our score-based metric space. We test our model on the Broader
Study of Cross-Domain Few-Shot Learning (BSCD-FSL) benchmark, which includes a
range of target domains with highly varying dissimilarity to the miniImagenet
source domain. We observe significant improvements in accuracy across the 5-,
20- and 50-shot settings and on the four target domains. In terms of average accuracy, our
model outperforms previous transfer-learning methods by 5.93% and previous
meta-learning methods by 14.28%. | [
"cs.CV",
"cs.AI",
"cs.LG"
] |
In this paper, we introduce a novel conditional generative adversarial
network that creates dense 3D point clouds, with color, for assorted classes of
objects in an unsupervised manner. To overcome the difficulty of capturing
intricate details at high resolutions, we propose a point transformer that
progressively grows the network through the use of graph convolutions. The
network is composed of a leaf output layer and an initial set of branches.
Every training iteration evolves a point vector into a point cloud of
increasing resolution. After a fixed number of iterations, the number of
branches is increased by replicating the last branch. Experimental results show
that our network is capable of learning and mimicking a 3D data distribution,
and produces colored point clouds with fine details at multiple resolutions. | [
"cs.CV",
"cs.AI",
"cs.LG"
] |
Accurate object segmentation is a crucial task in the context of robotic
manipulation. However, creating sufficient annotated training data for neural
networks is particularly time-consuming and often requires manual labeling. To
this end, we propose a simple, yet robust solution for learning to segment
unknown objects grasped by a robot. Specifically, we exploit motion and
temporal cues in RGB video sequences. Using optical flow estimation we first
learn to predict segmentation masks of our given manipulator. Then, these
annotations are used in combination with motion cues to automatically
distinguish between background, manipulator and unknown, grasped object. In
contrast to existing systems our approach is fully self-supervised and
independent of precise camera calibration, 3D models or potentially imperfect
depth data. We perform a thorough comparison with alternative baselines and
approaches from literature. The object masks and views are shown to be suitable
training data for segmentation networks that generalize to novel environments
and also allow for watertight 3D reconstruction. | [
"cs.CV",
"cs.RO"
] |
In recent years, model-free methods that use deep learning have achieved
great success in many different reinforcement learning environments. Most
successful approaches focus on solving a single task, while multi-task
reinforcement learning remains an open problem. In this paper, we present a
model-based approach to deep reinforcement learning which we use to solve
different tasks simultaneously. We show that our approach not only does not
degrade but actually benefits from learning multiple tasks. For our model, we
also present a new kind of recurrent neural network, inspired by residual
networks, that decouples memory from computation, allowing it to model complex
environments that do not require large amounts of memory. | [
"cs.LG"
] |
Explaining neural network computation in terms of probabilistic/fuzzy logical
operations has attracted much attention due to its simplicity and high
interpretability. Different choices of logical operators such as AND, OR and
XOR give rise to another dimension for network optimization, and in this paper,
we study the open problem of learning a universal logical operator without
prescribing any logical operations manually. Insightful observations along
this exploration furnish deep convolutional networks with a novel logical
interpretation. | [
"cs.LG",
"stat.ML"
] |
We performed an empirical comparison of ICA and PCA algorithms by applying
them on two simulated noisy time series with varying distribution parameters
and level of noise. In general, ICA shows better results than PCA because it
takes into account higher moments of data distribution. On the other hand, PCA
remains quite sensitive to the level of correlations among signals. | [
"stat.ML",
"cs.LG"
] |
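A minimal sketch of the kind of ICA-versus-PCA comparison described above, using scikit-learn on two noisy mixed signals; the signal shapes, mixing matrix and noise level are illustrative assumptions, not the paper's simulation setup.

```python
import numpy as np
from sklearn.decomposition import FastICA, PCA

# Two independent source signals plus noise, observed through a mixing matrix.
rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)
sources = np.c_[np.sin(2 * t), np.sign(np.sin(3 * t))]          # sinusoid + square wave
observations = sources @ np.array([[1.0, 0.5], [0.5, 1.0]]).T
observations += 0.2 * rng.normal(size=observations.shape)        # additive noise

# Recover components with ICA (uses higher-order statistics) and PCA (second order only).
ica_est = FastICA(n_components=2, random_state=0).fit_transform(observations)
pca_est = PCA(n_components=2).fit_transform(observations)

def best_abs_corr(est, src):
    """Best absolute correlation of each true source with any estimated component."""
    c = np.abs(np.corrcoef(est.T, src.T)[:2, 2:])
    return c.max(axis=0)

print("ICA recovery:", best_abs_corr(ica_est, sources))
print("PCA recovery:", best_abs_corr(pca_est, sources))
```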
We study the optimal sample complexity in large-scale Reinforcement Learning
(RL) problems with policy space generalization, i.e. the agent has a prior
knowledge that the optimal policy lies in a known policy space. Existing
results show that without a generalization model, the sample complexity of an
RL algorithm will inevitably depend on the cardinalities of state space and
action space, which are intractably large in many practical problems.
To avoid such undesirable dependence on the state and action space sizes,
this paper proposes a new notion of eluder dimension for the policy space,
which characterizes the intrinsic complexity of policy learning in an arbitrary
Markov Decision Process (MDP). Using a simulator oracle, we prove a
near-optimal sample complexity upper bound that only depends linearly on the
eluder dimension. We further prove a similar regret bound in deterministic
systems without the simulator. | [
"cs.LG",
"cs.AI",
"cs.DS",
"stat.ML"
] |
Modern machine learning algorithms crucially rely on several design decisions
to achieve strong performance, making the problem of Hyperparameter
Optimization (HPO) more important than ever. Here, we combine the advantages of
the popular bandit-based HPO method Hyperband (HB) and the evolutionary search
approach of Differential Evolution (DE) to yield a new HPO method which we call
DEHB. Comprehensive results on a very broad range of HPO problems, as well as a
wide range of tabular benchmarks from neural architecture search, demonstrate
that DEHB achieves strong performance far more robustly than all previous HPO
methods we are aware of, especially for high-dimensional problems with discrete
input dimensions. For example, DEHB is up to 1000x faster than random search.
It is also efficient in computational time, conceptually simple and easy to
implement, positioning it well to become a new default HPO method. | [
"cs.LG",
"cs.NE"
] |
We introduce the forward-backward (FB) representation of the dynamics of a
reward-free Markov decision process. It provides explicit near-optimal policies
for any reward specified a posteriori. During an unsupervised phase, we use
reward-free interactions with the environment to learn two representations via
off-the-shelf deep learning methods and temporal difference (TD) learning. In
the test phase, a reward representation is estimated either from observations
or an explicit reward description (e.g., a target state). The optimal policy
for that reward is directly obtained from these representations, with no
planning. We assume access to an exploration scheme or replay buffer for the
first phase.
The unsupervised FB loss is well-principled: if training is perfect, the
policies obtained are provably optimal for any reward function. With imperfect
training, the sub-optimality is proportional to the unsupervised approximation
error. The FB representation learns long-range relationships between states and
actions, via a predictive occupancy map, without having to synthesize states as
in model-based approaches.
This is a step towards learning controllable agents in arbitrary black-box
stochastic environments. This approach compares well to goal-oriented RL
algorithms on discrete and continuous mazes, pixel-based MsPacman, and the
FetchReach virtual robot arm. We also illustrate how the agent can immediately
adapt to new tasks beyond goal-oriented RL. | [
"cs.LG",
"cs.AI",
"math.OC"
] |
A novel hybrid data-driven approach is developed for forecasting power system
parameters with the goal of increasing the efficiency of short-term forecasting
studies for non-stationary time-series. The proposed approach is based on mode
decomposition and a feature analysis of initial retrospective data using the
Hilbert-Huang transform and machine learning algorithms. The random forests and
gradient boosting trees learning techniques were examined. The decision tree
techniques were used to rank the importance of variables employed in the
forecasting models. The Mean Decrease Gini index is employed as an impurity
function. The resulting hybrid forecasting models employ the radial basis
function neural network and support vector regression. Apart from the
introduction and references, the paper is organized as follows. Section 2
presents the background and reviews several approaches for short-term
forecasting of power system parameters. Section 3 develops a hybrid machine
learning-based algorithm using the Hilbert-Huang transform for short-term
forecasting of power system parameters. Section 4 describes the decision tree
learning algorithms used to assess variable importance. Finally, the
experimental results are presented for the following electric power problems:
active power flow forecasting, electricity price forecasting, and wind speed
and direction forecasting. | [
"cs.LG",
"stat.ML",
"62M10, 91B84"
] |
As the field of remote sensing is evolving, we witness the accumulation of
information from several modalities, such as multispectral (MS), hyperspectral
(HSI), LiDAR, etc. Each of these modalities possesses its own distinct
characteristics and when combined synergistically, perform very well in the
recognition and classification tasks. However, fusing multiple modalities in
remote sensing is cumbersome due to highly disparate domains. Furthermore, the
existing methods do not facilitate cross-modal interactions. To this end, we
propose a novel transformer based fusion method for HSI and LiDAR modalities.
The model is composed of stacked auto encoders that harness the cross key-value
pairs for HSI and LiDAR, thus establishing a communication between the two
modalities, while simultaneously using the CNNs to extract the spectral and
spatial information from HSI and LiDAR. We test our model on Houston (Data
Fusion Contest - 2013) and MUUFL Gulfport datasets and achieve competitive
results. | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
Presenting context images to a viewer's peripheral vision is one of the most
effective techniques to enhance immersive visual experiences. However, most
images only present a narrow view, since the field-of-view (FoV) of standard
cameras is small. To overcome this limitation, we propose a deep learning
approach that learns to predict a 180{\deg} panoramic image from a narrow-view
image. Specifically, we design a foveated framework that applies different
strategies on near-periphery and mid-periphery regions. Two networks are
trained separately, and then are employed jointly to sequentially perform
narrow-to-90{\deg} generation and 90{\deg}-to-180{\deg} generation. The
generated outputs are then fused with their aligned inputs to produce expanded
equirectangular images for viewing. Our experimental results show that
single-view-to-panoramic image generation using deep learning is both feasible
and promising. | [
"cs.CV",
"cs.HC"
] |
Graph neural networks (GNNs) have proven to be mature enough for handling
graph-structured data on node-level graph representation learning tasks.
However, the graph pooling technique for learning expressive graph-level
representation is critical yet still challenging. Existing pooling methods
either struggle to capture the local substructure or fail to effectively
utilize high-order dependency, thus diminishing the expression capability. In
this paper we propose HAP, a hierarchical graph-level representation learning
framework, which is adaptively sensitive to graph structures, i.e., HAP
clusters local substructures while incorporating high-order dependencies. HAP
utilizes a novel cross-level attention mechanism, MOA, to naturally focus more
on the close neighborhood while effectively capturing higher-order dependencies
that may contain crucial information. It also learns a global graph content,
GCont, that extracts the graph pattern properties so that the pre- and
post-coarsening graph content remains stable, thus providing global guidance in
graph coarsening. This innovation also facilitates generalization across graphs
with the same form of features. Extensive experiments on fourteen datasets show
that HAP significantly outperforms twelve popular graph pooling methods on the
graph classification task with a maximum accuracy improvement of 22.79%, and
exceeds the performance of state-of-the-art graph matching and graph similarity
learning algorithms by over 3.5% and 16.7%. | [
"cs.LG",
"cs.NI",
"cs.SI"
] |
Many popular variants of graph neural networks (GNNs) that are capable of
handling multi-relational graphs may suffer from vanishing gradients. In this
work, we propose a novel GNN architecture based on the Gated Graph Neural
Network with an improved ability to handle long-range dependencies in
multi-relational graphs. An experimental analysis on different synthetic tasks
demonstrates that the proposed architecture outperforms several popular GNN
models. | [
"cs.LG",
"stat.ML"
] |
Current GAN-based art generation methods produce unoriginal artwork due to
their dependence on conditional input. Here, we propose Sketch-And-Paint GAN
(SAPGAN), the first model which generates Chinese landscape paintings from end
to end, without conditional input. SAPGAN is composed of two GANs: SketchGAN
for generation of edge maps, and PaintGAN for subsequent edge-to-painting
translation. Our model is trained on a new dataset of traditional Chinese
landscape paintings never before used for generative research. A 242-person
Visual Turing Test study reveals that SAPGAN paintings are mistaken as human
artwork with 55% frequency, significantly outperforming paintings from baseline
GANs. Our work lays a groundwork for truly machine-original art generation. | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
Overparameterized networks trained to convergence have shown impressive
performance in domains such as computer vision and natural language processing.
Pushing the state of the art on salient tasks within these domains corresponds to
these models becoming larger and more difficult for machine learning
practitioners to use given the increasing memory and storage requirements, not
to mention the larger carbon footprint. Thus, in recent years there has been a
resurgence in model compression techniques, particularly for deep convolutional
neural networks and self-attention based networks such as the Transformer.
Hence, this paper provides a timely overview of both old and current
compression techniques for deep neural networks, including pruning,
quantization, tensor decomposition, knowledge distillation and combinations
thereof.
We assume a basic familiarity with deep learning architectures\footnote{For
an introduction to deep learning, see ~\citet{goodfellow2016deep}}, namely,
Recurrent Neural
Networks~\citep[(RNNs)][]{rumelhart1985learning,hochreiter1997long},
Convolutional Neural Networks~\citep{fukushima1980neocognitron}~\footnote{For
an up-to-date overview, see~\citet{khan2019survey}} and Self-Attention based
networks~\citep{vaswani2017attention}\footnote{For a general overview of
self-attention networks, see ~\citet{chaudhari2019attentive}.},\footnote{For
more detail and their use in natural language processing,
see~\citet{hu2019introductory}}. Most of the papers discussed are proposed in
the context of at least one of these DNN architectures. | [
"cs.LG",
"stat.ML"
] |
In this paper we introduce DeepCrawl, a fully-playable Roguelike prototype
for iOS and Android in which all agents are controlled by policy networks
trained using Deep Reinforcement Learning (DRL). Our aim is to understand
whether recent advances in DRL can be used to develop convincing behavioral
models for non-player characters in videogames. We begin with an analysis of
requirements that such an AI system should satisfy in order to be practically
applicable in video game development, and identify the elements of the DRL
model used in the DeepCrawl prototype. The successes and limitations of
DeepCrawl are documented through a series of playability tests performed on the
final game. We believe that the techniques we propose offer insight into
innovative new avenues for the development of behaviors for non-player
characters in video games, as they offer the potential to overcome critical
issues with | [
"cs.LG",
"cs.AI"
] |
The landmark achievements of AlphaGo Zero have created great research
interest into self-play in reinforcement learning. In self-play, Monte Carlo
Tree Search is used to train a deep neural network that is then used in tree
searches. Training itself is governed by many hyperparameters. There has been
surprisingly little research on design choices for hyper-parameter values and
loss-functions, presumably because of the prohibitive computational cost to
explore the parameter space. In this paper, we investigate 12 hyper-parameters
in an AlphaZero-like self-play algorithm and evaluate how these parameters
contribute to training. We use small games to achieve meaningful exploration
with moderate computational effort. The experimental results show that training
is highly sensitive to hyper-parameter choices. Through multi-objective
analysis we identify 4 important hyper-parameters to further assess. To start,
we find surprising results where too much training can sometimes lead to lower
performance. Our main result is that the number of self-play iterations
subsumes MCTS-search simulations, game-episodes, and training epochs. The
intuition is that these three increase together as self-play iterations
increase, and that increasing them individually is sub-optimal. A consequence
of our experiments is a direct recommendation for setting hyper-parameter
values in self-play: the overarching outer-loop of self-play iterations should
be maximized, in favor of the three inner-loop hyper-parameters, which should
be set at lower values. A secondary result of our experiments concerns the
choice of optimization goals, for which we also provide recommendations. | [
"cs.LG",
"cs.AI",
"cs.NE"
] |
Feature generation is an open topic of investigation in graph machine
learning. In this paper, we study the use of graph homomorphism density
features as a scalable alternative to homomorphism numbers which retain similar
theoretical properties and ability to take into account inductive bias. For
this, we propose a high-performance implementation of a simple sampling
algorithm which computes additive approximations of homomorphism densities. In
the context of graph machine learning, we demonstrate in experiments that
simple linear models trained on sample homomorphism densities can achieve
performance comparable to graph neural networks on standard graph
classification datasets. Finally, we show in experiments on synthetic data that
this algorithm scales to very large graphs when implemented with Bloom filters. | [
"cs.LG",
"cs.DS",
"I.5.1; I.5.2"
] |
Stain variation is a phenomenon observed when distinct pathology laboratories
stain tissue slides that exhibit similar but not identical color appearance.
Due to this color shift between laboratories, convolutional neural networks
(CNNs) trained with images from one lab often underperform on unseen images
from the other lab. Several techniques have been proposed to reduce the
generalization error, mainly grouped into two categories: stain color
augmentation and stain color normalization. The former simulates a wide variety
of realistic stain variations during training, producing stain-invariant CNNs.
The latter aims to match training and test color distributions in order to
reduce stain variation. For the first time, we compared some of these
techniques and quantified their effect on CNN classification performance using
a heterogeneous dataset of hematoxylin and eosin histopathology images from 4
organs and 9 pathology laboratories. Additionally, we propose a novel
unsupervised method to perform stain color normalization using a neural
network. Based on our experimental results, we provide practical guidelines on
how to use stain color augmentation and stain color normalization in future
computational pathology applications. | [
"cs.CV"
] |
Generative Adversarial Networks (GANs) have demonstrated unprecedented
success in various image generation tasks. The encouraging results, however,
come at the price of a cumbersome training process, during which the generator
and discriminator are alternately updated in two stages. In this paper, we
investigate a general training scheme that enables training GANs efficiently in
only one stage. Based on the adversarial losses of the generator and
discriminator, we categorize GANs into two classes, Symmetric GANs and
Asymmetric GANs, and introduce a novel gradient decomposition method to unify
the two, allowing us to train both classes in one stage and hence alleviate the
training effort. We also computationally analyze the efficiency of the proposed
method, and empirically demonstrate that, the proposed method yields a solid
$1.5\times$ acceleration across various datasets and network architectures.
Furthermore, we show that the proposed method is readily applicable to other
adversarial-training scenarios, such as data-free knowledge distillation. The
code is available at https://github.com/zju-vipa/OSGAN. | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
Generating portrait images by controlling the motions of existing faces is an
important task of great consequence to social media industries. For easy use
and intuitive control, semantically meaningful and fully disentangled
parameters should be used as modifications. However, many existing techniques
do not provide such fine-grained controls or use indirect editing methods,
i.e., mimicking the motions of other individuals. In this paper, a Portrait Image Neural
Renderer (PIRenderer) is proposed to control the face motions with the
parameters of three-dimensional morphable face models (3DMMs). The proposed
model can generate photo-realistic portrait images with accurate movements
according to intuitive modifications. Experiments on both direct and indirect
editing tasks demonstrate the superiority of this model. Meanwhile, we further
extend this model to tackle the audio-driven facial reenactment task by
extracting sequential motions from audio inputs. We show that our model can
generate coherent videos with convincing movements from only a single reference
image and a driving audio stream. Our source code is available at
https://github.com/RenYurui/PIRender. | [
"cs.CV",
"cs.AI"
] |
Convolutional neural networks (CNNs) have become increasingly popular for
solving a variety of computer vision tasks, ranging from image classification
to image segmentation. Recently, autonomous vehicles have created a demand for
depth information, which is often obtained using hardware sensors such as Light
detection and ranging (LIDAR). Although it can provide precise distance
measurements, most LIDARs are still far too expensive to sell in mass-produced
consumer vehicles, which has motivated methods to generate depth information
from commodity automotive sensors like cameras.
In this paper, we propose an approach called Deep Sensor Cloning (DSC). The
idea is to use Convolutional Neural Networks in conjunction with inexpensive
sensors to replicate the 3D point-clouds that are created by expensive LIDARs.
To accomplish this, we develop a new dataset (DSDepth) and a new family of CNN
architectures (DSCnets). While previous tasks such as KITTI depth prediction
use interpolated RGB-D images as ground truth for training, we instead use
DSCnets to directly predict LIDAR point-clouds. When we compare the output of
our models to a $75,000 LIDAR, we find that our most accurate DSCnet achieves a
relative error of 5.77% using a single camera and 4.69% using stereo cameras. | [
"cs.CV"
] |
As one of the prevalent components, Feature Pyramid Network (FPN) is widely
used in the current object detection models to improve the performance of
multi-scale detection. However, its interaction is still in a local and lossy
manner, thus limiting the representation power. In this paper, to simulate a
global view of human vision in object detection and address the inherent
defects of interaction mode in FPN, we construct a novel architecture termed
Content-Augmented Feature Pyramid Network (CA-FPN). Unlike the vanilla FPN,
which fuses features within a local receptive field, CA-FPN can adaptively
aggregate similar features from a global view. It is equipped with a global
content extraction module and light linear spatial transformers. The former
allows to extract multi-scale context information and the latter can deeply
combine the global content extraction module with the vanilla FPN using the
linearized attention function, which is designed to reduce model complexity.
Furthermore, CA-FPN can be readily plugged into existing FPN-based models.
Extensive experiments on the challenging COCO and PASCAL VOC object detection
datasets demonstrated that our CA-FPN significantly outperforms competitive
FPN-based detectors without bells and whistles. When plugging CA-FPN into
Cascade R-CNN framework built upon a standard ResNet-50 backbone, our method
can achieve 44.8 AP on COCO mini-val. Its performance surpasses the previous
state-of-the-art by 1.5 AP, demonstrating its potential for practical application. | [
"cs.CV",
"cs.AI",
"68T45",
"I.2.10"
] |
VBM3D is an extension to video of the well known image denoising algorithm
BM3D, which takes advantage of the sparse representation of stacks of similar
patches in a transform domain. The extension is rather straightforward: the
similar 2D patches are taken from a spatio-temporal neighborhood which includes
neighboring frames. In spite of its simplicity, the algorithm offers a good
trade-off between denoising performance and computational complexity. In this
work we revisit this method, providing an open-source C++ implementation
reproducing the results. A detailed description is given and the choice of
parameters is thoroughly discussed. Furthermore, we discuss several extensions
of the original algorithm: (1) a multi-scale implementation, (2) the use of 3D
patches, (3) the use of optical flow to guide the patch search. These
extensions make it possible to obtain results that are competitive with even the most
recent state of the art. | [
"cs.CV"
] |
Despite the recent successes of deep learning, such models are still far from
some human abilities like learning from few examples, reasoning and explaining
decisions. In this paper, we focus on organ annotation in medical images and we
introduce a reasoning framework that is based on learning fuzzy relations on a
small dataset for generating explanations. Given a catalogue of relations, it
efficiently induces the most relevant relations and combines them for building
constraints in order to both solve the organ annotation task and generate
explanations. We test our approach on a publicly available dataset of medical
images where several organs are already segmented. A demonstration of our model
is proposed with an example of explained annotations. It was trained on a small
training set containing as few as a couple of examples. | [
"cs.CV",
"cs.AI",
"cs.LG",
"eess.IV"
] |
The problem of Offline Policy Evaluation (OPE) in Reinforcement Learning (RL)
is a critical step towards applying RL in real-life applications. Existing work
on OPE mostly focuses on evaluating a fixed target policy $\pi$, which does not
provide useful bounds for offline policy learning as $\pi$ will then be
data-dependent. We address this problem by simultaneously evaluating all
policies in a policy class $\Pi$ -- uniform convergence in OPE -- and obtain
nearly optimal error bounds for a number of global / local policy classes. Our
results imply that the model-based planning achieves an optimal episode
complexity of $\widetilde{O}(H^3/d_m\epsilon^2)$ in identifying an
$\epsilon$-optimal policy under the time-inhomogeneous episodic MDP model ($H$
is the planning horizon, $d_m$ is a quantity that reflects the exploration of
the logging policy $\mu$). To the best of our knowledge, this is the first time
the optimal rate is shown to be possible for the offline RL setting and the
paper is the first that systematically investigates the uniform convergence in
OPE. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
Auto-regressive sequence-to-sequence models with attention mechanism have
achieved state-of-the-art performance in many tasks such as machine translation
and speech synthesis. These models can be difficult to train. The standard
approach, teacher forcing, guides a model with reference output history during
training. The problem is that the model is unlikely to recover from its
mistakes during inference, where the reference output is replaced by generated
output. Several approaches deal with this problem, largely by guiding the model
with generated output history. To make training stable, these approaches often
require a heuristic schedule or an auxiliary classifier. This paper introduces
attention forcing, which guides the model with generated output history and
reference attention. This approach can train the model to recover from its
mistakes, in a stable fashion, without the need for a schedule or a classifier.
In addition, it allows the model to generate output sequences aligned with the
references, which can be important for cascaded systems like many speech
synthesis systems. Experiments on speech synthesis show that attention forcing
yields significant performance gain. Experiments on machine translation show
that for tasks where various re-orderings of the output are valid, guiding the
model with generated output history is challenging, while guiding the model
with reference attention is beneficial. | [
"cs.LG",
"cs.CL",
"eess.AS",
"stat.ML",
"I.2"
] |
We demonstrate a library for the integration of domain knowledge in deep
learning architectures. Using this library, the structure of the data is
expressed symbolically via graph declarations and the logical constraints over
outputs or latent variables can be seamlessly added to the deep models. The
domain knowledge can be defined explicitly, which improves the models'
explainability in addition to the performance and generalizability in the
low-data regime. Several approaches for such an integration of symbolic and
sub-symbolic models have been introduced; however, there is no library to
facilitate the programming for such an integration in a generic way while
various underlying algorithms can be used. Our library aims to simplify
programming for such an integration in both training and inference phases while
separating the knowledge representation from learning algorithms. We showcase
various NLP benchmark tasks and beyond. The framework is publicly available at
Github(https://github.com/HLR/DomiKnowS). | [
"cs.LG",
"cs.AI",
"cs.CL",
"I.2"
] |
Growing advancements in reinforcement learning have led to progress in
control theory. Reinforcement learning has effectively solved the inverted
pendulum problem and more recently the double inverted pendulum problem. In
reinforcement learning, our agents learn by interacting with the control system
with the goal of maximizing rewards. In this paper, we explore three such
reward functions in the cart position problem. This paper concludes that a
discontinuous reward function that gives non-zero rewards to agents only if
they are within a given distance from the desired position gives the best
results. | [
"cs.LG",
"cs.AI",
"cs.RO",
"math.OC"
] |
Applying reinforcement learning to robotic systems poses a number of
challenging problems. A key requirement is the ability to handle continuous
state and action spaces while remaining within a limited time and resource
budget. Additionally, for safe operation, the system must make robust decisions
under hard constraints. To address these challenges, we propose a model based
approach that combines Gaussian Process regression and Receding Horizon
Control. Using sparse spectrum Gaussian Processes, we extend previous work by
updating the dynamics model incrementally from a stream of sensory data. This
results in an agent that can learn and plan in real-time under non-linear
constraints. We test our approach on a cart pole swing-up environment and
demonstrate the benefits of online learning on an autonomous racing task. The
environment's dynamics are learned from limited training data and can be reused
in new task instances without retraining. | [
"cs.LG",
"stat.ML"
] |
We tackle object category discovery, which is the problem of discovering and
localizing novel objects in a large unlabeled dataset. While existing methods
show results on datasets with less cluttered scenes and fewer object instances
per image, we present our results on the challenging COCO dataset. Moreover, we
argue that, rather than discovering new categories from scratch, discovery
algorithms can benefit from identifying what is already known and focusing
their attention on the unknown. We propose a method that exploits prior
knowledge about certain object types to discover new categories by leveraging
two memory modules, namely Working and Semantic memory. We show the performance
of our detector on the COCO minival dataset to demonstrate its in-the-wild
capabilities. | [
"cs.CV",
"cs.AI"
] |
This report describes the design, implementation, evaluation and original
enhancements to the Live-Wire method for 2D and 3D image segmentation.
Live-Wire 2D employs a semi-automatic paradigm; the user is asked to select a
few boundary points of the object to segment, to steer the process in the right
direction, while the result is displayed in real time. In our implementation
segmentation is extended to three dimensions by performing this process on a
slice-by-slice basis. The user's time and involvement are further reduced by
allowing them to specify object contours in planes orthogonal to the slices. If
these planes are chosen strategically, Live-Wire 3D can perform 2D segmentation
in the plane of each slice automatically. This report also proposes two
improvements to the original method, path heating and a new graph edge feature
function based on variance of path properties along the boundary. We show that
these improvements lead to up to a 33% reduction in interaction with the user, and
improved delineation in presence of strong interfering edges. | [
"cs.CV"
] |
Breast ultrasound (BUS) image segmentation is challenging and critical for
BUS Computer-Aided Diagnosis (CAD) systems. Many BUS segmentation approaches
have been proposed in the last two decades, but the performances of most
approaches have been assessed using relatively small private datasets with
differ-ent quantitative metrics, which result in discrepancy in performance
comparison. Therefore, there is a pressing need for building a benchmark to
compare existing methods using a public dataset objectively, and to determine
the performance of the best breast tumor segmentation algorithm available today
and to investigate what segmentation strategies are valuable in clinical
practice and theoretical study. In this work, we will publish a B-mode BUS
image segmentation benchmark (BUSIS) with 562 images and compare the
performance of five state-of-the-art BUS segmentation methods quantitatively. | [
"cs.CV"
] |
Image deblurring has seen a great improvement with the development of deep
neural networks. In practice, however, blurry images often suffer from
additional degradations such as downscaling and compression. To address these
challenges, we propose an Enhanced Deep Pyramid Network (EDPN) for blurry image
restoration from multiple degradations, by fully exploiting the self- and
cross-scale similarities in the degraded image. Specifically, we design two
pyramid-based modules, i.e., the pyramid progressive transfer (PPT) module and
the pyramid self-attention (PSA) module, as the main components of the proposed
network. By taking several replicated blurry images as inputs, the PPT module
transfers both self- and cross-scale similarity information from the same
degraded image in a progressive manner. Then, the PSA module fuses the above
transferred features for subsequent restoration using self- and
spatial-attention mechanisms. Experimental results demonstrate that our method
significantly outperforms existing solutions for blurry image super-resolution
and blurry image deblocking. In the NTIRE 2021 Image Deblurring Challenge, EDPN
achieves the best PSNR/SSIM/LPIPS scores in Track 1 (Low Resolution) and the
best SSIM/LPIPS scores in Track 2 (JPEG Artifacts). | [
"cs.CV"
] |
Lane change prediction of surrounding vehicles is a key building block of
path planning. The focus has been on increasing the accuracy of prediction by
posing it purely as a function estimation problem at the cost of model
understandability. However, the efficacy of any lane change prediction model
can be improved when both corner and failure cases are humanly understandable.
We propose an attention-based recurrent model to tackle both understandability
and prediction quality. We also propose metrics which reflect the discomfort
felt by the driver. We show encouraging results on a publicly available dataset
and proprietary fleet data. | [
"cs.CV",
"cs.LG"
] |
Trust and credibility in machine learning models is bolstered by the ability
of a model to explain its decisions. While explainability of deep learning
models is a well-known challenge, a further challenge is clarity of the
explanation itself, which must be interpreted by downstream users. Layer-wise
Relevance Propagation (LRP), an established explainability technique developed
for deep models in computer vision, provides intuitive human-readable heat
maps of input images. We present the novel application of LRP for the first
time with structured datasets using a deep neural network (1D-CNN), for Credit
Card Fraud detection and Telecom Customer Churn prediction datasets. We show
how LRP is more effective than traditional explainability concepts of Local
Interpretable Model-agnostic Explanations (LIME) and Shapley Additive
Explanations (SHAP) for explainability. This effectiveness is both local to a
sample level and holistic over the whole testing set. We also discuss the
significant computational time advantage of LRP (1-2s) over LIME (22s) and SHAP
(108s), and thus its potential for real-time application scenarios. In
addition, our validation of LRP has highlighted features for enhancing model
performance, thus opening up a new area of research of using XAI as an approach
for feature subset selection. | [
"cs.LG",
"cs.CV"
] |
Unsupervised anomaly detection aims to identify anomalous samples from highly
complex and unstructured data, which is pervasive in both fundamental research
and industrial applications. However, most existing methods neglect the complex
correlation among data samples, which is important for capturing normal
patterns from which the abnormal ones deviate. In this paper, we propose a
method of Correlation aware unsupervised Anomaly detection via Deep Gaussian
Mixture Model (CADGMM), which captures the complex correlation among data
points for high-quality low-dimensional representation learning. Specifically,
the relations among data samples are first modeled in the form of a graph
structure, in which a node denotes a sample and an edge denotes the
correlation between two samples from the feature space. Then, a dual-encoder
that consists of a graph encoder and a feature encoder, is employed to encode
both the feature and correlation information of samples into the
low-dimensional latent space jointly, followed by a decoder for data
reconstruction. Finally, a separate estimation network as a Gaussian Mixture
Model is utilized to estimate the density of the learned latent vector, and the
anomalies can be detected by measuring the energy of the samples. Extensive
experiments on real-world datasets demonstrate the effectiveness of the
proposed method. | [
"cs.LG",
"stat.ML",
"68T30",
"I.5.4"
] |
Patients in the intensive care unit (ICU) require constant and close
supervision. To assist clinical staff in this task, hospitals use monitoring
systems that trigger audiovisual alarms if their algorithms indicate that a
patient's condition may be worsening. However, current monitoring systems are
extremely sensitive to movement artefacts and technical errors. As a result,
they typically trigger hundreds to thousands of false alarms per patient per
day - drowning the important alarms in noise and adding to the exhaustion of
clinical staff. In this setting, data is abundantly available, but obtaining
trustworthy annotations by experts is laborious and expensive. We frame the
problem of false alarm reduction from multivariate time series as a
machine-learning task and address it with a novel multitask network
architecture that utilises distant supervision through multiple related
auxiliary tasks in order to reduce the number of expensive labels required for
training. We show that our approach leads to significant improvements over
several state-of-the-art baselines on real-world ICU data and provide new
insights on the importance of task selection and architectural choices in
distantly supervised multitask learning. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
Transfer learning with pre-training on large-scale datasets has played an
increasingly significant role in computer vision and natural language
processing recently. However, as there exist numerous application scenarios
that have distinctive demands such as certain latency constraints and
specialized data distributions, it is prohibitively expensive to take advantage
of large-scale pre-training for per-task requirements. In this paper, we focus
on the area of object detection and present a transfer learning system named
GAIA, which could automatically and efficiently give birth to customized
solutions according to heterogeneous downstream needs. GAIA is capable of
providing powerful pre-trained weights, selecting models that conform to
downstream demands such as latency constraints and specified data domains, and
collecting relevant data for practitioners who have very few datapoints for
their tasks. With GAIA, we achieve promising results on COCO, Objects365, Open
Images, Caltech, CityPersons, and UODB which is a collection of datasets
including KITTI, VOC, WiderFace, DOTA, Clipart, Comic, and more. Taking COCO as
an example, GAIA is able to efficiently produce models covering a wide range of
latency from 16ms to 53ms, and yields AP from 38.2 to 46.5 without bells and
whistles. To benefit every practitioner in the community of object detection, GAIA
is released at https://github.com/GAIA-vision. | [
"cs.CV"
] |
In recent years, there has been growing focus on the study of automated
recommender systems. Music recommendation systems serve as a prominent domain
for such works, both from an academic and a commercial perspective. A
fundamental aspect of music perception is that music is experienced in temporal
context and in sequence. In this work we present DJ-MC, a novel
reinforcement-learning framework for music recommendation that does not
recommend songs individually but rather song sequences, or playlists, based on
a model of preferences for both songs and song transitions. The model is
learned online and is uniquely adapted for each listener. To reduce exploration
time, DJ-MC exploits user feedback to initialize a model, which it subsequently
updates by reinforcement. We evaluate our framework with human participants
using both real song and playlist data. Our results indicate that DJ-MC's
ability to recommend sequences of songs provides a significant improvement over
more straightforward approaches, which do not take transitions into account. | [
"cs.LG"
] |
This paper addresses the trade-off between Accuracy and Transparency for deep
learning applied to sports analytics. Neural nets achieve great predictive
accuracy through deep learning, and are popular in sports analytics. But it is
hard to interpret a neural net model and harder still to extract actionable
insights from the knowledge implicit in it. Therefore, we built a simple and
transparent model that mimics the output of the original deep learning model
and represents the learned knowledge in an explicit interpretable way. Our
mimic model is a linear model tree, which combines a collection of linear
models with a regression-tree structure. The tree version of a neural network
achieves high fidelity, explains itself, and produces insights for expert
stakeholders such as athletes and coaches. We propose and compare several
scalable model tree learning heuristics to address the computational challenge
from datasets with millions of data points. | [
"cs.LG",
"stat.ML"
] |
Despite increasing efforts on universal representations for visual
recognition, few have addressed object detection. In this paper, we develop an
effective and efficient universal object detection system that is capable of
working on various image domains, from human faces and traffic signs to medical
CT images. Unlike multi-domain models, this universal model does not require
prior knowledge of the domain of interest. This is achieved by the introduction
of a new family of adaptation layers, based on the principles of squeeze and
excitation, and a new domain-attention mechanism. In the proposed universal
detector, all parameters and computations are shared across domains, and a
single network processes all domains all the time. Experiments, on a newly
established universal object detection benchmark of 11 diverse datasets, show
that the proposed detector outperforms a bank of individual detectors, a
multi-domain detector, and a baseline universal detector, with a 1.3x parameter
increase over a single-domain baseline detector. The code and benchmark will be
released at http://www.svcl.ucsd.edu/projects/universal-detection/. | [
"cs.CV"
] |
We extend the framework of Classification with Costly Features (CwCF) that
works with samples of fixed dimensions to trees of varying depth and breadth
(similar to a JSON/XML file). In this setting, the sample is a tree - sets of
sets of features. Individually for each sample, the task is to sequentially
select informative features that help the classification. Each feature has a
real-valued cost, and the objective is to maximize accuracy while minimizing
the total cost. The process is modeled as an MDP where the states represent the
acquired features, and the actions select unknown features. We present a
specialized neural network architecture trained through deep reinforcement
learning that naturally fits the data and directly selects features in the
tree. We demonstrate our method in seven datasets and compare it to two
baselines. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
We introduce Geomstats, an open-source Python toolbox for computations and
statistics on nonlinear manifolds, such as hyperbolic spaces, spaces of
symmetric positive definite matrices, Lie groups of transformations, and many
more. We provide object-oriented and extensively unit-tested implementations.
Among others, manifolds come equipped with families of Riemannian metrics, with
associated exponential and logarithmic maps, geodesics and parallel transport.
Statistics and learning algorithms provide methods for estimation, clustering
and dimension reduction on manifolds. All associated operations are vectorized
for batch computation and provide support for different execution backends,
namely NumPy, PyTorch and TensorFlow, enabling GPU acceleration. This paper
presents the package, compares it with related libraries and provides relevant
code examples. We show that Geomstats provides reliable building blocks to
foster research in differential geometry and statistics, and to democratize the
use of Riemannian geometry in machine learning applications. The source code is
freely available under the MIT license at \url{geomstats.ai}. | [
"cs.LG",
"cs.MS"
] |
Learning an effective representation of 3D point clouds requires a good
metric to measure the discrepancy between two 3D point sets, which is
non-trivial due to their irregularity. Most of the previous works resort to
using the Chamfer discrepancy or Earth Mover's distance, but those metrics are
either ineffective in measuring the differences between point clouds or
computationally expensive. In this paper, we conduct a systematic study with
extensive experiments on distance metrics for 3D point clouds. From this study,
we propose to use sliced Wasserstein distance and its variants for learning
representations of 3D point clouds. In addition, we introduce a new algorithm
to estimate sliced Wasserstein distance that guarantees that the estimated
value is close enough to the true one. Experiments show that the sliced
Wasserstein distance and its variants allow the neural network to learn a more
efficient representation compared to the Chamfer discrepancy. We demonstrate
the efficiency of the sliced Wasserstein metric and its variants on several
tasks in 3D computer vision including training a point cloud autoencoder,
generative modeling, transfer learning, and point cloud registration. | [
"cs.CV"
] |
The major approaches of transfer learning in computer vision have tried to
adapt the source domain to the target domain one-to-one. However, this scenario
is difficult to apply to real applications such as video surveillance systems.
As those systems have many cameras installed at each location regarded as
source domains, it is difficult to identify the proper source domain. In this
paper, we introduce a new transfer learning scenario that has various source
domains and one target domain, assuming video surveillance system integration.
Also, we propose a novel method for automatically producing a high accuracy
model by fusing models trained at various source domains. In particular, we
show how to apply a gating network to fuse source domains for object detection
tasks, which is a new approach. We demonstrate the effectiveness of our method
through experiments on traffic surveillance datasets. | [
"cs.CV"
] |
In this paper, we propose an effective global relation learning algorithm to
recommend an appropriate location of a building unit for in-game customization
of residential home complex. Given a construction layout, we propose a visual
context-aware graph generation network that learns the implicit global
relations among the scene components and infers the location of a new building
unit. The proposed network takes as input the scene graph and the corresponding
top-view depth image. It provides the location recommendations for a
newly-added building unit by learning an auto-regressive edge distribution
conditioned on existing scenes. We also introduce a global graph-image matching
loss to enhance the awareness of essential geometry semantics of the site.
Qualitative and quantitative experiments demonstrate that the recommended
location well reflects the implicit spatial rules of components in the
residential estates, and it is instructive and practical to locate the building
units in the 3D scene of the complex construction. | [
"cs.CV"
] |
This paper proposes a human-aware deblurring model that disentangles the
motion blur between foreground (FG) humans and background (BG). The proposed
model is based on a triple-branch encoder-decoder architecture. The first two
branches are learned for sharpening FG humans and BG details, respectively;
while the third one produces global, harmonious results by comprehensively
fusing multi-scale deblurring information from the two domains. The proposed
model is further endowed with a supervised, human-aware attention mechanism in
an end-to-end fashion. It learns a soft mask that encodes FG human information
and explicitly drives the FG/BG decoder-branches to focus on their specific
domains. To further benefit the research towards Human-aware Image Deblurring,
we introduce a large-scale dataset, named HIDE, which consists of 8,422 blurry
and sharp image pairs with 65,784 densely annotated FG human bounding boxes.
HIDE is specifically built to span a broad range of scenes, human object sizes,
motion patterns, and background complexities. Extensive experiments on public
benchmarks and our dataset demonstrate that our model performs favorably
against the state-of-the-art motion deblurring methods, especially in capturing
semantic details. | [
"cs.CV"
] |
Due to the world's demand for security systems, biometrics can be seen as an
important topic of research in computer vision. One of the biometric forms that
has been gaining attention is the recognition based on sclera. The initial and
paramount step for performing this type of recognition is the segmentation of
the region of interest, i.e. the sclera. In this context, two approaches for
such task based on the Fully Convolutional Network (FCN) and on Generative
Adversarial Network (GAN) are introduced in this work. FCN is similar to a
common convolutional neural network; however, the fully connected layers (i.e.,
the classification layers) are removed from the end of the network and the
output is generated by combining the output of pooling layers from different
convolutional ones. The GAN is based on game theory, where we have two
networks competing with each other to generate the best segmentation. In order
to perform fair comparison with baselines and quantitative and objective
evaluations of the proposed approaches, we provide to the scientific community
new 1,300 manually segmented images from two databases. The experiments are
performed on the UBIRIS.v2 and MICHE databases and the best performing
configurations of our propositions achieved F-score's measures of 87.48% and
88.32%, respectively. | [
"cs.CV"
] |
Gatys et al. recently introduced a neural algorithm that renders a content
image in the style of another image, achieving so-called style transfer.
However, their framework requires a slow iterative optimization process, which
limits its practical application. Fast approximations with feed-forward neural
networks have been proposed to speed up neural style transfer. Unfortunately,
the speed improvement comes at a cost: the network is usually tied to a fixed
set of styles and cannot adapt to arbitrary new styles. In this paper, we
present a simple yet effective approach that for the first time enables
arbitrary style transfer in real-time. At the heart of our method is a novel
adaptive instance normalization (AdaIN) layer that aligns the mean and variance
of the content features with those of the style features. Our method achieves
speed comparable to the fastest existing approach, without the restriction to a
pre-defined set of styles. In addition, our approach allows flexible user
controls such as content-style trade-off, style interpolation, color & spatial
controls, all using a single feed-forward neural network. | [
"cs.CV"
] |
We consider log-supermodular models on binary variables, which are
probabilistic models with negative log-densities which are submodular. These
models provide probabilistic interpretations of common combinatorial
optimization tasks such as image segmentation. In this paper, we focus
primarily on parameter estimation in the models from known upper-bounds on the
intractable log-partition function. We show that the bound based on separable
optimization on the base polytope of the submodular function is always inferior
to a bound based on "perturb-and-MAP" ideas. Then, to learn parameters, given
that our approximation of the log-partition function is an expectation (over
our own randomization), we use a stochastic subgradient technique to maximize a
lower-bound on the log-likelihood. This can also be extended to conditional
maximum likelihood. We illustrate our new results in a set of experiments in
binary image denoising, where we highlight the flexibility of a probabilistic
model to learn with missing data. | [
"stat.ML",
"cs.LG"
] |
We propose a unified approach for bottom-up hierarchical image segmentation
and object proposal generation for recognition, called Multiscale Combinatorial
Grouping (MCG). For this purpose, we first develop a fast normalized cuts
algorithm. We then propose a high-performance hierarchical segmenter that makes
effective use of multiscale information. Finally, we propose a grouping
strategy that combines our multiscale regions into highly-accurate object
proposals by exploring efficiently their combinatorial space. We also present
Single-scale Combinatorial Grouping (SCG), a faster version of MCG that
produces competitive proposals in under five seconds per image. We conduct an
extensive and comprehensive empirical validation on the BSDS500, SegVOC12, SBD,
and COCO datasets, showing that MCG produces state-of-the-art contours,
hierarchical regions, and object proposals. | [
"cs.CV"
] |
Light field imaging involves capturing both angular and spatial distribution
of light; it enables new capabilities, such as post-capture digital refocusing,
camera aperture adjustment, perspective shift, and depth estimation. Micro-lens
array (MLA) based light field cameras provide a cost-effective approach to
light field imaging. There are two main limitations of MLA-based light field
cameras: low spatial resolution and narrow baseline. While low spatial
resolution limits the general purpose use and applicability of light field
cameras, narrow baseline limits the depth estimation range and accuracy. In
this paper, we present a hybrid stereo imaging system that includes a light
field camera and a regular camera. The hybrid system addresses both spatial
resolution and narrow baseline issues of the MLA-based light field cameras
while preserving light field imaging capabilities. | [
"cs.CV"
] |
The majority of existing speech emotion recognition research focuses on
automatic emotion detection using training and testing data from the same corpus
collected under the same conditions. The performance of such systems has been
shown to drop significantly in cross-corpus and cross-language scenarios. To
address the problem, this paper exploits a transfer learning technique, novel
in cross-language and cross-corpus scenarios, to improve the performance of
speech emotion recognition systems. Evaluations on five different
corpora in three different languages show that Deep Belief Networks (DBNs)
offer better accuracy than previous approaches on cross-corpus emotion
recognition, relative to a Sparse Autoencoder and SVM baseline system. Results
also suggest that using a large number of languages for training and using a
small fraction of the target data in training can significantly boost accuracy
compared with the baseline, even for corpora with limited training examples. | [
"cs.CV",
"cs.CL"
] |
Deep Bregman divergence measures the divergence of data points using neural
networks, which goes beyond Euclidean distance and is capable of capturing
divergence over distributions. In this paper, we propose deep Bregman divergences for
contrastive learning of visual representation and we aim to enhance contrastive
loss used in self-supervised learning by training additional networks based on
functional Bregman divergence. In contrast to the conventional contrastive
learning methods which are solely based on divergences between single points,
our framework can capture the divergence between distributions which improves
the quality of learned representation. By combining conventional contrastive
loss with the proposed divergence loss, our method outperforms baseline and
most of the previous methods for self-supervised and semi-supervised learning on
multiple classification and object detection tasks and datasets. The source
code for the method and all experiments is available in the supplementary material. | [
"cs.CV",
"cs.AI",
"cs.LG"
] |
Object detection is one of the most active areas in computer vision, which
has made significant improvement in recent years. Current state-of-the-art
object detection methods mostly adhere to the framework of regions with
convolutional neural network (R-CNN) and only use local appearance features
inside object bounding boxes. Since these approaches ignore the contextual
information around the object proposals, the outcome of these detectors may
generate a semantically incoherent interpretation of the input image. In this
paper, we propose an ensemble object detection system which incorporates the
local appearance, the contextual information in terms of relationships among
objects and the global scene based contextual feature generated by a
convolutional neural network. The system is formulated as a fully connected
conditional random field (CRF) defined on object proposals and the contextual
constraints among object proposals are modeled as edges naturally. Furthermore,
a fast mean field approximation method is utilized to perform inference in this CRF
model efficiently. The experimental results demonstrate that our approach
achieves a higher mean average precision (mAP) on PASCAL VOC 2007 datasets
compared to the baseline algorithm Faster R-CNN. | [
"cs.CV"
] |
Reinforcement learning provides a powerful and general framework for decision
making and control, but its application in practice is often hindered by the
need for extensive feature and reward engineering. Deep reinforcement learning
methods can remove the need for explicit engineering of policy or value
features, but still require a manually specified reward function. Inverse
reinforcement learning holds the promise of automatic reward acquisition, but
has proven exceptionally difficult to apply to large, high-dimensional problems
with unknown dynamics. In this work, we propose adversarial inverse
reinforcement learning (AIRL), a practical and scalable inverse reinforcement
learning algorithm based on an adversarial reward learning formulation. We
demonstrate that AIRL is able to recover reward functions that are robust to
changes in dynamics, enabling us to learn policies even under significant
variation in the environment seen during training. Our experiments show that
AIRL greatly outperforms prior methods in these transfer settings. | [
"cs.LG"
] |
Metadata are general characteristics of the data in a well-curated and
condensed format, and have been proven to be useful for decision making,
knowledge discovery, and also heterogeneous data organization of the biobank.
Among all data types in the biobank, pathology is the key component
and also serves as the gold standard of diagnosis. To maximize the utility of
biobank and allow the rapid progress of biomedical science, it is essential to
organize the data with well-populated pathology metadata. However, manual
annotation of such information is tedious and time-consuming. In the study, we
develop a multimodal multitask learning framework to predict four major
slide-level metadata of pathology images. The framework learns generalizable
representations across tissue slides, pathology reports, and case-level
structured data. We demonstrate improved performance across all four tasks with
the proposed method compared to a single modal single task baseline on two test
sets, one external test set from a distinct data source (TCGA) and one internal
held-out test set (TTH). In the test sets, the performance improvements on the
averaged area under receiver operating characteristic curve across the four
tasks are 16.48% and 9.05% on TCGA and TTH, respectively. Such pathology
metadata prediction system may be adopted to mitigate the effort of expert
annotation and ultimately accelerate the data-driven research by better
utilization of the pathology biobank. | [
"cs.CV",
"cs.LG"
] |