text | label |
---|---|
Direct Feedback Alignment (DFA) is emerging as an efficient and biologically
plausible alternative to the ubiquitous backpropagation algorithm for training
deep neural networks. Despite relying on random feedback weights for the
backward pass, DFA successfully trains state-of-the-art models such as
Transformers. On the other hand, it notoriously fails to train convolutional
networks. An understanding of the inner workings of DFA to explain these
diverging results remains elusive. Here, we propose a theory for the success of
DFA. We first show that learning in shallow networks proceeds in two steps: an
alignment phase, where the model adapts its weights to align the approximate
gradient with the true gradient of the loss function, is followed by a
memorisation phase, where the model focuses on fitting the data. This two-step
process has a degeneracy breaking effect: out of all the low-loss solutions in
the landscape, a network trained with DFA naturally converges to the solution
which maximises gradient alignment. We also identify a key quantity underlying
alignment in deep linear networks: the conditioning of the alignment matrices.
The latter enables a detailed understanding of the impact of data structure on
alignment, and suggests a simple explanation for the well-known failure of DFA
to train convolutional neural networks. Numerical experiments on MNIST and
CIFAR10 clearly demonstrate degeneracy breaking in deep non-linear networks and
show that the align-then-memorise process occurs sequentially from the bottom
layers of the network to the top. | [
"stat.ML",
"cond-mat.dis-nn",
"cs.LG",
"cs.NE"
] |
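The DFA abstract above describes replacing the transposed forward weights with fixed random feedback matrices in the backward pass. Below is a minimal NumPy sketch of that idea on a two-layer network with a squared loss; the layer sizes, learning rate, and toy teacher task are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-layer network: x -> h = tanh(W1 x) -> y_hat = W2 h
n_in, n_hid, n_out = 10, 32, 2
W1 = rng.normal(scale=0.1, size=(n_hid, n_in))
W2 = rng.normal(scale=0.1, size=(n_out, n_hid))

# Fixed random feedback matrix used instead of W2.T in the backward pass
B1 = rng.normal(scale=0.1, size=(n_hid, n_out))

def dfa_step(x, y, lr=0.01):
    global W1, W2
    a1 = W1 @ x
    h = np.tanh(a1)
    y_hat = W2 @ h
    e = y_hat - y                       # output error (squared loss)
    # DFA: project the output error through the fixed random matrix B1
    delta1 = (B1 @ e) * (1.0 - h ** 2)  # tanh'(a1) = 1 - tanh(a1)^2
    W2 -= lr * np.outer(e, h)
    W1 -= lr * np.outer(delta1, x)
    return 0.5 * float(e @ e)

# Toy usage: fit a random linear teacher
X = rng.normal(size=(200, n_in))
Y = X @ rng.normal(size=(n_in, n_out))
for epoch in range(20):
    loss = np.mean([dfa_step(x, y) for x, y in zip(X, Y)])
print("final mean loss:", loss)
```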
Machine learning techniques have excelled in the automatic semantic analysis
of images, reaching human-level performances on challenging benchmarks. Yet,
the semantic analysis of videos remains challenging due to the significantly
higher dimensionality of the input data and, correspondingly, the greater need
for annotated training examples. By studying the automatic recognition of
German sign language videos, we demonstrate that on the relatively scarce
training data of 2,800 videos, modern deep learning architectures for video
analysis (such as ResNeXt), combined with transfer learning on large gesture
recognition tasks, can achieve about 75% character accuracy. Considering that
this leaves a probability of under 25% that a five-letter word is spelled
correctly, spell-correction systems are crucial for producing readable outputs.
The contribution of this paper is to propose a convolutional neural network for
spell-correction that expects the softmax outputs of the character recognition
network (instead of a misspelled word) as an input. We demonstrate that purely
learning on softmax inputs in combination with scarce training data yields
overfitting, as the network learns the inputs by heart. In contrast, training
the network on several variants of the logits of the classification output,
i.e., scaling by a constant factor, adding random noise, mixing softmax and
hardmax inputs, or training purely on hardmax inputs, leads to better
generalization while benefiting from the significant information hidden in
these outputs (which have 98% top-5 accuracy), yielding readable text despite
the comparably low character accuracy. | [
"cs.CV",
"cs.LG"
] |
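The sign-language abstract above trains a spell-corrector on perturbed variants of the recognizer's outputs rather than on clean softmax vectors. The sketch below illustrates those input variants (constant scaling, added noise, softmax/hardmax mixing, pure hardmax); the constants and the 30-symbol alphabet are made-up illustrations, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def hardmax(z, axis=-1):
    onehot = np.zeros_like(z)
    np.put_along_axis(onehot, z.argmax(axis=axis, keepdims=True), 1.0, axis=axis)
    return onehot

def augment_logits(logits, mode, noise_std=0.5, scale=2.0, mix=0.5):
    """Training-time variants of the recognizer's outputs, as described in the
    abstract above (the exact constants here are illustrative)."""
    if mode == "scale":        # scaling by a constant factor
        return softmax(scale * logits)
    if mode == "noise":        # adding random noise to the logits
        return softmax(logits + rng.normal(scale=noise_std, size=logits.shape))
    if mode == "mix":          # mixing softmax and hardmax inputs
        return mix * softmax(logits) + (1 - mix) * hardmax(logits)
    if mode == "hardmax":      # purely training on hardmax inputs
        return hardmax(logits)
    return softmax(logits)     # plain softmax (prone to memorisation)

# Toy usage: a sequence of 7 character frames over a 30-symbol alphabet
logits = rng.normal(size=(7, 30))
for mode in ["scale", "noise", "mix", "hardmax"]:
    print(mode, augment_logits(logits, mode).shape)
```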
In this chapter, we give an overview of part of our previous work based on
the minimal path framework and the Eikonal partial differential equation (PDE).
We show that by designing adequate Riemannian and Randers geodesic metrics the
minimal paths can be utilized to search for solutions to almost all of the
active contour problems and to the Euler-Mumford elastica problem, which allows
us to blend the advantages of minimal geodesic paths with those of the original
approaches, i.e. the active contours and elastica curves. The proposed minimal
path-based models can be applied to deal with a broad variety of image analysis
tasks such as boundary detection, image segmentation and tubular structure
extraction. The numerical implementations for the computation of minimal paths
are known to be quite efficient thanks to the Eikonal solvers such as the
Finsler variant of the fast marching method. | [
"cs.CV"
] |
The free energy functional has recently been proposed as a variational
principle for bounded rational decision-making, since it instantiates a natural
trade-off between utility gains and information processing costs that can be
axiomatically derived. Here we apply the free energy principle to general
decision trees that include both adversarial and stochastic environments. We
derive generalized sequential optimality equations that not only include the
Bellman optimality equations as a limit case, but also lead to well-known
decision-rules such as Expectimax, Minimax and Expectiminimax. We show how
these decision-rules can be derived from a single free energy principle that
assigns a resource parameter to each node in the decision tree. These resource
parameters express a concrete computational cost that can be measured as the
number of samples needed from the distribution associated with each node. The
free energy principle therefore provides the normative basis for
generalized optimality equations that account for both adversarial and
stochastic environments. | [
"stat.ML",
"cs.AI",
"cs.GT",
"cs.SY"
] |
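The free-energy abstract above assigns a resource parameter to each node of a decision tree, with Expectimax, Minimax, and the plain expectation recovered in limiting cases. The sketch below evaluates such a tree with a log-sum-exp "free energy" aggregator per node; the tree, probabilities, and beta values are illustrative, and the paper's exact formulation may differ in its normalisation details.

```python
import math

def free_energy(values, probs, beta):
    """Soft aggregation of child values: (1/beta) * log E[exp(beta * V)].
    beta -> +inf recovers max, beta -> -inf recovers min, beta -> 0 the mean."""
    if abs(beta) < 1e-9:                      # risk-neutral limit: expectation
        return sum(p * v for p, v in zip(probs, values))
    m = max(beta * v for v in values)         # log-sum-exp for stability
    s = sum(p * math.exp(beta * v - m) for p, v in zip(probs, values))
    return (m + math.log(s)) / beta

def evaluate(node):
    """node = ('leaf', utility) or ('node', beta, [(prob, child), ...])."""
    if node[0] == 'leaf':
        return node[1]
    _, beta, children = node
    probs = [p for p, _ in children]
    vals = [evaluate(child) for _, child in children]
    return free_energy(vals, probs, beta)

# Toy tree: an agent node with large positive beta (~max) over two chance
# nodes with beta ~ 0 (~expectation), i.e. Expectimax in the limit.
tree = ('node', 50.0, [
    (0.5, ('node', 0.0, [(0.5, ('leaf', 1.0)), (0.5, ('leaf', 3.0))])),
    (0.5, ('node', 0.0, [(0.5, ('leaf', 5.0)), (0.5, ('leaf', 1.0))])),
])
print(evaluate(tree))   # ~= 3.0, the max of the two expectations (2.0, 3.0)
```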
The automotive industry is being transformed by technologies, applications
and services ranging from sensors to big data analytics and to artificial
intelligence. In this paper, we present our multidisciplinary initiative of
creating a publicly available dataset to facilitate visual-related marketing
research and applications in the automotive industry, such as automotive
exterior design, consumer analytics and sales modelling. We are motivated by
the fact that there is growing interest in product aesthetics but there is no
large-scale dataset available that covers a wide range of variables and
information. We summarise the common issues faced by marketing researchers and
computer scientists through a user survey study, and design our dataset to
alleviate these issues. Our dataset contains 1.4 million images from 899 car
models as well as their corresponding car model specification and sales
information over more than ten years in the UK market. To the best of our
knowledge, this is the very first large-scale automotive dataset which contains
images, text and sales information from multiple sources over a long period of
time. We describe the detailed data structure and the preparation steps, which
we believe constitute a methodological contribution to multi-source data fusion
and sharing. In addition, we discuss three dataset application examples to
illustrate the value of our dataset. | [
"cs.CV"
] |
Changepoints are abrupt variations in the generative parameters of a data
sequence. Online detection of changepoints is useful in modelling and
prediction of time series in application areas such as finance, biometrics, and
robotics. While frequentist methods have yielded online filtering and
prediction techniques, most Bayesian papers have focused on the retrospective
segmentation problem. Here we examine the case where the model parameters
before and after the changepoint are independent and we derive an online
algorithm for exact inference of the most recent changepoint. We compute the
probability distribution of the length of the current ``run,'' or time since
the last changepoint, using a simple message-passing algorithm. Our
implementation is highly modular so that the algorithm may be applied to a
variety of types of data. We illustrate this modularity by demonstrating the
algorithm on three different real-world data sets. | [
"stat.ML"
] |
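The changepoint abstract above describes exact online inference of the current run length via a simple message-passing recursion. A minimal sketch of that filtering recursion with a conjugate Normal-Gamma observation model is given below; the hazard rate, hyper-parameters, and toy data are illustrative, and the paper's modular formulation covers more general model families.

```python
import numpy as np
from scipy import stats

def bocpd(data, hazard=1 / 100, mu0=0.0, kappa0=1.0, alpha0=1.0, beta0=1.0):
    """Run-length filtering with a Normal-Gamma conjugate model (illustrative
    hyper-parameters). Returns the run-length posterior R[t, r]."""
    T = len(data)
    R = np.zeros((T + 1, T + 1))
    R[0, 0] = 1.0
    mu, kappa, alpha, beta = (np.array([mu0]), np.array([kappa0]),
                              np.array([alpha0]), np.array([beta0]))
    for t, x in enumerate(data, start=1):
        # Predictive probability of x under each current run length (Student-t)
        pred = stats.t.pdf(x, df=2 * alpha, loc=mu,
                           scale=np.sqrt(beta * (kappa + 1) / (alpha * kappa)))
        # Growth probabilities: the current run continues
        R[t, 1:t + 1] = R[t - 1, :t] * pred * (1 - hazard)
        # Changepoint probability: the run length resets to zero
        R[t, 0] = np.sum(R[t - 1, :t] * pred * hazard)
        R[t] /= R[t].sum()
        # Update the sufficient statistics for every possible run length
        mu_new = (kappa * mu + x) / (kappa + 1)
        beta_new = beta + kappa * (x - mu) ** 2 / (2 * (kappa + 1))
        mu = np.concatenate([[mu0], mu_new])
        kappa = np.concatenate([[kappa0], kappa + 1])
        alpha = np.concatenate([[alpha0], alpha + 0.5])
        beta = np.concatenate([[beta0], beta_new])
    return R

# Toy usage: a mean shift halfway through the series
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0, 1, 100), rng.normal(5, 1, 100)])
R = bocpd(data)
print("most likely run length at t=150:", R[150].argmax())  # ~50 expected
```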
Surface-based geodesic topology provides strong cues for object semantic
analysis and geometric modeling. However, such connectivity information is lost
in point clouds. Thus we introduce GeoNet, the first deep learning architecture
trained to model the intrinsic structure of surfaces represented as point
clouds. To demonstrate the applicability of learned geodesic-aware
representations, we propose fusion schemes which use GeoNet in conjunction with
other baseline or backbone networks, such as PU-Net and PointNet++, for
down-stream point cloud analysis. Our method improves the state-of-the-art on
multiple representative tasks that can benefit from understandings of the
underlying surface topology, including point upsampling, normal estimation,
mesh reconstruction and non-rigid shape classification. | [
"cs.CV"
] |
At present, dementia cannot yet be cured. Precise diagnosis prior to the onset
of symptoms can prevent the rapid progression of the emerging cognitive
impairment. Recent progress has shown that Electroencephalography (EEG) is a
promising and cost-effective test for facilitating the detection of
neurocognitive disorders. However, most existing works have used only
resting-state EEG. The effectiveness of EEG signals from various cognitive
tasks for dementia classification has yet to be thoroughly investigated. In
this study, we designed four cognitive tasks that engage different cognitive
abilities: attention, working memory, and executive function. We investigated
these tasks using statistical analysis in both the time and frequency domains
of EEG signals from three classes of human subjects: Dementia (DEM), Mild
Cognitive Impairment (MCI), and Normal Control (NC). We further evaluated the
classification performance of two feature extraction methods: Principal
Component Analysis (PCA) and Filter Bank Common Spatial Pattern (FBCSP). We
found that the working-memory-related tasks yielded good performance for
dementia recognition with both PCA and FBCSP. Moreover, FBCSP with features
combined from all four tasks achieved the best sensitivity of 0.87 and a
specificity of 0.80. To the best of our knowledge, this is the first work that
concurrently investigates several cognitive tasks for dementia recognition
using both statistical analysis and classification scores. Our results provide
essential information for designing and conducting further experimental tasks
for the early diagnosis of dementia. | [
"cs.LG",
"eess.SP",
"q-bio.NC"
] |
Recently, the majority of visual trackers have adopted Convolutional Neural
Networks (CNNs) as their backbone to achieve high tracking accuracy. However,
less attention has been paid to the potential adversarial threats posed by
CNNs, including Siamese networks.
In this paper, we first analyze the existing vulnerabilities in Siamese
trackers and propose the requirements for a successful adversarial attack. On
this basis, we formulate the adversarial generation problem and propose an
end-to-end pipeline to generate a perturbed texture map for the 3D object that
causes the trackers to fail. Finally, we conduct thorough experiments to verify
the effectiveness of our algorithm. Experimental results show that adversarial
examples generated by our algorithm can successfully lower the tracking
accuracy of victim trackers and even make them drift off. To the best of our
knowledge, this is the first work to generate 3D adversarial examples on visual
trackers. | [
"cs.CV"
] |
Recurrent neural networks (RNNs) are well suited for learning non-linear
dependencies in dynamical systems from observed time series data. In practice,
not all of the external variables driving such systems are known a priori,
especially in economic forecasting. A class of RNNs called Error Correction
Neural Networks (ECNNs) was designed to compensate for missing input variables.
It does this by feeding the error made in the previous step back into the
current step. The ECNN is implemented in Python by computing the appropriate
gradients, and it is tested on stock market prediction. As expected, it
outperformed the simple RNN, the LSTM, and other hybrid models that involve a
de-noising pre-processing step. The intuition for the latter is that de-noising
may lead to a loss of information. | [
"cs.LG",
"math.DS",
"stat.ML",
"37M10, 62M10, 91B84",
"I.2.6; I.5.1; I.5.4"
] |
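The abstract above describes Error Correction Neural Networks, which feed the previous step's output error back into the current state update to compensate for unobserved drivers. The forward pass below is a hedged sketch of that mechanism only (no training loop); the parameterisation, sizes, and initialisation are assumptions for illustration and may differ from the original ECNN formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

class ECNNCell:
    """Minimal error-correction recurrent cell: the previous step's output
    error is fed back as an extra input to the state update."""
    def __init__(self, n_in, n_state, n_out):
        s = 0.1
        self.A = rng.normal(scale=s, size=(n_state, n_state))  # state transition
        self.B = rng.normal(scale=s, size=(n_state, n_in))     # external inputs
        self.C = rng.normal(scale=s, size=(n_out, n_state))    # state -> output
        self.D = rng.normal(scale=s, size=(n_state, n_out))    # error feedback

    def forward(self, x_seq, y_seq):
        s = np.zeros(self.A.shape[0])
        prev_err = np.zeros(self.C.shape[0])
        outputs = []
        for x, y in zip(x_seq, y_seq):
            s = np.tanh(self.A @ s + self.B @ x + self.D @ prev_err)
            y_hat = self.C @ s
            prev_err = y_hat - y   # error at this step, fed back at the next step
            outputs.append(y_hat)
        return np.array(outputs)

# Toy usage: 50 time steps, 3 external inputs, scalar target
cell = ECNNCell(n_in=3, n_state=8, n_out=1)
x_seq = rng.normal(size=(50, 3))
y_seq = rng.normal(size=(50, 1))
print(cell.forward(x_seq, y_seq).shape)
```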
Deep learning methods are successfully used in applications pertaining to
ubiquitous computing, health, and well-being. Specifically, the area of human
activity recognition (HAR) has been largely transformed by convolutional and
recurrent neural networks, thanks to their ability to learn semantic
representations from raw input. However, to extract generalizable features,
massive amounts of well-curated data are required, which is a notoriously
challenging task, hindered by privacy issues and annotation costs. Therefore,
unsupervised representation learning is of prime importance to leverage the
vast amount of unlabeled data produced by smart devices. In this work, we
propose a novel self-supervised technique for feature learning from sensory
data that does not require access to any form of semantic labels. We learn a
multi-task temporal convolutional network to recognize transformations applied
on an input signal. By exploiting these transformations, we demonstrate that
simple binary classification auxiliary tasks result in a strong supervisory
signal for extracting useful features for the downstream task. We
extensively evaluate the proposed approach on several publicly available
datasets for smartphone-based HAR in unsupervised, semi-supervised, and
transfer learning settings. Our method achieves performance levels superior to
or comparable with fully-supervised networks, and it performs significantly
better than autoencoders. Notably, for the semi-supervised case, the
self-supervised features substantially boost the detection rate by attaining a
kappa score between 0.7-0.8 with only 10 labeled examples per class. We get
similar impressive performance even if the features are transferred from a
different data source. While this paper focuses on HAR as the application
domain, the proposed technique is general and could be applied to a wide
variety of problems in other areas. | [
"cs.LG",
"stat.ML"
] |
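The self-supervised HAR abstract above trains a network to recognise transformations applied to raw sensor windows, turning each transformation into a binary auxiliary task. The sketch below shows a few such signal transformations and the task construction; the specific transformation set and parameters are illustrative and not necessarily those used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# A few signal transformations commonly used as self-supervised tasks for
# sensor data (illustrative choices, not the paper's exact set).
def jitter(x, sigma=0.05):
    return x + rng.normal(scale=sigma, size=x.shape)

def scale(x, sigma=0.2):
    return x * rng.normal(loc=1.0, scale=sigma, size=(1, x.shape[1]))

def flip(x):
    return -x

def permute(x, n_segments=4):
    segments = np.array_split(x, n_segments, axis=0)
    return np.concatenate([segments[i] for i in rng.permutation(n_segments)], axis=0)

TRANSFORMS = [jitter, scale, flip, permute]

def make_binary_task(x, transform):
    """One auxiliary task: decide whether the transform was applied (label 1)
    or not (label 0)."""
    if rng.random() < 0.5:
        return transform(x), 1
    return x.copy(), 0

# Toy usage: a 128-sample window of 3-axis accelerometer data
window = rng.normal(size=(128, 3))
for t in TRANSFORMS:
    x_aug, label = make_binary_task(window, t)
    print(t.__name__, x_aug.shape, "label:", label)
```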
While graph neural networks (GNNs) have been shown to perform well on
graph-based data from a variety of fields, they suffer from a lack of
transparency and accountability, which hinders trust and consequently the
deployment of such models in high-stakes and safety-critical scenarios. Even
though recent research has investigated methods for explaining GNNs, these
methods are limited to single-instance explanations, also known as local
explanations. Motivated by the aim of providing global explanations, we adapt
the well-known Automated Concept-based Explanation approach (Ghorbani et al.,
2019) to GNN node and graph classification, and propose GCExplainer.
GCExplainer is an unsupervised approach for post-hoc discovery and extraction
of global concept-based explanations for GNNs, which puts the human in the
loop. We demonstrate the success of our technique on five node classification
datasets and two graph classification datasets, showing that we are able to
discover and extract high-quality concept representations by putting the human
in the loop. We achieve a maximum completeness score of 1 and an average
completeness score of 0.753 across the datasets. Finally, we show that the
concept-based explanations provide an improved insight into the datasets and
GNN models compared to the state-of-the-art explanations produced by
GNNExplainer (Ying et al., 2019). | [
"cs.LG"
] |
In lexicon-based classification, documents are assigned labels by comparing
the number of words that appear from two opposed lexicons, such as positive and
negative sentiment. Creating such word lists is often easier than labeling
instances, and they can be debugged by non-experts if classification
performance is unsatisfactory. However, there is little analysis or
justification of this classification heuristic. This paper describes a set of
assumptions that can be used to derive a probabilistic justification for
lexicon-based classification, as well as an analysis of its expected accuracy.
One key assumption behind lexicon-based classification is that all words in
each lexicon are equally predictive. This is rarely true in practice, which is
why lexicon-based approaches are usually outperformed by supervised classifiers
that learn distinct weights on each word from labeled instances. This paper
shows that it is possible to learn such weights without labeled data, by
leveraging co-occurrence statistics across the lexicons. This offers the best
of both worlds: light supervision in the form of lexicons, and data-driven
classification with higher accuracy than traditional word-counting heuristics. | [
"cs.LG",
"cs.CL",
"stat.ML",
"I.2.6; I.2.7"
] |
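The abstract above analyses the word-counting heuristic behind lexicon-based classification. For concreteness, here is the baseline heuristic itself in a few lines of Python; the tiny sentiment lexicons are illustrative, and the paper's actual contribution (learning per-word weights from co-occurrence statistics across the lexicons) is not implemented here.

```python
def lexicon_classify(document, positive, negative):
    """Word-counting heuristic: label by which lexicon contributes more tokens.
    The lexicons below are tiny illustrative examples."""
    tokens = document.lower().split()
    pos = sum(t in positive for t in tokens)
    neg = sum(t in negative for t in tokens)
    return "positive" if pos >= neg else "negative"

positive = {"great", "excellent", "love", "wonderful"}
negative = {"terrible", "boring", "hate", "awful"}
print(lexicon_classify("I love this wonderful film", positive, negative))
print(lexicon_classify("a boring and terrible plot", positive, negative))
```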
Graph neural networks (GNNs) have emerged as a powerful approach for solving
many network mining tasks. However, learning on large graphs remains a
challenge - many recently proposed scalable GNN approaches rely on an expensive
message-passing procedure to propagate information through the graph. We
present the PPRGo model which utilizes an efficient approximation of
information diffusion in GNNs resulting in significant speed gains while
maintaining state-of-the-art prediction performance. In addition to being
faster, PPRGo is inherently scalable, and can be trivially parallelized for
large datasets like those found in industry settings. We demonstrate that PPRGo
outperforms baselines in both distributed and single-machine training
environments on a number of commonly used academic graphs. To better analyze
the scalability of large-scale graph learning methods, we introduce a novel
benchmark graph with 12.4 million nodes, 173 million edges, and 2.8 million
node features. We show that training PPRGo from scratch and predicting labels
for all nodes in this graph takes under 2 minutes on a single machine, far
outpacing other baselines on the same graph. We discuss the practical
application of PPRGo to solve large-scale node classification problems at
Google. | [
"cs.LG",
"cs.SI",
"stat.ML"
] |
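PPRGo, described above, replaces iterative message passing with a personalized-PageRank-based diffusion of per-node predictions. The sketch below computes the underlying personalized PageRank vector by dense power iteration on a toy graph; PPRGo itself uses a sparse, truncated push-based approximation, so this only illustrates the quantity being approximated, with illustrative parameter values.

```python
import numpy as np

def personalized_pagerank(A, source, alpha=0.15, n_iter=50):
    """Dense power-iteration approximation of the personalized PageRank vector
    for one source node (PPRGo keeps only a sparse top-k approximation)."""
    n = A.shape[0]
    deg = A.sum(axis=1, keepdims=True)
    P = A / np.maximum(deg, 1)              # row-stochastic transition matrix
    pi = np.zeros(n)
    pi[source] = 1.0
    teleport = np.zeros(n)
    teleport[source] = 1.0
    for _ in range(n_iter):
        pi = alpha * teleport + (1 - alpha) * (pi @ P)
    return pi

# Toy usage on a small ring graph
n = 6
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1
print(personalized_pagerank(A, source=0).round(3))
```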
In recent years, a specific machine learning method called deep learning has
gained huge attention, as it has obtained astonishing results in broad
applications such as pattern recognition, speech recognition, computer vision,
and natural language processing. Recent research has also shown that deep
learning techniques can be combined with reinforcement learning methods to
learn useful representations for problems with high-dimensional raw data
input. This chapter reviews the recent advances in deep reinforcement learning
with a focus on the most commonly used deep architectures, such as
autoencoders, convolutional neural networks and recurrent neural networks,
which have been successfully combined with the reinforcement learning
framework. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
In recent times, deep artificial neural networks have achieved many successes
in pattern recognition. Part of this success can be attributed to the reliance
on big data to increase generalization. However, in the field of time series
recognition, many datasets are often very small. One method of addressing this
problem is through the use of data augmentation. In this paper, we survey data
augmentation techniques for time series and their application to time series
classification with neural networks. We propose a taxonomy and outline four
families of time series data augmentation, including transformation-based
methods, pattern mixing, generative models, and decomposition methods.
Furthermore, we empirically evaluate 12 time series data augmentation methods
on 128 time series classification datasets with six different types of neural
networks. From the results, we analyze the characteristics, advantages, and
disadvantages of each data augmentation method, and provide recommendations.
This survey aims to help in the selection of time series data
augmentation for neural network applications. | [
"cs.LG",
"stat.ML"
] |
Convolutional video models have an order of magnitude larger computational
complexity than their counterpart image-level models. Constrained by
computational resources, there is no model or training method that can train
long video sequences end-to-end. Currently, the mainstream method is to split
a raw video into clips, leading to an incomplete and fragmentary temporal
information flow. Inspired by natural language processing techniques for
dealing with long sentences, we propose to treat a video as serial fragments
satisfying the Markov property, and to train it as a whole by progressively
propagating information through the temporal dimension in multiple steps. This
progressive training (PGT) method is able to train long videos end-to-end with
limited resources and
ensures the effective transmission of information. As a general and robust
training method, we empirically demonstrate that it yields significant
performance improvements on different models and datasets. As an illustrative
example, the proposed method improves the SlowOnly network by 3.7 mAP on Charades
and 1.9 top-1 accuracy on Kinetics with negligible parameter and computation
overhead. Code is available at https://github.com/BoPang1996/PGT. | [
"cs.CV"
] |
Reverse-engineering bar charts extracts textual and numeric information from
the visual representations of bar charts to support application scenarios that
require the underlying information. In this paper, we propose a neural
network-based method for reverse-engineering bar charts. We adopt a neural
network-based object detection model to simultaneously localize and classify
textual information. This approach improves the efficiency of textual
information extraction. We design an encoder-decoder framework that integrates
convolutional and recurrent neural networks to extract numeric information. We
further introduce an attention mechanism into the framework to achieve high
accuracy and robustness. Synthetic and real-world datasets are used to evaluate
the effectiveness of the method. To the best of our knowledge, this is the
first work to construct a complete neural network-based method for
reverse-engineering bar charts. | [
"cs.CV",
"cs.LG"
] |
We introduce a powerful but simple methodology for identifying anomalous
observations against a corpus of `normal' observations. All data are observed
through a vector-valued feature map. Our approach depends on the choice of
corpus and of feature map, but is invariant to affine transformations of the
map and has no other external dependencies, such as a choice of metric; we call
it conformance. Applying this method to the signatures of time series and other
types of streamed data we provide an effective methodology of broad
applicability for identifying anomalous complex multimodal sequential data. We
demonstrate the applicability and effectiveness of our method by evaluating it
against multiple data sets. Based on quantifying performance using the receiver
operating characteristic (ROC) area under the curve (AUC), our method yields an
AUC score of 98.9\% for the PenDigits data set; in a subsequent experiment
involving marine vessel traffic data our approach yields an AUC score of
89.1\%. Based on comparison involving univariate time series from the UEA \&
UCR time series repository with performance quantified using balanced accuracy
and assuming an optimal operating point, our approach outperforms a
state-of-the-art shapelet method for 19 out of 28 data sets. | [
"cs.LG",
"stat.ML"
] |
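The anomaly-detection abstract above scores observations by an affine-invariant notion of distance (conformance) between a feature vector and a corpus. As a rough, hedged illustration of the affine-invariance idea only, the sketch below uses a Mahalanobis-style distance to the corpus in feature space, with random features standing in for path signatures; the paper's conformance measure is defined differently and more carefully.

```python
import numpy as np

def mahalanobis_scores(corpus_features, query_features, eps=1e-6):
    """Anomaly scores as the Mahalanobis distance of each query feature vector
    to the corpus mean; affine-invariant in the same spirit as conformance,
    though not the paper's exact definition."""
    mu = corpus_features.mean(axis=0)
    cov = np.cov(corpus_features, rowvar=False)
    cov = cov + eps * np.eye(corpus_features.shape[1])   # regularise
    cov_inv = np.linalg.inv(cov)
    diff = query_features - mu
    return np.sqrt(np.einsum('ij,jk,ik->i', diff, cov_inv, diff))

# Toy usage with random features standing in for stream signatures
rng = np.random.default_rng(0)
corpus = rng.normal(size=(500, 8))
normal_queries = rng.normal(size=(5, 8))
anomalies = rng.normal(loc=4.0, size=(5, 8))
print("normal :", mahalanobis_scores(corpus, normal_queries).round(2))
print("anomaly:", mahalanobis_scores(corpus, anomalies).round(2))
```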
To be successful in real-world tasks, Reinforcement Learning (RL) needs to
exploit the compositional, relational, and hierarchical structure of the world,
and learn to transfer it to the task at hand. Recent advances in representation
learning for language make it possible to build models that acquire world
knowledge from text corpora and integrate this knowledge into downstream
decision making problems. We thus argue that the time is right to investigate a
tight integration of natural language understanding into RL in particular. We
survey the state of the field, including work on instruction following, text
games, and learning from textual domain knowledge. Finally, we call for the
development of new environments as well as further investigation into the
potential uses of recent Natural Language Processing (NLP) techniques for such
tasks. | [
"cs.LG",
"cs.AI",
"cs.CL",
"stat.ML"
] |
Image-based tracking of laparoscopic instruments plays a fundamental role in
computer and robotic-assisted surgeries by aiding surgeons and increasing
patient safety. Computer vision contests, such as the Robust Medical Instrument
Segmentation (ROBUST-MIS) Challenge, seek to encourage the development of
robust models for such purposes, providing large, diverse, and annotated
datasets. To date, most of the existing models for instance segmentation of
medical instruments have been based on two-stage detectors, which provide
robust results but are nowhere near real-time (at most 5 frames per second
(fps)). However, for the method to be clinically applicable, real-time
capability is of the utmost importance, along with high accuracy. In this
paper, we propose the addition of attention mechanisms to the YOLACT
architecture to allow real-time instance segmentation of instruments with
improved accuracy on the ROBUST-MIS dataset. Our proposed approach achieves
competitive performance compared to the winner of the 2019 ROBUST-MIS
challenge in terms of robustness scores, obtaining 0.313 MI_DSC and 0.338
MI_NSD, while achieving real-time performance (37 fps). | [
"cs.CV",
"cs.AI"
] |
For many structured learning tasks, the data annotation process is complex
and costly. Existing annotation schemes usually aim at acquiring completely
annotated structures, under the common perception that partial structures are
of low quality and could hurt the learning process. This paper questions this
common perception, motivated by the fact that structures consist of
interdependent sets of variables. Thus, given a fixed budget, partly annotating
each structure may provide the same level of supervision, while allowing for
more structures to be annotated. We provide an information theoretic
formulation for this perspective and use it, in the context of three diverse
structured learning tasks, to show that learning from partial structures can
sometimes outperform learning from complete ones. Our findings may provide
important insights into structured data annotation schemes and could support
progress in learning protocols for structured tasks. | [
"cs.LG",
"cs.CL",
"stat.ML"
] |
In this paper, we address a key limitation of existing 2D face recognition
methods: robustness to occlusions. To accomplish this task, we systematically
analyzed the impact of facial attributes on the performance of a
state-of-the-art face recognition method and through extensive experimentation,
quantitatively analyzed the performance degradation under different types of
occlusion. Our proposed Occlusion-aware face REcOgnition (OREO) approach
learned discriminative facial templates despite the presence of such
occlusions. First, an attention mechanism was proposed that extracted local
identity-related regions. The local features were then aggregated with the
global representations to form a single template. Second, a simple, yet
effective, training strategy was introduced to balance the non-occluded and
occluded facial images. Extensive experiments demonstrated that OREO improved
the generalization ability of face recognition under occlusions by 10.17% in a
single-image-based setting and outperformed the baseline by approximately 2% in
terms of rank-1 accuracy in an image-set-based scenario. | [
"cs.CV"
] |
The Transformer architecture has become increasingly popular over the past
two years, owing to its impressive performance on a number of natural language
processing (NLP) tasks. However, all Transformer computations occur at the
level of word representations and therefore, it may be argued that Transformer
models do not explicitly attempt to learn hierarchical structure which is
widely assumed to be integral to language. In the present work, we introduce
hierarchical processing into the Transformer model, taking inspiration from the
U-Net architecture, popular in computer vision for its hierarchical view of
natural images. We empirically demonstrate that the proposed architecture
outperforms both the vanilla Transformer and some strong baselines in the
domain of chit-chat dialogue. | [
"cs.LG",
"cs.CL",
"stat.ML"
] |
Despite the substantial progress of active learning for image recognition,
there is still no instance-level active learning method designed specifically
for object detection. In this paper, we propose Multiple Instance Active Object Detection
(MI-AOD), to select the most informative images for detector training by
observing instance-level uncertainty. MI-AOD defines an instance uncertainty
learning module, which leverages the discrepancy of two adversarial instance
classifiers trained on the labeled set to predict instance uncertainty of the
unlabeled set. MI-AOD treats unlabeled images as instance bags and feature
anchors in images as instances, and estimates the image uncertainty by
re-weighting instances in a multiple instance learning (MIL) fashion. Iterative
instance uncertainty learning and re-weighting facilitate suppressing noisy
instances, toward bridging the gap between instance uncertainty and image-level
uncertainty. Experiments validate that MI-AOD sets a solid baseline for
instance-level active learning. On commonly used object detection datasets,
MI-AOD outperforms state-of-the-art methods with significant margins,
particularly when the labeled sets are small. Code is available at
https://github.com/yuantn/MI-AOD. | [
"cs.CV",
"cs.AI",
"cs.LG"
] |
Adversarial domain-invariant training (ADIT) proves to be effective in
suppressing the effects of domain variability in acoustic modeling and has led
to improved performance in automatic speech recognition (ASR). In ADIT, an
auxiliary domain classifier takes in equally-weighted deep features from a deep
neural network (DNN) acoustic model and is trained to improve their
domain-invariance by optimizing an adversarial loss function. In this work, we
propose an attentive ADIT (AADIT) in which we advance the domain classifier
with an attention mechanism to automatically weight the input deep features
according to their importance in domain classification. With this attentive
re-weighting, AADIT can focus on the domain normalization of phonetic
components that are more susceptible to domain variability and generates deep
features with improved domain-invariance and senone-discriminativity over ADIT.
Most importantly, the attention block serves only as an external component to
the DNN acoustic model and is not involved in ASR, so AADIT can be used to
improve the acoustic modeling with any DNN architectures. More generally, the
same methodology can improve any adversarial learning system with an auxiliary
discriminator. Evaluated on the CHiME-3 dataset, AADIT achieves 13.6% and 9.3%
relative WER improvements, respectively, over a multi-conditional model and a
strong ADIT baseline. | [
"cs.LG",
"cs.CL",
"cs.SD",
"eess.AS",
"stat.ML"
] |
The field of Deep Learning is rich with empirical evidence of human-like
performance on a variety of prediction tasks. However, despite these successes,
the recent Predicting Generalization in Deep Learning (PGDL) NeurIPS 2020
competition suggests that there is a need for more robust and efficient
measures of network generalization. In this work, we propose a new framework
for evaluating the generalization capabilities of trained networks. We use
perturbation response (PR) curves that capture the accuracy change of a given
network as a function of varying levels of training sample perturbation. From
these PR curves, we derive novel statistics that capture generalization
capability. Specifically, we introduce two new measures for accurately
predicting generalization gaps: the Gi-score and the Pal-score, inspired by the
Gini coefficient and the Palma ratio (measures of income inequality). Applying
our framework to intra- and inter-class sample mixup, we attain better
predictive scores than the
current state-of-the-art measures on a majority of tasks in the PGDL
competition. In addition, we show that our framework and the proposed
statistics can be used to capture to what extent a trained network is invariant
to a given parametric input transformation, such as rotation or translation.
Therefore, these generalization gap prediction statistics also provide a useful
means for selecting the optimal network architectures and hyperparameters that
are invariant to a certain perturbation. | [
"cs.LG",
"cs.AI"
] |
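The abstract above derives generalization measures (the Gi-score and Pal-score) from perturbation-response curves, taking inspiration from income-inequality statistics. The sketch below computes a plain Gini coefficient over an illustrative, made-up accuracy-drop curve to show the kind of statistic involved; the paper's exact Gi-score and Pal-score definitions may differ.

```python
import numpy as np

def gini(values):
    """Standard (sample) Gini coefficient of non-negative values."""
    v = np.sort(np.asarray(values, dtype=float))
    n = len(v)
    total = v.sum()
    if total == 0:
        return 0.0
    index = np.arange(1, n + 1)
    return float(2.0 * np.sum(index * v) / (n * total) - (n + 1) / n)

# Illustrative perturbation-response curve: accuracy at increasing levels of
# training-sample perturbation (the numbers are made up for the example).
perturbation_levels = np.linspace(0.0, 1.0, 6)
accuracy = np.array([0.95, 0.93, 0.88, 0.80, 0.65, 0.40])
accuracy_drop = accuracy[0] - accuracy   # response relative to clean accuracy
print("Gini-style inequality of the accuracy drops:", round(gini(accuracy_drop), 3))
```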
Synthesising 3D facial motion from speech is a crucial problem manifesting in
a multitude of applications such as computer games and movies. Recently
proposed methods tackle this problem in controlled conditions of speech. In
this paper, we introduce the first methodology for 3D facial motion synthesis
from speech captured in arbitrary recording conditions ("in-the-wild") and
independent of the speaker. For our purposes, we captured 4D sequences of
people uttering 500 words, contained in the Lip Reading Words (LRW) dataset, a
publicly available large-scale in-the-wild dataset, and built a set of 3D blendshapes
appropriate for speech. We correlate the 3D shape parameters of the speech
blendshapes to the LRW audio samples by means of a novel time-warping
technique, named Deep Canonical Attentional Warping (DCAW), that can
simultaneously learn hierarchical non-linear representations and a warping path
in an end-to-end manner. We thoroughly evaluate our proposed methods, and show
the ability of a deep learning model to synthesise 3D facial motion while
handling different speakers and continuous speech signals in uncontrolled
conditions. | [
"cs.CV"
] |
A single unit (head) is the conventional input feature extractor in deep
learning architectures trained on multivariate time series signals. The
importance of the fixed-dimensional vector representation generated by the
single-head network has been demonstrated for industrial machinery condition
monitoring and predictive maintenance. However, processing heterogeneous sensor
signals with a single head may result in a model that cannot explicitly account
for the diversity in time-varying multivariate inputs. This work extends the
conventional single-head deep learning models to a more robust form by
developing context-specific heads to independently capture the inherent pattern
in each sensor reading. Using the turbofan aircraft engine benchmark dataset
(CMAPSS), an extensive experiment is performed to verify the effectiveness and
benefits of multi-head multilayer perceptrons, recurrent networks, convolutional
networks, the transformer-style stand-alone attention network, and their
variants for remaining useful life estimation. Moreover, the effect of
different attention mechanisms on the multi-head models is also evaluated. In
addition, each architecture's relative advantage and computational overhead are
analyzed. Results show that utilizing the attention layer is task-sensitive and
model dependent, as it does not provide consistent improvement across the
models investigated. The best model is further compared with five
state-of-the-art models, and the comparison shows that a relatively simple
multi-head architecture performs better than the state-of-the-art models. The
results presented in this study demonstrate the importance of multi-head models
and attention mechanisms to an improved understanding of the remaining useful
life of industrial assets. | [
"cs.LG",
"J.2; D.2.11; E.1"
] |
Cross-view image generation has been recently proposed to generate images of
one view from another dramatically different view. In this paper, we
investigate exocentric (third-person) view to egocentric (first-person) view
image generation. This is a challenging task, since the egocentric view is
sometimes remarkably different from the exocentric view. Thus, transforming
appearances across the two views is non-trivial. To this end, we propose a novel
Parallel Generative Adversarial Network (P-GAN) with a novel cross-cycle loss
to learn the shared information for generating egocentric images from
exocentric view. We also incorporate a novel contextual feature loss in the
learning procedure to capture the contextual information in images. Extensive
experiments on the Exo-Ego datasets show that our model outperforms the
state-of-the-art approaches. | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
Video-based person re-identification (re-ID) is an important research topic
in computer vision. The key to tackling the challenging task is to exploit both
spatial and temporal clues in video sequences. In this work, we propose a novel
graph-based framework, namely Multi-Granular Hypergraph (MGH), to pursue better
representational capabilities by modeling spatiotemporal dependencies in terms
of multiple granularities. Specifically, hypergraphs with different spatial
granularities are constructed using various levels of part-based features
across the video sequence. In each hypergraph, different temporal granularities
are captured by hyperedges that connect a set of graph nodes (i.e., part-based
features) across different temporal ranges. Two critical issues (misalignment
and occlusion) are explicitly addressed by the proposed hypergraph propagation
and feature aggregation schemes. Finally, we further enhance the overall video
representation by learning more diversified graph-level representations of
multiple granularities based on mutual information minimization. Extensive
experiments on three widely adopted benchmarks clearly demonstrate the
effectiveness of the proposed framework. Notably, 90.0% top-1 accuracy on MARS
is achieved using MGH, outperforming the state of the art. Code is available
at https://github.com/daodaofr/hypergraph_reid. | [
"cs.CV"
] |
Dynamic imaging is a recently proposed action description paradigm for
simultaneously capturing motion and temporal evolution information,
particularly in the context of deep convolutional neural networks (CNNs).
Compared with optical flow for motion characterization, dynamic imaging
exhibits superior efficiency and compactness. Inspired by the success of
dynamic imaging in RGB video, this study extends it to the depth domain. To
better exploit three-dimensional (3D) characteristics, multi-view dynamic
images are proposed. In particular, the raw depth video is densely projected
with respect to different virtual imaging viewpoints by rotating the virtual
camera within the 3D space. Subsequently, dynamic images are extracted from the
obtained multi-view depth videos and multi-view dynamic images are thus
constructed from these images. Accordingly, more view-tolerant visual cues can
be involved. A novel CNN model is then proposed to perform feature learning on
multi-view dynamic images. Particularly, the dynamic images from different
views share the same convolutional layers but correspond to different fully
connected layers. This is aimed at enhancing the tuning effectiveness on
shallow convolutional layers by alleviating the gradient vanishing problem.
Moreover, as the spatial occurrence variation of the actions may impair the
CNN, an action proposal approach is also put forth. In experiments, the
proposed approach can achieve state-of-the-art performance on three challenging
datasets. | [
"cs.CV"
] |
Recent work has focused on generating synthetic imagery to increase the size
and variability of training data for learning visual tasks in urban scenes.
This includes increasing the occurrence of occlusions or varying environmental
and weather effects. However, few have addressed modeling variation in the
sensor domain. Sensor effects can degrade real images, limiting
generalizability of network performance on visual tasks trained on synthetic
data and tested in real environments. This paper proposes an efficient,
automatic, physically-based augmentation pipeline to vary sensor effects
--chromatic aberration, blur, exposure, noise, and color cast-- for synthetic
imagery. In particular, this paper illustrates that augmenting synthetic
training datasets with the proposed pipeline reduces the domain gap between
synthetic and real domains for the task of object detection in urban driving
scenes. | [
"cs.CV"
] |
Learning about many things can provide numerous benefits to a reinforcement
learning system. For example, learning many auxiliary value functions, in
addition to optimizing the environmental reward, appears to improve both
exploration and representation learning. The question we tackle in this paper
is how to sculpt the stream of experience---how to adapt the learning system's
behavior---to optimize the learning of a collection of value functions. A
simple answer is to compute an intrinsic reward based on the statistics of each
auxiliary learner, and use reinforcement learning to maximize that intrinsic
reward. Unfortunately, implementing this simple idea has proven difficult, and
thus has been the focus of decades of study. It remains unclear which of the
many possible measures of learning would work well in a parallel learning
setting where environmental reward is extremely sparse or absent. In this
paper, we investigate and compare different intrinsic reward mechanisms in a
new bandit-like parallel-learning testbed. We discuss the interaction between
reward and prediction learners and highlight the importance of introspective
prediction learners: those that increase their rate of learning when progress
is possible, and decrease when it is not. We provide a comprehensive empirical
comparison of 14 different rewards, including well-known ideas from
reinforcement learning and active learning. Our results highlight a simple but
seemingly powerful principle: intrinsic rewards based on the amount of learning
can generate useful behavior, if each individual learner is introspective. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
The overarching goals in image-based localization are scale, robustness and
speed. In recent years, approaches based on local features and sparse 3D
point-cloud models have both dominated the benchmarks and seen successful
realworld deployment. They enable applications ranging from robot navigation,
autonomous driving, virtual and augmented reality to device geo-localization.
Recently end-to-end learned localization approaches have been proposed which
show promising results on small scale datasets. However the positioning
accuracy, scalability, latency and compute & storage requirements of these
approaches remain open challenges. We aim to deploy localization at
global-scale where one thus relies on methods using local features and sparse
3D models. Our approach spans from offline model building to real-time
client-side pose fusion. The system compresses the appearance and geometry of
the scene for efficient model storage and lookup, leading to scalability beyond
what has previously been demonstrated. It allows for low-latency localization
queries and efficient fusion running in real time on mobile platforms by
combining server-side localization with real-time visual-inertial-based camera
pose
tracking. In order to further improve efficiency we leverage a combination of
priors, nearest neighbor search, geometric match culling and a cascaded pose
candidate refinement step. This combination outperforms previous approaches
when working with large scale models and allows deployment at unprecedented
scale. We demonstrate the effectiveness of our approach on a proof-of-concept
system localizing 2.5 million images against models from four cities in
different regions of the world, achieving query latencies in the 200 ms range. | [
"cs.CV"
] |
In this paper, we conduct a large-scale study of font statistics in book
covers and online advertisements. Through the statistical study, we try to
understand how graphic designers relate fonts and content genres and identify
the relationship between font styles, colors, and genres. We propose an
automatic approach to extract font information from graphic designs by applying
a sequence of character detection, style classification, and clustering
techniques to the graphic designs. The extracted font information is
accumulated together with genre information, such as romance or business, for
further trend analysis. Through our unique empirical study, we show that the
collected font statistics reveal interesting trends in terms of how typographic
design represents the impression and the atmosphere of the content genres. | [
"cs.CV"
] |
With the popularity of blockchain technology, the financial security issues
of blockchain transaction networks have become increasingly serious. Phishing
scam detection methods will protect possible victims and build a healthier
blockchain ecosystem. Usually, the existing works define phishing scam
detection as a node classification task by learning the potential features of
users through graph embedding methods such as random walk or graph neural
network (GNN). However, these detection methods suffer from high complexity
due to the large scale of the blockchain transaction network, and they ignore
the temporal information of transactions. To address this problem, we define
transaction pattern graphs for users and transform phishing scam detection
into a graph classification task. To extract richer information from the input
graph, we propose a multi-channel graph classification model (MCGC) with
multiple feature extraction channels for the GNN. The transaction pattern
graphs and MCGC are better able to detect potential phishing scammers by
extracting the transaction pattern features of the target users. Extensive
experiments on seven benchmark and Ethereum datasets demonstrate that the
proposed MCGC can not only achieve state-of-the-art performance in the graph
classification task but also achieve effective phishing scam detection based on
the target users' transaction pattern graphs. | [
"cs.LG"
] |
We introduce Patch Refinement, a two-stage model for accurate 3D object
detection and localization from point cloud data. Patch Refinement is composed
of two independently trained Voxelnet-based networks, a Region Proposal Network
(RPN) and a Local Refinement Network (LRN). We decompose the detection task
into a preliminary Bird's Eye View (BEV) detection step and a local 3D
detection step. Based on the proposed BEV locations by the RPN, we extract
small point cloud subsets ("patches"), which are then processed by the LRN,
which is less limited by memory constraints due to the small area of each
patch. Therefore, we can apply encoding with a higher voxel resolution locally.
The independence of the LRN enables the use of additional augmentation
techniques and allows for an efficient, regression focused training as it uses
only a small fraction of each scene. Evaluated on the KITTI 3D object detection
benchmark, our submission from January 28, 2019, outperformed all previous
entries on all three difficulty levels of the car class, using only 50% of the
available training data and only LiDAR information. | [
"cs.CV",
"cs.LG"
] |
In this paper, we improve the image embeddings generated in the graph neural
network solution for few-shot learning. We propose alternate architectures for
existing networks such as Inception-Net, U-Net, Attention U-Net, and
Squeeze-Net to generate embeddings and increase the accuracy of the models. We
improve the quality of embeddings created at the cost of the time taken to
generate them. The proposed implementations outperform the existing
state-of-the-art methods for 1-shot and 5-shot learning on the Omniglot
dataset. The
experiments involved a testing set and training set which had no common classes
between them. The results for 5-way and 10-way/20-way tests have been
tabulated. | [
"cs.CV",
"cs.LG"
] |
We propose a learning-based method that solves monocular stereo and can be
extended to fuse depth information from multiple target frames. Given two
unconstrained images from a monocular camera with known intrinsic calibration,
our network estimates relative camera poses and the depth map of the source
image. The core contribution of the proposed method is threefold. First, a
network is tailored for static scenes that jointly estimates the optical flow
and camera motion. By the joint estimation, the optical flow search space is
gradually reduced resulting in an efficient and accurate flow estimation.
Second, a novel triangulation layer is proposed to encode the estimated optical
flow and camera motion while avoiding common numerical issues caused by the
epipolar geometry. Third, beyond two-view depth estimation, we further extend the above
networks to fuse depth information from multiple target images and estimate the
depth map of the source image. To further benefit the research community, we
introduce tools to generate photorealistic structure-from-motion datasets such
that deep networks can be well trained and evaluated. The proposed method is
compared with previous methods and achieves state-of-the-art results within
less time. Images from real-world applications and Google Earth are used to
demonstrate the generalization ability of the method. | [
"cs.CV"
] |
Traditional deep learning algorithms often fail to generalize when they are
tested outside of the domain of training data. Because data distributions can
change dynamically in real-life applications once a learned model is deployed,
in this paper we are interested in single-source domain generalization (SDG)
which aims to develop deep learning algorithms able to generalize from a single
training domain where no information about the test domain is available at
training time. Firstly, we design two simple MNIST-based SDG benchmarks, namely
MNIST Color SDG-MP and MNIST Color SDG-UP, which highlight two fundamental SDG
issues of increasing difficulty: a class-correlated pattern seen in the
training domain is either 1) missing (SDG-MP) or 2) uncorrelated with the
class (SDG-UP) in the test domain. This is in sharp contrast with current
domain generalization (DG) benchmarks, which mix up different correlation and
variation factors and thereby make it hard to disentangle success or failure
factors when benchmarking DG algorithms. We further evaluate several
state-of-the-art SDG algorithms on our simple benchmark, namely MNIST Color
SDG-MP, and show that the SDG-MP issue remains largely unsolved despite a
decade of effort in developing DG algorithms. Finally, we also propose a
partially reversed contrastive loss to encourage intra-class diversity and find
less strongly correlated patterns, to deal with SDG-MP and show that the
proposed approach is very effective on our MNIST Color SDG-MP benchmark. | [
"cs.CV",
"cs.LG"
] |
Advancements in sensing and computing technologies, the development of human
and computer interaction frameworks, big data storage capabilities, and the
emergence of cloud storage and cloud computing have resulted in an abundance of
data in modern industry. This data availability has encouraged researchers
and industry practitioners to rely on data-based machine learning, especially
deep learning, models for fault diagnostics and prognostics more than ever.
These models provide unique advantages, however, their performance is heavily
dependent on the training data and how well that data represents the test data.
This issue mandates fine-tuning and even training the models from scratch when
there is a slight change in operating conditions or equipment. Transfer
learning is an approach that can remedy this issue by keeping portions of what
is learned from previous training and transferring them to the new application.
In this paper, a unified definition for transfer learning and its different
types is provided, Prognostics and Health Management (PHM) studies that have
used transfer learning are reviewed in detail, and finally, a discussion on
transfer learning application considerations and gaps is provided for improving
the applicability of transfer learning in PHM. | [
"cs.LG",
"stat.ML"
] |
Convolutional Neural Networks (CNNs) can be shifted across 2D images or 3D
videos to segment them. They have a fixed input size and typically perceive
only small local contexts of the pixels to be classified as foreground or
background. In contrast, Multi-Dimensional Recurrent NNs (MD-RNNs) can perceive
the entire spatio-temporal context of each pixel in a few sweeps through all
pixels, especially when the RNN is a Long Short-Term Memory (LSTM). Despite
these theoretical advantages, however, unlike CNNs, previous MD-LSTM variants
were hard to parallelize on GPUs. Here we re-arrange the traditional cuboid
order of computations in MD-LSTM in pyramidal fashion. The resulting
PyraMiD-LSTM is easy to parallelize, especially for 3D data such as stacks of
brain slice images. PyraMiD-LSTM achieved best known pixel-wise brain image
segmentation results on MRBrainS13 (and competitive results on EM-ISBI12). | [
"cs.CV",
"cs.LG"
] |
Keeping in mind the necessity of intelligent systems in the educational
sector, this paper proposes a text analysis based automated approach for the
automatic evaluation of descriptive answers in an examination. In particular,
the research focuses on the use of intelligent concepts from Natural Language
Processing and Data Mining for a computer-aided examination evaluation system.
The paper presents an architecture for the fair evaluation of answer sheets.
In this architecture, the examiner creates a sample answer sheet for a given
set of questions. Using the concepts of text summarization, text semantics and
keyword summarization, the final score for each answer is calculated. The text
similarity model is based on the Siamese Manhattan LSTM (MaLSTM). The results
of this research were compared to manually graded assignments and other
existing systems. This approach was found to be efficient enough to be
implemented in an institution or a university. | [
"cs.LG",
"cs.CL",
"cs.IR"
] |
We propose a novel approach for few-shot talking-head synthesis. While recent
works in neural talking heads have produced promising results, they can still
produce images that do not preserve the identity of the subject in source
images. We posit this is a result of the entangled representation of each
subject in a single latent code that models 3D shape information, identity
cues, colors, lighting and even background details. In contrast, we propose to
factorize the representation of a subject into its spatial and style
components. Our method generates a target frame in two steps. First, it
predicts a dense spatial layout for the target image. Second, an image
generator utilizes the predicted layout for spatial denormalization and
synthesizes the target frame. We experimentally show that this disentangled
representation leads to a significant improvement over previous methods, both
quantitatively and qualitatively. | [
"cs.CV"
] |
The autoregressive language model (ALM) trained with maximum likelihood
estimation (MLE) is widely used in unconditional text generation. Due to
exposure bias, the generated texts still suffer from low quality and diversity.
This presents statistically as a discrepancy between the real text and
generated text. Some research shows a discriminator can detect this
discrepancy. Because the discriminator can encode more information than the
generator, the discriminator has the potential to improve the generator. To
alleviate the exposure bias, generative adversarial networks (GANs) use the
discriminator to update the generator's parameters directly, but they fail
when evaluated precisely. A critical reason for the failure is the difference
between the discriminator input and the ALM input. We propose a novel
mechanism that adds a filter which has the same input as the discriminator.
First, the discriminator detects the discrepancy signals and passes them to
the filter directly (or by learning). Then, we use the filter to reject some
generated samples with a sampling-based method. Thus, the original generative
distribution is revised to reduce the discrepancy. Two ALMs, RNN-based and
Transformer-based, are tested. Evaluated precisely by three metrics, our
mechanism consistently outperforms the ALMs and all kinds of GANs across two
benchmark data sets. | [
"cs.CV",
"cs.CL"
] |
Deep learning continues to revolutionize an ever-growing number of critical
application areas including healthcare, transportation, finance, and basic
sciences. Despite their increased predictive power, model transparency and
human explainability remain a significant challenge due to the "black box"
nature of modern deep learning models. In many cases the desired balance
between interpretability and performance is predominately task specific.
Human-centric domains such as healthcare necessitate a renewed focus on
understanding how and why these frameworks are arriving at critical and
potentially life-or-death decisions. Given the quantity of research and
empirical successes of deep learning for computer vision, most of the existing
interpretability research has focused on image processing techniques.
Comparatively, less attention has been paid to interpreting deep learning
frameworks using sequential data. Given recent deep learning advancements in
highly sequential domains such as natural language processing and physiological
signal processing, the need for deep sequential explanations is at an all-time
high. In this paper, we review current techniques for interpreting deep
learning techniques involving sequential data, identify similarities to
non-sequential methods, and discuss current limitations and future avenues of
sequential interpretability research. | [
"cs.LG",
"stat.ML"
] |
The main approach to defining equivalence among acyclic directed causal
graphical models is based on the conditional independence relationships in the
distributions that the causal models can generate, in terms of the Markov
equivalence. However, it is known that when cycles are allowed in the causal
structure, conditional independence may not be a suitable notion for
equivalence of two structures, as it does not reflect all the information in
the distribution that is useful for identification of the underlying structure.
In this paper, we present a general, unified notion of equivalence for linear
Gaussian causal directed graphical models, whether they are cyclic or acyclic.
In our proposed definition of equivalence, two structures are equivalent if
they can generate the same set of data distributions. We also propose a weaker
notion of equivalence called quasi-equivalence, which we show is the extent of
identifiability from observational data. We propose analytic as well as
graphical methods for characterizing the equivalence of two structures.
Additionally, we propose a score-based method for learning the structure from
observational data, which successfully deals with both acyclic and cyclic
structures. | [
"cs.LG",
"stat.ML"
] |
Analyzing spatio-temporal data like video is a challenging task that requires
processing visual and temporal information effectively. Convolutional Neural
Networks have shown promise as baseline fixed feature extractors through
transfer learning, a technique that helps minimize the training cost on visual
information. Temporal information is often handled using hand-crafted features
or Recurrent Neural Networks, but this can be overly specific or prohibitively
complex. Building a fully trainable system that can efficiently analyze
spatio-temporal data without hand-crafted features or complex training is an
open challenge. We present a new neural network architecture to address this
challenge, the Convolutional Drift Network (CDN). Our CDN architecture combines
the visual feature extraction power of deep Convolutional Neural Networks with
the intrinsically efficient temporal processing provided by Reservoir
Computing. In this introductory paper on the CDN, we provide a very simple
baseline implementation tested on two egocentric (first-person) video activity
datasets. We achieve video-level activity classification results on par with
state-of-the-art methods. Notably, performance on this complex spatio-temporal
task was produced by only training a single feed-forward layer in the CDN. | [
"cs.CV",
"cs.NE",
"eess.IV"
] |
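The reservoir-computing half of a CDN-style pipeline can be sketched as below: per-frame CNN features drive a fixed random reservoir whose final state summarises the clip, and only a linear readout is trained. The reservoir size, spectral radius, and ridge readout are assumptions made for illustration.

```python
# Illustrative sketch of CNN features feeding a fixed reservoir with a trained readout.
# Sizes and the ridge-regression readout are assumptions, not the paper's exact setup.
import numpy as np

rng = np.random.default_rng(0)
feat_dim, res_dim, n_classes, T = 512, 300, 10, 40

W_in = rng.normal(scale=0.1, size=(res_dim, feat_dim))        # fixed input weights
W = rng.normal(size=(res_dim, res_dim))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))               # scale spectral radius below 1

def reservoir_states(frame_features):
    x = np.zeros(res_dim)
    for f in frame_features:                                  # one CNN feature vector per frame
        x = np.tanh(W_in @ f + W @ x)
    return x                                                  # final state summarises the clip

clips = [rng.normal(size=(T, feat_dim)) for _ in range(20)]   # stand-in for CNN features
labels = rng.integers(0, n_classes, size=20)
X = np.stack([reservoir_states(c) for c in clips])
Y = np.eye(n_classes)[labels]
W_out = np.linalg.solve(X.T @ X + 1e-2 * np.eye(res_dim), X.T @ Y)  # train readout only
pred = np.argmax(X @ W_out, axis=1)
```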
Style synthesis has attracted great interest recently, while few works focus on
its dual problem "style separation". In this paper, we propose the Style
Separation and Synthesis Generative Adversarial Network (S3-GAN) to
simultaneously implement style separation and style synthesis on object
photographs of specific categories. Based on the assumption that the object
photographs lie on a manifold, and the contents and styles are independent, we
employ S3-GAN to build mappings between the manifold and a latent vector space
for separating and synthesizing the contents and styles. The S3-GAN consists of
an encoder network, a generator network, and an adversarial network. The
encoder network performs style separation by mapping an object photograph to a
latent vector. Two halves of the latent vector represent the content and style,
respectively. The generator network performs style synthesis by taking a
concatenated vector as input. The concatenated vector contains the style half
vector of the style target image and the content half vector of the content
target image. Once obtaining the images from the generator network, an
adversarial network is imposed to generate more photo-realistic images.
Experiments on CelebA and UT Zappos 50K datasets demonstrate that the S3-GAN
has the capacity of style separation and synthesis simultaneously, and could
capture various styles in a single model. | [
"cs.CV"
] |
This paper introduces a newly collected and novel dataset (StereoMSI) for
example-based single and colour-guided spectral image super-resolution. The
dataset was first released and promoted during the PIRM2018 spectral image
super-resolution challenge. To the best of our knowledge, the dataset is the
first of its kind, comprising 350 registered colour-spectral image pairs. The
dataset has been used for the two tracks of the challenge and, for each of
these, we have provided a split into training, validation and testing. This
arrangement is a result of the challenge structure and phases, with the first
track focusing on example-based spectral image super-resolution and the second
one aiming at exploiting the registered stereo colour imagery to improve the
resolution of the spectral images. Each of the tracks and splits has been
selected to be consistent across a number of image quality metrics. The dataset
is quite general in nature and can be used for a wide variety of applications
in addition to the development of spectral image super-resolution methods. | [
"cs.CV"
] |
A novel energy-efficient edge computing paradigm is proposed for real-time
deep learning-based image upsampling applications. State-of-the-art deep
learning solutions for image upsampling are currently trained using either
resize or sub-pixel convolution to learn kernels that generate high fidelity
images with minimal artifacts. However, performing inference with these learned
convolution kernels requires memory-intensive feature map transformations that
dominate time and energy costs in real-time applications. To alleviate this
pressure on memory bandwidth, we confine the use of resize or sub-pixel
convolution to training in the cloud by transforming learned convolution
kernels to deconvolution kernels before deploying them for inference as a
functionally equivalent deconvolution. These kernel transformations, intended
as a one-time cost when shifting from training to inference, enable a systems
designer to use each algorithm in their optimal context by preserving the image
fidelity learned when training in the cloud while minimizing data transfer
penalties during inference at the edge. We also explore existing variants of
deconvolution inference algorithms and introduce a novel variant for
consideration. We analyze and compare the inference properties of
convolution-based upsampling algorithms using a quantitative model of incurred
time and energy costs and show that using deconvolution for inference at the
edge improves both system latency and energy efficiency when compared to their
sub-pixel or resize convolution counterparts. | [
"cs.CV",
"cs.AR",
"cs.DC",
"cs.LG"
] |
Video style transfer is a useful component for applications such as augmented
reality, non-photorealistic rendering, and interactive games. Many existing
methods use optical flow to preserve the temporal smoothness of the synthesized
video. However, the estimation of optical flow is sensitive to occlusions and
rapid motions. Thus, in this work, we introduce a novel evolve-sync loss
computed by evolvements to replace optical flow. Using this evolve-sync loss,
we build an adversarial learning framework, termed as Video Style Transfer
Generative Adversarial Network (VST-GAN), which improves upon the MGAN method
for image style transfer for more efficient video style transfer. We perform
extensive experimental evaluations of our method and show quantitative and
qualitative improvements over the state-of-the-art methods. | [
"cs.CV"
] |
Today's general-purpose deep convolutional neural networks (CNN) for image
classification and object detection are trained offline on large static
datasets. Some applications, however, will require training in real-time on
live video streams with a human-in-the-loop. We refer to this class of problem
as Time-ordered Online Training (ToOT) - these problems will require a
consideration of not only the quantity of incoming training data, but the human
effort required to tag and use it. In this paper, we define training benefit as
a metric to measure the effectiveness of a sequence in using each user
interaction. We demonstrate and evaluate a system tailored to performing ToOT
in the field, capable of training an image classifier on a live video stream
through minimal input from a human operator. We show that by exploiting the
time-ordered nature of the video stream through optical flow-based object
tracking, we can increase the effectiveness of human actions by about 8 times. | [
"cs.CV",
"cs.AI",
"cs.HC",
"C.1.3"
] |
Visual dialog is a challenging vision-language task, which requires the agent
to answer multi-round questions about an image. It typically needs to address
two major problems: (1) How to answer visually-grounded questions, which is the
core challenge in visual question answering (VQA); (2) How to infer the
co-reference between questions and the dialog history. An example of visual
co-reference is: pronouns (\eg, ``they'') in the question (\eg, ``Are they on
or off?'') are linked with nouns (\eg, ``lamps'') appearing in the dialog
history (\eg, ``How many lamps are there?'') and the object grounded in the
image. In this work, to resolve the visual co-reference for visual dialog, we
propose a novel attention mechanism called Recursive Visual Attention (RvA).
Specifically, our dialog agent browses the dialog history until the agent has
sufficient confidence in the visual co-reference resolution, and refines the
visual attention recursively. The quantitative and qualitative experimental
results on the large-scale VisDial v0.9 and v1.0 datasets demonstrate that the
proposed RvA not only outperforms the state-of-the-art methods, but also
achieves reasonable recursion and interpretable attention maps without
additional annotations. The code is available at
\url{https://github.com/yuleiniu/rva}. | [
"cs.CV"
] |
The paper presents our proposed solutions for the MediaEval 2020
Flood-Related Multimedia Task, which aims to analyze and detect flooding events
in multimedia content shared over Twitter. In total, we proposed four different
solutions including a multi-modal solution combining textual and visual
information for the mandatory run, and three single modal image and text-based
solutions as optional runs. In the multimodal method, we rely on a supervised
multimodal bitransformer model that combines textual and visual features in an
early fusion, achieving a micro F1-score of .859 on the development data set.
For the text-based flood events detection, we use a transformer network (i.e.,
pretrained Italian BERT model) achieving an F1-score of .853. For image-based
solutions, we employed multiple deep models, pre-trained on both the ImageNet
and Places data sets, individually and combined in an early fusion, achieving
F1-scores of .816 and .805 on the development set, respectively. | [
"cs.CV"
] |
This paper considers a canonical clustering problem where one receives
unlabeled samples drawn from a balanced mixture of two elliptical distributions
and aims for a classifier to estimate the labels. Many popular methods
including PCA and k-means require individual components of the mixture to be
somewhat spherical, and perform poorly when they are stretched. To overcome
this issue, we propose a non-convex program seeking for an affine transform to
turn the data into a one-dimensional point cloud concentrating around -1 and 1,
after which clustering becomes easy. Our theoretical contributions are
two-fold: (1) we show that the non-convex loss function exhibits desirable
landscape properties as long as the sample size exceeds some constant multiple
of the dimension, and (2) we leverage this to prove that an efficient
first-order algorithm achieves near-optimal statistical precision even without
good initialization. We also propose a general methodology for multi-class
clustering tasks with flexible choices of feature transforms and loss
objectives. | [
"stat.ML",
"cs.LG",
"math.OC",
"math.ST",
"stat.ME",
"stat.TH",
"62H30"
] |
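The mechanism of the non-convex program described above, learning an affine map that collapses the mixture onto a one-dimensional point cloud around -1 and +1, can be illustrated with the toy gradient-descent sketch below. The quartic loss mean(((theta^T x + b)^2 - 1)^2) is an assumed stand-in for the paper's objective, used only to convey the idea.

```python
# Toy sketch: learn an affine projection that pushes samples toward -1 or +1,
# then cluster by the sign of the projection. The loss is an assumed illustration.
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 10
shift = np.zeros(d); shift[0] = 3.0
stretch = np.diag(np.linspace(1, 4, d))                       # elliptical (stretched) components
X = np.vstack([rng.normal(size=(n, d)) @ stretch,
               rng.normal(size=(n, d)) @ stretch + shift])

theta, b, lr = rng.normal(size=d) * 0.1, 0.0, 1e-3
for _ in range(2000):
    z = X @ theta + b                          # 1-D projection of every sample
    grad_z = 4.0 * (z ** 2 - 1.0) * z          # gradient of mean((z^2 - 1)^2)
    theta -= lr * X.T @ grad_z / len(X)
    b -= lr * grad_z.mean()

labels = (X @ theta + b > 0).astype(int)       # clustering by the sign of the projection
```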
Synthesizing high-quality person images with arbitrary poses is challenging.
In this paper, we propose a novel Multi-scale Conditional Generative
Adversarial Networks (MsCGAN), aiming to convert the input conditional person
image to a synthetic image of any given target pose, whose appearance and
texture are consistent with the input image. MsCGAN is a multi-scale
adversarial network consisting of two generators and two discriminators. One
generator transforms the conditional person image into a coarse image of the
target pose globally, and the other enhances the detailed quality of the
synthetic person image through a local reinforcement network. The outputs of
the two generators are then merged into a synthetic, discriminant and
high-resolution image. On the other hand, the synthetic image is downsampled to
multiple resolutions as the input to multi-scale discriminator networks. The
proposed multi-scale generators and discriminators handling different levels of
visual features can benefit to synthesizing high-resolution person images with
realistic appearance and texture. Experiments are conducted on the Market-1501
and DeepFashion datasets to evaluate the proposed model, and both qualitative
and quantitative results demonstrate the superior performance of the proposed
MsCGAN. | [
"cs.CV"
] |
Overfitting in deep learning has been the focus of a number of recent works,
yet its exact impact on the behavior of neural networks is not well understood.
This study analyzes overfitting by examining how the distribution of logits
alters in relation to how much the model overfits. Specifically, we find that
when training with few data samples, the distribution of logit activations when
processing unseen test samples of an under-represented class tends to shift
towards and even across the decision boundary, while the over-represented class
seems unaffected. In image segmentation, foreground samples are often heavily
under-represented. We observe that sensitivity of the model drops as a result
of overfitting, while precision remains mostly stable. Based on our analysis,
we derive asymmetric modifications of existing loss functions and regularizers
including a large margin loss, focal loss, adversarial training and mixup,
which specifically aim at reducing the shift observed when embedding unseen
samples of the under-represented class. We study the case of binary
segmentation of brain tumor core and show that our proposed simple
modifications lead to significantly improved segmentation performance over the
symmetric variants. | [
"cs.LG",
"cs.CV",
"stat.ML"
] |
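One way to picture the asymmetric loss modifications described above is the sketch below: a focal-style focusing term is applied only to the under-represented foreground class, leaving background pixels with plain cross-entropy. The exact asymmetric form used in the paper may differ; this is an assumed illustration.

```python
# Sketch of an asymmetric focal loss for binary segmentation: the focusing term is
# applied only to foreground pixels, so easy background pixels are not down-weighted.
import torch
import torch.nn.functional as F

def asymmetric_focal_loss(logits, targets, gamma=2.0):
    """logits, targets: tensors of shape (N, H, W); targets in {0, 1} with 1 = foreground."""
    p = torch.sigmoid(logits)
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    fg_weight = (1.0 - p) ** gamma                      # focal weighting, foreground only
    weight = torch.where(targets > 0.5, fg_weight, torch.ones_like(p))
    return (weight * bce).mean()

logits = torch.randn(2, 64, 64, requires_grad=True)
targets = (torch.rand(2, 64, 64) > 0.9).float()         # heavily under-represented foreground
loss = asymmetric_focal_loss(logits, targets)
loss.backward()
```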
Nonlinear estimation in robotics and vision is typically plagued with
outliers due to wrong data association, or to incorrect detections from signal
processing and machine learning methods. This paper introduces two unifying
formulations for outlier-robust estimation, Generalized Maximum Consensus
(G-MC) and Generalized Truncated Least Squares (G-TLS), and investigates
fundamental limits, practical algorithms, and applications. Our first
contribution is a proof that outlier-robust estimation is inapproximable: in
the worst case, it is impossible to (even approximately) find the set of
outliers, even with slower-than-polynomial-time algorithms (particularly,
algorithms running in quasi-polynomial time). As a second contribution, we
review and extend two general-purpose algorithms. The first, Adaptive Trimming
(ADAPT), is combinatorial, and is suitable for G-MC; the second, Graduated
Non-Convexity (GNC), is based on homotopy methods, and is suitable for G-TLS.
We extend ADAPT and GNC to the case where the user does not have prior
knowledge of the inlier-noise statistics (or the statistics may vary over time)
and is unable to guess a reasonable threshold to separate inliers from outliers
(as the one commonly used in RANSAC). We propose the first minimally tuned
algorithms for outlier rejection, that dynamically decide how to separate
inliers from outliers. Our third contribution is an evaluation of the proposed
algorithms on robot perception problems: mesh registration, image-based object
detection (shape alignment), and pose graph optimization. ADAPT and GNC execute
in real-time, are deterministic, outperform RANSAC, and are robust up to 80-90%
outliers. Their minimally tuned versions also compare favorably with the state
of the art, even though they do not rely on a noise bound for the inliers. | [
"cs.CV",
"cs.RO"
] |
Novel vision sensors such as thermal, hyperspectral, polarization, and event
cameras provide information that is not available from conventional intensity
cameras. An obstacle to using these sensors with current powerful deep neural
networks is the lack of large labeled training datasets. This paper proposes a
Network Grafting Algorithm (NGA), where a new front end network driven by
unconventional visual inputs replaces the front end network of a pretrained
deep network that processes intensity frames. The self-supervised training uses
only synchronously-recorded intensity frames and novel sensor data to maximize
feature similarity between the pretrained network and the grafted network. We
show that the enhanced grafted network reaches competitive average precision
(AP50) scores to the pretrained network on an object detection task using
thermal and event camera datasets, with no increase in inference costs.
Particularly, the grafted network driven by thermal frames showed a relative
improvement of 49.11% over the use of intensity frames. The grafted front end
has only 5--8% of the total parameters and can be trained in a few hours on a
single GPU equivalent to 5% of the time that would be needed to train the
entire object detector from labeled data. NGA allows new vision sensors to
capitalize on previously pretrained powerful deep models, saving on training
cost and widening a range of applications for novel sensors. | [
"cs.CV",
"cs.LG"
] |
Compared with shallow domain adaptation, recent progress in deep domain
adaptation has shown that it can achieve higher predictive performance and
stronger capacity to tackle structural data (e.g., image and sequential data).
The underlying idea of deep domain adaptation is to bridge the gap between
source and target domains in a joint space so that a supervised classifier
trained on labeled source data can be nicely transferred to the target domain.
This idea is certainly intuitive and powerful, however, limited theoretical
understandings have been developed to support its underpinning principle. In
this paper, we have provided a rigorous framework to explain why it is possible
to close the gap between the target and source domains in the joint space. More
specifically, we first study the loss incurred when performing transfer
learning from the source to the target domain. This provides a theory that
explains and generalizes existing work in deep domain adaptation which was
mainly empirical. This enables us to further explain why closing the gap in the
joint space can directly minimize the loss incurred for transfer learning
between the two domains. To our knowledge, this offers the first theoretical
result that characterizes a direct bound on the joint space and the gain of
transfer learning via deep domain adaptation. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
We present a fast and scalable algorithm to induce non-monotonic logic
programs from statistical learning models. We reduce the problem of search for
best clauses to instances of the High-Utility Itemset Mining (HUIM) problem. In
the HUIM problem, feature values and their importance are treated as
transactions and utilities respectively. We make use of TreeExplainer, a fast
and scalable implementation of the Explainable AI tool SHAP, to extract locally
important features and their weights from ensemble tree models. Our experiments
with UCI standard benchmarks suggest a significant improvement in terms of
classification evaluation metrics and running time of the training algorithm
compared to ALEPH, a state-of-the-art Inductive Logic Programming (ILP) system. | [
"cs.LG",
"cs.AI",
"cs.LO"
] |
Although current deep generative adversarial networks (GANs) can synthesize
high-quality (HQ) images, discovering novel GAN encoders for image
reconstruction remains desirable. When embedding images into latent space,
existing GAN encoders work well for aligned images (such as the human face),
but they do not adapt to more generalized GANs. To our knowledge, current
state-of-the-art GAN encoders do not have a proper encoder to reconstruct
high-fidelity images from most misaligned HQ synthesized images on different
GANs. Their performances are limited, especially on non-aligned and real
images. We propose a novel method (named MTV-TSA) to handle such problems.
Creating multi-type latent vectors (MTV) from latent space and two-scale
attentions (TSA) from images allows designing a set of encoders that can be
adaptable to a variety of pre-trained GANs. We generalize two sets of loss
functions to optimize the encoders. The designed encoders could make GANs
reconstruct higher fidelity images from most synthesized HQ images. In
addition, the proposed method can reconstruct real images well and process them
based on learned attribute directions. The designed encoders have unified
convolutional blocks and could match well in current GAN architectures (such as
PGGAN, StyleGANs, and BigGAN) by fine-tuning the corresponding normalization
layers and the last block. Such well-designed encoders can also be trained to
converge more quickly. | [
"cs.CV",
"eess.IV"
] |
Machine learning and in particular deep learning algorithms are the emerging
approaches to data analysis. These techniques have transformed traditional data
mining-based analysis radically into a learning-based model in which existing
data sets along with their cluster labels (i.e., train set) are learned to
build a supervised learning model and predict the cluster labels of unseen data
(i.e., test set). In particular, deep learning techniques are capable of
capturing and learning hidden features in a given data set and thus building a
more accurate prediction model for clustering and labeling problem. However,
the major problem is that time series data are often unlabeled and thus
supervised learning-based deep learning algorithms cannot be directly adapted
to solve the clustering problems for these special and complex types of data
sets. To address this problem, this paper introduces a two-stage method for
clustering time series data. First, a novel technique is introduced to utilize
the characteristics (e.g., volatility) of given time series data in order to
create labels and thus be able to transform the problem from unsupervised
learning into supervised learning. Second, an autoencoder-based deep learning
model is built to learn and model both known and hidden features of time series
data along with their created labels to predict the labels of unseen time
series data. The paper reports a case study in which financial and stock time
series data of 70 selected stock indices are clustered into distinct groups
using the introduced two-stage procedure. The results show that the proposed
procedure is capable of achieving 87.5\% accuracy in clustering and predicting
the labels for unseen time series data. | [
"cs.LG",
"stat.ML"
] |
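A compact sketch of the two-stage procedure is given below: stage one derives labels from a simple series characteristic (volatility terciles, an assumed choice), and stage two trains an autoencoder with an auxiliary classification head on its bottleneck to predict those labels for new series. Sizes and hyperparameters are illustrative stand-ins.

```python
# Sketch of the two-stage idea: (1) create labels from a series characteristic,
# (2) train an autoencoder whose code feeds a classifier predicting those labels.
import numpy as np
import torch
import torch.nn as nn

rng = np.random.default_rng(0)
series = rng.normal(size=(70, 250)).cumsum(axis=1)          # stand-in for 70 price series

# Stage 1: volatility of returns -> tercile labels (turns clustering into classification)
vol = np.diff(series, axis=1).std(axis=1)
labels = np.digitize(vol, np.quantile(vol, [1 / 3, 2 / 3]))

# Stage 2: autoencoder with an auxiliary classification head on the bottleneck
x = torch.tensor(series, dtype=torch.float32)
y = torch.tensor(labels)
enc = nn.Sequential(nn.Linear(250, 32), nn.ReLU(), nn.Linear(32, 8))
dec = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 250))
head = nn.Linear(8, 3)
opt = torch.optim.Adam([*enc.parameters(), *dec.parameters(), *head.parameters()], lr=1e-3)
for _ in range(200):
    z = enc(x)
    loss = nn.functional.mse_loss(dec(z), x) + nn.functional.cross_entropy(head(z), y)
    opt.zero_grad(); loss.backward(); opt.step()
pred = head(enc(x)).argmax(1)                                # labels for (unseen) series
```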
Electroencephalograph (EEG) emotion recognition is a significant task in the
brain-computer interface field. Although many deep learning methods are
proposed recently, it is still challenging to make full use of the information
contained in different domains of EEG signals. In this paper, we present a
novel method, called four-dimensional attention-based neural network (4D-aNN)
for EEG emotion recognition. First, raw EEG signals are transformed into 4D
spatial-spectral-temporal representations. Then, the proposed 4D-aNN adopts
spectral and spatial attention mechanisms to adaptively assign the weights of
different brain regions and frequency bands, and a convolutional neural network
(CNN) is utilized to deal with the spectral and spatial information of the 4D
representations. Moreover, a temporal attention mechanism is integrated into a
bidirectional Long Short-Term Memory (LSTM) to explore temporal dependencies of
the 4D representations. Our model achieves state-of-the-art performance on the
SEED dataset under intra-subject splitting. The experimental results have shown
the effectiveness of the attention mechanisms in different domains for EEG
emotion recognition. | [
"cs.LG"
] |
Text recognition has attracted considerable research interests because of its
various applications. The cutting-edge text recognition methods are based on
attention mechanisms. However, most attention-based methods suffer from
serious alignment problems due to their recurrent alignment operation, where the
alignment relies on historical decoding results. To remedy this issue, we
propose a decoupled attention network (DAN), which decouples the alignment
operation from using historical decoding results. DAN is an effective, flexible
and robust end-to-end text recognizer, which consists of three components: 1) a
feature encoder that extracts visual features from the input image; 2) a
convolutional alignment module that performs the alignment operation based on
visual features from the encoder; and 3) a decoupled text decoder that makes
final prediction by jointly using the feature map and attention maps.
Experimental results show that DAN achieves state-of-the-art performance on
multiple text recognition tasks, including offline handwritten text recognition
and regular/irregular scene text recognition. | [
"cs.CV"
] |
This paper proposes a simple self-supervised approach for learning a
representation for visual correspondence from raw video. We cast correspondence
as prediction of links in a space-time graph constructed from video. In this
graph, the nodes are patches sampled from each frame, and nodes adjacent in
time can share a directed edge. We learn a representation in which pairwise
similarity defines transition probability of a random walk, so that long-range
correspondence is computed as a walk along the graph. We optimize the
representation to place high probability along paths of similarity. Targets for
learning are formed without supervision, by cycle-consistency: the objective is
to maximize the likelihood of returning to the initial node when walking along
a graph constructed from a palindrome of frames. Thus, a single path-level
constraint implicitly supervises chains of intermediate comparisons. When used
as a similarity metric without adaptation, the learned representation
outperforms the self-supervised state-of-the-art on label propagation tasks
involving objects, semantic parts, and pose. Moreover, we demonstrate that a
technique we call edge dropout, as well as self-supervised adaptation at
test-time, further improve transfer for object-centric correspondence. | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
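The walk-on-a-palindrome objective can be condensed into the sketch below: pairwise similarities between patch embeddings of adjacent frames define row-stochastic transition matrices, the matrices are chained along a palindrome of frames, and a negative log-likelihood asks each node to return to itself. The embeddings here are random stand-ins for the learned encoder, and the temperature is an assumed value.

```python
# Minimal sketch of the cycle-consistent random-walk objective on a space-time graph.
import torch
import torch.nn.functional as F

def transition(a, b, tau=0.07):
    # a: (N, D) patch embeddings of frame t, b: (M, D) of frame t+1
    sim = F.normalize(a, dim=1) @ F.normalize(b, dim=1).T
    return F.softmax(sim / tau, dim=1)          # row-stochastic transition matrix

frames = [torch.randn(16, 128, requires_grad=True) for _ in range(4)]
palindrome = frames + frames[-2::-1]            # t0 .. tT .. t0

walk = torch.eye(16)
for a, b in zip(palindrome[:-1], palindrome[1:]):
    walk = walk @ transition(a, b)              # chain the transitions along the palindrome

targets = torch.arange(16)                      # each start node should return to itself
loss = F.nll_loss(torch.log(walk + 1e-8), targets)
loss.backward()
```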
Detecting pedestrians, especially under heavy occlusions, is a challenging
computer vision problem with numerous real-world applications. This paper
introduces a novel approach, termed as PSC-Net, for occluded pedestrian
detection. The proposed PSC-Net contains a dedicated module that is designed to
explicitly capture both inter and intra-part co-occurrence information of
different pedestrian body parts through a Graph Convolutional Network (GCN).
Both inter and intra-part co-occurrence information contribute towards
improving the feature representation for handling varying levels of occlusion,
ranging from partial to severe occlusions. Our PSC-Net exploits the topological
structure of pedestrians and does not require part-based annotations or
additional visible bounding-box (VBB) information to learn part spatial
co-occurrence. Comprehensive experiments are performed on two challenging
datasets: CityPersons and Caltech datasets. The proposed PSC-Net achieves
state-of-the-art detection performance on both. On the heavy occluded
(\textbf{HO}) set of the CityPersons test set, our PSC-Net obtains an absolute gain
of 4.0\% in terms of log-average miss rate over the state-of-the-art with same
backbone, input scale and without using additional VBB supervision. Further,
PSC-Net improves the state-of-the-art from 37.9 to 34.8 in terms of log-average
miss rate on Caltech (\textbf{HO}) test set. | [
"cs.CV"
] |
We present GraphTSNE, a novel visualization technique for graph-structured
data based on t-SNE. The growing interest in graph-structured data increases
the importance of gaining human insight into such datasets by means of
visualization. Among the most popular visualization techniques, classical t-SNE
is not suitable on such datasets because it has no mechanism to make use of
information from the graph structure. On the other hand, visualization
techniques which operate on graphs, such as Laplacian Eigenmaps and tsNET, have
no mechanism to make use of information from node features. Our proposed method
GraphTSNE produces visualizations which account for both graph structure and
node features. It is based on scalable and unsupervised training of a graph
convolutional network on a modified t-SNE loss. By assembling a suite of
evaluation metrics, we demonstrate that our method produces desirable
visualizations on three benchmark datasets. | [
"cs.LG",
"stat.ML"
] |
Object proposal is essential for current state-of-the-art object detection
pipelines. However, the existing proposal methods generally fail in producing
results with satisfying localization accuracy. The case is even worse for small
objects which however are quite common in practice. In this paper we propose a
novel Scale-aware Pixel-wise Object Proposal (SPOP) network to tackle the
challenges. The SPOP network can generate proposals with high recall rate and
average best overlap (ABO), even for small objects. In particular, in order to
improve the localization accuracy, a fully convolutional network is employed
which predicts locations of object proposals for each pixel. The produced
ensemble of pixel-wise object proposals enhances the chance of hitting the
object significantly without incurring heavy extra computational cost. To solve
the challenge of localizing objects at small scale, two localization networks
which are specialized for localizing objects with different scales are
introduced, following the divide-and-conquer philosophy. Location outputs of
these two networks are then adaptively combined to generate the final proposals
by a large-/small-size weighting network. Extensive evaluations on PASCAL VOC
2007 show the SPOP network is superior over the state-of-the-art models. The
high-quality proposals from SPOP network also significantly improve the mean
average precision (mAP) of object detection with Fast-RCNN framework. Finally,
the SPOP network (trained on PASCAL VOC) shows great generalization performance
when testing it on ILSVRC 2013 validation set. | [
"cs.CV"
] |
Visual emotion analysis (VEA) has attracted great attention recently, due to
the increasing tendency of expressing and understanding emotions through images
on social networks. Different from traditional vision tasks, VEA is inherently
more challenging since it involves a much higher level of complexity and
ambiguity in human cognitive process. Most of the existing methods adopt deep
learning techniques to extract general features from the whole image,
disregarding the specific features evoked by various emotional stimuli.
Inspired by the \textit{Stimuli-Organism-Response (S-O-R)} emotion model in
psychological theory, we propose a stimuli-aware VEA method consisting of
three stages, namely stimuli selection (S), feature extraction (O) and emotion
prediction (R). First, specific emotional stimuli (i.e., color, object, face)
are selected from images by employing off-the-shelf tools. To the best of
our knowledge, this is the first time a stimuli selection process has been
introduced into VEA in an end-to-end network. Then, we design three specific networks, i.e.,
Global-Net, Semantic-Net and Expression-Net, to extract distinct emotional
features from different stimuli simultaneously. Finally, benefiting from the
inherent structure of Mikel's wheel, we design a novel hierarchical
cross-entropy loss to distinguish hard false examples from easy ones in an
emotion-specific manner. Experiments demonstrate that the proposed method
consistently outperforms the state-of-the-art approaches on four public visual
emotion datasets. Ablation study and visualizations further prove the validity
and interpretability of our method. | [
"cs.CV",
"cs.AI"
] |
Diabetic retinopathy (DR) and age related macular degeneration (ARMD) are
among the major causes of visual impairment worldwide. DR is mainly
characterized by red spots, namely microaneurysms and bright lesions,
specifically exudates whereas ARMD is mainly identified by tiny yellow or white
deposits called drusen. Since exudates might be the only manifestation of
early diabetic retinopathy, there is an increased demand for automatic
retinopathy diagnosis. Exudates and drusen may share similar appearances, so
discriminating between them is of interest to enhance screening performance. In
this research, we investigate the role of the bag-of-words approach in the
automatic diagnosis of diabetic retinopathy. We propose single-based
and multiple-based methods for the construction of the visual dictionary, by
combining the histogram of word occurrences from each dictionary and building a
single histogram. The introduced approach is evaluated for automatic diagnosis
of normal and abnormal color fundus images with bright lesions. This approach
has been implemented on 430 fundus images, including six publicly available
datasets, in addition to one local dataset. The mean accuracies reported are
97.2% and 99.77% for the single-based and multiple-based dictionaries, respectively. | [
"cs.CV"
] |
A broad class of unsupervised deep learning methods such as Generative
Adversarial Networks (GANs) involve training of overparameterized models where
the number of parameters of the model exceeds a certain threshold. A large body
of work in supervised learning has shown the importance of model
overparameterization in the convergence of the gradient descent (GD) to
globally optimal solutions. In contrast, the unsupervised setting and GANs in
particular involve non-convex concave mini-max optimization problems that are
often trained using Gradient Descent/Ascent (GDA). The role and benefits of
model overparameterization in the convergence of GDA to a global saddle point
in non-convex concave problems is far less understood. In this work, we present
a comprehensive analysis of the importance of model overparameterization in
GANs both theoretically and empirically. We theoretically show that in an
overparameterized GAN model with a $1$-layer neural network generator and a
linear discriminator, GDA converges to a global saddle point of the underlying
non-convex concave min-max problem. To the best of our knowledge, this is the
first result for global convergence of GDA in such settings. Our theory is
based on a more general result that holds for a broader class of nonlinear
generators and discriminators that obey certain assumptions (including deeper
generators and random feature discriminators). We also empirically study the
role of model overparameterization in GANs using several large-scale
experiments on CIFAR-10 and Celeb-A datasets. Our experiments show that
overparameterization improves the quality of generated samples across various
model architectures and datasets. Remarkably, we observe that
overparameterization leads to faster and more stable convergence behavior of
GDA across the board. | [
"cs.LG",
"stat.ML"
] |
It is becoming increasingly clear that many machine learning classifiers are
vulnerable to adversarial examples. In attempting to explain the origin of
adversarial examples, previous studies have typically focused on the fact that
neural networks operate on high dimensional data, they overfit, or they are too
linear. Here we argue that the origin of adversarial examples is primarily due
to an inherent uncertainty that neural networks have about their predictions.
We show that the functional form of this uncertainty is independent of
architecture, dataset, and training protocol; and depends only on the
statistics of the logit differences of the network, which do not change
significantly during training. This leads to adversarial error having a
universal scaling, as a power-law, with respect to the size of the adversarial
perturbation. We show that this universality holds for a broad range of
datasets (MNIST, CIFAR10, ImageNet, and random data), models (including
state-of-the-art deep networks, linear models, adversarially trained networks,
and networks trained on randomly shuffled labels), and attacks (FGSM, step
l.l., PGD). Motivated by these results, we study the effects of reducing
prediction entropy on adversarial robustness. Finally, we study the effect of
network architectures on adversarial sensitivity. To do this, we use neural
architecture search with reinforcement learning to find adversarially robust
architectures on CIFAR10. Our resulting architecture is more robust to white
\emph{and} black box attacks compared to previous attempts. | [
"stat.ML",
"cs.LG"
] |
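The power-law claim above concerns how adversarial error grows with perturbation size; the sketch below shows the kind of measurement involved, using FGSM on a toy classifier. The model and data are placeholders; only the procedure of sweeping epsilon and recording the fraction of flipped predictions is illustrated.

```python
# Sketch of measuring adversarial error as a function of FGSM perturbation size.
# The toy linear model and random data are placeholders for a trained network.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(256, 1, 28, 28)
y = torch.randint(0, 10, (256,))

def fgsm_error(eps):
    x_adv = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    with torch.no_grad():
        x_pert = (x_adv + eps * x_adv.grad.sign()).clamp(0, 1)
        clean_correct = model(x).argmax(1) == y
        adv_wrong = model(x_pert).argmax(1) != y
        # adversarial error: fraction of initially-correct samples that get flipped
        return (clean_correct & adv_wrong).float().sum() / clean_correct.float().sum()

for eps in [0.01, 0.02, 0.05, 0.1]:
    print(eps, float(fgsm_error(eps)))      # plot on log-log axes to inspect the scaling
```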
Since its inception, the neural estimation of mutual information (MI) has
demonstrated the empirical success of modeling expected dependency between
high-dimensional random variables. However, MI is an aggregate statistic and
cannot be used to measure point-wise dependency between different events. In
this work, instead of estimating the expected dependency, we focus on
estimating point-wise dependency (PD), which quantitatively measures how likely
two outcomes co-occur. We show that we can naturally obtain PD when we are
optimizing MI neural variational bounds. However, optimizing these bounds is
challenging due to its large variance in practice. To address this issue, we
develop two methods (free of optimizing MI variational bounds): Probabilistic
Classifier and Density-Ratio Fitting. We demonstrate the effectiveness of our
approaches in 1) MI estimation, 2) self-supervised representation learning, and
3) cross-modal retrieval task. | [
"cs.LG",
"stat.ME",
"stat.ML"
] |
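The probabilistic-classifier route to point-wise dependency mentioned above can be sketched as follows: a classifier is trained to separate true pairs drawn from the joint distribution from shuffled pairs drawn from the product of marginals, and its odds estimate the ratio p(x, y) / (p(x) p(y)). The synthetic data and choice of classifier are illustrative assumptions.

```python
# Sketch of estimating point-wise dependency (PD) with a probabilistic classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d = 4000, 5
x = rng.normal(size=(n, d))
y = x + 0.5 * rng.normal(size=(n, d))                  # correlated pairs

joint = np.hstack([x, y])                              # positives: true pairs ~ P(X, Y)
product = np.hstack([x, y[rng.permutation(n)]])        # negatives: shuffled pairs ~ P(X)P(Y)
features = np.vstack([joint, product])
labels = np.r_[np.ones(n), np.zeros(n)]

clf = LogisticRegression(max_iter=1000).fit(features, labels)

def pointwise_dependency(xi, yi):
    p = clf.predict_proba(np.hstack([xi, yi]).reshape(1, -1))[0, 1]
    return p / (1.0 - p)                               # estimated p(x, y) / (p(x) p(y))

print(pointwise_dependency(x[0], y[0]))                # large when the pair co-occurs often
```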
Style analysis of artwork in computer vision predominantly focuses on
achieving results in target image generation through optimizing understanding
of low level style characteristics such as brush strokes. However,
fundamentally different techniques are required to computationally understand
and control qualities of art which incorporate higher level style
characteristics. We study style representations learned by neural network
architectures incorporating these higher level characteristics. We find
variation in learned style features from incorporating triplets annotated by
art historians as supervision for style similarity. Networks leveraging
statistical priors or pretrained on photo collections such as ImageNet can also
derive useful visual representations of artwork. We align the impact of these
expert human knowledge, statistical, and photorealism priors on style
representations with art historical research and use these representations to
perform zero-shot classification of artists. To facilitate this work, we also
present the first large-scale dataset of portraits prepared for computational
analysis. | [
"cs.CV"
] |
Fine-grained object recognition concerns the identification of the type of an
object among a large number of closely related sub-categories. Multisource data
analysis, that aims to leverage the complementary spectral, spatial, and
structural information embedded in different sources, is a promising direction
towards solving the fine-grained recognition problem that involves low
between-class variance, small training set sizes for rare classes, and class
imbalance. However, the common assumption of co-registered sources may not hold
at the pixel level for small objects of interest. We present a novel
methodology that aims to simultaneously learn the alignment of multisource data
and the classification model in a unified framework. The proposed method
involves a multisource region attention network that computes per-source
feature representations, assigns attention scores to candidate regions sampled
around the expected object locations by using these representations, and
classifies the objects by using an attention-driven multisource representation
that combines the feature representations and the attention scores from all
sources. All components of the model are realized using deep neural networks
and are learned in an end-to-end fashion. Experiments using RGB, multispectral,
and LiDAR elevation data for classification of street trees showed that our
approach achieved 64.2% and 47.3% accuracies for the 18-class and 40-class
settings, respectively, which correspond to 13% and 14.3% improvement relative
to the commonly used feature concatenation approach from multiple sources. | [
"cs.CV"
] |
Reinforcement Learning (RL) has achieved impressive performance in many
complex environments due to the integration with Deep Neural Networks (DNNs).
At the same time, Genetic Algorithms (GAs), often seen as a competing approach
to RL, had limited success in scaling up to the DNNs required to solve
challenging tasks. Contrary to this dichotomic view, in the physical world,
evolution and learning are complementary processes that continuously interact.
The recently proposed Evolutionary Reinforcement Learning (ERL) framework has
demonstrated mutual benefits to performance when combining the two methods.
However, ERL has not fully addressed the scalability problem of GAs. In this
paper, we show that this problem is rooted in an unfortunate combination of a
simple genetic encoding for DNNs and the use of traditional
biologically-inspired variation operators. When applied to these encodings, the
standard operators are destructive and cause catastrophic forgetting of the
traits the networks acquired. We propose a novel algorithm called Proximal
Distilled Evolutionary Reinforcement Learning (PDERL) that is characterised by
a hierarchical integration between evolution and learning. The main innovation
of PDERL is the use of learning-based variation operators that compensate for
the simplicity of the genetic representation. Unlike traditional operators, our
proposals meet the functional requirements of variation operators when applied
on directly-encoded DNNs. We evaluate PDERL in five robot locomotion settings
from the OpenAI gym. Our method outperforms ERL, as well as two
state-of-the-art RL algorithms, PPO and TD3, in all tested environments. | [
"cs.LG",
"cs.NE",
"stat.ML"
] |
Turing machines and decision trees have developed independently for a long
time. With the recent development of differentiable models, there is an
intersection between them. The neural Turing machine (NTM) opened the door to memory
networks. It uses a differentiable attention mechanism to read and write an external
memory bank. The differentiable forest brings differentiable properties to the
classical decision tree. In this short note, we show the deep connection
between these two models, namely that the differentiable forest is a special case of
the NTM: a differentiable forest is effectively a decision-tree-based neural Turing
machine. Based on this deep connection, we propose a response-augmented
differential forest (RaDF). The controller of RaDF is a differentiable forest, and
the external memory of RaDF consists of response vectors which are read and written by
the leaf nodes. | [
"cs.LG",
"cs.AI",
"cs.NE"
] |
Transformers are widely used in natural language processing due to their
ability to model longer-term dependencies in text. Although these models
achieve state-of-the-art performance for many language related tasks, their
applicability outside of the natural language processing field has been
minimal. In this work, we propose the use of transformer models for the
prediction of dynamical systems representative of physical phenomena. The use
of Koopman based embeddings provide a unique and powerful method for projecting
any dynamical system into a vector representation which can then be predicted
by a transformer model. The proposed model is able to accurately predict
various dynamical systems and outperform classical methods that are commonly
used in the scientific machine learning literature. | [
"cs.LG",
"physics.comp-ph"
] |
This paper presents a general graph representation learning framework called
DeepGL for learning deep node and edge representations from large (attributed)
graphs. In particular, DeepGL begins by deriving a set of base features (e.g.,
graphlet features) and automatically learns a multi-layered hierarchical graph
representation where each successive layer leverages the output from the
previous layer to learn features of a higher-order. Contrary to previous work,
DeepGL learns relational functions (each representing a feature) that
generalize across networks and are therefore useful for graph-based transfer
learning tasks. Moreover, DeepGL naturally supports attributed graphs, learns
interpretable features, and is space-efficient (by learning sparse feature
vectors). In addition, DeepGL is expressive, flexible with many interchangeable
components, efficient with a time complexity of $\mathcal{O}(|E|)$, and
scalable for large networks via an efficient parallel implementation. Compared
with the state-of-the-art method, DeepGL is (1) effective for across-network
transfer learning tasks and attributed graph representation learning, (2)
space-efficient requiring up to 6x less memory, (3) fast with up to 182x
speedup in runtime performance, and (4) accurate with an average improvement of
20% or more on many learning tasks. | [
"stat.ML",
"cs.LG",
"cs.SI"
] |
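The layer-wise relational feature construction described above can be sketched as follows: start from base graph features and repeatedly apply relational operators (mean, sum, max over neighbours) to the previous layer's features, concatenating everything into the final node representation. The choice of base features and the absence of feature pruning are simplifying assumptions.

```python
# Sketch of layer-wise relational feature learning over a graph.
import numpy as np
import networkx as nx

G = nx.karate_club_graph()
A = nx.to_numpy_array(G)
deg = A.sum(1, keepdims=True)
tri = np.array([nx.triangles(G, n) for n in G.nodes()], dtype=float).reshape(-1, 1)
X = np.hstack([deg, tri])                                   # layer-0 base features

def relational_layer(X, A):
    neigh = [np.where(A[i] > 0)[0] for i in range(len(A))]
    ops = []
    for op in (np.mean, np.sum, np.max):                    # relational operators
        ops.append(np.vstack([op(X[nb], axis=0) for nb in neigh]))
    return np.hstack(ops)                                   # higher-order features

layers = [X]
for _ in range(2):                                          # two higher-order layers
    layers.append(relational_layer(layers[-1], A))
node_repr = np.hstack(layers)                                # final node representation
print(node_repr.shape)
```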
Standardized benchmarks are crucial for the majority of computer vision
applications. Although leaderboards and ranking tables should not be
over-claimed, benchmarks often provide the most objective measure of
performance and are therefore important guides for research. The benchmark for
Multiple Object Tracking, MOTChallenge, was launched with the goal to establish
a standardized evaluation of multiple object tracking methods. The challenge
focuses on multiple people tracking, since pedestrians are well studied in the
tracking community, and precise tracking and detection has high practical
relevance. Since the first release, MOT15, MOT16, and MOT17 have tremendously
contributed to the community by introducing a clean dataset and precise
framework to benchmark multi-object trackers. In this paper, we present our
MOT20 benchmark, consisting of 8 new sequences depicting very crowded
challenging scenes. The benchmark was first presented at the 4th BMTT MOT
Challenge Workshop at the Computer Vision and Pattern Recognition Conference
(CVPR) 2019, and gives the chance to evaluate state-of-the-art methods for
multiple object tracking when handling extremely crowded scenarios. | [
"cs.CV"
] |
We present our approach to unsupervised domain adaptation for single-stage
object detectors on top-view grid maps in automated driving scenarios. Our goal
is to train a robust object detector on grid maps generated from custom sensor
data and setups. We first introduce a single-stage object detector for grid
maps based on RetinaNet. We then extend our model by image- and instance-level
domain classifiers at different feature pyramid levels which are trained in an
adversarial manner. This allows us to train robust object detectors for
unlabeled domains. We evaluate our approach quantitatively on the nuScenes and
KITTI benchmarks and present qualitative domain adaptation results for
unlabeled measurements recorded by our experimental vehicle. Our results
demonstrate that object detection accuracy for unlabeled domains can be
improved by applying our domain adaptation strategy. | [
"cs.CV"
] |
We build a deep learning model to detect and classify heart disease using
X-ray images. We collect data from several hospitals and public datasets. After
preprocessing we obtain 3026 images covering the disease types VSD, ASD, TOF and normal
controls. The main problem we have to solve is to enable the network to
accurately learn the characteristics of the heart, ensuring the reliability of
the network while increasing accuracy. By learning the doctor's diagnostic
experience, labeling the images and using tools to extract masks of the heart
region, we train a U-net to generate a mask that provides additional attention. This forces
the model to focus on the characteristics of the heart region and obtain more
reliable results. | [
"cs.CV"
] |
Diffusion has shown great success in improving accuracy of unsupervised image
retrieval systems by utilizing high-order structures of image manifold.
However, existing diffusion methods suffer from three major limitations: 1)
they usually rely on local structures without considering global manifold
information; 2) they focus on improving pair-wise similarities within existing
input images transductively, while lacking the flexibility to learn
representations for novel unseen instances inductively; 3) they fail to scale
to large datasets due to the prohibitive memory consumption and computational
burden of intrinsic high-order operations on the whole graph. In this
paper, to address these limitations, we propose a novel method, Graph Diffusion
Networks (GRAD-Net), that adopts graph neural networks (GNNs), a novel variant
of deep learning algorithms on irregular graphs. GRAD-Net learns semantic
representations by exploiting both local and global structures of image
manifold in an unsupervised fashion. By utilizing sparse coding techniques,
GRAD-Net not only preserves global information on the image manifold, but also
enables scalable training and efficient querying. Experiments on several large
benchmark datasets demonstrate effectiveness of our method over
state-of-the-art diffusion algorithms for unsupervised image retrieval. | [
"cs.CV"
] |
Most recent person re-identification approaches are based on the use of deep
convolutional neural networks (CNNs). These networks, although effective in
multiple tasks such as classification or object detection, tend to focus on the
most discriminative part of an object rather than retrieving all its relevant
features. This behavior penalizes the performance of a CNN for the
re-identification task, since it should identify diverse and fine grained
features. It is then essential to make the network learn a wide variety of
finer characteristics in order to make the re-identification process of people
effective and robust to finer changes. In this article, we introduce Deep
Miner, a method that allows CNNs to "mine" richer and more diverse features
about people for their re-identification. Deep Miner is specifically composed
of three types of branches: a Global branch (G-branch), a Local branch
(L-branch) and an Input-Erased branch (IE-branch). G-branch corresponds to the
initial backbone which predicts global characteristics, while L-branch
retrieves part level resolution features. The IE-branch for its part, receives
partially suppressed feature maps as input thereby allowing the network to
"mine" new features (those ignored by G-branch) as output. For this special
purpose, a dedicated suppression procedure for identifying and removing
features within a given CNN is introduced. This suppression procedure has the
major benefit of being simple, while it produces a model that significantly
outperforms state-of-the-art (SOTA) re-identification methods. Specifically, we
conduct experiments on four standard person re-identification benchmarks and
witness an absolute performance gain up to 6.5% mAP compared to SOTA. | [
"cs.CV"
] |
Computing at the edge is increasingly important since a massive amount of
data is generated. This poses challenges in transporting all that data to the
remote data centers and cloud, where they can be processed and analyzed. On the
other hand, harnessing the edge data is essential for offering data-driven and
machine learning-based applications, if the challenges, such as device
capabilities, connectivity, and heterogeneity can be mitigated. Machine
learning applications are very compute-intensive and require processing of
large amounts of data. However, edge devices are often resource-constrained in
terms of compute, power, storage, and network connectivity. This limits their
ability to run state-of-the-art deep neural network (DNN) models, which are
becoming larger and more complex, both efficiently and accurately.
This paper proposes a novel offloading mechanism by leveraging installed-base
on-premises (edge) computational resources. The proposed mechanism allows the
edge devices to offload heavy and compute-intensive workloads to edge nodes
instead of using remote cloud. Our offloading mechanism has been prototyped and
tested with state-of-the art person and object detection DNN models for mobile
robots and video surveillance applications. The performance shows a significant
gain compared to cloud-based offloading strategies in terms of accuracy and
latency. | [
"cs.LG",
"cs.DC",
"cs.RO"
] |
Generative adversarial networks (GANs), famous for their capability of learning
complex underlying data distributions, are however known to be tricky to
train, which often results in mode collapse or performance
deterioration. Current approaches to dealing with GANs' issues mostly utilize
practical training techniques for the purpose of regularization, which on
the other hand undermines the convergence and theoretical soundness of GAN. In
this paper, we propose to stabilize GAN training via a novel particle-based
variational inference -- Langevin Stein variational gradient descent (LSVGD),
which not only inherits the flexibility and efficiency of original SVGD but
aims to address its instability issues by incorporating an extra disturbance
into the update dynamics. We further demonstrate that by properly adjusting the
noise variance, LSVGD simulates a Langevin process whose stationary
distribution is exactly the target distribution. We also show that LSVGD
dynamics has an implicit regularization which is able to enhance particles'
spread-out and diversity. At last we present an efficient way of applying
particle-based variational inference on a general GAN training procedure no
matter what loss function is adopted. Experimental results on one synthetic
dataset and three popular benchmark datasets -- Cifar-10, Tiny-ImageNet and
CelebA validate that LSVGD can remarkably improve the performance and stability
of various GAN models. | [
"cs.LG",
"cs.CV",
"stat.ML"
] |
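A minimal sketch of an SVGD-style particle update with an added Gaussian disturbance, in the spirit of the Langevin Stein variational gradient descent described above, is shown below on a two-dimensional Gaussian target. The placement and variance of the noise term, the kernel bandwidth, and the step size are assumptions for illustration rather than the paper's exact dynamics.

```python
# Sketch of an SVGD update with an extra Gaussian disturbance (LSVGD-flavoured).
import numpy as np

rng = np.random.default_rng(0)

def grad_log_p(x, mean=np.array([2.0, -1.0])):
    return -(x - mean)                                   # score of a unit-variance Gaussian

def rbf(x, h=1.0):
    d2 = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * h ** 2))
    gradK = (x[:, None, :] - x[None, :, :]) * K[:, :, None] / h ** 2  # grad_{x_j} k(x_j, x_i)
    return K, gradK

particles = rng.normal(size=(100, 2))
eps, noise_std = 0.2, 0.05
for _ in range(300):
    K, gradK = rbf(particles)
    # attraction toward high density plus kernel repulsion, then an added disturbance
    phi = (K @ grad_log_p(particles) + gradK.sum(1)) / len(particles)
    particles += eps * phi + noise_std * rng.normal(size=particles.shape)

print(particles.mean(0))                                 # should drift toward [2, -1]
```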
To enhance the ability of neural networks to extract local point cloud
features and improve their quality, in this paper, we propose a multiscale
graph generation method and a self-adaptive graph convolution method. First, we
propose a multiscale graph generation method for point clouds. This approach
transforms point clouds into a structured multiscale graph form that supports
multiscale analysis of point clouds in the scale space and can obtain the
dimensional features of point cloud data at different scales, thus making it
easier to obtain the best point cloud features. Because traditional
convolutional neural networks are not applicable to graph data with irregular
vertex neighborhoods, this paper presents a self-adaptive graph convolution
kernel that uses the Chebyshev polynomial to fit an irregular convolution
filter based on the theory of optimal approximation. We then adopt max pooling
to aggregate the features of the different-scale graphs and generate the final
point cloud features. In experiments conducted on three widely used public
datasets, the proposed method significantly outperforms other state-of-the-art
models, demonstrating its effectiveness and generalizability. | [
"cs.CV"
] |
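
As a point of reference for the Chebyshev-polynomial filtering mentioned above, the following is a generic ChebNet-style graph convolution in numpy, assuming a symmetric normalized Laplacian and a fixed polynomial order K. It does not implement the paper's self-adaptive kernel or its multiscale graph construction; all names and shapes are illustrative.

import numpy as np

def normalized_laplacian(adj):
    # L = I - D^{-1/2} A D^{-1/2}
    deg = adj.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(deg, 1e-12))
    return np.eye(adj.shape[0]) - (adj * d_inv_sqrt[:, None]) * d_inv_sqrt[None, :]

def cheb_conv(adj, x, weights, lambda_max=2.0):
    # Chebyshev-polynomial graph convolution: y = sum_k T_k(L_hat) x W_k
    # adj: (n, n) adjacency, x: (n, f_in) features, weights: list of (f_in, f_out) arrays.
    n = adj.shape[0]
    L = normalized_laplacian(adj)
    L_hat = (2.0 / lambda_max) * L - np.eye(n)   # rescale the spectrum to [-1, 1]
    t_prev, t_curr = x, L_hat @ x                # T_0(L)x = x, T_1(L)x = Lx
    out = t_prev @ weights[0]
    if len(weights) > 1:
        out += t_curr @ weights[1]
    for k in range(2, len(weights)):
        t_next = 2.0 * (L_hat @ t_curr) - t_prev  # Chebyshev recurrence
        out += t_next @ weights[k]
        t_prev, t_curr = t_curr, t_next
    return out

# Toy usage on a random graph with an order-3 filter.
rng = np.random.default_rng(0)
A = (rng.random((10, 10)) < 0.3).astype(float)
A = np.triu(A, 1); A = A + A.T                    # symmetric, no self-loops
X = rng.standard_normal((10, 4))
W = [rng.standard_normal((4, 8)) * 0.1 for _ in range(3)]
print(cheb_conv(A, X, W).shape)                   # (10, 8)
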
Probabilistic models are often trained by maximum likelihood, which
corresponds to minimizing a specific f-divergence between the model and data
distribution. In light of recent successes in training Generative Adversarial
Networks, alternative non-likelihood training criteria have been proposed.
Whilst not necessarily statistically efficient, these alternatives may better
match user requirements such as sharp image generation. A general variational
method for training probabilistic latent variable models using maximum
likelihood is well established; however, how to train latent variable models
using other f-divergences is comparatively unknown. We discuss a variational
approach that, when combined with the recently introduced Spread Divergence,
can be applied to train a large class of latent variable models using any
f-divergence. | [
"stat.ML",
"cs.LG"
] |
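
For concreteness, the f-divergence family referred to in the abstract above is given by the standard definitions below (these are textbook facts, not specific to this paper); the Spread Divergence construction itself and the exact variational bound used for latent variable models are not reproduced here.

D_f(P \,\|\, Q) \;=\; \int q(x)\, f\!\left(\frac{p(x)}{q(x)}\right)\mathrm{d}x,
\qquad f \text{ convex},\ f(1) = 0,

\text{maximum likelihood: } \min_\theta \mathrm{KL}\!\left(p_{\mathrm{data}} \,\|\, p_\theta\right)
= D_f\!\left(p_{\mathrm{data}} \,\|\, p_\theta\right) \text{ with } f(t) = t \log t,

D_f(P \,\|\, Q) \;\ge\; \sup_{T}\;
\mathbb{E}_{x \sim P}\big[T(x)\big] - \mathbb{E}_{x \sim Q}\big[f^{*}\!\big(T(x)\big)\big],

where f^{*} is the convex conjugate of f; the last line is the usual variational lower bound exploited by non-likelihood (f-GAN-style) training criteria.
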
We tackle the challenge of disentangled representation learning in generative
adversarial networks (GANs) from the perspective of regularized optimal
transport (OT). Specifically, a smoothed OT loss gives rise to an implicit
transportation plan between the latent space and the data space. Based on this
theoretical observation, we exploit a structured regularization on the
transportation plan to encourage a prescribed latent subspace to be
informative. This yields the formulation of a novel informative OT-based GAN.
By convex duality, we obtain the equivalent view that this leads to perturbed
ground costs favoring sparsity in the informative latent dimensions.
Practically, we devise a stable training algorithm for the proposed informative
GAN. Our experiments support the hypothesis that such regularizations
effectively yield the discovery of disentangled and interpretable latent
representations. Our work showcases the potential power of a regularized OT
framework in the context of generative modeling through its access to the
transport plan. Further challenges along this line are also addressed. | [
"cs.LG",
"stat.ML"
] |
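
The smoothed OT loss and its implicit transportation plan mentioned above can be illustrated with entropic regularization and Sinkhorn iterations. The sketch below is a generic numpy implementation of such a plan between latent codes and data samples; the regularization strength, cost normalization, and marginals are illustrative choices, and this is not the informative-GAN training procedure itself.

import numpy as np

def sinkhorn_plan(cost, a, b, eps=0.1, n_iter=200):
    # Entropic-regularized OT: alternately rescale the Gibbs kernel K = exp(-C / eps)
    # until the plan P = diag(u) K diag(v) matches the marginals a and b.
    K = np.exp(-cost / eps)
    u = np.ones_like(a)
    v = np.ones_like(b)
    for _ in range(n_iter):
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]

# Toy usage: a plan between 5 latent codes and 8 data samples in 2-D.
rng = np.random.default_rng(0)
z = rng.standard_normal((5, 2))                  # latent codes
x = rng.standard_normal((8, 2)) + 2.0            # data samples
C = np.sum((z[:, None, :] - x[None, :, :]) ** 2, axis=-1)
C = C / C.max()                                   # normalize the ground cost for stability
P = sinkhorn_plan(C, np.full(5, 1.0 / 5), np.full(8, 1.0 / 8))
print(P.sum(axis=1))                              # row marginals, approximately 1/5 each

A structured regularization of the kind the abstract describes would additionally penalize the plan (or perturb the ground cost C) so that a chosen subset of latent dimensions becomes informative.
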
The security of object detection systems has attracted increasing attention,
especially when facing adversarial patch attacks. Since patch attacks change
the pixels in a restricted area on objects, they are easy to implement in the
physical world, especially for attacking human detection systems. The existing
defenses against patch attacks are mostly applied for image classification
problems and have difficulty resisting human detection attacks. Towards this
critical issue, we propose an efficient and effective plug-in defense component
on the YOLO detection system, which we name Ad-YOLO. The main idea is to add a
patch class to the YOLO architecture, which adds negligible inference
overhead. Thus, Ad-YOLO is expected to directly detect both the objects of
interest and adversarial patches. To the best of our knowledge, our approach is
the first defense strategy against human detection attacks.
We investigate Ad-YOLO's performance on the YOLOv2 baseline. To improve the
ability of Ad-YOLO to detect a variety of patches, we first use an adversarial
training process to develop a patch dataset based on the Inria dataset, which
we name Inria-Patch. Then, we train Ad-YOLO by a combination of Pascal VOC,
Inria, and Inria-Patch datasets. With a slight drop of $0.70\%$ mAP on VOC 2007
test set, Ad-YOLO achieves $80.31\%$ AP of persons, which highly outperforms
$33.93\%$ AP for YOLOv2 when facing white-box patch attacks. Furthermore,
compared with YOLOv2, the results facing a physical-world attack are also
included to demonstrate Ad-YOLO's excellent generalization ability. | [
"cs.CV"
] |
The recent progress in neural architecture search (NAS) has allowed scaling
the automated design of neural architectures to real-world domains, such as
object detection and semantic segmentation. However, one prerequisite for the
application of NAS are large amounts of labeled data and compute resources.
This renders its application challenging in few-shot learning scenarios, where
many related tasks need to be learned, each with limited amounts of data and
compute time. Thus, few-shot learning is typically done with a fixed neural
architecture. To improve upon this, we propose MetaNAS, the first method which
fully integrates NAS with gradient-based meta-learning. MetaNAS optimizes a
meta-architecture along with the meta-weights during meta-training. During
meta-testing, architectures can be adapted to a novel task with a few steps of
the task optimizer; that is, task adaptation becomes computationally cheap and
requires only a small amount of data per task. Moreover, MetaNAS is agnostic in
that it
can be used with arbitrary model-agnostic meta-learning algorithms and
arbitrary gradient-based NAS methods. Empirical results on standard few-shot
classification benchmarks
show that MetaNAS with a combination of DARTS and REPTILE yields
state-of-the-art results. | [
"cs.LG",
"stat.ML"
] |
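
Structurally, the approach described above combines an inner task-adaptation loop over both weights and architecture parameters with a gradient-based meta-update. The toy sketch below shows only that structure with a REPTILE-style meta-step, using a random quadratic as a stand-in for the task loss and two scalar logits as a stand-in for a DARTS cell; it is purely illustrative and not the authors' algorithm.

import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    # Toy stand-in for a few-shot task: a random quadratic loss over all parameters.
    target = rng.standard_normal(6)
    return lambda theta: theta - target           # gradient of 0.5 * ||theta - target||^2

def adapt(theta, grad, steps=5, lr=0.1):
    # Inner loop: a few task-optimizer steps over weights AND architecture parameters.
    for _ in range(steps):
        theta = theta - lr * grad(theta)
    return theta

# Meta-parameters: 4 "weights" and 2 DARTS-style architecture mixing logits.
meta_theta = np.zeros(6)
meta_lr = 0.5
for _ in range(200):
    task_grad = sample_task()
    adapted = adapt(meta_theta, task_grad)
    # REPTILE-style meta-update: move the meta-parameters toward the adapted ones.
    meta_theta = meta_theta + meta_lr * (adapted - meta_theta)

meta_w, meta_alpha = meta_theta[:4], meta_theta[4:]
print(meta_w, np.exp(meta_alpha) / np.exp(meta_alpha).sum())  # softmax = operation mixture
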
We study reinforcement learning in an infinite-horizon average-reward setting
with linear function approximation, where the transition probability function
of the underlying Markov Decision Process (MDP) admits a linear form over a
feature mapping of the current state, action, and next state. We propose a new
algorithm UCRL2-VTR, which can be seen as an extension of the UCRL2 algorithm
with linear function approximation. We show that UCRL2-VTR with Bernstein-type
bonus can achieve a regret of $\tilde{O}(d\sqrt{DT})$, where $d$ is the
dimension of the feature mapping, $T$ is the horizon, and $D$ is the
diameter of the MDP. We also prove a matching lower bound
$\tilde{\Omega}(d\sqrt{DT})$, which suggests that the proposed UCRL2-VTR is
minimax optimal up to logarithmic factors. To the best of our knowledge, our
algorithm is the first nearly minimax optimal RL algorithm with function
approximation in the infinite-horizon average-reward setting. | [
"cs.LG",
"math.OC",
"stat.ML"
] |
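
In symbols, the setting described above assumes the transition kernel is linear in a known feature map, and the stated upper and lower bounds read (notation as in the abstract):

\mathbb{P}(s' \mid s, a) \;=\; \big\langle \phi(s, a, s'),\, \theta^{*} \big\rangle,
\qquad \phi:\mathcal{S}\times\mathcal{A}\times\mathcal{S}\to\mathbb{R}^{d},\ \ \theta^{*}\in\mathbb{R}^{d},

\mathrm{Regret}(T) \;=\; \tilde{O}\!\big(d\sqrt{DT}\big) \quad \text{(UCRL2-VTR with Bernstein-type bonus)},
\qquad
\mathrm{Regret}(T) \;=\; \tilde{\Omega}\!\big(d\sqrt{DT}\big) \quad \text{(matching lower bound)}.

Roughly speaking, the "value-targeted regression" part refers to estimating \theta^{*} by regressing observed next-state values onto the corresponding integrated features; the exact construction of the confidence sets is given in the paper.
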
Recurrent major mood episodes and subsyndromal mood instability cause
substantial disability in patients with bipolar disorder. Early identification
of mood episodes, enabling timely mood stabilisation, is an important clinical
goal. Recent technological advances allow the prospective reporting of mood in
real time, enabling more accurate and efficient data capture. The complex
nature of these data streams, in combination with the challenge of deriving
meaning from missing data, poses a significant analytic challenge. The
signature method, derived from stochastic analysis, is able to capture
important properties of complex ordered time-series data. Here, we explore
whether the onset of episodes of mania and depression can be identified using
self-reported mood data. | [
"stat.ML"
] |
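
The signature method mentioned above summarizes an ordered data stream by iterated integrals of the underlying path. A minimal numpy sketch of the level-1 and level-2 signature terms of a piecewise-linear path is given below; the example (time, mood) path is illustrative, and this is not the paper's specific feature pipeline.

import numpy as np

def signature_level2(path):
    # Level-1 and level-2 terms of the path signature of a piecewise-linear path.
    # path: (N, d) array of observations, e.g. time-stamped mood ratings.
    d = path.shape[1]
    S1 = np.zeros(d)
    S2 = np.zeros((d, d))
    for k in range(1, path.shape[0]):
        delta = path[k] - path[k - 1]
        # Chen's identity for concatenating one linear segment onto the path so far.
        S2 += np.outer(S1, delta) + 0.5 * np.outer(delta, delta)
        S1 += delta
    return S1, S2

# Toy usage: a 2-D path of (time, mood score).
path = np.array([[0.0, 1.0], [1.0, 3.0], [2.0, 2.0], [3.0, 5.0]])
S1, S2 = signature_level2(path)
print(S1)           # total increments: [3., 4.]
print(S2 - S2.T)    # off-diagonal entries are twice the signed (Levy) area

The antisymmetric part of the level-2 term captures the order in which the coordinates move, which is what makes signatures useful summaries of irregularly sampled, ordered data.
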
Most state-of-the-art methods of object detection suffer from poor
generalization ability when the training and test data are from different
domains, e.g., with different styles. To address this problem, previous methods
mainly use holistic representations to align feature-level and pixel-level
distributions of different domains, which may neglect the instance-level
characteristics of objects in images. Besides, when transferring detection
ability across different domains, it is important to obtain the instance-level
features that are domain-invariant, instead of the styles that are
domain-specific. Therefore, in order to extract instance-invariant features, we
should disentangle the domain-invariant features from the domain-specific
features. To this end, a progressive disentangled framework is first proposed
to solve domain adaptive object detection. In particular, based on
disentangled learning for feature decomposition, we devise two disentangled
layers to
decompose domain-invariant and domain-specific features. And the
instance-invariant features are extracted based on the domain-invariant
features. Finally, to enhance the disentanglement, a three-stage training
mechanism including multiple loss functions is devised to optimize our model.
In experiments, we verify the effectiveness of our method on three
domain-shift scenarios, where it outperforms the baseline method
\cite{saito2019strong} by 2.3\%, 3.6\%, and 4.0\%, respectively. | [
"cs.CV"
] |
We address the problems of identifying malware in network telemetry logs and
providing \emph{indicators of compromise} -- comprehensible explanations of
behavioral patterns that identify the threat. In our system, an array of
specialized detectors abstracts network-flow data into comprehensible
\emph{network events} in a first step. We develop a neural network that
processes this sequence of events and identifies specific threats, malware
families and broad categories of malware. We then use the
\emph{integrated-gradients} method to highlight events that jointly constitute
the characteristic behavioral pattern of the threat. We compare network
architectures based on CNNs, LSTMs, and transformers, and explore the efficacy
of unsupervised pre-training experimentally on large-scale telemetry data. We
demonstrate how this system detects njRAT and other malware based on behavioral
patterns. | [
"cs.LG",
"cs.CR"
] |
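
Integrated gradients, used above to highlight the events behind a detection, averages the model's input gradient along a straight path from a baseline to the input and scales it by the input difference. Below is a generic numpy sketch with a linear scoring model standing in for the event-sequence network; the zero baseline ("no events") and all names are illustrative, not the authors' implementation.

import numpy as np

def integrated_gradients(model_grad, x, baseline=None, steps=64):
    # Integrated-gradients attribution for a sequence of event features.
    # model_grad: callable returning d(score)/d(input) at a given input
    # x:          (seq_len, feat) input representing the abstracted network events
    baseline = np.zeros_like(x) if baseline is None else baseline
    alphas = np.linspace(0.0, 1.0, steps)
    # Average the gradient along the straight path from the baseline to x.
    avg_grad = np.mean(
        [model_grad(baseline + a * (x - baseline)) for a in alphas], axis=0
    )
    return (x - baseline) * avg_grad   # attribution per event / feature

# Toy usage with a linear scoring model whose gradient is its weight matrix.
rng = np.random.default_rng(0)
W = rng.standard_normal((5, 3))
events = rng.random((5, 3))
attr = integrated_gradients(lambda z: W, events)
print(np.isclose(attr.sum(), np.sum(W * events)))  # completeness: attributions sum to f(x) - f(0)
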
Text-to-Face (TTF) synthesis is a challenging task with great potential for
diverse computer vision applications. Compared to Text-to-Image (TTI) synthesis
tasks, the textual description of faces can be much more complicated and
detailed due to the variety of facial attributes and the parsing of high
dimensional abstract natural language. In this paper, we propose a Text-to-Face
model that not only produces images in high resolution (1024x1024) with
text-to-image consistency, but also outputs multiple diverse faces to cover a
wide range of unspecified facial features in a natural way. By fine-tuning the
multi-label classifier and image encoder, our model obtains the vectors and
image embeddings which are used to transform the input noise vector sampled
from the normal distribution. Afterwards, the transformed noise vector is fed
into a pre-trained high-resolution image generator to produce a set of faces
with the desired facial attributes. We refer to our model as TTF-HD.
Experimental results show that TTF-HD generates high-quality faces with
state-of-the-art performance. | [
"cs.CV"
] |
Unsupervised learning poses one of the most difficult challenges in computer
vision today. The task has immense practical value, with many applications in
artificial intelligence and emerging technologies, as large quantities of
unlabeled videos can be collected at relatively low cost. In this paper, we
address the unsupervised learning problem in the context of detecting the main
foreground objects in single images. We train a student deep network to predict
the output of a teacher pathway that performs unsupervised object discovery in
videos or large image collections. Our approach is different from published
methods on unsupervised object discovery: the unsupervised learning phase
takes place entirely during training, and at test time we apply standard
feed-forward processing along the student pathway. This strategy has the
benefit of allowing increased generalization possibilities during training,
while remaining fast at testing. Our unsupervised learning algorithm can run
over several generations of student-teacher training. Thus, a group of student
networks trained in the first generation collectively create the teacher at the
next generation. In experiments our method achieves top results on three
current datasets for object discovery in video, unsupervised image segmentation
and saliency detection. At test time the proposed system is fast, being one to
two orders of magnitude faster than published unsupervised methods. | [
"cs.CV"
] |