text | label
---|---
Aiming towards human-level generalization, there is a need to explore
adaptable representation learning methods with greater transferability. Most
existing approaches independently address task-transferability and cross-domain
adaptation, resulting in limited generalization. In this paper, we propose
UM-Adapt - a unified framework to effectively perform unsupervised domain
adaptation for spatially-structured prediction tasks, simultaneously
maintaining a balanced performance across individual tasks in a multi-task
setting. To realize this, we propose two novel regularization strategies: (a)
contour-based content regularization (CCR) and (b) exploitation of inter-task
coherency using a cross-task distillation module. Furthermore, instead of a
conventional ad-hoc domain discriminator, we reuse the cross-task
distillation loss as the output of an energy function to adversarially minimize the
input domain discrepancy. Through extensive experiments, we demonstrate
superior generalizability of the learned representations simultaneously for
multiple tasks under domain-shifts from synthetic to natural environments.
UM-Adapt yields state-of-the-art transfer learning results on ImageNet
classification and comparable performance on the PASCAL VOC 2007 detection task,
even with a smaller backbone network. Moreover, the resulting semi-supervised
framework outperforms the current fully-supervised multi-task learning
state-of-the-art on both the NYUD and Cityscapes datasets. | [
"cs.CV"
] |
We consider the problem where $N$ agents collaboratively interact with an
instance of a stochastic $K$-armed bandit problem for $K \gg N$. The agents aim
to simultaneously minimize the cumulative regret over all the agents for a
total of $T$ time steps, the number of communication rounds, and the number of
bits in each communication round. We present Limited Communication
Collaboration - Upper Confidence Bound (LCC-UCB), a doubling-epoch based
algorithm where each agent communicates only after the end of the epoch and
shares the index of the best arm it knows. With our algorithm, LCC-UCB, each
agent enjoys a regret of $\tilde{O}\left(\sqrt{({K/N}+ N)T}\right)$,
communicates for $O(\log T)$ steps and broadcasts $O(\log K)$ bits in each
communication step. We extend the work to sparse graphs with maximum degree
$K_G$ and diameter $D$, and propose LCC-UCB-GRAPH, which enjoys a regret bound
of $\tilde{O}\left(D\sqrt{(K/N + K_G)DT}\right)$. Finally, we empirically show
that the LCC-UCB and LCC-UCB-GRAPH algorithms perform well and outperform
strategies that communicate through a central node. | [
"cs.LG",
"cs.AI",
"cs.MA"
] |
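
A minimal single-machine sketch of the doubling-epoch, best-arm-broadcast communication pattern described in the abstract above. The Bernoulli arms, the arm partitioning, the epoch schedule, and the UCB constants are illustrative assumptions, not the paper's exact LCC-UCB algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
K, N, T = 64, 4, 20_000
means = rng.uniform(0, 1, K)                   # unknown Bernoulli arm means

# Each agent starts with a disjoint slice of the arms (K >> N).
active = [list(range(i, K, N)) for i in range(N)]

def ucb_run(arms, steps):
    """Plain UCB1 over `arms` for `steps` pulls; returns the best empirical arm."""
    n, s = np.zeros(len(arms)), np.zeros(len(arms))
    for t in range(steps):
        a = t if t < len(arms) else int(np.argmax(s / n + np.sqrt(2 * np.log(t + 1) / n)))
        s[a] += rng.random() < means[arms[a]]
        n[a] += 1
    return arms[int(np.argmax(s / np.maximum(n, 1)))]

epoch, t = 1, 0
while t < T:
    steps = 2 ** epoch                                 # doubling epochs
    best = [ucb_run(arms, steps) for arms in active]   # local phase, no messages
    # Communication round: each agent broadcasts only its best arm's index
    # (O(log K) bits); everyone adds the shared arms to its active set.
    active = [sorted(set(arms) | set(best)) for arms in active]
    t, epoch = t + steps, epoch + 1
print("arms shared in the final round:", best)
```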
Training machine learning models that are robust against adversarial inputs
poses seemingly insurmountable challenges. To better understand adversarial
robustness, we consider the underlying problem of learning robust
representations. We develop a notion of representation vulnerability that
captures the maximum change of mutual information between the input and output
distributions, under the worst-case input perturbation. Then, we prove a
theorem that establishes a lower bound on the minimum adversarial risk that can
be achieved for any downstream classifier based on its representation
vulnerability. We propose an unsupervised learning method for obtaining
intrinsically robust representations by maximizing the worst-case mutual
information between the input and output distributions. Experiments on
downstream classification tasks support the robustness of the representations
found using unsupervised learning with our training principle. | [
"cs.LG",
"cs.CR",
"cs.IT",
"math.IT",
"stat.ML"
] |
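
As a hedged formalization of the abstract above (our reading; the paper's exact notation may differ), representation vulnerability and the proposed worst-case mutual-information objective could be written as:

```latex
% Assumed formalization; g : X -> Z is the representation, I(.;.) is mutual
% information, and B_eps(X) is the set of input distributions reachable by
% worst-case perturbations of magnitude at most eps.
\[
  \mathrm{RV}_\epsilon(g) \;=\; I\bigl(X;\, g(X)\bigr)
  \;-\; \min_{X' \in B_\epsilon(X)} I\bigl(X';\, g(X')\bigr),
  \qquad
  g^\ast \;=\; \arg\max_{g}\; \min_{X' \in B_\epsilon(X)} I\bigl(X';\, g(X')\bigr).
\]
```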
Web Image Context Extraction (WICE) consists of obtaining the textual
information describing an image from the content of the surrounding webpage. A
common preprocessing step before performing WICE is to render the content of
the webpage. When done at a large scale (e.g., for search engine indexation),
it may become very computationally costly (up to several seconds per page). To
avoid this cost, we introduce a novel WICE approach that combines Graph Neural
Networks (GNNs) and Natural Language Processing models. Our method relies on a
graph representation of the webpage whose nodes carry both node types and text
as features. This graph is fed through several blocks of GNNs to extract the
textual context. Since no labeled WICE dataset with ground truth exists, we
train and evaluate the GNNs on a proxy task that consists of finding the
semantically closest text to the image
caption. We then interpret importance weights to find the most relevant text
nodes and define them as the image context. Thanks to GNNs, our model is able
to encode both structural and semantic information from the webpage. We show
that our approach gives promising results to help address the large-scale WICE
problem using only HTML data. | [
"cs.CV",
"cs.NE",
"eess.IV"
] |
Graph neural networks (GNNs) have shown superior performance in
various applications, but training dedicated GNNs can be costly for large-scale
graphs. Some recent work has started to study the pre-training of GNNs. However,
none of them provide theoretical insights into the design of their frameworks,
or clear requirements and guarantees towards the transferability of GNNs. In
this work, we establish a theoretically grounded and practically useful
framework for the transfer learning of GNNs. Firstly, we propose a novel view
towards the essential graph information and advocate the capturing of it as the
goal of transferable GNN training, which motivates the design of Ours, a novel
GNN framework based on ego-graph information maximization to analytically
achieve this goal. Secondly, we specify the requirement of structure-respecting
node features as the GNN input, and derive a rigorous bound of GNN
transferability based on the difference between the local graph Laplacians of
the source and target graphs. Finally, we conduct controlled synthetic
experiments to directly justify our theoretical conclusions. Extensive
experiments on real-world networks towards role identification show consistent
results in the rigorously analyzed setting of direct transferring, while those
towards large-scale relation prediction show promising results in the more
generalized and practical setting of transferring with fine-tuning. | [
"cs.LG",
"stat.ML"
] |
Knowledge Graphs are now a mainstream approach for the
representation of relational information on big heterogeneous data; however,
they may contain a large amount of imputed noise when constructed automatically.
To address this problem, different error detection methodologies have been
proposed, mainly focusing on path ranking and representation learning. This
work presents various mainstream approaches and proposes a hybrid and modular
methodology for the task. We compare different methods on two benchmarks and
one real-world biomedical publications dataset, showcasing the potential of our
approach and providing insights into graph embeddings when dealing with noisy
Knowledge Graphs. | [
"cs.LG",
"cs.AI",
"cs.IR",
"stat.ML"
] |
We present a systematic comparison between neural network (NN) architectures
for inference of AC-OPF solutions. Using fully connected NNs as a baseline, we
demonstrate the efficacy of leveraging network topology in the models by
constructing abstract representations of electrical grids in the graph domain,
for both convolutional and graph NNs. The performance of the NN architectures
is compared for regression (predicting optimal generator set-points) and
classification (predicting the active set of constraints) settings.
Computational gains for obtaining optimal solutions are also presented. | [
"cs.LG",
"cs.SY",
"eess.SP",
"eess.SY",
"physics.data-an"
] |
In recent years, face detection has experienced significant performance
improvement with the boost of deep convolutional neural networks. In this
report, we reimplement the state-of-the-art detector SRN and apply several tricks
proposed in the recent literature to obtain an extremely strong face detector,
named VIM-FD. Specifically, we exploit a more powerful backbone network,
DenseNet-121, revisit the data augmentation based on the data-anchor-sampling
proposed in PyramidBox, and use the max-in-out label and anchor matching
strategy from SFD. In addition, we introduce an attention mechanism to
provide additional supervision. Over the most popular and challenging face
detection benchmark, i.e., WIDER FACE, the proposed VIM-FD achieves
state-of-the-art performance. | [
"cs.CV"
] |
Task-oriented dialog (TOD) systems often need to formulate knowledge base
(KB) queries corresponding to the user intent and use the query results to
generate system responses. Existing approaches require dialog datasets to
explicitly annotate these KB queries -- such annotations can be time-consuming
and expensive. In response, we define the novel problems of
predicting the KB query and training the dialog agent, without explicit KB
query annotation. For query prediction, we propose a reinforcement learning
(RL) baseline, which rewards the generation of those queries whose KB results
cover the entities mentioned in subsequent dialog. Further analysis reveals
that correlation among query attributes in the KB can significantly confuse
memory-augmented policy optimization (MAPO), an existing state-of-the-art RL agent. To
address this, we improve the MAPO baseline with simple but important
modifications suited to our task. To train the full TOD system for our setting,
we propose a pipelined approach: it independently predicts when to make a KB
query (query position predictor), then predicts a KB query at the predicted
position (query predictor), and uses the results of the predicted query in
subsequent dialog (next response predictor). Overall, our work proposes the first
solutions to this novel problem, and our analysis highlights the research
challenges in training TOD systems without query annotation. | [
"cs.LG",
"cs.CL",
"stat.ML"
] |
Time series data are prevalent in electronic health records, mostly in the
form of physiological parameters such as vital signs and lab tests. The
patterns of these values may be significant indicators of patients' clinical
states and there might be patterns that are unknown to clinicians but are
highly predictive of some outcomes. Many of these values are also missing, which
makes it difficult to apply existing methods like decision trees. We propose a
recurrent neural network model that reduces overfitting to noisy observations
by limiting interactions between features. We analyze its performance on
mortality, ICD-9 and AKI prediction from observational values on the Medical
Information Mart for Intensive Care III (MIMIC-III) dataset. Our models yield
an improvement of 1.1% [p<0.01] in AU-ROC for mortality prediction on the
MetaVision subset, and of 1.0% and 2.2% [p<0.01] for mortality and AKI,
respectively, on the full MIMIC-III dataset, compared to existing state-of-the-art
interpolation, embedding and decay-based recurrent models. | [
"cs.LG",
"cs.CY",
"stat.ML"
] |
Although Generative Adversarial Networks (GANs) are successfully applied to
diverse fields, training GANs on synthetic aperture radar (SAR) data is a
challenging task mostly due to speckle noise. From the perspective of human
perception, it is natural to learn a task using information from multiple
sources. However, previous GAN works on SAR target image generation have used
only target class information. Due to the backscattering characteristics of SAR
image signals, the shapes and structures of SAR target images depend strongly on
their pose angles. Nevertheless, pose angle information has not been
incorporated into such generative models for SAR target images. In this paper,
we propose the first GAN-based multi-task learning (MTL) method for SAR target
image generation, called PeaceGAN, which uses both pose angle and target class
information and thereby makes it possible to produce SAR target images of
desired target classes at intended pose angles. For this, PeaceGAN has two
additional structures, a pose estimator and an auxiliary classifier, at the
side of its discriminator to combine the pose and class information more
effectively. In addition, PeaceGAN is jointly trained end-to-end as MTL with
both pose angle and target class information, thus enhancing the diversity and
quality of generated SAR target images. Extensive experiments show that
exploiting both pose angle and target class learning via the proposed pose
estimator and auxiliary classifier helps PeaceGAN's generator effectively learn
the distributions of SAR target images in the MTL framework, so that it
generates SAR target images more flexibly and faithfully at intended pose
angles for desired target classes than recent state-of-the-art methods. | [
"cs.CV",
"eess.IV"
] |
The joint optimization of representation learning and clustering in the
embedding space has experienced a breakthrough in recent years. In spite of the
advance, clustering with representation learning has been limited to flat-level
categories, which often involves cohesive clustering with a focus on instance
relations. To overcome the limitations of flat clustering, we introduce
hierarchically-clustered representation learning (HCRL), which simultaneously
optimizes representation learning and hierarchical clustering in the embedding
space. Compared with the few prior works, HCRL is the first to consider
generating deep embeddings from every component of the hierarchy, not just
leaf components. In addition to obtaining hierarchically clustered embeddings,
we can reconstruct data at various abstraction levels, infer the intrinsic
hierarchical structure, and learn the level-proportion features. We conducted
evaluations on image and text domains, and our quantitative analyses showed
competitive likelihoods and the best accuracies compared with the baselines. | [
"cs.LG",
"stat.ML"
] |
It is well known that neural networks with rectified linear units (ReLU)
activation functions are positively scale-invariant. Conventional algorithms
like stochastic gradient descent optimize the neural networks in the vector
space of weights, which is, however, not positively scale-invariant. This
mismatch may lead to problems during the optimization process. Then, a natural
question is: \emph{can we construct a new vector space that is positively
scale-invariant and sufficient to represent ReLU neural networks so as to
better facilitate the optimization process}? In this paper, we provide our
positive answer to this question. First, we conduct a formal study on the
positive scaling operators, which form a transformation group, denoted as
$\mathcal{G}$. We show that the value of a path (i.e., the product of the
weights along the path) in the neural network is invariant to positive scaling
and prove that the value vector of all the paths is sufficient to represent the
neural networks under mild conditions. Second, we show that one can identify
some basis paths out of all the paths and prove that the linear span of their
value vectors (denoted as $\mathcal{G}$-space) is an invariant space with lower
dimension under the positive scaling group. Finally, we design a stochastic
gradient descent algorithm in $\mathcal{G}$-space (abbreviated as
$\mathcal{G}$-SGD) to optimize the value vector of the basis paths of neural
networks with little extra cost by leveraging back-propagation. Our experiments
show that $\mathcal{G}$-SGD significantly outperforms the conventional SGD
algorithm in optimizing ReLU networks on benchmark datasets. | [
"stat.ML",
"cs.LG"
] |
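
A small numerical check of the positive scale-invariance property discussed in the abstract above: rescaling a hidden ReLU unit's incoming weights by c > 0 and its outgoing weights by 1/c changes the weight vector but leaves both the network output and the path values (products of weights along input-output paths) unchanged. The two-layer network and its sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 4))   # input -> hidden weights
W2 = rng.normal(size=(4, 1))   # hidden -> output weights
x = rng.normal(size=(5, 3))    # a batch of inputs

relu = lambda z: np.maximum(z, 0)
f = lambda A, B: relu(x @ A) @ B

# Positive scaling operator: pick hidden unit j and a factor c > 0.
j, c = 2, 7.5
W1s, W2s = W1.copy(), W2.copy()
W1s[:, j] *= c
W2s[j, :] /= c

print(np.allclose(f(W1, W2), f(W1s, W2s)))   # True: network output unchanged
# Path values: the value of path (i -> j -> k) is W1[i, j] * W2[j, k].
paths = np.einsum("ij,jk->ijk", W1, W2)
paths_s = np.einsum("ij,jk->ijk", W1s, W2s)
print(np.allclose(paths, paths_s))           # True: path values unchanged
```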
Using deep learning to analyze mechanical stress distributions has been
gaining interest with the demand for fast stress analysis methods. Deep
learning approaches have achieved excellent outcomes when utilized to speed up
stress computation and learn the physics without prior knowledge of underlying
equations. However, most studies restrict the variation of geometry or boundary
conditions, making these methods difficult to generalize to unseen
configurations. We propose a conditional generative adversarial network (cGAN)
model for predicting 2D von Mises stress distributions in solid structures. The
cGAN learns to generate stress distributions conditioned by geometries, load,
and boundary conditions through a two-player minimax game between two neural
networks with no prior knowledge. By evaluating the generative network on two
stress distribution datasets under multiple metrics, we demonstrate that our
model can predict more accurate high-resolution stress distributions than a
baseline convolutional neural network model, given various and complex cases of
geometry, load and boundary conditions. | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
Anomaly detection is a classical problem in computer vision, namely the
determination of the normal from the abnormal when datasets are highly biased
towards one class (normal) due to the insufficient sample size of the other
class (abnormal). While this can be addressed as a supervised learning problem,
a significantly more challenging problem is that of detecting the
unknown/unseen anomaly case that takes us instead into the space of a
one-class, semi-supervised learning paradigm. We introduce such a novel anomaly
detection model, by using a conditional generative adversarial network that
jointly learns the generation of high-dimensional image space and the inference
of latent space. Employing encoder-decoder-encoder sub-networks in the
generator network enables the model to map the input image to a lower dimension
vector, which is then used to reconstruct the generated output image. The use
of the additional encoder network maps this generated image to its latent
representation. Minimizing the distance between these images and the latent
vectors during training aids in learning the data distribution for the normal
samples. As a result, a larger distance metric from this learned data
distribution at inference time is indicative of an outlier from that
distribution - an anomaly. Experimentation over several benchmark datasets,
from varying domains, shows the model's efficacy and superiority over previous
state-of-the-art approaches. | [
"cs.CV"
] |
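
A minimal PyTorch sketch of the encoder-decoder-encoder generator and latent-distance anomaly score described in the abstract above. The MLP layers (standing in for the paper's convolutional sub-networks), the layer sizes, and the exact score definition are illustrative assumptions.

```python
import torch
import torch.nn as nn

class EDEGenerator(nn.Module):
    """Encoder-decoder-encoder generator: x -> z -> x_hat -> z_hat."""
    def __init__(self, d_in=784, d_lat=32):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Linear(d_in, 128), nn.ReLU(), nn.Linear(128, d_lat))
        self.dec  = nn.Sequential(nn.Linear(d_lat, 128), nn.ReLU(), nn.Linear(128, d_in))
        self.enc2 = nn.Sequential(nn.Linear(d_in, 128), nn.ReLU(), nn.Linear(128, d_lat))

    def forward(self, x):
        z = self.enc1(x)          # latent code of the input
        x_hat = self.dec(z)       # reconstruction
        z_hat = self.enc2(x_hat)  # latent code of the reconstruction
        return x_hat, z, z_hat

def anomaly_score(model, x):
    """Distance between the two latent codes; large for out-of-distribution inputs."""
    _, z, z_hat = model(x)
    return (z - z_hat).abs().mean(dim=1)

model = EDEGenerator()
score = anomaly_score(model, torch.randn(8, 784))
print(score.shape)  # torch.Size([8])
```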
Dense and accurate 3D mapping from a monocular sequence is a key technology
for several applications and still an open research area. This paper leverages
recent results on single-view CNN-based depth estimation and fuses them with
multi-view depth estimation. Both approaches present complementary strengths.
Multi-view depth is highly accurate but only in high-texture areas and
high-parallax cases. Single-view depth captures the local structure of
mid-level regions, including texture-less areas, but the estimated depth lacks
global coherence. The single and multi-view fusion we propose is challenging in
several aspects. First, both depths are related by a deformation that depends
on the image content. Second, the selection of multi-view points of high
accuracy might be difficult for low-parallax configurations. We present
contributions for both problems. Our results on the public NYUv2 and TUM
datasets show that our algorithm outperforms the individual single- and
multi-view approaches. A video showing the key aspects of mapping in our Single
and Multi-view depth proposal is available at https://youtu.be/ipc5HukTb4k | [
"cs.CV",
"cs.RO"
] |
Deep reinforcement learning agents have achieved state-of-the-art results by
directly maximising cumulative reward. However, environments contain a much
wider variety of possible training signals. In this paper, we introduce an
agent that also maximises many other pseudo-reward functions simultaneously by
reinforcement learning. All of these tasks share a common representation that,
like unsupervised learning, continues to develop in the absence of extrinsic
rewards. We also introduce a novel mechanism for focusing this representation
upon extrinsic rewards, so that learning can rapidly adapt to the most relevant
aspects of the actual task. Our agent significantly outperforms the previous
state-of-the-art on Atari, averaging 880\% of expert human performance, and on a
challenging suite of first-person, three-dimensional \emph{Labyrinth} tasks,
achieving a mean speedup in learning of 10$\times$ and averaging 87\% of expert
human performance on Labyrinth. | [
"cs.LG",
"cs.NE"
] |
The majority of machine learning algorithms assumes that objects are
represented as vectors. But often the objects we want to learn on are more
naturally represented by other data structures such as sequences and time
series. For these representations many standard learning algorithms are
unavailable. We generalize gradient-based learning algorithms to time series
under dynamic time warping. To this end, we introduce elastic functions, which
extend functions on time series to matrix spaces. Necessary conditions are
presented under which generalized gradient learning on time series is
consistent. We indicate how results carry over to arbitrary elastic distance
functions and to sequences consisting of symbolic elements. Specifically, four
linear classifiers are extended to time series under dynamic time warping and
applied to benchmark datasets. Results indicate that generalized gradient
learning via elastic functions has the potential to complement the
state-of-the-art in statistical pattern recognition on time series. | [
"cs.LG"
] |
Dictionary-based classifiers are a family of algorithms for time series
classification (TSC) that focus on capturing the frequency of pattern
occurrences in a time series. The ensemble-based Bag of Symbolic Fourier
Approximation Symbols (BOSS) was found to be a top performing TSC algorithm in
a recent evaluation, as well as the best performing dictionary based
classifier. A recent addition to the category, the Word Extraction for Time
Series Classification (WEASEL), claims an improvement on this performance. Both
of these algorithms, however, have non-trivial scalability issues, taking a
considerable amount of build time and space on larger datasets. We evaluate
changes to the way BOSS chooses classifiers for its ensemble, replacing its
parameter search with random selection. This change allows for the easy
implementation of contracting (setting a build-time limit for the classifier)
and check-pointing (saving progress during the classifier's build). To
differentiate between the two BOSS ensemble methods we refer to our randomised
version as RBOSS. Additionally we test the application of common ensembling
techniques to help retain accuracy from the loss of the BOSS parameter search.
We achieve a significant reduction in build time without a significant change
in accuracy on average when compared to BOSS by creating a size $n$ weighted
ensemble selecting the best performers from $k$ randomly chosen parameter sets.
Our experiments are conducted on datasets from the recently expanded UCR time
series archive. We demonstrate the usability improvements to RBOSS with a case
study using a large whale acoustics dataset for which BOSS proved infeasible. | [
"cs.LG",
"stat.ML"
] |
The running time of light field depth estimation algorithms is typically
high. This assessment is based on the computational complexity of existing
methods and the large amounts of data involved. The aim of our work is to
develop a simple and fast algorithm for accurate depth computation. In this
context, we propose an approach that involves Semi-Global Matching for the
processing of light field images. It is based on comparing pixel
correspondences under different metrics in a substantially bounded light field
space. We show that our method is suitable for quickly producing a proper
result in a variety of light field configurations. | [
"cs.CV"
] |
Convolutional neural networks (CNN) are now being widely used for classifying
and detecting pulmonary abnormalities in chest radiographs. Two complementary
generalization properties of CNNs, translation invariance and equivariance, are
particularly useful in detecting manifested abnormalities associated with
pulmonary disease, regardless of their spatial locations within the image.
However, these properties also come with the loss of exact spatial information
and global relative positions of abnormalities detected in local regions.
Global relative positions of such abnormalities may help distinguish similar
conditions, such as COVID-19 and viral pneumonia. In such instances, a global
attention mechanism is needed, which CNNs do not support in their traditional
architectures that aim for generalization afforded by translation invariance
and equivariance. Vision Transformers provide a global attention mechanism, but
lack translation invariance and equivariance, requiring significantly more
training data samples to match generalization of CNNs. To address the loss of
spatial information and global relations between features, while preserving the
inductive biases of CNNs, we present a novel technique that serves as an
auxiliary attention mechanism to existing CNN architectures, in order to
extract global correlations between salient features. | [
"cs.CV",
"cs.LG"
] |
Reliable facial expression recognition plays a critical role in human-machine
interactions. However, most of the facial expression analysis methodologies
proposed to date pay little or no attention to the protection of a user's
privacy. In this paper, we propose a Privacy-Preserving Representation-Learning
Variational Generative Adversarial Network (PPRL-VGAN) to learn an image
representation that is explicitly disentangled from the identity information.
At the same time, this representation is discriminative from the standpoint of
facial expression recognition and generative as it allows expression-equivalent
face image synthesis. We evaluate the proposed model on two public datasets
under various threat scenarios. Quantitative and qualitative results
demonstrate that our approach strikes a balance between the preservation of
privacy and data utility. We further demonstrate that our model can be
effectively applied to other tasks such as expression morphing and image
completion. | [
"cs.CV"
] |
Vision-and-language pre-training has achieved impressive success in learning
multimodal representations between vision and language. To generalize this
success to non-English languages, we introduce UC2, the first machine
translation-augmented framework for cross-lingual cross-modal representation
learning. To tackle the scarcity problem of multilingual captions for image
datasets, we first augment existing English-only datasets with other languages
via machine translation (MT). Then we extend the standard Masked Language
Modeling and Image-Text Matching training objectives to the multilingual setting,
where alignment between different languages is captured through shared visual
context (i.e., using the image as pivot). To facilitate the learning of a joint
embedding space of images and all languages of interest, we further propose two
novel pre-training tasks, namely Masked Region-to-Token Modeling (MRTM) and
Visual Translation Language Modeling (VTLM), leveraging MT-enhanced translated
data. Evaluation on multilingual image-text retrieval and multilingual visual
question answering benchmarks demonstrates that our proposed framework achieves
new state-of-the-art on diverse non-English benchmarks while maintaining
comparable performance to monolingual pre-trained models on English tasks. | [
"cs.CV"
] |
We consider the problem of learning similarity functions. While there has
been substantial progress in learning suitable distance metrics, these
techniques in general lack decision reasoning, i.e., explaining why the input
set of images is similar or dissimilar. In this work, we solve this key problem
by proposing the first method to generate generic visual similarity
explanations with gradient-based attention. We demonstrate that our technique
is agnostic to the specific similarity model type, e.g., we show applicability
to Siamese, triplet, and quadruplet models. Furthermore, we make our proposed
similarity attention a principled part of the learning process, resulting in a
new paradigm for learning similarity functions. We demonstrate that our
learning mechanism results in more generalizable, as well as explainable,
similarity models. Finally, we demonstrate the generality of our framework by
means of experiments on a variety of tasks, including image retrieval, person
re-identification, and low-shot semantic segmentation. | [
"cs.CV",
"cs.LG"
] |
We explore and analyze the latent style space of StyleGAN2, a
state-of-the-art architecture for image generation, using models pretrained on
several different datasets. We first show that StyleSpace, the space of
channel-wise style parameters, is significantly more disentangled than the
other intermediate latent spaces explored by previous works. Next, we describe
a method for discovering a large collection of style channels, each of which is
shown to control a distinct visual attribute in a highly localized and
disentangled manner. Third, we propose a simple method for identifying style
channels that control a specific attribute, using a pretrained classifier or a
small number of example images. Manipulation of visual attributes via these
StyleSpace controls is shown to be better disentangled than via those proposed
in previous works. To show this, we make use of a newly proposed Attribute
Dependency metric. Finally, we demonstrate the applicability of StyleSpace
controls to the manipulation of real images. Our findings pave the way to
semantically meaningful and well-disentangled image manipulations via simple
and intuitive interfaces. | [
"cs.CV",
"cs.GR",
"cs.LG"
] |
General game testing relies on the use of human play testers, play test
scripting, and prior knowledge of areas of interest to produce relevant test
data. Using deep reinforcement learning (DRL), we introduce a self-learning
mechanism to the game testing framework. With DRL, the framework is capable of
exploring and/or exploiting the game mechanics based on a user-defined,
reinforcing reward signal. As a result, test coverage is increased and
unintended game play mechanics, exploits and bugs are discovered in a multitude
of game types. In this paper, we show that DRL can be used to increase test
coverage, find exploits, test map difficulty, and to detect common problems
that arise in the testing of first-person shooter (FPS) games. | [
"cs.LG",
"cs.AI"
] |
There is a long history of using meta learning as representation learning,
specifically for determining the relevance of inputs. In this paper, we examine
an instance of meta-learning in which feature relevance is learned by adapting
step size parameters of stochastic gradient descent---building on a variety of
prior work in stochastic approximation, machine learning, and artificial neural
networks. In particular, we focus on stochastic meta-descent introduced in the
Incremental Delta-Bar-Delta (IDBD) algorithm for setting individual step sizes
for each feature of a linear function approximator. Using IDBD, a feature with
large or small step sizes will have a large or small impact on generalization
from training examples. As a main contribution of this work, we extend IDBD to
temporal-difference (TD) learning---a form of learning which is effective in
sequential, non i.i.d. problems. We derive a variety of IDBD generalizations
for TD learning, demonstrating that they are able to distinguish which features
are relevant and which are not. We demonstrate that TD IDBD is effective at
learning feature relevance in both an idealized gridworld and a real-world
robotic prediction task. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
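
A minimal sketch of IDBD-style per-feature step sizes applied to TD(0) on a toy random-walk prediction task. The update follows Sutton's original IDBD recursion with the TD error substituted for the supervised error; the paper derives several such generalizations, and its exact trace and decay terms may differ. The chain environment and constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, gamma, theta = 8, 0.95, 0.01
w = np.zeros(n)                    # value-function weights
beta = np.full(n, np.log(0.05))    # per-feature log step sizes
h = np.zeros(n)                    # IDBD memory trace

def features(s):
    x = np.zeros(n); x[s] = 1.0    # one-hot features for an n-state chain
    return x

s = 0
for _ in range(50_000):
    s2 = min(max(s + rng.choice([-1, 1]), 0), n - 1)
    r = 1.0 if s2 == n - 1 else 0.0
    x = features(s)
    v_next = 0.0 if s2 == n - 1 else w @ features(s2)
    delta = r + gamma * v_next - w @ x          # TD(0) error
    beta += theta * delta * x * h               # meta-learn the step sizes
    alpha = np.exp(beta)
    w += alpha * delta * x
    h = h * np.clip(1.0 - alpha * x * x, 0.0, None) + alpha * delta * x
    s = 0 if s2 == n - 1 else s2                # restart after the goal state

print(np.round(w, 2))                           # values rise toward the goal
```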
Empirical risk minimization is a standard principle for choosing algorithms
in learning theory. In this paper we study the properties of empirical risk
minimization for time series. The analysis is carried out in a general
framework that covers different types of forecasting applications encountered
in the literature. We are concerned with 1-step-ahead prediction of a
univariate time series generated by a parameter-driven process. A class of
recursive algorithms is available to forecast the time series. The algorithms
are recursive in the sense that the forecast produced in a given period is a
function of the lagged values of the forecast and of the time series. The
relationship between the generating mechanism of the time series and the class
of algorithms is unspecified. Our main result establishes that the algorithm
chosen by empirical risk minimization achieves asymptotically the optimal
predictive performance that is attainable within the class of algorithms. | [
"stat.ML",
"cs.LG"
] |
Deep learning has proven to be a highly effective problem-solving tool for
object detection and image segmentation across various domains such as
healthcare and autonomous driving. At the heart of this performance lies neural
architecture design which relies heavily on domain knowledge and prior
experience on the researchers' part. More recently, this process of finding
optimal architectures, given an initial search space of possible
operations, has been automated by Neural Architecture Search (NAS). In this paper,
we evaluate the robustness of one such algorithm known as Efficient NAS (ENAS)
against data agnostic poisoning attacks on the original search space with
carefully designed ineffective operations. By evaluating algorithm performance
on the CIFAR-10 dataset, we empirically demonstrate how our novel search space
poisoning (SSP) approach and multiple-instance poisoning attacks exploit design
flaws in the ENAS controller to result in inflated prediction error rates for
child networks. Our results provide insights into the challenges to surmount in
using NAS for more adversarially robust architecture search. | [
"cs.LG",
"cs.CR",
"cs.NE",
"stat.ML"
] |
Graphs are ubiquitous in modelling relational structures. Recent endeavours
in machine learning for graph-structured data have led to many architectures
and learning algorithms. However, the graph used by these algorithms is often
constructed based on inaccurate modelling assumptions and/or noisy data. As a
result, it fails to represent the true relationships between nodes. A Bayesian
framework which targets posterior inference of the graph by considering it as a
random quantity can be beneficial. In this paper, we propose a novel
non-parametric graph model for constructing the posterior distribution of graph
adjacency matrices. The proposed model is flexible in the sense that it can
effectively take into account the output of graph-based learning algorithms
that target specific tasks. In addition, model inference scales well to large
graphs. We demonstrate the advantages of this model in three different problem
settings: node classification, link prediction and recommendation. | [
"stat.ML",
"cs.LG"
] |
Sensory data often comprise independent content and transformation
factors. For example, face images may have shapes as content and poses as
transformation. To infer separately these factors from given data, various
``disentangling'' models have been proposed. However, many of these are
supervised or semi-supervised, either requiring attribute labels that are often
unavailable or not allowing generalization to new contents. In this study,
we introduce a novel deep generative model, called group-based variational
autoencoders. In this model, we assume no explicit labels, but a weaker form of
structure that groups together data instances having the same content but
transformed differently; we thereby separately estimate a group-common factor
as content and an instance-specific factor as transformation. This approach
allows for learning to represent a general continuous space of contents, which
can accommodate unseen contents. Despite the simplicity, our model succeeded in
learning, from five datasets, content representations that are highly separate
from the transformation representation and generalizable to data with novel
contents. We further provide detailed analysis of the latent content code and
show insight into how our model obtains the notable transformation invariance
and content generalizability. | [
"cs.LG",
"cs.AI",
"cs.CV",
"stat.ML"
] |
Abstracting complex 3D shapes with parsimonious part-based representations
has been a long-standing goal in computer vision. This paper presents a
learning-based solution to this problem which goes beyond the traditional 3D
cuboid representation by exploiting superquadrics as atomic elements. We
demonstrate that superquadrics lead to more expressive 3D scene parses while
being easier to learn than 3D cuboid representations. Moreover, we provide an
analytical solution to the Chamfer loss, which avoids the need for computationally
expensive reinforcement learning or iterative prediction. Our model learns to
parse 3D objects into consistent superquadric representations without
supervision. Results on various ShapeNet categories as well as the SURREAL
human body dataset demonstrate the flexibility of our model in capturing fine
details and complex poses that could not have been modelled using cuboids. | [
"cs.CV"
] |
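
A minimal NumPy sketch of the symmetric Chamfer distance between two point sets, the reconstruction loss discussed in the abstract above; the paper's analytical treatment for superquadric surfaces is not reproduced here, and the random point sets are stand-ins for sampled shapes.

```python
import numpy as np

def chamfer(P, Q):
    """Mean nearest-neighbor distance from P to Q plus from Q to P."""
    d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=-1)  # (|P|, |Q|)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

rng = np.random.default_rng(0)
P = rng.normal(size=(256, 3))   # points sampled from the target shape
Q = rng.normal(size=(128, 3))   # points sampled from the predicted parse
print(chamfer(P, Q))
```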
This abstract describes the segmentation system used to participate in the
challenge ISIC 2017: Skin Lesion Analysis Towards Melanoma Detection. Several
preprocessing techniques have been tested for three color representations (RGB,
YCbCr and HSV) of 392 images. The results have been used to choose the best
preprocessing for each channel. In each case a neural network is trained to
predict the Jaccard index based on object characteristics. The system includes
black-frame and reference-circle detection algorithms, but no special treatment
is done for hair removal. Segmentation is performed in two steps: first, the
best channel to segment is chosen by selecting the best neural network output;
if this output does not predict a Jaccard index over 0.5, a more aggressive
preprocessing is performed using open and close morphological operations, and
the segmentation of the channel that obtains the best output from the neural
networks is selected as the lesion. | [
"cs.CV"
] |
Acquiring accurate three-dimensional depth information conventionally
requires expensive multibeam LiDAR devices. Recently, researchers have
developed a less expensive option by predicting depth information from
two-dimensional color imagery. However, there still exists a substantial gap in
accuracy between depth information estimated from two-dimensional images and
real LiDAR point-cloud. In this paper, we introduce a fusion-based depth
prediction method, called FusionMapping. This is the first method that fuses
color imagery and two-dimensional laser scans to estimate depth information.
More specifically, we propose an autoencoder-based depth prediction network and
a novel point-cloud refinement network for depth estimation. We analyze the
performance of our FusionMapping approach on the KITTI LiDAR odometry dataset
and an indoor mobile robot system. The results show that our introduced
approach estimates depth with better accuracy when compared to existing
methods. | [
"cs.CV"
] |
With the advancement of remote-sensing imaging, large volumes of very high
resolution land cover images can now be obtained. Automating object
recognition in these 2D images, however, is still a key issue. High intra-class
variance and low inter-class variance in Very High Resolution (VHR) images
hamper the accuracy of prediction in object recognition tasks. Most recent
successful techniques in various computer vision tasks are based on deep
supervised learning. In this work, a deep Convolutional Neural Network (CNN)
based on symmetric encoder-decoder architecture with skip connections is
employed for the 2D semantic segmentation of most common land cover object
classes - impervious surface, buildings, low vegetation, trees and cars. Atrous
convolutions are employed to obtain a large receptive field in the proposed CNN
model. Further, the CNN outputs are post-processed using a Fully Connected
Conditional Random Field (FCRF) model to refine the CNN pixel label
predictions. The proposed CNN-FCRF model achieves an overall accuracy of 90.5%
on the ISPRS Vaihingen Dataset. | [
"cs.CV",
"eess.IV"
] |
Among all fashion attributes, color is challenging to detect due to its
subjective perception. Existing classification approaches cannot go beyond the
predefined list of discrete color names. In this paper, we argue that color
detection is a regression problem. Thus, we propose a new two-stage
architecture based on attention modules. The first stage corrects the image
illumination while detecting the main discrete color name. The second stage
combines a color-name attention (dependent on the detected color) with an
object attention (dependent on the clothing category) and finally weights a
spatial pooling over the image pixels' RGB values. We further extend our work
to garments with multiple colors. We collect a dataset where each fashion item is
labeled with a continuous color palette: we empirically show the benefits of
our approach. | [
"cs.CV",
"cs.LG"
] |
Various modifications of the Transformer have recently been used to solve the
time-series forecasting problem. We propose Query Selector, an efficient,
deterministic algorithm for building a sparse attention matrix. Experiments show
it achieves state-of-the-art results on the ETT, Helpdesk and BPI'12 datasets. | [
"cs.LG"
] |
Convolutional Neural Networks (CNNs) achieved great cognitive performance at
the expense of considerable computation load. To relieve this load,
many optimization methods have been developed to reduce model redundancy by
identifying and removing insignificant model components, for example through
weight sparsity and filter pruning. However, these works only evaluate model
components' static significance with internal parameter information, ignoring
their dynamic interaction with external inputs. With per-input feature
activation, the model component significance can dynamically change, and thus
the static methods can only achieve sub-optimal results. Therefore, we propose
a dynamic CNN optimization framework in this work. Based on the neural network
attention mechanism, we propose a comprehensive dynamic optimization framework
including (1) testing-phase channel and column feature map pruning, as well as
(2) training-phase optimization by targeted dropout. Such a dynamic
optimization framework has several benefits: (1) it can accurately
identify and aggressively remove per-input feature redundancy by considering
the model-input interaction; (2) it can maximally remove the feature
map redundancy in various dimensions thanks to the multi-dimension flexibility;
(3) the training-testing co-optimization favors dynamic pruning and helps
maintain model accuracy even at very high feature pruning ratios.
Extensive experiments show that our method brings a 37.4% to 54.5% FLOPs
reduction with negligible accuracy drop on various test networks. | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
Accurate classification of self-care problems in children who suffer from
physical and motor affliction is an important problem in the healthcare
industry. This is a difficult and time-consuming process that needs the
expertise of occupational therapists. In recent years, healthcare professionals
have opened up to the idea of using expert systems and artificial intelligence
in the diagnosis and classification of self care problems. In this study, we
propose a new deep learning based approach named Care2Vec for solving this
kind of problem, and use a real-world self-care activities dataset that is
based on a conceptual framework designed by the World Health Organization
(WHO). Care2Vec mixes unsupervised and supervised learning in a two-step
modeling process that uses autoencoders and deep neural networks. We found
that Care2Vec has better prediction accuracy than some of the traditional
methods reported in the literature for the self-care classification
problem, viz. decision trees and artificial neural networks. | [
"cs.LG",
"stat.ML"
] |
Block-sparse regularization is already well-known in active thermal imaging
and is used for multiple measurement based inverse problems. The main
bottleneck of this method is the choice of regularization parameters which
differs for each experiment. To avoid time-consuming manual selection of
regularization parameters, we propose a learned block-sparse optimization
approach using an iterative algorithm unfolded into a deep neural network. More
precisely, we show the benefits of using a learned block iterative shrinkage
thresholding algorithm that is able to learn the choice of regularization
parameters. In addition, this algorithm enables the determination of a suitable
weight matrix to solve the underlying inverse problem. Therefore, in this paper
we present the algorithm and compare it with state-of-the-art block iterative
shrinkage thresholding using synthetically generated test data and experimental
test data from active thermography for defect reconstruction. Our results show
that the use of the learned block-sparse optimization approach provides smaller
normalized mean square errors for a small fixed number of iterations than
without learning. Thus, this new approach improves the convergence
speed and needs only a few iterations to generate accurate defect
reconstruction in photothermal super resolution imaging. | [
"cs.CV",
"cs.AI",
"physics.app-ph",
"physics.comp-ph"
] |
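
A minimal PyTorch sketch of an unfolded iterative shrinkage thresholding network with learned per-layer thresholds, in the spirit of the learned block-ISTA described above. The row-wise block soft-threshold (treating the multiple measurement vectors jointly), the layer count, and the initialization are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

def block_soft_threshold(x, lam):
    """Shrink each row (block) of x by lam in l2 norm."""
    norms = x.norm(dim=1, keepdim=True).clamp_min(1e-12)
    return torch.relu(1.0 - lam / norms) * x

class LearnedBlockISTA(nn.Module):
    def __init__(self, A, n_layers=8):
        super().__init__()
        m, n = A.shape
        L = torch.linalg.matrix_norm(A, 2) ** 2          # Lipschitz constant of A^T A
        self.We = nn.Parameter(A.t() / L)                 # learned analysis operator
        self.S = nn.Parameter(torch.eye(n) - A.t() @ A / L)
        self.lam = nn.Parameter(torch.full((n_layers,), 0.1))  # learned thresholds
        self.n_layers = n_layers

    def forward(self, y):
        x = block_soft_threshold(self.We @ y, self.lam[0])
        for k in range(1, self.n_layers):
            x = block_soft_threshold(self.S @ x + self.We @ y, self.lam[k])
        return x

A = torch.randn(32, 64) / 8        # forward operator of the inverse problem
net = LearnedBlockISTA(A)
x_hat = net(torch.randn(32, 5))    # 5 measurement vectors; blocks = rows of x
print(x_hat.shape)                 # torch.Size([64, 5])
```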
We investigate the training and performance of generative adversarial
networks using the Maximum Mean Discrepancy (MMD) as critic, termed MMD GANs.
As our main theoretical contribution, we clarify the situation with bias in GAN
loss functions raised by recent work: we show that gradient estimators used in
the optimization process for both MMD GANs and Wasserstein GANs are unbiased,
but learning a discriminator based on samples leads to biased gradients for the
generator parameters. We also discuss the issue of kernel choice for the MMD
critic, and characterize the kernel corresponding to the energy distance used
for the Cramer GAN critic. Being an integral probability metric, the MMD
benefits from training strategies recently developed for Wasserstein GANs. In
experiments, the MMD GAN is able to employ a smaller critic network than the
Wasserstein GAN, resulting in a simpler and faster-training algorithm with
matching performance. We also propose an improved measure of GAN convergence,
the Kernel Inception Distance, and show how to use it to dynamically adapt
learning rates during GAN training. | [
"stat.ML",
"cs.LG"
] |
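
A minimal NumPy sketch of the unbiased squared-MMD estimate with the cubic polynomial kernel k(x, y) = (x·y/d + 1)^3 used by the Kernel Inception Distance proposed above; in practice X and Y would be Inception features of real and generated images, and the random Gaussians here are stand-ins.

```python
import numpy as np

def kid(X, Y):
    """Unbiased estimate of MMD^2 between samples X (m, d) and Y (n, d)."""
    d = X.shape[1]
    Kxx = (X @ X.T / d + 1) ** 3
    Kyy = (Y @ Y.T / d + 1) ** 3
    Kxy = (X @ Y.T / d + 1) ** 3
    m, n = len(X), len(Y)
    # Exclude diagonal terms for an unbiased estimate.
    sum_xx = (Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
    sum_yy = (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
    return sum_xx + sum_yy - 2 * Kxy.mean()

rng = np.random.default_rng(0)
print(kid(rng.normal(size=(100, 64)), rng.normal(size=(100, 64))))     # near 0
print(kid(rng.normal(size=(100, 64)), rng.normal(1.0, 1, (100, 64))))  # clearly > 0
```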
We present TICaM, a Time-of-flight In-car Cabin Monitoring dataset for
vehicle interior monitoring using a single wide-angle depth camera. Our dataset
addresses the deficiencies of currently available in-car cabin datasets in
terms of the ambit of labeled classes, recorded scenarios and provided
annotations, all at the same time. We record an exhaustive list of actions
performed while driving and provide for them multi-modal labeled images (depth,
RGB and IR), with complete annotations for 2D and 3D object detection, instance
and semantic segmentation as well as activity annotations for RGB frames.
In addition to real recordings, we provide a synthetic dataset of in-car cabin
images with the same image modalities and annotations, providing a unique
and extremely beneficial combination of synthetic and real data for effectively
training cabin monitoring systems and evaluating domain adaptation approaches.
The dataset is available at https://vizta-tof.kl.dfki.de/. | [
"cs.CV"
] |
We study a fundamental problem in computational chemistry known as molecular
conformation generation, trying to predict stable 3D structures from 2D
molecular graphs. Existing machine learning approaches usually first predict
distances between atoms and then generate a 3D structure satisfying the
distances, where noise in predicted distances may induce extra errors during 3D
coordinate generation. Inspired by the traditional force field methods for
molecular dynamics simulation, in this paper, we propose a novel approach
called ConfGF by directly estimating the gradient fields of the log density of
atomic coordinates. The estimated gradient fields allow directly generating
stable conformations via Langevin dynamics. However, the problem is very
challenging as the gradient fields are roto-translation equivariant. We notice
that estimating the gradient fields of atomic coordinates can be translated to
estimating the gradient fields of interatomic distances, and hence develop a
novel algorithm based on recent score-based generative models to effectively
estimate these gradients. Experimental results across multiple tasks show that
ConfGF outperforms previous state-of-the-art baselines by a significant margin. | [
"cs.LG",
"physics.chem-ph",
"q-bio.BM"
] |
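
A minimal sketch of the generation step described above: sampling via Langevin dynamics from a learned score (gradient-of-log-density) function. The `score_fn` here is a toy stand-in for the trained score network over atomic coordinates, and the step size and iteration count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def score_fn(x):
    # Toy score of a standard Gaussian, grad log p(x) = -x; in ConfGF this
    # would come from the trained score network over interatomic distances.
    return -x

def langevin_sample(x0, score_fn, step=1e-2, n_steps=1000):
    """Langevin dynamics: x <- x + (step/2) * score(x) + sqrt(step) * noise."""
    x = x0.copy()
    for _ in range(n_steps):
        noise = rng.normal(size=x.shape)
        x = x + 0.5 * step * score_fn(x) + np.sqrt(step) * noise
    return x

x = langevin_sample(rng.uniform(-5, 5, size=(1000, 3)), score_fn)
print(x.mean(0), x.std(0))   # approaches mean 0, std 1 of the target density
```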
Machine learning models, especially deep neural networks (DNNs), have been
shown to be vulnerable against adversarial examples which are carefully crafted
samples with a small magnitude of the perturbation. Such adversarial
perturbations are usually restricted by bounding their $\mathcal{L}_p$ norm
such that they are imperceptible, and thus many current defenses can exploit
this property to reduce their adversarial impact. In this paper, we instead
introduce "unrestricted" perturbations that manipulate semantically meaningful
image-based visual descriptors - color and texture - in order to generate
effective and photorealistic adversarial examples. We show that these
semantically aware perturbations are effective against JPEG compression,
feature squeezing and adversarially trained models. We also show that the
proposed methods can effectively be applied to both image classification and
image captioning tasks on complex datasets such as ImageNet and MSCOCO. In
addition, we conduct comprehensive user studies to show that our generated
semantic adversarial examples are photorealistic to humans despite large
magnitude perturbations when compared to other attacks. | [
"cs.CV"
] |
Approaches to continual learning aim to successfully learn a set of related
tasks that arrive in an online manner. Recently, several frameworks have been
developed which enable deep learning to be deployed in this learning scenario.
A key modelling decision is to what extent the architecture should be shared
across tasks. On the one hand, separately modelling each task avoids
catastrophic forgetting but it does not support transfer learning and leads to
large models. On the other hand, rigidly specifying a shared component and a
task-specific part enables task transfer and limits the model size, but it is
vulnerable to catastrophic forgetting and restricts the form of task-transfer
that can occur. Ideally, the network should adaptively identify which parts of
the network to share in a data driven way. Here we introduce such an approach
called Continual Learning with Adaptive Weights (CLAW), which is based on
probabilistic modelling and variational inference. Experiments show that CLAW
achieves state-of-the-art performance on six benchmarks in terms of overall
continual learning performance, as measured by classification accuracy, and in
terms of addressing catastrophic forgetting. | [
"stat.ML",
"cs.LG"
] |
Attention is a general reasoning mechanism that can flexibly deal with image
information, but its memory requirements have so far made it impractical for
high resolution image generation. We present Grid Partitioned Attention (GPA),
a new approximate attention algorithm that leverages a sparse inductive bias
for higher computational and memory efficiency in image domains: queries attend
only to a few keys, and spatially close queries attend to close keys due to
correlations. Our paper introduces the new attention layer, analyzes its
complexity and how the trade-off between memory usage and model power can be
tuned by the hyper-parameters. We show how such attention enables novel
deep learning architectures with copying modules that are especially useful for
conditional image generation tasks like pose morphing. Our contributions are
(i) the algorithm and code of the novel GPA layer, (ii) a novel deep
attention-copying architecture, and (iii) new state-of-the-art experimental
results on human pose morphing generation benchmarks. | [
"cs.CV",
"cs.LG"
] |
We propose Axial Transformers, a self-attention-based autoregressive model
for images and other data organized as high dimensional tensors. Existing
autoregressive models either suffer from excessively large computational
resource requirements for high dimensional data, or make compromises in terms
of distribution expressiveness or ease of implementation in order to decrease
resource requirements. Our architecture, by contrast, maintains both full
expressiveness over joint distributions over data and ease of implementation
with standard deep learning frameworks, while requiring reasonable memory and
computation and achieving state-of-the-art results on standard generative
modeling benchmarks. Our models are based on axial attention, a simple
generalization of self-attention that naturally aligns with the multiple
dimensions of the tensors in both the encoding and the decoding settings.
Notably, the proposed structure of the layers allows for the vast majority of
the context to be computed in parallel during decoding without introducing any
independence assumptions. This semi-parallel structure goes a long way to
making decoding from even a very large Axial Transformer broadly applicable. We
demonstrate state-of-the-art results for the Axial Transformer on the
ImageNet-32 and ImageNet-64 image benchmarks as well as on the BAIR Robotic
Pushing video benchmark. We open source the implementation of Axial
Transformers. | [
"cs.CV"
] |
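
A minimal PyTorch sketch of the axial attention idea described above: full self-attention applied independently along one tensor axis at a time (here the H and W axes of an image feature map), so cost grows with H + W rather than H * W. Projections are omitted (queries, keys, and values are all x) for brevity, which is an illustrative simplification.

```python
import torch
import torch.nn.functional as F

def attend_along(x, axis):
    """Single-head self-attention along `axis` of x with shape (B, H, W, C)."""
    x = x.movedim(axis, -2)                       # (..., L, C): L = axis length
    attn = F.softmax(x @ x.transpose(-1, -2) / x.shape[-1] ** 0.5, dim=-1)
    return (attn @ x).movedim(-2, axis)

def axial_attention(x):
    return attend_along(attend_along(x, 1), 2)    # rows, then columns

x = torch.randn(2, 32, 32, 64)                    # (B, H, W, C)
print(axial_attention(x).shape)                   # torch.Size([2, 32, 32, 64])
```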
Video-based person recognition is challenging due to persons being occluded or
blurred and to variation in shooting angle. Previous research has focused
on person recognition in still images, ignoring the similarity and
continuity between video frames. To tackle the challenges above, we propose a
novel Frame Aggregation and Multi-Modal Fusion (FAMF) framework for video-based
person recognition, which aggregates face features and incorporates them with
multi-modal information to identify persons in videos. For frame aggregation,
we propose a novel trainable layer based on NetVLAD (named AttentionVLAD),
which takes arbitrary number of features as input and computes a fixed-length
aggregation feature based on feature quality. We show that introducing an
attention mechanism to NetVLAD can effectively decrease the impact of
low-quality frames. For the multi-modal information of videos, we propose a
Multi-Layer Multi-Modal Attention (MLMA) module to learn the correlation of
multi-modality by adaptively updating Gram matrix. Experimental results on
iQIYI-VID-2019 dataset show that our framework outperforms other
state-of-the-art methods. | [
"cs.CV",
"cs.MM"
] |
Policy optimization methods are popular reinforcement learning algorithms,
because their incremental and on-policy nature makes them more stable than the
value-based counterparts. However, the same properties also make them slow to
converge and sample inefficient, as the on-policy requirement precludes data
reuse and the incremental updates couple large iteration complexity into the
sample complexity. These characteristics have been observed in experiments as
well as in theory in the recent work of~\citet{agarwal2020pc}, which provides a
policy optimization method PCPG that can robustly find near-optimal policies for
approximately linear Markov decision processes but suffers from an extremely
poor sample complexity compared with value-based techniques.
In this paper, we propose a new algorithm, COPOE, that overcomes the sample
complexity issue of PCPG while retaining its robustness to model
misspecification. Compared with PCPG, COPOE makes several important algorithmic
enhancements, such as enabling data reuse, and uses more refined analysis
techniques, which we expect to be more broadly applicable to designing new
reinforcement learning algorithms. The result is an improvement in sample
complexity from $\widetilde{O}(1/\epsilon^{11})$ for PCPG to
$\widetilde{O}(1/\epsilon^3)$ for COPOE, nearly bridging the gap with
value-based techniques. | [
"cs.LG"
] |
Annotating large scale datasets to train modern convolutional neural networks
is prohibitively expensive and time-consuming for many real tasks. One
alternative is to train the model on labeled synthetic datasets and apply it in
the real scenes. However, this straightforward method often fails to generalize
well mainly due to the domain bias between the synthetic and real datasets.
Many unsupervised domain adaptation (UDA) methods are introduced to address
this problem but most of them only focus on the simple classification task. In
this paper, we present a novel UDA model to solve the more complex object
detection problem in the context of autonomous driving. Our model integrates
both pixel-level and feature-level transformations to fulfill the cross-domain
detection task and can be further trained end-to-end to pursue better
performance. We employ objectives of the generative adversarial network and the
cycle consistency loss for image translation in the pixel space. To address the
potential semantic inconsistency problem, we propose region proposal based
feature adversarial training to preserve the semantics of our target objects as
well as further minimize the domain shifts. Extensive experiments are conducted
on several different datasets, and the results demonstrate the robustness and
superiority of our method. | [
"cs.CV"
] |
Real-time generic object detection on mobile platforms is a crucial but
challenging computer vision task. However, previous CNN-based detectors suffer
from enormous computational cost, which hinders them from real-time inference
in computation-constrained scenarios. In this paper, we investigate the
effectiveness of two-stage detectors in real-time generic detection and propose
a lightweight two-stage detector named ThunderNet. In the backbone part, we
analyze the drawbacks in previous lightweight backbones and present a
lightweight backbone designed for object detection. In the detection part, we
exploit an extremely efficient RPN and detection head design. To generate more
discriminative feature representation, we design two efficient architecture
blocks, Context Enhancement Module and Spatial Attention Module. Finally, we
investigate the balance between the input resolution, the backbone, and the
detection head. Compared with lightweight one-stage detectors, ThunderNet
achieves superior performance with only 40% of the computational cost on PASCAL
VOC and COCO benchmarks. Without bells and whistles, our model runs at 24.1 fps
on an ARM-based device. To the best of our knowledge, this is the first
real-time detector reported on ARM platforms. Code will be released for paper
reproduction. | [
"cs.CV"
] |
Reinforcement learning encounters major challenges in multi-agent settings,
such as scalability and non-stationarity. Recently, value function
factorization learning has emerged as a promising way to address these challenges
in collaborative multi-agent systems. However, existing methods have been
focusing on learning fully decentralized value functions, which are not
efficient for tasks requiring communication. To address this limitation, this
paper presents a novel framework for learning nearly decomposable Q-functions
(NDQ) via communication minimization, with which agents act on their own most
of the time but occasionally send messages to other agents in order for
effective coordination. This framework hybridizes value function factorization
learning and communication learning by introducing two information-theoretic
regularizers. These regularizers maximize the mutual information between
agents' action selection and communication messages while minimizing the
entropy of messages between agents. We show how to optimize these regularizers
in a way that is easily integrated with existing value function factorization
methods such as QMIX. Finally, we demonstrate that, on the StarCraft unit
micromanagement benchmark, our framework significantly outperforms baseline
methods and allows us to cut off more than $80\%$ of communication without
sacrificing the performance. The videos of our experiments are available at
https://sites.google.com/view/ndq. | [
"cs.LG",
"stat.ML"
] |
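A hedged sketch of the two regularizers described above: a variational lower bound on the mutual information between actions and messages (via a decoder q(a|m)), plus an entropy penalty on the message distribution. The decoder and the weighting beta are illustrative assumptions, not the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def ndq_style_regularizers(msg_logits, actions, q_decoder, beta=1e-2):
    """msg_logits: (batch, msg_vocab) logits of a discrete message head.
    actions: (batch,) actions taken; q_decoder maps message probs to action logits."""
    msg_probs = F.softmax(msg_logits, dim=-1)
    # Maximize I(action; message) through the variational bound E[log q(a|m)].
    mi_loss = F.cross_entropy(q_decoder(msg_probs), actions)
    # Minimize message entropy so most messages carry little information and
    # can be cut off at execution time.
    entropy = -(msg_probs * torch.log(msg_probs + 1e-8)).sum(-1).mean()
    return mi_loss + beta * entropy

q = torch.nn.Linear(16, 5)  # toy decoder: 16 message symbols -> 5 actions
loss = ndq_style_regularizers(torch.randn(32, 16), torch.randint(0, 5, (32,)), q)
```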
Detection in large-scale scenes is a challenging problem due to small objects
and extreme scale variation. It is essential to focus on the image regions of
small objects. In this paper, we propose a novel Adaptive Zoom (AdaZoom)
network as a selective magnifier with flexible shape and focal length to
adaptively zoom the focus regions for object detection. Based on policy
gradient, we construct a reinforcement learning framework for focus region
generation, with the reward formulated by object distributions. The scales and
aspect ratios of the generated regions are adaptive to the scales and
distribution of objects inside. We apply variable magnification according to
the scale of the region for adaptive multi-scale detection. We further propose
collaborative training to complementarily promote the performance of AdaZoom
and the detection network. To validate the effectiveness, we conduct extensive
experiments on VisDrone2019, UAVDT, and DOTA datasets. The experiments show
AdaZoom brings a consistent and significant improvement over different
detection networks, achieving state-of-the-art performance on these datasets,
especially outperforming the existing methods by 4.64% AP on VisDrone2019. | [
"cs.CV"
] |
The design and performance of computer vision algorithms are greatly
influenced by the hardware on which they are implemented. CPUs, multi-core
CPUs, FPGAs and GPUs have inspired new algorithms and enabled existing ideas to
be realized. This is notably the case with GPUs, which have significantly
changed the landscape of computer vision research through deep learning. As the
end of Moore's law approaches, researchers and hardware manufacturers are
exploring alternative hardware computing paradigms. Quantum computers are a
very promising alternative and offer polynomial or even exponential speed-ups
over conventional computing for some problems. This paper presents a novel
approach to image segmentation that uses new quantum computing hardware.
Segmentation is formulated as a graph cut problem that can be mapped to the
quantum approximate optimization algorithm (QAOA). This algorithm can be
implemented on current and near-term quantum computers. Encouraging results are
presented on artificial and medical imaging data. This represents an important,
practical step towards leveraging quantum computers for computer vision. | [
"cs.CV"
] |
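A small illustration of the classical side of this pipeline: casting binary segmentation as a weighted cut on the 4-connected pixel grid, whose cost function a QAOA circuit would minimize. The data term, smoothness weight, and brute-force search below are illustrative assumptions standing in for the quantum optimizer.

```python
import numpy as np

def grid_cut_cost(image, labels, lam=0.5):
    """Cut cost: disagreement with pixel intensity plus a smoothness penalty on
    edges of the 4-connected grid. labels is a binary array of image's shape."""
    unary = np.sum(labels * (1 - image) + (1 - labels) * image)   # data term
    pair = np.sum(np.abs(np.diff(labels, axis=0))) \
         + np.sum(np.abs(np.diff(labels, axis=1)))                # cut edges
    return unary + lam * pair

img = np.array([[0.9, 0.8], [0.2, 0.1]])        # bright top row, dark bottom
best = min((np.array(b).reshape(2, 2) for b in np.ndindex(2, 2, 2, 2)),
           key=lambda l: grid_cut_cost(img, l))
print(best)  # top row labelled 1, bottom row 0
```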
The sensing and perception of the environment play a decisive role in
the safe and secure operation of autonomous vehicles. This perception of the
surroundings is similar to human visual representation: the human brain
perceives the environment through different sensory channels and develops a
view-invariant representation model. In this context, different
exteroceptive sensors are deployed on the autonomous vehicle for perceiving the
environment. The most common exteroceptive sensors for autonomous vehicle
perception are the camera, Lidar and radar. Although these sensors have
demonstrated their benefits in the visible spectrum domain, in adverse
weather conditions, for instance at night, they have limited operational
capability, which may lead to fatal accidents. In this work, we explore thermal
object detection to model a view-invariant representation by employing
a self-supervised contrastive learning approach. For this purpose, we
propose a deep neural network, the Self-Supervised Thermal Network (SSTN), which
learns a feature embedding that maximizes the information between the visible
and infrared spectrum domains via contrastive learning, and later employs these
learned feature representations for thermal object detection using a
multi-scale encoder-decoder transformer network. The proposed method is
extensively evaluated on the two publicly available datasets: the FLIR-ADAS
dataset and the KAIST Multi-Spectral dataset. The experimental results
illustrate the efficacy of the proposed method. | [
"cs.CV"
] |
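A hedged sketch of a contrastive objective of the kind described above: an InfoNCE-style loss pulling embeddings of paired visible and thermal images together. The temperature and the symmetrized form are assumptions; the paper's SSTN also couples this with a transformer detection head, which is not shown.

```python
import torch
import torch.nn.functional as F

def cross_spectrum_infonce(z_rgb, z_thermal, temperature=0.1):
    """z_rgb, z_thermal: (batch, dim) embeddings of paired visible/thermal images."""
    z_rgb = F.normalize(z_rgb, dim=-1)
    z_thermal = F.normalize(z_thermal, dim=-1)
    logits = z_rgb @ z_thermal.t() / temperature   # (batch, batch) similarities
    targets = torch.arange(z_rgb.shape[0])         # matching pairs on the diagonal
    # Symmetrize: visible->thermal and thermal->visible retrieval directions.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

loss = cross_spectrum_infonce(torch.randn(8, 256), torch.randn(8, 256))
```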
Most recent graph clustering methods have resorted to Graph Auto-Encoders
(GAEs) to perform joint clustering and embedding learning. However, two
critical issues have been overlooked. First, the accumulative error, inflicted
by learning with noisy clustering assignments, degrades the effectiveness and
robustness of the clustering model. This problem is called Feature Randomness.
Second, reconstructing the adjacency matrix sets the model to learn irrelevant
similarities for the clustering task. This problem is called Feature Drift.
Interestingly, the theoretical relation between the aforementioned problems has
not yet been investigated. We study these issues from two aspects: (1) there is
a trade-off between Feature Randomness and Feature Drift when clustering and
reconstruction are performed at the same level, and (2) the problem of Feature
Drift is more pronounced for GAE models, compared with vanilla auto-encoder
models, due to the graph convolutional operation and the graph decoding design.
Motivated by these findings, we reformulate the GAE-based clustering
methodology. Our solution is two-fold. First, we propose a sampling operator
$\Xi$ that triggers a protection mechanism against the noisy clustering
assignments. Second, we propose an operator $\Upsilon$ that triggers a
correction mechanism against Feature Drift by gradually transforming the
reconstructed graph into a clustering-oriented one. As principal advantages,
our solution grants a considerable improvement in clustering effectiveness and
robustness and can be easily tailored to existing GAE models. | [
"cs.LG"
] |
LIDAR point clouds and RGB images are both essential for 3D object
detection, so many state-of-the-art 3D detection algorithms are dedicated to
fusing these two types of data effectively. However, their fusion methods based
on Bird's Eye View (BEV) or voxel format are not accurate. In this paper, we
propose a novel fusion approach named Point-based Attentive Cont-conv
Fusion(PACF) module, which fuses multi-sensor features directly on 3D points.
In addition to continuous convolution, we add a Point-Pooling and an
Attentive Aggregation to make the fused features more expressive. Moreover,
based on the PACF module, we propose a 3D multi-sensor multi-task network
called Pointcloud-Image RCNN(PI-RCNN as brief), which handles the image
segmentation and 3D object detection tasks. PI-RCNN employs a segmentation
sub-network to extract full-resolution semantic feature maps from images and
then fuses the multi-sensor features via the powerful PACF module. Benefiting
from the effectiveness of the PACF module and the expressive semantic features
from the segmentation module, PI-RCNN improves 3D object detection considerably. We
demonstrate the effectiveness of the PACF module and PI-RCNN on the KITTI 3D
Detection benchmark, and our method can achieve state-of-the-art on the metric
of 3D AP. | [
"cs.CV"
] |
Nearest Neighbor Search (NNS) is a central task in knowledge representation,
learning, and reasoning. There is vast literature on efficient algorithms for
constructing data structures and performing exact and approximate NNS. This
paper studies NNS under Uncertainty (NNSU). Specifically, consider the setting
in which an NNS algorithm has access only to a stochastic distance oracle that
provides a noisy, unbiased estimate of the distance between any pair of points,
rather than the exact distance. This models many situations of practical
importance, including NNS based on human similarity judgements, physical
measurements, or fast, randomized approximations to exact distances. A naive
approach to NNSU could employ any standard NNS algorithm and repeatedly query
and average results from the stochastic oracle (to reduce noise) whenever it
needs a pairwise distance. The problem is that a sufficient number of repeated
queries is unknown in advance; e.g., a point may be distant from all but one
other point (crude distance estimates suffice) or it may be close to a large
number of other points (accurate estimates are necessary). This paper shows how
ideas from cover trees and multi-armed bandits can be leveraged to develop an
NNSU algorithm that has optimal dependence on the dataset size and the
(unknown) geometry of the dataset. | [
"stat.ML",
"cs.LG"
] |
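A toy sketch of the adaptive-querying idea behind NNSU: successive elimination with confidence intervals decides how many noisy oracle calls each candidate needs, so far-away points are discarded from crude estimates while close contenders get sampled more. The sub-Gaussian noise level, confidence schedule, and the omission of the cover-tree structure are simplifying assumptions.

```python
import numpy as np

def noisy_nn(query_oracle, n_points, sigma=1.0, delta=0.05, max_rounds=10000):
    """query_oracle(i) returns a noisy unbiased estimate of dist(query, point i)."""
    active = list(range(n_points))
    sums = np.zeros(n_points)
    counts = np.zeros(n_points)
    for t in range(1, max_rounds + 1):
        for i in active:                    # one more sample for each survivor
            sums[i] += query_oracle(i)
            counts[i] += 1
        means = sums[active] / counts[active]
        radius = sigma * np.sqrt(2 * np.log(4 * n_points * t**2 / delta) / t)
        best = means.min()
        # Eliminate points whose lower bound exceeds the best upper bound.
        active = [i for i, m in zip(active, means) if m - radius <= best + radius]
        if len(active) == 1:
            return active[0]
    return active[int(np.argmin(sums[active] / counts[active]))]

rng = np.random.default_rng(0)
true_d = np.array([3.0, 1.0, 2.5, 2.6])
print(noisy_nn(lambda i: true_d[i] + rng.normal(0, 1.0), len(true_d)))  # 1
```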
Image steganography is a procedure for hiding messages inside pictures. While
other techniques such as cryptography aim to prevent adversaries from reading
the secret message, steganography aims to hide the presence of the message
itself. In this paper, we propose a novel technique for hiding arbitrary binary
data in images using generative adversarial networks which allow us to optimize
the perceptual quality of the images produced by our model. We show that our
approach achieves state-of-the-art payloads of 4.4 bits per pixel, evades
detection by steganalysis tools, and is effective on images from multiple
datasets. To enable fair comparisons, we have released an open source library
that is available online at https://github.com/DAI-Lab/SteganoGAN. | [
"cs.CV",
"cs.LG",
"cs.MM",
"stat.ML"
] |
Background: Choosing the best-performing method in terms of outcome
prediction or variable selection is a recurring problem in prognosis studies,
leading to many publications on methods comparison. But some aspects have
received little attention. First, most comparison studies treat prediction
performance and variable selection aspects separately. Second, methods are
either compared within a binary outcome setting (based on an arbitrarily chosen
delay) or within a survival setting, but not both. In this paper, we propose a
comparison methodology to weigh up these different settings both in terms of
prediction and variable selection, while incorporating advanced machine
learning strategies. Methods: Using a high-dimensional case study on a
sickle-cell disease (SCD) cohort, we compare 8 statistical methods. In the
binary outcome setting, we consider logistic regression (LR), support vector
machine (SVM), random forest (RF), gradient boosting (GB) and neural network
(NN); while on the survival analysis setting, we consider the Cox Proportional
Hazards (PH), the CURE and the C-mix models. We then compare performances of
all methods both in terms of risk prediction and variable selection, with a
focus on the use of the Elastic-Net regularization technique. Results: Among
all the statistical methods assessed, the C-mix model yields the best
performance in both settings, as well as interesting
interpretation aspects. There is some consistency in selected covariates across
methods within a setting, but not much across the two settings. Conclusions: It
appears that learning within the survival setting first, and then going back
to a binary prediction using the survival estimates, significantly enhances
binary predictions. | [
"stat.ML",
"cs.LG"
] |
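A minimal sketch of the Elastic-Net-regularized binary-outcome setting used in the comparison, on synthetic data; the survival-side models (Cox PH, CURE, C-mix) require dedicated libraries and are not shown.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=300, n_features=50, n_informative=5,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
# Elastic-Net penalty mixes L1 (sparsity, i.e. variable selection) and L2.
clf = LogisticRegression(penalty="elasticnet", solver="saga",
                         l1_ratio=0.5, C=1.0, max_iter=5000).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
print("selected covariates:", (clf.coef_ != 0).sum())  # sparsity from the L1 part
```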
Normalizing flows, autoregressive models, variational autoencoders (VAEs),
and deep energy-based models are among competing likelihood-based frameworks
for deep generative learning. Among them, VAEs have the advantage of fast and
tractable sampling and easy-to-access encoding networks. However, they are
currently outperformed by other models such as normalizing flows and
autoregressive models. While the majority of the research in VAEs is focused on
the statistical challenges, we explore the orthogonal direction of carefully
designing neural architectures for hierarchical VAEs. We propose Nouveau VAE
(NVAE), a deep hierarchical VAE built for image generation using depth-wise
separable convolutions and batch normalization. NVAE is equipped with a
residual parameterization of Normal distributions and its training is
stabilized by spectral regularization. We show that NVAE achieves
state-of-the-art results among non-autoregressive likelihood-based models on
the MNIST, CIFAR-10, CelebA 64, and CelebA HQ datasets and it provides a strong
baseline on FFHQ. For example, on CIFAR-10, NVAE pushes the state-of-the-art
from 2.98 to 2.91 bits per dimension, and it produces high-quality images on
CelebA HQ. To the best of our knowledge, NVAE is the first successful VAE
applied to natural images as large as 256$\times$256 pixels. The source code is
available at https://github.com/NVlabs/NVAE . | [
"stat.ML",
"cs.CV",
"cs.LG"
] |
The construction of efficient and effective decision trees remains a key
topic in machine learning because of their simplicity and flexibility. Many
heuristic algorithms have been proposed to construct near-optimal decision
trees. ID3, C4.5 and CART are classical decision tree algorithms and the split
criteria they used are Shannon entropy, Gain Ratio and Gini index respectively.
All these split criteria seem independent; in fact, they can be unified in
a Tsallis entropy framework. Tsallis entropy is a generalization of Shannon
entropy and provides a new approach to enhance decision trees' performance with
an adjustable parameter $q$. In this paper, a Tsallis Entropy Criterion (TEC)
algorithm is proposed to unify Shannon entropy, Gain Ratio and Gini index,
which generalizes the split criteria of decision trees. More importantly, we
reveal the relations between Tsallis entropy with different $q$ and other split
criteria. Experimental results on UCI data sets indicate that the TEC algorithm
achieves statistically significant improvement over the classical algorithms. | [
"stat.ML",
"cs.AI",
"cs.LG"
] |
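A small sketch of the unifying criterion: Tsallis entropy $S_q(p) = (1 - \sum_i p_i^q)/(q - 1)$ recovers Shannon entropy in the limit q -> 1 and the Gini index at q = 2, so a single tunable q spans the classical split criteria. The example class distribution is illustrative.

```python
import numpy as np

def tsallis_entropy(p, q):
    p = np.asarray(p, dtype=float)
    if np.isclose(q, 1.0):                      # limit q -> 1: Shannon entropy
        return -np.sum(p[p > 0] * np.log(p[p > 0]))
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

p = [0.5, 0.3, 0.2]
print(tsallis_entropy(p, 1.0))   # Shannon entropy (nats)
print(tsallis_entropy(p, 2.0))   # 1 - sum p_i^2, the Gini index
print(tsallis_entropy(p, 1.5))   # an intermediate criterion tunable via q
```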
Recent work on Neural Radiance Fields (NeRF) showed how neural networks can
be used to encode complex 3D environments that can be rendered
photorealistically from novel viewpoints. Rendering these images is very
computationally demanding and recent improvements are still a long way from
enabling interactive rates, even on high-end hardware. Motivated by scenarios
on mobile and mixed reality devices, we propose FastNeRF, the first NeRF-based
system capable of rendering high fidelity photorealistic images at 200Hz on a
high-end consumer GPU. The core of our method is a graphics-inspired
factorization that allows for (i) compactly caching a deep radiance map at each
position in space, (ii) efficiently querying that map using ray directions to
estimate the pixel values in the rendered image. Extensive experiments show
that the proposed method is 3000 times faster than the original NeRF algorithm
and at least an order of magnitude faster than existing work on accelerating
NeRF, while maintaining visual quality and extensibility. | [
"cs.CV"
] |
Unsupervised learning poses one of the most difficult challenges in computer
vision today. The task has an immense practical value with many applications in
artificial intelligence and emerging technologies, as large quantities of
unlabeled videos can be collected at relatively low cost. In this paper, we
address the unsupervised learning problem in the context of detecting the main
foreground objects in single images. We train a student deep network to predict
the output of a teacher pathway that performs unsupervised object discovery in
videos or large image collections. Our approach is different from published
methods on unsupervised object discovery. We move the unsupervised learning
phase to training time; at test time we apply the standard
feed-forward processing along the student pathway. This strategy has the
benefit of allowing increased generalization possibilities during training,
while remaining fast at testing. Our unsupervised learning algorithm can run
over several generations of student-teacher training. Thus, a group of student
networks trained in the first generation collectively create the teacher at the
next generation. In experiments our method achieves top results on three
current datasets for object discovery in video, unsupervised image segmentation
and saliency detection. At test time the proposed system is fast, being one to
two orders of magnitude faster than published unsupervised methods. | [
"cs.CV"
] |
We identify a phenomenon, which we refer to as multi-model forgetting, that
occurs when sequentially training multiple deep networks with partially-shared
parameters; the performance of previously-trained models degrades as one
optimizes a subsequent one, due to the overwriting of shared parameters. To
overcome this, we introduce a statistically-justified weight plasticity loss
that regularizes the learning of a model's shared parameters according to their
importance for the previous models, and demonstrate its effectiveness when
training two models sequentially and for neural architecture search. Adding
weight plasticity in neural architecture search preserves the best models to
the end of the search and yields improved results in both natural language
processing and computer vision tasks. | [
"cs.LG",
"stat.ML"
] |
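A hedged sketch of a weight-plasticity penalty in the spirit described above: shared parameters are pulled toward their values under the previously trained model, scaled by per-parameter importance estimates. The quadratic, Fisher-style form is an assumption about the exact loss used.

```python
import torch

def weight_plasticity_loss(shared_params, old_values, importances):
    """Quadratic penalty keeping important shared weights near the old model."""
    loss = 0.0
    for p, p_old, omega in zip(shared_params, old_values, importances):
        loss = loss + (omega * (p - p_old) ** 2).sum()
    return loss

# Toy check: zero penalty when the shared weights have not moved.
p = [torch.randn(3, 3, requires_grad=True)]
snapshot = [p[0].detach().clone()]
print(weight_plasticity_loss(p, snapshot, [torch.ones(3, 3)]))  # tensor(0.)
# In training the second model one would use (names are illustrative):
# total = task_loss + lam * weight_plasticity_loss(shared, snapshot, fisher)
```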
A robust and fast automatic moving object detection and tracking system is
essential to characterize target object and extract spatial and temporal
information for different functionalities including video surveillance systems,
urban traffic monitoring and navigation, and robotics. In this dissertation, I
present a collaborative Spatial Pyramid Context-aware moving object detection
and Tracking system. The proposed visual tracker is composed of one master
tracker that usually relies on visual object features and two auxiliary
trackers based on object temporal motion information that will be called
dynamically to assist the master tracker. SPCT utilizes image spatial context at
different levels to make the video tracking system resistant to occlusion,
background noise and improve target localization accuracy and robustness. We
chose a pre-selected seven-channel complementary features including RGB color,
intensity and spatial pyramid of HoG to encode object color, shape and spatial
layout information. We exploit integral histogram as building block to meet the
demands of real-time performance. A novel fast algorithm is presented to
accurately evaluate spatially weighted local histograms in constant time
complexity using an extension of the integral histogram method. Different
techniques are explored to efficiently compute integral histogram on GPU
architecture and applied for fast spatio-temporal median computations and 3D
face reconstruction texturing. We propose a multi-component framework based on
semantic fusion of motion information with projected building footprint map to
significantly reduce the false alarm rate in urban scenes with many tall
structures. The experiments on extensive VOTC2016 benchmark dataset and aerial
video confirm that combining complementary tracking cues in an intelligent
fusion framework enables persistent tracking for Full Motion Video and Wide
Aerial Motion Imagery. | [
"cs.CV"
] |
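A minimal sketch of the integral-histogram building block mentioned above: one cumulative sum per bin makes the histogram of any rectangular region a four-corner lookup, i.e. constant time per query. The bin count and test image are illustrative.

```python
import numpy as np

def integral_histogram(image, n_bins=8):
    bins = np.minimum((image * n_bins).astype(int), n_bins - 1)    # (H, W)
    onehot = (bins[..., None] == np.arange(n_bins))                # (H, W, B)
    ih = onehot.cumsum(0).cumsum(1)                                # (H, W, B)
    return np.pad(ih, ((1, 0), (1, 0), (0, 0)))                   # zero border

def region_hist(ih, top, left, bottom, right):
    """Histogram of image[top:bottom, left:right] in O(bins) time."""
    return (ih[bottom, right] - ih[top, right]
            - ih[bottom, left] + ih[top, left])

img = np.random.default_rng(0).random((64, 64))
ih = integral_histogram(img)
assert region_hist(ih, 0, 0, 64, 64).sum() == 64 * 64  # whole-image sanity check
```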
Explainability techniques for Graph Neural Networks still have a long way to
go compared to explanations available for both neural and decision
tree-based models trained on tabular data. Using a task that straddles both
graphs and tabular data, namely Entity Matching, we comment on key aspects of
explainability that are missing in GNN model explanations. | [
"cs.LG",
"cs.AI"
] |
Black-box optimizers that explore in parameter space have often been shown to
outperform more sophisticated action space exploration methods developed
specifically for the reinforcement learning problem. We examine these black-box
methods closely to identify situations in which they are worse than action
space exploration methods and those in which they are superior. Through simple
theoretical analyses, we prove that complexity of exploration in parameter
space depends on the dimensionality of parameter space, while complexity of
exploration in action space depends on both the dimensionality of action space
and horizon length. This is also demonstrated empirically by comparing simple
exploration methods on several model problems, including Contextual Bandit,
Linear Regression and Reinforcement Learning in continuous control. | [
"cs.LG",
"cs.AI",
"cs.RO",
"stat.ML"
] |
Due to their black-box and data-hungry nature, deep learning techniques are
not yet widely adopted for real-world applications in critical domains, like
healthcare and justice. This paper presents Memory Wrap, a plug-and-play
extension to any image classification model. Memory Wrap improves both
data-efficiency and model interpretability, adopting a content-attention
mechanism between the input and some memories of past training samples. We show
that Memory Wrap outperforms standard classifiers when it learns from a limited
set of data, and it reaches comparable performance when it learns from the full
dataset. We discuss how its structure and content-attention mechanisms make
predictions interpretable, compared to standard classifiers. To this end, we
both show a method to build explanations by examples and counterfactuals, based
on the memory content, and how to exploit them to get insights about its
decision process. We test our approach on image classification tasks using
several architectures on three different datasets, namely CIFAR10, SVHN, and
CINIC10. | [
"cs.LG"
] |
Time series are often complex and rich in information but sparsely labeled
and therefore challenging to model. In this paper, we propose a self-supervised
framework for learning generalizable representations for non-stationary time
series. Our approach, called Temporal Neighborhood Coding (TNC), takes
advantage of the local smoothness of a signal's generative process to define
neighborhoods in time with stationary properties. Using a debiased contrastive
objective, our framework learns time series representations by ensuring that in
the encoding space, the distribution of signals from within a neighborhood is
distinguishable from the distribution of non-neighboring signals. Our
motivation stems from the medical field, where the ability to model the dynamic
nature of time series data is especially valuable for identifying, tracking,
and predicting the underlying patients' latent states in settings where
labeling data is practically impossible. We compare our method to recently
developed unsupervised representation learning approaches and demonstrate
superior performance on clustering and classification tasks for multiple
datasets. | [
"cs.LG",
"stat.ML"
] |
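A hedged sketch of a TNC-style objective: a discriminator is trained to tell whether two window encodings come from the same temporal neighborhood, with distant windows treated as unlabeled rather than strictly negative. The toy bilinear discriminator and the weight w are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def tnc_loss(disc, z_t, z_pos, z_dist, w=0.05):
    """z_t, z_pos: encodings of neighbouring windows; z_dist: distant windows."""
    ones = torch.ones(len(z_t), 1)
    pos = F.binary_cross_entropy_with_logits(disc(z_t, z_pos), ones)
    neg = F.binary_cross_entropy_with_logits(disc(z_t, z_dist), 0 * ones)
    # Debiasing: distant windows may still be neighbours in distribution,
    # so give them a small weight as positives instead of pure negatives.
    neg_as_pos = F.binary_cross_entropy_with_logits(disc(z_t, z_dist), ones)
    return pos + (1 - w) * neg + w * neg_as_pos

disc = lambda a, b: (a * b).sum(-1, keepdim=True)  # toy bilinear discriminator
loss = tnc_loss(disc, torch.randn(16, 32), torch.randn(16, 32), torch.randn(16, 32))
```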
Objective: To validate and compare the performance of eight available deep
learning architectures in grading the severity of glaucoma based on color
fundus images. Materials and Methods: We retrospectively collected a dataset of
5978 fundus images and their glaucoma severities were annotated by the
consensus of two experienced ophthalmologists. We preprocessed the images to
generate global and local regions of interest (ROIs), namely the global
field-of-view images and the local disc region images. We then divided the
generated images into three independent sub-groups for training, validation,
and testing purposes. With the datasets, eight convolutional neural networks
(CNNs) (i.e., VGG16, VGG19, ResNet, DenseNet, InceptionV3, InceptionResNet,
Xception, and NASNetMobile) were trained separately to grade glaucoma severity,
and validated quantitatively using the area under the receiver operating
characteristic (ROC) curve and the quadratic kappa score. Results: The CNNs,
except VGG16 and VGG19, achieved average kappa scores of 80.36% and 78.22% when
trained from scratch on global and local ROIs, and 85.29% and 82.72% when
fine-tuned using the pre-trained weights, respectively. VGG16 and VGG19
achieved reasonable accuracy when trained from scratch, but they failed when
using pre-trained weights for global and local ROIs. Among these CNNs, the
DenseNet had the highest classification accuracy (i.e., 75.50%) based on
pre-trained weights when using global ROIs, as compared to 65.50% when using
local ROIs. Conclusion: The experiments demonstrated the feasibility of the
deep learning technology in grading glaucoma severity. In particular, global
field-of-view images contain relatively richer information that may be critical
for glaucoma assessment, suggesting that we should use the entire field-of-view
of a fundus image for training a deep learning network. | [
"cs.CV"
] |
Understanding where people are looking is an informative social cue. In this
work, we present Gaze360, a large-scale gaze-tracking dataset and method for
robust 3D gaze estimation in unconstrained images. Our dataset consists of 238
subjects in indoor and outdoor environments with labelled 3D gaze across a wide
range of head poses and distances. It is the largest publicly available dataset
of its kind in both number of subjects and variety, made possible by a simple and
efficient collection method. Our proposed 3D gaze model extends existing models
to include temporal information and to directly output an estimate of gaze
uncertainty. We demonstrate the benefits of our model via an ablation study,
and show its generalization performance via a cross-dataset evaluation against
other recent gaze benchmark datasets. We furthermore propose a simple
self-supervised approach to improve cross-dataset domain adaptation. Finally,
we demonstrate an application of our model for estimating customer attention in
a supermarket setting. Our dataset and models are available at
http://gaze360.csail.mit.edu . | [
"cs.CV"
] |
In the Gastric Histopathology Image Classification (GHIC) tasks, which are
usually weakly supervised learning missions, there is inevitably redundant
information in the images. Therefore, designing networks that can focus on
effective distinguishing features has become a popular research topic. In this
paper, to accomplish the tasks of GHIC superiorly and to assist pathologists in
clinical diagnosis, an intelligent Hierarchical Conditional Random Field based
Attention Mechanism (HCRF-AM) model is proposed. The HCRF-AM model consists of
an Attention Mechanism (AM) module and an Image Classification (IC) module. In
the AM module, an HCRF model is built to extract attention regions. In the IC
module, a Convolutional Neural Network (CNN) model is trained with the
attention regions selected and then an algorithm called Classification
Probability-based Ensemble Learning is applied to obtain the image-level
results from patch-level output of the CNN. In the experiment, a classification
specificity of 96.67% is achieved on a gastric histopathology dataset with 700
images. Our HCRF-AM model demonstrates high classification performance and
shows its effectiveness and future potential in the GHIC field. | [
"cs.CV"
] |
Knowledge distillation is an effective way for model compression in deep
learning. Given a large model (i.e., teacher model), it aims to improve the
performance of a compact model (i.e., student model) by transferring the
information from the teacher. An essential challenge in knowledge distillation
is to identify the appropriate information to transfer. In early works, only
the final output of the teacher model is used as the soft label to help the
training of student models. Recently, the information from intermediate layers
is also adopted for better distillation. In this work, we aim to optimize the
process of knowledge distillation from the perspective of kernel matrix. The
output of each layer in a neural network can be considered as a new feature
space generated by applying a kernel function on original images. Hence, we
propose to transfer the corresponding kernel matrix (i.e., Gram matrix) from
teacher models to student models for distillation. However, the size of the
whole kernel matrix is quadratic to the number of examples. To improve the
efficiency, we decompose the original kernel matrix with Nystr{\"{o}}m method
and then transfer the partial matrix obtained with landmark points, whose size
is linear in the number of examples. More importantly, our theoretical analysis
shows that the difference between the original kernel matrices of teacher and
student can be well bounded by that of their corresponding partial matrices.
Finally, a new strategy of generating appropriate landmark points is proposed
for better distillation. The empirical study on benchmark data sets
demonstrates the effectiveness of the proposed algorithm. Code will be
released. | [
"cs.LG",
"cs.CV",
"stat.ML"
] |
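A small sketch of the Nyström step described above: the full Gram matrix of a layer's outputs is approximated from a few landmark points, so the student only needs to match an n-by-m partial matrix. Random landmark sampling and the RBF kernel are simplifying assumptions.

```python
import numpy as np

def nystrom_gram(feats, m=20, gamma=0.5, seed=0):
    """feats: (n, d) layer outputs. Returns the (n, m) partial matrix C and the
    rank-m approximation C W^+ C^T of the full RBF Gram matrix."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(feats), size=m, replace=False)       # landmark points
    sq = ((feats[:, None] - feats[None, idx]) ** 2).sum(-1)   # (n, m) sq. distances
    C = np.exp(-gamma * sq)                                   # kernel to landmarks
    W = C[idx]                                                # (m, m) landmark block
    return C, C @ np.linalg.pinv(W) @ C.T

X = np.random.default_rng(1).normal(size=(200, 16))
C, K_approx = nystrom_gram(X, m=40)
print(C.shape, K_approx.shape)  # (200, 40) (200, 200)
```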
Image captioning is a multimodal task involving computer vision and natural
language processing, where the goal is to learn a mapping from the image to its
natural language description. In general, the mapping function is learned from
a training set of image-caption pairs. However, for some languages, a large-scale
image-caption paired corpus might not be available. We present an approach to
this unpaired image captioning problem by language pivoting. Our method can
effectively capture the characteristics of an image captioner from the pivot
language (Chinese) and align it to the target language (English) using another
pivot-target (Chinese-English) sentence parallel corpus. We evaluate our method
on two image-to-English benchmark datasets: MSCOCO and Flickr30K. Quantitative
comparisons against several baseline approaches demonstrate the effectiveness
of our method. | [
"cs.CV"
] |
Self-attention has emerged as a vital component of state-of-the-art
sequence-to-sequence models for natural language processing in recent years,
brought to the forefront by pre-trained bi-directional Transformer models. Its
effectiveness is partly due to its non-sequential architecture, which promotes
scalability and parallelism but limits the model to inputs of a bounded length.
In particular, such architectures perform poorly on algorithmic tasks, where
the model must learn a procedure which generalizes to input lengths unseen in
training, a capability we refer to as inductive generalization. Identifying the
computational limits of existing self-attention mechanisms, we propose I-BERT,
a bi-directional Transformer that replaces positional encodings with a
recurrent layer. The model inductively generalizes on a variety of algorithmic
tasks where state-of-the-art Transformer models fail to do so. We also test our
method on masked language modeling tasks where training and validation sets are
partitioned to verify inductive generalization. Out of three algorithmic and
two natural language inductive generalization tasks, I-BERT achieves
state-of-the-art results on four tasks. | [
"cs.LG",
"stat.ML"
] |
Different technologies have been proposed to provide indoor localisation:
magnetic field, Bluetooth, WiFi, etc. Among them, WiFi is the one with the
highest availability and highest accuracy. This fact allows for an ubiquitous
accurate localisation available for almost any environment and any device.
However, WiFi-based localisation is still an open problem.
In this article, we propose a new WiFi-based indoor localisation system that
takes advantage of the great ability of Convolutional Neural Networks in
classification problems. Three different approaches were used to achieve this
goal: a custom architecture called WiFiNet, designed and trained specifically to
solve this problem, and the most popular pre-trained networks used with both
transfer learning and feature extraction.
Results indicate that WiFiNet is a great approach for indoor localisation
in a medium-sized environment (30 positions and 113 access points), as it
reduces the mean localisation error (by 33%) and the processing time when compared
with state-of-the-art WiFi indoor localisation algorithms such as SVM. | [
"cs.LG",
"cs.NI"
] |
Deep supervised learning has achieved great success in the last decade.
However, its deficiencies of dependence on manual labels and vulnerability to
attacks have driven people to explore a better solution. As an alternative,
self-supervised learning attracts many researchers for its soaring performance
on representation learning in the last several years. Self-supervised
representation learning leverages input data itself as supervision and benefits
almost all types of downstream tasks. In this survey, we take a look into new
self-supervised learning methods for representation in computer vision, natural
language processing, and graph learning. We comprehensively review the existing
empirical methods and summarize them into three main categories according to
their objectives: generative, contrastive, and generative-contrastive
(adversarial). We further investigate related theoretical analysis work to
provide deeper thoughts on how self-supervised learning works. Finally, we
briefly discuss open problems and future directions for self-supervised
learning. An outline slide for the survey is provided. | [
"cs.LG",
"stat.ML"
] |
We address an essential problem in computer vision, that of unsupervised
object segmentation in video, where a main object of interest in a video
sequence should be automatically separated from its background. An efficient
solution to this task would enable large-scale video interpretation at a high
semantic level in the absence of the costly manually labeled ground truth. We
propose an efficient unsupervised method for generating foreground object
soft-segmentation masks based on automatic selection and learning from highly
probable positive features. We show that such features can be selected
efficiently by taking into consideration the spatio-temporal, appearance and
motion consistency of the object during the whole observed sequence. We also
emphasize the role of the contrasting properties between the foreground object
and its background. Our model is created in two stages: we start from
pixel-level analysis, on top of which we add a regression model trained on a
descriptor that considers information over groups of pixels and is both
discriminative and invariant to many changes that the object undergoes
throughout the video. We also present theoretical properties of our
unsupervised learning method, which under some mild constraints is guaranteed to
learn a correct discriminative classifier even in the unsupervised case. Our
method achieves competitive and even state-of-the-art results on the
challenging Youtube-Objects and SegTrack datasets, while being at least one
order of magnitude faster than the competition. We believe that the competitive
performance of our method in practice, along with its theoretical properties,
constitute an important step towards solving unsupervised discovery in video. | [
"cs.CV"
] |
Attention is sparse in vision transformers. We observe that the final prediction
in vision transformers is based on only a subset of the most informative tokens,
which is sufficient for accurate image recognition. Based on this observation,
we propose a dynamic token sparsification framework to prune redundant tokens
progressively and dynamically based on the input. Specifically, we devise a
lightweight prediction module to estimate the importance score of each token
given the current features. The module is added to different layers to prune
redundant tokens hierarchically. To optimize the prediction module in an
end-to-end manner, we propose an attention masking strategy to differentiably
prune a token by blocking its interactions with other tokens. Benefiting from
the nature of self-attention, the unstructured sparse tokens are still hardware
friendly, which makes our framework easy to achieve actual speed-up. By
hierarchically pruning 66% of the input tokens, our method greatly reduces
FLOPs by 31%~37% and improves the throughput by over 40%, while the drop in
accuracy is within 0.5% for various vision transformers. Equipped with the
dynamic token sparsification framework, DynamicViT models can achieve very
competitive complexity/accuracy trade-offs compared to state-of-the-art CNNs
and vision transformers on ImageNet. Code is available at
https://github.com/raoyongming/DynamicViT | [
"cs.CV",
"cs.AI",
"cs.LG"
] |
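A hedged sketch of the token-scoring-and-pruning step: a lightweight module predicts a keep score per token and the lowest-scoring tokens are dropped. The hard top-k shown here corresponds to inference-time behavior; at training time the paper instead masks attention differentiably. The keep ratio and scorer are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TokenPruner(nn.Module):
    def __init__(self, dim=192, keep_ratio=0.7):
        super().__init__()
        # Lightweight prediction module estimating per-token importance.
        self.score = nn.Sequential(nn.LayerNorm(dim), nn.Linear(dim, 1))
        self.keep_ratio = keep_ratio

    def forward(self, tokens):
        """tokens: (batch, n, dim); returns the kept (batch, k, dim) tokens."""
        s = self.score(tokens).squeeze(-1)              # (batch, n) keep scores
        k = max(1, int(tokens.shape[1] * self.keep_ratio))
        idx = s.topk(k, dim=1).indices                  # most informative k tokens
        return torch.gather(
            tokens, 1, idx.unsqueeze(-1).expand(-1, -1, tokens.shape[-1]))

pruner = TokenPruner()
print(pruner(torch.randn(2, 196, 192)).shape)  # torch.Size([2, 137, 192])
```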
Graph representation learning is a ubiquitous task in machine learning where
the goal is to embed each vertex into a low-dimensional vector space. We
consider the bipartite graph and formalize its representation learning problem
as a statistical estimation problem of parameters in a semiparametric
exponential family distribution. The bipartite graph is assumed to be generated
by a semiparametric exponential family distribution, whose parametric component
is given by the proximity of outputs of two one-layer neural networks, while
nonparametric (nuisance) component is the base measure. Neural networks take
high-dimensional features as inputs and output embedding vectors. In this
setting, the representation learning problem is equivalent to recovering the
weight matrices. The main challenges of estimation arise from the nonlinearity
of activation functions and the nonparametric nuisance component of the
distribution. To overcome these challenges, we propose a pseudo-likelihood
objective based on the rank-order decomposition technique and focus on its
local geometry. We show that the proposed objective is strongly convex in a
neighborhood around the ground truth, so that a gradient descent-based method
achieves linear convergence rate. Moreover, we prove that the sample complexity
of the problem is linear in dimensions (up to logarithmic factors), which is
consistent with parametric Gaussian models. However, our estimator is robust to
any model misspecification within the exponential family, which is validated in
extensive experiments. | [
"stat.ML",
"cs.LG"
] |
Current supervised methods for facial landmark detection require a large
amount of training data and may suffer from overfitting to specific datasets
due to the massive number of parameters. We introduce a semi-supervised method
in which the crucial idea is to first generate implicit face knowledge from the
large amounts of unlabeled images of faces available today. In a first,
completely unsupervised stage, we train an adversarial autoencoder to
reconstruct faces via a low-dimensional face embedding. In a second, supervised
stage, we interleave the decoder with transfer layers to retask the generation
of color images to the prediction of landmark heatmaps. Our framework (3FabRec)
achieves state-of-the-art performance on several common benchmarks and, most
importantly, is able to maintain impressive accuracy on extremely small
training sets down to as few as 10 images. As the interleaved layers only add a
small number of parameters to the decoder, inference runs at several hundred FPS
on a GPU. | [
"cs.CV"
] |
Arguably, unsupervised learning plays a crucial role in the majority of
algorithms for processing brain imaging. A recently introduced unsupervised
approach Deep InfoMax (DIM) is a promising tool for exploring brain structure
in a flexible non-linear way. In this paper, we investigate the use of variants
of DIM in a setting of progression to Alzheimer's disease in comparison with
supervised AlexNet and ResNet inspired convolutional neural networks. As a
benchmark, we use a classification task between four groups: patients with
stable, and progressive mild cognitive impairment (MCI), with Alzheimer's
disease, and healthy controls. Our dataset comprises 828 subjects from
the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. Our
experiments highlight encouraging evidence of the high potential utility of DIM
in future neuroimaging studies. | [
"cs.LG",
"cs.NE",
"stat.ML"
] |
Perceiving the world in terms of objects and tracking them through time is a
crucial prerequisite for reasoning and scene understanding. Recently, several
methods have been proposed for unsupervised learning of object-centric
representations. However, since these models were evaluated on different
downstream tasks, it remains unclear how they compare in terms of basic
perceptual abilities such as detection, figure-ground segmentation and tracking
of objects. To close this gap, we design a benchmark with four data sets of
varying complexity and seven additional test sets featuring challenging
tracking scenarios relevant for natural videos. Using this benchmark, we
compare the perceptual abilities of four object-centric approaches: ViMON, a
video-extension of MONet, based on recurrent spatial attention, OP3, which
exploits clustering via spatial mixture models, as well as TBA and SCALOR,
which use explicit factorization via spatial transformers. Our results suggest
that the architectures with unconstrained latent representations learn more
powerful representations in terms of object detection, segmentation and
tracking than the spatial transformer based architectures. We also observe that
none of the methods are able to gracefully handle the most challenging tracking
scenarios despite their synthetic nature, suggesting that our benchmark may
provide fruitful guidance towards learning more robust object-centric video
representations. | [
"cs.CV"
] |
Sparse approximation using highly over-complete dictionaries is a
state-of-the-art tool for many imaging applications including denoising,
super-resolution, compressive sensing, light-field analysis, and object
recognition. Unfortunately, the applicability of such methods is severely
hampered by the computational burden of sparse approximation: these algorithms
are linear or super-linear in both the data dimensionality and size of the
dictionary. We propose a framework for learning the hierarchical structure of
over-complete dictionaries that enables fast computation of sparse
representations. Our method builds on tree-based strategies for nearest
neighbor matching, and presents domain-specific enhancements that are highly
efficient for the analysis of image patches. Contrary to most popular methods
for building spatial data structures, our methods rely on shallow, balanced
trees with relatively few layers. We show an extensive array of experiments on
several applications such as image denoising/superresolution, compressive
video/light-field sensing where we practically achieve 100-1000x speedup (with
a less than 1dB loss in accuracy). | [
"cs.CV"
] |
We consider log-supermodular models on binary variables, which are
probabilistic models with negative log-densities which are submodular. These
models provide probabilistic interpretations of common combinatorial
optimization tasks such as image segmentation. In this paper, we focus
primarily on parameter estimation in the models from known upper-bounds on the
intractable log-partition function. We show that the bound based on separable
optimization on the base polytope of the submodular function is always inferior
to a bound based on "perturb-and-MAP" ideas. Then, to learn parameters, given
that our approximation of the log-partition function is an expectation (over
our own randomization), we use a stochastic subgradient technique to maximize a
lower-bound on the log-likelihood. This can also be extended to conditional
maximum likelihood. We illustrate our new results in a set of experiments in
binary image denoising, where we highlight the flexibility of a probabilistic
model to learn with missing data. | [
"stat.ML",
"cs.LG"
] |
Generative adversarial networks (GANs) are able to model the complex
high-dimensional distributions of real-world data, which suggests they could be
effective for anomaly detection. However, few works have explored the use of
GANs for the anomaly detection task. We leverage recently developed GAN models
for anomaly detection, and achieve state-of-the-art performance on image and
network intrusion datasets, while being several hundred-fold faster at test
time than the only published GAN-based method. | [
"cs.LG",
"stat.ML"
] |
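A hedged sketch of a common GAN-based anomaly score of the kind leveraged here: combine how well the generator reconstructs a sample with how the discriminator's features respond to it. The recovered latent code, the feature extractor, and the mixing weight are illustrative assumptions.

```python
import torch

def gan_anomaly_score(G, D_features, x, z, alpha=0.9):
    """z: latent code recovered for x (e.g. by an encoder or optimization)."""
    recon = (x - G(z)).abs().mean(dim=tuple(range(1, x.dim())))    # residual
    feat = (D_features(x) - D_features(G(z))).abs().mean(dim=-1)   # feature match
    return alpha * recon + (1 - alpha) * feat  # high score -> likely anomalous

G = torch.nn.Linear(8, 16)            # toy generator: latent 8 -> "image" 16
D_features = torch.nn.Linear(16, 4)   # toy discriminator feature extractor
score = gan_anomaly_score(G, D_features, torch.randn(5, 16), torch.randn(5, 8))
print(score.shape)  # torch.Size([5])
```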
The construction of Mapper has emerged in the last decade as a powerful and
effective topological data analysis tool that approximates and generalizes
other topological summaries, such as the Reeb graph, the contour tree, and split
and join trees. In this paper, we study the parallel analysis of the
construction of Mapper. We give a provably correct parallel algorithm to
execute Mapper on multiple processors and discuss the performance results that
compare our approach to a reference sequential Mapper implementation. We report
the performance experiments that demonstrate the efficiency of our method. | [
"cs.CV",
"cs.CG",
"cs.DC",
"stat.ML"
] |
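A compact sketch of the sequential Mapper construction that the parallel algorithm speeds up: cover the lens values with overlapping intervals, cluster each preimage, and connect clusters that share points. The interval count, overlap, and clustering routine are illustrative choices.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def mapper(points, lens, n_intervals=6, overlap=0.3, eps=0.5):
    lo, hi = lens.min(), lens.max()
    step = (hi - lo) / n_intervals
    nodes, edges = [], set()
    for i in range(n_intervals):
        a = lo + i * step - overlap * step          # overlapping cover interval
        b = lo + (i + 1) * step + overlap * step
        members = np.where((lens >= a) & (lens <= b))[0]
        if len(members) == 0:
            continue
        labels = DBSCAN(eps=eps, min_samples=1).fit_predict(points[members])
        for lab in np.unique(labels):
            nodes.append(set(members[labels == lab]))
    for i in range(len(nodes)):          # edge if two clusters share a point
        for j in range(i + 1, len(nodes)):
            if nodes[i] & nodes[j]:
                edges.add((i, j))
    return nodes, edges

pts = np.random.default_rng(0).normal(size=(300, 2))
nodes, edges = mapper(pts, lens=pts[:, 0])
print(len(nodes), "nodes,", len(edges), "edges")
```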
Finding a generally accepted formal definition of a disentangled
representation in the context of an agent behaving in an environment is an
important challenge towards the construction of data-efficient autonomous
agents. Higgins et al. recently proposed Symmetry-Based Disentangled
Representation Learning, a definition based on a characterization of symmetries
in the environment using group theory. We build on their work and make
observations, theoretical and empirical, that lead us to argue that
Symmetry-Based Disentangled Representation Learning cannot only be based on
static observations: agents should interact with the environment to discover
its symmetries. Our experiments can be reproduced in Colab and the code is
available on GitHub. | [
"cs.LG",
"stat.ML"
] |
Segmentation and analysis of individual pores and grains of mudrocks from
scanning electron microscope images is non-trivial because of noise, imaging
artifacts, variation in pixel grayscale values across images, and overlaps in
grayscale values among different physical features such as silt grains, clay
grains, and pores in an image, which make their identification difficult.
Moreover, because grains and pores often have overlapping grayscale values,
direct application of threshold-based segmentation techniques is not
sufficient. Recent advances in the field of computer vision have made it easier
and faster to segment images and identify multiple occurrences of such features
in an image, provided that ground-truth data for training the algorithm is
available. Here, we propose a deep learning SEM image segmentation model,
MudrockNet, based on Google's DeepLab-v3+ architecture implemented with the
TensorFlow library. The ground-truth data was obtained from an image-processing
workflow applied to scanning electron microscope images of uncemented muds from
the Kumano Basin offshore Japan at depths < 1.1 km. The trained deep learning
model obtained a pixel-accuracy about 90%, and predictions for the test data
obtained a mean intersection over union (IoU) of 0.6591 for silt grains and
0.6642 for pores. We also compared our model with the random forest classifier
using trainable Weka segmentation in ImageJ, and it was observed that
MudrockNet gave better predictions for both silt grains and pores. The size,
concentration, and spatial arrangement of the silt and clay grains can affect
the petrophysical properties of a mudrock, and an automated method to
accurately identify the different grains and pores in mudrocks can help improve
reservoir and seal characterization for petroleum exploration and anthropogenic
waste sequestration. | [
"cs.CV",
"physics.geo-ph",
"I.4.6; I.4.3"
] |
Optical flow is inherently a 2D search problem, and thus the computational
complexity grows quadratically with respect to the search window, making
large-displacement matching infeasible for high-resolution images. In this paper, we
take inspiration from Transformers and propose a new method for high-resolution
optical flow estimation with significantly less computation. Specifically, a 1D
attention operation is first applied in the vertical direction of the target
image, and then a simple 1D correlation in the horizontal direction of the
attended image is able to achieve 2D correspondence modeling effect. The
directions of attention and correlation can also be exchanged, resulting in two
3D cost volumes that are concatenated for optical flow estimation. The novel 1D
formulation empowers our method to scale to very high-resolution input images
while maintaining competitive performance. Extensive experiments on Sintel,
KITTI and real-world 4K ($2160 \times 3840$) resolution images demonstrated the
effectiveness and superiority of our proposed method. Code and models are
available at \url{https://github.com/haofeixu/flow1d}. | [
"cs.CV"
] |
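A small sketch of the 1D-correlation idea: after attention has propagated information vertically, matching costs are computed only along the horizontal axis, so the cost volume is H x W x W instead of quadratic in H*W. The feature sizes and scaling are illustrative.

```python
import torch

def horizontal_correlation(f1, f2):
    """f1, f2: (batch, dim, H, W) feature maps of the two frames.
    Returns a (batch, H, W, W) cost volume: each position against its row."""
    b, d, h, w = f1.shape
    a = f1.permute(0, 2, 3, 1)       # (b, H, W, d)
    bmat = f2.permute(0, 2, 1, 3)    # (b, H, d, W)
    return torch.matmul(a, bmat) / d ** 0.5

cost = horizontal_correlation(torch.randn(1, 64, 32, 48), torch.randn(1, 64, 32, 48))
print(cost.shape)  # torch.Size([1, 32, 48, 48])
```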
When compared to unimodal systems, multimodal biometric systems have several
advantages, including lower error rate, higher accuracy, and larger population
coverage. However, multimodal systems have an increased demand for integrity
and privacy because they must store multiple biometric traits associated with
each user. In this paper, we present a deep learning framework for
feature-level fusion that generates a secure multimodal template from each
user's face and iris biometrics. We integrate a deep hashing (binarization)
technique into the fusion architecture to generate a robust binary multimodal
shared latent representation. Further, we employ a hybrid secure architecture
by combining cancelable biometrics with secure sketch techniques and integrate
it with a deep hashing framework, which makes it computationally prohibitive to
forge a combination of multiple biometrics that pass the authentication. The
efficacy of the proposed approach is shown using a multimodal database of face
and iris and it is observed that the matching performance is improved due to
the fusion of multiple biometrics. Furthermore, the proposed approach also
provides cancelability and unlinkability of the templates along with improved
privacy of the biometric data. Additionally, we also test the proposed hashing
function for an image retrieval application using a benchmark dataset. The main
goal of this paper is to develop a method for integrating multimodal fusion,
deep hashing, and biometric security, with an emphasis on structural data from
modalities like face and iris. The proposed approach is in no way a general
biometric security framework that can be applied to all biometric modalities,
as further research is needed to extend the proposed framework to other
unconstrained biometric modalities. | [
"cs.CV",
"cs.AI",
"cs.IT",
"math.IT"
] |
We consider the problem of training robust and accurate deep neural networks
(DNNs) when subject to various proportions of noisy labels. Large-scale
datasets tend to contain mislabeled samples that can be memorized by DNNs,
impeding the performance. With appropriate handling, this degradation can be
alleviated. There are two problems to consider: how to distinguish clean
samples and how to deal with noisy samples. In this paper, we present Ensemble
Noise-robust K-fold Cross-Validation Selection (E-NKCVS) to effectively select
clean samples from noisy data, solving the first problem. For the second
problem, we create a new pseudo label for any sample determined to have an
uncertain or likely corrupt label. E-NKCVS obtains multiple predicted labels
for each sample and the entropy of these labels is used to tune the weight
given to the pseudo label and the given label. Theoretical analysis and
extensive verification of the algorithms in the noisy label setting are
provided. We evaluate our approach on various image and text classification
tasks where the labels have been manually corrupted with different noise
ratios. Additionally, two large real-world noisy datasets are also used,
Clothing-1M and WebVision. E-NKCVS is empirically shown to be highly tolerant
to considerable proportions of label noise and has a consistent improvement
over state-of-the-art methods. Especially on more difficult datasets with
higher noise ratios, we can achieve a significant improvement over the
second-best model. Moreover, our proposed approach can easily be integrated
into existing DNN methods to improve their robustness against label noise. | [
"cs.LG",
"cs.CV"
] |
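A hedged sketch of the entropy-weighted label combination described above: the K cross-validation models' predictions for a sample yield a pseudo label, and the normalized entropy of those predictions tunes its weight against the given label. The exact weighting function is an assumption, not the paper's formula.

```python
import numpy as np

def combine_labels(pred_labels, given_label, n_classes):
    """pred_labels: predictions for one sample from the K cross-validation models."""
    counts = np.bincount(pred_labels, minlength=n_classes)
    p = counts / counts.sum()
    entropy = -np.sum(p[p > 0] * np.log(p[p > 0])) / np.log(n_classes)  # in [0, 1]
    pseudo = counts.argmax()
    w_pseudo = 1.0 - entropy       # confident ensemble -> trust the pseudo label
    target = w_pseudo * np.eye(n_classes)[pseudo] \
             + (1 - w_pseudo) * np.eye(n_classes)[given_label]
    return target                  # soft training target for this sample

print(combine_labels(np.array([2, 2, 2, 2, 1]), given_label=0, n_classes=3))
```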
Despite the great empirical success of deep reinforcement learning, its
theoretical foundation is less well understood. In this work, we make the first
attempt to theoretically understand the deep Q-network (DQN) algorithm (Mnih et
al., 2015) from both algorithmic and statistical perspectives. Specifically, we
focus on a slight simplification of DQN that fully captures its key features.
Under mild assumptions, we establish the algorithmic and statistical rates of
convergence for the action-value functions of the iterative policy sequence
obtained by DQN. In particular, the statistical error characterizes the bias
and variance that arise from approximating the action-value function using a
deep neural network, while the algorithmic error converges to zero at a geometric
rate. As a byproduct, our analysis provides justifications for the techniques
of experience replay and target network, which are crucial to the empirical
success of DQN. Furthermore, as a simple extension of DQN, we propose the
Minimax-DQN algorithm for zero-sum Markov game with two players. Borrowing the
analysis of DQN, we also quantify the difference between the policies obtained
by Minimax-DQN and the Nash equilibrium of the Markov game in terms of both the
algorithmic and statistical rates of convergence. | [
"cs.LG",
"math.OC",
"stat.ML"
] |
In the last few years, deep learning has led to very good performance on a
variety of problems, such as visual recognition, speech recognition and natural
language processing. Among different types of deep neural networks,
convolutional neural networks have been most extensively studied. Leveraging
the rapid growth in annotated data and major improvements in the power of
graphics processing units, research on convolutional neural networks has
advanced swiftly, achieving state-of-the-art results on various tasks. In this
paper, we provide a broad survey of recent advances in convolutional neural
networks. We detail the improvements to CNNs across different aspects,
including layer design, activation function, loss function, regularization,
optimization, and fast computation. In addition, we
introduce various applications of convolutional neural networks in computer
vision, speech and natural language processing. | [
"cs.CV",
"cs.LG",
"cs.NE"
] |
The attention that deep learning has garnered from the academic community and
industry continues to grow year over year, and it has been said that we are in
a new golden age of artificial intelligence research. However, neural networks
are still often seen as a "black box" where learning occurs but cannot be
understood in a human-interpretable way. Since these machine learning systems
are increasingly being adopted in security contexts, it is important to explore
these interpretations. We consider an Android malware traffic dataset for
approaching this problem. Then, using the information plane, we explore how
homeomorphisms affect the learned representation of the data and the invariance of
the mutual information captured by the parameters on that data. We empirically
validate these results, using accuracy as a second measure of similarity of
learned representations.
Our results suggest that although the details of learned representations and
the specific coordinate system defined over the manifold of all parameters
differ slightly, the functional approximations are the same. Furthermore, our
results show that since mutual information remains invariant under
homeomorphism, only feature engineering methods that alter the entropy of the
dataset will change the outcome of the neural network. This means that for some
datasets and tasks, neural networks require meaningful, human-driven feature
engineering or changes in architecture to provide enough information for the
neural network to generate a sufficient statistic. Our results can guide
analysis methods for machine learning engineers and suggest that neural
networks exploiting the convolution theorem are as accurate as standard
convolutional neural networks while being more computationally efficient. | [
"cs.LG",
"cs.CR",
"cs.IT",
"math.IT",
"stat.ML"
] |
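The invariance the abstract above relies on is the standard fact that mutual
information is preserved by invertible reparameterizations; stated minimally:

```latex
% For a homeomorphism (invertible, bicontinuous map) f applied to the input X,
I\big(f(X);\, Y\big) = I(X;\, Y).
% Hence only feature engineering that changes the entropy H(X) of the data can
% change the information available to the network.
```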
It is not until recently that graph neural networks (GNNs) are adopted to
perform graph representation learning, among which, those based on the
aggregation of features within the neighborhood of a node achieved great
success. However, despite such achievements, GNNs exhibit shortcomings in
identifying some common structural patterns which, unfortunately, play
significant roles in various network phenomena. In this paper, we propose
GraLSP, a GNN framework which explicitly incorporates local structural patterns
into the neighborhood aggregation through random anonymous walks. Specifically,
we capture local graph structures via random anonymous walks, powerful and
flexible tools that represent structural patterns. The walks are then fed into
the feature aggregation, where we design various mechanisms to address the
impact of structural features, including adaptive receptive radius, attention
and amplification. In addition, we design objectives that capture similarities
between structures and are optimized jointly with node proximity objectives.
By adequately leveraging structural patterns, our model is able to
outperform competitive counterparts in various prediction tasks in multiple
datasets. | [
"cs.LG",
"stat.ML"
] |
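Anonymous walks, the structural tool GraLSP builds on, have a precise and
compact definition: each node in a walk is replaced by the index of its first
occurrence. A self-contained sketch (the random-walk sampler is a generic
illustration, not the paper's code):

```python
import random

def anonymize(walk):
    """Map a node walk to its anonymous pattern, e.g. [v, u, v, w] -> [0, 1, 0, 2]."""
    first_seen = {}
    return [first_seen.setdefault(node, len(first_seen)) for node in walk]

def random_walk(adj, start, length, rng=random):
    """Sample a uniform random walk of `length` nodes over adjacency dict `adj`."""
    walk = [start]
    for _ in range(length - 1):
        neighbors = adj[walk[-1]]
        if not neighbors:
            break
        walk.append(rng.choice(neighbors))
    return walk

# Two isomorphic neighborhoods yield the same anonymous patterns:
assert anonymize(["a", "b", "a", "c"]) == anonymize(["x", "y", "x", "z"])
```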
Effective feature-extraction is critical to models' contextual understanding,
particularly for applications to robotics and autonomous driving, such as
multimodal trajectory prediction. However, state-of-the-art generative methods
face limitations in representing the scene context, leading to predictions of
inadmissible futures. We alleviate these limitations through the use of
self-attention, which enables better control over representing the agent's
social context; we propose a local feature-extraction pipeline that produces
more salient information downstream, with improved parameter efficiency. We
show improvements on standard metrics (minADE, minFDE, DAO, DAC) over various
baselines on the Argoverse dataset. We release our code at:
https://github.com/Manojbhat09/Trajformer | [
"cs.CV"
] |
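As a note on the metrics cited above, minADE and minFDE over K candidate
trajectories follow directly from their usual definitions; the array shapes
below are assumptions for illustration.

```python
import numpy as np

def min_ade_fde(preds, gt):
    """preds: (K, T, 2) candidate trajectories; gt: (T, 2) ground truth.

    Returns (minADE, minFDE): the best candidate's average and final
    displacement errors, each minimized independently over candidates.
    """
    dists = np.linalg.norm(preds - gt[None], axis=-1)  # (K, T) per-step errors
    return dists.mean(axis=1).min(), dists[:, -1].min()
```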
Most algorithms for representation learning and link prediction in relational
data have been designed for static data. However, the data they are applied to
usually evolves with time, such as friend graphs in social networks or user
interactions with items in recommender systems. This is also the case for
knowledge bases, which contain facts such as (US, has president, B. Obama,
[2009-2017]) that are valid only at certain points in time. For the problem of
link prediction under temporal constraints, i.e., answering queries such as
(US, has president, ?, 2012), we propose a solution inspired by the canonical
decomposition of tensors of order 4. We introduce new regularization schemes
and present an extension of ComplEx (Trouillon et al., 2016) that achieves
state-of-the-art performance. Additionally, we propose a new dataset for
knowledge base completion constructed from Wikidata, larger than previous
benchmarks by an order of magnitude, as a new reference for evaluating temporal
and non-temporal link prediction methods. | [
"stat.ML",
"cs.LG"
] |
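The order-4 canonical decomposition mentioned above can be sketched as a
time-modulated ComplEx score. The specific factorization below (the relation
embedding modulated elementwise by a timestamp embedding) is one plausible
reading of the abstract, not necessarily the paper's final model.

```python
import numpy as np

def temporal_complex_score(e_s, w_r, e_o, u_t):
    """Score a (subject, relation, object, timestamp) quadruple.

    All arguments are complex embedding vectors of equal dimension, one CP
    factor per mode of the order-4 tensor.
    """
    return np.real(np.sum(e_s * (w_r * u_t) * np.conj(e_o)))

# Example with random rank-4 embeddings:
rng = np.random.default_rng(0)
def emb():
    return rng.normal(size=4) + 1j * rng.normal(size=4)
print(temporal_complex_score(emb(), emb(), emb(), emb()))
```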
Recently, due to the strength of deep convolutional neural networks (CNN),
many CNN-based image quality assessment (IQA) models have been studied.
However, previous CNN-based IQA models have yet to fully exploit the
characteristics of the human visual system (HVS), as they simply entrust
everything to the CNN and expect it to learn from a
training dataset. In this paper, we propose a novel saliency-channel
attention residual network based on the just-noticeable-difference (JND)
concept for full-reference image quality assessments (FR-IQA). It is referred
to as JND-SalCAR and shows significant improvements on large IQA datasets with
various types of distortion. The proposed JND-SalCAR effectively learns how to
incorporate human psychophysical characteristics, such as visual saliency and
JND, into image quality predictions. In the proposed network, a SalCAR block is
devised so that perceptually important features can be extracted with the help
of saliency-based spatial attention and channel attention schemes. In addition,
a saliency map serves as a guideline for predicting a patch weight map in order
to afford stable training of end-to-end optimization for the JND-SalCAR. To the
best of our knowledge, our work presents the first HVS-inspired trainable
FR-IQA network that considers both visual saliency and the JND characteristics
of the HVS. When the visual saliency map and the JND probability map are
explicitly given as priors, they can be usefully combined to predict IQA scores
rated by humans more precisely, eventually leading to performance improvements
and faster convergence. The experimental results show that the proposed
JND-SalCAR significantly outperforms all recent state-of-the-art FR-IQA methods
on large IQA datasets in terms of the Spearman rank order coefficient (SRCC)
and the Pearson linear correlation coefficient (PLCC). | [
"cs.CV"
] |
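One way to read the SalCAR block's pairing of channel attention with
saliency-guided spatial attention is the sketch below. The structure (a
squeeze-and-excitation-style channel gate followed by a saliency-map spatial
gate) is a hypothetical illustration, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class SalCARSketch(nn.Module):
    """Hypothetical saliency-channel attention block (not the paper's code)."""

    def __init__(self, channels, reduction=8):
        super().__init__()
        # Squeeze-and-excitation-style channel attention.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x, saliency):
        # x: (B, C, H, W) features; saliency: (B, 1, H, W) map in [0, 1].
        x = x * self.channel_gate(x)  # reweight channels globally
        return x * saliency           # emphasize perceptually salient regions
```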