text (string, lengths 29–3.31k) | label (sequence, lengths 1–11) |
---|---|
Simplicial complexes form an important class of topological spaces that are
frequently used in many application areas such as computer-aided design,
computer graphics, and simulation. Representation learning on graphs, which
are just 1-dimensional simplicial complexes, has witnessed great attention and
success in the past few years. Due to the additional complexity that
higher-dimensional simplicial complexes hold, there has not been enough effort
to extend representation learning to these objects, especially when it comes
to learning entire-simplicial-complex representations. In this work, we
propose a method for simplicial complex-level representation learning that
embeds a simplicial complex into a universal embedding space in a way that
complex-to-complex proximity is preserved. Our method utilizes a simplex-level
embedding induced by a pre-trained simplicial autoencoder to learn an entire
simplicial complex representation. To the best of our knowledge, this work
presents the first method for learning simplicial complex-level
representations. | [
"cs.LG",
"cs.CG",
"cs.CV",
"math.AT",
"stat.ML"
] |
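The abstract above does not spell out the aggregation step, so the following is only a minimal illustrative sketch, assuming a pre-trained simplex-level encoder and a simple permutation-invariant mean readout; the encoder weights, dimensions, and cosine proximity below are hypothetical stand-ins, not the paper's architecture.
```python
import numpy as np

def embed_complex(simplex_features, encoder):
    """Map a simplicial complex (one feature row per simplex) to one vector."""
    z = encoder(simplex_features)   # (n_simplices, d) simplex-level embeddings
    return z.mean(axis=0)           # permutation-invariant readout

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 8))        # stands in for a pre-trained encoder
encoder = lambda X: np.tanh(X @ W)
complex_a = rng.normal(size=(30, 16))   # 30 simplices, 16 raw features each
complex_b = rng.normal(size=(45, 16))
za, zb = embed_complex(complex_a, encoder), embed_complex(complex_b, encoder)
cos = float(za @ zb / (np.linalg.norm(za) * np.linalg.norm(zb)))
print(f"complex-to-complex proximity (cosine): {cos:.3f}")
```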
This paper proposes a knowledge distillation method for foreground object
search (FoS). Given a background and a rectangle specifying the foreground
location and scale, FoS retrieves compatible foregrounds in a certain category
for later image composition. Foregrounds within the same category can be
grouped into a small number of patterns. Instances within each pattern are
compatible with any query input interchangeably. These instances are referred
to as interchangeable foregrounds. We first present a pipeline to build
a pattern-level FoS dataset containing labels of interchangeable foregrounds. We
then establish a benchmark dataset for further training and testing following
the pipeline. As for the proposed method, we first train a foreground encoder
to learn representations of interchangeable foregrounds. We then train a query
encoder to learn query-foreground compatibility following a knowledge
distillation framework. It aims to transfer knowledge from interchangeable
foregrounds to supervise representation learning of compatibility. The query
feature representation is projected to the same latent space as interchangeable
foregrounds, enabling very efficient and interpretable instance-level search.
Furthermore, pattern-level search is feasible to retrieve more controllable,
reasonable and diverse foregrounds. The proposed method outperforms the
previous state-of-the-art by 10.42% in absolute difference and 24.06% in
relative improvement evaluated by mean average precision (mAP). Extensive
experimental results also demonstrate its efficacy from various aspects. The
benchmark dataset and code will be released shortly. | [
"cs.CV"
] |
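As an illustration of the instance-level search enabled by projecting queries into the foreground latent space, here is a minimal retrieval sketch; the cosine-similarity scoring, embedding dimension, and random "gallery" are assumptions, since the paper's learned encoders are not reproduced here.
```python
import numpy as np

def retrieve(query_vec, foreground_vecs, k=5):
    """Return indices and scores of the k foregrounds most compatible with the query."""
    q = query_vec / np.linalg.norm(query_vec)
    F = foreground_vecs / np.linalg.norm(foreground_vecs, axis=1, keepdims=True)
    scores = F @ q                        # cosine similarity in the shared space
    order = np.argsort(-scores)[:k]
    return order, scores[order]

rng = np.random.default_rng(1)
gallery = rng.normal(size=(1000, 128))    # embeddings from the foreground encoder
query = rng.normal(size=128)              # embedding from the query encoder
idx, sims = retrieve(query, gallery, k=3)
print("top-3 foregrounds:", idx, "similarities:", np.round(sims, 3))
```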
The common self-supervised pre-training practice requires collecting massive
unlabeled data together and then training a representation model, dubbed
\textbf{joint training}. However, in real-world scenarios where data are
collected in a streaming fashion, the joint training scheme is usually
storage-heavy and time-consuming. A more efficient alternative is to train a
model continually with streaming data, dubbed \textbf{sequential training}.
Nevertheless, it is unclear how well sequential self-supervised pre-training
performs with streaming data. In this paper, we conduct thorough experiments to
investigate self-supervised pre-training with streaming data. Specifically, we
evaluate the transfer performance of sequential self-supervised pre-training
with four different data sequences on three different downstream tasks and make
comparisons with joint self-supervised pre-training. Surprisingly, we find
sequential self-supervised learning exhibits almost the same performance as the
joint training when the distribution shifts within streaming data are mild.
Even for data sequences with large distribution shifts, sequential
self-supervised training with simple techniques, e.g., parameter regularization
or data replay, still performs comparably to joint training. Based on our
findings, we recommend using sequential self-supervised training as a
\textbf{more efficient yet performance-competitive} representation learning
practice for real-world applications. | [
"cs.LG",
"cs.CV"
] |
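A minimal sketch of sequential self-supervised training with data replay, one of the simple techniques the abstract mentions; the toy encoder, MSE consistency objective, and reservoir-style buffer are placeholders for a real SSL method, not the paper's exact setup.
```python
import random
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 16))
opt = torch.optim.SGD(encoder.parameters(), lr=1e-2)
augment = lambda x: x + 0.1 * torch.randn_like(x)   # stand-in augmentation

replay_buffer, BUFFER_SIZE = [], 256

def ssl_step(batch):
    z1, z2 = encoder(augment(batch)), encoder(augment(batch))
    loss = nn.functional.mse_loss(z1, z2)           # toy consistency objective
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# stream of data chunks arriving sequentially (mean shift mimics drift)
for chunk_id in range(4):
    chunk = torch.randn(512, 32) + chunk_id
    for i in range(0, len(chunk), 64):
        batch = chunk[i:i + 64]
        if replay_buffer:                           # mix in replayed samples
            old = torch.stack(random.sample(replay_buffer,
                                            min(16, len(replay_buffer))))
            batch = torch.cat([batch, old])
        ssl_step(batch)
    # keep a small random sample of each chunk for future replay
    replay_buffer.extend(chunk[torch.randperm(len(chunk))[:64]])
    replay_buffer = replay_buffer[-BUFFER_SIZE:]
print("done; buffer size:", len(replay_buffer))
```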
We study bandits and reinforcement learning (RL) subject to a conservative
constraint where the agent is asked to perform at least as well as a given
baseline policy. This setting is particularly relevant in real-world domains
including digital marketing, healthcare, production, finance, etc. For
multi-armed bandits, linear bandits and tabular RL, specialized algorithms and
theoretical analyses were proposed in previous work. In this paper, we present
a unified framework for conservative bandits and RL, in which our core
technique is to calculate the necessary and sufficient budget obtained from
running the baseline policy. For lower bounds, our framework gives a black-box
reduction that turns a certain lower bound in the nonconservative setting into
a new lower bound in the conservative setting. We strengthen the existing lower
bound for conservative multi-armed bandits and obtain new lower bounds for
conservative linear bandits, tabular RL and low-rank MDP. For upper bounds, our
framework turns a certain nonconservative upper-confidence-bound (UCB)
algorithm into a conservative algorithm with a simple analysis. For multi-armed
bandits, linear bandits and tabular RL, our new upper bounds tighten or match
existing ones with significantly simpler analyses. We also obtain a new upper
bound for conservative low-rank MDP. | [
"cs.LG",
"stat.ML"
] |
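To make the budget idea concrete, here is a toy conservative UCB loop: the agent plays the optimistic arm only when a simplified budget check guarantees cumulative reward stays within (1 - alpha) of always playing the baseline; the paper's necessary-and-sufficient budget calculation is tighter than this realized-reward check, which is only an illustration.
```python
import numpy as np

rng = np.random.default_rng(0)
means = np.array([0.3, 0.5, 0.7])        # unknown true Bernoulli arm means
baseline_arm, alpha = 0, 0.1
baseline_mean = means[baseline_arm]       # assumed known, as is common here

K = len(means)
counts, sums, total_reward = np.zeros(K), np.zeros(K), 0.0
for t in range(1, 5001):
    n = np.maximum(counts, 1)
    rad = np.sqrt(2 * np.log(t) / n)
    ucb = np.where(counts > 0, sums / n + rad, np.inf)
    lcb = np.where(counts > 0, np.maximum(sums / n - rad, 0.0), 0.0)
    candidate = int(np.argmax(ucb))
    # simplified budget check: even under a pessimistic outcome of the next
    # pull, cumulative reward must stay within (1 - alpha) of the baseline
    if total_reward + lcb[candidate] >= (1 - alpha) * baseline_mean * t:
        arm = candidate
    else:
        arm = baseline_arm                # not enough budget: stay conservative
    r = rng.binomial(1, means[arm])
    counts[arm] += 1; sums[arm] += r; total_reward += r
print("pulls per arm:", counts.astype(int))
```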
Most existing salient object detection models have achieved great progress
by aggregating multi-level features extracted from convolutional neural
networks. However, because of the different receptive fields of different
convolutional layers, there exist big differences between the features generated
by these layers. Common feature fusion strategies (addition or concatenation)
ignore these differences and may cause suboptimal solutions. In this paper, we
propose F3Net to solve the above problem, which mainly consists of a cross
feature module (CFM) and cascaded feedback decoder (CFD) trained by minimizing
a new pixel position aware loss (PPA). Specifically, CFM aims to selectively
aggregate multi-level features. Different from addition and concatenation, CFM
adaptively selects complementary components from input features before fusion,
which can effectively avoid introducing too much redundant information that may
destroy the original features. Besides, CFD adopts a multi-stage feedback
mechanism, where features close to supervision will be introduced to the
output of previous layers to supplement them and eliminate the differences
between features. These refined features will go through multiple similar
iterations before generating the final saliency maps. Furthermore, different
from binary cross entropy, the proposed PPA loss doesn't treat pixels equally,
which can synthesize the local structure information of a pixel to guide the
network to focus more on local details. Hard pixels from boundaries or
error-prone parts will be given more attention to emphasize their importance.
F3Net is able to segment salient object regions accurately and provide clear
local details. Comprehensive experiments on five benchmark datasets demonstrate
that F3Net outperforms state-of-the-art approaches on six evaluation metrics. | [
"cs.CV"
] |
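A sketch of a pixel position aware style loss, following the common formulation in the salient object detection literature: per-pixel weights derived from local context emphasize boundary and error-prone pixels in both a weighted BCE and a weighted IoU term; the kernel size and weight scale are assumptions.
```python
import torch
import torch.nn.functional as F

def ppa_loss(logits, mask):
    # pixels that differ from their neighborhood mean (boundaries) get larger weights
    weight = 1 + 5 * torch.abs(
        F.avg_pool2d(mask, kernel_size=31, stride=1, padding=15) - mask)
    wbce = F.binary_cross_entropy_with_logits(logits, mask, reduction='none')
    wbce = (weight * wbce).sum(dim=(2, 3)) / weight.sum(dim=(2, 3))

    pred = torch.sigmoid(logits)
    inter = ((pred * mask) * weight).sum(dim=(2, 3))
    union = ((pred + mask) * weight).sum(dim=(2, 3))
    wiou = 1 - (inter + 1) / (union - inter + 1)    # weighted IoU term
    return (wbce + wiou).mean()

logits = torch.randn(2, 1, 64, 64, requires_grad=True)
mask = (torch.rand(2, 1, 64, 64) > 0.5).float()
loss = ppa_loss(logits, mask)
loss.backward()
print(f"PPA-style loss: {loss.item():.4f}")
```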
Color-coded aperture (CCA) methods can physically measure the depth of a
scene using physical cues from a single-shot image of a monocular camera.
However, they are vulnerable to actual lens aberrations in real scenes because
they assume an ideal lens for simplifying algorithms. In this paper, we propose
physical cue-based deep learning for CCA photography. To address actual lens
aberrations, we developed a deep deaberration network (DDN) that is
additionally equipped with a self-attention mechanism of position and color
channels to efficiently learn the lens aberration. Furthermore, a new Bayes L1
loss function based on Bayesian deep learning makes it possible to handle the uncertainty
of depth estimation more accurately. Quantitative and qualitative comparisons
demonstrate that our method is superior to conventional methods, including in
real outdoor scenes. Furthermore, compared to a long-baseline stereo camera, the
proposed method provides an error-free depth map at close range, as there is no
blind spot between the left and right cameras. | [
"cs.CV"
] |
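A hedged sketch of an uncertainty-aware L1 loss in the spirit of Bayesian deep learning: a Laplace negative log-likelihood where a second network head predicts a log-scale, so pixels with high predicted uncertainty are down-weighted. The paper's exact Bayes L1 formulation may differ.
```python
import torch

def bayes_l1_loss(depth_pred, log_b, depth_gt):
    # negative log-likelihood of a Laplace distribution with scale b = exp(log_b)
    return (torch.abs(depth_pred - depth_gt) * torch.exp(-log_b) + log_b).mean()

depth_pred = torch.rand(4, 1, 32, 32, requires_grad=True)
log_b = torch.zeros(4, 1, 32, 32, requires_grad=True)   # uncertainty head output
depth_gt = torch.rand(4, 1, 32, 32)
loss = bayes_l1_loss(depth_pred, log_b, depth_gt)
loss.backward()
print(f"Bayes-L1 loss: {loss.item():.4f}")
```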
The optical flow of humans is well known to be useful for the analysis of
human action. Recent optical flow methods focus on training deep networks to
approach the problem. However, the training data used by them does not cover
the domain of human motion. Therefore, we develop a dataset of multi-human
optical flow and train optical flow networks on this dataset. We use a 3D model
of the human body and motion capture data to synthesize realistic flow fields
in both single- and multi-person images. We then train optical flow networks to
estimate human flow fields from pairs of images. We demonstrate that our
trained networks are more accurate than a wide range of top methods on held-out
test data and that they can generalize well to real image sequences. The code,
trained models and the dataset are available for research. | [
"cs.CV",
"cs.LG"
] |
A feature denotes the appearance of remote sensing scene objects with similar
characteristics, associated with interesting scene elements in the image
formation process. In image processing, features are classified into three
levels: low, middle, and high. Low-level features are color and texture, the
middle-level feature is shape, and the high-level feature is the semantic gap
of objects. An image retrieval system is a computer system for browsing,
searching, and retrieving images from a large image database. Content-Based
Image Retrieval is a technique that uses visual features of an image, such as
color, shape, and texture, to find the images a user requires in a large image
database according to requests given in the form of a query. MKNN is an
enhancement of KNN; the proposed classification method is called Modified
K-Nearest Neighbor (MKNN). MKNN consists of two processing parts: computing
the validity of the training samples and applying weighted KNN. The validity
of each point is computed according to its neighbors. In our proposal,
Modified K-Nearest Neighbor can be considered a kind of weighted KNN in which
the query label is approximated by weighting the neighbors of the query. | [
"cs.CV"
] |
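A minimal sketch of the MKNN procedure as described: a validity score for each training sample computed from its own neighbors, followed by a validity-weighted KNN vote; the 1/(d + 0.5) weighting follows common MKNN formulations and is an assumption here.
```python
import numpy as np

def mknn_fit_validity(X, y, k=5):
    d = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    np.fill_diagonal(d, np.inf)
    nbrs = np.argsort(d, axis=1)[:, :k]
    return (y[nbrs] == y[:, None]).mean(axis=1)     # agreement with own neighbors

def mknn_predict(Xtr, ytr, validity, Xte, k=5):
    preds = []
    for q in Xte:
        d = np.linalg.norm(Xtr - q, axis=1)
        nbrs = np.argsort(d)[:k]
        w = validity[nbrs] / (d[nbrs] + 0.5)         # validity-weighted vote
        votes = {}
        for lbl, wi in zip(ytr[nbrs], w):
            votes[lbl] = votes.get(lbl, 0.0) + wi
        preds.append(max(votes, key=votes.get))
    return np.array(preds)

rng = np.random.default_rng(0)
Xtr = np.vstack([rng.normal(0, 1, (50, 8)), rng.normal(3, 1, (50, 8))])
ytr = np.array([0] * 50 + [1] * 50)
validity = mknn_fit_validity(Xtr, ytr)
Xte = np.vstack([rng.normal(0, 1, (10, 8)), rng.normal(3, 1, (10, 8))])
print("predictions:", mknn_predict(Xtr, ytr, validity, Xte))
```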
We propose a deep Graph Neural Network (GNN) model that alternates two types
of layers. The first type is inspired by Reservoir Computing (RC) and generates
new vertex features by iterating a non-linear map until it converges to a fixed
point. The second type of layer implements graph pooling operations that
gradually reduce the support graph and the vertex features, and further improve
the computational efficiency of the RC-based GNN. The architecture is,
therefore, pyramidal. In the last layer, the features of the remaining vertices
are combined into a single vector, which represents the graph embedding.
Through a mathematical derivation introduced in this paper, we show formally
how graph pooling can reduce the computational complexity of the model and
speed-up the convergence of the dynamical updates of the vertex features. Our
proposed approach to the design of RC-based GNNs offers an advantageous and
principled trade-off between accuracy and complexity, which we extensively
demonstrate in experiments on a large set of graph datasets. | [
"cs.LG",
"cs.NE",
"stat.ML"
] |
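A sketch of the reservoir-style layer described above: vertex features are updated by a fixed, untrained non-linear map iterated to a fixed point. The spectral rescaling for contractivity is a standard reservoir computing assumption rather than the paper's exact construction.
```python
import numpy as np

def reservoir_layer(A, X, hidden=32, rho=0.9, tol=1e-6, max_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    W_in = rng.uniform(-1, 1, (X.shape[1], hidden))
    W = rng.uniform(-1, 1, (hidden, hidden))
    W *= rho / max(abs(np.linalg.eigvals(W)))       # contractive => fixed point
    H = np.zeros((X.shape[0], hidden))
    for _ in range(max_iter):
        H_new = np.tanh(X @ W_in + A @ H @ W)       # message passing + reservoir
        if np.max(np.abs(H_new - H)) < tol:
            return H_new
        H = H_new
    return H

rng = np.random.default_rng(1)
A = (rng.random((20, 20)) < 0.2).astype(float)
A = np.maximum(A, A.T)
A /= max(A.sum(1).max(), 1)                         # crude normalization for stability
X = np.random.default_rng(2).normal(size=(20, 8))
H = reservoir_layer(A, X)
graph_embedding = H.mean(axis=0)                    # readout over remaining vertices
print("embedding shape:", graph_embedding.shape)
```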
Security inspection often deals with a piece of baggage or suitcase where
objects are heavily overlapped with each other, resulting in an unsatisfactory
performance for prohibited items detection in X-ray images. In the literature,
there have been few studies and datasets touching this important topic. In
this work, we contribute the first high-quality object detection dataset for
security inspection, named Occluded Prohibited Items X-ray (OPIXray) image
benchmark. OPIXray focuses on the widely occurring prohibited item "cutter",
annotated manually by professional inspectors from an international airport.
The test set is further divided into three occlusion levels to better
understand the performance of detectors. Furthermore, to deal with the
occlusion in X-ray image detection, we propose the De-occlusion Attention
Module (DOAM), a plug-and-play module that can be easily inserted into most
popular detectors to improve their performance. Despite the heavy occlusion in X-ray imaging,
shape appearance of objects can be preserved well, and meanwhile different
materials visually appear with different colors and textures. Motivated by
these observations, our DOAM simultaneously leverages the different appearance
information of the prohibited item to generate the attention map, which helps
refine feature maps for the general detectors. We comprehensively evaluate our
module on the OPIXray dataset, and demonstrate that our module can consistently
improve the performance of the state-of-the-art detection methods such as SSD,
FCOS, etc., and significantly outperforms several widely-used attention
mechanisms. In particular, the advantages of DOAM are more significant in the
scenarios with higher levels of occlusion, which demonstrates its potential
application in real-world inspections. The OPIXray benchmark and our model are
released at https://github.com/OPIXray-author/OPIXray. | [
"cs.CV"
] |
Robots hold promise in many scenarios involving outdoor use, such as
search-and-rescue, wildlife management, and collecting data to improve
environment, climate, and weather forecasting. However, autonomous navigation
of outdoor trails remains a challenging problem. Recent work has sought to
address this issue using deep learning. Although this approach has achieved
state-of-the-art results, the deep learning paradigm may be limited due to a
reliance on large amounts of annotated training data. Collecting and curating
training datasets may not be feasible or practical in many situations,
especially as trail conditions may change due to seasonal weather variations,
storms, and natural erosion. In this paper, we explore an approach to address
this issue through virtual-to-real-world transfer learning using a variety of
deep learning models trained to classify the direction of a trail in an image.
Our approach utilizes synthetic data gathered from virtual environments for
model training, bypassing the need to collect a large amount of real images of
the outdoors. We validate our approach in three main ways. First, we
demonstrate that our models achieve classification accuracies upwards of 95% on
our synthetic data set. Next, we utilize our classification models in the
control system of a simulated robot to demonstrate feasibility. Finally, we
evaluate our models on real-world trail data and demonstrate the potential of
virtual-to-real-world transfer learning. | [
"cs.LG",
"cs.CV",
"cs.RO",
"stat.ML"
] |
This paper investigates the colorization problem, which converts a
grayscale image to a colorful version. This is a very difficult problem and
normally requires manual adjustment to achieve artifact-free quality. For
instance, it normally requires human-labelled color scribbles on the grayscale
target image or a careful selection of colorful reference images (e.g.,
capturing the same scene as the grayscale target image). Unlike the previous
methods, this paper aims at a high-quality fully-automatic colorization method.
With the assumption of a perfect patch matching technique, the use of an
extremely large-scale reference database (that contains sufficient color
images) is the most reliable solution to the colorization problem. However,
patch matching noise will increase with respect to the size of the reference
database in practice. Inspired by the recent success in deep learning
techniques which provide amazing modeling of large-scale data, this paper
re-formulates the colorization problem so that deep learning techniques can be
directly employed. To ensure artifact-free quality, a joint bilateral filtering
based post-processing step is proposed. We further develop an adaptive image
clustering technique to incorporate the global image information. Numerous
experiments demonstrate that our method outperforms state-of-the-art algorithms
both in terms of quality and speed. | [
"cs.CV"
] |
An adversarial patch can arbitrarily manipulate image pixels within a
restricted region to induce model misclassification. The threat of this
localized attack has gained significant attention because the adversary can
mount a physically-realizable attack by attaching patches to the victim object.
Recent provably robust defenses generally follow the PatchGuard framework by
using CNNs with small receptive fields and secure feature aggregation for
robust model predictions. In this paper, we extend PatchGuard to PatchGuard++
for provably detecting the adversarial patch attack to boost both provable
robust accuracy and clean accuracy. In PatchGuard++, we first use a CNN with
small receptive fields for feature extraction so that the number of features
corrupted by the adversarial patch is bounded. Next, we apply masks in the
feature space and evaluate predictions on all possible masked feature maps.
Finally, we extract a pattern from all masked predictions to catch the
adversarial patch attack. We evaluate PatchGuard++ on ImageNette (a 10-class
subset of ImageNet), ImageNet, and CIFAR-10 and demonstrate that PatchGuard++
significantly improves the provable robustness and clean performance. | [
"cs.CV"
] |
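A toy sketch of the masking-and-voting idea: slide a mask over the feature map of a small-receptive-field CNN, classify each masked feature map, and flag an attack when the masked predictions disagree. The pooled linear classifier and unanimity rule are simplifications of PatchGuard++'s actual procedure.
```python
import numpy as np

def masked_predictions(features, classify, mask_size=3):
    """features: (H, W, C) feature map from a small-receptive-field CNN."""
    H, W, _ = features.shape
    preds = []
    for i in range(H - mask_size + 1):
        for j in range(W - mask_size + 1):
            f = features.copy()
            f[i:i + mask_size, j:j + mask_size, :] = 0.0   # mask one window
            preds.append(classify(f))
    return np.array(preds)

def detect_attack(preds):
    # unanimous agreement -> accept the prediction; disagreement -> alert
    return ("clean", preds[0]) if len(set(preds.tolist())) == 1 else ("attack", None)

rng = np.random.default_rng(0)
Wc = rng.normal(size=(16, 10))
classify = lambda f: int(np.argmax(f.mean(axis=(0, 1)) @ Wc))  # pooled linear head
feat = rng.normal(size=(8, 8, 16))
print(detect_attack(masked_predictions(feat, classify)))
```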
Despite their popularity, to date, the application of normalizing flows on
categorical data remains limited. The current practice of using dequantization to
map discrete data to a continuous space is inapplicable as categorical data
have no intrinsic order. Instead, categorical data have complex and latent relations
that must be inferred, like the synonymy between words. In this paper, we
investigate \emph{Categorical Normalizing Flows}, that is, normalizing flows for
categorical data. By casting the encoding of categorical data in continuous
space as a variational inference problem, we jointly optimize the continuous
representation and the model likelihood. Using a factorized decoder, we
introduce an inductive bias to model any interactions in the normalizing flow.
As a consequence, we not only simplify the optimization compared to having a
joint decoder, but also make it possible to scale up to a large number of
categories that is currently impossible with discrete normalizing flows. Based
on Categorical Normalizing Flows, we propose GraphCNF, a permutation-invariant
generative model on graphs. GraphCNF implements a three-step approach, modeling
the nodes, edges, and adjacency matrix stepwise to increase efficiency. On
molecule generation, GraphCNF outperforms both one-shot and autoregressive
flow-based state-of-the-art. | [
"cs.LG",
"stat.ML"
] |
A generalisation of a latent position network model known as the random dot
product graph is considered. We show that, whether the normalised Laplacian or
adjacency matrix is used, the vector representations of nodes obtained by
spectral embedding, using the largest eigenvalues by magnitude, provide
strongly consistent latent position estimates with asymptotically Gaussian
error, up to indefinite orthogonal transformation. The mixed membership and
standard stochastic block models constitute special cases where the latent
positions live respectively inside or on the vertices of a simplex, crucially,
without assuming the underlying block connectivity probability matrix is
positive-definite. Estimation via spectral embedding can therefore be achieved
by respectively estimating this simplicial support, or fitting a Gaussian
mixture model. In the latter case, the use of $K$-means (with Euclidean
distance), as has been previously recommended, is suboptimal and for
identifiability reasons unsound. Indeed, Euclidean distances and angles are not
preserved under indefinite orthogonal transformation, and we show stochastic
block model examples where such quantities vary appreciably. Empirical
improvements in link prediction (over the random dot product graph), as well as
the potential to uncover richer latent structure (than posited under the mixed
membership or standard stochastic block models) are demonstrated in a
cyber-security example. | [
"stat.ML",
"cs.LG"
] |
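A sketch of the suggested pipeline on a toy stochastic block model: adjacency spectral embedding using the largest-magnitude eigenvalues, followed by a Gaussian mixture fit (rather than K-means with Euclidean distance); the graph size and block probabilities are illustrative.
```python
import numpy as np
from sklearn.mixture import GaussianMixture

def adjacency_spectral_embedding(A, d):
    vals, vecs = np.linalg.eigh(A)
    top = np.argsort(-np.abs(vals))[:d]              # largest by magnitude
    return vecs[:, top] * np.sqrt(np.abs(vals[top]))

# two-block stochastic block model example
rng = np.random.default_rng(0)
n, z = 200, np.repeat([0, 1], 100)
B = np.array([[0.5, 0.1], [0.1, 0.4]])
P = B[z][:, z]
A = (rng.random((n, n)) < P).astype(float)
A = np.triu(A, 1); A = A + A.T                       # symmetric, no self-loops

X = adjacency_spectral_embedding(A, d=2)
labels = GaussianMixture(n_components=2, random_state=0).fit_predict(X)
accuracy = max((labels == z).mean(), (labels != z).mean())  # up to label swap
print(f"block recovery accuracy: {accuracy:.2f}")
```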
Deep neural networks for video-based eye tracking have demonstrated
resilience to noisy environments, stray reflections, and low resolution.
However, to train these networks, a large number of manually annotated images
are required. To alleviate the cumbersome process of manual labeling, computer
graphics rendering is employed to automatically generate a large corpus of
annotated eye images under various conditions. In this work, we introduce a
synthetic eye image generation platform that improves upon previous work by
adding features such as an active deformable iris, an aspherical cornea,
retinal retro-reflection, gaze-coordinated eye-lid deformations, and blinks. To
demonstrate the utility of our platform, we render images reflecting the
represented gaze distributions inherent in two publicly available datasets,
NVGaze and OpenEDS. We also report on the performance of two semantic
segmentation architectures (SegNet and RITnet) trained on rendered images and
tested on the original datasets. | [
"cs.CV"
] |
The key idea of current deep learning methods for dense prediction is to
apply a model on a regular patch centered on each pixel to make pixel-wise
predictions. These methods are limited in the sense that the patches are
determined by network architecture instead of learned from data. In this work,
we propose the dense transformer networks, which can learn the shapes and sizes
of patches from data. The dense transformer networks employ an encoder-decoder
architecture, and a pair of dense transformer modules are inserted into each of
the encoder and decoder paths. The novelty of this work is that we provide
technical solutions for learning the shapes and sizes of patches from data and
efficiently restoring the spatial correspondence required for dense prediction.
The proposed dense transformer modules are differentiable; thus, the entire
network can be trained. We apply the proposed networks to natural and
biological image segmentation tasks and show superior performance is achieved
in comparison to baseline methods. | [
"cs.CV",
"cs.LG",
"cs.NE",
"stat.ML"
] |
This is the project report for CSCI-GA.2271-001. We target human pose
estimation in artistic images. For this goal, we design an end-to-end system
that uses neural style transfer for pose regression. We collect a 277-style set
for arbitrary style transfer and build an artistic 281-image test set. We
directly run pose regression on the test set and show promising results. For
pose regression, we propose a 2d-induced bone map from which pose is lifted. To
help such a lifting, we additionally annotate the pseudo 3d labels of the full
in-the-wild MPII dataset. Further, we append another style transfer as
self-supervision to improve 2d. We perform extensive ablation studies to analyze the
introduced features. We also compare end-to-end with per-style training and
allude to the tradeoff between style transfer and pose regression. Lastly, we
generalize our model to the real-world human dataset and show its potential
as a generic pose model. We explain the theoretical foundation in the Appendix. We
release the code at https://github.com/strawberryfg/NAPA-NST-HPE, along with data and video. | [
"cs.CV"
] |
Exploration is one of the core challenges in reinforcement learning. A common
formulation of curiosity-driven exploration uses the difference between the
real future and the future predicted by a learned model. However, predicting
the future is an inherently difficult task which can be ill-posed in the face
of stochasticity. In this paper, we introduce an alternative form of curiosity
that rewards novel associations between different senses. Our approach exploits
multiple modalities to provide a stronger signal for more efficient
exploration. Our method is inspired by the fact that, for humans, both sight
and sound play a critical role in exploration. We present results on several
Atari environments and Habitat (a photorealistic navigation simulator), showing
the benefits of using an audio-visual association model for intrinsically
guiding learning agents in the absence of external rewards. For videos and
code, see https://vdean.github.io/audio-curiosity.html. | [
"cs.LG",
"cs.AI",
"cs.CV",
"stat.ML"
] |
Current crowd counting algorithms are only concerned about the number of
people in an image, which lacks low-level fine-grained information of the
crowd. For many practical applications, the total number of people in an image
is not as useful as the number of people in each sub-category. E.g., knowing
the number of people waiting in line or browsing can help retail stores; knowing
the number of people standing/sitting can help restaurants/cafeterias; knowing
the number of violent/non-violent people can help police in crowd management.
In this paper, we propose fine-grained crowd counting, which differentiates a
crowd into categories based on the low-level behavior attributes of the
individuals (e.g. standing/sitting or violent behavior) and then counts the
number of people in each category. To enable research in this area, we
construct a new dataset of four real-world fine-grained counting tasks:
traveling direction on a sidewalk, standing or sitting, waiting in line or not,
and exhibiting violent behavior or not. Since the appearance features of
different crowd categories are similar, the challenge of fine-grained crowd
counting is to effectively utilize contextual information to distinguish
between categories. We propose a two-branch architecture, consisting of a
density map estimation branch and a semantic segmentation branch. We propose
two refinement strategies for improving the predictions of the two branches.
First, to encode contextual information, we propose feature propagation guided
by the density map prediction, which eliminates the effect of background
features during propagation. Second, we propose a complementary attention model
to share information between the two branches. Experiment results confirm the
effectiveness of our method. | [
"cs.CV"
] |
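To illustrate how the two-branch output yields per-category counts, a small sketch: the segmentation branch partitions the density map, and each category's count is the density mass inside its segment. The tensors here are random stand-ins for real branch outputs.
```python
import numpy as np

def fine_grained_counts(density_map, seg_probs):
    """density_map: (H, W); seg_probs: (K, H, W) per-category probabilities."""
    seg = seg_probs.argmax(axis=0)                  # hard category per pixel
    return np.array([density_map[seg == k].sum()
                     for k in range(seg_probs.shape[0])])

rng = np.random.default_rng(0)
density = rng.random((64, 64)) * 0.01               # toy predicted density map
seg_probs = rng.dirichlet(np.ones(2), size=(64, 64)).transpose(2, 0, 1)
counts = fine_grained_counts(density, seg_probs)
print("per-category counts:", np.round(counts, 1),
      "total:", round(float(density.sum()), 1))
```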
In this paper, we consider the problem of multi-agent navigation in partially
observable grid environments. This problem is challenging for centralized
planning approaches as they typically rely on full knowledge of the
environment. We suggest utilizing the reinforcement learning approach when the
agents, first, learn the policies that map observations to actions and then
follow these policies to reach their goals. To tackle the challenge associated
with learning cooperative behavior, i.e., in many cases agents need to yield to
each other to accomplish a mission, we use a mixing Q-network that complements
the learning of individual policies. In the experimental evaluation, we show that such
an approach leads to plausible results and scales well to a large number of agents. | [
"cs.LG",
"cs.AI"
] |
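A hedged sketch of a mixing Q-network in the QMIX style such approaches build on: per-agent Q-values are combined by a monotonic mixing network whose weights are generated from the global state, so per-agent argmax stays consistent with the joint value. Layer sizes and the hypernetwork layout are assumptions.
```python
import torch
import torch.nn as nn

class Mixer(nn.Module):
    def __init__(self, n_agents, state_dim, embed=32):
        super().__init__()
        self.n = n_agents
        self.w1 = nn.Linear(state_dim, n_agents * embed)  # hypernetwork heads
        self.b1 = nn.Linear(state_dim, embed)
        self.w2 = nn.Linear(state_dim, embed)
        self.b2 = nn.Linear(state_dim, 1)

    def forward(self, agent_qs, state):
        B = agent_qs.shape[0]
        w1 = torch.abs(self.w1(state)).view(B, self.n, -1)  # abs => monotonic
        h = torch.relu(agent_qs.unsqueeze(1) @ w1 + self.b1(state).unsqueeze(1))
        w2 = torch.abs(self.w2(state)).unsqueeze(2)
        return (h @ w2).squeeze(-1).squeeze(-1) + self.b2(state).squeeze(-1)

mixer = Mixer(n_agents=4, state_dim=16)
q_tot = mixer(torch.randn(8, 4), torch.randn(8, 16))
print("joint value shape:", q_tot.shape)             # torch.Size([8])
```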
Preference-based Reinforcement Learning (PbRL) replaces reward values in
traditional reinforcement learning by preferences to better elicit human
opinion on the target objective, especially when numerical reward values are
hard to design or interpret. Despite promising results in applications, the
theoretical understanding of PbRL is still in its infancy. In this paper, we
present the first finite-time analysis for general PbRL problems. We first show
that a unique optimal policy may not exist if preferences over trajectories are
deterministic for PbRL. If preferences are stochastic, and the preference
probability relates to the hidden reward values, we present algorithms for
PbRL, both with and without a simulator, that are able to identify the best
policy up to accuracy $\varepsilon$ with high probability. Our method explores
the state space by navigating to under-explored states, and solves PbRL using a
combination of dueling bandits and policy search. Experiments show the efficacy
of our method when it is applied to real-world problems. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
Autonomous Vehicles (AVs) are required to operate safely and efficiently in
dynamic environments. For this, AVs equipped with Joint
Radar-Communications (JRC) functions can enhance driving safety by
utilizing both radar detection and data communication functions. However,
optimizing the performance of the AV system with two different functions under
the uncertainty and dynamics of surrounding environments is very challenging. In
this work, we first propose an intelligent optimization framework based on the
Markov Decision Process (MDP) to help the AV make optimal decisions in
selecting JRC operation functions under the dynamics and uncertainty of the
surrounding environment. We then develop an effective learning algorithm
leveraging recent advances in deep reinforcement learning techniques to find
the optimal policy for the AV without requiring any prior information about the
surrounding environment. Furthermore, to make our proposed framework more
scalable, we develop a Transfer Learning (TL) mechanism that enables the AV to
leverage valuable experiences for accelerating the training process when it
moves to a new environment. Extensive simulations show that the proposed
transferable deep reinforcement learning framework reduces the obstacle miss
detection probability by the AV up to 67% compared to other conventional deep
reinforcement learning approaches. | [
"cs.LG",
"cs.RO"
] |
Deep learning-based segmentation methods are vulnerable to unforeseen data
distribution shifts during deployment, e.g., changes of image appearance or
contrast caused by different scanners, unexpected imaging artifacts, etc. In
this paper, we present a cooperative framework for training image segmentation
models and a latent space augmentation method for generating hard examples.
Both contributions improve model generalization and robustness with limited
data. The cooperative training framework consists of a fast-thinking network
(FTN) and a slow-thinking network (STN). The FTN learns decoupled image
features and shape features for image reconstruction and segmentation tasks.
The STN learns shape priors for segmentation correction and refinement. The two
networks are trained in a cooperative manner. The latent space augmentation
generates challenging examples for training by masking the decoupled latent
space in both channel-wise and spatial-wise manners. We performed extensive
experiments on public cardiac imaging datasets. Using only 10 subjects from a
single site for training, we demonstrated improved cross-site segmentation
performance and increased robustness against various unforeseen imaging
artifacts compared to strong baseline methods. Particularly, cooperative
training with latent space data augmentation yields 15% improvement in terms of
average Dice score when compared to a standard training method. | [
"cs.CV",
"cs.AI",
"cs.LG",
"q-bio.QM"
] |
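A sketch of the latent space augmentation idea: masking the latent code channel-wise and spatially to synthesize hard training examples. The mask ratios and independent Bernoulli masking are illustrative assumptions, not the paper's exact scheme.
```python
import torch

def latent_mask_augment(z, channel_drop=0.3, spatial_drop=0.3):
    """z: (B, C, H, W) latent features; returns a corrupted copy."""
    B, C, H, W = z.shape
    ch_mask = (torch.rand(B, C, 1, 1) > channel_drop).float()   # channel-wise
    sp_mask = (torch.rand(B, 1, H, W) > spatial_drop).float()   # spatial-wise
    return z * ch_mask * sp_mask

z = torch.randn(4, 64, 16, 16)
z_hard = latent_mask_augment(z)
print("masked fraction:", float((z_hard == 0).float().mean()))
```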
Though machine learning has achieved notable success in modeling sequential
and spatial data for speech recognition and in computer vision, applications to
remote sensing and climate science problems are seldom considered. In this
paper, we demonstrate techniques from unsupervised learning of future video
frame prediction to increase the accuracy of ice flow tracking in
multi-spectral satellite images. As the volume of cryosphere data increases in
coming years, this is an interesting and important opportunity for machine
learning to address a global challenge for climate change, risk management from
floods, and conserving freshwater resources. Future frame prediction of ice
melt and tracking the optical flow of ice dynamics presents modeling
difficulties, due to uncertainties in global temperature increase, changing
precipitation patterns, occlusion from cloud cover, rapid melting and glacier
retreat due to black carbon aerosol deposition, from wildfires or human fossil
emissions. We show the adversarial learning method helps improve the accuracy
of tracking the optical flow of ice dynamics compared to existing methods in
climate science. We present a dataset, IceNet, to encourage machine learning
research and to help facilitate further applications in the areas of
cryospheric science and climate change. | [
"cs.LG",
"physics.ao-ph",
"physics.geo-ph",
"stat.ML"
] |
Rare diseases, affecting 350 million individuals, are commonly associated with
delays in diagnosis or misdiagnosis. To improve those patients' outcomes, rare
disease detection is an important task for identifying patients with rare
conditions based on longitudinal medical claims. In this paper, we present a
deep learning method for detecting patients with exocrine pancreatic
insufficiency (EPI) (a rare disease). The contribution includes 1) a large
longitudinal study using 7 years of medical claims from 1.8 million patients
including 29,149 EPI patients, 2) a new deep learning model using generative
adversarial networks (GANs) to boost the rare disease class, and also leveraging
recurrent neural networks to model patient sequence data, 3) an accurate
prediction with 0.56 PR-AUC which outperformed benchmark models in terms of
precision and recall. | [
"cs.LG",
"stat.ML"
] |
We propose a method to predict the severity of age-related macular degeneration
(AMD) from input optical coherence tomography (OCT) images. Although there is
no standard clinical severity scale for AMD, we leverage deep learning (DL)
based image registration and clustering methods to identify diseased cases and
predict their severity. Experiments demonstrate our approach's disease
classification performance matches state of the art methods. The predicted
disease severity performs well on previously unseen data. Registration output
provides better explainability than class activation maps regarding label and
severity decisions. | [
"cs.CV",
"eess.IV"
] |
Scarcity of labeled data has motivated the development of semi-supervised
learning methods, which learn from large portions of unlabeled data alongside a
few labeled samples. Consistency regularization between a model's predictions
under different input perturbations, in particular, has been shown to provide
state-of-the-art results in a semi-supervised framework. However, most of these
methods have been limited to classification and segmentation applications. We
propose Transformation Consistency Regularization, which delves into the more
challenging setting of image-to-image translation, a setting that remains unexplored by
semi-supervised algorithms. The method introduces a diverse set of geometric
transformations and enforces the model's predictions for unlabeled data to be
invariant to those transformations. We evaluate the efficacy of our algorithm
on three different applications: image colorization, denoising and
super-resolution. Our method is significantly data efficient, requiring only
around 10 - 20% of labeled samples to achieve similar image reconstructions to
its fully-supervised counterpart. Furthermore, we show the effectiveness of our
method in video processing applications, where knowledge from a few frames can
be leveraged to enhance the quality of the rest of the movie. | [
"cs.CV"
] |
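A minimal sketch of the consistency objective: for unlabeled inputs, the prediction of a transformed image should match the transformed prediction. Rotation stands in for the paper's diverse set of geometric transformations, and the one-layer model is a placeholder.
```python
import torch
import torch.nn as nn

def tcr_loss(model, x_unlabeled):
    k = int(torch.randint(1, 4, ()))                 # random 90-degree rotation
    t = lambda img: torch.rot90(img, k, dims=(2, 3))
    # predict-then-transform should equal transform-then-predict
    return nn.functional.mse_loss(model(t(x_unlabeled)), t(model(x_unlabeled)))

model = nn.Conv2d(3, 3, 3, padding=1)                # toy image-to-image network
x_u = torch.randn(8, 3, 32, 32)
loss = tcr_loss(model, x_u)
loss.backward()
print(f"consistency loss: {loss.item():.4f}")
```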
Generative adversarial networks (GANs) are among the most successful models
for learning high-complexity, real-world distributions. However, in theory, due
to the highly non-convex, non-concave landscape of the minmax training
objective, GAN remains one of the least understood deep learning models. In
this work, we formally study how GANs can efficiently learn certain
hierarchically generated distributions that are close to the distribution of
images in practice. We prove that when a distribution has a structure that we
refer to as Forward Super-Resolution, then simply training generative
adversarial networks using gradient descent ascent (GDA) can indeed learn this
distribution efficiently, both in terms of sample and time complexities. We
also provide concrete empirical evidence that not only is our assumption of "forward
super-resolution" very natural in practice, but also that the underlying learning
mechanisms we study in this paper (which allow us to efficiently train GANs via
GDA in theory) simulate the actual learning process of GANs in practice on
real-world problems. | [
"cs.LG",
"cs.DS",
"cs.NE",
"math.OC",
"stat.ML"
] |
Text-based video segmentation is a challenging task that segments out the
natural language referred objects in videos. It essentially requires semantic
comprehension and fine-grained video understanding. Existing methods introduce
language representation into segmentation models in a bottom-up manner, which
merely conducts vision-language interaction within local receptive fields of
ConvNets. We argue that such interaction is not fulfilled since the model can
barely construct region-level relationships given partial observations, which
is contrary to the description logic of natural language/referring expressions.
In fact, people usually describe a target object using relations with other
objects, which may not be easily understood without seeing the whole video. To
address the issue, we introduce a novel top-down approach by imitating how
humans segment an object with language guidance. We first figure out all
candidate objects in videos and then choose the referred one by parsing
relations among those high-level objects. Three kinds of object-level relations
are investigated for precise relationship understanding, i.e., positional
relation, text-guided semantic relation, and temporal relation. Extensive
experiments on A2D Sentences and J-HMDB Sentences show our method outperforms
state-of-the-art methods by a large margin. Qualitative results also show our
results are more explainable. Besides, based on the inspiration, we win the
first place in CVPR2021 Referring Youtube-VOS challenge. | [
"cs.CV"
] |
Attention-based learning for fine-grained image recognition remains a
challenging task, where most of the existing methods treat each object part in
isolation, while neglecting the correlations among them. In addition, the
multi-stage or multi-scale mechanisms involved make the existing methods less
efficient and hard to train end-to-end. In this paper, we propose a novel
attention-based convolutional neural network (CNN) which regulates multiple
object parts among different input images. Our method first learns multiple
attention region features of each input image through the one-squeeze
multi-excitation (OSME) module, and then applies the multi-attention multi-class
constraint (MAMC) in a metric learning framework. For each anchor feature, the
MAMC functions by pulling same-attention same-class features closer, while
pushing different-attention or different-class features away. Our method can be
easily trained end-to-end, and is highly efficient, requiring only one
training stage. Moreover, we introduce Dogs-in-the-Wild, a comprehensive dog
species dataset that surpasses similar existing datasets in category coverage,
data volume and annotation quality. This dataset will be released upon
acceptance to facilitate the research of fine-grained image recognition.
Extensive experiments are conducted to show the substantial improvements of our
method on four benchmark datasets. | [
"cs.CV"
] |
Certifiable local robustness, which rigorously precludes small-norm
adversarial examples, has received significant attention as a means of
addressing security concerns in deep learning. However, for some classification
problems, local robustness is not a natural objective, even in the presence of
adversaries; for example, if an image contains two classes of subjects, the
correct label for the image may be considered arbitrary between the two, and
thus enforcing strict separation between them is unnecessary. In this work, we
introduce two relaxed safety properties for classifiers that address this
observation: (1) relaxed top-k robustness, which serves as the analogue of
top-k accuracy; and (2) affinity robustness, which specifies which sets of
labels must be separated by a robustness margin, and which can be
$\epsilon$-close in $\ell_p$ space. We show how to construct models that can be
efficiently certified against each relaxed robustness property, and trained
with very little overhead relative to standard gradient descent. Finally, we
demonstrate experimentally that these relaxed variants of robustness are
well-suited to several significant classification problems, leading to lower
rejection rates and higher certified accuracies than can be obtained when
certifying "standard" local robustness. | [
"cs.LG"
] |
Motivated by the following two observations: 1) people are aging differently
under different conditions for changeable facial attributes, e.g., skin color
may become darker when working outside, and 2) some facial attributes, e.g.,
race and gender, need to remain unchanged during the aging process, we propose a
controllable face aging method via attribute disentanglement generative
adversarial network. To offer fine control over the synthesized face images,
first, an individual embedding of the face is directly learned from an image
that contains the desired facial attribute. Second, since the image may contain
other unwanted attributes, an attribute disentanglement network is used to
separate the individual embedding and learn the common embedding that contains
information about the face attribute (e.g., race). With the common embedding,
we can manipulate the generated face image with the desired attribute in an
explicit manner. Experimental results on two common benchmarks demonstrate that
our proposed generator achieves comparable performance on the aging effect with
state-of-the-art baselines while gaining more flexibility for attribute
control. Code is available at supplementary material. | [
"cs.CV"
] |
Automatic discovery of category-specific 3D keypoints from a collection of
objects of some category is a challenging problem. One reason is that not all
objects in a category necessarily have the same semantic parts. The difficulty
increases further when objects are represented by 3D point clouds,
with variations in shape and unknown coordinate frames. We define keypoints to
be category-specific, if they meaningfully represent objects' shape and their
correspondences can be simply established order-wise across all objects. This
paper aims at learning category-specific 3D keypoints, in an unsupervised
manner, using a collection of misaligned 3D point clouds of objects from an
unknown category. In order to do so, we model shapes defined by the keypoints,
within a category, using the symmetric linear basis shapes without assuming the
plane of symmetry to be known. The use of the symmetry prior leads us to learn
stable keypoints suitable for higher misalignments. To the best of our
knowledge, this is the first work on learning such keypoints directly from 3D
point clouds. Using categories from four benchmark datasets, we demonstrate the
quality of our learned keypoints by quantitative and qualitative evaluations.
Our experiments also show that the keypoints discovered by our method are
geometrically and semantically consistent. | [
"cs.CV"
] |
Domain Adaptation is an actively researched problem in Computer Vision. In
this work, we propose an approach that leverages unsupervised data to bring the
source and target distributions closer in a learned joint feature space. We
accomplish this by inducing a symbiotic relationship between the learned
embedding and a generative adversarial network. This is in contrast to methods
which use the adversarial framework for realistic data generation and
retraining deep models with such data. We demonstrate the strength and
generality of our approach by performing experiments on three different tasks
with varying levels of difficulty: (1) Digit classification (MNIST, SVHN and
USPS datasets) (2) Object recognition using OFFICE dataset and (3) Domain
adaptation from synthetic to real data. Our method achieves state-of-the-art
performance in most experimental settings and is, by far, the only GAN-based method
that has been shown to work well across different datasets such as OFFICE and
DIGITS. | [
"cs.CV"
] |
Model pruning seeks to induce sparsity in a deep neural network's various
connection matrices, thereby reducing the number of nonzero-valued parameters
in the model. Recent reports (Han et al., 2015; Narang et al., 2017) prune deep
networks at the cost of only a marginal loss in accuracy and achieve a sizable
reduction in model size. This hints at the possibility that the baseline models
in these experiments are perhaps severely over-parameterized at the outset and
a viable alternative for model compression might be to simply reduce the number
of hidden units while maintaining the model's dense connection structure,
exposing a similar trade-off in model size and accuracy. We investigate these
two distinct paths for model compression within the context of energy-efficient
inference in resource-constrained environments and propose a new gradual
pruning technique that is simple and straightforward to apply across a variety
of models/datasets with minimal tuning and can be seamlessly incorporated
within the training process. We compare the accuracy of large, but pruned
models (large-sparse) and their smaller, but dense (small-dense) counterparts
with identical memory footprint. Across a broad range of neural network
architectures (deep CNNs, stacked LSTM, and seq2seq LSTM models), we find
large-sparse models to consistently outperform small-dense models and achieve
up to 10x reduction in number of non-zero parameters with minimal loss in
accuracy. | [
"stat.ML",
"cs.LG"
] |
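A sketch of a gradual pruning step consistent with the description: sparsity ramps from an initial to a final value over training (the cubic schedule common in the gradual pruning literature), and the smallest-magnitude weights are zeroed to meet the current target; the hyperparameters are illustrative.
```python
import numpy as np

def target_sparsity(step, begin, end, s_init=0.0, s_final=0.9):
    # cubic ramp from s_init to s_final between training steps `begin` and `end`
    t = np.clip((step - begin) / max(end - begin, 1), 0.0, 1.0)
    return s_final + (s_init - s_final) * (1.0 - t) ** 3

def prune_to_sparsity(weights, sparsity):
    # zero out the smallest-magnitude weights to reach the target sparsity
    k = int(sparsity * weights.size)
    if k == 0:
        return weights
    thresh = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    return np.where(np.abs(weights) <= thresh, 0.0, weights)

rng = np.random.default_rng(0)
W = rng.normal(size=(256, 256))
for step in range(0, 10001, 2000):                   # interleave with training
    s = target_sparsity(step, begin=0, end=10000)
    W = prune_to_sparsity(W, s)
    print(f"step {step:5d}: target {s:.2f}, actual {np.mean(W == 0):.2f}")
```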
We propose to learn a curriculum or a syllabus for supervised learning and
deep reinforcement learning with deep neural networks by an attachable deep
neural network, called ScreenerNet. Specifically, we learn a weight for each
sample by jointly training the ScreenerNet and the main network in an
end-to-end self-paced fashion. The ScreenerNet neither has sampling bias nor
requires remembering the past learning history. We show that the networks augmented
with the ScreenerNet achieve early convergence with better accuracy than the
state-of-the-art curricular learning methods in extensive experiments using
three popular vision datasets, namely MNIST, CIFAR10, and Pascal VOC2012, and a
Cart-pole task using Deep Q-learning. Moreover, the ScreenerNet can extend
other curriculum learning methods such as Prioritized Experience Replay (PER)
for further accuracy improvement. | [
"cs.CV"
] |
Modern deep convolutional networks (CNNs) are often criticized for not
generalizing under distributional shifts. However, several recent breakthroughs
in transfer learning suggest that these networks can cope with severe
distribution shifts and successfully adapt to new tasks from a few training
examples. In this work we study the interplay between out-of-distribution and
transfer performance of modern image classification CNNs for the first time and
investigate the impact of the pre-training data size, the model scale, and the
data preprocessing pipeline. We find that increasing both the training set and
model sizes significantly improves the distributional shift robustness.
Furthermore, we show that, perhaps surprisingly, simple changes in the
preprocessing such as modifying the image resolution can significantly mitigate
robustness issues in some cases. Finally, we outline the shortcomings of
existing robustness evaluation datasets and introduce a synthetic dataset
SI-Score we use for a systematic analysis across factors of variation common in
visual data such as object size and position. | [
"cs.CV",
"cs.LG"
] |
This paper starts from the observation that multiple top performing
pedestrian detectors can be modelled by using an intermediate layer filtering
low-level features in combination with a boosted decision forest. Based on this
observation we propose a unifying framework and experimentally explore
different filter families. We report extensive results enabling a systematic
analysis.
Using filtered channel features we obtain top performance on the challenging
Caltech and KITTI datasets, while using only HOG+LUV as low-level features.
When adding optical flow features we further improve detection quality and
report the best known results on the Caltech dataset, reaching 93% recall at 1
FPPI. | [
"cs.CV"
] |
Movie genre classification is a challenging task that has increasingly
attracted the attention of researchers. In this paper, we addressed the
multi-label classification of the movie genres in a multimodal way. For this
purpose, we created a dataset composed of trailer video clips, subtitles,
synopses, and movie posters taken from 152,622 movie titles from The Movie
Database. The dataset was carefully curated and organized, and it was also made
available as a contribution of this work. Each movie of the dataset was labeled
according to a set of eighteen genre labels. We extracted features from these
data using different kinds of descriptors, namely Mel Frequency Cepstral
Coefficients, Statistical Spectrum Descriptor, Local Binary Pattern with
spectrograms, Long-Short Term Memory, and Convolutional Neural Networks. The
descriptors were evaluated using different classifiers, such as BinaryRelevance
and ML-kNN. We have also investigated the performance of the combination of
different classifiers/features using a late fusion strategy, which obtained
encouraging results. Based on the F-Score metric, our best result, 0.628, was
obtained by the fusion of a classifier created using LSTM on the synopses, and
a classifier created using CNN on movie trailer frames. When considering the
AUC-PR metric, the best result, 0.673, was also achieved by combining those
representations, but in addition, a classifier based on LSTM created from the
subtitles was used. These results corroborate the existence of complementarity
among classifiers based on different sources of information in this field of
application. As far as we know, this is the most comprehensive study developed
in terms of the diversity of multimedia sources of information to perform movie
genre classification. | [
"cs.LG",
"cs.CV",
"stat.ML"
] |
This paper proposes a generative ScatterNet hybrid deep learning (G-SHDL)
network for semantic image segmentation. The proposed generative architecture
is able to train rapidly from relatively small labeled datasets using the
introduced structural priors. In addition, the number of filters in each layer
of the architecture is optimized resulting in a computationally efficient
architecture. The G-SHDL network produces state-of-the-art classification
performance against unsupervised and semi-supervised learning on two image
datasets. Advantages of the G-SHDL network over supervised methods are
demonstrated with experiments performed on training datasets of reduced size. | [
"cs.CV"
] |
Hidden Markov models and their variants are the predominant sequential
classification method in such domains as speech recognition, bioinformatics and
natural language processing. Being generative rather than discriminative
models, however, they are limited in classification performance. In this paper
we apply ideas from the field of density ratio estimation to bypass the
difficult step of learning likelihood functions in HMMs. By reformulating
inference and model fitting in terms of density ratios and applying a fast
kernel-based estimation method, we show that it is possible to obtain a
striking increase in discriminative performance while retaining the
probabilistic qualities of the HMM. We demonstrate experimentally that this
formulation makes more efficient use of training data than alternative
approaches. | [
"stat.ML",
"cs.LG"
] |
Due to the inherent robustness of segmentation models, traditional
norm-bounded attack methods show limited effect on this type of model. In this
paper, we focus on generating unrestricted adversarial examples for semantic
segmentation models. We demonstrate a simple and effective method to generate
unrestricted adversarial examples using conditional generative adversarial
networks (CGAN) without any hand-crafted metric. The na\"ive implementation of
CGAN, however, yields inferior image quality and low attack success rate.
Instead, we leverage the SPADE (Spatially-adaptive denormalization) structure
with an additional loss item to generate effective adversarial attacks in a
single step. We validate our approach on the popular Cityscapes and ADE20K
datasets, and demonstrate that our synthetic adversarial examples are not only
realistic, but also improve the attack success rate by up to 41.0\% compared
with state-of-the-art adversarial attack methods, including PGD. | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
In this paper, we propose 3DFeat-Net, which learns both a 3D feature
detector and a descriptor for point cloud matching using weak supervision. Unlike
many existing works, we do not require manual annotation of matching point
clusters. Instead, we leverage alignment and attention mechanisms to learn
feature correspondences from GPS/INS tagged 3D point clouds without explicitly
specifying them. We create training and benchmark outdoor Lidar datasets, and
experiments show that 3DFeat-Net obtains state-of-the-art performance on these
gravity-aligned datasets. | [
"cs.CV"
] |
Waste recycling is an important way of saving energy and materials in the
production process. In general, recyclable objects are mixed with
unrecyclable objects, which raises a need for identification and
classification. This paper proposes a convolutional neural network (CNN) model
to complete both tasks. The model uses transfer learning from a pretrained
Resnet-50 CNN to complete feature extraction. A subsequent fully connected
layer for classification was trained on the augmented TrashNet dataset [1]. In
the application, a sliding window is used for image segmentation in the
pre-classification stage. In the post-classification stage, the labelled sample
points are integrated with Gaussian Clustering to locate the object. The
resulting model has achieved an overall detection rate of 48.4% in simulation
and final classification accuracy of 92.4%. | [
"cs.CV",
"cs.AI"
] |
Graph neural networks have become very popular for machine learning on
molecules due to the expressive power of their learnt representations. However,
molecular machine learning is a classically low-data regime, and it is not clear
that graph neural networks can avoid overfitting in low-resource settings. In
contrast, fingerprint methods are the traditional standard for low-data
environments due to their reduced number of parameters and manually engineered
features. In this work, we investigate whether graph neural networks are
competitive in small data settings compared to the parametrically 'cheaper'
alternative of fingerprint methods. When we find that they are not, we explore
pretraining and the meta-learning method MAML (and variants FO-MAML and ANIL)
for improving graph neural network performance by transfer learning from
related tasks. We find that MAML and FO-MAML do enable the graph neural network
to outperform models based on fingerprints, providing a path to using graph
neural networks even in settings with severely restricted data availability. In
contrast to previous work, we find ANIL performs worse than other meta-learning
approaches in this molecule setting. Our results suggest two reasons: molecular
machine learning tasks may require significant task-specific adaptation, and
distribution shifts in test tasks relative to train tasks may contribute to
worse ANIL performance. | [
"cs.LG",
"cs.AI"
] |
Multi-Task Learning (MTL) networks have emerged as a promising method for
transferring learned knowledge across different tasks. However, MTL must deal
with challenges such as overfitting to low-resource tasks, catastrophic
forgetting, and negative task transfer (learning interference). Often, in
Natural Language Processing (NLP), a separate model per task is needed to
obtain the best performance. However, many fine-tuning approaches are both
parameter inefficient, i.e., potentially involving one new model per task, and
highly susceptible to losing knowledge acquired during pretraining. We propose
a novel Transformer architecture consisting of a new conditional attention
mechanism as well as a set of task-conditioned modules that facilitate weight
sharing. Through this construction, we achieve more efficient parameter sharing
and mitigate forgetting by keeping half of the weights of a pretrained model
fixed. We also use a new multi-task data sampling strategy to mitigate the
negative effects of data imbalance across tasks. Using this approach, we are
able to surpass single task fine-tuning methods while being parameter and data
efficient (using around 66% of the data for weight updates). Compared to other
BERT Large methods on GLUE, our 8-task model surpasses other Adapter methods by
2.8%, and our 24-task model outperforms models that use MTL and single-task
fine-tuning by 0.7-1.0%. We show that a larger variant of our single multi-task
model approach performs competitively across 26 NLP tasks and yields
state-of-the-art results on a number of test and development sets. Our code is
publicly available at https://github.com/CAMTL/CA-MTL. | [
"cs.LG",
"stat.ML",
"I.2.7"
] |
Attention has become more attractive in person re-identification (ReID) as it
is capable of biasing the allocation of available resources towards the most
informative parts of an input signal. However, state-of-the-art works
concentrate only on coarse or first-order attention designs, e.g., spatial and
channel attention, while rarely exploring higher-order attention mechanisms. We
take a step towards addressing this problem. In this paper, we first propose
the High-Order Attention (HOA) module to model and utilize the complex and
high-order statistics information in attention mechanism, so as to capture the
subtle differences among pedestrians and to produce the discriminative
attention proposals. Then, rethinking person ReID as a zero-shot learning
problem, we propose the Mixed High-Order Attention Network (MHN) to further
enhance the discrimination and richness of attention knowledge in an explicit
manner. Extensive experiments have been conducted to validate the superiority
of our MHN for person ReID over a wide variety of state-of-the-art methods on
three large-scale datasets, including Market-1501, DukeMTMC-ReID and CUHK03-NP.
Code is available at http://www.bhchen.cn/. | [
"cs.CV"
] |
In this paper, we investigate the suitability of state-of-the-art
representation learning methods to the analysis of behavioral similarity of
moving individuals, based on CDR trajectories. The core of the contribution is
a novel methodological framework, mob2vec, centered on the combined use of a
recent symbolic trajectory segmentation method for the removal of noise, a
novel trajectory generalization method incorporating behavioral information,
and an unsupervised technique for the learning of vector representations from
sequential data. Mob2vec is the result of an empirical study conducted on real
CDR data through extensive experimentation. As a result, it is shown that
mob2vec generates vector representations of CDR trajectories in low dimensional
spaces which preserve the similarity of the mobility behavior of individuals. | [
"cs.LG",
"stat.ML",
"I.2; H.0"
] |
Learning quickly and continually is still an ambitious task for neural
networks. Indeed, many real-world applications do not reflect the learning
setting where neural networks shine, as data are usually few, mostly unlabelled
and come as a stream. To narrow this gap, we introduce FUSION - Few-shot
UnSupervIsed cONtinual learning - a novel strategy which aims to deal with
neural networks that "learn in the wild", simulating a real distribution and
flow of unbalanced tasks. We equip FUSION with MEML - Meta-Example
Meta-Learning - a new module that simultaneously alleviates catastrophic
forgetting and favours the generalisation and future learning of new tasks. To
encourage features reuse during the meta-optimisation, our model exploits a
single inner loop per task, taking advantage of an aggregated representation
achieved through the use of a self-attention mechanism. To further enhance the
generalisation capability of MEML, we extend it by adopting a technique that
creates various augmented tasks and optimises over the hardest. Experimental
results on few-shot learning benchmarks show that our model exceeds the other
baselines in both the FUSION and the fully supervised case. We also explore how it
behaves in standard continual learning, consistently outperforming
state-of-the-art approaches. | [
"cs.LG",
"cs.CV"
] |
Recent works using deep learning to solve the Traveling Salesman Problem
(TSP) have focused on learning construction heuristics. Such approaches find
TSP solutions of good quality but require additional procedures such as beam
search and sampling to improve solutions and achieve state-of-the-art
performance. However, few studies have focused on improvement heuristics, where
a given solution is improved until reaching a near-optimal one. In this work,
we propose to learn a local search heuristic based on 2-opt operators via deep
reinforcement learning. We propose a policy gradient algorithm to learn a
stochastic policy that selects 2-opt operations given a current solution.
Moreover, we introduce a policy neural network that leverages a pointing
attention mechanism, which, unlike previous works, can be easily extended to
more general k-opt moves. Our results show that the learned policies can
improve even over random initial solutions and approach near-optimal solutions
at a faster rate than previous state-of-the-art deep learning methods. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
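The 2-opt operator at the core of the learned local search is simple to state in code. A minimal NumPy sketch (illustrative, not the paper's implementation): a 2-opt move reverses a tour segment, replacing two edges with two new ones.

```python
import numpy as np

def two_opt_move(tour, i, j):
    """Apply a 2-opt operation: reverse the segment tour[i:j+1],
    replacing edges (i-1, i) and (j, j+1) with (i-1, j) and (i, j+1)."""
    new_tour = tour.copy()
    new_tour[i:j + 1] = tour[i:j + 1][::-1]
    return new_tour

def tour_length(tour, coords):
    """Total cyclic length of a tour over 2D city coordinates."""
    pts = coords[tour]
    return np.linalg.norm(pts - np.roll(pts, -1, axis=0), axis=1).sum()

# A learned policy would output (i, j); here we just evaluate one move.
rng = np.random.default_rng(0)
coords = rng.random((10, 2))
tour = np.arange(10)
print(tour_length(tour, coords), tour_length(two_opt_move(tour, 2, 6), coords))
```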
Copy-move forgery detection identifies a tampered image by detecting pasted
and source regions in the same image. In this paper, we propose a novel
two-stage framework specially for copy-move forgery detection. The first stage
is a backbone self deep matching network, and the second stage is named
Proposal SuperGlue. In the first stage, atrous convolution and skip matching
are incorporated to enrich spatial information and leverage hierarchical
features. Spatial attention is built on self-correlation to reinforce the
ability to find appearance-similar regions. In the second stage, Proposal
SuperGlue is proposed to remove false-alarmed regions and remedy incomplete
regions. Specifically, a proposal selection strategy is designed to enclose
highly suspected regions based on proposal generation and backbone score maps.
Then, pairwise matching is conducted among candidate proposals by deep learning
based keypoint extraction and matching, i.e., SuperPoint and SuperGlue.
Integrated score map generation and refinement methods are designed to
integrate results of both stages and obtain optimized results. Our two-stage
framework unifies end-to-end deep matching and keypoint matching by obtaining
highly suspected proposals, and opens a new gate for deep learning research in
copy-move forgery detection. Experiments on publicly available datasets
demonstrate the effectiveness of our two-stage framework. | [
"cs.CV"
] |
Pedestrian detection in thermal infrared images poses unique challenges
because of the low resolution and noisy nature of the image. Here we propose a
mid-level attribute in the form of a multidimensional template, or tensor, using
Local Steering Kernel (LSK) as low-level descriptors for detecting pedestrians
in far infrared images. LSK is specifically designed to deal with intrinsic
image noise and pixel level uncertainty by capturing local image geometry
succinctly instead of collecting local orientation statistics (e.g., histograms
in HOG). Our second contribution is the introduction of a new image similarity
kernel in the popular maximum margin framework of support vector machines that
results in a relatively short and simple training phase for building a rigid
pedestrian detector. Our third contribution is to replace the sluggish but de
facto sliding-window-based detection methodology with a multichannel discrete
Fourier transform, facilitating very fast and efficient pedestrian
localization. The experimental studies on publicly available thermal infrared
images justify our proposals and model assumptions. In addition, the proposed
work also involves the release of our in-house annotations of pedestrians in
more than 17000 frames of OSU Color Thermal database for the purpose of sharing
with the research community. | [
"cs.CV"
] |
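The proposed replacement of sliding-window scanning with frequency-domain correlation rests on the convolution theorem. A minimal NumPy sketch of FFT-based template correlation (illustrative: the paper correlates multichannel LSK descriptors, not raw pixels, and the zero-mean step here is a standard matched-filter trick).

```python
import numpy as np

def fft_correlate(image, template):
    """Dense sliding-window correlation via the convolution theorem:
    one FFT product replaces an explicit scan over all window positions."""
    H, W = image.shape
    h, w = template.shape
    tpl = template - template.mean()           # zero-mean -> matched filter
    F_img = np.fft.rfft2(image)
    # Conjugate of the template spectrum gives correlation, not convolution
    F_tpl = np.conj(np.fft.rfft2(tpl, s=(H, W)))
    resp = np.fft.irfft2(F_img * F_tpl, s=(H, W))
    return resp[: H - h + 1, : W - w + 1]      # valid window positions only

image = np.random.rand(240, 320).astype(np.float32)
template = image[100:140, 150:180]             # plant a known patch
resp = fft_correlate(image, template)
print(np.unravel_index(resp.argmax(), resp.shape))  # peak near (100, 150)
```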
Quantization is a key technique to reduce the resource requirement and
improve the performance of neural network deployment. However, different
hardware backends such as x86 CPU, NVIDIA GPU, ARM CPU, and accelerators may
demand different implementations for quantized networks. This diversity calls
for specialized post-training quantization pipelines to be built for each hardware
target, an engineering effort that is often too large for developers to keep up
with. We tackle this problem with an automated post-training quantization
framework called HAGO. HAGO provides a set of general quantization graph
transformations based on a user-defined hardware specification and implements a
search mechanism to find the optimal quantization strategy while satisfying
hardware constraints for any model. We observe that HAGO achieves speedups of
2.09x, 1.97x, and 2.48x on Intel Xeon Cascade Lake CPUs, NVIDIA Tesla T4 GPUs,
and ARM Cortex-A CPUs on a Raspberry Pi 4, respectively, relative to full precision,
while maintaining the highest reported post-training quantization accuracy in
each case. | [
"cs.CV"
] |
In this paper, we consider the state estimation problem for nonlinear
stochastic discrete-time systems. We combine Lyapunov's method in control
theory and deep reinforcement learning to design the state estimator. We
theoretically prove the convergence of the bounded estimation error solely using
the data simulated from the model. An actor-critic reinforcement learning
algorithm is proposed to learn the state estimator approximated by a deep
neural network. The convergence of the algorithm is analysed. The proposed
Lyapunov-based reinforcement learning state estimator is compared with a number
of existing nonlinear filtering methods through Monte Carlo simulations,
showing its advantage in terms of estimate convergence even under some system
uncertainties such as covariance shift in system noise and randomly missing
measurements. To the best of our knowledge, this is the first reinforcement
learning-based nonlinear state estimator with a bounded estimation error
performance guarantee. | [
"cs.LG",
"cs.RO",
"cs.SY",
"eess.SY"
] |
We introduce a framework for analyzing transductive combination of Gaussian
process (GP) experts, where independently trained GP experts are combined in a
way that depends on test point location, in order to scale GPs to big data. The
framework provides some theoretical justification for the generalized product
of GP experts (gPoE-GP) which was previously shown to work well in practice but
lacks theoretical basis. Based on the proposed framework, an improvement over
gPoE-GP is introduced and empirically validated. | [
"cs.LG",
"stat.ML"
] |
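For context, the gPoE-GP combination rule referenced above can be written in a few lines. A sketch assuming NumPy, with the common entropy-change weighting normalized to sum to one (the specific weighting in the improved variant proposed here may differ).

```python
import numpy as np

def gpoe_combine(mus, vars_, prior_var):
    """Generalized product of GP experts at one test point.
    Weights beta_i follow the entropy-change heuristic; normalizing them
    to sum to one recovers the prior far from the data."""
    mus, vars_ = np.asarray(mus), np.asarray(vars_)
    beta = 0.5 * (np.log(prior_var) - np.log(vars_))  # info gain per expert
    beta = beta / beta.sum()
    prec = np.sum(beta / vars_)                       # combined precision
    mean = np.sum(beta * mus / vars_) / prec
    return mean, 1.0 / prec

mean, var = gpoe_combine(mus=[0.9, 1.1, 1.0], vars_=[0.2, 0.3, 0.25],
                         prior_var=1.0)
print(mean, var)
```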
We propose and demonstrate the use of a model-assisted generative adversarial
network (GAN) to produce fake images that accurately match true images through
the variation of the parameters of the model that describes the features of the
images. The generator learns the model parameter values that produce fake
images that best match the true images. Two case studies show excellent
agreement between the generated best match parameters and the true parameters.
The best match model parameter values can be used to retune the default
simulation to minimize any bias when applying image recognition techniques to
fake and true images. In the case of a real-world experiment, the true images
are experimental data with unknown true model parameter values, and the fake
images are produced by a simulation that takes the model parameters as input.
The model-assisted GAN uses a convolutional neural network to emulate the
simulation for all parameter values that, when trained, can be used as a
conditional generator for fast fake-image production. | [
"cs.CV",
"cs.LG",
"hep-ex",
"stat.ML"
] |
We introduce a general framework for designing and training neural network
layers whose forward passes can be interpreted as solving non-smooth convex
optimization problems, and whose architectures are derived from an optimization
algorithm. We focus on convex games, solved by local agents represented by the
nodes of a graph and interacting through regularization functions. This
approach is appealing for solving imaging problems, as it allows the use of
classical image priors within deep models that are trainable end to end. The
priors used in this presentation include variants of total variation, Laplacian
regularization, bilateral filtering, sparse coding on learned dictionaries, and
non-local self-similarities. Our models are fully interpretable as well as
parameter and data efficient. Our experiments demonstrate their effectiveness
on a large diversity of tasks ranging from image denoising and compressed
sensing for fMRI to dense stereo matching. | [
"cs.CV"
] |
Visual Odometry (VO) accumulates a positional drift in long-term robot
navigation tasks. Although Convolutional Neural Networks (CNNs) improve VO in
various aspects, VO still suffers from moving obstacles, discontinuous
observation of features, and poor textures or visual information. While recent
approaches estimate a 6DoF pose either directly from (a series of) images or by
merging depth maps with optical flow (OF), research that combines absolute pose
regression with OF is limited. We propose ViPR, a novel modular architecture
for long-term 6DoF VO that leverages temporal information and synergies between
absolute pose estimates (from PoseNet-like modules) and relative pose estimates
(from FlowNet-based modules) by combining both through recurrent layers.
Experiments on known datasets and on our own Industry dataset show that our
modular design outperforms the state of the art in long-term navigation tasks. | [
"cs.CV",
"cs.LG",
"cs.RO",
"eess.IV",
"I.2.9; I.2.10; I.4.1; I.4.10; I.5.4"
] |
We rigorously evaluate three state-of-the-art techniques for inducing
sparsity in deep neural networks on two large-scale learning tasks: Transformer
trained on WMT 2014 English-to-German, and ResNet-50 trained on ImageNet.
Across thousands of experiments, we demonstrate that complex techniques
(Molchanov et al., 2017; Louizos et al., 2017b) shown to yield high compression
rates on smaller datasets perform inconsistently, and that simple magnitude
pruning approaches achieve comparable or better results. Additionally, we
replicate the experiments performed by (Frankle & Carbin, 2018) and (Liu et
al., 2018) at scale and show that unstructured sparse architectures learned
through pruning cannot be trained from scratch to the same test set performance
as a model trained with joint sparsification and optimization. Together, these
results highlight the need for large-scale benchmarks in the field of model
compression. We open-source our code, top performing model checkpoints, and
results of all hyperparameter configurations to establish rigorous baselines
for future work on compression and sparsification. | [
"cs.LG",
"stat.ML"
] |
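The magnitude pruning baseline that this study finds so competitive is easy to reproduce. A minimal NumPy sketch of one-shot unstructured magnitude pruning at a target sparsity (illustrative; the evaluated methods prune gradually during training).

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction of weights (unstructured)."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]   # k-th smallest magnitude
    mask = np.abs(weights) > threshold
    return weights * mask

W = np.random.randn(64, 64)
W_sparse = magnitude_prune(W, sparsity=0.9)
print((W_sparse == 0).mean())  # ~0.9 of the weights are now zero
```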
We present a deep learning-based framework for portrait reenactment from a
single picture of a target (one-shot) and a video of a driving subject.
Existing facial reenactment methods suffer from identity mismatch and produce
inconsistent identities when a target and a driving subject are different
(cross-subject), especially in one-shot settings. In this work, we aim to
address identity preservation in cross-subject portrait reenactment from a
single picture. We introduce a novel technique that can disentangle identity
from expressions and poses, allowing identity preserving portrait reenactment
even when the driver's identity is very different from that of the target. This
is achieved by a novel landmark disentanglement network (LD-Net), which
predicts personalized facial landmarks that combine the identity of the target
with expressions and poses from a different subject. To handle portrait
reenactment from unseen subjects, we also introduce a feature dictionary-based
generative adversarial network (FD-GAN), which locally translates 2D landmarks
into a personalized portrait, enabling one-shot portrait reenactment under
large pose and expression variations. We validate the effectiveness of our
identity disentangling capabilities via an extensive ablation study, and our
method produces consistent identities for cross-subject portrait reenactment.
Our comprehensive experiments show that our method significantly outperforms
the state-of-the-art single-image facial reenactment methods. We will release
our code and models for academic use. | [
"cs.CV"
] |
Zero-Shot Learning (ZSL) aims to learn recognition models for recognizing new
classes without labeled data. In this work, we propose a novel approach dubbed
Transferrable Semantic-Visual Relation (TSVR) to facilitate the cross-category
transfer in transductive ZSL. Our approach draws on an intriguing insight
connecting two challenging problems, i.e. domain adaptation and zero-shot
learning. Domain adaptation aims to transfer knowledge across two different
domains (i.e., source domain and target domain) that share the identical
task/label space. For ZSL, the source and target domains have different
tasks/label spaces. Hence, ZSL is usually considered as a more difficult
transfer setting compared with domain adaptation. Although the existing ZSL
approaches use semantic attributes of categories to bridge the source and
target domains, their performances are far from satisfactory due to the large
domain gap between different categories. In contrast, our method directly
transforms ZSL into a domain adaptation task by recasting ZSL as
predicting similarity/dissimilarity labels for pairs of semantic
attributes and visual features. For this recast domain adaptation problem, we
propose to use a domain-specific batch normalization component to reduce the
domain discrepancy of semantic-visual pairs. Experimental results over diverse
ZSL benchmarks clearly demonstrate the superiority of our method. | [
"cs.CV"
] |
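The domain-specific batch normalization component mentioned above can be sketched as a module holding one BatchNorm branch per domain, so each domain is normalized with its own statistics while all other weights are shared. A minimal PyTorch illustration (feature sizes are arbitrary; this is not the authors' code).

```python
import torch
import torch.nn as nn

class DomainSpecificBN1d(nn.Module):
    """Separate BatchNorm statistics and affine parameters per domain,
    reducing the distribution discrepancy between semantic-visual pairs
    from different domains."""
    def __init__(self, num_features, num_domains=2):
        super().__init__()
        self.bns = nn.ModuleList(
            nn.BatchNorm1d(num_features) for _ in range(num_domains))

    def forward(self, x, domain):
        return self.bns[domain](x)     # route through the domain's branch

layer = DomainSpecificBN1d(128)
source_feat = torch.randn(32, 128)     # e.g., seen-class pairs
target_feat = torch.randn(32, 128)     # e.g., unseen-class pairs
out_s = layer(source_feat, domain=0)
out_t = layer(target_feat, domain=1)
```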
Generative Adversarial Networks have been crucial in the developments made in
unsupervised learning in recent times. Applied to synthesizing images from text
or other images, these networks have shown remarkable performance improvements
over conventional methods. Trained on the adversarial
training philosophy, these networks aim to estimate the potential distribution
from the real data and then use this as input to generate the synthetic data.
Based on this fundamental principle, several frameworks have been developed that
serve as exemplary implementations in real-life applications such as art
synthesis, generation of high-resolution outputs, and synthesis of images from
human drawn sketches, to name a few. While theoretically GANs present better
results and prove to be an improvement over conventional methods in many
factors, the implementation of these frameworks for dedicated applications
remains a challenge. This study explores and presents a taxonomy of these
frameworks and their use in various image to image synthesis and text to image
synthesis applications. The basic GANs, as well as a variety of different niche
frameworks, are critically analyzed. The advantages of GANs for image
generation over conventional methods, as well as their disadvantages relative to other
frameworks are presented. The future applications of GANs in industries such as
healthcare, art and entertainment are also discussed. | [
"cs.LG",
"cs.CV",
"eess.IV",
"stat.ML"
] |
Two main families of reinforcement learning algorithms, Q-learning and policy
gradients, have recently been proven to be equivalent when using a softmax
relaxation on one part, and an entropic regularization on the other. We relate
this result to the well-known convex duality of Shannon entropy and the softmax
function. Such a result is also known as the Donsker-Varadhan formula. This
provides a short proof of the equivalence. We then interpret this duality
further, and use ideas of convex analysis to prove a new policy inequality
relative to soft Q-learning. | [
"cs.LG"
] |
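The duality in question is the standard conjugacy between log-sum-exp and Shannon entropy (the Donsker-Varadhan formula in this finite setting). With temperature $\tau$, action-value function $Q$, and $H(\pi)$ the entropy of policy $\pi$, it reads (notation assumed, a standard identity rather than the paper's exact statement):

```latex
% Log-sum-exp is the convex conjugate of negative Shannon entropy;
% the maximizer is exactly the softmax policy.
\[
  \tau \log \sum_{a} \exp\!\big(Q(s,a)/\tau\big)
  \;=\; \max_{\pi \in \Delta(\mathcal{A})}\;
  \mathbb{E}_{a \sim \pi}\!\left[Q(s,a)\right] + \tau\, H(\pi),
  \qquad
  \pi^\star(a \mid s) \propto \exp\!\big(Q(s,a)/\tau\big).
\]
```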
Reliant on too many experiments to learn good actions, current Reinforcement
Learning (RL) algorithms have limited applicability in real-world settings,
which can be too expensive to allow exploration. We propose an algorithm for
batch RL, where effective policies are learned using only a fixed offline
dataset instead of online interactions with the environment. The limited data
in batch RL produces inherent uncertainty in value estimates of states/actions
that were insufficiently represented in the training data. This leads to
particularly severe extrapolation errors when our candidate policies diverge from one
that generated the data. We propose to mitigate this issue via two
straightforward penalties: a policy-constraint to reduce this divergence and a
value-constraint that discourages overly optimistic estimates. Over a
comprehensive set of 32 continuous-action batch RL benchmarks, our approach
compares favorably to state-of-the-art methods, regardless of how the offline
data were collected. | [
"cs.LG",
"stat.ML"
] |
For deep neural network accelerators, memory movement is both energetically
expensive and can bound computation. Therefore, optimal mapping of tensors to
memory hierarchies is critical to performance. The growing complexity of neural
networks calls for automated memory mapping instead of manual heuristic
approaches; yet the search space of neural network computational graphs has
previously been prohibitively large. We introduce Evolutionary Graph
Reinforcement Learning (EGRL), a method designed for large search spaces, that
combines graph neural networks, reinforcement learning, and evolutionary
search. A set of fast, stateless policies guide the evolutionary search to
improve its sample-efficiency. We train and validate our approach directly on
the Intel NNP-I chip for inference. EGRL outperforms policy-gradient,
evolutionary search and dynamic programming baselines on BERT, ResNet-101 and
ResNet-50. We additionally achieve a 28-78% speed-up compared to the native
NNP-I compiler on all three workloads. | [
"cs.LG",
"cs.AI"
] |
Channel pruning is one of the predominant approaches for deep model
compression. Existing pruning methods either train from scratch with sparsity
constraints on channels, or minimize the reconstruction error between the
pre-trained feature maps and the compressed ones. Both strategies suffer from
some limitations: the former kind is computationally expensive and difficult to
converge, whilst the latter kind optimizes the reconstruction error but ignores
the discriminative power of channels. To overcome these drawbacks, we
investigate a simple-yet-effective method, called discrimination-aware channel
pruning, to choose those channels that really contribute to discriminative
power. To this end, we introduce additional losses into the network to increase
the discriminative power of intermediate layers and then select the most
discriminative channels for each layer by considering the additional loss and
the reconstruction error. Last, we propose a greedy algorithm to conduct
channel selection and parameter optimization in an iterative way. Extensive
experiments demonstrate the effectiveness of our method. For example, on
ILSVRC-12, our pruned ResNet-50 with 30% reduction of channels even outperforms
the original model by 0.39% in top-1 accuracy. | [
"cs.CV"
] |
With the increasing popularity of graph-based learning, graph neural networks
(GNNs) emerge as the essential tool for gaining insights from graphs. However,
unlike the conventional CNNs that have been extensively explored and
exhaustively tested, people are still worrying about the GNNs' robustness under
the critical settings, such as financial services. The main reason is that
existing GNNs usually serve as a black-box in predicting and do not provide the
uncertainty on the predictions. On the other side, the recent advancement of
Bayesian deep learning on CNNs has demonstrated its success of quantifying and
explaining such uncertainties to fortify CNN models. Motivated by these
observations, we propose UAG, the first systematic solution to defend
adversarial attacks on GNNs through identifying and exploiting hierarchical
uncertainties in GNNs. UAG develops a Bayesian Uncertainty Technique (BUT) to
explicitly capture uncertainties in GNNs and further employs an
Uncertainty-aware Attention Technique (UAT) to defend adversarial attacks on
GNNs. Intensive experiments show that our proposed defense approach outperforms
the state-of-the-art solutions by a significant margin. | [
"cs.LG",
"stat.ML"
] |
Convolutional Neural Networks (CNNs) have achieved remarkable performance
breakthroughs on Euclidean-structured data. Recently, aggregation-transformation
based Graph Neural Networks (GNNs) have gradually achieved strong performance on
non-Euclidean data. In this paper, we propose a cross-correlation based graph
convolution method that naturally generalizes CNNs to non-Euclidean
domains and inherits the desirable properties of CNNs, such as local filters,
parameter sharing, and flexible receptive fields. Meanwhile, it leverages
dynamically generated convolution kernels and cross-correlation operators to
address the shortcomings of prior methods based on aggregation-transformation
or their approximations. Our method achieves or matches popular
state-of-the-art results across three established graph benchmarks: the Cora,
Citeseer, and Pubmed citation network datasets. | [
"cs.LG"
] |
Precise 3D segmentation of infant brain tissues is an essential step towards
comprehensive volumetric studies and quantitative analysis of early brain
development. However, computing such segmentations is very challenging,
especially for 6-month infant brain, due to the poor image quality, among other
difficulties inherent to infant brain MRI, e.g., the isointense contrast
between white and gray matter and the severe partial volume effect due to small
brain sizes. This study investigates the problem with an ensemble of semi-dense
fully convolutional neural networks (CNNs), which employs T1-weighted and
T2-weighted MR images as input. We demonstrate that the ensemble agreement is
highly correlated with the segmentation errors. Therefore, our method provides
measures that can guide local user corrections. To the best of our knowledge,
this work is the first ensemble of 3D CNNs for suggesting annotations within
images. Furthermore, inspired by the very recent success of dense networks, we
propose a novel architecture, SemiDenseNet, which connects all convolutional
layers directly to the end of the network. Our architecture allows the
efficient propagation of gradients during training, while limiting the number
of parameters, requiring an order of magnitude fewer parameters than popular
medical image segmentation networks such as 3D U-Net. Another contribution of
our work is the study of the impact that early or late fusions of multiple
image modalities might have on the performances of deep architectures. We
report evaluations of our method on the public data of the MICCAI iSEG-2017
Challenge on 6-month infant brain MRI segmentation, and show very competitive
results among 21 teams, ranking first or second in most metrics. | [
"cs.CV"
] |
Visual Emotion Analysis (VEA) has attracted increasing attention recently
with the prevalence of sharing images on social networks. Since human emotions
are ambiguous and subjective, it is more reasonable to address VEA in a label
distribution learning (LDL) paradigm rather than a single-label classification
task. Different from other LDL tasks, there exist intrinsic relationships
between emotions and unique characteristics within them, as demonstrated in
psychological theories. Inspired by this, we propose a well-grounded
circular-structured representation to utilize the prior knowledge for visual
emotion distribution learning. To be specific, we first construct an Emotion
Circle to unify any emotional state within it. On the proposed Emotion Circle,
each emotion distribution is represented with an emotion vector, which is
defined with three attributes (i.e., emotion polarity, emotion type, emotion
intensity) as well as two properties (i.e., similarity, additivity). Besides,
we design a novel Progressive Circular (PC) loss to penalize the
dissimilarities between predicted emotion vector and labeled one in a
coarse-to-fine manner, which further boosts the learning process in an
emotion-specific way. Extensive experiments and comparisons are conducted on
public visual emotion distribution datasets, and the results demonstrate that
the proposed method outperforms the state-of-the-art methods. | [
"cs.CV"
] |
Predictive coding theories suggest that the brain learns by predicting
observations at various levels of abstraction. One of the most basic prediction
tasks is view prediction: how would a given scene look from an alternative
viewpoint? Humans excel at this task. Our ability to imagine and fill in
missing information is tightly coupled with perception: we feel as if we see
the world in 3 dimensions, while in fact, information from only the front
surface of the world hits our retinas. This paper explores the role of view
prediction in the development of 3D visual recognition. We propose neural 3D
mapping networks, which take as input 2.5D (color and depth) video streams
captured by a moving camera, and lift them to stable 3D feature maps of the
scene, by disentangling the scene content from the motion of the camera. The
model also projects its 3D feature maps to novel viewpoints, to predict and
match against target views. We propose contrastive prediction losses to replace
the standard color regression loss, and show that this leads to better
performance on complex photorealistic data. We show that the proposed model
learns visual representations useful for (1) semi-supervised learning of 3D
object detectors, and (2) unsupervised learning of 3D moving object detectors,
by estimating the motion of the inferred 3D feature maps in videos of dynamic
scenes. To the best of our knowledge, this is the first work that empirically
shows view prediction to be a scalable self-supervised task beneficial to 3D
object detection. | [
"cs.CV"
] |
Working memory (WM) is a basic part of human cognition, which plays an
important role in the study of human cognitive load. Among various brain
imaging techniques, electroencephalography has shown its advantage on easy
access and reliability. However, one of the critical challenges is that
individual differences may render results ineffective, especially when an
established model meets an unfamiliar subject. In this work, we propose a
cross-subject deep adaptation model with spatial attention (CS-DASA) to
generalize workload classification across subjects. First, we transform
time-series EEG data into multi-frame EEG images incorporating more
spatio-temporal information. Next, the subject-shared module in CS-DASA
receives multi-frame EEG image data from both source and target subjects and
learns common feature representations. Then, in the subject-specific module,
the maximum mean discrepancy is implemented to measure the domain distribution
divergence in a reproducing kernel Hilbert space, which can add an effective
penalty loss for domain adaptation. Additionally, the subject-to-subject
spatial attention mechanism is employed to focus on the most discriminative
spatial feature in EEG image data. Experiments conducted on a public WM EEG
dataset containing 13 subjects show that the proposed model is capable of
achieving better performance than existing state-of-the-art methods. | [
"cs.LG",
"cs.CV",
"eess.IV"
] |
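The maximum mean discrepancy penalty described above has a compact empirical form. A minimal PyTorch sketch with an RBF kernel (bandwidth, feature sizes, and the biased estimator are illustrative; the paper's kernel choice may differ).

```python
import torch

def mmd_rbf(x, y, sigma=1.0):
    """Squared maximum mean discrepancy with an RBF kernel: a measure of
    distance between source and target feature distributions."""
    def k(a, b):
        d2 = torch.cdist(a, b).pow(2)          # pairwise squared distances
        return torch.exp(-d2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

source = torch.randn(64, 128)                  # source-subject features
target = torch.randn(64, 128) + 0.5            # shifted target-subject features
loss = mmd_rbf(source, target)                 # added as a domain-adaptation penalty
print(loss.item())
```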
Generating natural language descriptions for videos, i.e., video captioning,
essentially requires step-by-step reasoning along the generation process. For
example, to generate the sentence "a man is shooting a basketball", we need to
first locate and describe the subject "man", next reason out the man is
"shooting", then describe the object "basketball" of shooting. However,
existing visual reasoning methods designed for visual question answering are
not appropriate to video captioning, for it requires more complex visual
reasoning on videos over both space and time, and dynamic module composition
along the generation process. In this paper, we propose a novel visual
reasoning approach for video captioning, named Reasoning Module Networks (RMN),
to equip the existing encoder-decoder framework with the above reasoning
capacity. Specifically, our RMN employs 1) three sophisticated spatio-temporal
reasoning modules, and 2) a dynamic and discrete module selector trained by a
linguistic loss with a Gumbel approximation. Extensive experiments on MSVD and
MSR-VTT datasets demonstrate the proposed RMN outperforms the state-of-the-art
methods while providing an explicit and explainable generation process. Our
code is available at https://github.com/tgc1997/RMN. | [
"cs.CV"
] |
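The dynamic, discrete module selector with a Gumbel approximation can be sketched with PyTorch's built-in relaxation: the forward pass picks one module via a one-hot sample while gradients flow through the soft distribution. The three stand-in module outputs below are placeholders for the paper's spatio-temporal reasoning modules.

```python
import torch
import torch.nn.functional as F

# Selector scores over 3 candidate reasoning modules (one decoding step)
module_logits = torch.randn(1, 3, requires_grad=True)
# hard=True gives a one-hot choice in the forward pass, soft gradients backward
choice = F.gumbel_softmax(module_logits, tau=1.0, hard=True)

# Stand-in module outputs, shape (batch, n_modules, feature_dim)
m_outs = torch.stack([torch.randn(1, 16) for _ in range(3)], dim=1)
fused = (choice.unsqueeze(-1) * m_outs).sum(dim=1)  # selected module's output
```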
Recently deep learning has achieved significant progress on point cloud
analysis tasks. Learning good representations is of vital importance to these
tasks. Most current methods rely on massive labelled data for training. We here
propose a point discriminative learning method for unsupervised representation
learning on 3D point clouds, which can learn local and global geometry
features. We achieve this by imposing a novel point discrimination loss on the
middle level and global level point features produced in the backbone network.
This point discrimination loss enforces the features to be consistent with
points belonging to the shape surface and inconsistent with randomly sampled
noisy points. Our method is simple in design: it works by adding an extra
adaptation module and a point consistency module for unsupervised training of
the encoder in the backbone network. Once trained, these two modules can be
discarded during supervised training of the classifier or decoder for
down-stream tasks. We conduct extensive experiments on 3D object
classification, 3D part segmentation and shape reconstruction in various
unsupervised and transfer settings. Both quantitative and qualitative results
show that our method learns powerful representations and achieves new
state-of-the-art performance. | [
"cs.CV"
] |
Model-based reinforcement learning (MBRL) algorithms can attain significant
sample efficiency but require an appropriate network structure to represent
system dynamics. Current approaches include white-box modeling using analytic
parameterizations and black-box modeling using deep neural networks. However,
both can suffer from a bias-variance trade-off in the learning process, and
neither provides a structured method for injecting domain knowledge into the
network. As an alternative, gray-box modeling leverages prior knowledge in
neural network training but only for simple systems. In this paper, we devise a
nested mixture of experts (NMOE) for representing and learning hybrid dynamical
systems. An NMOE combines both white-box and black-box models while optimizing
bias-variance trade-off. Moreover, an NMOE provides a structured method for
incorporating various types of prior knowledge by training the associative
experts cooperatively or competitively. The prior knowledge includes
information on robots' physical contacts with the environments as well as their
kinematic and dynamic properties. In this paper, we demonstrate how to
incorporate prior knowledge into our NMOE in various continuous control
domains, including hybrid dynamical systems. We also show the effectiveness of
our method in terms of data-efficiency, generalization to unseen data, and
bias-variance trade-off. Finally, we evaluate our NMOE using an MBRL setup,
where the model is integrated with a model-based controller and trained online. | [
"cs.LG",
"cs.RO"
] |
Sparse reward problems are one of the biggest challenges in Reinforcement
Learning. Goal-directed tasks are one such sparse reward problem, where a
reward signal is received only when the goal is reached. One promising way to
train an agent to perform goal-directed tasks is to use Hindsight Learning
approaches. In these approaches, even when an agent fails to reach the desired
goal, the agent learns to reach the goal it achieved instead. Doing this over
multiple trajectories while generalizing the policy learned from the achieved
goals, the agent learns a goal conditioned policy to reach any goal. One such
approach is Hindsight Experience replay which uses an off-policy Reinforcement
Learning algorithm to learn a goal conditioned policy. In this approach, a
replay of the past transitions happens in a uniformly random fashion. Another
approach is to use a Hindsight version of the policy gradients to directly
learn a policy. In this work, we discuss different ways to replay past
transitions to improve learning in hindsight experience replay focusing on
prioritized variants in particular. We also apply Hindsight Policy
Gradient methods to robotic tasks. | [
"cs.LG",
"stat.ML"
] |
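A minimal sketch of the hindsight relabeling underlying these approaches, using the common "future" strategy (pure Python; the transition format and sparse reward are illustrative, and the equality test on goals assumes hashable goals rather than continuous states).

```python
import random

def her_relabel(episode, reward_fn, k=4):
    """Hindsight relabeling with the 'future' strategy: each transition is
    also stored under goals drawn from later achieved states in the episode.
    episode: list of (state, action, next_state, achieved_goal, goal)."""
    relabeled = []
    for t, (s, a, s2, ag, g) in enumerate(episode):
        relabeled.append((s, a, s2, g, reward_fn(ag, g)))   # original goal
        future = episode[t:]
        for _ in range(k):
            _, _, _, new_g, _ = random.choice(future)       # achieved later
            relabeled.append((s, a, s2, new_g, reward_fn(ag, new_g)))
    return relabeled

# Sparse reward: 0 on reaching the goal, -1 otherwise (illustrative)
reward_fn = lambda achieved, goal: 0.0 if achieved == goal else -1.0
```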
Unsupervised time series clustering is a challenging problem with diverse
industrial applications such as anomaly detection, bio-wearables, etc. These
applications typically involve small, low-power devices on the edge that
collect and process real-time sensory signals. State-of-the-art time-series
clustering methods perform some form of loss minimization that is extremely
computationally intensive from the perspective of edge devices. In this work,
we propose a neuromorphic approach to unsupervised time series clustering based
on Temporal Neural Networks that is capable of ultra low-power, continuous
online learning. We demonstrate its clustering performance on a subset of UCR
Time Series Archive datasets. Our results show that the proposed approach
either outperforms or performs similarly to most of the existing algorithms
while being far more amenable for efficient hardware implementation. Our
hardware assessment analysis shows that in 7 nm CMOS the proposed architecture,
on average, consumes only about 0.005 mm^2 die area and 22 uW power and can
process each signal with about 5 ns latency. | [
"cs.LG",
"cs.AI",
"cs.ET"
] |
The recent success of self-supervised learning can be largely attributed to
content-preserving transformations, which can be used to easily induce
invariances. While transformations generate positive sample pairs in
contrastive loss training, most recent work focuses on developing new objective
formulations, and pays relatively little attention to the transformations
themselves. In this paper, we introduce the framework of Generalized Data
Transformations to (1) reduce several recent self-supervised learning
objectives to a single formulation for ease of comparison, analysis, and
extension, (2) allow a choice between being invariant or distinctive to data
transformations, obtaining different supervisory signals, and (3) derive the
conditions that combinations of transformations must obey in order to lead to
well-posed learning objectives. This framework allows both invariance and
distinctiveness to be injected into representations simultaneously, and lets us
systematically explore novel contrastive objectives. We apply it to study
multi-modal self-supervision for audio-visual representation learning from
unlabelled videos, improving the state-of-the-art by a large margin, and even
surpassing supervised pretraining. We demonstrate results on a variety of
downstream video and audio classification and retrieval tasks, on datasets such
as HMDB-51, UCF-101, DCASE2014, ESC-50 and VGG-Sound. In particular, we achieve
new state-of-the-art accuracies of 72.8% on HMDB-51 and 95.2% on UCF-101. | [
"cs.CV"
] |
Scene understanding includes many related sub-tasks, such as scene
categorization, depth estimation, object detection, etc. Each of these
sub-tasks is often notoriously hard, and state-of-the-art classifiers already
exist for many of them. These classifiers operate on the same raw image and
provide correlated outputs. It is desirable to have an algorithm that can
capture such correlation without requiring any changes to the inner workings of
any classifier.
We propose Feedback Enabled Cascaded Classification Models (FE-CCM), that
jointly optimizes all the sub-tasks, while requiring only a `black-box'
interface to the original classifier for each sub-task. We use a two-layer
cascade of classifiers, which are repeated instantiations of the original ones,
with the output of the first layer fed into the second layer as input. Our
training method involves a feedback step that allows later classifiers to
provide earlier classifiers information about which error modes to focus on. We
show that our method significantly improves performance in all the sub-tasks in
the domain of scene understanding, where we consider depth estimation, scene
categorization, event categorization, object detection, geometric labeling and
saliency detection. Our method also improves performance in two robotic
applications: an object-grasping robot and an object-finding robot. | [
"cs.CV",
"cs.AI",
"cs.RO"
] |
Set classification aims to classify a set of observations as a whole, as
opposed to classifying individual observations separately. To formally
understand the unfamiliar concept of binary set classification, we first
investigate the optimal decision rule under the normal distribution, which
utilizes the empirical covariance of the set to be classified. We show that the
number of observations in the set plays a critical role in bounding the Bayes
risk. Under this framework, we further propose new methods of set
classification. For the case where only a few parameters of the model drive the
difference between two classes, we propose a computationally-efficient approach
to parameter estimation using linear programming, leading to the
Covariance-engaged LInear Programming Set (CLIPS) classifier. Its theoretical
properties are investigated for both the independent case and various (short-range
and long-range dependent) time series structures among observations within each
set. The convergence rates of estimation errors and risk of the CLIPS
classifier are established to show that having multiple observations in a set
leads to faster convergence rates, compared to the standard classification
situation in which there is only one observation in the set. The applicable
domains in which the CLIPS performs better than competitors are highlighted in
a comprehensive simulation study. Finally, we illustrate the usefulness of the
proposed methods in classification of real image data in histopathology. | [
"stat.ML",
"cs.LG",
"stat.ME"
] |
In the last decades, large datasets of fundus photographs have been collected
in diabetic retinopathy (DR) screening networks. Through deep learning, these
datasets were used to train automatic detectors for DR and a few other frequent
pathologies, with the goal to automate screening. One challenge limits the
adoption of such systems so far: automatic detectors ignore rare conditions
that ophthalmologists currently detect, such as papilledema or anterior
ischemic optic neuropathy. The reason is that standard deep learning requires
too many examples of these conditions. However, this limitation can be
addressed with few-shot learning, a machine learning paradigm where a
classifier has to generalize to a new category not seen in training, given only
a few examples of this category. This paper presents a new few-shot learning
framework that extends convolutional neural networks (CNNs), trained for
frequent conditions, with an unsupervised probabilistic model for rare
condition detection. It is based on the observation that CNNs often perceive
photographs containing the same anomalies as similar, even though these CNNs
were trained to detect unrelated conditions. This observation was based on the
t-SNE visualization tool, which we decided to incorporate in our probabilistic
model. Experiments on a dataset of 164,660 screening examinations from the
OPHDIAT screening network show that 37 conditions, out of 41, can be detected
with an area under the ROC curve (AUC) greater than 0.8 (average AUC: 0.938).
In particular, this framework significantly outperforms other frameworks for
detecting rare conditions, including multitask learning, transfer learning and
Siamese networks, another few-shot learning solution. We expect these richer
predictions to trigger the adoption of automated eye pathology screening, which
will revolutionize clinical practice in ophthalmology. | [
"cs.CV"
] |
We consider the problem of how a teacher algorithm can enable an unknown Deep
Reinforcement Learning (DRL) student to become good at a skill over a wide
range of diverse environments. To do so, we study how a teacher algorithm can
learn to generate a learning curriculum, whereby it sequentially samples
parameters controlling a stochastic procedural generation of environments.
Because it does not initially know the capacities of its student, a key
challenge for the teacher is to discover which environments are easy, difficult
or unlearnable, and in what order to propose them to maximize the efficiency of
learning over the learnable ones. To achieve this, this problem is transformed
into a surrogate continuous bandit problem where the teacher samples
environments in order to maximize absolute learning progress of its student. We
present a new algorithm modeling absolute learning progress with Gaussian
mixture models (ALP-GMM). We also adapt existing algorithms and provide a
complete study in the context of DRL. Using parameterized variants of the
BipedalWalker environment, we study their efficiency to personalize a learning
curriculum for different learners (embodiments), their robustness to the ratio
of learnable/unlearnable environments, and their scalability to non-linear and
high-dimensional parameter spaces. Videos and code are available at
https://github.com/flowersteam/teachDeepRL. | [
"cs.LG",
"cs.RO",
"stat.ML"
] |
The article describes the application of the Hough transform to a honeycomb
block image. The problem of cutting a mold from a honeycomb block is described.
A number of image transformations are considered to increase the efficiency of
the Hough algorithm. A method for obtaining a binary image using a simple
threshold, a method for obtaining a binary image using Otsu binarization, and
the Canny Edge Detection algorithm are considered. The method of binary
skeleton (skeletonization) is considered, in which the skeleton is obtained
using 2 main morphological operations: Dilation and Erosion. As a result of a
number of experiments, the optimal sequence of processing the original image
was revealed, which allows obtaining the coordinates of the maximum number of
faces. This result allows one to choose the optimal places for cutting a
honeycomb block, which will improve the quality of the resulting shapes. | [
"cs.CV",
"cs.RO"
] |
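The processing chain the article examines (Otsu binarization, morphological skeletonization via dilation and erosion, then the Hough transform) can be assembled with OpenCV. A minimal sketch; the file path and Hough parameters are illustrative, and the optimal sequence reported in the article may differ.

```python
import cv2
import numpy as np

img = cv2.imread("honeycomb.png", cv2.IMREAD_GRAYSCALE)  # path is illustrative

# Otsu binarization, then morphological skeletonization (erode/dilate based)
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
skeleton = np.zeros_like(binary)
kernel = cv2.getStructuringElement(cv2.MORPH_CROSS, (3, 3))
work = binary.copy()
while cv2.countNonZero(work):
    eroded = cv2.erode(work, kernel)
    opened = cv2.dilate(eroded, kernel)
    skeleton = cv2.bitwise_or(skeleton, cv2.subtract(work, opened))
    work = eroded

# Probabilistic Hough transform on the skeleton to recover cell-face edges
lines = cv2.HoughLinesP(skeleton, rho=1, theta=np.pi / 180, threshold=30,
                        minLineLength=15, maxLineGap=3)
print(0 if lines is None else len(lines))
```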
In this paper, we introduce a new online decision making paradigm that we
call Thresholding Graph Bandits. The main goal is to efficiently identify a
subset of arms in a multi-armed bandit problem whose means are above a
specified threshold. While traditionally in such problems, the arms are assumed
to be independent, in our paradigm we further suppose that we have access to
the similarity between the arms in the form of a graph, allowing us gain
information about the arm means in fewer samples. Such settings play a key role
in a wide range of modern decision making problems where rapid decisions need
to be made in spite of the large number of options available at each time. We
present GrAPL, a novel algorithm for the thresholding graph bandit problem. We
demonstrate theoretically that this algorithm is effective in taking advantage
of the graph structure when available and the reward function homophily (that
strongly connected arms have similar rewards) when favorable. We confirm these
theoretical findings via experiments on both synthetic and real data. | [
"cs.LG",
"stat.ML"
] |
We present a method for incorporating missing data in non-parametric
statistical learning without the need for imputation. We focus on a tree-based
method, Bayesian Additive Regression Trees (BART), enhanced with "Missingness
Incorporated in Attributes," an approach recently proposed incorporating
missingness into decision trees (Twala, 2008). This procedure takes advantage
of the partitioning mechanisms found in tree-based models. Simulations on
generated models and real data indicate that our proposed method can forecast
well on complicated missing-at-random and not-missing-at-random models as well
as models where missingness itself influences the response. Our procedure has
higher predictive performance and is more stable than competitors in many
cases. We also illustrate BART's abilities to incorporate missingness into
uncertainty intervals and to detect the influence of missingness on the model
fit. | [
"stat.ML",
"cs.LG"
] |
Many transfer problems require re-using previously optimal decisions for
solving new tasks, which suggests the need for learning algorithms that can
modify the mechanisms for choosing certain actions independently of those for
choosing others. However, there is currently no formalism nor theory for how to
achieve this kind of modular credit assignment. To answer this question, we
define modular credit assignment as a constraint on minimizing the algorithmic
mutual information among feedback signals for different decisions. We introduce
what we call the modularity criterion for testing whether a learning algorithm
satisfies this constraint by performing causal analysis on the algorithm
itself. We generalize the recently proposed societal decision-making framework
as a more granular formalism than the Markov decision process to prove that for
decision sequences that do not contain cycles, certain single-step temporal
difference action-value methods meet this criterion while all policy-gradient
methods do not. Empirical evidence suggests that such action-value methods are
more sample efficient than policy-gradient methods on transfer problems that
require only sparse changes to a sequence of previously optimal decisions. | [
"cs.LG",
"cs.AI",
"cs.IT",
"cs.NE",
"math.IT",
"stat.ML"
] |
Learning a disentangled representation of the latent space has become one of
the most fundamental problems studied in computer vision. Recently, many
Generative Adversarial Networks (GANs) have shown promising results in
generating high fidelity images. However, studies to understand the semantic
layout of the latent space of pre-trained models are still limited. Several
works train conditional GANs to generate faces with required semantic
attributes. Unfortunately, in these attempts, the generated output is often not
as photo-realistic as the unconditional state-of-the-art models. Besides, they
also require large computational resources and specific datasets to generate
high fidelity images. In our work, we have formulated a Markov Decision Process
(MDP) over the latent space of a pre-trained GAN model to learn a conditional
policy for semantic manipulation along specific attributes under defined
identity bounds. Further, we have defined a semantic age manipulation scheme
using a locally linear approximation over the latent space. Results show that
our learned policy samples high fidelity images with required age alterations,
while preserving the identity of the person. | [
"cs.CV"
] |
In this paper, we introduce PeerGAN, a generative adversarial network (GAN)
solution to improve the stability of the generated samples and to mitigate mode
collapse. Built upon the Vanilla GAN's two-player game between the
discriminator $D_1$ and the generator $G$, we introduce a peer discriminator
$D_2$ to the min-max game. Similar to previous work using two discriminators,
the first role of both $D_1$, $D_2$ is to distinguish between generated samples
and real ones, while the generator tries to generate high-quality samples which
are able to fool both discriminators. Different from existing methods, we
introduce another game between $D_1$ and $D_2$ to discourage their agreement
and therefore increase the level of diversity of the generated samples. This
property alleviates the issue of early mode collapse by preventing $D_1$ and
$D_2$ from converging too fast. We provide theoretical analysis for the
equilibrium of the min-max game formed among $G, D_1, D_2$. We offer
convergence behavior of PeerGAN as well as the stability of the min-max game. It is
worth mentioning that PeerGAN operates in the unsupervised setting, and the
additional game between $D_1$ and $D_2$ does not need any label supervision.
Experimental results on a synthetic dataset and on real-world image datasets
(MNIST, Fashion MNIST, CIFAR-10, STL-10, CelebA, VGG) demonstrate that PeerGAN
outperforms competitive baseline work in generating diverse and high-quality
samples, while introducing only negligible computational cost. | [
"cs.LG"
] |
Feature selection is an important data pre-processing in data mining and
machine learning, which can reduce the feature dimensionality without degrading a model's
performance. Recently, sparse regression based feature selection methods have
received considerable attention due to their good performance. However, because
the $l_{2,0}$-norm regularization term is non-convex, this problem is very hard
to solve. In this paper, unlike most of the other methods which only solve the
approximate problem, a novel method based on homotopy iterative hard threshold
(HIHT) is proposed to solve the $l_{2,0}$-norm regularization least square
problem directly for multi-class feature selection, which can produce exact
row-sparse solutions for the weight matrix. What's more, in order to reduce the
computational time of HIHT, an accelerated version of HIHT (AHIHT) is derived.
Extensive experiments on eight biological datasets show that the proposed
method achieves higher classification accuracy (ACC) with the fewest
selected features (No.fea) compared with the approximate convex counterparts
and state-of-the-art feature selection methods. The robustness of
classification accuracy to the regularization parameter and to the number of
selected features is also demonstrated. | [
"cs.LG"
] |
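For intuition, the hard-thresholding core of this family of methods can be written as projected gradient descent that keeps only the k rows of largest l2 norm, which enforces exact row sparsity. A plain IHT sketch in NumPy under that row-budget formulation, not the homotopy variant proposed in the paper.

```python
import numpy as np

def iht_row_sparse(X, Y, k, lr=None, iters=200):
    """Iterative hard thresholding for min ||XW - Y||_F^2 s.t. ||W||_{2,0} <= k:
    a gradient step followed by keeping the k rows with largest l2 norm."""
    n, d = X.shape
    _, c = Y.shape
    if lr is None:
        lr = 0.9 / np.linalg.norm(X, 2) ** 2    # step below 1/L for stability
    W = np.zeros((d, c))
    for _ in range(iters):
        W = W - lr * X.T @ (X @ W - Y)          # gradient step
        norms = np.linalg.norm(W, axis=1)       # per-feature (row) l2 norms
        keep = np.argsort(norms)[-k:]           # k most useful features
        mask = np.zeros(d, dtype=bool); mask[keep] = True
        W[~mask] = 0.0                          # exact row-sparsity projection
    return W

X = np.random.randn(100, 50)
W_true = np.zeros((50, 3)); W_true[:5] = np.random.randn(5, 3)
Y = X @ W_true + 0.01 * np.random.randn(100, 3)
W_hat = iht_row_sparse(X, Y, k=5)
print(np.nonzero(np.linalg.norm(W_hat, axis=1))[0])  # ideally rows 0..4
```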
Region-based methods have proven necessary for improving segmentation
accuracy of neuronal structures in electron microscopy (EM) images. Most
region-based segmentation methods use a scoring function to determine region
merging. Such functions are usually learned with supervised algorithms that
demand considerable ground truth data, which are costly to collect. We propose
a semi-supervised approach that reduces this demand. Based on a merge tree
structure, we develop a differentiable unsupervised loss term that enforces
consistent predictions from the learned function. We then propose a Bayesian
model that combines the supervised and the unsupervised information for
probabilistic learning. The experimental results on three EM data sets
demonstrate that by using a subset of only 3% to 7% of the entire ground truth
data, our approach consistently performs close to the state-of-the-art
supervised method with the full labeled data set, and significantly outperforms
the supervised method with the same labeled subset. | [
"cs.CV"
] |
Benefiting from the powerful expressive capability of graphs, graph-based
approaches have achieved impressive performance in various biomedical
applications. Most existing methods tend to define the adjacency matrix among
samples manually based on meta-features, and then obtain the node embeddings
for downstream tasks by Graph Representation Learning (GRL). However, it is not
easy for these approaches to generalize to unseen samples. Meanwhile, the
complex correlations between modalities are also ignored. As a result, these
factors inevitably prevent such methods from providing valid information about
the patient's condition for a reliable diagnosis. In this paper, we propose an
end-to-end Multimodal Graph Learning framework (MMGL) for disease prediction.
To effectively exploit the rich multi-modal information associated
with diseases, a modal-attentional multi-modal fusion is proposed to integrate
the features of each modality by leveraging the correlation and complementarity
between the modalities. Furthermore, instead of defining the adjacency matrix
manually as existing methods, the latent graph structure can be captured
through a novel way of adaptive graph learning. It could be jointly optimized
with the prediction model, thus revealing the intrinsic connections among
samples. Unlike previous transductive methods, our model is also applicable in
the inductive setting to unseen data. An extensive set of experiments on two
disease prediction problems is then carefully designed and presented,
demonstrating that MMGL achieves more favorable performance. In
addition, we also visualize and analyze the learned graph structure to provide
more reliable decision support for doctors in real medical applications and
inspiration for disease research. | [
"cs.LG",
"cs.CV"
] |
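A minimal sketch of adaptive graph learning in the spirit described here,
assuming the latent adjacency is parameterized by two learned projections of
the fused patient features; MMGL's exact parameterization may differ.

import torch
import torch.nn as nn

class AdaptiveGraphLearner(nn.Module):
    # Produces a latent adjacency from fused features, so the graph is
    # learned jointly with the prediction model rather than hand-defined
    # from meta-features, and can be computed for unseen samples too.
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.proj_a = nn.Linear(in_dim, hid_dim)
        self.proj_b = nn.Linear(in_dim, hid_dim)

    def forward(self, x):                        # x: (num_samples, in_dim)
        a = torch.tanh(self.proj_a(x))
        b = torch.tanh(self.proj_b(x))
        return torch.softmax(a @ b.t(), dim=-1)  # row-normalized adjacency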
A large portion of passenger requests is reportedly unserviced, partially due
to vacant for-hire drivers' cruising behavior during the passenger-seeking
process. This paper aims to model the multi-driver repositioning task through a
mean field multi-agent reinforcement learning (MARL) approach that captures
competition among multiple agents. Because the direct application of MARL to
the multi-driver system under a given reward mechanism will likely yield a
suboptimal equilibrium due to the selfishness of drivers, this study proposes a
reward design scheme with which a more desirable equilibrium can be reached. To
effectively solve the bilevel optimization problem, with the reward design as
the upper level and the multi-agent system as the lower level, a Bayesian
optimization (BO) algorithm is adopted to speed up the learning process. We
then apply the bilevel optimization model to two case studies, namely,
e-hailing driver repositioning under service charge and multiclass taxi driver
repositioning under NYC congestion pricing. In the first case study, the model
is validated by the agreement between the derived optimal control from BO and
that from an analytical solution. With a simple piecewise linear service
charge, the objective of the e-hailing platform can be increased by 8.4%. In
the second case study, an optimal toll charge of $5.1 is solved using BO, which
improves the objective of city planners by 7.9%, compared to that without any
toll charge. Under this optimal toll charge, the number of taxis in the NYC
central business district is decreased, indicating a better traffic condition,
without substantially increasing the crowdedness of the subway system. | [
"cs.LG",
"cs.MA",
"stat.ML"
] |
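A toy sketch of the upper level, assuming scikit-optimize's gp_minimize as the
BO engine (the library choice is our assumption) and replacing the expensive
MARL lower level with a synthetic curve so the example stays runnable.

from skopt import gp_minimize

def planner_cost(params):
    # Stand-in for the expensive lower level: in the paper this would train
    # the mean-field MARL drivers to equilibrium under toll params[0] and
    # return the negated planner objective; a synthetic single-peaked curve
    # keeps this sketch self-contained.
    toll = params[0]
    return (toll - 5.1) ** 2

result = gp_minimize(planner_cost, dimensions=[(0.0, 10.0)],
                     n_calls=25, random_state=0)
print("BO-estimated optimal toll:", result.x[0])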
Many important physical phenomena involve subtle signals that are difficult
to observe with the unaided eye, yet visualizing them can be very informative.
Current motion magnification techniques can reveal these small temporal
variations in video, but require precise prior knowledge about the target
signal, and cannot deal with interference motions at a similar frequency. We
present DeepMag, an end-to-end deep neural video-processing framework based on
gradient ascent that enables automated magnification of subtle color and motion
signals from a specific source, even in the presence of large motions of
various velocities. While the approach is generalizable, the advantages of
DeepMag are highlighted via the task of video-based physiological
visualization. Through systematic quantitative and qualitative evaluation of
the approach on videos with different levels of head motion, we compare the
magnification of pulse and respiration to existing state-of-the-art methods.
Our method produces magnified videos with substantially fewer artifacts and
blurring whilst magnifying the physiological changes by a similar degree. | [
"cs.CV",
"cs.GR",
"cs.HC"
] |
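A bare-bones reading of gradient-ascent magnification, assuming a
differentiable model that predicts the target physiological signal from
frames; DeepMag's actual update and architecture are more involved.

import torch

def magnify(video, model, alpha=10.0):
    # video: (T, C, H, W) frames; model maps frames to the target signal.
    # Move the input along the gradient that increases the predicted
    # signal, amplifying only variations attributed to that source.
    video = video.clone().requires_grad_(True)
    model(video).sum().backward()
    return (video + alpha * video.grad).detach()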
In the real world, linguistic agents are also embodied agents: they perceive
and act in the physical world. The notion of Language Grounding questions the
interactions between language and embodiment: how do learning agents connect or
ground linguistic representations to the physical world? This question has
recently been approached by the Reinforcement Learning community under the
framework of instruction-following agents. In these agents, behavioral policies
or reward functions are conditioned on the embedding of an instruction
expressed in natural language. This paper proposes another approach: using
language to condition goal generators. Given any goal-conditioned policy, one
could train a language-conditioned goal generator to generate language-agnostic
goals for the agent. This method makes it possible to decouple sensorimotor
learning from language acquisition and enables agents to demonstrate a
diversity of behaviors
for any given instruction. We propose a particular instantiation of this
approach and demonstrate its benefits. | [
"cs.LG",
"cs.AI",
"cs.CL",
"stat.ML"
] |
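One plausible instantiation of a language-conditioned goal generator, with all
dimensions and the architecture as illustrative placeholders; sampling several
noise vectors per instruction yields the behavioral diversity the abstract
mentions.

import torch
import torch.nn as nn

class LanguageGoalGenerator(nn.Module):
    # Maps an instruction embedding plus noise to a goal in the agent's
    # (language-agnostic) goal space; a pre-trained goal-conditioned policy
    # then consumes the sampled goal, keeping sensorimotor learning
    # decoupled from language.
    def __init__(self, lang_dim, noise_dim, goal_dim, hidden=128):
        super().__init__()
        self.noise_dim = noise_dim
        self.net = nn.Sequential(
            nn.Linear(lang_dim + noise_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, goal_dim))

    def forward(self, lang_emb):                 # lang_emb: (batch, lang_dim)
        noise = torch.randn(lang_emb.size(0), self.noise_dim)
        return self.net(torch.cat([lang_emb, noise], dim=-1))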
Vertex classification is vulnerable to perturbations of both graph topology
and vertex attributes, as shown in recent research. As in other machine
learning domains, concerns about robustness to adversarial manipulation can
prevent potential users from adopting proposed methods when the consequence of
action is very high. This paper considers two topological characteristics of
graphs and explores how these features affect the amount by which the
adversary must perturb the graph in order to be successful. We show that, if
certain vertices are included in the training set, it is possible to
substantially increase an adversary's required perturbation budget. On four
citation datasets, we demonstrate that if the training set includes
high-degree vertices, or vertices that ensure all unlabeled nodes have
neighbors in the training set, the adversary's budget against the Nettack
poisoning attack often increases by a substantial factor (often a factor of 2
or more) over random training. Even for especially easy targets (those that
are misclassified after just one or two perturbations), the degradation of
performance is much slower, with the model assigning much lower probabilities
to the incorrect classes. In addition, we demonstrate
that this robustness either persists when recently proposed defenses are
applied, or is competitive with the resulting performance improvement for the
defender. | [
"cs.LG",
"stat.ML"
] |
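The first of the two selection heuristics is straightforward to express; a
short networkx sketch (the function name is ours):

import networkx as nx

def high_degree_training_set(G, budget):
    # Label the `budget` highest-degree vertices: one of the two selection
    # heuristics the abstract reports as inflating the perturbation budget
    # an adversary such as Nettack needs to succeed.
    by_degree = sorted(G.degree, key=lambda nd: nd[1], reverse=True)
    return [node for node, _ in by_degree[:budget]]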
Tubular structure segmentation in medical images, e.g., segmenting vessels in
CT scans, serves as a vital step in computer-aided screening for early stages
of related diseases. However, automatic tubular structure segmentation
in CT scans is a challenging problem, due to issues such as poor contrast,
noise and complicated background. A tubular structure usually has a
cylinder-like shape which can be well represented by its skeleton and
cross-sectional radii (scales). Inspired by this, we propose a geometry-aware
tubular structure segmentation method, Deep Distance Transform (DDT), which
combines intuitions from the classical distance transform for skeletonization
and modern deep segmentation networks. DDT first learns a multi-task network to
predict a segmentation mask for a tubular structure and a distance map. Each
value in the map represents the distance from each tubular structure voxel to
the tubular structure surface. Then the segmentation mask is refined by
leveraging the shape prior reconstructed from the distance map. We apply our
DDT on six medical image datasets. The experiments show that (1) DDT can boost
tubular structure segmentation performance significantly (e.g., over 13%
improvement measured by DSC for pancreatic duct segmentation), and (2) DDT
additionally provides a geometrical measurement for a tubular structure, which
is important for clinical diagnosis (e.g., the cross-sectional scale of a
pancreatic duct can be an indicator for pancreatic cancer). | [
"cs.CV"
] |
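The distance map the regression branch targets is easy to construct from a
ground-truth mask; a sketch assuming SciPy's Euclidean distance transform (the
paper additionally quantizes the distances, omitted here):

import numpy as np
from scipy.ndimage import distance_transform_edt

def distance_map(mask, spacing=(1.0, 1.0, 1.0)):
    # Each foreground voxel receives its Euclidean distance to the nearest
    # background voxel (the tubular surface); background voxels stay 0.
    # `spacing` accounts for anisotropic CT voxel sizes.
    return distance_transform_edt(np.asarray(mask, bool), sampling=spacing)

At inference, the predicted map both supplies the shape prior used to refine
the mask and reads out the cross-sectional scale cited as clinically useful.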
Many real-world applications require the prediction of long sequence
time-series, such as electricity consumption planning. Long sequence
time-series forecasting (LSTF) demands a high prediction capacity of the model,
which is the ability to capture precise long-range dependency coupling between
output and input efficiently. Recent studies have shown the potential of
Transformer to increase the prediction capacity. However, there are several
severe issues with Transformer that prevent it from being directly applicable
to LSTF, including quadratic time complexity, high memory usage, and inherent
limitation of the encoder-decoder architecture. To address these issues, we
design an efficient transformer-based model for LSTF, named Informer, with
three distinctive characteristics: (i) a $ProbSparse$ self-attention mechanism,
which achieves $O(L \log L)$ in time complexity and memory usage, and has
comparable performance on sequences' dependency alignment; (ii) a
self-attention distilling operation that highlights dominating attention by
halving the cascading layer input, efficiently handling extremely long input
sequences; and (iii) a generative-style decoder that, while conceptually
simple, predicts long time-series sequences in one forward operation rather
than step by step,
which drastically improves the inference speed of long-sequence predictions.
Extensive experiments on four large-scale datasets demonstrate that Informer
significantly outperforms existing methods and provides a new solution to the
LSTF problem. | [
"cs.LG",
"cs.AI",
"cs.IR"
] |
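A readability-first sketch of the ProbSparse idea, assuming the
query-sparsity measurement is max-minus-mean over attention logits. Note this
sketch computes exact O(L^2) logits for clarity, whereas Informer estimates
the measurement from roughly ln(L) sampled keys to reach the stated
O(L log L) cost.

import math
import torch

def probsparse_attention(q, k, v, c=5.0):
    # q, k, v: (B, L, D). Score each query by how far its attention
    # distribution is from uniform, run full attention only for the top
    # u = c*ln(L) queries, and let all other positions output the mean of
    # the values.
    B, L, D = q.shape
    logits = q @ k.transpose(-2, -1) / math.sqrt(D)        # (B, L, L)
    measure = logits.max(dim=-1).values - logits.mean(dim=-1)
    u = min(L, max(1, int(c * math.log(L))))
    top = measure.topk(u, dim=-1).indices                  # (B, u)
    out = v.mean(dim=1, keepdim=True).expand(B, L, D).clone()
    active = logits.gather(1, top.unsqueeze(-1).expand(B, u, L))
    out.scatter_(1, top.unsqueeze(-1).expand(B, u, D),
                 torch.softmax(active, dim=-1) @ v)
    return out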
Optimal Transport is a theory that makes it possible to define geometric
notions of distance between probability distributions and to find
correspondences between sets of points. Many machine learning applications are
derived from this theory, at the frontier between mathematics and optimization.
This thesis proposes to study the complex scenario in which the different data
belong to incomparable spaces. In particular we address the following
questions: how can Optimal Transport be defined and applied between graphs,
or more generally between structured data? How can it be adapted when the data are varied and not
embedded in the same metric space? This thesis proposes a set of Optimal
Transport tools for these different cases. An important part is devoted
to the study of the Gromov-Wasserstein distance, whose properties make it
possible to define interesting transport problems on incomparable spaces. More broadly, we
analyze the mathematical properties of the various proposed tools, we establish
algorithmic solutions to compute them and we study their applicability in
numerous machine learning scenarios covering, in particular, classification,
simplification, and partitioning of structured data, as well as heterogeneous
domain adaptation. | [
"stat.ML",
"cs.LG"
] |
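The Gromov-Wasserstein distance between two spaces of different dimensions
can be computed with the POT library; a small self-contained example on
random point clouds:

import numpy as np
import ot  # POT, the Python Optimal Transport toolbox

# Two point clouds in incomparable spaces (different dimensions); GW
# couples them through their intra-domain distance matrices alone.
rng = np.random.default_rng(0)
x, y = rng.normal(size=(20, 2)), rng.normal(size=(30, 3))
C1, C2 = ot.dist(x, x), ot.dist(y, y)
C1, C2 = C1 / C1.max(), C2 / C2.max()
p, q = ot.unif(20), ot.unif(30)
coupling, log = ot.gromov.gromov_wasserstein(
    C1, C2, p, q, loss_fun='square_loss', log=True)
print("GW distance:", log['gw_dist'])

The returned coupling matrix gives the soft correspondences between the two
sets of points, which is what enables applications such as heterogeneous
domain adaptation.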
Graph clustering involves the task of dividing nodes into clusters, so that
the edge density is higher within clusters as opposed to across clusters. A
natural, classic and popular statistical setting for evaluating solutions to
this problem is the stochastic block model, also referred to as the planted
partition model.
In this paper we present a new algorithm--a convexified version of Maximum
Likelihood--for graph clustering. We show that, in the classic stochastic block
model setting, it outperforms existing methods by polynomial factors when the
cluster size is allowed to have general scalings. In fact, it is within
logarithmic factors of known lower bounds for spectral methods, and there is
evidence suggesting that no polynomial time algorithm would do significantly
better.
We then show that this guarantee carries over to a more general extension of
the stochastic block model. Our method can handle the settings of semi-random
graphs, heterogeneous degree distributions, unequal cluster sizes, unaffiliated
nodes, partially observed graphs, and planted clique/coloring models. In
particular, our results provide the best exact recovery guarantees to date for
the planted partition, planted k-disjoint-cliques and planted noisy coloring
models with general cluster sizes; in other settings, we match the best
existing results up to logarithmic factors. | [
"stat.ML"
] |
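The planted partition setting is easy to instantiate; the sketch below
generates one instance and recovers clusters with a standard spectral
baseline (not the paper's convexified Maximum Likelihood program), for
comparison purposes.

import networkx as nx
from sklearn.cluster import SpectralClustering

# Planted-partition instance: 3 clusters of 50 nodes, edge probability 0.3
# inside clusters versus 0.05 across; the task is to recover the planting.
G = nx.planted_partition_graph(l=3, k=50, p_in=0.3, p_out=0.05, seed=0)
A = nx.to_numpy_array(G)
labels = SpectralClustering(n_clusters=3, affinity='precomputed',
                            random_state=0).fit_predict(A)
print(labels[:10])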