| text (string, lengths 29–3.31k) | label (sequence, lengths 1–11) |
|---|---|
Multivariate time series naturally exist in many fields, like energy,
bioinformatics, signal processing, and finance. Most of these applications
require the ability to compare such structured data. In this context, dynamic
time warping (DTW) is probably the most common comparison measure. However,
little research effort has been put into improving it through learning. In this paper, we
propose a novel method for learning similarities based on DTW, in order to
improve time series classification. Making use of the uniform stability
framework, we provide the first theoretical guarantees in the form of a
generalization bound for linear classification. The experimental study shows
that the proposed approach is efficient, while yielding sparse classifiers. | [
"cs.LG"
] |
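As an illustration of the DTW measure discussed in the abstract above, here is a minimal sketch of the classic dynamic-programming computation; the quadratic-time recurrence and the squared-difference ground cost are standard textbook choices, not details taken from the paper.

```python
import numpy as np

def dtw_distance(x, y):
    """Classic O(len(x) * len(y)) dynamic time warping distance
    between two univariate series, with squared-difference cost."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (x[i - 1] - y[j - 1]) ** 2
            # Extend the cheapest of the three admissible warping moves.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

print(dtw_distance([0.0, 1.0, 2.0], [0.0, 0.5, 1.0, 2.0]))
```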
In this paper, we focus on estimating the 6D pose of objects in point clouds.
Although the topic has been widely studied, pose estimation in point clouds
remains a challenging problem due to noise and occlusion. To address the
problem, a novel 3DPVNet is presented in this work, which utilizes 3D local
patches to vote for the object 6D poses. 3DPVNet is comprised of three modules.
In particular, a Patch Unification (\textbf{PU}) module is first introduced to
normalize the input patch, and also create a standard local coordinate frame on
it to generate a reliable vote. We then devise a Weight-guided Neighboring
Feature Fusion (\textbf{WNFF}) module in the network, which fuses the
neighboring features to yield a semi-global feature for the center patch. WNFF
module mines the neighboring information of a local patch, such that its
representation capability for local geometric characteristics is significantly
enhanced, making the method robust to a certain level of noise. Moreover, we
present a Patch-level Voting (\textbf{PV}) module to regress transformations
and generate pose votes. After aggregating all votes from the patches and
applying a refinement step, the final pose of the object is obtained. Compared
to recent voting-based methods, 3DPVNet operates at the patch level and works
directly on point clouds. It therefore requires less computation than
point/pixel-level voting schemes and is robust to partial data.
Experiments on several datasets demonstrate that 3DPVNet achieves
state-of-the-art performance and is also robust against noise and occlusions. | [
"cs.CV"
] |
Discovering the 3D atomic structure of molecules such as proteins and viruses
is a fundamental research problem in biology and medicine. Electron
Cryomicroscopy (Cryo-EM) is a promising vision-based technique for structure
estimation which attempts to reconstruct 3D structures from 2D images. This
paper addresses the challenging problem of 3D reconstruction from 2D Cryo-EM
images. A new framework for estimation is introduced which relies on modern
stochastic optimization techniques to scale to large datasets. We also
introduce a novel technique which reduces the cost of evaluating the objective
function during optimization by over five orders of magnitude. The net result
is an approach capable of estimating 3D molecular structure from large scale
datasets in about a day on a single workstation. | [
"cs.CV",
"q-bio.QM"
] |
The data distribution commonly evolves over time leading to problems such as
concept drift that often decrease classifier performance. We seek to predict
unseen data (and their labels), allowing us to tackle challenges due to a
non-constant data distribution in a \emph{proactive} manner rather than
detecting and reacting to existing changes that might already have led
to errors. To this end, we learn a domain transformer in an unsupervised manner
that allows generating data of unseen domains. Our approach first matches
independently learned latent representations of two given domains obtained from
an auto-encoder using a Cycle-GAN. In turn, a transformation of the original
samples can be learned that can be applied iteratively to extrapolate to unseen
domains. Our evaluation with CNNs on image data confirms the usefulness of the
approach. It also achieves very good results on the well-known problem of
unsupervised domain adaptation, where labels but not samples have to be
predicted. | [
"cs.LG",
"cs.AI"
] |
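To make the extrapolation idea above concrete, here is a minimal sketch that simplifies the paper's Cycle-GAN-based transformer down to a linear map fitted between matched latent codes (an assumption for illustration only); fitting the map once and applying it repeatedly extrapolates the drift to unseen domains.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical matched latent codes from two observed domains.
z_domain_a = rng.normal(size=(500, 16))
z_domain_b = z_domain_a @ np.diag(np.linspace(1.0, 1.3, 16))  # simulated drift

# Fit a linear transformation T with least squares: z_b ~ z_a @ T.
T, *_ = np.linalg.lstsq(z_domain_a, z_domain_b, rcond=None)

# Applying T iteratively extrapolates the drift to unseen future domains.
z_future = z_domain_b @ T           # predicted domain one step ahead
z_future2 = z_future @ T            # two steps ahead
print(np.allclose(z_future, z_domain_a @ T @ T))  # True
```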
Undirected neural sequence models such as BERT (Devlin et al., 2019) have
received renewed interest due to their success on discriminative natural
language understanding tasks such as question-answering and natural language
inference. The problem of generating sequences directly from these models has
received relatively little attention, in part because generating from
undirected models departs significantly from conventional monotonic generation
in directed sequence models. We investigate this problem by proposing a
generalized model of sequence generation that unifies decoding in directed and
undirected models. The proposed framework models the process of generation
rather than the resulting sequence, and under this framework, we derive various
neural sequence models as special cases, such as autoregressive,
semi-autoregressive, and refinement-based non-autoregressive models. This
unification enables us to adapt decoding algorithms originally developed for
directed sequence models to undirected sequence models. We demonstrate this by
evaluating various handcrafted and learned decoding strategies on a BERT-like
machine translation model (Lample & Conneau, 2019). The proposed approach
achieves constant-time translation results on par with linear-time translation
results from the same undirected sequence model, while both are competitive
with the state-of-the-art on WMT'14 English-German translation. | [
"cs.LG",
"cs.CL",
"stat.ML"
] |
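The generalized decoding idea in the abstract above can be illustrated with a minimal mask-predict loop; the `predict_logits` stub below is a hypothetical placeholder for a BERT-like model (not the paper's actual translation model), and tokens are iteratively re-masked and re-predicted from least to most confident over a constant number of passes.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, LENGTH, MASK = 100, 8, 0

def predict_logits(tokens):
    """Stub for an undirected (BERT-like) model: returns per-position
    logits over the vocabulary. A real model conditions on all tokens."""
    return rng.normal(size=(len(tokens), VOCAB))

tokens = np.full(LENGTH, MASK)          # start fully masked
for step in range(4):                   # constant number of refinement passes
    logits = predict_logits(tokens)
    probs = np.exp(logits) / np.exp(logits).sum(-1, keepdims=True)
    tokens = probs.argmax(-1)
    conf = probs.max(-1)
    # Re-mask the least confident positions; the fraction shrinks each pass.
    n_mask = int(LENGTH * (1 - (step + 1) / 4))
    tokens[np.argsort(conf)[:n_mask]] = MASK
print(tokens)
```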
The attention mechanism enables graph neural networks (GNNs) to learn
attention weights between the target node and its one-hop neighbors, further
improving performance. However, most existing GNNs are oriented to
homogeneous graphs, and each layer can only aggregate information from
one-hop neighbors. Stacking multiple layers introduces considerable noise
and easily leads to over-smoothing. We propose a Multi-hop Heterogeneous
Neighborhood information Fusion graph representation learning method (MHNF).
Specifically, we first propose a hybrid metapath autonomous extraction model to
efficiently extract multi-hop hybrid neighbors. Then, we propose a hop-level
heterogeneous information aggregation model, which selectively aggregates
different-hop neighborhood information within the same hybrid metapath.
Finally, a hierarchical semantic attention fusion model (HSAF) is proposed,
which can efficiently integrate different-hop and different-path neighborhood
information. This approach solves the problem of aggregating multi-hop
neighborhood information and learns hybrid metapaths for the target task,
reducing the limitation of manually specified metapaths. In addition,
HSAF can extract the internal node information of the metapaths and better
integrate the semantic information of different levels. Experimental results on
real datasets show that MHNF is superior to state-of-the-art methods in node
classification and clustering tasks (10.94% - 69.09% and 11.58% - 394.93%
relative improvement on average, respectively). | [
"cs.LG",
"cs.AI"
] |
Most current studies on human gaze and saliency modeling have used
high-quality stimuli. In the real world, however, captured images undergo
various types of distortions along the whole acquisition, transmission, and
display chain. Some distortion types include motion blur, lighting variations,
and rotation. Despite a few efforts, the influence of ubiquitous distortions on
visual attention and saliency models has not been systematically investigated. In
this paper, we first create a large-scale database including eye movements of
10 observers over 1900 images degraded by 19 types of distortions. Second, by
analyzing eye movements and saliency models, we find that: a) observers look at
different locations over distorted versus original images, and b) performances
of saliency models are drastically hindered over distorted images, with the
maximum performance drop belonging to Rotation and Shearing distortions.
Finally, we investigate the effectiveness of different distortions when serving
as data augmentation transformations. Experimental results verify that some
useful data augmentation transformations which preserve human gaze of reference
images can improve deep saliency models against distortions, while some invalid
transformations which severely change human gaze will degrade the performance. | [
"cs.CV"
] |
In this paper, we address the open question: "What do adversarially robust
models look at?" Recently, it has been reported in many works that there exists
a trade-off between standard accuracy and adversarial robustness. According
to prior works, this trade-off is rooted in the fact that adversarially robust
and standard accurate models might depend on very different sets of features.
However, it has not been well studied what kind of difference actually exists.
In this paper, we analyze this difference through various experiments visually
and quantitatively. Experimental results show that adversarially robust models
look at things at a larger scale than standard models and pay less attention to
fine textures. Furthermore, although it has been claimed that adversarially
robust features are not compatible with standard accuracy, using them as
pre-trained models can even have a positive effect, particularly on
low-resolution datasets. | [
"cs.CV"
] |
Since DETR was proposed, this novel transformer-based detection paradigm,
which performs several rounds of cross-attention between object queries and
feature maps to make predictions, has given rise to a series of
transformer-based detection heads. These models update the object queries
after each cross-attention. However, they do not renew the query position,
which encodes the object queries' positional information. The model therefore
needs extra learning to figure out which regions the query position should
currently express and attend to. To fix this issue, we propose the Guided
Query Position (GQPos) method, which iteratively embeds the latest location
information of the object queries into the query position.
Another problem of such transformer-based detection heads is the high
complexity of performing attention on multi-scale feature maps, which hinders
them from improving detection performance at all scales. We therefore propose
a novel fusion scheme named Similar Attention (SiA): besides fusing the
feature maps, SiA also fuses the attention weight maps, so that a well-learned
low-resolution attention weight map accelerates the learning of its
high-resolution counterpart.
Our experiments show that the proposed GQPos improves the performance of a
series of models, including DETR, SMCA, YoloS, and HoiTransformer, and that SiA
consistently improves the performance of multi-scale transformer-based detection
heads like DETR and HoiTransformer. | [
"cs.CV"
] |
Attention-based architectures have become ubiquitous in machine learning, yet
our understanding of the reasons for their effectiveness remains limited. This
work proposes a new way to understand self-attention networks: we show that
their output can be decomposed into a sum of smaller terms, each involving the
operation of a sequence of attention heads across layers. Using this
decomposition, we prove that self-attention possesses a strong inductive bias
towards "token uniformity". Specifically, without skip connections or
multi-layer perceptrons (MLPs), the output converges doubly exponentially to a
rank-1 matrix. On the other hand, skip connections and MLPs stop the output
from degeneration. Our experiments verify the identified convergence phenomena
on different variants of standard transformer architectures. | [
"cs.LG"
] |
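The token-uniformity bias described above is easy to reproduce numerically; the following sketch (with arbitrary dimensions and random weights, not taken from the paper) stacks pure self-attention layers without skip connections or MLPs and tracks how fast the output approaches a rank-1 matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
n_tokens, d = 16, 32

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

X = rng.normal(size=(n_tokens, d))
for layer in range(1, 9):
    Wq, Wk, Wv = (rng.normal(size=(d, d)) / np.sqrt(d) for _ in range(3))
    A = softmax((X @ Wq) @ (X @ Wk).T / np.sqrt(d))
    X = A @ (X @ Wv)                      # attention only: no skip, no MLP
    # Distance to the nearest rank-1 matrix, via the singular values.
    s = np.linalg.svd(X, compute_uv=False)
    print(f"layer {layer}: residual beyond rank-1 = {s[1:].sum() / s.sum():.2e}")
```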
Deep generative models are known to be able to model arbitrary probability
distributions. Among these, a recent deep generative model, dubbed sliceGAN,
proposed a new way of using the generative adversarial network (GAN) to capture
the micro-structural characteristics of a two-dimensional (2D) slice and
generate three-dimensional (3D) volumes with similar properties. While 3D
micrographs are largely beneficial in simulating diverse material behavior,
they are often much harder to obtain than their 2D counterparts. Hence,
sliceGAN opens up many interesting directions of research by learning the
representative distribution from 2D slices, and transferring the learned
knowledge to generate arbitrary 3D volumes. However, one limitation of sliceGAN
is that latent space steering is not possible. Hence, we combine sliceGAN with
AdaIN to endow the model with the ability to disentangle the features and
control the synthesis. | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
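As background for the AdaIN-based steering mentioned above, adaptive instance normalization re-normalizes content features to match the channel-wise statistics of a style input; the sketch below shows the standard operation (a generic illustration, not sliceGAN's exact integration).

```python
import numpy as np

def adain(content, style, eps=1e-5):
    """Adaptive instance normalization for feature maps of shape (C, H, W):
    whiten each content channel, then rescale with the style statistics."""
    c_mean = content.mean(axis=(1, 2), keepdims=True)
    c_std = content.std(axis=(1, 2), keepdims=True) + eps
    s_mean = style.mean(axis=(1, 2), keepdims=True)
    s_std = style.std(axis=(1, 2), keepdims=True) + eps
    return s_std * (content - c_mean) / c_std + s_mean

rng = np.random.default_rng(0)
out = adain(rng.normal(size=(8, 4, 4)), rng.normal(size=(8, 4, 4)))
print(out.shape)  # (8, 4, 4)
```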
Typical neural networks with external memory do not effectively separate
capacity for episodic and working memory as is required for reasoning in
humans. Applying knowledge gained from psychological studies, we designed a new
model called Differentiable Working Memory (DWM) in order to specifically
emulate human working memory. As it shows the same functional characteristics
as working memory, it robustly learns psychology-inspired tasks and converges
faster than comparable state-of-the-art models. Moreover, the DWM model
successfully generalizes to sequences two orders of magnitude longer than the
ones used in training. Our in-depth analysis shows that the behavior of DWM is
interpretable and that it learns to have fine control over memory, allowing it
to retain, ignore or forget information based on its relevance. | [
"cs.LG",
"cs.NE",
"stat.ML",
"I.2.6"
] |
Message-Passing Neural Networks (MPNNs), the most prominent Graph Neural
Network (GNN) framework, celebrate much success in the analysis of
graph-structured data. Concurrently, the sparsification of Neural Network
models attracts a great amount of academic and industrial interest. In this
paper, we conduct a structured study of the effect of sparsification on the
trainable part of MPNNs known as the Update step. To this end, we design a
series of models to successively sparsify the linear transform in the Update
step. Specifically, we propose the ExpanderGNN model with a tuneable
sparsification rate and the Activation-Only GNN, which has no linear transform
in the Update step. In agreement with a growing trend in the literature, we
change the sparsification paradigm by initialising sparse neural network
architectures rather than expensively sparsifying already-trained ones. Our
novel benchmark models enable a better understanding of the
influence of the Update step on model performance and outperform existing
simplified benchmark models such as the Simple Graph Convolution. The
ExpanderGNNs, and in some cases the Activation-Only models, achieve performance
on par with their vanilla counterparts on several downstream tasks while
containing significantly fewer trainable parameters. In experiments with
matching parameter numbers, our benchmark models outperform the
state-of-the-art GNN models. Our code is publicly available at:
https://github.com/ChangminWu/ExpanderGNN. | [
"cs.LG",
"cs.AI",
"cs.SI"
] |
Deep neural networks have enhanced the performance of decision making systems
in many applications including image understanding, and further gains can be
achieved by constructing ensembles. However, designing an ensemble of deep
networks is often not very beneficial since the time needed to train the
networks is very high or the performance gain obtained is not very significant.
In this paper, we analyse the error-correcting output coding (ECOC) framework to be
used as an ensemble technique for deep networks and propose different design
strategies to address the accuracy-complexity trade-off. We carry out an
extensive comparative study between the introduced ECOC designs and the
state-of-the-art ensemble techniques such as ensemble averaging and gradient
boosting decision trees. Furthermore, we propose a combinatory technique which
is shown to achieve the highest classification performance amongst all methods considered. | [
"cs.LG",
"stat.ML",
"68T07,",
"I.5.2; I.2.0"
] |
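For readers unfamiliar with ECOC, the sketch below shows the core idea with a toy code matrix and nearest-codeword Hamming decoding; the matrix and decoding rule are generic textbook choices, not the designs proposed in the paper.

```python
import numpy as np

# Toy code matrix: each row is the codeword of one class, each column
# defines one binary classifier's target labels (+1 / -1).
code = np.array([[+1, +1, +1, -1],
                 [+1, -1, -1, +1],
                 [-1, +1, -1, -1]])          # 3 classes, 4 binary learners

def ecoc_decode(binary_outputs):
    """Assign the class whose codeword has the smallest Hamming distance
    to the vector of binary classifier outputs."""
    dists = (code != np.sign(binary_outputs)).sum(axis=1)
    return int(np.argmin(dists))

# One classifier erred (second output flipped); decoding still recovers class 0.
print(ecoc_decode(np.array([+1, -1, +1, -1])))  # -> 0
```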
How do humans navigate to target objects in novel scenes? Do we use the
semantic/functional priors we have built over years to efficiently search and
navigate? For example, to search for mugs, we search cabinets near the coffee
machine and for fruits we try the fridge. In this work, we focus on
incorporating semantic priors in the task of semantic navigation. We propose to
use Graph Convolutional Networks for incorporating the prior knowledge into a
deep reinforcement learning framework. The agent uses the features from the
knowledge graph to predict the actions. For evaluation, we use the AI2-THOR
framework. Our experiments show how semantic knowledge improves performance
significantly. More importantly, we show improvement in generalization to
unseen scenes and/or objects. The supplementary video can be accessed at the
following link: https://youtu.be/otKjuO805dE . | [
"cs.CV",
"cs.AI",
"cs.RO"
] |
The reinforcement learning (RL) research area is very active, with several
important applications. However, certain challenges still need to be addressed,
amongst which one can mention the ability to find policies that achieve
sufficient exploration and coordination while solving a given task. In this
work, we present an algorithmic framework of two RL agents each with a
different objective. We introduce a novel function approximation approach to
assess the influence $F$ of a certain policy on others. While optimizing $F$ as
a regularizer of $\pi$'s objective, agents learn to coordinate team behavior
while exploiting high-reward regions of the solution space. Additionally, both
agents use prediction error as intrinsic motivation to learn policies that
behave as differently as possible, thus achieving the exploration criterion.
Our method was evaluated on the suite of OpenAI gym tasks as well as
cooperative and mixed scenarios, where agent populations are able to discover
various physical and informational coordination strategies, showing
state-of-the-art performance when compared to well-known baselines. | [
"cs.LG"
] |
Accurate rainfall forecasting is critical because it has a great impact on
people's social and economic activities. Recent literature shows that deep
learning (neural networks) is a promising methodology for tackling
many challenging tasks. In this study, we introduce a brand-new data-driven
precipitation prediction model called DeepRain. This model predicts the amount
of rainfall from weather radar data, which is three-dimensional and
four-channel data, using convolutional LSTM (ConvLSTM). ConvLSTM is a variant
of LSTM (Long Short-Term Memory) containing a convolution operation inside the
LSTM cell. For the experiment, we used radar reflectivity data for a two-year
period, with inputs given as time series at 6-minute intervals divided into 15
records. The output is the predicted rainfall information for the input data.
Experimental results show that two-stacked ConvLSTM reduced RMSE by 23.0%
compared to linear regression. | [
"cs.LG"
] |
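A minimal ConvLSTM cell along the lines described above can be sketched in PyTorch; the channel sizes and kernel size here are arbitrary illustrative choices, not DeepRain's actual configuration.

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """LSTM cell whose gates are computed with a convolution, so hidden
    state and cell state keep their spatial layout (B, C, H, W)."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.hid_ch = hid_ch
        # One conv produces all four gates from [input, hidden] stacked.
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, h, c):
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        i, f, o, g = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o), torch.tanh(g)
        c = f * c + i * g
        h = o * torch.tanh(c)
        return h, c

cell = ConvLSTMCell(in_ch=4, hid_ch=8)
x = torch.randn(2, 4, 16, 16)
h = c = torch.zeros(2, 8, 16, 16)
for _ in range(15):           # e.g. 15 radar frames at 6-minute intervals
    h, c = cell(x, h, c)
print(h.shape)  # torch.Size([2, 8, 16, 16])
```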
We consider the problem of clustering data that reside on discrete, low
dimensional lattices. Canonical examples for this setting are found in image
segmentation and key point extraction. Our solution is based on a recent
approach to information theoretic clustering where clusters result from an
iterative procedure that minimizes a divergence measure. We replace costly
processing steps in the original algorithm by means of convolutions. These
allow for highly efficient implementations and thus significantly reduce
runtime. This paper therefore bridges a gap between machine learning and signal
processing. | [
"cs.CV"
] |
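The convolution trick mentioned above can be illustrated generically: when cluster statistics are needed over local lattice neighborhoods, a costly per-pixel Python loop can be replaced by a single filtering pass. The sketch below is a simplified stand-in for the paper's divergence-minimizing updates, using `scipy.ndimage.uniform_filter` for neighborhood means.

```python
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(0)
image = rng.random((256, 256))

# Naive version: for every pixel, average a (2r+1) x (2r+1) neighborhood
# with an explicit loop -- O(H * W * r^2) and very slow in Python.
# Convolutional version: one separable filtering pass, same result.
r = 5
local_mean = uniform_filter(image, size=2 * r + 1, mode="nearest")
print(local_mean.shape, local_mean.dtype)
```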
Learning interpretable and interpolatable latent representations has been an
emerging research direction, allowing researchers to understand and utilize the
derived latent space for further applications such as visual synthesis or
recognition. While most existing approaches derive an interpolatable latent
space and induce smooth transitions in image appearance, it is still not clear
how to obtain desirable representations which would contain semantic
information of interest. In this paper, we aim to learn meaningful
representations and simultaneously perform semantic-oriented and
visually-smooth interpolation. To this end, we propose an angular
triplet-neighbor loss (ATNL) that enables learning a latent representation
whose distribution matches the semantic information of interest. With the
latent space guided by ATNL, we further utilize spherical semantic
interpolation for generating semantic warping of images, allowing synthesis of
desirable visual data. Experiments on MNIST and CMU Multi-PIE datasets
qualitatively and quantitatively verify the effectiveness of our method. | [
"cs.CV"
] |
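The spherical interpolation mentioned above is the standard slerp operation; here is a minimal sketch of it (the general formula, not any paper-specific variant).

```python
import numpy as np

def slerp(z0, z1, t):
    """Spherical linear interpolation between latent vectors z0 and z1,
    following the great-circle arc instead of the straight chord."""
    z0n, z1n = z0 / np.linalg.norm(z0), z1 / np.linalg.norm(z1)
    omega = np.arccos(np.clip(np.dot(z0n, z1n), -1.0, 1.0))
    if np.isclose(omega, 0.0):
        return (1 - t) * z0 + t * z1      # nearly parallel: fall back to lerp
    return (np.sin((1 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)

rng = np.random.default_rng(0)
a, b = rng.normal(size=8), rng.normal(size=8)
midpoint = slerp(a, b, 0.5)
print(np.linalg.norm(midpoint))
```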
Lipschitz constants of neural networks have been explored in various contexts
in deep learning, such as provable adversarial robustness, estimating
Wasserstein distance, stabilising training of GANs, and formulating invertible
neural networks. Such works have focused on bounding the Lipschitz constant of
fully connected or convolutional networks, composed of linear maps and
pointwise non-linearities. In this paper, we investigate the Lipschitz constant
of self-attention, a non-linear neural network module widely used in sequence
modelling. We prove that the standard dot-product self-attention is not
Lipschitz for an unbounded input domain, and propose an alternative L2
self-attention that is Lipschitz. We derive an upper bound on the Lipschitz
constant of L2 self-attention and provide empirical evidence for its asymptotic
tightness. To demonstrate the practical relevance of our theoretical work, we
formulate invertible self-attention and use it in a Transformer-based
architecture for a character-level language modelling task. | [
"stat.ML",
"cs.LG"
] |
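To make the contrast concrete, the sketch below computes attention scores from negative squared L2 distances between queries and keys instead of dot products; this captures the general flavor of L2 self-attention, though the paper's exact parameterization (e.g., any weight tying needed for the Lipschitz guarantee) may differ.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def l2_self_attention(X, Wq, Wk, Wv):
    """Self-attention with scores -||q_i - k_j||^2 / sqrt(d)
    in place of the usual dot product q_i . k_j / sqrt(d)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d = Q.shape[-1]
    sq_dists = ((Q[:, None, :] - K[None, :, :]) ** 2).sum(-1)
    return softmax(-sq_dists / np.sqrt(d)) @ V

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 16))
W = [rng.normal(size=(16, 16)) / 4 for _ in range(3)]
print(l2_self_attention(X, *W).shape)  # (6, 16)
```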
In this paper, we evaluate dimensionality reduction methods in terms of
difficulty in estimating visual information on original images from
dimensionally reduced ones. Recently, dimensionality reduction has been
receiving attention as the process of not only reducing the number of random
variables, but also protecting visual information for privacy-preserving
machine learning. For this reason, the difficulty of estimating visual
information is discussed. In particular, the random sampling method that was
proposed for privacy-preserving machine learning is compared with typical
dimensionality reduction methods. In an image classification experiment, the
random sampling method is demonstrated not only to make such estimation
difficult, but also to be comparable to other dimensionality reduction methods,
while maintaining the property of spatial information invariance. | [
"cs.CV"
] |
Recently, generating adversarial examples has become an important means of
measuring robustness of a deep learning model. Adversarial examples help us
identify the susceptibilities of the model and further counter those
vulnerabilities by applying adversarial training techniques. In the natural
language domain, small perturbations in the form of misspellings or paraphrases
can drastically change the semantics of the text. We propose a reinforcement
learning based approach towards generating adversarial examples in black-box
settings. We demonstrate that our method is able to fool well-trained models
for (a) IMDB sentiment classification task and (b) AG's news corpus news
categorization task with high success rates. We find that the
adversarial examples generated are semantics-preserving perturbations to the
original text. | [
"cs.LG",
"cs.CL",
"cs.IR",
"stat.ML"
] |
Automatic body part recognition for CT slices can benefit various medical
image applications. Recent deep learning methods demonstrate promising
performance, with the requirement of large amounts of labeled images for
training. The intrinsic structural or superior-inferior slice ordering
information in CT volumes is not fully exploited. In this paper, we propose a
convolutional neural network (CNN) based Unsupervised Body part Regression
(UBR) algorithm to address this problem. A novel unsupervised learning method
and two inter-sample CNN loss functions are presented. Distinct from previous
work, UBR builds a coordinate system for the human body and outputs a
continuous score for each axial slice, representing the normalized position of
the body part in the slice. The training process of UBR resembles a
self-organization process: slice scores are learned from inter-slice
relationships. The training samples are unlabeled CT volumes that are abundant,
thus no extra annotation effort is needed. UBR is simple, fast, and accurate.
Quantitative and qualitative experiments validate its effectiveness. In
addition, we show two applications of UBR in network initialization and anomaly
detection. | [
"cs.CV"
] |
Robots learning from observations in the real world using inverse
reinforcement learning (IRL) may encounter objects or agents in the
environment, other than the expert, that cause nuisance observations during the
demonstration. These confounding elements are typically removed in
fully-controlled environments such as virtual simulations or lab settings. When
complete removal is impossible, the nuisance observations must be filtered out.
However, identifying the source of observations when large amounts of
observations are made is difficult. To address this, we present a hierarchical
Bayesian model that incorporates both the expert's and the confounding
elements' observations thereby explicitly modeling the diverse observations a
robot may receive. We extend an existing IRL algorithm originally designed to
work under partial occlusion of the expert to consider the diverse
observations. In a simulated robotic sorting domain containing both occlusion
and confounding elements, we demonstrate the model's effectiveness. In
particular, our technique outperforms several other comparative methods, second
only to having perfect knowledge of the subject's trajectory. | [
"cs.LG",
"cs.RO",
"I.2.6; I.2.9"
] |
Collaborative filtering (CF) is a successful approach commonly used by many
recommender systems. Conventional CF-based methods use the ratings given to
items by users as the sole source of information for learning to make
recommendation. However, the ratings are often very sparse in many
applications, causing CF-based methods to degrade significantly in their
recommendation performance. To address this sparsity problem, auxiliary
information such as item content information may be utilized. Collaborative
topic regression (CTR) is an appealing recent method taking this approach which
tightly couples the two components that learn from two different sources of
information. Nevertheless, the latent representation learned by CTR may not be
very effective when the auxiliary information is very sparse. To address this
problem, we generalize recent advances in deep learning from i.i.d. input to
non-i.i.d. (CF-based) input and propose in this paper a hierarchical Bayesian
model called collaborative deep learning (CDL), which jointly performs deep
representation learning for the content information and collaborative filtering
for the ratings (feedback) matrix. Extensive experiments on three real-world
datasets from different domains show that CDL can significantly advance the
state of the art. | [
"cs.LG",
"cs.CL",
"cs.IR",
"cs.NE",
"stat.ML"
] |
Multitask learning and transfer learning have proven to be useful in the
field of machine learning when additional knowledge is available to help a
prediction task. We aim at deriving methods following these paradigms for use
in autotuning, where the goal is to find the optimal performance parameters of
an application treated as a black-box function. We show comparative results
with state-of-the-art autotuning techniques. For instance, we observe an
average $1.5x$ improvement of the application runtime compared to the OpenTuner
and HpBandSter autotuners. We explain how our approaches can be more suitable
than some state-of-the-art autotuners for the tuning of any application in
general and of expensive exascale applications in particular. | [
"cs.LG",
"cs.DC",
"stat.ML"
] |
Over the years, datasets and benchmarks have had an outsized influence on the
design of novel algorithms. In this paper, we introduce ChairSegments, a novel
and compact semi-synthetic dataset for object segmentation. We also show
empirical findings in transfer learning that mirror recent findings for image
classification. We particularly show that models that are fine-tuned from a
pretrained set of weights lie in the same basin of the optimization landscape.
ChairSegments consists of a diverse set of prototypical images of chairs with
transparent backgrounds composited into a diverse array of backgrounds. We aim
for ChairSegments to be the equivalent of the CIFAR-10 dataset but for quickly
designing and iterating over novel model architectures for segmentation. On
ChairSegments, a U-Net model can be trained to full convergence in only thirty
minutes using a single GPU. Finally, while this dataset is semi-synthetic, it
can be a useful proxy for real data, leading to state-of-the-art accuracy on
the Object Discovery dataset when used as a source of pretraining. | [
"cs.CV",
"cs.LG"
] |
Scalable Vector Graphics (SVG) are ubiquitous in modern 2D interfaces due to
their ability to scale to different resolutions. However, despite the success
of deep learning-based models applied to rasterized images, the problem of
vector graphics representation learning and generation remains largely
unexplored. In this work, we propose a novel hierarchical generative network,
called DeepSVG, for complex SVG icon generation and interpolation. Our
architecture effectively disentangles high-level shapes from the low-level
commands that encode the shape itself. The network directly predicts a set of
shapes in a non-autoregressive fashion. We introduce the task of complex SVG
icon generation by releasing a new large-scale dataset along with an
open-source library for SVG manipulation. We demonstrate that our network
learns to accurately reconstruct diverse vector graphics, and can serve as a
powerful animation tool by performing interpolations and other latent space
operations. Our code is available at https://github.com/alexandre01/deepsvg. | [
"cs.CV"
] |
The remarkable performance of deep neural networks depends on the
availability of massive labeled data. To alleviate the load of data annotation,
active deep learning aims to select a minimal set of training points to be
labelled which yields maximal model accuracy. Most existing approaches
implement either an `exploration'-type selection criterion, which aims at
exploring the joint distribution of data and labels, or a `refinement'-type
criterion which aims at localizing the detected decision boundaries. We propose
a versatile and efficient criterion that automatically switches from
exploration to refinement when the distribution has been sufficiently mapped.
Our criterion relies on a process of diffusing the existing label information
over a graph constructed from the hidden representation of the data set as
provided by the neural network. This graph representation captures the
intrinsic geometry of the approximated labeling function. The diffusion-based
criterion is shown to be advantageous as it outperforms existing criteria for
deep active learning. | [
"cs.LG",
"stat.ML"
] |
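The diffusion process underlying the criterion above is essentially graph label propagation; a minimal sketch is given below, assuming a row-normalized affinity matrix built from the network's hidden representations (generic label propagation, not the paper's full switching criterion).

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_classes = 100, 3

# Hypothetical symmetric affinity graph over hidden representations.
W = rng.random((n, n))
W = (W + W.T) / 2
np.fill_diagonal(W, 0)
S = W / W.sum(axis=1, keepdims=True)      # row-normalized transition matrix

Y = np.zeros((n, n_classes))              # one-hot labels for labeled points
Y[rng.choice(n, 10, replace=False)] = np.eye(n_classes)[rng.integers(0, 3, 10)]

F, alpha = Y.copy(), 0.9
for _ in range(50):                       # diffuse labels over the graph
    F = alpha * S @ F + (1 - alpha) * Y
pseudo_labels = F.argmax(axis=1)          # diffused label estimates
print(pseudo_labels[:10])
```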
Many real-world tasks such as classification of digital histopathology images
and 3D object detection involve learning from a set of instances. In these
cases, only a group of instances or a set, collectively, contains meaningful
information and therefore only the sets have labels, and not individual data
instances. In this work, we present a permutation invariant neural network
called Memory-based Exchangeable Model (MEM) for learning set functions. The
MEM model consists of memory units that embed an input sequence to high-level
features enabling the model to learn inter-dependencies among instances through
a self-attention mechanism. We evaluated the learning ability of MEM on various
toy datasets, point cloud classification, and classification of lung whole
slide images (WSIs) into two subtypes of lung cancer: Lung Adenocarcinoma and
Lung Squamous Cell Carcinoma. We systematically extracted patches from lung
WSIs downloaded from The Cancer Genome Atlas (TCGA) dataset, the largest public
repository of WSIs, achieving a competitive accuracy of 84.84\% for
classification of two sub-types of lung cancer. The results on other datasets
are promising as well, and demonstrate the efficacy of our model. | [
"cs.LG",
"cs.CV",
"stat.ML"
] |
We introduce NoisyNet, a deep reinforcement learning agent with parametric
noise added to its weights, and show that the induced stochasticity of the
agent's policy can be used to aid efficient exploration. The parameters of the
noise are learned with gradient descent along with the remaining network
weights. NoisyNet is straightforward to implement and adds little computational
overhead. We find that replacing the conventional exploration heuristics for
A3C, DQN and dueling agents (entropy reward and $\epsilon$-greedy respectively)
with NoisyNet yields substantially higher scores for a wide range of Atari
games, in some cases advancing the agent from sub- to super-human performance. | [
"cs.LG",
"stat.ML"
] |
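A noisy linear layer in the spirit of the above can be sketched as follows; this uses the factorised Gaussian noise variant with learnable noise scales (a common formulation of NoisyNet layers, written from general knowledge rather than the paper's code).

```python
import torch
import torch.nn as nn

class NoisyLinear(nn.Module):
    """Linear layer y = (mu_w + sigma_w * eps_w) x + (mu_b + sigma_b * eps_b),
    with factorised Gaussian noise eps resampled on every call."""
    def __init__(self, in_f, out_f, sigma0=0.5):
        super().__init__()
        self.mu_w = nn.Parameter(torch.empty(out_f, in_f).uniform_(-1, 1) / in_f ** 0.5)
        self.mu_b = nn.Parameter(torch.empty(out_f).uniform_(-1, 1) / in_f ** 0.5)
        self.sigma_w = nn.Parameter(torch.full((out_f, in_f), sigma0 / in_f ** 0.5))
        self.sigma_b = nn.Parameter(torch.full((out_f,), sigma0 / in_f ** 0.5))

    @staticmethod
    def _f(x):                  # noise-shaping function f(x) = sign(x) * sqrt(|x|)
        return x.sign() * x.abs().sqrt()

    def forward(self, x):
        eps_in = self._f(torch.randn(self.mu_w.shape[1]))
        eps_out = self._f(torch.randn(self.mu_w.shape[0]))
        w = self.mu_w + self.sigma_w * torch.outer(eps_out, eps_in)
        b = self.mu_b + self.sigma_b * eps_out
        return x @ w.T + b

layer = NoisyLinear(4, 2)
print(layer(torch.randn(3, 4)).shape)  # torch.Size([3, 2])
```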
We study the problem of estimating, in the sense of optimal transport
metrics, a measure which is assumed supported on a manifold embedded in a
Hilbert space. By establishing a precise connection between optimal transport
metrics, optimal quantization, and learning theory, we derive new probabilistic
bounds for the performance of a classic algorithm in unsupervised learning
(k-means), when used to produce a probability measure derived from the data. In
the course of the analysis, we arrive at new lower bounds, as well as
probabilistic upper bounds on the convergence rate of the empirical law of
large numbers, which, unlike existing bounds, are applicable to a wide class of
measures. | [
"cs.LG",
"stat.ML",
"K.3.2"
] |
Deep neural network models represent the state-of-the-art methodologies for
natural language processing. Here we build on top of these methodologies to
incorporate temporal information and model how review data changes with
time. Specifically, we use the dynamic representations of recurrent point
process models, which encode the history of how business or service reviews are
received in time, to generate instantaneous language models with improved
prediction capabilities. Simultaneously, our methodologies enhance the
predictive power of our point process models by incorporating summarized review
content representations. We provide recurrent network and temporal convolution
solutions for modeling the review content. We deploy our methodologies in the
context of recommender systems, effectively characterizing the change in
preference and taste of users as time evolves. Source code is available at [1]. | [
"cs.LG",
"cs.AI",
"cs.CL"
] |
Machine learning models have demonstrated vulnerability to adversarial
attacks, more specifically misclassification of adversarial examples.
In this paper, we investigate an attack-agnostic defense against adversarial
attacks on high-resolution images by detecting suspicious inputs.
The intuition behind our approach is that the essential characteristics of a
normal image are generally consistent with non-essential style transformations,
e.g., slightly changing the facial expression of human portraits.
In contrast, adversarial examples are generally sensitive to such
transformations.
In our approach to detect adversarial instances, we propose an
in\underline{V}ertible \underline{A}utoencoder based on the
\underline{S}tyleGAN2 generator via \underline{A}dversarial training (VASA) to
invert images into disentangled latent codes that reveal hierarchical styles.
We then build a set of edited copies with non-essential style transformations
by performing latent shifting and reconstruction, based on the correspondences
between latent codes and style transformations.
The classification-based consistency of these edited copies is used to
distinguish adversarial instances. | [
"cs.CV",
"cs.LG"
] |
In this paper we propose a new method to learn the underlying acyclic mixed
graph of a linear non-Gaussian structural equation model given observational
data. We build on an algorithm proposed by Wang and Drton, and we show that one
can augment the hidden variable structure of the recovered model by learning
{\em multidirected edges} rather than only directed and bidirected ones.
Multidirected edges appear when more than two of the observed variables have a
hidden common cause. We detect the presence of such hidden causes by looking at
higher order cumulants and exploiting the multi-trek rule. Our method recovers
the correct structure when the underlying graph is a bow-free acyclic mixed
graph with potential multidirected edges. | [
"cs.LG",
"math.ST",
"stat.ML",
"stat.TH",
"62H22, 62R01, 62J99"
] |
This paper deals with the scarcity of data for training optical flow
networks, highlighting the limitations of existing sources such as labeled
synthetic datasets or unlabeled real videos. Specifically, we introduce a
framework to generate accurate ground-truth optical flow annotations quickly
and in large amounts from any readily available single real picture. Given an
image, we use an off-the-shelf monocular depth estimation network to build a
plausible point cloud for the observed scene. Then, we virtually move the
camera in the reconstructed environment with known motion vectors and rotation
angles, allowing us to synthesize both a novel view and the corresponding
optical flow field connecting each pixel in the input image to the one in the
new frame. When trained with our data, state-of-the-art optical flow networks
achieve superior generalization to unseen real data compared to the same models
trained either on annotated synthetic datasets or unlabeled videos, and better
specialization if combined with synthetic images. | [
"cs.CV"
] |
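The flow-synthesis step described above amounts to standard projective geometry: back-project each pixel with its depth, move the camera, re-project, and subtract pixel coordinates. The sketch below illustrates this with a toy pinhole intrinsics matrix and a pure translation (all values arbitrary; the actual framework also handles rotations and occlusions).

```python
import numpy as np

H, W = 4, 6
K = np.array([[100.0, 0, W / 2],        # toy pinhole intrinsics
              [0, 100.0, H / 2],
              [0, 0, 1.0]])
depth = np.full((H, W), 5.0)            # per-pixel depth (from a monocular
                                        # depth network in the paper)
t = np.array([0.1, 0.0, 0.0])           # known virtual camera translation

# Back-project pixels, shift by -t (points move opposite to the camera),
# and re-project to get the corresponding pixel in the novel view.
u, v = np.meshgrid(np.arange(W), np.arange(H))
pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T
pts = np.linalg.inv(K) @ (pix * depth.reshape(-1))      # 3D points
pts_new = pts - t[:, None]
proj = K @ pts_new
proj = (proj[:2] / proj[2]).T.reshape(H, W, 2)

flow = proj - np.stack([u, v], axis=-1)  # ground-truth optical flow field
print(flow[0, 0])                        # constant flow for a fronto-parallel plane
```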
Most existing interpretable methods explain a black-box model in a post-hoc
manner, which uses simpler models or data analysis techniques to interpret the
predictions after the model is learned. However, they (a) may derive
contradictory explanations on the same predictions given different methods and
data samples, and (b) focus on using simpler models to provide higher
descriptive accuracy at the sacrifice of prediction accuracy. To address these
issues, we propose a hybrid interpretable model that combines a piecewise
linear component and a nonlinear component. The first component describes the
explicit feature contributions by piecewise linear approximation to increase
the expressiveness of the model. The other component uses a multi-layer
perceptron to capture feature interactions and implicit nonlinearity, and
increases prediction performance. Different from post-hoc approaches,
the interpretability is obtained once the model is learned in the form of
feature shapes. We also provide a variant to explore higher-order interactions
among features to demonstrate that the proposed model is flexible for
adaptation. Experiments demonstrate that the proposed model can achieve good
interpretability by describing feature shapes while maintaining
state-of-the-art accuracy. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
In reinforcement learning, the state of the real world is often represented
by feature vectors. However, not all of the features may be pertinent for
solving the current task. We propose Feature Selection Explore and Exploit
(FS-EE), an algorithm that automatically selects the necessary features while
learning a Factored Markov Decision Process, and prove that under mild
assumptions, its sample complexity scales with the in-degree of the dynamics of
just the necessary features, rather than the in-degree of all features. This
can result in a much better sample complexity when the in-degree of the
necessary features is smaller than the in-degree of all features. | [
"cs.LG",
"stat.ML"
] |
Modern deep learning algorithms have triggered various image segmentation
approaches. However, most of them deal with pixel-based segmentation.
Superpixels, in contrast, provide a certain degree of contextual information
while reducing computation cost. In our approach, we have performed
superpixel-level semantic
segmentation considering 3 various levels as neighbours for semantic contexts.
Furthermore, we have employed a number of ensemble approaches, such as max-voting
and weighted-average. We have also used the Dempster-Shafer theory of
uncertainty to analyze confusion among various classes. Our method has proved
to be superior to a number of different modern approaches on the same dataset. | [
"cs.CV"
] |
Fully supervised object detection has achieved great success in recent years.
However, abundant bounding boxes annotations are needed for training a detector
for novel classes. To reduce the human labeling effort, we propose a novel
webly supervised object detection (WebSOD) method for novel classes which only
requires the web images without further annotations. Our proposed method
combines bottom-up and top-down cues for novel class detection. Within our
approach, we introduce a bottom-up mechanism based on the well-trained fully
supervised object detector (i.e. Faster RCNN) as an object region estimator for
web images by recognizing the common objectness shared by base and novel
classes. With the estimated regions on the web images, we then utilize the
top-down attention cues as the guidance for region classification. Furthermore,
we propose a residual feature refinement (RFR) block to tackle the domain
mismatch between the web domain and the target domain. We demonstrate our proposed
method on PASCAL VOC dataset with three different novel/base splits. Without
any target-domain novel-class images and annotations, our proposed webly
supervised object detection model is able to achieve promising performance for
novel classes. Moreover, we also conduct transfer learning experiments on large
scale ILSVRC 2013 detection dataset and achieve state-of-the-art performance. | [
"cs.CV"
] |
Neural architecture search (NAS) searches architectures automatically for
given tasks, e.g., image classification and language modeling. Improving the
search efficiency and effectiveness has attracted increasing attention in
recent years. However, few efforts have been devoted to understanding the
generated architectures. In this paper, we first reveal that existing NAS
algorithms (e.g., DARTS, ENAS) tend to favor architectures with wide and
shallow cell structures. These favorable architectures consistently achieve
fast convergence and are consequently selected by NAS algorithms. Our empirical
and theoretical study further confirms that their fast convergence derives from
their smooth loss landscape and accurate gradient information. Nonetheless,
these architectures may not necessarily lead to better generalization
performance compared with other candidate architectures in the same search
space, and therefore further improvement is possible by revising existing NAS
algorithms. | [
"cs.LG",
"cs.CV",
"stat.ML"
] |
Multiresolution analysis and matrix factorization are foundational tools in
computer vision. In this work, we study the interface between these two
distinct topics and obtain techniques to uncover hierarchical block structure
in symmetric matrices -- an important aspect in the success of many vision
problems. Our new algorithm, the incremental multiresolution matrix
factorization, uncovers such structure one feature at a time, and hence scales
well to large matrices. We describe how this multiscale analysis goes much
farther than what a direct global factorization of the data can identify. We
evaluate the efficacy of the resulting factorizations for relative leveraging
within regression tasks using medical imaging data. We also use the
factorization on representations learned by popular deep networks, providing
evidence of their ability to infer semantic relationships even when they are
not explicitly trained to do so. We show that this algorithm can be used as an
exploratory tool to improve the network architecture, and within numerous other
settings in vision. | [
"cs.CV",
"cs.NA",
"stat.ML"
] |
Unsupervised visual representation learning remains a largely unsolved
problem in computer vision research. Among a big body of recently proposed
approaches for unsupervised learning of visual representations, a class of
self-supervised techniques achieves superior performance on many challenging
benchmarks. A large number of the pretext tasks for self-supervised learning
have been studied, but other important aspects, such as the choice of
convolutional neural network (CNN) architecture, have not received equal attention.
Therefore, we revisit numerous previously proposed self-supervised models,
conduct a thorough large scale study and, as a result, uncover multiple crucial
insights. We challenge a number of common practices in self-supervised visual
representation learning and observe that standard recipes for CNN design do not
always translate to self-supervised representation learning. As part of our
study, we drastically boost the performance of previously proposed techniques
and outperform previously published state-of-the-art results by a large margin. | [
"cs.CV"
] |
To recognize the unseen classes with only few samples, few-shot learning
(FSL) uses prior knowledge learned from the seen classes. A major challenge for
FSL is that the distribution of the unseen classes is different from that of
those seen, resulting in poor generalization even when a model is meta-trained
on the seen classes. This class-difference-caused distribution shift can be
considered as a special case of domain shift. In this paper, for the first
time, we propose a domain adaptation prototypical network with attention
(DAPNA) to explicitly tackle such a domain shift problem in a meta-learning
framework. Specifically, armed with a set transformer based attention module,
we construct each episode with two sub-episodes without class overlap on the
seen classes to simulate the domain shift between the seen and unseen classes.
To align the feature distributions of the two sub-episodes with limited
training samples, a feature transfer network is employed together with a margin
disparity discrepancy (MDD) loss. Importantly, theoretical analysis is provided
to give the learning bound of our DAPNA. Extensive experiments show that our
DAPNA outperforms the state-of-the-art FSL alternatives, often by significant
margins. | [
"cs.LG",
"stat.ML"
] |
In this paper, we show that the recent integration of statistical models with
deep recurrent neural networks provides a new way of formulating volatility
(the degree of variation of time series) models that have been widely used in
time series analysis and prediction in finance. The model comprises a pair of
complementary stochastic recurrent neural networks: the generative network
models the joint distribution of the stochastic volatility process; the
inference network approximates the conditional distribution of the latent
variables given the observables. Our focus here is on the formulation of
temporal dynamics of volatility over time under a stochastic recurrent neural
network framework. Experiments on real-world stock price datasets demonstrate
that the proposed model produces better volatility estimation and prediction,
outperforming mainstream methods, e.g., deterministic models such as GARCH
and its variants, and stochastic models, namely the MCMC-based model
\emph{stochvol} as well as the Gaussian process volatility model \emph{GPVol},
in terms of average negative log-likelihood. | [
"cs.LG",
"cs.CE",
"q-fin.ST",
"stat.ML"
] |
Recent progress in scientific visualization has expanded the scope of
visualization from being merely a way of presentation to an analysis and
discovery tool. A given visualization result is usually generated by applying a
series of transformations or filters to the underlying data. Nowadays, such
filters use deterministic algorithms to process the data. In this work, we aim
at extending this methodology towards data-driven filters, thus filters that
expose the abilities of pre-trained machine learning models to the
visualization system. The use of such data-driven filters is of particular
interest in fields like segmentation, classification, etc., where machine
learning models regularly outperform existing algorithmic approaches. To
showcase this idea, we couple Paraview, the well-known flow visualization tool,
with PyTorch, a deep learning framework. Paraview is extended by plugins that
allow users to load pre-trained models of their choice in the form of newly
developed filters. The filters transform the input data by feeding it into the
model and then provide the model's output as input to the remaining
visualization pipeline. A series of simplistic use cases for segmentation and
classification on image and fluid data is presented to showcase the technical
applicability of such data-driven transformations in Paraview for future
complex analysis tasks. | [
"cs.LG",
"cs.GR",
"cs.HC"
] |
Inspired by recent trends in vision and language learning, we explore
applications of attention mechanisms for visio-lingual fusion within an
application to story-based video understanding. Like other video-based QA
tasks, video story understanding requires agents to grasp complex temporal
dependencies. However, as it focuses on the narrative aspect of video it also
requires understanding of the interactions between different characters, as
well as their actions and their motivations. We propose a novel co-attentional
transformer model to better capture long-term dependencies seen in visual
stories such as dramas and measure its performance on the video question
answering task. We evaluate our approach on the recently introduced DramaQA
dataset which features character-centered video story understanding questions.
Our model outperforms the baseline model by 8 percentage points overall, at
least 4.95 and up to 12.8 percentage points on all difficulty levels and
manages to beat the winner of the DramaQA challenge. | [
"cs.CV",
"cs.AI",
"cs.CL"
] |
Network pruning is one of the most dominant methods for reducing the heavy
inference cost of deep neural networks. Existing methods often iteratively
prune networks to attain high compression ratio without incurring significant
loss in performance. However, we argue that conventional methods for retraining
pruned networks (i.e., using small, fixed learning rate) are inadequate as they
completely ignore the benefits from snapshots of iterative pruning. In this
work, we show that strong ensembles can be constructed from snapshots of
iterative pruning, which achieve competitive performance and vary in network
structure. Furthermore, we present a simple, general, and effective pipeline that
generates strong ensembles of networks during pruning with large learning rate
restarting, and utilizes knowledge distillation with those ensembles to improve
the predictive power of compact models. In standard image classification
benchmarks such as CIFAR and Tiny-Imagenet, we advance state-of-the-art pruning
ratio of structured pruning by integrating simple l1-norm filters pruning into
our pipeline. Specifically, we reduce 75-80% of total parameters and 65-70%
MACs of numerous variants of ResNet architectures while having comparable or
better performance than the original networks. Code associated with this
paper is made publicly available at https://github.com/lehduong/kesi. | [
"cs.CV",
"cs.LG"
] |
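The l1-norm filter pruning used in the pipeline above is simple to state: rank each convolutional filter by the l1-norm of its weights and drop the smallest. A generic sketch (illustrative only; the paper integrates this into its ensemble and distillation pipeline):

```python
import torch
import torch.nn as nn

def l1_filter_mask(conv: nn.Conv2d, keep_ratio: float) -> torch.Tensor:
    """Return a boolean mask over output filters, keeping the filters
    with the largest l1-norms of their weights."""
    norms = conv.weight.detach().abs().sum(dim=(1, 2, 3))   # one norm per filter
    n_keep = max(1, int(keep_ratio * norms.numel()))
    keep = torch.zeros_like(norms, dtype=torch.bool)
    keep[norms.topk(n_keep).indices] = True
    return keep

conv = nn.Conv2d(16, 32, kernel_size=3)
mask = l1_filter_mask(conv, keep_ratio=0.25)
# Zeroing (or physically removing) the pruned filters:
with torch.no_grad():
    conv.weight[~mask] = 0.0
print(int(mask.sum()), "of", conv.out_channels, "filters kept")
```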
Graph neural networks (GNNs) have been widely used in representation learning
on graphs and achieved state-of-the-art performance in tasks such as node
classification and link prediction. However, most existing GNNs are designed to
learn node representations on the fixed and homogeneous graphs. The limitations
especially become problematic when learning representations on a misspecified
graph or a heterogeneous graph that consists of various types of nodes and
edges. In this paper, we propose Graph Transformer Networks (GTNs) that are
capable of generating new graph structures, which involve identifying useful
connections between unconnected nodes on the original graph, while learning
effective node representation on the new graphs in an end-to-end fashion. Graph
Transformer layer, a core layer of GTNs, learns a soft selection of edge types
and composite relations for generating useful multi-hop connections, so-called
meta-paths. Our experiments show that GTNs learn new graph structures, based on
data and tasks without domain knowledge, and yield powerful node representation
via convolution on the new graphs. Without domain-specific graph preprocessing,
GTNs achieved the best performance in all three benchmark node classification
tasks against the state-of-the-art methods that require pre-defined meta-paths
from domain knowledge. | [
"cs.LG",
"cs.SI",
"stat.ML"
] |
Graphs are the most ubiquitous form of structured data representation used in
machine learning. They model, however, only pairwise relations between nodes
and are not designed for encoding the higher-order relations found in many
real-world datasets. To model such complex relations, hypergraphs have proven
to be a natural representation. Learning the node representations in a
hypergraph is more complex than in a graph as it involves information
propagation at two levels: within every hyperedge and across the hyperedges.
Most current approaches first transform a hypergraph structure to a graph for
use in existing geometric deep learning algorithms. This transformation leads
to information loss, and sub-optimal exploitation of the hypergraph's
expressive power. We present HyperSAGE, a novel hypergraph learning framework
that uses a two-level neural message passing strategy to accurately and
efficiently propagate information through hypergraphs. The flexible design of
HyperSAGE facilitates different ways of aggregating neighborhood information.
Unlike the majority of related work which is transductive, our approach,
inspired by the popular GraphSAGE method, is inductive. Thus, it can also be
used on previously unseen nodes, facilitating deployment in problems such as
evolving or partially observed hypergraphs. Through extensive experimentation,
we show that HyperSAGE outperforms state-of-the-art hypergraph learning methods
on representative benchmark datasets. We also demonstrate that the higher
expressive power of HyperSAGE makes it more stable in learning node
representations as compared to the alternatives. | [
"cs.LG",
"stat.ML"
] |
Instance segmentation can detect where the objects are in an image, but it is
hard to understand the relationships between them. We pay attention to a
typical relationship, relative saliency. A closely related task, salient
object detection, predicts a binary map highlighting a visually salient region
but struggles to distinguish multiple objects. Directly combining the two
tasks by post-processing also leads to poor performance. There is currently a
lack of research on relative saliency, limiting practical applications such as
content-aware image cropping, video summarization, and image labeling.
In this paper, we study the Salient Object Ranking (SOR) task, which manages
to assign a ranking order of each detected object according to its visual
saliency. We propose the first end-to-end framework of the SOR task and solve
it in a multi-task learning fashion. The framework handles instance
segmentation and salient object ranking simultaneously. In this framework, the
SOR branch is independent and flexible to cooperate with different detection
methods, making it easy to use as a plugin. We also introduce a
Position-Preserved Attention (PPA) module tailored for the SOR branch. It
consists of the position embedding stage and feature interaction stage.
Considering the importance of position in saliency comparison, we preserve
the absolute coordinates of objects in the ROI pooling operation and then fuse
positional information with semantic features in the first stage. In the
feature interaction stage, we apply the attention mechanism to obtain
proposals' contextualized representations to predict their relative ranking
orders. Extensive experiments have been conducted on the ASR dataset. Without
bells and whistles, our proposed method outperforms the former state-of-the-art
method significantly. The code will be made publicly available. | [
"cs.CV"
] |
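As a rough illustration of the first PPA stage, the sketch below simply concatenates normalized absolute box coordinates to pooled ROI features; the actual module preserves coordinates inside the ROI pooling operation itself, so this is a simplified stand-in with illustrative names:

```python
import torch

def fuse_position(roi_feats, boxes, img_h, img_w):
    # roi_feats: (N, D) pooled per-object features
    # boxes: (N, 4) absolute (x1, y1, x2, y2) coordinates in pixels
    scale = torch.tensor([img_w, img_h, img_w, img_h], dtype=boxes.dtype)
    return torch.cat([roi_feats, boxes / scale], dim=1)  # (N, D + 4)
```

The second PPA stage can then be any standard self-attention over the N position-aware proposal vectors to produce contextualized representations for ranking.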
3D reconstruction of large scenes is a challenging problem due to the
high-complexity nature of the solution space, in particular for generative
neural networks. In contrast to traditional learned generative models, which
encode the full generative process into a neural network and can struggle with
maintaining local details at the scene level, we introduce a new method that
directly leverages scene geometry from the training database. First, we learn
to synthesize an initial estimate for a 3D scene, constructed by retrieving a
top-k set of volumetric chunks from the scene database. These candidates are
then refined to a final scene generation with an attention-based refinement
that can effectively select the most consistent set of geometry from the
candidates and combine them together to create an output scene, facilitating
transfer of coherent structures and local detail from train scene geometry. We
demonstrate our neural scene reconstruction with a database for the tasks of 3D
super resolution and surface reconstruction from sparse point clouds, showing
that our approach enables generation of more coherent, accurate 3D scenes,
improving on average by over 8% in IoU over state-of-the-art scene
reconstruction. | [
"cs.CV"
] |
Deep neural networks often lack the safety and robustness guarantees needed
to be deployed in safety critical systems. Formal verification techniques can
be used to prove input-output safety properties of networks, but when
properties are difficult to specify, we rely on the solution to various
optimization problems. In this work, we present an algorithm called ZoPE that
solves optimization problems over the output of feedforward ReLU networks with
low-dimensional inputs. The algorithm eagerly splits the input space, bounding
the objective using zonotope propagation at each step, and improves
computational efficiency compared to existing mixed integer programming
approaches. We demonstrate how to formulate and solve three types of
optimization problems: (i) minimization of any convex function over the output
space, (ii) minimization of a convex function over the output of two networks
in series with an adversarial perturbation in the layer between them, and (iii)
maximization of the difference in output between two networks. Using ZoPE, we
observe a $25\times$ speedup on property 1 of the ACAS Xu neural network
verification benchmark and an $85\times$ speedup on a set of linear
optimization problems. We demonstrate the versatility of the optimizer in
analyzing networks by projecting onto the range of a generative adversarial
network and visualizing the differences between a compressed and uncompressed
network. | [
"cs.LG",
"cs.AI",
"math.OC"
] |
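The core primitive behind the bounding step is zonotope propagation: affine layers map a zonotope exactly, and per-coordinate interval bounds follow from the generator matrix. A minimal numpy sketch (ReLU layers additionally require an over-approximation step, omitted here):

```python
import numpy as np

def zonotope_affine(center, generators, W, b):
    # Exact image of {c + G a : a in [-1, 1]^k} under x -> W x + b
    return W @ center + b, W @ generators

def zonotope_bounds(center, generators):
    # Tight per-coordinate interval bounds of the zonotope
    radius = np.abs(generators).sum(axis=1)
    return center - radius, center + radius

# Toy usage: bound one linear layer over the input box [-1, 1]^2
c, G = np.zeros(2), np.eye(2)
W = np.array([[1.0, 2.0], [0.5, -1.0]])
b = np.array([0.1, -0.2])
lo, hi = zonotope_bounds(*zonotope_affine(c, G, W, b))
```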
Despite the success of Generative Adversarial Networks (GANs) in image
synthesis, there is still limited understanding of what generative models have
learned inside their deep generative representations and how photo-realistic
images can be composed from the layer-wise stochasticity introduced in recent
GANs. In this work, we show that a highly structured semantic hierarchy emerges
as variation factors when synthesizing scenes from the generative
representations of state-of-the-art GAN models, such as StyleGAN and BigGAN. By
probing the layer-wise representations with a broad set of semantics at
different abstraction levels, we are able to quantify the causality between the
activations and semantics occurring in the output image. Such a quantification
identifies the human-understandable variation factors learned by GANs to
compose scenes. The qualitative and quantitative results further suggest that
the generative representations learned by the GANs with layer-wise latent codes
are specialized to synthesize different hierarchical semantics: the early
layers tend to determine the spatial layout and configuration, the middle
layers control the categorical objects, and the later layers finally render the
scene attributes as well as color scheme. Identifying such a set of
manipulatable latent variation factors facilitates semantic scene manipulation. | [
"cs.CV",
"cs.GR",
"cs.LG"
] |
We propose the adjacency adaptive graph convolutional long-short term memory
network (AAGC-LSTM) for human pose estimation from sparse inertial
measurements, obtained from only 6 measurement units. The AAGC-LSTM combines
both spatial and temporal dependency in a single network operation. This is
made possible by equipping graph convolutions with adjacency adaptivity, which
also allows for learning unknown dependencies of the human body joints. To
further boost accuracy, we propose longitudinal loss weighting to consider
natural movement patterns, as well as body-aware contralateral data
augmentation. By combining these contributions, we are able to utilize the
inherent graph nature of the human body, and can thus outperform the state of
the art for human pose estimation from sparse inertial measurements. | [
"cs.CV",
"cs.LG"
] |
In most convolutional neural networks (CNNs), downsampling of hidden layers is
adopted to increase computational efficiency and the receptive field size.
Such an operation is commonly called pooling. Maximization and averaging over
sliding windows (max/average pooling), as well as plain downsampling in the
form of strided convolution, are popular pooling methods. Since pooling is a
lossy procedure, a motivation of our work is to design a new pooling approach
that is less lossy in the dimensionality reduction. Inspired by the Fourier
spectral pooling (FSP) proposed by Rippel et al. [1], we present a Hartley transform
based spectral pooling method in CNNs. Compared with FSP, the proposed spectral
pooling avoids the use of complex arithmetic for frequency representation and
reduces the computation. Spectral pooling preserves more structural features
for the network's discriminability than max and average pooling do. We
empirically show that Hartley spectral pooling aids the convergence of CNNs
trained on the MNIST and CIFAR-10 datasets. | [
"cs.CV",
"cs.LG",
"eess.SP",
"stat.ML"
] |
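A minimal single-channel sketch of spectral pooling in the Hartley domain: the discrete Hartley transform (DHT) of a real signal is real, so no complex arithmetic is needed, and the separable 2D DHT is its own inverse up to scaling. The crop-based low-pass shown here follows the spirit of FSP; the paper's exact formulation may differ:

```python
import numpy as np

def dht(x, axis):
    # 1D discrete Hartley transform via the FFT: H = Re(F) - Im(F)
    X = np.fft.fft(x, axis=axis)
    return X.real - X.imag

def dht2(x):
    # Separable 2D DHT; real-valued for real input
    return dht(dht(x, axis=0), axis=1)

def hartley_spectral_pool(x, out_h, out_w):
    H = np.fft.fftshift(dht2(x))              # center the low frequencies
    top = (x.shape[0] - out_h) // 2
    left = (x.shape[1] - out_w) // 2
    H = np.fft.ifftshift(H[top:top + out_h, left:left + out_w])
    return dht2(H) / (out_h * out_w)          # DHT is self-inverse up to scale
```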
Learning effective representations in image-based environments is crucial for
sample efficient Reinforcement Learning (RL). Unfortunately, in RL,
representation learning is confounded with the exploratory experience of the
agent -- learning a useful representation requires diverse data, while
effective exploration is only possible with coherent representations.
Furthermore, we would like to learn representations that not only generalize
across tasks but also accelerate downstream exploration for efficient
task-specific training. To address these challenges we propose Proto-RL, a
self-supervised framework that ties representation learning with exploration
through prototypical representations. These prototypes simultaneously serve as
a summarization of the exploratory experience of an agent as well as a basis
for representing observations. We pre-train these task-agnostic representations
and prototypes on environments without downstream task information. This
enables state-of-the-art downstream policy learning on a set of difficult
continuous control tasks. | [
"cs.LG",
"cs.AI"
] |
Control policies trained using Deep Reinforcement Learning have recently been
shown to be vulnerable to adversarial attacks introducing even very
small perturbations to the policy input. The attacks proposed so far have been
designed using heuristics, and build on existing adversarial example crafting
techniques used to dupe classifiers in supervised learning. In contrast, this
paper investigates the problem of devising optimal attacks, depending on a
well-defined attacker's objective, e.g., to minimize the main agent average
reward. When the policy and the system dynamics, as well as rewards, are known
to the attacker, a scenario referred to as a white-box attack, designing
optimal attacks amounts to solving a Markov Decision Process. For what we call
black-box attacks, where neither the policy nor the system is known, optimal
attacks can be trained using Reinforcement Learning techniques. Through
numerical experiments, we demonstrate the efficiency of our attacks compared to
existing attacks (usually based on gradient methods). We further quantify the
potential impact of attacks and establish their connection to the smoothness of
the policy under attack. Smooth policies are naturally less prone to attacks
(this explains why Lipschitz policies, with respect to the state, are more
resilient). Finally, we show that from the main agent perspective, the system
uncertainties and the attacker can be modeled as a Partially Observable Markov
Decision Process. We actually demonstrate that using Reinforcement Learning
techniques tailored to POMDP (e.g. using Recurrent Neural Networks) leads to
more resilient policies. | [
"cs.LG",
"cs.CR",
"stat.ML"
] |
User data confidentiality protection is a rising challenge in current deep
learning research. Without access to data, conventional data-driven model
compression faces a higher risk of performance degradation. Recently, some
works have proposed to generate images from a specific pretrained model to
serve as training data. However, the inversion process only utilizes biased
feature statistics stored in one model and maps from a low-dimensional space
to a high-dimensional one. As a consequence, it inevitably encounters
difficulties with generalizability and inexact inversion, which lead to
unsatisfactory performance. To address these problems, we propose MixMix based on two simple
yet effective techniques: (1) Feature Mixing: utilizes various models to
construct a universal feature space for generalized inversion; (2) Data Mixing:
mixes the synthesized images and labels to generate exact label information. We
prove the effectiveness of MixMix from both theoretical and empirical
perspectives. Extensive experiments show that MixMix outperforms existing
methods on the mainstream compression tasks, including quantization, knowledge
distillation, and pruning. Specifically, MixMix achieves up to 4% and 20%
accuracy uplift on quantization and pruning, respectively, compared to existing
data-free compression work. | [
"cs.LG",
"cs.CV"
] |
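The Data Mixing step can be pictured as a mixup-style convex combination of synthesized images and their one-hot labels, which yields exact soft-label supervision. A hedged PyTorch sketch (the paper's precise mixing policy may differ):

```python
import torch
import torch.nn.functional as F

def data_mixing(images, labels, num_classes, alpha=1.0):
    # images: (B, C, H, W) synthesized images; labels: (B,) int64
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(images.size(0))
    mixed_x = lam * images + (1 - lam) * images[perm]
    y = F.one_hot(labels, num_classes).float()
    mixed_y = lam * y + (1 - lam) * y[perm]   # exact soft labels for mixed_x
    return mixed_x, mixed_y
```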
Recently, fully convolutional neural networks (FCNs) have shown significant
performance in image parsing, including scene parsing and object parsing.
Different from generic object parsing tasks, hand parsing is more challenging
due to small size, complex structure, heavy self-occlusion and ambiguous
texture problems. In this paper, we propose a novel parsing framework,
Multi-Scale Dual-Branch Fully Convolutional Network (MSDB-FCN), for hand
parsing tasks. Our network employs a Dual-Branch architecture to extract
features of the hand area, focusing attention on the hand itself. These features are
used to generate multi-scale features with pyramid pooling strategy. In order
to better encode multi-scale features, we design a Deconvolution and Bilinear
Interpolation Block (DB-Block) for upsampling and merging the features of
different scales. To address data imbalance, which is a common problem in many
computer vision tasks as well as hand parsing tasks, we propose a
generalization of Focal Loss, namely Multi-Class Balanced Focal Loss, to tackle
data imbalance in multi-class classification. Extensive experiments on
RHD-PARSING dataset demonstrate that our MSDB-FCN has achieved the
state-of-the-art performance for hand parsing. | [
"cs.CV"
] |
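A plausible form of the Multi-Class Balanced Focal Loss is sketched below: the focal term (1 - p_t)^gamma down-weights easy examples while per-class weights re-balance rare classes. This is our reading of the generalization, not the paper's verbatim definition:

```python
import torch
import torch.nn.functional as F

def balanced_focal_loss(logits, targets, class_weights, gamma=2.0):
    # logits: (N, C); targets: (N,) int64; class_weights: (C,) tensor
    log_p = F.log_softmax(logits, dim=1)
    log_p_t = log_p.gather(1, targets.unsqueeze(1)).squeeze(1)  # (N,)
    p_t = log_p_t.exp()
    return (-class_weights[targets] * (1.0 - p_t) ** gamma * log_p_t).mean()
```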
Vehicle detection in remote sensing images has attracted increasing interest
in recent years. However, detection ability is limited by the lack of
well-annotated samples, especially in densely crowded scenes. Furthermore,
since a variety of remotely sensed data sources are available, efficiently
exploiting useful information from multi-source data for better vehicle
detection is challenging. To solve the above issues, a multi-source active
fine-tuning vehicle detection (Ms-AFt) framework is proposed, which integrates
transfer learning, segmentation, and active classification into a unified
framework for auto-labeling and detection. The proposed Ms-AFt employs a
fine-tuning network to first generate a vehicle training set from an
unlabeled dataset. To cope with the diversity of vehicle categories, a
multi-source based segmentation branch is then designed to construct additional
candidate object sets. The separation of high-quality vehicles is realized by a
specially designed attentive classification network. Finally, all three branches are
combined to achieve vehicle detection. Extensive experimental results conducted
on two open ISPRS benchmark datasets, namely the Vaihingen village and Potsdam
city datasets, demonstrate the superiority and effectiveness of the proposed
Ms-AFt for vehicle detection. In addition, the generalization ability of Ms-AFt
in dense remote sensing scenes is further verified on stereo aerial imagery of
a large camping site. | [
"cs.CV"
] |
In this paper, we propose an end-to-end solution for image matting, i.e.,
high-precision extraction of foreground objects from natural images. Image
matting and background detection can be achieved easily through chroma keying
in a studio setting when the background is either pure green or blue.
Nonetheless, image matting in natural scenes with complex and uneven depth
backgrounds remains a tedious task that requires human intervention. To achieve
complete automatic foreground extraction in natural scenes, we propose a method
that assimilates semantic segmentation and deep image matting processes into a
single network to generate detailed semantic mattes for image composition task.
The contribution of our proposed method is two-fold: first, it can be
interpreted as a fully automated semantic image matting method, and second, as
a refinement of existing semantic segmentation models. We propose a novel model
architecture as a combination of segmentation and matting that unifies the
function of upsampling and downsampling operators with the notion of attention.
As shown in our work, attention guided downsampling and upsampling can extract
high-quality boundary details, unlike other normal downsampling and upsampling
techniques. For achieving the same, we utilized an attention guided
encoder-decoder framework which does unsupervised learning for generating an
attention map adaptively from the data to serve and direct the upsampling and
downsampling operators. We also construct a fashion e-commerce focused dataset
with high-quality alpha mattes to facilitate the training and evaluation for
image matting. | [
"cs.CV",
"cs.LG",
"eess.IV",
"I.2.10; I.4.8; I.5.1"
] |
In order to design a more potent and effective chemical entity, it is
essential to identify molecular structures with the desired chemical
properties. Recent advances in generative models using neural networks and
machine learning are being widely used by many emerging startups and
researchers in this domain to design virtual libraries of drug-like compounds.
Although these models can help a scientist to produce novel molecular
structures rapidly, the challenge still exists in the intelligent exploration
of the latent spaces of generative models, thereby reducing the randomness in
the generative procedure. In this work, we present a manifold traversal with
heuristic search to explore the latent chemical space. Different heuristics and
scores such as the Tanimoto coefficient, synthetic accessibility, binding
activity, and QED drug-likeness can be incorporated to increase the validity
and proximity for desired molecular properties of the generated molecules. For
evaluating the manifold traversal exploration, we produce the latent chemical
space using various generative models such as grammar variational autoencoders
(with and without attention) as they deal with the randomized generation and
validity of compounds. With this novel traversal method, we are able to find
more unseen compounds and more specific regions to mine in the latent space.
Finally, these components are brought together in a simple platform allowing
users to perform search, visualization and selection of novel generated
compounds. | [
"cs.LG",
"q-bio.BM"
] |
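A bare-bones version of such a traversal is greedy hill climbing in the latent space, scoring decoded candidates with any of the heuristics above. In the sketch, `decode` and `score` are placeholders for a trained generative model's decoder and a property scorer such as QED; both are assumptions, not artifacts of the paper:

```python
import numpy as np

def latent_hill_climb(z0, decode, score, steps=200, sigma=0.1, seed=0):
    # Perturb the current latent code; keep the move if the decoded
    # molecule scores higher under the chosen heuristic.
    rng = np.random.default_rng(seed)
    z, best = z0, score(decode(z0))
    for _ in range(steps):
        cand = z + sigma * rng.normal(size=z.shape)
        s = score(decode(cand))
        if s > best:
            z, best = cand, s
    return z, best
```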
As a scene graph compactly summarizes the high-level content of an image in a
structured and symbolic manner, the similarity between scene graphs of two
images reflects the relevance of their contents. Based on this idea, we propose
a novel approach for image-to-image retrieval using scene graph similarity
measured by graph neural networks. In our approach, graph neural networks are
trained to predict the proxy image relevance measure, computed from
human-annotated captions using a pre-trained sentence similarity model. We
collect and publish the dataset for image relevance measured by human
annotators to evaluate retrieval algorithms. The collected dataset shows that
our method agrees better with the human perception of image similarity than
other competitive baselines do. | [
"cs.CV",
"cs.IR",
"cs.LG"
] |
This paper considers online object-level mapping using partial point-cloud
observations obtained online in an unknown environment. We develop an approach
for fully Convolutional Object Retrieval and Symmetry-AIded Registration
(CORSAIR). Our model extends the Fully Convolutional Geometric Features model
to learn a global object-shape embedding in addition to local point-wise
features from the point-cloud observations. The global feature is used to
retrieve a similar object from a category database, and the local features are
used for robust pose registration between the observed and the retrieved
object. Our formulation also leverages symmetries, present in the object
shapes, to obtain promising local-feature pairs from different symmetry classes
for matching. We present results from synthetic and real-world datasets with
different object categories to verify the robustness of our method. | [
"cs.CV",
"cs.RO"
] |
In this paper, we present a new network named Attention Aware Network (AASeg)
for real-time semantic image segmentation. Our network incorporates spatial and
channel information using Spatial Attention (SA) and Channel Attention (CA)
modules, respectively. It also uses dense local multi-scale context information
via a Multi-Scale Context (MSC) module. The feature maps are concatenated
individually to produce the final segmentation map. We demonstrate the
effectiveness of our method using a comprehensive analysis, quantitative
experimental results and ablation study using Cityscapes, ADE20K and Camvid
datasets. Our network performs better than most previous architectures with a
74.4\% Mean IOU on Cityscapes test dataset while running at 202.7 FPS. | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
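For flavor, a squeeze-and-excitation style channel attention block is sketched below; it is a common way to realize a CA module, though the paper's exact design may differ:

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                  # x: (N, C, H, W)
        w = self.fc(x.mean(dim=(2, 3)))    # global average pool -> (N, C)
        return x * w[:, :, None, None]     # reweight channels
```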
This study presents a multimodal machine learning model to predict ICD-10
diagnostic codes. We developed separate machine learning models that can handle
data from different modalities, including unstructured text, semi-structured
text and structured tabular data. We further employed an ensemble method to
integrate all modality-specific models to generate ICD-10 codes. Key evidence
was also extracted to make our prediction more convincing and explainable. We
used the Medical Information Mart for Intensive Care III (MIMIC-III) dataset
to validate our approach. For ICD code prediction, our best-performing model
(micro-F1 = 0.7633, micro-AUC = 0.9541) significantly outperforms other
baseline models including TF-IDF (micro-F1 = 0.6721, micro-AUC = 0.7879) and
Text-CNN model (micro-F1 = 0.6569, micro-AUC = 0.9235). For interpretability,
our approach achieves a Jaccard Similarity Coefficient (JSC) of 0.1806 on text
data and 0.3105 on tabular data, where well-trained physicians achieve 0.2780
and 0.5002 respectively. | [
"cs.LG",
"stat.ML"
] |
To achieve reliable mining results for massive vessel trajectories, one of
the most important challenges is how to efficiently compute the similarities
between different vessel trajectories. The computation of vessel trajectory
similarity has recently attracted increasing attention in the maritime data
mining research community. However, traditional shape- and warping-based
methods often suffer from several drawbacks such as high computational cost and
sensitivity to unwanted artifacts and non-uniform sampling rates. To
eliminate these drawbacks, we propose an unsupervised learning method which
automatically extracts low-dimensional features through a convolutional
auto-encoder (CAE). In particular, we first generate the informative trajectory
images by remapping the raw vessel trajectories into two-dimensional matrices
while maintaining the spatio-temporal properties. Based on the massive vessel
trajectories collected, the CAE can learn the low-dimensional representations
of informative trajectory images in an unsupervised manner. The trajectory
similarity is finally equivalent to efficiently computing the similarities
between the learned low-dimensional features, which strongly correlate with the
raw vessel trajectories. Comprehensive experiments on realistic data sets have
demonstrated that the proposed method largely outperforms traditional
trajectory similarity computation methods in terms of efficiency and
effectiveness. The high-quality trajectory clustering performance could also be
guaranteed according to the CAE-based trajectory similarity computation
results. | [
"cs.LG",
"cs.AI",
"cs.CV"
] |
Factor graphs have recently gained increasing attention as a unified
framework for representing and constructing algorithms for signal processing,
estimation, and control. One capability that does not seem to be well explored
within the factor graph toolkit is the ability to handle deterministic
nonlinear transformations, such as those occurring in nonlinear filtering and
smoothing problems, using tabulated message passing rules. In this
contribution, we provide general forward (filtering) and backward (smoothing)
approximate Gaussian message passing rules for deterministic nonlinear
transformation nodes in arbitrary factor graphs fulfilling a Markov property,
based on numerical quadrature procedures for the forward pass and a
Rauch-Tung-Striebel-type approximation of the backward pass. These message
passing rules can be employed for deriving many algorithms for solving
nonlinear problems using factor graphs, as is illustrated by the proposition of
a nonlinear modified Bryson-Frazier (MBF) smoother based on the presented
message passing rules. | [
"stat.ML",
"cs.LG",
"cs.SY",
"eess.SP"
] |
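The forward (filtering) rule amounts to pushing a Gaussian through the deterministic nonlinearity with numerical quadrature. A scalar Gauss-Hermite sketch (the backward pass would add the Rauch-Tung-Striebel-type correction, not shown):

```python
import numpy as np

def gh_propagate(f, mean, var, order=10):
    # Approximate mean/variance of y = f(x) for scalar x ~ N(mean, var)
    nodes, weights = np.polynomial.hermite_e.hermegauss(order)  # probabilists'
    w = weights / weights.sum()            # normalize to probability weights
    ys = f(mean + np.sqrt(var) * nodes)
    m = np.sum(w * ys)
    return m, np.sum(w * (ys - m) ** 2)

m, v = gh_propagate(np.sin, mean=0.5, var=0.04)  # e.g. y = sin(x)
```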
Biomedical research papers use significantly different language and jargon
when compared to typical English text, which reduces the utility of pre-trained
NLP models in this domain. Meanwhile Medline, a database of biomedical
abstracts, introduces nearly a million new documents per year. Applications
that could benefit from understanding this wealth of publicly available
information, such as scientific writing assistants, chat-bots, or descriptive
hypothesis generation systems, require new domain-centered approaches. A
conditional language model, one that learns the probability of words given some
a priori criteria, is a fundamental building block in many such applications.
We propose a transformer-based conditional language model with a shallow
encoder "condition" stack, and a deep "language model" stack of multi-headed
attention blocks. The condition stack encodes metadata used to alter the output
probability distribution of the language model stack. We sample this
distribution in order to generate biomedical abstracts given only a proposed
title, an intended publication year, and a set of keywords. Using typical
natural language generation metrics, we demonstrate that this proposed approach
is more capable of producing non-trivial relevant entities within the abstract
body than the 1.5B parameter GPT-2 language model. | [
"cs.LG",
"stat.ML"
] |
Scene text image contains two levels of contents: visual texture and semantic
information. Although the previous scene text recognition methods have made
great progress over the past few years, research on mining semantic
information to assist text recognition has attracted less attention; only
RNN-like structures have been explored to implicitly model semantic
information. However, we
observe that RNN based methods have some obvious shortcomings, such as
time-dependent decoding manner and one-way serial transmission of semantic
context, which greatly limit the help of semantic information and the
computation efficiency. To mitigate these limitations, we propose a novel
end-to-end trainable framework named semantic reasoning network (SRN) for
accurate scene text recognition, where a global semantic reasoning module
(GSRM) is introduced to capture global semantic context through multi-way
parallel transmission. The state-of-the-art results on 7 public benchmarks,
including regular text, irregular text and non-Latin long text, verify the
effectiveness and robustness of the proposed method. In addition, the speed of
SRN has significant advantages over the RNN based methods, demonstrating its
value in practical use. | [
"cs.CV"
] |
Humans tend to learn complex abstract concepts faster if examples are
presented in a structured manner. For instance, when learning how to play a
board game, usually one of the first concepts learned is how the game ends,
i.e. the actions that lead to a terminal state (win, lose or draw). The
advantage of learning end-games first is that once the actions which lead to a
terminal state are understood, it becomes possible to incrementally learn the
consequences of actions that are further away from a terminal state - we call
this an end-game-first curriculum. Currently the state-of-the-art machine
learning player for general board games, AlphaZero by Google DeepMind, does not
employ a structured training curriculum; instead, it learns from the entire
game at all times. By employing an end-game-first training curriculum to train an
AlphaZero inspired player, we empirically show that the rate of learning of an
artificial player can be improved during the early stages of training when
compared to a player not using a training curriculum. | [
"cs.LG",
"stat.ML"
] |
Molecular activity prediction is critical in drug design. Machine learning
techniques such as kernel methods and random forests have been successful for
this task. These models require fixed-size feature vectors as input while the
molecules are variable in size and structure. As a result, fixed-size
fingerprint representation is poor in handling substructures for large
molecules. In addition, molecular activity tests, so-called BioAssays, are
relatively small in the number of tested molecules due to their complexity. Here
we approach the problem through deep neural networks as they are flexible in
modeling structured data such as grids, sequences and graphs. We train on multiple
BioAssays using a multi-task learning framework, which combines information
from multiple sources to improve the performance of prediction, especially on
small datasets. We propose Graph Memory Network (GraphMem), a memory-augmented
neural network to model the graph structure in molecules. GraphMem consists of
a recurrent controller coupled with an external memory whose cells dynamically
interact and change through a multi-hop reasoning process. Applied to the
molecules, the dynamic interactions enable an iterative refinement of the
representation of molecular graphs with multiple bond types. GraphMem is
capable of jointly training on multiple datasets by using a task-specific query
fed to the controller as an input. We demonstrate the effectiveness of the
proposed model for separately and jointly training on more than 100K
measurements, spanning across 9 BioAssay activity tests. | [
"cs.LG"
] |
Dynamic graph representation learning strategies are based on different
neural architectures to capture the graph evolution over time. However, the
underlying neural architectures require a large number of parameters to train
and suffer from high online inference latency; that is, several model
parameters have to be updated when new data arrive online. In this study, we propose
Distill2Vec, a knowledge distillation strategy to train a compact model with a
low number of trainable parameters, so as to reduce the latency of online
inference and maintain the model accuracy high. We design a distillation loss
function based on Kullback-Leibler divergence to transfer the acquired
knowledge from a teacher model trained on offline data, to a small-size student
model for online data. Our experiments with publicly available datasets show
the superiority of our proposed model over several state-of-the-art approaches
with relative gains up to 5% in the link prediction task. In addition, we
demonstrate the effectiveness of our knowledge distillation strategy, in terms
of number of required parameters, where Distill2Vec achieves a compression
ratio up to 7:100 when compared with baseline approaches. For reproduction
purposes, our implementation is publicly available at
https://stefanosantaris.github.io/Distill2Vec. | [
"cs.LG",
"cs.AI"
] |
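The distillation loss described above is the standard temperature-softened KL divergence between teacher and student outputs; a minimal PyTorch sketch (the temperature value and scaling are assumptions):

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # KL(teacher || student) on temperature-softened distributions
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T * T)
```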
The three-dimensional shape and conformation of small-molecule ligands are
critical for biomolecular recognition, yet encoding 3D geometry has not
improved ligand-based virtual screening approaches. We describe an end-to-end
deep learning approach that operates directly on small-molecule conformational
ensembles and identifies key conformational poses of small molecules. Our
networks leverage two levels of representation learning: 1) individual
conformers are first encoded as spatial graphs using a graph neural network,
and 2) sampled conformational ensembles are represented as sets using an
attention mechanism to aggregate over individual instances. We demonstrate the
feasibility of this approach on a simple task based on bidentate coordination
of biaryl ligands, and show how attention-based pooling can elucidate key
conformational poses in tasks based on molecular geometry. This work
illustrates how set-based learning approaches may be further developed for
small molecule-based virtual screening. | [
"cs.LG",
"physics.chem-ph"
] |
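The second representation level, attention over a conformer set, can be as simple as a learned scoring head followed by a weighted sum; the sketch below is one such instance, with illustrative names:

```python
import torch
import torch.nn as nn

class AttentionSetPool(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, h):                        # h: (num_conformers, dim)
        w = torch.softmax(self.score(h), dim=0)  # attention over instances
        return (w * h).sum(dim=0), w             # pooled embedding + weights
```

Inspecting the returned weights is what allows key conformational poses to be read off.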
Gaze redirection is the task of changing the gaze to a desired direction for
a given monocular eye patch image. Many applications such as videoconferencing,
films, games, and generation of training data for gaze estimation require
redirecting the gaze, without distorting the appearance of the area surrounding
the eye and while producing photo-realistic images. Existing methods lack the
ability to generate perceptually plausible images. In this work, we present a
novel method to alleviate this problem by leveraging generative adversarial
training to synthesize an eye image conditioned on a target gaze direction. Our
method ensures perceptual similarity and consistency of synthesized images to
the real images. Furthermore, a gaze estimation loss is used to control the
gaze direction accurately. To attain high-quality images, we incorporate
perceptual and cycle consistency losses into our architecture. In extensive
evaluations we show that the proposed method outperforms state-of-the-art
approaches in terms of both image quality and redirection precision. Finally,
we show that generated images can bring significant improvement for the gaze
estimation task if used to augment real training data. | [
"cs.CV"
] |
Image virtual try-on task has abundant applications and has become a hot
research topic recently. Existing 2D image-based virtual try-on methods aim to
transfer a target clothing image onto a reference person, which has two main
disadvantages: they cannot control the size and length precisely, and they are
unable to accurately estimate the user's figure when the user wears thick
clothes, resulting in an inaccurate dressing effect. In this paper, we put
forward an akin task that aims to dress clothing for underwear models, which is
also an urgent need in e-commerce scenarios. To solve the above drawbacks, we
propose a Shape Controllable Virtual Try-On Network (SC-VTON), where a graph
attention network integrates the information of model and clothing to generate
the warped clothing image. In addition, the control points are incorporated
into SC-VTON for the desired clothing shape. Furthermore, by adding a Splitting
Network and a Synthesis Network, we can use clothing/model pair data to help
optimize the deformation module and generalize the task to the typical virtual
try-on task. Extensive experiments show that the proposed method can achieve
accurate shape control. Meanwhile, compared with other methods, our method can
generate high-resolution results with detailed textures. | [
"cs.CV",
"I.4.9"
] |
Dealing with land cover classification of new image sources has turned out to
be a complex problem requiring large amounts of memory and processing time. In
order to cope with these problems, statistical learning has greatly helped in
recent years to develop statistical retrieval and classification
models that can ingest large amounts of Earth observation data. Kernel methods
constitute a family of powerful machine learning algorithms, which have found
wide use in remote sensing and geosciences. However, kernel methods are still
not widely adopted because of the high computational cost when dealing with
large scale problems, such as the inversion of radiative transfer models or the
classification of high spatial-spectral-temporal resolution data. This paper
introduces an efficient kernel method for fast statistical retrieval of
bio-geo-physical parameters and image classification problems. The method
approximates a kernel matrix with a set of projections onto random bases
sampled from the Fourier domain. The method is simple, computationally very
efficient in both memory and processing costs, and easily parallelizable. We
show that kernel regression and classification is now possible for datasets
with millions of examples and high dimensionality. Examples on atmospheric
parameter retrieval from hyperspectral infrared sounders like IASI/Metop; large
scale emulation and inversion of the familiar PROSAIL radiative transfer model
on Sentinel-2 data; and the identification of clouds over landmarks in time
series of MSG/Seviri images show the efficiency and effectiveness of the
proposed technique. | [
"cs.LG"
] |
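The projections on random Fourier bases referred to above follow the classic random-feature approximation of a shift-invariant kernel, with z(x)^T z(y) ≈ k(x, y). A numpy sketch for the RBF kernel k(x, y) = exp(-gamma ||x - y||^2):

```python
import numpy as np

def random_fourier_features(X, n_features=512, gamma=1.0, seed=0):
    # X: (n_samples, d). Returns z(X) such that z(x)^T z(y) approximates
    # the RBF kernel exp(-gamma ||x - y||^2).
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(X.shape[1], n_features))
    b = rng.uniform(0, 2 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)
```

Linear regression or classification on z(X) then scales linearly in the number of examples, which is what enables the million-sample experiments mentioned above.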
Generating images from conditional descriptions has gained increasing interest
in recent years. However, existing conditional inputs suffer from either
unstructured forms (captions) or limited information and expensive labeling
(scene graphs). For a targeted scene, the core items, objects, are usually
definite while their interactions are flexible and hard to clearly define.
Thus, we introduce a more rational setting, generating a realistic image from
the objects and captions. Under this setting, objects explicitly define the
critical roles in the targeted images and captions implicitly describe their
rich attributes and connections. Correspondingly, a MOC-GAN is proposed to mix
the inputs of two modalities to generate realistic images. It firstly infers
the implicit relations between object pairs from the captions to build a
hidden-state scene graph. So a multi-layer representation containing objects,
relations and captions is constructed, where the scene graph provides the
structures of the scene and the caption provides the image-level guidance. Then
a cascaded attentive generative network is designed to generate each phrase
patch in a coarse-to-fine manner by attending to the most relevant words in the caption. In
addition, a phrase-wise DAMSM is proposed to better supervise the fine-grained
phrase-patch consistency. On COCO dataset, our method outperforms the
state-of-the-art methods on both Inception Score and FID while maintaining high
visual quality. Extensive experiments demonstrate the unique features of our
proposed method. | [
"cs.CV"
] |
Graph convolutional networks (GCNs) achieve promising performance for
skeleton-based action recognition. However, in most GCN-based methods, the
spatial-temporal graph convolution is strictly restricted by the graph topology
while only captures the short-term temporal context, thus lacking the
flexibility of feature extraction. In this work, we present a novel
architecture, named Graph Convolutional skeleton Transformer (GCsT), which
addresses limitations in GCNs by introducing Transformer. Our GCsT employs all
the benefits of Transformers (i.e., dynamic attention and global context) while
keeping the advantages of GCNs (i.e., hierarchy and local topology structure). In
GCsT, the spatial-temporal GCN forces the capture of local dependencies while
Transformer dynamically extracts global spatial-temporal relationships.
Furthermore, the proposed GCsT shows stronger expressive capability by adding
additional information present in skeleton sequences. Incorporating the
Transformer allows that information to be introduced into the model almost
effortlessly. We validate the proposed GCsT by conducting extensive
experiments, which achieves the state-of-the-art performance on NTU RGB+D, NTU
RGB+D 120 and Northwestern-UCLA datasets. | [
"cs.CV",
"cs.AI"
] |
Plant root research can provide a way to attain stress-tolerant crops that
produce greater yield in a diverse array of conditions. Phenotyping roots in
soil is often challenging due to the roots being difficult to access and the
use of time consuming manual methods. Rhizotrons allow visual inspection of
root growth through transparent surfaces. Agronomists currently manually label
photographs of roots obtained from rhizotrons using a line-intersect method to
obtain root length density and rooting depth measurements which are essential
for their experiments. We investigate the effectiveness of an automated image
segmentation method based on the U-Net Convolutional Neural Network (CNN)
architecture to enable such measurements. We design a data-set of 50 annotated
Chicory (Cichorium intybus L.) root images which we use to train, validate and
test the system and compare against a baseline built using the Frangi
vesselness filter. We obtain metrics using manual annotations and
line-intersect counts. Our results on the held-out data show our proposed
automated segmentation system to be a viable solution for detecting and
quantifying roots. We evaluate our system using 867 images for which we have
obtained line-intersect counts, attaining a Spearman rank correlation of 0.9748
and an $r^2$ of 0.9217. We also achieve an $F_1$ of 0.7 when comparing the
automated segmentation to the manual annotations, with our automated
segmentation system producing segmentations with higher quality than the manual
annotations for large portions of the image. | [
"cs.CV"
] |
Online hashing has attracted extensive research attention when facing
streaming data. Most online hashing methods, learning binary codes based on
pairwise similarities of training instances, fail to capture the semantic
relationship, and suffer from poor generalization in large-scale applications
due to large variations. In this paper, we propose to model the similarity
distributions between the input data and the hashing codes, upon which a novel
supervised online hashing method, dubbed as Similarity Distribution based
Online Hashing (SDOH), is proposed, to keep the intrinsic semantic relationship
in the produced Hamming space. Specifically, we first transform the discrete
similarity matrix into a probability matrix via a Gaussian-based normalization
to address the extremely imbalanced distribution issue. Then, we introduce
a scaling Student t-distribution to solve the challenging initialization
problem, and efficiently bridge the gap between the known and unknown
distributions. Lastly, we align the two distributions via minimizing the
Kullback-Leibler divergence (KL-divergence) with stochastic gradient descent
(SGD), by which an intuitive similarity constraint is imposed to update hashing
model on the new streaming data with a powerful generalization ability to past
data. Extensive experiments on three widely-used benchmarks validate the
superiority of the proposed SDOH over the state-of-the-art methods in the
online retrieval task. | [
"cs.CV",
"cs.AI",
"cs.MM"
] |
Video super-resolution, which attempts to reconstruct high-resolution video
frames from their corresponding low-resolution versions, has received
increasing attention in recent years. Most existing approaches opt to
use deformable convolution to temporally align neighboring frames and apply
traditional spatial attention mechanism (convolution based) to enhance
reconstructed features. However, such spatial-only strategies cannot fully
utilize temporal dependency among video frames. In this paper, we propose a
novel deep learning based VSR algorithm, named Deformable Kernel Spatial
Attention Network (DKSAN). Thanks to newly designed Deformable Kernel
Convolution Alignment (DKC_Align) and Deformable Kernel Spatial Attention
(DKSA) modules, DKSAN can better exploit both spatial and temporal redundancies
to facilitate the information propagation across different layers. We have
tested DKSAN on AIM2020 Video Extreme Super-Resolution Challenge to
super-resolve videos with a scale factor as large as 16. Experimental results
demonstrate that our proposed DKSAN can achieve both better subjective and
objective performance compared with the existing state-of-the-art EDVR on
Vid3oC and IntVID datasets. | [
"cs.CV"
] |
Identification of 3D cephalometric landmarks that serve as proxy to the shape
of human skull is the fundamental step in cephalometric analysis. Since manual
landmarking from 3D computed tomography (CT) images is a cumbersome task even
for trained experts, an automatic 3D landmark detection system is in great
need. Recently, automatic landmarking of 2D cephalograms using deep learning
(DL) has achieved great success, but 3D landmarking for more than 80 landmarks
has not yet reached a satisfactory level, because of the factors hindering
machine learning such as the high dimensionality of the input data and limited
amount of training data due to ethical restrictions on the use of medical data.
This paper presents a semi-supervised DL method for 3D landmarking that takes
advantage of an anonymized landmark dataset whose paired CT data have been removed. The
proposed method first detects a small number of easy-to-find reference
landmarks, then uses them to provide a rough estimation of the entire landmarks
by utilizing the low dimensional representation learned by variational
autoencoder (VAE). The anonymized landmark dataset is used to train the VAE.
Finally, coarse-to-fine detection is applied to the small bounding box provided
by rough estimation, using separate strategies suitable for mandible and
cranium. For mandibular landmarks, patch-based 3D CNN is applied to the
segmented image of the mandible (separated from the maxilla), in order to
capture 3D morphological features of mandible associated with the landmarks. We
detect 6 landmarks around the condyle all at once, instead of one by one,
because they are closely related to each other. For cranial landmarks, we again
use VAE-based latent representation for more accurate annotation. In our
experiment, the proposed method achieved an average 3D point-to-point error of
2.91 mm for 90 landmarks only with 15 paired training data. | [
"cs.CV",
"eess.IV"
] |
Machine learning classifiers are often trained to recognize a set of
pre-defined classes. However, in many applications, it is often desirable to
have the flexibility of learning additional concepts, with limited data and
without re-training on the full training set. This paper addresses this
problem, incremental few-shot learning, where a regular classification network
has already been trained to recognize a set of base classes, and several extra
novel classes are being considered, each with only a few labeled examples.
After learning the novel classes, the model is then evaluated on the overall
classification performance on both base and novel classes. To this end, we
propose a meta-learning model, the Attention Attractor Network, which
regularizes the learning of novel classes. In each episode, we train a set of
new weights to recognize novel classes until they converge, and we show that
the technique of recurrent back-propagation can back-propagate through the
optimization process and facilitate the learning of these parameters. We
demonstrate that the learned attractor network can help recognize novel classes
while remembering old classes without the need to review the original training
set, outperforming various baselines. | [
"cs.LG",
"cs.CV",
"stat.ML"
] |
Given a collection of images, humans are able to discover landmarks by
modeling the shared geometric structure across instances. This idea of
geometric equivariance has been widely used for the unsupervised discovery of
object landmark representations. In this paper, we develop a simple and
effective approach by combining instance-discriminative and
spatially-discriminative contrastive learning. We show that when a deep network
is trained to be invariant to geometric and photometric transformations,
representations emerge from its intermediate layers that are highly predictive
of object landmarks. Stacking these across layers in a "hypercolumn" and
projecting them using spatially-contrastive learning further improves their
performance on matching and few-shot landmark regression tasks. We also present
a unified view of existing equivariant and invariant representation learning
approaches through the lens of contrastive learning, shedding light on the
nature of invariances learned. Experiments on standard benchmarks for landmark
learning, as well as a new challenging one we propose, show that the proposed
approach surpasses prior state-of-the-art. | [
"cs.CV"
] |
We present DeepMVI, a deep learning method for missing value imputation in
multidimensional time-series datasets. Missing values are commonplace in
decision support platforms that aggregate data over long time stretches from
disparate sources, and reliable data analytics calls for careful handling of
missing data. One strategy is imputing the missing values, and a wide variety
of algorithms exist spanning simple interpolation, matrix factorization methods
like SVD, statistical models like Kalman filters, and recent deep learning
methods. We show that often these provide worse results on aggregate analytics
compared to just excluding the missing data. DeepMVI uses a neural network to
combine fine-grained and coarse-grained patterns along a time series, and
trends from related series across categorical dimensions. After failing with
off-the-shelf neural architectures, we design our own network that includes a
temporal transformer with a novel convolutional window feature, and kernel
regression with learned embeddings. The parameters and their training are
designed carefully to generalize across different placements of missing blocks
and data characteristics. Experiments across nine real datasets and four
different missing-data scenarios, comparing against seven existing methods,
show that DeepMVI is
significantly more accurate, reducing error by more than 50% in more than half
the cases, compared to the best existing method. Although slower than simpler
matrix factorization methods, we justify the increased time overheads by
showing that DeepMVI is the only option that provided overall more accurate
analytics than dropping missing values. | [
"cs.LG",
"cs.AI"
] |
Video question answering (VideoQA) is challenging given its multimodal
combination of visual understanding and natural language understanding.
Existing approaches seldom leverage the appearance-motion information in the
video at multiple temporal scales, and the interaction between the question and
the visual information for textual semantics extraction is frequently ignored.
Targeting these issues, this paper proposes a novel Temporal Pyramid
Transformer (TPT) model with multimodal interaction for VideoQA. The TPT model
comprises two modules, namely Question-specific Transformer (QT) and Visual
Inference (VI). Given the temporal pyramid constructed from a video, QT builds
the question semantics from the coarse-to-fine multimodal co-occurrence between
each word and the visual content. Under the guidance of such question-specific
semantics, VI infers the visual clues from the local-to-global multi-level
interactions between the question and the video. Within each module, we
introduce a multimodal attention mechanism to aid the extraction of
question-video interactions, with residual connections adopted for the
information passing across different levels. Through extensive experiments on
three VideoQA datasets, we demonstrate better performances of the proposed
method in comparison with the state-of-the-arts. | [
"cs.CV"
] |
Visual affordance grounding aims to segment all possible interaction regions
between people and objects from an image/video, which is beneficial for many
applications, such as robot grasping and action recognition. However, existing
methods mainly rely on the appearance features of the objects to segment each
region of the image, which face the following two problems: (i) there are
multiple possible regions in an object that people interact with; and (ii)
there are multiple possible human interactions in the same object region. To
address these problems, we propose a Hand-aided Affordance Grounding Network
(HAG-Net) that leverages the aided clues provided by the position and action of
the hand in demonstration videos to eliminate the multiple possibilities and
better locate the interaction regions in the object. Specifically, HAG-Net has
a dual-branch structure to process the demonstration video and object image.
For the video branch, we introduce hand-aided attention to enhance the region
around the hand in each video frame and then use the LSTM network to aggregate
the action features. For the object branch, we introduce a semantic enhancement
module (SEM) to make the network focus on different parts of the object
according to the action classes and utilize a distillation loss to align the
output features of the object branch with that of the video branch and transfer
the knowledge in the video branch to the object branch. Quantitative and
qualitative evaluations on two challenging datasets show that our method has
achieved state-of-the-art results for affordance grounding. The source code will
be made available to the public. | [
"cs.CV"
] |
Imbalanced classification on graphs is ubiquitous yet challenging in many
real-world applications, such as fraudulent node detection. Recently, graph
neural networks (GNNs) have shown promising performance on many network
analysis tasks. However, most existing GNNs have focused almost exclusively on
balanced networks and perform poorly on imbalanced ones. To bridge this gap,
in this paper, we present a generative
adversarial graph network model, called ImGAGN, to address the imbalanced
classification problem on graphs. It introduces a novel generator for graph
structure data, named GraphGenerator, which can simulate both the minority
class nodes' attribute distribution and network topological structure
distribution by generating a set of synthetic minority nodes such that the
number of nodes in different classes can be balanced. Then a graph
convolutional network (GCN) discriminator is trained to discriminate between
real nodes and fake (i.e., generated) nodes, and also between minority nodes
and majority nodes on the synthetic balanced network. To validate the
effectiveness of the proposed method, extensive experiments are conducted on
four real-world imbalanced network datasets. Experimental results demonstrate
that the proposed method ImGAGN outperforms state-of-the-art algorithms for
semi-supervised imbalanced node classification task. | [
"cs.LG",
"cs.AI"
] |
While single-image super-resolution (SISR) has attracted substantial interest
in recent years, the proposed approaches are limited to learning image priors
in order to add high frequency details. In contrast, multi-frame
super-resolution (MFSR) offers the possibility of reconstructing rich details
by combining signal information from multiple shifted images. This key
advantage, along with the increasing popularity of burst photography, have made
MFSR an important problem for real-world applications.
We propose a novel architecture for the burst super-resolution task. Our
network takes multiple noisy RAW images as input, and generates a denoised,
super-resolved RGB image as output. This is achieved by explicitly aligning
deep embeddings of the input frames using pixel-wise optical flow. The
information from all frames is then adaptively merged using an attention-based
fusion module. In order to enable training and evaluation on real-world data,
we additionally introduce the BurstSR dataset, consisting of smartphone bursts
and high-resolution DSLR ground-truth. We perform comprehensive experimental
analysis, demonstrating the effectiveness of the proposed architecture. | [
"cs.CV"
] |
A significant effort has been made to train neural networks that replicate
algorithmic reasoning, but they often fail to learn the abstract concepts
underlying these algorithms. This is evidenced by their inability to generalize
to data distributions that are outside of their restricted training sets,
namely larger inputs and unseen data. We study these generalization issues at
the level of numerical subroutines that comprise common algorithms like
sorting, shortest paths, and minimum spanning trees. First, we observe that
transformer-based sequence-to-sequence models can learn subroutines like
sorting a list of numbers, but their performance rapidly degrades as the length
of lists grows beyond those found in the training set. We demonstrate that this
is due to attention weights that lose fidelity with longer sequences,
particularly when the input numbers are numerically similar. To address the
issue, we propose a learned conditional masking mechanism, which enables the
model to strongly generalize far outside of its training range with
near-perfect accuracy on a variety of algorithms. Second, to generalize to
unseen data, we show that encoding numbers with a binary representation leads
to embeddings with rich structure once trained on downstream tasks like
addition or multiplication. This allows the embedding to handle missing data by
faithfully interpolating numbers not seen during training. | [
"cs.LG",
"cs.NE",
"cs.PL",
"stat.ML"
] |
Currently, existing state-of-the-art 3D object detectors follow a two-stage
paradigm. These methods typically comprise two steps: 1) Utilize a region
proposal network to propose a fraction of high-quality proposals in a bottom-up
fashion. 2) Resize and pool the semantic features from the proposed regions to
summarize RoI-wise representations for further refinement. Note that the
RoI-wise representations in step 2) are treated individually as uncorrelated
entries when fed to the following detection heads. Nevertheless, we observe
that the proposals generated in step 1) are somewhat offset from the ground
truth and emerge densely in local neighborhoods with an underlying probability.
Challenges arise when a proposal largely forsakes its boundary information due
to coordinate offset, while existing networks lack a corresponding information
compensation mechanism. In this paper, we propose BANet for 3D
object detection from point clouds. Specifically, instead of refining each
proposal independently as previous works do, we represent each proposal as a
node for graph construction within a given cut-off threshold, associating
proposals in the form of local neighborhood graph, with boundary correlations
of an object being explicitly exploited. Besides, we devise a lightweight
Region Feature Aggregation Network to fully exploit voxel-wise, pixel-wise, and
point-wise feature with expanding receptive fields for more informative
RoI-wise representations. As of Apr. 17th, 2021, our BANet achieves on par
performance on KITTI 3D detection leaderboard and ranks $1^{st}$ on $Moderate$
difficulty of $Car$ category on KITTI BEV detection leaderboard. The source
code will be released once the paper is accepted. | [
"cs.CV"
] |
Inspired by the recent success of deep neural networks and ongoing efforts
to develop multi-layer dictionary models, we propose a Deep Analysis dictionary
Model (DeepAM) which is optimized to address a specific regression task known
as single image super-resolution. Contrary to other multi-layer dictionary
models, our architecture contains L layers of analysis dictionary and
soft-thresholding operators to gradually extract high-level features and a
layer of synthesis dictionary which is designed to optimize the regression task
at hand. In our approach, each analysis dictionary is partitioned into two
sub-dictionaries: an Information Preserving Analysis Dictionary (IPAD) and a
Clustering Analysis Dictionary (CAD). The IPAD together with the corresponding
soft-thresholds is designed to pass the key information from the previous layer
to the next layer, while the CAD together with the corresponding
soft-thresholding operator is designed to produce a sparse feature
representation of its input data that facilitates discrimination of key
features. DeepAM combines supervised and unsupervised learning. Simulation
results show that the proposed deep analysis dictionary model achieves better
performance compared to a deep neural network that has the same structure and
is optimized using back-propagation when training datasets are small. | [
"stat.ML",
"cs.CV",
"cs.LG"
] |
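The layer structure described above can be sketched as follows; the dictionary shapes and threshold values are placeholders, and the split into IPAD and CAD sub-dictionaries is shown only schematically:

```python
import numpy as np

def soft_threshold(x: np.ndarray, lam: np.ndarray) -> np.ndarray:
    """Element-wise soft-thresholding: sign(x) * max(|x| - lam, 0)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def analysis_layer(x, W_ipad, W_cad, lam_ipad, lam_cad):
    """One analysis layer split into its two sub-dictionaries."""
    z_ipad = soft_threshold(W_ipad @ x, lam_ipad)  # IPAD: pass key information on
    z_cad = soft_threshold(W_cad @ x, lam_cad)     # CAD: sparse, discriminative part
    return np.concatenate([z_ipad, z_cad])
```

Stacking L such layers and closing with a synthesis dictionary (a final linear map) yields the regression architecture described above.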
Semantic segmentation is a critical technique in the field of autonomous
driving. When performing semantic image segmentation, a wider field of view
(FoV), such as that offered by fisheye cameras, helps to capture more
information about the surrounding environment, making autonomous driving safer
and more reliable. However, large public fisheye datasets are not available,
and the images captured by fisheye cameras with a large FoV come with heavy
distortion, so commonly used semantic segmentation models cannot be utilized
directly. In this paper, a seven-degrees-of-freedom (DoF) augmentation method
is proposed to transform rectilinear images into fisheye images in a more
comprehensive way. During training, rectilinear images are transformed into
fisheye images in seven DoF, which simulates fisheye images taken by cameras
at different positions and with different orientations and focal lengths. The
results show that training with the seven-DoF augmentation improves the
model's accuracy and robustness against differently distorted fisheye data. This
seven-DoF augmentation provides a universal semantic segmentation solution for
fisheye cameras in different autonomous driving applications. Also, we provide
specific parameter settings of the augmentation for autonomous driving. At
last, we tested our universal semantic segmentation model on real fisheye
images and obtained satisfactory results. The code and configurations are
released at https://github.com/Yaozhuwa/FisheyeSeg. | [
"cs.CV",
"cs.LG",
"stat.ML"
] |
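A minimal sketch of one ingredient of such an augmentation, assuming an equidistant fisheye model (r = f·θ); the paper's seven-DoF transform additionally varies the virtual camera's position and orientation, which is omitted here:

```python
import numpy as np

def rectilinear_to_fisheye(u, v, f_rect, f_fish, cx, cy):
    """Map a rectilinear pixel (u, v) to equidistant fisheye coordinates.

    Rectilinear model: r_rect = f_rect * tan(theta); equidistant fisheye:
    r_fish = f_fish * theta. Position/orientation DoF are omitted.
    """
    du, dv = u - cx, v - cy
    r = np.hypot(du, dv)
    theta = np.arctan2(r, f_rect)            # incidence angle of the viewing ray
    r_fish = f_fish * theta                  # equidistant projection
    scale = r_fish / np.maximum(r, 1e-9)     # guard the principal point (r = 0)
    return cx + du * scale, cy + dv * scale
```

Sampling f_fish (and, in the full method, the remaining DoF) per training image simulates a family of fisheye cameras from a single rectilinear dataset.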
Model-based approaches for image reconstruction, analysis and interpretation
have made significant progress over the last decades. Many of these approaches
are based on either mathematical, physical or biological models. A challenge
for these approaches is the modelling of the underlying processes (e.g. the
physics of image acquisition or the patho-physiology of a disease) with
appropriate levels of detail and realism. With the availability of large
amounts of imaging data and machine learning (in particular deep learning)
techniques, data-driven approaches have become more widespread for use in
different tasks in reconstruction, analysis and interpretation. These
approaches learn statistical models directly from labelled or unlabelled image
data and have been shown to be very powerful for extracting clinically useful
information from medical imaging. While these data-driven approaches often
outperform traditional model-based approaches, their clinical deployment often
poses challenges in terms of robustness, generalization ability and
interpretability. In this article, we discuss what developments have motivated
the shift from model-based approaches towards data-driven strategies and what
potential problems are associated with the move towards purely data-driven
approaches, in particular deep learning. We also discuss some of the open
challenges for data-driven approaches, e.g. generalization to new unseen data
(e.g. transfer learning), robustness to adversarial attacks and
interpretability. Finally, we conclude with a discussion on how these
approaches may lead to the development of more closely coupled imaging
pipelines that are optimized in an end-to-end fashion. | [
"cs.CV"
] |
Objective image quality assessment (IQA) is imperative in the current
multimedia-intensive world, in order to assess the visual quality of an image
at close to a human level of ability. Many parameters, such as color
intensity, structure, sharpness, contrast, and the presence of an object, draw
human attention to an image. Psychological vision research suggests that human
vision is biased toward the center area of an image and of the display screen.
As a result, if the center part contains any visually salient information, it
draws human attention even more, and any distortion in that part will be
perceived more strongly than in other parts. To the best of our knowledge,
previous IQA methods have not
considered this fact. In this paper, we propose a full reference image quality
assessment (FR-IQA) approach using visual saliency and contrast; however, we
give extra attention to the center by increasing the sensitivity of the
similarity maps in that region. We evaluated our method on three large-scale
popular benchmark databases used by most current IQA researchers (TID2008,
CSIQ and LIVE), containing a total of 3345 distorted images with 28 different
kinds of distortions. Our method is compared with 13 state-of-the-art
approaches. This comparison reveals a stronger correlation of our method with
human-evaluated values. The predicted quality score is consistent for
distortion-specific as well as distortion-independent cases. Moreover, its
fast processing makes it applicable to real-time applications.
The MATLAB code is publicly available to test the algorithm and can be found
online at http://layek.khu.ac.kr/CEQI. | [
"cs.CV"
] |
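One simple way to realize the center-weighting idea described above is a Gaussian weight map applied to a per-pixel similarity map; the weighting function and its sigma below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def center_weight(h: int, w: int, sigma_frac: float = 0.3) -> np.ndarray:
    """Gaussian weight map that emphasizes the center of the frame."""
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    d2 = ((ys - cy) / (sigma_frac * h)) ** 2 + ((xs - cx) / (sigma_frac * w)) ** 2
    return np.exp(-0.5 * d2)

def pooled_quality(similarity_map: np.ndarray) -> float:
    """Pool a per-pixel similarity map with extra sensitivity at the center."""
    wmap = center_weight(*similarity_map.shape)
    return float((similarity_map * wmap).sum() / wmap.sum())
```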
Co-occurrence Filter (CoF) is a boundary preserving filter. It is based on
the Bilateral Filter (BF) but instead of using a Gaussian on the range values
to preserve edges it relies on a co-occurrence matrix. Pixel values that
co-occur frequently in the image (i.e., inside textured regions) will have a
high weight in the co-occurrence matrix. This, in turn, means that such pixel
pairs will be averaged and hence smoothed, regardless of their intensity
differences. On the other hand, pixel values that rarely co-occur (i.e., across
texture boundaries) will have a low weight in the co-occurrence matrix. As a
result, they will not be averaged and the boundary between them will be
preserved. The CoF therefore extends the BF to deal with boundaries, not just
edges. It learns co-occurrences directly from the image. We can achieve various
filtering results by directing it to learn the co-occurrence matrix from a part
of the image, or a different image. We give the definition of the filter,
discuss how to use it with color images and show several use cases. | [
"cs.CV"
] |
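A sketch of the co-occurrence statistic the filter is built on, assuming a grayscale image quantized to integer levels; the full CoF additionally normalizes these counts by the value frequencies and combines them with a spatial Gaussian:

```python
import numpy as np

def cooccurrence_matrix(img: np.ndarray, levels: int = 256, radius: int = 2) -> np.ndarray:
    """Count how often pixel values co-occur within a small spatial radius.

    img: 2D array of integer gray levels in [0, levels).
    """
    img = img.astype(np.intp)
    h, w = img.shape
    M = np.zeros((levels, levels), dtype=np.float64)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dy == 0 and dx == 0:
                continue
            a = img[max(0, dy):h + min(0, dy), max(0, dx):w + min(0, dx)]
            b = img[max(0, -dy):h + min(0, -dy), max(0, -dx):w + min(0, -dx)]
            np.add.at(M, (a.ravel(), b.ravel()), 1.0)  # accumulate pair counts
    return M / M.sum()
```

Frequently co-occurring value pairs (large M[a, b]) then receive a high range weight and are smoothed across, while rare pairs mark texture boundaries and are preserved.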
Nowadays, a diverse range of physiological data can be captured continuously
for various applications, in particular wellbeing and healthcare. Such data
require efficient methods for classification and analysis. Deep learning
algorithms have shown remarkable potential regarding such analyses, however,
the use of these algorithms on low-power wearable devices is challenged by
resource constraints such as area and power consumption. Most of the available
on-chip deep learning processors contain complex and dense hardware
architectures in order to achieve the highest possible throughput. Such a trend
in hardware design may not be efficient in applications where on-node
computation is required and the focus is more on the area and power efficiency
as in the case of portable and embedded biomedical devices. This paper presents
an efficient time-series classifier capable of automatically detecting
effective features and classifying the input signals in real-time. In the
proposed classifier, throughput is traded off with hardware complexity and cost
using resource sharing techniques. A Convolutional Neural Network (CNN) is
employed to extract input features and then a Long-Short-Term-Memory (LSTM)
architecture with ternary weight precision classifies the input signals
according to the extracted features. Hardware implementation on a Xilinx FPGA
confirms that the proposed design can accurately classify multiple complex
biomedical time-series signals with low area and power consumption,
outperforming all previously presented state-of-the-art designs. Most notably,
our classifier reaches 1.3$\times$ higher GOPs/Slice than similar
state-of-the-art FPGA-based accelerators. | [
"cs.LG",
"eess.SP"
] |
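Ternary weight precision, as mentioned above, can be sketched with the common threshold-and-scale heuristic below; the threshold factor and scaling rule follow standard ternary-weight-network practice and may differ from the paper's exact scheme:

```python
import numpy as np

def ternarize(W: np.ndarray, t: float = 0.7) -> tuple[np.ndarray, float]:
    """Quantize weights to {-1, 0, +1} with a per-tensor scale."""
    delta = t * np.abs(W).mean()                      # threshold heuristic
    Q = np.where(W > delta, 1.0, np.where(W < -delta, -1.0, 0.0))
    mask = Q != 0
    alpha = float(np.abs(W[mask]).mean()) if mask.any() else 0.0  # scale factor
    return Q, alpha
```

With weights in {-1, 0, +1}, the LSTM's matrix-vector products reduce to additions and subtractions, which is what makes the hardware small and power-efficient.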
The problem of adversarial robustness has been studied extensively for neural
networks. However, for boosted decision trees and decision stumps there are
almost no results, even though they are widely used in practice (e.g. XGBoost)
due to their accuracy, interpretability, and efficiency. We show in this paper
that for boosted decision stumps the \textit{exact} min-max robust loss and
test error for an $l_\infty$-attack can be computed in $O(T\log T)$ time per
input, where $T$ is the number of decision stumps and the optimal update step
of the ensemble can be done in $O(n^2\,T\log T)$, where $n$ is the number of
data points. For boosted trees we show how to efficiently calculate and
optimize an upper bound on the robust loss, which leads to state-of-the-art
robust test error for boosted trees on MNIST (12.5% for $\epsilon_\infty=0.3$),
FMNIST (23.2% for $\epsilon_\infty=0.1$), and CIFAR-10 (74.7% for
$\epsilon_\infty=8/255$). Moreover, the robust test error rates we achieve are
competitive with those of provably robust convolutional networks. The code for
all our experiments is available at
http://github.com/max-andr/provably-robust-boosting | [
"cs.LG",
"cs.CR",
"stat.ML"
] |
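For intuition, the per-coordinate building block of the exact robust loss can be sketched as follows: for a single stump, the adversary may reach either side of the split whenever the $l_\infty$ ball straddles the threshold. This is a simplified sketch; the paper's $O(T\log T)$ algorithm aggregates such terms over the whole ensemble.

```python
def robust_stump_margin(x: float, y: int, thresh: float, w_left: float,
                        w_right: float, eps: float) -> float:
    """Worst-case margin y * f(x') of one stump over the ball |x' - x| <= eps.

    The stump predicts w_left if the coordinate is <= thresh, else w_right;
    y is the label in {-1, +1}.
    """
    reachable = []
    if x - eps <= thresh:
        reachable.append(w_left)    # adversary can push the point left of the split
    if x + eps > thresh:
        reachable.append(w_right)   # or right of it
    return min(y * v for v in reachable)
```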