text | label
---|---
This paper introduces a new definition of multiscale neighborhoods in 3D
point clouds. This definition, based on spherical neighborhoods and
proportional subsampling, allows the computation of features with a consistent
geometrical meaning, which is not the case when using k-nearest neighbors. With
an appropriate learning strategy, the proposed features can be used in a random
forest to classify 3D points. In this semantic classification task, we show
that our multiscale features outperform state-of-the-art features using the
same experimental conditions. Furthermore, their classification power competes
with more elaborate classification approaches including Deep Learning methods.
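As an illustration of the neighborhood definition, the sketch below computes fixed-radius (spherical) neighborhoods at several scales and subsamples each proportionally; the radii, the 64-point cap, and the use of SciPy's cKDTree are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def multiscale_spherical_neighborhoods(points, query, radii, max_points=64,
                                       rng=np.random.default_rng(0)):
    # At each scale, a fixed-radius sphere gives the neighborhood a consistent
    # geometric extent (unlike k-NN, whose extent varies with point density);
    # proportional subsampling keeps the point count comparable across scales.
    tree = cKDTree(points)
    neighborhoods = []
    for r in radii:
        idx = np.array(tree.query_ball_point(query, r), dtype=int)
        if idx.size > max_points:
            idx = rng.choice(idx, size=max_points, replace=False)
        neighborhoods.append(points[idx])
    return neighborhoods

cloud = np.random.default_rng(1).random((1000, 3))
scales = multiscale_spherical_neighborhoods(cloud, cloud[0], radii=[0.05, 0.1, 0.2])
print([len(s) for s in scales])
```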
|
[
"cs.CV"
] |
Transformers have the potential to learn longer-term dependencies, but are
limited by a fixed-length context in the setting of language modeling. We
propose a novel neural architecture Transformer-XL that enables learning
dependency beyond a fixed length without disrupting temporal coherence. It
consists of a segment-level recurrence mechanism and a novel positional
encoding scheme. Our method not only enables capturing longer-term dependency,
but also resolves the context fragmentation problem. As a result,
Transformer-XL learns dependency that is 80% longer than RNNs and 450% longer
than vanilla Transformers, achieves better performance on both short and long
sequences, and is up to 1,800+ times faster than vanilla Transformers during
evaluation. Notably, we improve the state-of-the-art results of bpc/perplexity
to 0.99 on enwik8, 1.08 on text8, 18.3 on WikiText-103, 21.8 on One Billion
Word, and 54.5 on Penn Treebank (without finetuning). When trained only on
WikiText-103, Transformer-XL manages to generate reasonably coherent, novel
text articles with thousands of tokens. Our code, pretrained models, and
hyperparameters are available in both Tensorflow and PyTorch.
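A minimal sketch of the segment-level recurrence idea follows; relative positional encodings, multi-head projections, and gradient stopping are omitted, and the shapes and mem_len value are illustrative.

```python
import numpy as np

def segment_recurrence(segments, d_model, mem_len):
    # Each segment attends over [cached memory; current segment], so the
    # effective context grows beyond the fixed segment length.
    memory = np.zeros((0, d_model))
    outputs = []
    for seg in segments:                       # seg: (seg_len, d_model)
        context = np.concatenate([memory, seg], axis=0)
        scores = seg @ context.T / np.sqrt(d_model)
        weights = np.exp(scores - scores.max(axis=1, keepdims=True))
        weights /= weights.sum(axis=1, keepdims=True)
        outputs.append(weights @ context)
        memory = context[-mem_len:]            # cache for the next segment
    return outputs

segs = [np.random.rand(4, 8) for _ in range(3)]
print([o.shape for o in segment_recurrence(segs, d_model=8, mem_len=6)])
```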
|
[
"cs.LG",
"cs.CL",
"stat.ML"
] |
Telecommunication (Telco) outdoor position recovery aims to localize outdoor
mobile devices by leveraging measurement report (MR) data. Unfortunately, Telco
position recovery requires a sufficient amount of MR samples across different
areas and suffers from high data collection costs. For an area with scarce MR
samples, it is hard to achieve good accuracy. In this paper, by leveraging the
recently developed transfer learning techniques, we design a novel Telco
position recovery framework, called TLoc, to transfer good models in the
carefully selected source domains (those fine-grained small subareas) to a
target one which originally suffers from poor localization accuracy.
Specifically, TLoc introduces three dedicated components: 1) a new coordinate
space to divide an area of interest into smaller domains, 2) a similarity
measurement to select best source domains, and 3) an adaptation of an existing
transfer learning approach. To the best of our knowledge, TLoc is the first
framework that demonstrates the efficacy of applying transfer learning to
Telco outdoor position recovery. For example, on the 2G GSM and 4G LTE MR
datasets in Shanghai, TLoc achieves 27.58% and 26.12% lower median errors
than a non-transfer approach, and 47.77% and 49.22% lower median errors than
a recent fingerprinting approach, NBL.
|
[
"cs.LG",
"stat.ML"
] |
The training of Generative Adversarial Networks is a difficult task mainly
due to the nature of the networks. One such issue is when the generator and
discriminator start oscillating, rather than converging to a fixed point.
Another case can be when one agent becomes more adept than the other which
results in the decrease of the other agent's ability to learn, reducing the
learning capacity of the system as a whole. Additionally, there exists the
problem of mode collapse, in which the generator's output collapses to a
single sample or a small set of similar samples. Training GANs requires a
careful selection of the architecture along with a variety of other methods
to improve training. Even when applying these methods, training stability
remains low with respect to the chosen parameters. Stochastic
ensembling is suggested as a method for improving the stability while training
GANs.
|
[
"stat.ML",
"cs.CV",
"cs.LG"
] |
There are five features to consider when using generative adversarial
networks to apply makeup to photos of the human face. These features include
(1) facial components, (2) interactive color adjustments, (3) makeup
variations, (4) robustness to poses and expressions, and (5) the use of
multiple reference images. Several related works have been proposed, mainly
using generative adversarial networks (GAN). Unfortunately, none of them have
addressed all five features simultaneously. This paper closes the gap with an
innovative style- and latent-guided GAN (SLGAN). We provide a novel, perceptual
makeup loss and a style-invariant decoder that can transfer makeup styles based
on histogram matching to avoid the identity-shift problem. In our experiments,
we show that our SLGAN is better than or comparable to state-of-the-art
methods. Furthermore, we show that our proposal can interpolate facial makeup
images to determine the unique features, compare existing methods, and help
users find desirable makeup configurations.
|
[
"cs.CV",
"cs.MM"
] |
A major challenge in Bayesian Optimization is the boundary issue (Swersky,
2017) where an algorithm spends too many evaluations near the boundary of its
search space. In this paper, we propose BOCK, Bayesian Optimization with
Cylindrical Kernels, whose basic idea is to transform the ball geometry of the
search space using a cylindrical transformation. Because of the transformed
geometry, the Gaussian Process-based surrogate model spends less budget
searching near the boundary, while concentrating its efforts relatively more
near the center of the search region, where we expect the solution to be
located. We evaluate BOCK extensively, showing that it is not only more
accurate and efficient, but it also scales successfully to problems with a
dimensionality as high as 500. We show that the better accuracy and scalability
of BOCK even allows optimizing modestly sized neural network layers, as well as
neural network hyperparameters.
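The geometric idea can be illustrated with a toy cylindrical kernel, assuming a unit-ball search space; the specific radial and angular kernels and the length scale below are stand-ins rather than BOCK's actual kernel.

```python
import numpy as np

def ball_to_cylinder(x, eps=1e-12):
    # Map a point in the unit ball to (radius, direction-on-sphere).
    r = np.linalg.norm(x)
    return r, x / (r + eps)

def cylindrical_kernel(x, y, length_scale=0.5):
    # Product kernel on the cylinder: an RBF over radii times a simple
    # angular similarity over directions. Boundary points no longer occupy
    # most of the kernel's volume, so the surrogate spends relatively more
    # of its budget near the center of the search region.
    rx, ax = ball_to_cylinder(x)
    ry, ay = ball_to_cylinder(y)
    k_radial = np.exp(-(rx - ry) ** 2 / (2 * length_scale ** 2))
    k_angular = 0.5 * (1.0 + ax @ ay)          # in [0, 1]
    return k_radial * k_angular

print(cylindrical_kernel(np.array([0.1, 0.0]), np.array([0.0, 0.1])))
```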
|
[
"stat.ML",
"cs.LG"
] |
Graph neural networks have received significant attention for graph
representation and classification in the machine learning community. An
attention mechanism applied to the neighborhood of a node improves the
performance of graph neural networks. Typically, it helps to identify a
neighbor node which plays a more important role in determining the label of
the node under consideration. But in real world
scenarios, a particular subset of nodes together, but not the individual pairs
in the subset, may be important to determine the label of the graph. To address
this problem, we introduce the concept of subgraph attention for graphs. On the
other hand, hierarchical graph pooling has been shown to be promising in recent
literature. But due to the noisy hierarchical structure of real world graphs,
not all hierarchies of a graph play an equal role in graph classification.
Towards this end, we propose a graph classification algorithm called
SubGattPool which jointly learns the subgraph attention and employs two
different types of hierarchical attention mechanisms to find the important
nodes in a hierarchy and the importance of individual hierarchies in a graph.
Experimental evaluation with different types of graph classification algorithms
shows that SubGattPool is able to improve the state-of-the-art or remains
competitive on multiple publicly available graph classification datasets. We
conduct further experiments on both synthetic and real world graph datasets to
justify the usefulness of different components of SubGattPool and to show its
consistent performance on other downstream tasks.
|
[
"cs.LG",
"cs.SI"
] |
The growth in the number of galaxy images is much faster than the speed at
which these galaxies can be labelled by humans. However, by leveraging the
information present in the ever-growing set of unlabelled images,
semi-supervised learning could be an effective way of reducing the required
labelling and increasing classification accuracy. We develop a Variational
Autoencoder (VAE) with Equivariant Transformer layers and a classifier
network operating on the latent space. We show that this novel architecture leads to
improvements in accuracy when used for the galaxy morphology classification
task on the Galaxy Zoo data set. In addition we show that pre-training the
classifier network as part of the VAE using the unlabelled data leads to higher
accuracy with fewer labels compared to existing approaches. This novel VAE has
the potential to automate galaxy morphology classification with reduced human
labelling efforts.
|
[
"stat.ML",
"cs.LG"
] |
All-goals updating exploits the off-policy nature of Q-learning to update all
possible goals an agent could have from each transition in the world, and was
introduced into Reinforcement Learning (RL) by Kaelbling (1993). In prior work
this was mostly explored in small-state RL problems that allowed tabular
representations and where all possible goals could be explicitly enumerated and
learned separately. In this paper we empirically explore 3 different extensions
of the idea of updating many (instead of all) goals in the context of RL with
deep neural networks (or DeepRL for short). First, in a direct adaptation of
Kaelbling's approach we explore if many-goals updating can be used to achieve
mastery in non-tabular visual-observation domains. Second, we explore whether
many-goals updating can be used to pre-train a network to subsequently learn
faster and better on a single main task of interest. Third, we explore whether
many-goals updating can be used to provide auxiliary task updates in training a
network to learn faster and better on a single main task of interest. We
provide comparisons to baselines for each of the 3 extensions.
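In the tabular case, the core of many-goals updating can be sketched as a goal-conditioned Q-learning update applied to every goal in a sampled set; the reward scheme, indexing, and hyperparameters below are illustrative assumptions.

```python
import numpy as np

def many_goals_update(Q, s, a, s_next, goals, alpha=0.5, gamma=0.9):
    # Q is indexed as Q[goal, state, action]; a single off-policy transition
    # (s, a, s_next) updates the value estimates for every goal in `goals`.
    for g in goals:
        r = 1.0 if s_next == g else 0.0            # goal-conditioned reward
        bootstrap = gamma * Q[g, s_next].max() * (s_next != g)
        Q[g, s, a] += alpha * (r + bootstrap - Q[g, s, a])
    return Q

Q = np.zeros((5, 5, 2))                            # 5 goals, 5 states, 2 actions
Q = many_goals_update(Q, s=0, a=1, s_next=3, goals=range(5))
print(Q[3, 0, 1])                                  # only goal 3 saw a reward
```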
|
[
"cs.LG",
"cs.AI",
"stat.ML"
] |
Model-free reinforcement learning has recently been shown to successfully
learn navigation policies from raw sensor data. In this work, we address the
problem of learning driving policies for an autonomous agent in a high-fidelity
simulator. Building upon recent research that applies deep reinforcement
learning to navigation problems, we present a modular deep reinforcement
learning approach to predict the steering angle of the car from raw images. The
first module extracts a low-dimensional latent semantic representation of the
image. The control module trained with reinforcement learning takes the latent
vector as input to predict the correct steering angle. The experimental
results show that our method is capable of learning to maneuver the car without
any human control signals.
|
[
"cs.LG",
"cs.AI",
"cs.CV",
"cs.RO",
"stat.ML"
] |
In this work, we study the problem of training deep networks for semantic
image segmentation using only a fraction of annotated images, which may
significantly reduce human annotation efforts. Particularly, we propose a
strategy that exploits the unpaired image style transfer capabilities of
CycleGAN in semi-supervised segmentation. Unlike recent works using adversarial
learning for semi-supervised segmentation, we enforce cycle consistency to
learn a bidirectional mapping between unpaired images and segmentation masks.
This adds an unsupervised regularization effect that boosts the segmentation
performance when annotated data is limited. Experiments on three different
public segmentation benchmarks (PASCAL VOC 2012, Cityscapes and ACDC)
demonstrate the effectiveness of the proposed method. The proposed model
achieves a 2-4% improvement over the baseline and outperforms recent
approaches for this task, particularly in the low labeled-data regime.
|
[
"cs.CV"
] |
Graph Neural Networks (GNNs) have recently received significant research
attention due to their superior performance on a variety of graph-related
learning tasks. Most of the current works focus on either static or dynamic
graph settings, addressing a single particular task, e.g., node/graph
classification, or link prediction. In this work, we investigate the question:
can GNNs be applied to continually learn a sequence of tasks? Towards that, we
explore the Continual Graph Learning (CGL) paradigm and present the Experience
Replay based framework ER-GNN for CGL to alleviate the catastrophic forgetting
problem in existing GNNs. ER-GNN stores knowledge from previous tasks as
experiences and replays them when learning new tasks to mitigate the
catastrophic forgetting issue. We propose three experience node selection
strategies: mean of feature, coverage maximization, and influence maximization,
to guide the process of selecting experience nodes. Extensive experiments on
three benchmark datasets demonstrate the effectiveness of our ER-GNN and shed
light on incremental graph (non-Euclidean) structure learning.
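As one concrete example, the "mean of feature" selection strategy might look like the sketch below; the per-class budget and the Euclidean distance to the class centroid are assumptions, and the coverage and influence maximization strategies are not shown.

```python
import numpy as np

def mean_of_feature_selection(features, labels, n_per_class):
    # For each class, keep the nodes whose features lie closest to the class
    # centroid; these become the replayed experiences for later tasks.
    selected = []
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        centroid = features[idx].mean(axis=0)
        dists = np.linalg.norm(features[idx] - centroid, axis=1)
        selected.extend(idx[np.argsort(dists)[:n_per_class]].tolist())
    return selected

X = np.random.rand(100, 16)
y = np.random.randint(0, 4, size=100)
print(mean_of_feature_selection(X, y, n_per_class=3))
```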
|
[
"cs.LG",
"stat.ML"
] |
Domestic violence (DV) is a global social and public health issue that is
highly gendered. Being able to accurately predict DV recidivism, i.e.,
re-offending of a previously convicted offender, can speed up and improve risk
assessment procedures for police and front-line agencies, better protect
victims of DV, and potentially prevent future re-occurrences of DV. Previous
work in DV recidivism has employed different classification techniques,
including decision tree (DT) induction and logistic regression, where the main
focus was on achieving high prediction accuracy. As a result, the diagrams
of trained DTs were often too difficult to interpret due to their size and
complexity, making decision-making challenging. Given there is often a
trade-off between model accuracy and interpretability, in this work our aim is
to employ DT induction to obtain both interpretable trees as well as high
prediction accuracy. Specifically, we implement and evaluate different
approaches to deal with class imbalance as well as feature selection. Compared
to previous work in DV recidivism prediction that employed logistic regression,
our approach can achieve comparable area under the ROC curve results by using
only 3 of 11 available features and generating understandable decision trees
that contain only 4 leaf nodes.
|
[
"cs.LG",
"stat.ML"
] |
3D multi-object generative models allow us to synthesize a large range of
novel 3D multi-object scenes and also identify objects, shapes, layouts and
their positions. But multi-object scenes are difficult to create because the
dataset is multimodal in nature. Conventional 3D generative adversarial
models are not efficient at generating multi-object scenes; they usually tend
to generate either one object or fuzzy results of multiple objects.
Auto-encoder models have much scope in feature extraction and representation
learning using the unsupervised paradigm in probabilistic spaces. We try to
make use of this property in our proposed model. In this paper we propose a
novel architecture using 3DConvNets trained with the progressive training
paradigm that has been able to generate realistic high resolution 3D scenes of
rooms, bedrooms, offices etc. with various pieces of furniture and objects. We
make use of the adversarial auto-encoder along with the WGAN-GP loss parameter
in our discriminator loss function. Finally, this new approach to
multi-object scene generation is also able to generate a larger number of
objects per scene.
|
[
"cs.CV"
] |
This paper presents the predictive accuracy of neural network algorithms
using two meteorological factors, average temperature and average humidity.
We analyze the results of five learning architectures: the traditional
artificial neural network, the deep neural network, the extreme learning
machine, long short-term memory, and long short-term memory with peephole
connections. Our neural network models are trained on a daily time-series
dataset spanning seven years (2014 to 2020). From the results after 2500,
5000, and 7500 training epochs, we obtain the predicted accuracies of the
meteorological factors for ten metropolitan cities (Seoul, Daejeon, Daegu,
Busan, Incheon, Gwangju, Pohang, Mokpo, Tongyeong, and Jeonju). Error
statistics are computed from the outputs and compared across the five neural
networks. Using the long short-term memory model in test 1 (average
temperature predicted from an input layer with six input nodes), Tongyeong
has the lowest root mean squared error (RMSE) of 0.866 (%) in summer for
temperature prediction. For humidity prediction, the lowest RMSE of 5.732 (%)
is obtained with the long short-term memory model in summer in Mokpo in test
2 (average humidity predicted from an input layer with six input nodes). In
particular, the long short-term memory model is found to be more accurate in
forecasting daily temperature and humidity levels than the other neural
network models. Our results may provide a basis for the necessity of
exploring and developing novel neural network evaluation methods in the
future.
|
[
"cs.LG",
"cond-mat.dis-nn",
"cond-mat.stat-mech"
] |
Scene understanding of high resolution aerial images is of great importance
for the task of automated monitoring in various remote sensing applications.
Due to the large within-class and small between-class variance in pixel values
of objects of interest, this remains a challenging task. In recent years, deep
convolutional neural networks have started being used in remote sensing
applications and demonstrate state-of-the-art performance for pixel-level
classification of objects. Here we propose a reliable
framework for performant results for the task of semantic segmentation of
monotemporal very high resolution aerial images. Our framework consists of a
novel deep learning architecture, ResUNet-a, and a novel loss function based on
the Dice loss. ResUNet-a uses a UNet encoder/decoder backbone, in combination
with residual connections, atrous convolutions, pyramid scene parsing pooling
and multi-tasking inference. ResUNet-a infers sequentially the boundary of the
objects, the distance transform of the segmentation mask, the segmentation mask
and a colored reconstruction of the input. Each of the tasks is conditioned on
the inference of the previous ones, thus establishing a conditioned
relationship between the various tasks, as this is described through the
architecture's computation graph. We analyse the performance of several
flavours of the Generalized Dice loss for semantic segmentation, and we
introduce a novel variant loss function for semantic segmentation of objects
that has excellent convergence properties and behaves well even under the
presence of highly imbalanced classes. The performance of our modeling
framework is evaluated on the ISPRS 2D Potsdam dataset. Results show
state-of-the-art performance with an average F1 score of 92.9% over all
classes for our best model.
|
[
"cs.CV"
] |
Generative Adversarial Networks (GANs) represent a promising class of
generative networks that combine neural networks with game theory. From
generating realistic images and videos to assisting musical creation, GANs are
transforming many fields of arts and sciences. However, their application to
healthcare has not been fully realized, more specifically in generating
electronic health records (EHR) data. In this paper, we propose a framework for
exploring the value of GANs in the context of continuous laboratory time series
data. We devise an unsupervised evaluation method that measures the predictive
power of synthetic laboratory test time series. Further, we show that when it
comes to predicting the impact of drug exposure on laboratory test data,
incorporating representation learning of the training cohorts prior to training
GAN models is beneficial.
|
[
"cs.LG",
"stat.ML"
] |
Attention mechanism has demonstrated great potential in fine-grained visual
recognition tasks. In this paper, we present a counterfactual attention
learning method to learn more effective attention based on causal inference.
Unlike most existing methods that learn visual attention based on conventional
likelihood, we propose to learn the attention with counterfactual causality,
which provides a tool to measure the attention quality and a powerful
supervisory signal to guide the learning process. Specifically, we analyze the
effect of the learned visual attention on network prediction through
counterfactual intervention and maximize the effect to encourage the network to
learn more useful attention for fine-grained image recognition. Empirically, we
evaluate our method on a wide range of fine-grained recognition tasks where
attention plays a crucial role, including fine-grained image categorization,
person re-identification, and vehicle re-identification. The consistent
improvement on all benchmarks demonstrates the effectiveness of our method.
Code is available at https://github.com/raoyongming/CAL
|
[
"cs.CV",
"cs.AI",
"cs.LG"
] |
In this paper, we investigate the reliability of online recognition
platforms, Amazon Rekognition and Microsoft Azure, with respect to changes in
background, acquisition device, and object orientation. We focus on platforms
that are commonly used by the public to better understand their real-world
performances. To assess the variation in recognition performance, we perform a
controlled experiment by changing the acquisition conditions one at a time. We
use three smartphones, one DSLR, and one webcam to capture side views and
overhead views of objects in a living room, an office, and photo studio setups.
Moreover, we introduce a framework to estimate the recognition performance with
respect to backgrounds and orientations. In this framework, we utilize both
handcrafted features based on color, texture, and shape characteristics and
data-driven features obtained from deep neural networks. Experimental results
show that deep learning-based image representations can estimate the
recognition performance variation with a Spearman's rank-order correlation of
0.94 under multifarious acquisition conditions.
|
[
"cs.CV",
"eess.IV",
"I.2; I.4; I.5"
] |
Machine learning models in practical settings are typically confronted with
changes to the distribution of the incoming data. Such changes can severely
affect the model performance, leading for example to misclassifications of
data. This is particularly apparent in the domain of bionic hand prostheses,
where machine learning models promise faster and more intuitive user
interfaces, but are hindered by their lack of robustness to everyday
disturbances, such as electrode shifts. One way to address changes in the data
distribution is transfer learning, that is, to transfer the disturbed data to a
space where the original model is applicable again. In this contribution, we
propose a novel expectation maximization algorithm to learn linear
transformations that maximize the likelihood of disturbed data after the
transformation. We also show that this approach generalizes to discriminative
models, in particular learning vector quantization models. In our evaluation on
data from the bionic prostheses domain we demonstrate that our approach can
learn a transformation which improves classification accuracy significantly and
outperforms all tested baselines when little data or few classes are available in
the target domain.
|
[
"cs.LG"
] |
Real-world data usually have high dimensionality, and it is important to
mitigate the curse of dimensionality. High-dimensional data usually have a
coherent structure and therefore relatively few true degrees of freedom.
There are global and local dimensionality reduction methods to alleviate the
problem. Most existing methods for local dimensionality reduction obtain an
embedding via eigenvalue or singular value decomposition, whose computational
complexity is very high for large amounts of data. Here we propose a novel
local nonlinear approach named Vec2vec
for general purpose dimensionality reduction, which generalizes recent
advancements in embedding representation learning of words to dimensionality
reduction of matrices. It obtains the nonlinear embedding using a neural
network with only one hidden layer to reduce the computational complexity. To
train the neural network, we build the neighborhood similarity graph of a
matrix and define the context of data points by exploiting the random walk
properties. Experiments demonstrate that Vec2vec is more efficient than
several state-of-the-art local dimensionality reduction methods on large
volumes of high-dimensional data. Extensive experiments on data
classification and clustering on eight real datasets show that Vec2vec is
better than several classical dimensionality reduction methods under
statistical hypothesis tests, and it is competitive with the recently
developed state-of-the-art method UMAP.
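The context-building step described can be sketched as random walks over a neighborhood similarity graph; the walk length, walk count, and binary adjacency are illustrative, and the word2vec-style one-hidden-layer training loop is omitted.

```python
import numpy as np

def random_walk_contexts(adj, walk_len=5, n_walks=2, rng=np.random.default_rng(0)):
    # Walks over the kNN similarity graph define each data point's "context",
    # which a shallow skip-gram-style network is then trained to predict.
    contexts = []
    for start in range(adj.shape[0]):
        for _ in range(n_walks):
            walk, node = [start], start
            for _ in range(walk_len - 1):
                neighbors = np.flatnonzero(adj[node])
                if neighbors.size == 0:
                    break
                node = rng.choice(neighbors)
                walk.append(int(node))
            contexts.append(walk)
    return contexts

A = (np.random.rand(10, 10) > 0.7).astype(int)     # toy adjacency matrix
print(random_walk_contexts(A)[:3])
```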
|
[
"cs.LG"
] |
The overwhelming presence of categorical/sequential data in diverse domains
emphasizes the importance of sequence mining. The challenging nature of
sequences underscores the need for continued research to find more accurate
and faster approaches providing a better understanding of their
(dis)similarities.
This paper proposes a new Model-based approach for clustering sequence data,
namely nTreeClus. The proposed method deploys tree-based learners, k-mers, and
autoregressive models for categorical time series, culminating in a novel
numerical representation of the categorical sequences. Adopting this new
representation, we cluster sequences, considering the inherent patterns in
categorical time series. The model also shows robustness to its parameters.
Under different simulated scenarios, nTreeClus improved on the baseline
methods by up to 10.7% and 2.7% on various internal and external cluster
validation metrics, respectively. The empirical evaluation using synthetic and real
datasets, protein sequences, and categorical time series showed that nTreeClus
is competitive or superior to most state-of-the-art algorithms.
|
[
"cs.LG",
"stat.ML"
] |
We present McAssoc, a deep learning approach to the association of detection
bounding boxes in different views of a multi-camera system. The vast majority
of academia has been developing single-camera computer vision algorithms;
however, little research attention has been directed to incorporating them
into a multi-camera system. In this paper, we designed a 3-branch
architecture that leverages direct association and additional cross
localization information. A new metric, image-pair association accuracy
(IPAA), is designed specifically for performance evaluation of cross-camera
detection association. We show in the experiments that localization
information is critical to successful cross-camera association, especially
when similar-looking objects are present. This paper is an experimental work
prior to MessyTable, which is a large-scale benchmark for instance
association in multiple cameras.
|
[
"cs.CV"
] |
Bilinear feature transformation has shown state-of-the-art performance in
learning fine-grained image representations. However, the computational cost
of learning pairwise interactions between deep feature channels is
prohibitively expensive, which restricts this powerful transformation from
being used in deep neural networks. In this paper, we propose a deep
bilinear transformation (DBT)
block, which can be deeply stacked in convolutional neural networks to learn
fine-grained image representations. The DBT block can uniformly divide input
channels into several semantic groups. Since the bilinear transformation can
be represented by calculating pairwise interactions within each group, the
computational cost can be greatly reduced. The output of each block is further
obtained by aggregating intra-group bilinear features, with residuals from the
entire input features. We find that the proposed network achieves new
state-of-the-art results on several fine-grained image recognition benchmarks,
including CUB-Bird, Stanford-Car, and FGVC-Aircraft.
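The cost saving from grouping can be illustrated with a toy PyTorch sketch; the actual DBT block's learned projections, intra-group aggregation, and residual connections are omitted.

```python
import torch

def grouped_bilinear(x, n_groups):
    # Split C channels into groups and compute pairwise (bilinear) channel
    # interactions only within each group, dropping the cost from O(C^2)
    # interactions to O(C^2 / n_groups).
    b, c, h, w = x.shape
    g = c // n_groups
    x = x.reshape(b, n_groups, g, h * w)
    inter = torch.einsum('bngl,bnhl->bngh', x, x) / (h * w)
    return inter.reshape(b, -1)

feats = torch.randn(2, 64, 7, 7)
print(grouped_bilinear(feats, n_groups=8).shape)   # 8 groups * 8 * 8 = 512 features
```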
|
[
"cs.CV"
] |
When we fine-tune a well-trained deep learning model for a new set of
classes, the network learns new concepts but gradually forgets the knowledge of
old training. In some real-life applications, we may be interested in learning
new classes without forgetting the capabilities gained from previous
experience. This learning-without-forgetting problem is often investigated
using 2D image recognition tasks. In this paper, considering the growth of
depth camera technology, we address the same problem for 3D point cloud
object data. This problem becomes more challenging in 3D than in 2D because of the
unavailability of large datasets and powerful pretrained backbone models. We
investigate knowledge distillation techniques on 3D data to reduce catastrophic
forgetting of the previous training. Moreover, we improve the distillation
process by using semantic word vectors of object classes. We observe that
exploring the interrelation of old and new knowledge during training helps to
learn new concepts without forgetting old ones. Experimenting on three 3D point
cloud recognition backbones (PointNet, DGCNN, and PointConv) and synthetic
(ModelNet40, ModelNet10) and real scanned (ScanObjectNN) datasets, we establish
new baseline results on learning without forgetting for 3D data. This research
will instigate many future works in this area.
|
[
"cs.CV"
] |
This report details our solution to the Google AI Open Images Challenge 2019
Object Detection Track. Based on our detailed analysis of the Open Images
dataset, we find four typical characteristics: large scale, a hierarchical
tag system, severe annotation incompleteness, and data imbalance. Considering
these characteristics, many strategies are employed, including a larger
backbone, distributed softmax loss, class-aware sampling, expert models, and
a heavier classifier. By virtue of these effective strategies, our best
single model could achieve a mAP of 61.90. After ensemble, the final mAP is
boosted to 67.17 in the public leaderboard and 64.21 in the private
leaderboard, which earns 3rd place in the Open Images Challenge 2019.
|
[
"cs.CV"
] |
Image fusion helps in merging two or more images to construct a more
informative single fused image. Recently, unsupervised learning based
convolutional neural networks (CNN) have been utilized for different types of
image fusion tasks such as medical image fusion, infrared-visible image fusion
for autonomous driving as well as multi-focus and multi-exposure image fusion
for satellite imagery. However, it is challenging to analyze the reliability of
these CNNs for the image fusion tasks since no groundtruth is available. This
led to the use of a wide variety of model architectures and optimization
functions yielding quite different fusion results. Additionally, due to the
highly opaque nature of such neural networks, it is difficult to explain the
internal mechanics behind their fusion results. To overcome these challenges, we
present a novel real-time visualization tool, named FuseVis, with which the
end-user can compute per-pixel saliency maps that examine the influence of the
input image pixels on each pixel of the fused image. We trained several image
fusion based CNNs on medical image pairs and then using our FuseVis tool, we
performed case studies on a specific clinical application by interpreting the
saliency maps from each of the fusion methods. We specifically visualized the
relative influence of each input image on the predictions of the fused image
and showed that some of the evaluated image fusion methods are better suited
for the specific clinical application. To the best of our knowledge, currently,
there is no approach for visual analysis of neural networks for image fusion.
Therefore, this work opens up a new research direction to improve the
interpretability of deep fusion networks. The FuseVis tool can also be adapted
in other deep neural network based image processing applications to make them
interpretable.
|
[
"cs.CV",
"cs.LG",
"eess.IV"
] |
Several regularization methods have recently been introduced which force the
latent activations of an autoencoder or deep neural network to conform to
either a Gaussian or hyperspherical distribution, or to minimize the implicit
rank of the distribution in latent space. In the present work, we introduce a
novel regularizing loss function which simulates a pairwise repulsive force
between items and an attractive force of each item toward the origin. We show
that minimizing this loss function in isolation achieves a hyperspherical
distribution. Moreover, when used as a regularizing term, the scaling factor
can be adjusted to allow greater flexibility and tolerance of eccentricity,
thus allowing the latent variables to be stratified according to their relative
importance, while still promoting diversity. We apply this method of Eccentric
Regularization to an autoencoder, and demonstrate its effectiveness in image
generation, representation learning and downstream classification tasks.
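One plausible reading of the described loss is sketched below; the exact functional forms of the repulsive and attractive terms in the paper may differ, and the weights are illustrative.

```python
import torch

def eccentric_regularization(z, repel_weight=1.0, attract_weight=1.0):
    # z: (n, d) batch of latent activations. Items repel each other via an
    # inverse-distance pairwise term while each is pulled toward the origin
    # by a squared-norm term; in isolation this spreads points over a sphere.
    n = z.shape[0]
    dists = torch.cdist(z, z) + torch.eye(n)       # eye avoids divide-by-zero
    repulsion = (1.0 / dists).triu(diagonal=1).sum() / (n * (n - 1) / 2)
    attraction = z.pow(2).sum(dim=1).mean()
    return repel_weight * repulsion + attract_weight * attraction

print(eccentric_regularization(torch.randn(8, 4)))
```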
|
[
"cs.LG",
"cs.AI",
"I.2.6"
] |
We present a method for compositing virtual objects into a photograph such
that the object colors appear to have been processed by the photo's camera
imaging pipeline. Compositing in such a camera-aware manner is essential for
high realism, and it requires the color transformation in the photo's pipeline
to be inferred, which is challenging due to the inherent one-to-many mapping
that exists from a scene to a photo. To address this problem for the case of a
single photo taken from an unknown camera, we propose a dual-learning approach
in which the reverse color transformation (from the photo to the scene) is
jointly estimated. Learning of the reverse transformation is used to facilitate
learning of the forward mapping, by enforcing cycle consistency of the two
processes. We additionally employ a feature sharing schema to extract evidence
from the target photo in the reverse mapping to guide the forward color
transformation. Our dual-learning approach achieves object compositing results
that surpass those of alternative techniques.
|
[
"cs.CV"
] |
Since the late 1990s, skin detection has been one of the major problems in
image processing. If skin detection can be done with high accuracy, it can be
used in many applications, such as face recognition and human tracking. Many
methods have been presented for solving this problem. Most of these methods
use a color space to extract a feature vector for classifying pixels, but
most do not achieve good accuracy in detecting different types of skin. The
approach proposed in this paper is based on the color-based image retrieval
(CBIR) technique. In this method, a feature vector is first defined by means
of the CBIR method and image tiling, considering the relation between a pixel
and its neighbors; after a training step, skin is detected in the test stage.
The results show that the presented approach, in addition to its high
accuracy in detecting skin types, is insensitive to illumination intensity
and face orientation.
|
[
"cs.CV"
] |
The application of intelligent systems, especially in smart homes and
health-related topics, has been drawing increasing attention in recent decades.
Training Human Activity Recognition (HAR) models -- as a major module --
requires a fair amount of labeled data. Despite training with large datasets,
most of the existing models will face a dramatic performance drop when they are
tested against unseen data from new users. Moreover, recording enough data for
each new user is unviable due to the limitations and challenges of working with
human users. Transfer learning techniques aim to transfer the knowledge which
has been learned from the source domain (subject) to the target domain in order
to decrease the models' performance loss in the target domain. This paper
presents a novel adversarial knowledge transfer method named SA-GAN (Subject
Adaptor GAN), which utilizes the Generative Adversarial Network framework to
perform cross-subject transfer learning in the domain of wearable
sensor-based Human Activity Recognition. SA-GAN outperformed other
state-of-the-art methods in more than 66% of experiments and showed the second
best performance in the remaining 25% of experiments. In some cases, it reached
up to 90% of the accuracy which can be obtained by supervised training over the
same domain data.
|
[
"cs.LG",
"stat.ML"
] |
Training a Convolutional Neural Network (CNN) to be robust against rotation
has mostly been done with data augmentation. In this paper, an alternative
research direction is highlighted, encouraging less dependence on data
augmentation by achieving structural rotational invariance in a network. The
deep equivariance-bridged SO(2) invariant network is proposed to echo this
vision. First, the Self-Weighted Nearest Neighbors Graph Convolutional Network
(SWN-GCN) is proposed to implement Graph Convolutional Network (GCN) on the
graph representation of an image to acquire rotationally equivariant
representation, as GCN is more suitable for constructing deeper networks than
spectral graph convolution-based approaches. Then, invariant representation is
eventually obtained with Global Average Pooling (GAP), a permutation-invariant
operation suitable for aggregating high-dimensional representations, over the
equivariant set of vertices retrieved from SWN-GCN. Our method achieves the
state-of-the-art image classification performance on rotated MNIST and CIFAR-10
images, where the models are trained with a non-augmented dataset only.
Quantitative validations over invariance of the representations also
demonstrate strong invariance of deep representations of SWN-GCN over
rotations.
|
[
"cs.CV"
] |
Scene understanding from images is a challenging problem encountered in
autonomous driving. On the object level, while 2D methods have gradually
evolved from computing simple bounding boxes to delivering finer grained
results like instance segmentations, the 3D family is still dominated by
estimating 3D bounding boxes. In this paper, we propose a novel approach to
jointly infer the 3D rigid-body poses and shapes of vehicles from a stereo
image pair using shape priors. Unlike previous works that geometrically align
shapes to point clouds from dense stereo reconstruction, our approach works
directly on images by combining a photometric and a silhouette alignment term
in the energy function. An adaptive sparse point selection scheme is proposed
to efficiently measure the consistency with both terms. In experiments, we show
superior performance of our method on 3D pose and shape estimation over the
previous geometric approach and demonstrate that our method can also be applied
as a refinement step and significantly boost the performance of several
state-of-the-art deep learning based 3D object detectors. All related materials
and demonstration videos are available at the project page
https://vision.in.tum.de/research/vslam/direct-shape.
|
[
"cs.CV"
] |
Graph Neural Networks (GNNs) have proven to be useful for many different
practical applications. However, many existing GNN models have implicitly
assumed homophily among the nodes connected in the graph, and therefore have
largely overlooked the important setting of heterophily, where most connected
nodes are from different classes. In this work, we propose a novel framework
called CPGNN that generalizes GNNs for graphs with either homophily or
heterophily. The proposed framework incorporates an interpretable compatibility
matrix for modeling the heterophily or homophily level in the graph, which can
be learned in an end-to-end fashion, enabling it to go beyond the assumption of
strong homophily. Theoretically, we show that replacing the compatibility
matrix in our framework with the identity (which represents pure homophily)
reduces to GCN. Our extensive experiments demonstrate the effectiveness of our
approach in more realistic and challenging experimental settings with
significantly less training data compared to previous works: CPGNN variants
achieve state-of-the-art results in heterophily settings with or without
contextual node features, while maintaining comparable performance in homophily
settings.
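The role of the compatibility matrix can be illustrated with one simplified propagation step; CPGNN's belief estimation and end-to-end training of H are omitted, and the toy values are assumptions.

```python
import numpy as np

def compatibility_propagation(adj_norm, beliefs, H):
    # Neighbors' class beliefs are mixed through the compatibility matrix H
    # before aggregation; H = identity recovers homophily-style (GCN-like)
    # propagation, while off-diagonal mass models heterophily.
    return adj_norm @ (beliefs @ H)

A = np.array([[0.0, 1.0], [1.0, 0.0]])             # toy normalized adjacency
B = np.array([[0.9, 0.1], [0.2, 0.8]])             # per-node class beliefs
H = np.array([[0.1, 0.9], [0.9, 0.1]])             # heterophilous compatibility
print(compatibility_propagation(A, B, H))
```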
|
[
"cs.LG",
"cs.SI",
"stat.ML"
] |
We analyze the problem of determining whether two given point clouds in 2D,
with any distinct cardinality and any number of outliers, have subsets of the
same size that can be matched via a rigid motion. This problem is important,
for example, in the application of fingerprint matching with incomplete data.
We propose an algorithm that, under assumptions on the noise tolerance,
allows finding corresponding subclouds of the maximum possible size. Our
procedure does so by optimizing a potential energy function, inspired by the
potential energy interaction between point charges in electrostatics.
|
[
"cs.CV"
] |
The proliferation of optical, electron, and scanning probe microscopies gives
rise to large volumes of imaging data of objects as diverse as cells,
bacteria, and pollen, down to nanoparticles, atoms, and molecules. In most cases, the
experimental data streams contain images having arbitrary rotations and
translations within the image. At the same time, for many cases, small amounts
of labeled data are available in the form of prior published results, image
collections, and catalogs, or even theoretical models. Here we develop an
approach that allows generalizing from a small subset of labeled data with a
weak orientational disorder to a large unlabeled dataset with a much stronger
orientational (and positional) disorder, i.e., it performs a classification of
image data given a small number of examples even in the presence of a
distribution shift between the labeled and unlabeled parts. This approach is
based on the semi-supervised rotationally invariant variational autoencoder
(ss-rVAE) model consisting of the encoder-decoder "block" that learns a
rotationally (and translationally) invariant continuous latent representation
of data and a classifier that encodes data into a finite number of discrete
classes. The classifier part of the trained ss-rVAE inherits the rotational
(and translational) invariances and can be deployed independently of the other
parts of the model. The performance of the ss-rVAE is illustrated using the
synthetic data sets with known factors of variation. We further demonstrate its
application for experimental data sets of nanoparticles, creating nanoparticle
libraries and disentangling the representations defining the physical factors
of variation in the data. The code reproducing the results is available at
https://github.com/ziatdinovmax/Semi-Supervised-VAE-nanoparticles.
|
[
"cs.LG",
"cond-mat.dis-nn",
"cond-mat.mtrl-sci",
"physics.data-an"
] |
Neural network models and deep models are among the leading state-of-the-art
models in machine learning. Most successful deep neural models have many
layers, which greatly increases their number of parameters. Training such
models requires a large number of training samples, which are not always
available. One of the fundamental issues in neural networks is overfitting,
which is the issue tackled in this thesis. This problem often occurs when the
training of large models is performed using few training samples. Many
approaches have been proposed to prevent the network from overfitting and
improve its generalization performance such as data augmentation, early
stopping, parameters sharing, unsupervised learning, dropout, batch
normalization, etc.
In this thesis, we tackle the neural network overfitting issue from a
representation learning perspective by considering the situation where few
training samples are available which is the case of many real world
applications. We propose three contributions. The first one presented in
chapter 2 is dedicated to dealing with structured output problems to perform
multivariate regression when the output variable y contains structural
dependencies between its components. The second contribution described in
chapter 3 deals with the classification task where we propose to exploit prior
knowledge about the internal representation of the hidden layers in neural
networks. Our last contribution, presented in chapter 4, shows the benefit of
transfer learning in applications where only a few samples are available. In
this contribution, we provide an automatic system based on such a learning
scheme, with an application to the medical domain. In this application, the
task consists in localizing the third lumbar vertebra in a 3D CT scan. This
work was done in collaboration with the Henri Becquerel Center clinic in
Rouen, which provided us with data.
|
[
"cs.LG",
"stat.ML"
] |
This paper proposes a new method for rigid body pose estimation based on
spectrahedral representations of the tautological orbitopes of $SE(2)$ and
$SE(3)$. The approach can use dense point cloud data from stereo vision or an
RGB-D sensor (such as the Microsoft Kinect), as well as visual appearance data.
The method is a convex relaxation of the classical pose estimation problem, and
is based on explicit linear matrix inequality (LMI) representations for the
convex hulls of $SE(2)$ and $SE(3)$. Given these representations, the relaxed
pose estimation problem can be framed as a robust least squares problem with
the optimization variable constrained to these convex sets. Although this
formulation is a relaxation of the original problem, numerical experiments
indicate that it is indeed exact - i.e. its solution is a member of $SE(2)$ or
$SE(3)$ - in many interesting settings. We additionally show that this method
is guaranteed to be exact for a large class of pose estimation problems.
|
[
"cs.CV"
] |
Adversarial examples have emerged as a significant threat to machine learning
algorithms, especially to the convolutional neural networks (CNNs). In this
paper, we propose two quantization-based defense mechanisms, Constant
Quantization (CQ) and Trainable Quantization (TQ), to increase the robustness
of CNNs against adversarial examples. CQ quantizes input pixel intensities
based on a "fixed" number of quantization levels, while in TQ, the quantization
levels are "iteratively learned during the training phase", thereby providing a
stronger defense mechanism. We apply the proposed techniques to undefended
CNNs against different state-of-the-art adversarial attacks from the
open-source Cleverhans library. The experimental results demonstrate
increases of 50%-96% and 10%-50% in the classification accuracy of perturbed
images generated from the MNIST and CIFAR-10 datasets, respectively, on a
commonly used CNN (Conv2D(64, 8x8) - Conv2D(128, 6x6) - Conv2D(128, 5x5) -
Dense(10) - Softmax()) available in the Cleverhans library.
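Constant Quantization, as described, amounts to snapping pixel intensities to a fixed set of levels; a minimal sketch follows, where the even spacing over [0, 1] and the level count are assumptions.

```python
import numpy as np

def constant_quantization(images, n_levels=4):
    # Snap each intensity to the nearest of n_levels evenly spaced values,
    # washing out small adversarial perturbations below the step size.
    levels = np.linspace(0.0, 1.0, n_levels)
    idx = np.abs(images[..., None] - levels).argmin(axis=-1)
    return levels[idx]

x = np.random.rand(28, 28)
print(np.unique(constant_quantization(x)))          # at most 4 distinct values
```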
|
[
"cs.LG",
"cs.CR",
"stat.ML"
] |
Inertial confinement fusion (ICF) experiments are designed using computer
simulations that are approximations of reality, and therefore must be
calibrated to accurately predict experimental observations. In this work, we
propose a novel nonlinear technique for calibrating from simulations to
experiments, or from low fidelity simulations to high fidelity simulations, via
"transfer learning". Transfer learning is a commonly used technique in the
machine learning community, in which models trained on one task are partially
retrained to solve a separate, but related task, for which there is a limited
quantity of data. We introduce the idea of hierarchical transfer learning, in
which neural networks trained on low fidelity models are calibrated to high
fidelity models, then to experimental data. This technique essentially
bootstraps the calibration process, enabling the creation of models which
predict high fidelity simulations or experiments with minimal computational
cost. We apply this technique to a database of ICF simulations and experiments
carried out at the Omega laser facility. Transfer learning with deep neural
networks enables the creation of models that are more predictive of Omega
experiments than simulations alone. The calibrated models accurately predict
future Omega experiments, and are used to search for new, optimal implosion
designs.
|
[
"cs.LG",
"stat.ML"
] |
Memory units have been widely used to enrich the capabilities of deep
networks on capturing long-term dependencies in reasoning and prediction tasks,
but little investigation exists on deep generative models (DGMs) which are good
at inferring high-level invariant representations from unlabeled data. This
paper presents a deep generative model with a possibly large external memory
and an attention mechanism to capture the local detail information that is
often lost in the bottom-up abstraction process in representation learning. By
adopting a smooth attention model, the whole network is trained end-to-end by
optimizing a variational bound of data likelihood via auto-encoding variational
Bayesian methods, where an asymmetric recognition network is learnt jointly to
infer high-level invariant representations. The asymmetric architecture can
reduce the competition between bottom-up invariant feature extraction and
top-down generation of instance details. Our experiments on several datasets
demonstrate that memory can significantly boost the performance of DGMs and
even achieve state-of-the-art results on various tasks, including density
estimation, image generation, and missing value imputation.
|
[
"cs.LG",
"cs.CV"
] |
Object detection has generally required sliding-window classifiers in
traditional approaches or anchor-box-based predictions in modern deep
learning approaches. However, either of these approaches requires tedious
configuration of boxes. In this
paper, we provide a new perspective where detecting objects is motivated as a
high-level semantic feature detection task. Like edges, corners, blobs and
other feature detectors, the proposed detector scans for feature points all
over the image, for which the convolution is naturally suited. However, unlike
these traditional low-level features, the proposed detector goes for a
higher-level abstraction, that is, we are looking for central points where
there are objects, and modern deep models are already capable of such a
high-level semantic abstraction. Besides, like blob detection, we also predict
the scales of the central points, which is also a straightforward convolution.
Therefore, in this paper, pedestrian and face detection is simplified as a
straightforward center and scale prediction task through convolutions. This
way, the proposed method enjoys a box-free setting. Though structurally simple,
it presents competitive accuracy on several challenging benchmarks, including
pedestrian detection and face detection. Furthermore, a cross-dataset
evaluation is performed, demonstrating a superior generalization ability of the
proposed method.
|
[
"cs.CV"
] |
We present Context Forest (ConF), a technique for predicting properties of
the objects in an image based on its global appearance. Compared to standard
nearest-neighbour techniques, ConF is more accurate, fast and memory efficient.
We train ConF to predict which aspects of an object class are likely to appear
in a given image (e.g. which viewpoint). This makes it possible to speed up
multi-component object detectors, by automatically selecting the most
relevant components to run on that image. This is particularly useful for detectors
trained from large datasets, which typically need many components to fully
absorb the data and reach their peak performance. ConF provides a speed-up of
2x for the DPM detector [1] and of 10x for the EE-SVM detector [2]. To show
ConF's generality, we also train it to predict at which locations objects are
likely to appear in an image. Incorporating this information in the detector
score improves mAP performance by about 2% by removing false positive
detections in unlikely locations.
|
[
"cs.CV"
] |
Face synthesis is an important problem in computer vision with many
applications. In this work, we describe a new method, namely LandmarkGAN, to
synthesize faces based on facial landmarks as input. Facial landmarks are a
natural, intuitive, and effective representation for facial expressions and
orientations, which are independent from the target's texture or color and
background scene. Our method is able to transform a set of facial landmarks
into new faces of different subjects, while retaining the same facial expression
and orientation. Experimental results on face synthesis and reenactments
demonstrate the effectiveness of our method.
|
[
"cs.CV"
] |
Flow-based generative models are composed of invertible transformations
between two random variables of the same dimension. Therefore, flow-based
models cannot be adequately trained if the dimension of the data distribution
does not match that of the underlying target distribution. In this paper, we
propose SoftFlow, a probabilistic framework for training normalizing flows on
manifolds. To sidestep the dimension mismatch problem, SoftFlow estimates a
conditional distribution of the perturbed input data instead of learning the
data distribution directly. We experimentally show that SoftFlow can capture
the innate structure of the manifold data and generate high-quality samples
unlike the conventional flow-based models. Furthermore, we apply the proposed
framework to 3D point clouds to alleviate the difficulty of forming thin
structures for flow-based models. The proposed model for 3D point clouds,
namely SoftPointFlow, can estimate the distribution of various shapes more
accurately and achieves state-of-the-art performance in point cloud generation.
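The perturb-and-condition step described can be sketched as data preparation for a conditional flow; the uniform noise-scale distribution and sigma_max bound are illustrative assumptions.

```python
import numpy as np

def softflow_batch(x, sigma_max=0.1, rng=np.random.default_rng(0)):
    # Perturb each sample with noise of a randomly drawn magnitude and pass
    # that magnitude to the flow as a conditioning variable, so the model
    # sees full-dimensional inputs even when data lie on a thin manifold.
    sigma = rng.uniform(0.0, sigma_max, size=(x.shape[0], 1))
    x_perturbed = x + sigma * rng.standard_normal(x.shape)
    return x_perturbed, sigma                       # (input, condition) pairs

points = np.random.rand(8, 3)                       # toy 3D point batch
xp, cond = softflow_batch(points)
print(xp.shape, cond.shape)
```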
|
[
"cs.CV",
"cs.LG"
] |
Hashing technology has been widely used in image retrieval due to its
computational and storage efficiency. Recently, deep unsupervised hashing
methods have attracted increasing attention due to the high cost of human
annotations in the real world and the superiority of deep learning technology.
However, most deep unsupervised hashing methods usually pre-compute a
similarity matrix to model the pairwise relationship in the pre-trained feature
space. Then this similarity matrix would be used to guide hash learning, in
which most of the data pairs are treated equivalently. The above process is
confronted with the following defects: 1) The pre-computed similarity matrix is
inalterable and disconnected from the hash learning process, which cannot
explore the underlying semantic information. 2) The informative data pairs may
be buried by the large number of less-informative data pairs. To solve the
aforementioned problems, we propose a Deep Self-Adaptive Hashing (DSAH) model
to adaptively capture the semantic information with two special designs:
Adaptive Neighbor Discovery (AND) and Pairwise Information Content (PIC).
Firstly, we adopt the AND to initially construct a neighborhood-based
similarity matrix, and then refine this initial similarity matrix with a novel
update strategy to further investigate the semantic structure behind the
learned representation. Secondly, we measure the priorities of data pairs with
PIC and assign adaptive weights to them, which relies on the assumption that
more dissimilar data pairs contain more discriminative information for hash
learning. Extensive experiments on several datasets demonstrate that these
two techniques enable the deep hashing model to achieve superior
performance.
|
[
"cs.CV",
"cs.IR"
] |
Dexterous multi-fingered hands can accomplish fine manipulation behaviors
that are infeasible with simple robotic grippers. However, sophisticated
multi-fingered hands are often expensive and fragile. Low-cost soft hands offer
an appealing alternative to more conventional devices, but present considerable
challenges in sensing and actuation, making them difficult to apply to more
complex manipulation tasks. In this paper, we describe an approach to learning
from demonstration that can be used to train soft robotic hands to perform
dexterous manipulation tasks. Our method uses object-centric demonstrations,
where a human demonstrates the desired motion of manipulated objects with their
own hands, and the robot autonomously learns to imitate these demonstrations
using reinforcement learning. We propose a novel algorithm that allows us to
blend and select a subset of the most feasible demonstrations to learn to
imitate on the hardware, which we combine with an extension of the guided
policy search framework that uses multiple demonstrations to learn
generalizable neural network policies. We demonstrate our approach on the RBO
Hand 2, with learned
motor skills for turning a valve, manipulating an abacus, and grasping.
|
[
"cs.LG",
"cs.RO"
] |
Synthetic-to-real transfer learning is a framework in which we pre-train
models with synthetically generated images and ground-truth annotations for
real tasks. Although synthetic images overcome the data scarcity issue, it
remains unclear how the fine-tuning performance scales with pre-trained models,
especially in terms of pre-training data size. In this study, we collect a
number of empirical observations and uncover how the performance scales.
Through experiments,
we observe a simple and general scaling law that consistently describes
learning curves in various tasks, models, and complexities of synthesized
pre-training data. Further, we develop a theory of transfer learning for a
simplified scenario and confirm that the derived generalization bound is
consistent with our empirical findings.
|
[
"cs.LG",
"cs.CV"
] |
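The claim of a simple, general scaling law can be made concrete with a curve
fit. Below is a minimal sketch assuming a power-law form, error = a * n^(-b) + c
with pre-training size n; the functional form, constants, and data points are
illustrative assumptions, not the paper's results.

```python
import numpy as np
from scipy.optimize import curve_fit

def scaling_law(n, a, b, c):
    # Power-law decay of fine-tuning error with pre-training set size n.
    return a * np.power(n, -b) + c

# Illustrative (synthetic) measurements: pre-training sizes and test errors.
sizes = np.array([1e4, 3e4, 1e5, 3e5, 1e6])
errors = np.array([0.42, 0.35, 0.29, 0.26, 0.24])

params, _ = curve_fit(scaling_law, sizes, errors, p0=[1.0, 0.3, 0.2])
a, b, c = params
print(f"fitted exponent b = {b:.3f}, irreducible error c = {c:.3f}")
```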
Large Transformer models routinely achieve state-of-the-art results on a
number of tasks but training these models can be prohibitively costly,
especially on long sequences. We introduce two techniques to improve the
efficiency of Transformers. For one, we replace dot-product attention by one
that uses locality-sensitive hashing, changing its complexity from O($L^2$) to
O($L\log L$), where $L$ is the length of the sequence. Furthermore, we use
reversible residual layers instead of the standard residuals, which allows
storing activations only once in the training process instead of $N$ times,
where $N$ is the number of layers. The resulting model, the Reformer, performs
on par with Transformer models while being much more memory-efficient and much
faster on long sequences.
|
[
"cs.LG",
"cs.CL",
"stat.ML"
] |
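A minimal NumPy sketch of the locality-sensitive hashing idea behind the
O($L\log L$) attention above: hash shared query/key vectors with random
projections and compute full attention only within each bucket. This
illustrates the concept only; Reformer's multi-round hashing, chunking, and
causal masking are omitted.

```python
import numpy as np

def lsh_bucket_attention(qk, v, n_hashes=4):
    """qk: (L, d) shared query/key vectors; v: (L, d) values."""
    L, d = qk.shape
    # Random projections; the argmax over signed directions is the bucket id.
    rot = np.random.randn(d, n_hashes)
    proj = qk @ rot
    buckets = np.argmax(np.concatenate([proj, -proj], axis=1), axis=1)

    out = np.zeros_like(v)
    for b in np.unique(buckets):
        idx = np.where(buckets == b)[0]   # positions sharing this bucket
        scores = qk[idx] @ qk[idx].T / np.sqrt(d)
        weights = np.exp(scores - scores.max(axis=1, keepdims=True))
        weights /= weights.sum(axis=1, keepdims=True)
        out[idx] = weights @ v[idx]       # attend only within the bucket
    return out

out = lsh_bucket_attention(np.random.randn(128, 16), np.random.randn(128, 16))
```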
There is a growing concern that the recent progress made in AI, especially
regarding the predictive competence of deep learning models, will be undermined
by a failure to properly explain their operation and outputs. In response to
this disquiet, counterfactual explanations have become massively popular in
eXplainable AI (XAI) due to their proposed computational, psychological, and
legal benefits. In contrast, however, semifactuals, which are a similar way
humans commonly explain their reasoning, have surprisingly received no
attention. Most counterfactual methods address tabular rather than image data,
partly because the nondiscrete nature of the latter makes good counterfactuals
difficult to define. Additionally, generating plausible-looking explanations
which lie on the data manifold is another issue which hampers progress. This
paper advances a novel method for generating plausible counterfactuals (and
semifactuals) for black box CNN classifiers doing computer vision. The present
method, called PlausIble Exceptionality-based Contrastive Explanations (PIECE),
modifies all exceptional features in a test image to be normal from the
perspective of the counterfactual class (hence concretely defining a
counterfactual). Two controlled experiments compare this method to others in
the literature, showing that PIECE not only generates the most plausible
counterfactuals on several measures, but also the best semifactuals.
|
[
"cs.LG",
"cs.AI",
"I.2.6; F.2.2"
] |
Physical processes, camera movement, and unpredictable environmental
conditions like the presence of dust can induce noise and artifacts in video
feeds. We observe that popular unsupervised MOT methods are dependent on
noise-free inputs. We show that the addition of a small amount of artificial
random noise causes a sharp degradation in model performance on benchmark
metrics. We resolve this problem by introducing a robust unsupervised
multi-object tracking (MOT) model: AttU-Net. The proposed single-head attention
model helps limit the negative impact of noise by learning visual
representations at different segment scales. AttU-Net shows better unsupervised
MOT tracking performance over variational inference-based state-of-the-art
baselines. We evaluate our method on the MNIST-MOT and Atari game video
benchmarks. We also provide two extended video datasets, ``Kuzushiji-MNIST
MOT'', which consists of moving Japanese characters, and ``Fashion-MNIST
MOT'', to
validate the effectiveness of the MOT models.
|
[
"cs.CV",
"cs.AI",
"cs.LG",
"cs.MM",
"cs.NE"
] |
With the popularity of blockchain technology, the financial security issues
of blockchain transaction networks have become increasingly serious. Phishing
scam detection methods will protect possible victims and build a healthier
blockchain ecosystem. Usually, the existing works define phishing scam
detection as a node classification task by learning the potential features of
users through graph embedding methods such as random walk or graph neural
network (GNN). However, these detection methods suffer from high complexity
due to the large scale of the blockchain transaction network and ignore the
temporal information of transactions. To address this problem, we define
transaction pattern graphs for users and transform phishing scam detection
into a graph classification task. To extract richer information from the
input graph, we propose a multi-channel graph classification model (MCGC)
with multiple feature extraction channels for the GNN. The transaction
pattern graphs and MCGC are better able to detect potential phishing scammers
by extracting the transaction pattern features of the target users. Extensive
experiments on seven benchmark and Ethereum datasets demonstrate that the
proposed MCGC can not only achieve state-of-the-art performance in the graph
classification task but also achieve effective phishing scam detection based on
the target users' transaction pattern graphs.
|
[
"cs.LG"
] |
Adversarial attack has inspired great interest in computer vision, by showing
that classification-based solutions are prone to imperceptible attack in many
tasks. In this paper, we propose a method, SMART, to attack action recognizers
which rely on 3D skeletal motions. Our method involves an innovative perceptual
loss which ensures the imperceptibility of the attack. Empirical studies
demonstrate that SMART is effective in both white-box and black-box scenarios.
Its generalizability is evidenced on a variety of action recognizers and
datasets. Its versatility is shown in different attacking strategies. Its
deceitfulness is proven in extensive perceptual studies. Finally, SMART shows
that adversarial attack on 3D skeletal motion, one type of time-series data, is
significantly different from traditional adversarial attack problems.
|
[
"cs.CV",
"cs.LG",
"eess.IV"
] |
Communities in social networks evolve over time as people enter and leave the
network and their activity behaviors shift. The task of predicting structural
changes in communities over time is known as community evolution prediction.
Existing work in this area has focused on the development of frameworks for
defining events while using traditional classification methods to perform the
actual prediction. We present a novel graph neural network for predicting
community evolution events from structural and temporal information. The model
(GNAN) includes a group-node attention component which enables support for
variable-sized inputs and learned representation of groups based on member and
neighbor node features. A comparative evaluation with standard baseline methods
is performed and we demonstrate that our model outperforms the baselines.
Additionally, we show the effects of network trends on model performance.
|
[
"cs.LG",
"cs.SI",
"physics.soc-ph"
] |
Robust MDPs (RMDPs) can be used to compute policies with provable worst-case
guarantees in reinforcement learning. The quality and robustness of an RMDP
solution are determined by the ambiguity set---the set of plausible transition
probabilities---which is usually constructed as a multi-dimensional confidence
region. Existing methods construct ambiguity sets as confidence regions using
concentration inequalities, which leads to overly conservative solutions. This
paper proposes a new paradigm that can achieve better solutions with the same
robustness guarantees without using confidence regions as ambiguity sets. To
incorporate prior knowledge, our algorithms optimize the size and position of
ambiguity sets using Bayesian inference. Our theoretical analysis shows the
safety of the proposed method, and the empirical results demonstrate its
practical promise.
|
[
"cs.LG",
"stat.ML"
] |
There has been a significant surge of interest recently around the concept of
explainable artificial intelligence (XAI), where the goal is to produce an
interpretation for a decision made by a machine learning algorithm. Of
particular interest is the interpretation of how deep neural networks make
decisions, given the complexity and `black box' nature of such networks. Given
the infancy of the field, there has been very limited exploration into the
assessment of the performance of explainability methods, with most evaluations
centered around subjective visual interpretation of the produced
interpretations. In this study, we explore a more machine-centric strategy for
quantifying the performance of explainability methods on deep neural networks
via the notion of decision-making impact analysis. We introduce two
quantitative performance metrics: i) Impact Score, which assesses the
percentage of critical factors with either strong confidence reduction impact
or decision changing impact, and ii) Impact Coverage, which assesses the
percentage coverage of adversarially impacted factors in the input. A
comprehensive analysis using this approach was conducted on several
state-of-the-art explainability methods (LIME, SHAP, Expected Gradients,
GSInquire) on a ResNet-50 deep convolutional neural network using a subset of
ImageNet for the task of image classification. Experimental results show that
the critical regions identified by LIME within the tested images had the lowest
impact on the decision-making process of the network (~38%), with progressive
increase in decision-making impact for SHAP (~44%), Expected Gradients (~51%),
and GSInquire (~76%). While by no means perfect, the hope is that the proposed
machine-centric strategy helps push the conversation forward towards better
metrics for evaluating explainability methods and improve trust in deep neural
networks.
|
[
"cs.LG",
"cs.NE"
] |
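The Impact Score metric can be sketched directly from its definition: mask out
the critical factors an explainer identifies, re-run the model, and count
decision changes or strong confidence reductions. In this hypothetical sketch,
`model` returns class probabilities and `masks` are the explainer's
critical-region masks; the reduction threshold is an assumed parameter.

```python
import numpy as np

def impact_score(model, images, masks, reduction_thresh=0.5):
    """images: (N, ...) inputs; masks: (N, ...) binary critical-region masks."""
    hits = 0
    for x, m in zip(images, masks):
        p = model(x)                      # class probabilities
        y = int(np.argmax(p))
        p_masked = model(x * (1 - m))     # critical factors removed
        decision_changed = int(np.argmax(p_masked)) != y
        strong_reduction = p_masked[y] < reduction_thresh * p[y]
        hits += decision_changed or strong_reduction
    return hits / len(images)
```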
Weakly supervised learning has drawn considerable attention recently to
reduce the expensive time and labor consumption of labeling massive data. In
this paper, we investigate a novel weakly supervised learning problem of
learning from similarity-confidence (Sconf) data, where we aim to learn an
effective binary classifier from only unlabeled data pairs equipped with
confidence that illustrates their degree of similarity (two examples are
similar if they belong to the same class). To solve this problem, we propose an
unbiased estimator of the classification risk that can be calculated from only
Sconf data and show that the estimation error bound achieves the optimal
convergence rate. To alleviate potential overfitting when flexible models are
used, we further employ a risk correction scheme on the proposed risk
estimator. Experimental results demonstrate the effectiveness of the proposed
methods.
|
[
"stat.ML",
"cs.LG"
] |
3D point cloud understanding has made great progress in recent years.
However, one major bottleneck is the scarcity of annotated real datasets,
especially compared to 2D object detection tasks, since a large amount of labor
is involved in annotating the real scans of a scene. A promising solution to
this problem is to make better use of the synthetic dataset, which consists of
CAD object models, to boost the learning on real datasets. This can be achieved
by the pre-training and fine-tuning procedure. However, recent work on 3D
pre-training exhibits failure when transferring features learned on synthetic
objects to other real-world applications. In this work, we put forward a new
method called RandomRooms to accomplish this objective. In particular, we
propose to generate random layouts of a scene by making use of the objects in
the synthetic CAD dataset and learn the 3D scene representation by applying
object-level contrastive learning on two random scenes generated from the same
set of synthetic objects. The model pre-trained in this way can serve as a
better initialization when later fine-tuning on the 3D object detection task.
Empirically, we show consistent improvement in downstream 3D detection tasks on
several base models, especially when less training data are used, which
strongly demonstrates the effectiveness and generalization of our method.
Benefiting from the rich semantic knowledge and diverse objects from synthetic
data, our method establishes the new state-of-the-art on widely-used 3D
detection benchmarks ScanNetV2 and SUN RGB-D. We expect our attempt to provide
a new perspective for bridging object and scene-level 3D understanding.
|
[
"cs.CV",
"cs.AI",
"cs.LG"
] |
Explaining the output of a complicated machine learning model like a deep
neural network (DNN) is a central challenge in machine learning. Several
proposed local explanation methods address this issue by identifying what
dimensions of a single input are most responsible for a DNN's output. The goal
of this work is to assess the sensitivity of local explanations to DNN
parameter values. Somewhat surprisingly, we find that DNNs with
randomly-initialized weights produce explanations that are both visually and
quantitatively similar to those produced by DNNs with learned weights. Our
conjecture is that this phenomenon occurs because these explanations are
dominated by the lower level features of a DNN, and that a DNN's architecture
provides a strong prior which significantly affects the representations learned
at these lower layers. NOTE: This work is now subsumed by our recent
manuscript, Sanity Checks for Saliency Maps (to appear NIPS 2018), where we
expand on findings and address concerns raised in Sundararajan et al. (2018).
|
[
"cs.CV",
"cs.LG",
"stat.ML"
] |
Recently, the ever-growing demand for privacy-oriented machine learning has
motivated researchers to develop federated and decentralized learning
techniques, allowing individual clients to train models collaboratively without
disclosing their private datasets. However, widespread adoption has been
limited in domains relying on high levels of user trust, where assessment of
data compatibility is essential. In this work, we define and address low
interoperability induced by underlying client data inconsistencies in federated
learning for tabular data. The proposed method, iFedAvg, builds on federated
averaging by adding local element-wise affine layers to allow for a
personalized and granular understanding of the collaborative learning
process. This enables the detection of outlier datasets in the federation, as
well as learning to compensate for local data distribution shifts without
sharing any original data. We evaluate iFedAvg using several public benchmarks
and a
previously unstudied collection of real-world datasets from the 2014 - 2016
West African Ebola epidemic, jointly forming the largest such dataset in the
world. In all evaluations, iFedAvg achieves competitive average performance
with negligible overhead. It additionally shows substantial improvement on
outlier clients, highlighting increased robustness to individual dataset
shifts. Most importantly, our method provides valuable client-specific insights
at a fine-grained level to guide interoperable federated learning.
|
[
"cs.LG",
"cs.DC"
] |
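A minimal PyTorch sketch of the personalization mechanism described above: a
local element-wise affine layer (per-feature scale and shift) wrapped around a
shared model that participates in federated averaging, while the affine
parameters stay local. Layer placement and names are assumptions for
illustration.

```python
import torch
import torch.nn as nn

class LocalAffine(nn.Module):
    """Per-client element-wise rescaling that is never averaged globally."""
    def __init__(self, n_features):
        super().__init__()
        self.w = nn.Parameter(torch.ones(n_features))
        self.b = nn.Parameter(torch.zeros(n_features))

    def forward(self, x):
        return self.w * x + self.b  # element-wise affine transform

class PersonalizedClient(nn.Module):
    def __init__(self, shared_model, n_features):
        super().__init__()
        self.input_align = LocalAffine(n_features)  # kept local
        self.shared = shared_model                  # participates in FedAvg

    def forward(self, x):
        return self.shared(self.input_align(x))

client = PersonalizedClient(nn.Linear(10, 2), n_features=10)
out = client(torch.randn(4, 10))
```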
Investigation of machine learning algorithms robust to changes between the
training and test distributions is an active area of research. In this paper we
explore a special type of dataset shift which we call class-dependent domain
shift. It is characterized by the following features: the input data causally
depends on the label; the shift in the data is fully explained by a known
variable; the variable which controls the shift can depend on the label; and
there is no shift in the label distribution. We define a simple optimization
problem
with an information theoretic constraint and attempt to solve it with neural
networks. Experiments on a toy dataset demonstrate the proposed method is able
to learn robust classifiers which generalize well to unseen domains.
|
[
"cs.LG",
"stat.ML"
] |
In recent years, endomicroscopy has become increasingly used for diagnostic
purposes and interventional guidance. It can provide intraoperative aids for
real-time tissue characterization and can help to perform visual investigations
aimed for example to discover epithelial cancers. Due to physical constraints
on the acquisition process, endomicroscopy images still have a low number of
informative pixels today, which hampers their quality. Post-processing
techniques, such as Super-Resolution (SR), are a potential solution to increase
the quality of these images. SR techniques are often supervised, requiring
aligned pairs of low-resolution (LR) and high-resolution (HR) images patches to
train a model. However, in our domain, the lack of HR images hinders the
collection of such pairs and makes supervised training unsuitable. For this
reason, we propose an unsupervised SR framework based on an adversarial deep
neural network with a physically-inspired cycle consistency, designed to impose
some acquisition properties on the super-resolved images. Our framework can
exploit HR images, regardless of the domain they come from, to transfer
their quality to the initial LR images. This property
can be particularly useful in all situations where pairs of LR/HR are not
available during the training. Our quantitative analysis, validated using a
database of 238 endomicroscopy video sequences from 143 patients, shows the
ability of the pipeline to produce convincing super-resolved images. A Mean
Opinion Score (MOS) study also confirms this quantitative image quality
assessment.
|
[
"cs.CV"
] |
Anomaly mining is an important problem that finds numerous applications in
various real world domains such as environmental monitoring, cybersecurity,
finance, healthcare and medicine, to name a few. In this article, I focus on
two areas, (1) point-cloud and (2) graph-based anomaly mining. I aim to present
a broad view of each area, and discuss classes of main research problems,
recent trends and future directions. I conclude with key take-aways and
overarching open problems.
|
[
"cs.LG",
"cs.SI"
] |
Graph neural networks (GNN) rely on graph operations that include neural
network training for various graph related tasks. Recently, several attempts
have been made to apply the GNNs to functional magnetic resonance image (fMRI)
data. Despite recent progress, a common limitation is the difficulty of
explaining the classification results in a neuroscientifically interpretable
way.
Here, we develop a framework for analyzing the fMRI data using the Graph
Isomorphism Network (GIN), which was recently proposed as a powerful GNN for
graph classification. One of the important contributions of this paper is the
observation that the GIN is a dual representation of convolutional neural
network (CNN) in the graph space where the shift operation is defined using the
adjacency matrix. This understanding enables us to exploit CNN-based saliency
map techniques for the GNN, which we tailor to the proposed GIN with one-hot
encoding, to visualize the important regions of the brain. We validate our
proposed framework using large-scale resting-state fMRI (rs-fMRI) data for
classifying the sex of the subject based on the graph structure of the brain.
The experimental results were consistent with our expectation: the obtained
saliency maps show high correspondence with previous neuroimaging evidence
related to sex differences.
|
[
"cs.CV",
"cs.LG",
"stat.ML"
] |
Forecasting of multivariate time-series is an important problem that has
applications in traffic management, cellular network configuration, and
quantitative finance. A special case of the problem arises when there is a
graph available that captures the relationships between the time-series. In
this paper we propose a novel learning architecture that achieves performance
competitive with or better than the best existing algorithms, without requiring
knowledge of the graph. The key element of our proposed architecture is the
learnable fully connected hard graph gating mechanism that enables the use of
the state-of-the-art and highly computationally efficient fully connected
time-series forecasting architecture in traffic forecasting applications.
Experimental results for two public traffic network datasets illustrate the
value of our approach, and ablation studies confirm the importance of each
element of the architecture. The code is available here:
https://github.com/boreshkinai/fc-gaga.
|
[
"cs.LG",
"stat.ML"
] |
Depth completion aims to recover dense depth maps from sparse depth
measurements. It is of increasing importance for autonomous driving and draws
increasing attention from the vision community. Most existing methods
directly train a network to learn a mapping from sparse depth inputs to dense
depth maps, which has difficulties in utilizing the 3D geometric constraints
and handling the practical sensor noises. In this paper, to regularize the
depth completion and improve the robustness against noise, we propose a unified
CNN framework that 1) models the geometric constraints between depth and
surface normal in a diffusion module and 2) predicts the confidence of sparse
LiDAR measurements to mitigate the impact of noise. Specifically, our
encoder-decoder backbone predicts surface normals, coarse depth and confidence
of LiDAR inputs simultaneously, which are subsequently inputted into our
diffusion refinement module to obtain the final completion results. Extensive
experiments on KITTI depth completion dataset and NYU-Depth-V2 dataset
demonstrate that our method achieves state-of-the-art performance. Further
ablation study and analysis give more insights into the proposed method and
demonstrate the generalization capability and stability of our model.
|
[
"cs.CV"
] |
Metadata are general characteristics of the data in a well-curated and
condensed format, and have been proven to be useful for decision making,
knowledge discovery, and heterogeneous data organization in biobanks. Among
all data types in a biobank, pathology is the key component and also serves
as the gold standard of diagnosis. To maximize the utility of a biobank and
allow the rapid progress of biomedical science, it is essential to
organize the data with well-populated pathology metadata. However, manual
annotation of such information is tedious and time-consuming. In the study, we
develop a multimodal multitask learning framework to predict four major
slide-level metadata of pathology images. The framework learns generalizable
representations across tissue slides, pathology reports, and case-level
structured data. We demonstrate improved performance across all four tasks with
the proposed method compared to a single modal single task baseline on two test
sets, one external test set from a distinct data source (TCGA) and one internal
held-out test set (TTH). In the test sets, the performance improvements on the
average area under the receiver operating characteristic curve across the four
tasks are 16.48% and 9.05% on TCGA and TTH, respectively. Such pathology
metadata prediction system may be adopted to mitigate the effort of expert
annotation and ultimately accelerate the data-driven research by better
utilization of the pathology biobank.
|
[
"cs.CV",
"cs.LG"
] |
Determining the best partition for a dataset can be a challenging task
because of 1) the lack of a priori information within an unsupervised learning
framework; and 2) the absence of a unique clustering validation approach to
evaluate clustering solutions. Here we present reval: a Python package that
leverages stability-based relative clustering validation methods to determine
best clustering solutions as the ones that best generalize to unseen data.
Statistical software, both in R and Python, usually relies on internal
validation metrics, such as the silhouette score, to select the number of
clusters that best fits
the data. Meanwhile, open-source software solutions that easily implement
relative clustering techniques are lacking. Internal validation methods exploit
characteristics of the data itself to produce a result, whereas relative
approaches attempt to leverage the unknown underlying distribution of data
points looking for generalizable and replicable results. The implementation of
relative validation methods can further the theory of clustering by enriching
the already available methods that can be used to investigate clustering
results in different situations and for different data distributions. This work
aims at contributing to this effort by developing a stability-based method that
selects the best clustering solution as the one that replicates, via supervised
learning, on unseen subsets of data. The package works with multiple clustering
and classification algorithms, hence allowing both the automatization of the
labeling process and the assessment of the stability of different clustering
mechanisms.
|
[
"cs.LG",
"stat.ML"
] |
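The stability-based selection principle can be sketched in a few lines of
scikit-learn: clusters fit on one split are transferred to another via a
classifier and compared with clusters found independently on that split. This
is the underlying idea only, not the reval package's actual API.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import adjusted_rand_score
from sklearn.model_selection import train_test_split

def stability_score(X, k, seed=0):
    X_tr, X_val = train_test_split(X, test_size=0.5, random_state=seed)
    y_tr = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(X_tr)
    y_val_own = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(X_val)
    clf = KNeighborsClassifier().fit(X_tr, y_tr)
    y_val_pred = clf.predict(X_val)          # labels transferred from X_tr
    return adjusted_rand_score(y_val_own, y_val_pred)

X = np.random.randn(300, 5)
best_k = max(range(2, 8), key=lambda k: stability_score(X, k))
```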
We present an interactive approach to synthesizing realistic variations in
facial hair in images, ranging from subtle edits to existing hair to the
addition of complex and challenging hair in images of clean-shaven subjects. To
circumvent the tedious and computationally expensive tasks of modeling,
rendering and compositing the 3D geometry of the target hairstyle using the
traditional graphics pipeline, we employ a neural network pipeline that
synthesizes realistic and detailed images of facial hair directly in the target
image in under one second. The synthesis is controlled by simple and sparse
guide strokes from the user defining the general structural and color
properties of the target hairstyle. We qualitatively and quantitatively
evaluate our chosen method compared to several alternative approaches. We show
compelling interactive editing results with a prototype user interface that
allows novice users to progressively refine the generated image to match their
desired hairstyle, and demonstrate that our approach also allows for flexible
and high-fidelity scalp hair synthesis.
|
[
"cs.CV",
"cs.GR",
"cs.HC",
"cs.LG"
] |
We present a pixel recursive super resolution model that synthesizes
realistic details into images while enhancing their resolution. A low
resolution image may correspond to multiple plausible high resolution images,
thus modeling the super resolution process with a pixel independent conditional
model often results in averaging different details--hence blurry edges. By
contrast, our model is able to represent a multimodal conditional distribution
by properly modeling the statistical dependencies among the high resolution
image pixels, conditioned on a low resolution input. We employ a PixelCNN
architecture to define a strong prior over natural images and jointly optimize
this prior with a deep conditioning convolutional network. Human evaluations
indicate that samples from our proposed model look more photo realistic than a
strong L2 regression baseline.
|
[
"cs.CV",
"cs.LG"
] |
We incorporate Tensor-Product Representations within the Transformer in order
to better support the explicit representation of relation structure. Our
Tensor-Product Transformer (TP-Transformer) sets a new state of the art on the
recently-introduced Mathematics Dataset containing 56 categories of free-form
math word-problems. The essential component of the model is a novel attention
mechanism, called TP-Attention, which explicitly encodes the relations between
each Transformer cell and the other cells from which values have been retrieved
by attention. TP-Attention goes beyond linear combination of retrieved values,
strengthening representation-building and resolving ambiguities introduced by
multiple layers of standard attention. The TP-Transformer's attention maps give
better insights into how it is capable of solving the Mathematics Dataset's
challenging problems. Pretrained models and code will be made available after
publication.
|
[
"cs.LG",
"stat.ML"
] |
Weakly Supervised Object Detection (WSOD) has emerged as an effective tool to
train object detectors using only the image-level category labels. However,
without object-level labels, WSOD detectors are prone to detect bounding boxes
on salient objects, clustered objects and discriminative object parts.
Moreover, the image-level category labels do not enforce consistent object
detection across different transformations of the same images. To address the
above issues, we propose a Comprehensive Attention Self-Distillation (CASD)
training approach for WSOD. To balance feature learning among all object
instances, CASD computes the comprehensive attention aggregated from multiple
transformations and feature layers of the same images. To enforce consistent
spatial supervision on objects, CASD conducts self-distillation on the WSOD
networks, such that the comprehensive attention is approximated simultaneously
by multiple transformations and feature layers of the same images. CASD
produces new state-of-the-art WSOD results on standard benchmarks such as
PASCAL VOC 2007/2012 and MS-COCO.
|
[
"cs.CV",
"cs.LG"
] |
In recent years, single image super-resolution (SR) methods based on deep
convolutional neural networks (CNNs) have made significant progress. However,
due to the non-adaptive nature of the convolution operation, they cannot adapt
to various characteristics of images, which limits their representational
capability and, consequently, results in unnecessarily large model sizes. To
address this issue, we propose a novel multi-path adaptive modulation network
(MAMNet). Specifically, we propose a multi-path adaptive modulation block
(MAMB), which is a lightweight yet effective residual block that adaptively
modulates residual feature responses by fully exploiting their information via
three paths. The three paths model three types of information suitable for SR:
1) channel-specific information (CSI) using global variance pooling, 2)
inter-channel dependencies (ICD) based on the CSI, and 3) channel-specific
spatial dependencies (CSD) via depth-wise convolution. We demonstrate that the
proposed MAMB is more effective and parameter-efficient for image SR than
other feature modulation methods. In addition, experimental results show that
our
MAMNet outperforms most of the state-of-the-art methods with a relatively small
number of parameters.
|
[
"cs.CV"
] |
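The channel-specific information path can be illustrated with a small PyTorch
module: global variance pooling yields one statistic per channel, which is
turned into a per-channel modulation gate. The gating nonlinearity and the
single linear layer are assumptions made for this sketch; the paper's MAMB
combines this path with the ICD and CSD paths.

```python
import torch
import torch.nn as nn

class VariancePoolingModulation(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.fc = nn.Linear(channels, channels)

    def forward(self, x):                   # x: (B, C, H, W)
        var = x.var(dim=(2, 3))             # global variance pooling -> (B, C)
        gate = torch.sigmoid(self.fc(var))  # per-channel modulation weights
        return x * gate.view(*gate.shape, 1, 1)

feat = torch.randn(2, 64, 32, 32)
mod = VariancePoolingModulation(64)(feat)
```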
Optimal decision making with limited or no information in stochastic
environments where multiple agents interact is a challenging topic in the realm
of artificial intelligence. Reinforcement learning (RL) is a popular approach
for arriving at optimal strategies by predicating stimuli, such as the reward
for following a strategy, on experience. RL is heavily explored in the
single-agent context, but is a nascent concept in multiagent problems. To this
end, I propose several principled model-free and partially model-based
reinforcement learning approaches for several multiagent settings. In the realm
of normative reinforcement learning, I introduce scalable extensions to Monte
Carlo exploring starts for partially observable Markov Decision Processes
(POMDP), dubbed MCES-P, where I expand the theory and algorithm to the
multiagent setting. I first examine MCES-P with probably approximately correct
(PAC) bounds in the multiagent setting, showing that MCESP+PAC holds in
the presence of other agents. I then propose a more sample-efficient
methodology for antagonistic settings, MCESIP+PAC. For cooperative settings, I
extend MCES-P to the Multiagent POMDP, dubbed MCESMP+PAC. I then explore the
use of reinforcement learning as a methodology in searching for optima in
realistic and latent model environments. First, I explore a parameterized
Q-learning approach in modeling humans learning to reason in an uncertain,
multiagent environment. Next, I propose an implementation of MCES-P, along with
image segmentation, to create an adaptive team-based reinforcement learning
technique to positively identify the presence of phenotypically-expressed water
and pathogen stress in crop fields.
|
[
"cs.LG",
"cs.AI"
] |
One of the common obstacles for learning causal models from data is that
high-order conditional independence (CI) relationships between random variables
are difficult to estimate. Since CI tests with conditioning sets of low order
can be performed accurately even for a small number of observations, a
reasonable approach to determining causal structures is to rely merely on the
low-order CIs. Recent research has confirmed that, e.g. in the case of sparse
true causal models, structures learned even from zero- and first-order
conditional independencies yield good approximations of the models. However, a
challenging task here is to provide methods that faithfully explain a given set
of low-order CIs. In this paper, we propose an algorithm which, for a given set
of conditional independencies of order less or equal to $k$, where $k$ is a
small fixed number, computes a faithful graphical representation of the given
set. Our results complete and generalize the previous work on learning from
pairwise marginal independencies. Moreover, they enable improvements upon the
0-1 graph model, which is heavily used, e.g., in the estimation of genome
networks.
|
[
"cs.LG",
"cs.AI",
"stat.ML"
] |
A patient's estimated glomerular filtration rate (eGFR) can provide important
information about disease progression and kidney function. Traditionally, an
eGFR time series is interpreted by a human expert labelling it as stable or
unstable. While this approach works for individual patients, its
time-consuming nature precludes the quick evaluation of risk in large numbers
of
patients. However, automating this process poses significant challenges as eGFR
measurements are usually recorded at irregular intervals and the series of
measurements differs in length between patients. Here we present a two-tier
system to automatically classify an eGFR trend. First, we model the time series
using Gaussian process regression (GPR) to fill in `gaps' by resampling a fixed
size vector of fifty time-dependent observations. Second, we classify the
resampled eGFR time series using a K-NN/SVM classifier, and evaluate its
performance via 5-fold cross validation. Using this approach we achieved an
F-score of 0.90, compared to 0.96 for 5 human experts when scored amongst
themselves.
|
[
"cs.LG",
"cs.CE"
] |
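A hedged scikit-learn sketch of the two-tier pipeline above: Gaussian process
regression resamples an irregular eGFR series onto a fixed grid of fifty
points, and the resulting vectors feed a nearest-neighbour classifier. The
kernel choice and the toy series are illustrative assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.neighbors import KNeighborsClassifier

def resample_egfr(times, values, n_points=50):
    gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel())
    gpr.fit(times.reshape(-1, 1), values)
    grid = np.linspace(times.min(), times.max(), n_points)
    return gpr.predict(grid.reshape(-1, 1))    # fixed-size feature vector

# Toy example: two irregular series, one "stable" (0) and one "unstable" (1).
t1, v1 = np.sort(np.random.rand(20)), 60 + np.random.randn(20)
t2, v2 = np.sort(np.random.rand(20)), 60 - 30 * np.sort(np.random.rand(20))
X = np.vstack([resample_egfr(t1, v1), resample_egfr(t2, v2)])
clf = KNeighborsClassifier(n_neighbors=1).fit(X, [0, 1])
```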
We propose a new approach to visualize saliency maps for deep neural network
models and apply it to deep reinforcement learning agents trained on Atari
environments. Our method adds an attention module that we call FLS (Free Lunch
Saliency) to the feature extractor from an established baseline (Mnih et al.,
2015). This addition results in a trainable model that can produce saliency
maps, i.e., visualizations of the importance of different parts of the input
for the agent's current decision making. We show experimentally that a network
with an FLS module exhibits performance similar to the baseline (i.e., it is
"free", with no performance cost) and can be used as a drop-in replacement for
reinforcement learning agents. We also design another feature extractor that
scores slightly lower but provides higher-fidelity visualizations. In addition
to attained scores, we report saliency metrics evaluated on the Atari-HEAD
dataset of human gameplay.
|
[
"cs.LG",
"cs.AI",
"cs.CV"
] |
This paper addresses the challenge of localization of anatomical landmarks in
knee X-ray images at different stages of osteoarthritis (OA). Landmark
localization can be viewed as a regression problem, where the landmark
position is directly predicted from the region of interest or even full-size
images, leading to a large memory footprint, especially in the case of
high-resolution
medical images. In this work, we propose an efficient deep neural networks
framework with an hourglass architecture utilizing a soft-argmax layer to
directly predict normalized coordinates of the landmark points. We provide an
extensive evaluation of different regularization techniques and various loss
functions to understand their influence on the localization performance.
Furthermore, we introduce the concept of transfer learning from low-budget
annotations, and experimentally demonstrate that such an approach improves the
accuracy of landmark localization. Unlike prior methods, we validate our model
on two datasets that are independent of the training data and assess
the performance of the method for different stages of OA severity. The proposed
approach demonstrates better generalization performance compared to the current
state-of-the-art.
|
[
"cs.CV"
] |
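The soft-argmax layer at the heart of the framework is easy to state:
softmax-normalize each heatmap and take the expected coordinate, which keeps
localization differentiable end to end. A minimal PyTorch sketch, with the
temperature beta as an assumed hyperparameter:

```python
import torch

def soft_argmax_2d(heatmap, beta=100.0):
    """heatmap: (B, K, H, W) -> normalized coordinates (B, K, 2) in [0, 1]."""
    b, k, h, w = heatmap.shape
    probs = torch.softmax(beta * heatmap.view(b, k, -1), dim=-1).view(b, k, h, w)
    ys = torch.linspace(0, 1, h).view(1, 1, h, 1)
    xs = torch.linspace(0, 1, w).view(1, 1, 1, w)
    y = (probs * ys).sum(dim=(2, 3))          # expected row coordinate
    x = (probs * xs).sum(dim=(2, 3))          # expected column coordinate
    return torch.stack([x, y], dim=-1)

coords = soft_argmax_2d(torch.randn(2, 16, 64, 64))
```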
Existing state-of-the-art human pose estimation methods require heavy
computational resources for accurate predictions. One promising technique to
obtain an accurate yet lightweight pose estimator is knowledge distillation,
which distills the pose knowledge from a powerful teacher model to a
less-parameterized student model. However, existing pose distillation works
rely on a heavy pre-trained estimator to perform knowledge transfer and require
a complex two-stage learning procedure. In this work, we investigate a novel
Online Knowledge Distillation framework by distilling Human Pose structure
knowledge in a one-stage manner to guarantee the distillation efficiency,
termed OKDHP. Specifically, OKDHP trains a single multi-branch network and
acquires the predicted heatmaps from each branch, which are then assembled by
a Feature Aggregation Unit (FAU) into the target heatmaps that teach each
branch in reverse. Instead of simply averaging the heatmaps, the FAU, which
consists of multiple parallel transformations with different receptive
fields, leverages multi-scale information and thus obtains higher-quality
target heatmaps.
Specifically, the pixel-wise Kullback-Leibler (KL) divergence is utilized to
minimize the discrepancy between the target heatmaps and the predicted ones,
which enables the student network to learn the implicit keypoint relationship.
Besides, an unbalanced OKDHP scheme is introduced to customize the student
networks with different compression rates. The effectiveness of our approach is
demonstrated by extensive experiments on two common benchmark datasets, MPII
and COCO.
|
[
"cs.CV"
] |
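The pixel-wise KL term can be sketched compactly in PyTorch: student heatmaps
are pushed toward the FAU-assembled targets. Normalizing each heatmap into a
spatial distribution is an assumption made here so the divergence is well
defined.

```python
import torch
import torch.nn.functional as F

def heatmap_kl_loss(student, target):
    """student, target: (B, K, H, W) keypoint heatmaps."""
    b, k, h, w = student.shape
    log_p = F.log_softmax(student.view(b, k, -1), dim=-1)
    q = F.softmax(target.view(b, k, -1), dim=-1)
    return F.kl_div(log_p, q, reduction="batchmean")

loss = heatmap_kl_loss(torch.randn(2, 16, 64, 64), torch.randn(2, 16, 64, 64))
```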
Contextual bandit learning is a reinforcement learning problem where the
learner repeatedly receives a set of features (context), takes an action and
receives a reward based on the action and context. We consider this problem
under a realizability assumption: there exists a function in a (known) function
class, always capable of predicting the expected reward, given the action and
context. Under this assumption, we show three things. We present a new
algorithm---Regressor Elimination---with a regret similar to the agnostic
setting (i.e., in the absence of the realizability assumption). We prove a new
lower
bound showing no algorithm can achieve superior performance in the worst case
even with the realizability assumption. However, we do show that for any set of
policies (mapping contexts to actions), there is a distribution over rewards
(given context) such that our new algorithm has constant regret unlike the
previous approaches.
|
[
"cs.LG"
] |
Referring expression comprehension (REC) aims to localize a target object in
an image described by a referring expression phrased in natural language.
Different from the object detection task, where queried object labels are
pre-defined, the REC problem can only observe the queries at test time. It is
thus more challenging than a conventional computer vision problem. This task
has attracted a lot of attention from both the computer vision and natural
language processing communities, and several lines of work have been proposed,
from CNN-RNN models and modular networks to complex graph-based models. In
this survey, we
first examine the state of the art by comparing modern approaches to the
problem. We classify methods by their mechanism to encode the visual and
textual modalities. In particular, we examine the common approach of joint
embedding images and expressions to a common feature space. We also discuss
modular architectures and graph-based models that interface with structured
graph representation. In the second part of this survey, we review the datasets
available for training and evaluating REC systems. We then group results
according to the datasets, backbone models, and settings so that they can be fairly
compared. Finally, we discuss promising future directions for the field, in
particular the compositional referring expression comprehension that requires
longer reasoning chain to address.
|
[
"cs.CV",
"cs.CL"
] |
Automated image captioning is one of the applications of Deep Learning which
involves fusion of work done in computer vision and natural language
processing, and it is typically performed using Encoder-Decoder architectures.
In this project, we have implemented and experimented with various flavors of
multi-modal image captioning networks where ResNet101, DenseNet121 and VGG19
based CNN Encoders and Attention based LSTM Decoders were explored. We have
studied the effect of beam size and the use of pretrained word embeddings and
compared them to baseline CNN encoder and RNN decoder architecture. The goal is
to analyze the performance of each approach using various evaluation metrics
including BLEU, CIDEr, ROUGE and METEOR. We have also explored model
explainability using Visual Attention Maps (VAM) to highlight parts of the
images which have the maximum contribution to predicting each word of the
generated caption.
|
[
"cs.CV"
] |
Super-resolution is a classical problem in image processing, with numerous
applications to remote sensing image enhancement. Here, we address the
super-resolution of irregularly-sampled remote sensing images. Using an optimal
interpolation as the low-resolution reconstruction, we explore locally-adapted
multimodal convolutional models and investigate different dictionary-based
decompositions, namely based on principal component analysis (PCA), sparse
priors and non-negativity constraints. We consider an application to the
reconstruction of sea surface height (SSH) fields from two information sources,
along-track altimeter data and sea surface temperature (SST) data. The reported
experiments demonstrate the relevance of the proposed model, especially
locally-adapted parametrizations with non-negativity constraints, to outperform
optimally-interpolated reconstructions.
|
[
"stat.ML"
] |
Clustering is a popular form of unsupervised learning for geometric data.
Unfortunately, many clustering algorithms lead to cluster assignments that are
hard to explain, partially because they depend on all the features of the data
in a complicated way. To improve interpretability, we consider using a small
decision tree to partition a data set into clusters, so that clusters can be
characterized in a straightforward manner. We study this problem from a
theoretical viewpoint, measuring cluster quality by the $k$-means and
$k$-medians objectives: Must there exist a tree-induced clustering whose cost
is comparable to that of the best unconstrained clustering, and if so, how can
it be found? In terms of negative results, we show, first, that popular
top-down decision tree algorithms may lead to clusterings with arbitrarily
large cost, and second, that any tree-induced clustering must in general incur
an $\Omega(\log k)$ approximation factor compared to the optimal clustering. On
the positive side, we design an efficient algorithm that produces explainable
clusters using a tree with $k$ leaves. For two means/medians, we show that a
single threshold cut suffices to achieve a constant factor approximation, and
we give nearly-matching lower bounds. For general $k \geq 2$, our algorithm is
an $O(k)$ approximation to the optimal $k$-medians and an $O(k^2)$
approximation to the optimal $k$-means. Prior to our work, no algorithms were
known with provable guarantees independent of dimension and input size.
|
[
"cs.LG",
"cs.CG",
"cs.DS",
"stat.ML"
] |
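For the two-means case, the single threshold cut result invites a direct
illustration: scan axis-aligned thresholds and keep the one whose induced
two-cluster k-means cost is lowest. A brute-force NumPy sketch follows; the
paper's algorithm and its guarantees are more refined.

```python
import numpy as np

def best_threshold_cut(X):
    best = (np.inf, None, None)                # (cost, feature, threshold)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j])[:-1]:
            left, right = X[X[:, j] <= t], X[X[:, j] > t]
            cost = (((left - left.mean(0)) ** 2).sum()
                    + ((right - right.mean(0)) ** 2).sum())
            if cost < best[0]:
                best = (cost, j, t)
    return best

X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 4])
cost, feature, threshold = best_threshold_cut(X)
```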
Episodic memory lets reinforcement learning algorithms remember and exploit
promising experience from the past to improve agent performance. Previous works
on memory mechanisms show benefits of using episodic-based data structures for
discrete action problems in terms of sample-efficiency. The application of
episodic memory for continuous control with a large action space is not
trivial. Our study aims to answer the question: can episodic memory be used to
improve agent's performance in continuous control? Our proposed algorithm
combines episodic memory with Actor-Critic architecture by modifying critic's
objective. We further improve performance by introducing episodic-based replay
buffer prioritization. We evaluate our algorithm on OpenAI gym domains and show
greater sample-efficiency compared with the state-of-the-art model-free
off-policy algorithms.
|
[
"cs.LG"
] |
Mixture-of-Experts (MoE) with sparse conditional computation has proven to be
an effective architecture for scaling attention-based models to more parameters
with comparable computation cost. In this paper, we propose Sparse-MLP, scaling
the recent MLP-Mixer model with sparse MoE layers, to achieve a more
computation-efficient architecture. We replace a subset of dense MLP blocks in
the MLP-Mixer model with Sparse blocks. In each Sparse block, we apply two
stages of MoE layers: one with MLP experts mixing information within channels
along the image patch dimension, and one with MLP experts mixing information
within
patches along the channel dimension. Besides, to reduce computational cost in
routing and improve expert capacity, we design Re-represent layers in each
Sparse block. These layers re-scale the image representations by two simple
but effective linear transformations. When pre-training on ImageNet-1k with
MoCo v3 algorithm, our models can outperform dense MLP models by 2.5\% on
ImageNet Top-1 accuracy with fewer parameters and computational cost. On
small-scale downstream image classification tasks, i.e. Cifar10 and Cifar100,
our Sparse-MLP can still achieve better performance than baselines.
|
[
"cs.LG",
"cs.AI",
"cs.CV"
] |
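A toy PyTorch sketch of the sparse routing used inside such Sparse blocks: each
token is dispatched to its top-1 MLP expert, with the gate value kept in the
output so routing stays differentiable. Capacity limits, load-balancing losses,
and the Re-represent layers are omitted.

```python
import torch
import torch.nn as nn

class Top1MoE(nn.Module):
    def __init__(self, dim, n_experts=4, hidden=64):
        super().__init__()
        self.router = nn.Linear(dim, n_experts)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))
            for _ in range(n_experts)])

    def forward(self, x):                      # x: (tokens, dim)
        gate = torch.softmax(self.router(x), dim=-1)
        top = gate.argmax(dim=-1)              # top-1 expert per token
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            sel = top == e
            if sel.any():
                # scale by the gate value so routing stays differentiable
                out[sel] = expert(x[sel]) * gate[sel, e].unsqueeze(-1)
        return out

y = Top1MoE(dim=32)(torch.randn(10, 32))
```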
Generative Adversarial Networks (GAN) have promoted a variety of applications
in computer vision, natural language processing, etc. due to its generative
model's compelling ability to generate realistic examples plausibly drawn from
an existing distribution of samples. GAN not only provides impressive
performance on data generation-based tasks but also stimulates privacy- and
security-oriented research because of its game-theoretic
optimization strategy. Unfortunately, there are no comprehensive surveys on GAN
in privacy and security, which motivates this survey paper to summarize those
state-of-the-art works systematically. The existing works are classified into
proper categories based on privacy and security functions, and this survey
paper conducts a comprehensive analysis of their advantages and drawbacks.
Considering that GAN in privacy and security is still at a very initial stage
and has imposed unique challenges that are yet to be well addressed, this paper
also sheds light on some potential privacy and security applications with GAN
and elaborates on some future research directions.
|
[
"cs.LG",
"cs.CR"
] |
Time series modeling has attracted extensive research efforts; however,
achieving both reliable efficiency and interpretability from a unified model
remains a challenging problem. In the literature, shapelets offer
interpretable and explanatory insights in the classification tasks, while most
existing works ignore the differing representative power at different time
slices, as well as (more importantly) the evolution pattern of shapelets. In
this paper, we propose to extract time-aware shapelets by designing a two-level
timing factor. Moreover, we define and construct the shapelet evolution graph,
which captures how shapelets evolve over time and can be incorporated into the
time series embeddings by graph embedding algorithms. To validate whether the
representations obtained in this way can be applied effectively in various
scenarios, we conduct experiments based on three public time series datasets,
and two real-world datasets from different domains. Experimental results
clearly show the improvements achieved by our approach compared with 17
state-of-the-art baselines.
|
[
"cs.LG",
"stat.ML",
"10010147.10010257.10010258.10010259.10010263"
] |
We are interested in the design of generative networks. The training of these
mathematical structures is mostly performed with the help of adversarial
(min-max) optimization problems. We propose a simple methodology for
constructing such problems assuring, at the same time, consistency of the
corresponding solution. We give characteristic examples developed by our
method, some of which can be recognized from other applications, and some are
introduced here for the first time. We present a new metric, the likelihood
ratio, that can be employed online to examine the convergence and stability
during the training of different Generative Adversarial Networks (GANs).
Finally, we compare various possibilities by applying them to well-known
datasets using neural networks of different configurations and sizes.
|
[
"cs.LG",
"stat.ML"
] |
AI and reinforcement learning (RL) have improved many areas, but are not yet
widely adopted in economic policy design, mechanism design, or economics at
large. At the same time, current economic methodology is limited by a lack of
counterfactual data, simplistic behavioral models, and limited opportunities to
experiment with policies and evaluate behavioral responses. Here we show that
machine-learning-based economic simulation is a powerful policy and mechanism
design framework to overcome these limitations. The AI Economist is a
two-level, deep RL framework that trains both agents and a social planner who
co-adapt, providing a tractable solution to the highly unstable and novel
two-level RL challenge. From a simple specification of an economy, we learn
rational agent behaviors that adapt to learned planner policies and vice versa.
We demonstrate the efficacy of the AI Economist on the problem of optimal
taxation. In simple one-step economies, the AI Economist recovers the optimal
tax policy of economic theory. In complex, dynamic economies, the AI Economist
substantially improves both utilitarian social welfare and the trade-off
between equality and productivity over baselines. It does so despite emergent
tax-gaming strategies, while accounting for agent interactions and behavioral
change more accurately than economic theory. These results demonstrate for the
first time that two-level, deep RL can be used for understanding and as a
complement to theory for economic design, unlocking a new computational
learning-based approach to understanding economic policy.
|
[
"cs.LG",
"econ.GN",
"q-fin.EC"
] |
Many standard robotic platforms are equipped with at least a fixed 2D laser
range finder and a monocular camera. Although those platforms do not have
sensors for 3D depth sensing capability, knowledge of depth is an essential
part in many robotics activities. Therefore, there has recently been increasing
interest in depth estimation using monocular images. As this task is inherently
ambiguous, the data-driven estimated depth might be unreliable in robotics
applications. In this paper, we have attempted to improve the precision of
monocular depth estimation by introducing 2D planar observation from the
remaining laser range finder without extra cost. Specifically, we construct a
dense reference map from the sparse laser range data, redefining the depth
estimation task as estimating the distance between the real and the reference
depth. To solve the problem, we construct a novel residual of residual neural
network, and tightly combine the classification and regression losses for
continuous depth estimation. Experimental results suggest that our method
achieves considerable improvement over state-of-the-art methods on
both NYUD2 and KITTI, validating the effectiveness of our method on leveraging
the additional sensory information. We further demonstrate the potential usage
of our method in obstacle avoidance where our methodology provides
comprehensive depth information compared to the solution using monocular camera
or 2D laser range finder alone.
|
[
"cs.CV",
"cs.RO"
] |
Multi-level feature fusion is a fundamental topic in computer vision. It has
been exploited to detect, segment and classify objects at various scales. When
multi-level features meet multi-modal cues, the optimal feature aggregation and
multi-modal learning strategy becomes an open question. In this paper, we leverage
the inherent multi-modal and multi-level nature of RGB-D salient object
detection to devise a novel cascaded refinement network. In particular, first,
we propose to regroup the multi-level features into teacher and student
features using a bifurcated backbone strategy (BBS). Second, we introduce a
depth-enhanced module (DEM) to excavate informative depth cues from the channel
and spatial views. Then, RGB and depth modalities are fused in a complementary
way. Our architecture, named Bifurcated Backbone Strategy Network (BBS-Net), is
simple, efficient, and backbone-independent. Extensive experiments show that
BBS-Net significantly outperforms eighteen SOTA models on eight challenging
datasets under five evaluation measures, demonstrating the superiority of our
approach ($\sim 4 \%$ improvement in S-measure $vs.$ the top-ranked model:
DMRA-iccv2019). In addition, we provide a comprehensive analysis on the
generalization ability of different RGB-D datasets and provide a powerful
training set for future research.
|
[
"cs.CV"
] |
Conditional GANs are widely used in translating an image from one category to
another. Meaningful conditions to GANs provide greater flexibility and control
over the nature of the target domain synthetic data. Existing conditional GANs
commonly encode target domain label information as hard-coded categorical
vectors in the form of 0s and 1s. The major drawbacks of such representations
are the inability to encode the high-order semantic information of target
categories and their relative dependencies. We propose a novel end-to-end
learning framework with Graph Convolutional Networks to learn the attribute
representations to condition on the generator. The GAN losses, i.e. the
discriminator and attribute classification losses, are fed back to the Graph
resulting in the synthetic images that are more natural and clearer in
attributes. Moreover, prior works prioritize conditioning on the generator
side rather than the discriminator side of GANs. We apply the conditions to
the discriminator side as well via multi-task learning. We enhance four
state-of-the-art cGAN architectures: Stargan, Stargan-JNT, AttGAN and STGAN.
Our extensive qualitative and quantitative evaluations on the challenging
face attribute manipulation datasets CelebA, LFWA, and RaFD show that the
cGANs enhanced by our methods outperform their counterparts and other
conditioning methods by a large margin, in terms of both target attribute
recognition rates and quality measures such as PSNR and SSIM.
|
[
"cs.CV"
] |
Curvature in form of the Hessian or its generalized Gauss-Newton (GGN)
approximation is valuable for algorithms that rely on a local model for the
loss to train, compress, or explain deep networks. Existing methods based on
implicit multiplication via automatic differentiation or Kronecker-factored
block diagonal approximations do not consider noise in the mini-batch. We
present ViViT, a curvature model that leverages the GGN's low-rank structure
without further approximations. It allows for efficient computation of
eigenvalues, eigenvectors, as well as per-sample first- and second-order
directional derivatives. The representation is computed in parallel with
gradients in one backward pass and offers a fine-grained cost-accuracy
trade-off, which allows it to scale. As examples for ViViT's usefulness, we
investigate the directional gradients and curvatures during training, and how
noise information can be used to improve the stability of second-order methods.
|
[
"cs.LG",
"stat.ML"
] |
Additive Manufacturing presents a great application area for Machine Learning
because of the vast volume of data generated and the potential to mine this
data to control outcomes. In this paper we present preliminary work on
classifying infrared time-series data representing melt-pool temperature in a
metal 3D printing process. Our ultimate objective is to use this data to
predict process outcomes (e.g. hardness, porosity, surface roughness). In the
work presented here we simply show that there is a signal in this data that can
be used for the classification of different components and stages of the AM
process. In line with other Machine Learning research on time-series
classification, we use k-Nearest Neighbour classifiers. The results we present
suggest that Dynamic Time Warping is an effective distance measure, compared
with alternatives, for 3D printing data of this type.
|
[
"cs.LG"
] |
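As a concrete illustration of the classification setup described above, here
is a self-contained sketch of 1-nearest-neighbour classification under a
Dynamic Time Warping distance; the toy series stand in for melt-pool
temperature traces.

```python
# 1-NN time-series classification with a classic DTW distance.
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """O(len(a)*len(b)) DTW with absolute-difference local cost."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1],
                                 cost[i - 1, j - 1])
    return cost[n, m]

def knn_predict(query, train_series, train_labels):
    dists = [dtw_distance(query, s) for s in train_series]
    return train_labels[int(np.argmin(dists))]

# Toy data: two "process stages" with different temperature dynamics.
t = np.linspace(0, 1, 50)
train = [np.sin(6 * t), np.sin(6 * t + 0.2), t ** 2, t ** 2 + 0.1]
labels = ["stage_A", "stage_A", "stage_B", "stage_B"]
print(knn_predict(np.sin(6 * t + 0.4), train, labels))  # stage_A
```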
During the past decade, many anomaly detection approaches have been
introduced in different fields such as network monitoring, fraud detection, and
intrusion detection. However, they require an understanding of the data
patterns and often need a long off-line period to build a model or network for
the target data. Providing real-time and proactive anomaly detection for
streaming time series without human intervention and domain knowledge is
highly valuable, since it greatly reduces human effort and enables appropriate
countermeasures to be undertaken before disastrous damage, a failure, or
another harmful event occurs.
However, this issue has not been well studied yet. To address it, this paper
proposes RePAD, which is a Real-time Proactive Anomaly Detection algorithm for
streaming time series based on Long Short-Term Memory (LSTM). RePAD utilizes
short-term historic data points to predict and determine whether or not the
upcoming data point is a sign that an anomaly is likely to happen in the near
future. By dynamically adjusting the detection threshold over time, RePAD is
able to tolerate minor pattern changes in time series and detect anomalies
either proactively or on time. Experiments based on two time series datasets
collected from the Numenta Anomaly Benchmark demonstrate that RePAD is able to
proactively detect anomalies and provide early warnings in real time without
human intervention and domain knowledge.
|
[
"cs.LG",
"stat.ML"
] |
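The dynamic-threshold mechanism can be sketched compactly. The sketch below
replaces the LSTM predictor with a moving average so it stays self-contained,
and uses absolute prediction errors with a mean-plus-three-standard-deviations
threshold; the actual algorithm's predictor and error measure differ.

```python
# Sketch of proactive detection with a dynamically adjusted threshold:
# flag a point when its prediction error exceeds mean + 3*std of past errors.
import numpy as np

def detect(stream, history=5, warmup=30):
    errors, flags = [], []
    for i in range(history, len(stream)):
        pred = np.mean(stream[i - history:i])           # stand-in for an LSTM forecast
        err = abs(stream[i] - pred)                     # absolute prediction error
        if len(errors) >= warmup:
            thr = np.mean(errors) + 3 * np.std(errors)  # threshold adapts over time
            flags.append((i, err > thr))
        errors.append(err)
    return flags

t = np.arange(200)
series = np.sin(0.2 * t) + 0.05 * np.random.default_rng(1).standard_normal(200)
series[150] += 3.0                                      # injected anomaly
print([i for i, is_anom in detect(series) if is_anom])  # expected to include 150
```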
Security inspection often deals with a piece of baggage or suitcase where
objects are heavily overlapped with each other, resulting in an unsatisfactory
performance for prohibited items detection in X-ray images. In the literature,
there have been rare studies and datasets touching this important topic. In
this work, we contribute the first high-quality object detection dataset for
security inspection, named Occluded Prohibited Items X-ray (OPIXray) image
benchmark. OPIXray focuses on the widely occurring prohibited item "cutter",
annotated manually by professional inspectors from an international airport.
The test set is further divided into three occlusion levels to better
understand the performance of detectors. Furthermore, to deal with the
occlusion in X-ray image detection, we propose the De-occlusion Attention
Module (DOAM), a plug-and-play module that can be easily inserted into most
popular detectors to improve their performance. Despite the heavy occlusion in
X-ray imaging, the shape appearance of objects is preserved well, while
different materials appear with different colors and textures. Motivated by
these observations, our DOAM simultaneously leverages both kinds of appearance
information of the prohibited item to generate an attention map, which helps
refine the feature maps of general detectors. We comprehensively evaluate our
module on the OPIXray dataset, and demonstrate that our module can consistently
improve the performance of state-of-the-art detection methods such as SSD and
FCOS, and significantly outperforms several widely-used attention
mechanisms. In particular, the advantages of DOAM are more significant in the
scenarios with higher levels of occlusion, which demonstrates its potential
application in real-world inspections. The OPIXray benchmark and our model are
released at https://github.com/OPIXray-author/OPIXray.
|
[
"cs.CV"
] |
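A much-simplified, hypothetical rendering of the two-cue idea: derive an
attention map from a shape (edge) cue and a material (color) cue of the X-ray
image, then use it to refine a detector's feature maps. Only the two-cue
structure mirrors the module described above; the architecture below is an
assumption.

```python
# Hypothetical two-cue attention: fixed Sobel filters as the shape cue,
# a learned 1x1 conv as the material cue, mixed into one attention map.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoCueAttention(nn.Module):
    def __init__(self):
        super().__init__()
        sobel = torch.tensor([[[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]])
        self.register_buffer("kx", sobel.unsqueeze(0))
        self.register_buffer("ky", sobel.transpose(-1, -2).unsqueeze(0))
        self.material = nn.Conv2d(3, 1, kernel_size=1)  # assumed material cue
        self.mix = nn.Conv2d(2, 1, kernel_size=1)

    def forward(self, image: torch.Tensor, feats: torch.Tensor) -> torch.Tensor:
        gray = image.mean(dim=1, keepdim=True)
        edge = torch.sqrt(F.conv2d(gray, self.kx, padding=1) ** 2 +
                          F.conv2d(gray, self.ky, padding=1) ** 2 + 1e-8)
        attn = torch.sigmoid(self.mix(torch.cat([edge, self.material(image)],
                                                dim=1)))
        attn = F.interpolate(attn, size=feats.shape[-2:], mode="bilinear",
                             align_corners=False)
        return feats * attn + feats   # refined feature maps for the detector

img = torch.randn(1, 3, 128, 128)
feats = torch.randn(1, 256, 32, 32)
print(TwoCueAttention()(img, feats).shape)  # torch.Size([1, 256, 32, 32])
```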
It is widely known that very small datasets produce overfitting in Deep
Neural Networks (DNNs), i.e., the network becomes highly biased to the data it
has been trained on. This issue is often alleviated using transfer learning,
regularization techniques and/or data augmentation. This work presents a new
approach, independent of but complementary to the previously mentioned techniques,
for improving the generalization of DNNs on very small datasets in which the
involved classes share many visual features. The proposed methodology, called
FuCiTNet (Fusion Class inherent Transformations Network), inspired by GANs,
creates as many generators as classes in the problem. Each generator, $k$,
learns the transformations that bring the input image into the k-class domain.
We introduce a classification loss in the generators to drive the learning of
specific k-class transformations. Our experiments demonstrate that the proposed
transformations improve the generalization of the classification model in three
diverse datasets.
|
[
"cs.CV"
] |
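A minimal sketch of the core training idea: one generator per class learns a
transformation toward its class domain, and a classification loss on the
transformed image drives that learning. Network sizes and the residual form of
the transformation are placeholders, not the paper's exact architecture.

```python
# Per-class transformation generators driven by a classification loss.
import torch
import torch.nn as nn

n_classes, img_shape = 3, (3, 32, 32)

# One small "transformation" generator per class (placeholder architecture).
generators = nn.ModuleList([
    nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 3, 3, padding=1))
    for _ in range(n_classes)
])
classifier = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, n_classes))
opt = torch.optim.Adam(list(generators.parameters()) +
                       list(classifier.parameters()), lr=1e-3)
ce = nn.CrossEntropyLoss()

x = torch.randn(8, *img_shape)                 # a toy batch
y = torch.randint(0, n_classes, (8,))

# Each image is transformed by the generator of its own class; the
# classification loss rewards transformations that make class k recognizable.
transformed = torch.stack([x[i] + generators[int(y[i])](x[i:i + 1])[0]
                           for i in range(len(x))])
loss = ce(classifier(transformed), y)
opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```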
Two key challenges within Reinforcement Learning involve improving (a) agent
learning within environments with sparse extrinsic rewards and (b) the
explainability of agent actions. We describe a curious, subgoal-focused agent to
address both these challenges. We use a novel curiosity signal produced by a
Generative Adversarial Network (GAN) based model of environment transitions
that is robust to stochastic transitions. Additionally,
we use a subgoal-generating network to guide navigation. The explainability of
we use a subgoal generating network to guide navigation. The explainability of
the agent's behavior is increased by decomposing complex tasks into a sequence
of interpretable subgoals that do not require any manual design. We show that
this method also enables the agent to solve challenging procedurally-generated
tasks containing stochastic transitions, outperforming other state-of-the-art
methods.
|
[
"cs.LG"
] |
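An illustrative sketch, not the paper's exact formulation, of GAN-based
curiosity: a discriminator learns to recognize familiar transitions, and the
intrinsic reward is high for transitions it finds unfamiliar. The shapes, the
source of "fake" transitions, and the reward scaling are assumptions.

```python
# GAN-style curiosity: reward transitions the discriminator finds unfamiliar.
import torch
import torch.nn as nn

state_dim, action_dim = 8, 2
disc = nn.Sequential(
    nn.Linear(2 * state_dim + action_dim, 64), nn.ReLU(),
    nn.Linear(64, 1), nn.Sigmoid(),             # P(transition is familiar)
)
opt = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCELoss()

def curiosity_reward(s, a, s_next):
    with torch.no_grad():
        familiar = disc(torch.cat([s, a, s_next], dim=-1))
    return (1.0 - familiar).squeeze(-1)          # unfamiliar => rewarding

# One training step: seen transitions labeled 1, assumed fake ones labeled 0.
seen = torch.randn(32, 2 * state_dim + action_dim)
fake = torch.randn(32, 2 * state_dim + action_dim)
loss = bce(disc(seen), torch.ones(32, 1)) + bce(disc(fake), torch.zeros(32, 1))
opt.zero_grad(); loss.backward(); opt.step()

s = torch.randn(4, state_dim)
a = torch.randn(4, action_dim)
s2 = torch.randn(4, state_dim)
print(curiosity_reward(s, a, s2))
```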
Entity images could provide significant visual information for knowledge
representation learning. Most conventional methods learn knowledge
representations merely from structured triples, ignoring rich visual
information extracted from entity images. In this paper, we propose a novel
Image-embodied Knowledge Representation Learning model (IKRL), where knowledge
representations are learned with both triple facts and images. More
specifically, we first construct representations for all images of an entity
with a neural image encoder. These image representations are then integrated
into an aggregated image-based representation via an attention-based method. We
evaluate our IKRL models on knowledge graph completion and triple
classification. Experimental results demonstrate that our models outperform all
baselines on both tasks, which indicates the significance of visual information
for knowledge representations and the capability of our models in learning
knowledge representations with images.
|
[
"cs.CV",
"cs.CL"
] |
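The attention-based aggregation step can be sketched in a few lines:
per-image representations of an entity are weighted by their compatibility
with the entity's structure-based embedding and summed into one image-based
representation. Dimensions and the dot-product scoring are illustrative
assumptions.

```python
# Attention-based aggregation of an entity's image representations.
import torch
import torch.nn.functional as F

dim, n_images = 64, 5
entity_struct = torch.randn(dim)            # structure-based entity embedding
image_reprs = torch.randn(n_images, dim)    # encoder outputs for the images

# Attention: images that agree with the structural embedding weigh more.
scores = image_reprs @ entity_struct        # (n_images,)
weights = F.softmax(scores, dim=0)
entity_image = weights @ image_reprs        # aggregated representation

print(weights)              # attention over the entity's images
print(entity_image.shape)   # torch.Size([64])
```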