text (string, 29-3.31k chars) | label (list, 1-11 tags)
---|---|
Many real data sets contain numerical features (variables) whose distribution
is far from normal (Gaussian). Instead, their distribution is often skewed. In
order to handle such data it is customary to preprocess the variables to make
them more normal. The Box-Cox and Yeo-Johnson transformations are well-known
tools for this. However, the standard maximum likelihood estimator of their
transformation parameter is highly sensitive to outliers, and will often try to
move outliers inward at the expense of the normality of the central part of the
data. We propose a modification of these transformations as well as an
estimator of the transformation parameter that is robust to outliers, so the
transformed data can be approximately normal in the center and a few outliers
may deviate from it. It compares favorably to existing techniques in an
extensive simulation study and on real data. | [
"stat.ML",
"cs.LG"
]
|
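A minimal sketch of the outlier sensitivity that motivates this abstract's robust estimator, using SciPy's standard maximum-likelihood Yeo-Johnson fit; the sample and the injected outlier are illustrative assumptions, not the paper's data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.lognormal(mean=0.0, sigma=0.6, size=500)     # skewed but clean sample

_, lam_clean = stats.yeojohnson(x)                   # ML estimate of lambda
_, lam_dirty = stats.yeojohnson(np.append(x, 50.0))  # same data plus one outlier

print(f"lambda without outlier: {lam_clean:.3f}")
print(f"lambda with outlier:    {lam_dirty:.3f}")    # dragged by a single point
```

The gap between the two estimates illustrates how a lone extreme value can dominate the maximum-likelihood fit, which is exactly the failure mode the proposed robust estimator addresses.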
Graphs and networks are a key research tool for a variety of science fields,
most notably chemistry, biology, engineering and social sciences. Modeling and
generation of graphs with efficient sampling is a key challenge. In
particular, the non-uniqueness, high dimensionality of the vertices and local
dependencies of the edges may render the task challenging. We apply our
recently introduced method, Generative Examination Networks (GENs) to create
the first text-based generative graph models using one-line text formats as
graph representation. In our GEN, an RNN generative model for a one-line text
format learns autonomously to predict the next available character. The
training is stopped by an examination mechanism that checks the percentage of
valid graphs generated. We achieved moderate to high validity
using dense g6 strings (random 67.8 +/- 0.6, canonical 99.1 +/- 0.2). Based on
these results we have adapted the widely used SMILES representation for
molecules to a new input format, which we call linear graph input (LGI). Apart
from the benefits of a short, compressible text format, a major advantage is
the possibility to randomize and augment the format. The generative
models are evaluated for overall performance and for reconstruction of the
property space. The results show that LGI strings are very well suited for
machine-learning and that augmentation is essential for the performance of the
model in terms of validity, uniqueness and novelty. Lastly, the format scales
to both smaller and larger graph datasets, can easily be adapted to redefine
the meaning of the characters used in the LGI string, and can address sparse
graph problems in other fields of science. | [
"cs.LG",
"stat.ML"
]
|
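The g6 strings mentioned above are a standard one-line text format for graphs; a hedged illustration with NetworkX follows (the LGI format itself is this paper's own construction and is not reproduced here):

```python
import networkx as nx

G = nx.petersen_graph()
g6 = nx.to_graph6_bytes(G, header=False).strip().decode("ascii")
print(g6)  # one short line of printable ASCII encoding the whole graph

# Round-trip: the string fully determines the graph.
H = nx.from_graph6_bytes(g6.encode("ascii"))
assert nx.is_isomorphic(G, H)
```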
We introduce adversarial neural networks for representation learning as a
novel approach to transfer learning in brain-computer interfaces (BCIs). The
proposed approach aims to learn subject-invariant representations by
simultaneously training a conditional variational autoencoder (cVAE) and an
adversarial network. We use shallow convolutional architectures to realize the
cVAE, and the learned encoder is transferred to extract subject-invariant
features from unseen BCI users' data for decoding. We demonstrate a
proof-of-concept of our approach based on analyses of electroencephalographic
(EEG) data recorded during a motor imagery BCI experiment. | [
"cs.LG",
"cs.HC",
"eess.SP"
]
|
Current graph neural network (GNN) architectures naively average or sum node
embeddings into an aggregated graph representation -- potentially losing
structural or semantic information. We here introduce OT-GNN, a model that
computes graph embeddings using parametric prototypes that highlight key facets
of different graph aspects. Towards this goal, we are (to our knowledge) the
first to successfully combine optimal transport (OT) with parametric graph
models. Graph representations are obtained from Wasserstein distances between
the set of GNN node embeddings and "prototype" point clouds as free parameters.
We theoretically prove that, unlike traditional sum aggregation, our function
class on point clouds satisfies a fundamental universal approximation theorem.
Empirically, we address an inherent collapse optimization issue by proposing a
noise contrastive regularizer to steer the model towards truly exploiting the
optimal transport geometry. Finally, we consistently report better
generalization performance on several molecular property prediction tasks,
while exhibiting smoother graph representations. | [
"stat.ML",
"cs.LG"
]
|
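A hedged sketch of the aggregation idea in the OT-GNN abstract above: represent a graph by its Wasserstein distances to "prototype" point clouds rather than by a sum of node embeddings. This uses the POT library; the function name, shapes, and random prototypes are illustrative placeholders, not the paper's implementation:

```python
import numpy as np
import ot  # POT: Python Optimal Transport

def ot_graph_embedding(node_emb, prototypes):
    """node_emb: (n, d) GNN node embeddings; prototypes: list of (m, d) clouds."""
    a = np.full(node_emb.shape[0], 1.0 / node_emb.shape[0])  # uniform node mass
    dists = []
    for P in prototypes:
        b = np.full(P.shape[0], 1.0 / P.shape[0])
        M = ot.dist(node_emb, P)        # pairwise squared-Euclidean cost matrix
        dists.append(ot.emd2(a, b, M))  # exact optimal transport cost
    return np.array(dists)              # one coordinate per prototype

rng = np.random.default_rng(0)
emb = rng.normal(size=(12, 16))                        # toy node embeddings
protos = [rng.normal(size=(4, 16)) for _ in range(8)]  # toy "prototype" clouds
print(ot_graph_embedding(emb, protos).shape)           # (8,) graph embedding
```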
We treat the problem of color enhancement as an image translation task, which
we tackle using both supervised and unsupervised learning. Unlike traditional
image-to-image generators, our translation is performed using a global
parameterized color transformation instead of learning to directly map image
information. In the supervised case, every training image is paired with a
desired target image, and a convolutional neural network (CNN) learns the
parameters of the transformation from the expert-retouched images. In the unpaired
case, we employ two-way generative adversarial networks (GANs) to learn these
parameters and apply a circularity constraint. We achieve state-of-the-art
results compared to both supervised (paired data) and unsupervised (unpaired
data) image enhancement methods on the MIT-Adobe FiveK benchmark. Moreover, we
show the generalization capability of our method, by applying it on photos from
the early 20th century and to dark video frames. | [
"cs.CV"
]
|
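A sketch of what the "global parameterized color transformation" above can look like: a single affine color map, predicted once per image, applied identically to every pixel. The 3x4 parameterization is an illustrative assumption, not necessarily the paper's exact model:

```python
import numpy as np

def apply_global_color_transform(img, theta):
    """img: (H, W, 3) float in [0, 1]; theta: (3, 4) affine color parameters."""
    h, w, _ = img.shape
    rgb1 = np.concatenate([img.reshape(-1, 3), np.ones((h * w, 1))], axis=1)
    out = rgb1 @ theta.T                     # same transform at every pixel
    return np.clip(out, 0.0, 1.0).reshape(h, w, 3)

identity = np.hstack([np.eye(3), np.zeros((3, 1))])
warm = identity.copy()
warm[0, 3], warm[2, 3] = 0.05, -0.05         # nudge red up, blue down
img = np.random.default_rng(0).random((4, 4, 3))
print(np.allclose(apply_global_color_transform(img, identity), img))  # True
```

Because only the 12 parameters vary per image, the network's output is tiny compared to predicting pixels directly, which is the efficiency argument the abstract makes.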
This paper proves that the episodic learning environment of every
finite-horizon decision task has a unique steady state under any behavior
policy, and that the marginal distribution of the agent's input indeed
converges to the steady-state distribution in essentially all episodic learning
processes. This observation supports an interestingly reversed mindset against
conventional wisdom: While the existence of unique steady states was often
presumed in continual learning but considered less relevant in episodic
learning, it turns out their existence is guaranteed for the latter. Based on
this insight, the paper unifies episodic and continual RL around several
important concepts that have been separately treated in these two RL
formalisms. Practically, the existence of a unique and approachable steady
state enables a general way to collect data in episodic RL tasks, which the paper
applies to policy gradient algorithms as a demonstration, based on a new
steady-state policy gradient theorem. Finally, the paper also proposes and
experimentally validates a perturbation method that facilitates rapid
steady-state convergence in real-world RL tasks. | [
"cs.LG",
"cs.AI"
]
|
Reinforcement Learning (RL) has been able to solve hard problems such as
playing Atari games or solving the game of Go, with a unified approach. Yet
modern deep RL approaches are still not widely used in real-world applications.
One reason could be the lack of guarantees on the performance of the
intermediate executed policies, compared to an existing (already working)
baseline policy. In this paper, we propose an online model-free algorithm that
solves conservative exploration in the policy optimization problem. We show
that the regret of the proposed approach is bounded by
$\tilde{\mathcal{O}}(\sqrt{T})$ for both discrete and continuous parameter
spaces. | [
"cs.LG",
"stat.ML"
]
|
This research project studies the impact of convolutional neural networks
(CNN) in image classification tasks. We explore different architectures and
training configurations with the use of ReLUs, Nesterov's accelerated gradient,
dropout and maxout networks. We work with the CIFAR-10 dataset as part of a
Kaggle competition to identify objects in images. Initial results show that
CNNs outperform our baseline by acting as invariant feature detectors.
Comparisons between different preprocessing procedures show better results for
global contrast normalization and ZCA whitening. ReLUs are much faster than
tanh units and outperform sigmoids. We provide extensive details about our
training hyperparameters, providing intuition for their selection that could
help enhance learning in similar situations. We design 4 models of
convolutional neural networks that explore characteristics such as depth,
number of feature maps, size and overlap of kernels, pooling regions, and
different subsampling techniques. Results favor models of moderate depth that
use an extensive number of parameters in both convolutional and dense layers.
Maxout networks are able to outperform rectifiers on some models but introduce
too much noise as the complexity of the fully-connected layers increases. The
final discussion explains our results and provides additional techniques that
could improve performance. | [
"cs.CV",
"cs.LG",
"eess.IV"
]
|
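For reference, minimal NumPy versions of the two preprocessing steps this abstract found most effective, global contrast normalization and ZCA whitening; the epsilon values are common defaults, not the paper's settings:

```python
import numpy as np

def global_contrast_normalize(X, eps=1e-8):
    """X: (n_samples, n_features). Center each image, scale to unit norm."""
    X = X - X.mean(axis=1, keepdims=True)
    norms = np.sqrt((X ** 2).sum(axis=1, keepdims=True))
    return X / np.maximum(norms, eps)

def zca_whiten(X, eps=1e-2):
    """Decorrelate features while staying close to the original pixel space."""
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / Xc.shape[0]
    U, S, _ = np.linalg.svd(cov)
    W = U @ np.diag(1.0 / np.sqrt(S + eps)) @ U.T  # symmetric "zero-phase" map
    return Xc @ W

X = np.random.default_rng(0).random((512, 3 * 8 * 8))  # stand-in image patches
Xw = zca_whiten(global_contrast_normalize(X))
print(Xw.shape)  # (512, 192)
```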
3D multi-object tracking (MOT) and trajectory forecasting are two critical
components in modern 3D perception systems. We hypothesize that it is
beneficial to unify both tasks under one framework to learn a shared feature
representation of agent interaction. To evaluate this hypothesis, we propose a
unified solution for 3D MOT and trajectory forecasting which also incorporates
two additional novel computational units. First, we employ a feature
interaction technique by introducing Graph Neural Networks (GNNs) to capture
the way in which multiple agents interact with one another. The GNN is able to
model complex hierarchical interactions, improve the discriminative feature
learning for MOT association, and provide socially-aware context for trajectory
forecasting. Second, we use a diversity sampling function to improve the
quality and diversity of our forecasted trajectories. The learned sampling
function is trained to efficiently extract a variety of outcomes from a
generative trajectory distribution and helps avoid the problem of generating
many duplicate trajectory samples. We show that our method achieves
state-of-the-art performance on the KITTI dataset. Our project website is at
http://www.xinshuoweng.com/projects/GNNTrkForecast. | [
"cs.CV",
"cs.LG",
"cs.MA",
"cs.RO"
]
|
Generative adversarial networks (GANs) are increasingly attracting attention
in the computer vision, natural language processing, speech synthesis and
similar domains. Arguably the most striking results have been in the area of
image synthesis. However, evaluating the performance of GANs is still an open
and challenging problem. Existing evaluation metrics primarily measure the
dissimilarity between real and generated images using automated statistical
methods. They often require large sample sizes for evaluation and do not
directly reflect human perception of image quality. In this work, we describe
an evaluation metric we call Neuroscore, for evaluating the performance of
GANs, that more directly reflects psychoperceptual image quality through the
utilization of brain signals. Our results show that Neuroscore has superior
performance to the current evaluation metrics in that: (1) It is more
consistent with human judgment; (2) The evaluation process needs much smaller
numbers of samples; and (3) It is able to rank the quality of images on a per
GAN basis. A convolutional neural network (CNN) based neuro-AI interface is
proposed to predict Neuroscore from GAN-generated images directly without the
need for neural responses. Importantly, we show that including neural responses
during the training phase of the network can significantly improve the
prediction capability of the proposed model. Materials related to this work are
provided at https://github.com/villawang/Neuro-AI-Interface. | [
"cs.CV",
"cs.LG",
"eess.IV",
"eess.SP"
]
|
Hypotension in critical care settings is a life-threatening emergency that
must be recognized and treated early. While fluid bolus therapy and
vasopressors are common treatments, it is often unclear which interventions to
give, in what amounts, and for how long. Observational data in the form of
electronic health records can provide a source for helping inform these choices
from past events, but often it is not possible to identify a single best
strategy from observational data alone. In such situations, we argue it is
important to expose the collection of plausible options to a provider. To this
end, we develop SODA-RL: Safely Optimized, Diverse, and Accurate Reinforcement
Learning, to identify distinct treatment options that are supported in the
data. We demonstrate SODA-RL on a cohort of 10,142 ICU stays where hypotension
presented. Our learned policies perform comparably to the observed physician
behaviors, while providing different, plausible alternatives for treatment
decisions. | [
"stat.ML",
"cs.LG"
]
|
Reinforcement learning is well-studied under discrete actions. The
integer-action setting is popular in industry yet remains challenging due to
its high dimensionality. To this end, we study reinforcement learning under integer
actions by incorporating the Soft Actor-Critic (SAC) algorithm with an integer
reparameterization. Our key observation for integer actions is that their
discrete structure can be simplified using their comparability property. Hence,
the proposed integer reparameterization does not need one-hot encoding and is
of low dimensionality. Experiments show that the proposed SAC under integer
actions is as good as the continuous action version on robot control tasks and
outperforms Proximal Policy Optimization on power distribution systems control
tasks. | [
"cs.LG",
"cs.AI"
]
|
Feature learning in the presence of a mixed type of variables, numerical and
categorical types, is an important issue for related modeling problems. For
simple neighborhood queries in a mixed data space, standard practice is to
consider numerical and categorical variables separately and combine them using
suitable distance functions. Alternatives, such as kernel learning or
principal component analysis, do not explicitly consider the inter-dependence
structure among the mixed type of variables. In this work, we propose a novel
strategy to explicitly model the probabilistic dependence structure among the
mixed type of variables by an undirected graph. Spectral decomposition of the
graph Laplacian provides the desired feature transformation. The eigenspectrum
of the transformed feature space shows increased separability and more
prominent clusterability among the observations. The main novelty of our paper
lies in capturing interactions of the mixed feature type in an unsupervised
framework using a graphical model. We numerically validate the implications of
the feature learning strategy. | [
"stat.ML",
"cs.LG",
"stat.AP"
]
|
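A hedged sketch of the pipeline described above: encode dependence among the variables as a weighted graph, take the spectral decomposition of its Laplacian, and use the eigenvectors as the feature transform. The dependence weights here are a crude placeholder (absolute correlations), not the paper's probabilistic graphical model:

```python
import numpy as np
from scipy.linalg import eigh

def laplacian_features(W, k=2):
    """W: (p, p) symmetric dependence weights among the p variables."""
    W = W.copy()
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(axis=1)) - W   # unnormalized graph Laplacian
    _, vecs = eigh(L)                # eigenvectors, ascending eigenvalues
    return vecs[:, 1:k + 1]          # skip the trivial constant eigenvector

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 6))             # toy data, 6 mixed-coded variables
W = np.abs(np.corrcoef(X, rowvar=False))  # placeholder dependence graph
Z = X @ laplacian_features(W, k=2)        # transformed feature space
print(Z.shape)                            # (300, 2)
```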
In this work, we present an investigation into the use of neural feature
extraction in performing scribal hand analysis of the Linear B writing system.
While prior work has demonstrated the usefulness of strategies such as
phylogenetic systematics in tracing Linear B's history, these approaches have
relied on manually extracted features which can be very time consuming to
define by hand. Instead we propose learning features using a fully unsupervised
neural network that does not require any human annotation. Specifically our
model assigns each glyph written by the same scribal hand a shared vector
embedding to represent that author's stylistic patterns, and each glyph
representing the same syllabic sign a shared vector embedding to represent the
identifying shape of that character. Thus the properties of each image in our
dataset are represented as the combination of a scribe embedding and a sign
embedding. We train this model using both a reconstructive loss governed by a
decoder that seeks to reproduce glyphs from their corresponding embeddings, and
a discriminative loss which measures the model's ability to predict whether or
not an embedding corresponds to a given image. Among the key contributions of
this work we (1) present a new dataset of Linear B glyphs, annotated by scribal
hand and sign type, (2) propose a neural model for disentangling properties of
scribal hands from glyph shape, and (3) quantitatively evaluate the learned
embeddings on findplace prediction and similarity to manually extracted
features, showing improvements over simpler baseline methods. | [
"cs.CV",
"cs.LG"
]
|
We propose a novel framework for image clustering that incorporates joint
representation learning and clustering. Our method consists of two heads that
share the same backbone network - a "representation learning" head and a
"clustering" head. The "representation learning" head captures fine-grained
patterns of objects at the instance level which serve as clues for the
"clustering" head to extract coarse-grain information that separates objects
into clusters. The whole model is trained in an end-to-end manner by minimizing
the weighted sum of two sample-oriented contrastive losses applied to the
outputs of the two heads. To ensure that the contrastive loss corresponding to
the "clustering" head is optimal, we introduce a novel critic function called
"log-of-dot-product". Extensive experimental results demonstrate that our
method significantly outperforms state-of-the-art single-stage clustering
methods across a variety of image datasets, improving over the best baseline by
about 5-7% in accuracy on CIFAR10/20, STL10, and ImageNet-Dogs. Further, the
"two-stage" variant of our method also achieves better results than baselines
on three challenging ImageNet subsets. | [
"cs.CV",
"cs.AI"
]
|
Policy gradient methods are widely used in reinforcement learning algorithms
to search for better policies in the parameterized policy space. They do
gradient search in the policy space and are known to converge very slowly.
Nesterov developed an accelerated gradient search algorithm for convex
optimization problems. This has been recently extended for non-convex and also
stochastic optimization. We use Nesterov's acceleration for policy gradient
search in the well-known actor-critic algorithm and show convergence using the
ODE method. We tested this algorithm on a scheduling problem in which an
incoming job is assigned to one of four queues based on the queue lengths.
Experimental results show that the algorithm using Nesterov's acceleration
performs significantly better than the variant without acceleration. To the
best of our knowledge, this is the first time Nesterov's acceleration has been
used with an actor-critic algorithm. | [
"cs.LG"
]
|
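For reference, the classical Nesterov accelerated update, written here for gradient ascent on a policy objective $J(\theta)$; this is a hedged sketch of the standard scheme, and the paper's exact actor update may differ in details such as step sizes:

```latex
% Look-ahead point, then a gradient-ascent step from it:
y_k = \theta_k + \frac{k-1}{k+2}\,\bigl(\theta_k - \theta_{k-1}\bigr),
\qquad
\theta_{k+1} = y_k + \alpha_k \,\nabla_\theta J(y_k)
```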
Humans are able to explain their reasoning. On the contrary, deep neural
networks are not. This paper attempts to bridge this gap by introducing a new
way to design interpretable neural networks for classification, inspired by
physiological evidence of the human visual system's inner-workings. This paper
proposes a neural network design paradigm, termed InterpNET, which can be
combined with any existing classification architecture to generate natural
language explanations of the classifications. The success of the module relies
on the assumption that the network's computation and reasoning are represented
in its internal layer activations. While in principle InterpNET could be
applied to any existing classification architecture, it is evaluated via an
image classification and explanation task. Experiments on a CUB bird
classification and explanation dataset show qualitatively and quantitatively
that the model is able to generate high-quality explanations. While the current
state-of-the-art METEOR score on this dataset is 29.2, InterpNET achieves a
much higher METEOR score of 37.9. | [
"stat.ML",
"cs.LG"
]
|
Music information retrieval faces a challenge in modeling contextualized
musical concepts formulated by a set of co-occurring tags. In this paper, we
investigate the suitability of our recently proposed approach based on a
Siamese neural network in fighting off this challenge. By means of tag features
and probabilistic topic models, the network captures contextualized semantics
from tags via unsupervised learning. This leads to a distributed semantics
space and a potential solution to the out of vocabulary problem which has yet
to be sufficiently addressed. We explore the nature of the resultant
music-based semantics and address computational needs. We conduct experiments
on three public music tag collections, namely CAL500, MagTag5K and the Million
Song Dataset, and compare our approach to a number of state-of-the-art
semantics learning approaches. Comparative results suggest that this approach
outperforms previous approaches in terms of semantic priming and music tag
completion. | [
"cs.LG",
"I.2.6"
]
|
In this article, we propose an approach that can make use of not only labeled
EEG signals but also unlabeled ones, which are more accessible. We also
suggest the use of data fusion to further improve the seizure prediction
accuracy. Data fusion in our vision includes EEG signals, cardiogram signals,
body temperature and time. We use the short-time Fourier transform on 28-s EEG
windows as a pre-processing step. A generative adversarial network (GAN) is
trained in an unsupervised manner where information of seizure onset is
disregarded. The trained Discriminator of the GAN is then used as feature
extractor. Features generated by the feature extractor are classified by two
fully-connected layers (can be replaced by any classifier) for the labeled EEG
signals. This semi-supervised seizure prediction method achieves an area under
the receiver operating characteristic curve (AUC) of 77.68% and 75.47% for the CHBMIT scalp
EEG dataset and the Freiburg Hospital intracranial EEG dataset, respectively.
Unsupervised training without the need for labeling is important because not
only can it be performed in real time during EEG signal recording, but it also
does not require feature engineering effort for each patient. | [
"cs.CV",
"cs.LG",
"stat.ML"
]
|
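A sketch of the preprocessing step named above, a short-time Fourier transform of a 28-second EEG window via SciPy; the sampling rate and STFT parameters are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np
from scipy.signal import stft

fs = 256                                                # assumed sampling rate (Hz)
window = np.random.default_rng(0).normal(size=28 * fs)  # stand-in 28-s EEG window
f, t, Z = stft(window, fs=fs, nperseg=fs)               # 1-second STFT segments
spectrogram = np.abs(Z)                                 # magnitude fed downstream
print(spectrogram.shape)                                # (freq bins, time frames)
```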
Reasoning over visual data is a desirable capability for robotics and
vision-based applications. Such reasoning enables forecasting of the next
events or actions in videos. In recent years, various models have been
developed based on convolution operations for prediction or forecasting, but
they lack the ability to reason over spatiotemporal data and infer the
relationships of different objects in the scene. In this paper, we present a
framework based on graph convolution to uncover the spatiotemporal
relationships in the scene for reasoning about pedestrian intent. A scene graph
is built on top of segmented object instances within and across video frames.
Pedestrian intent, defined as the future action of crossing or not-crossing the
street, is a very crucial piece of information for autonomous vehicles to
navigate safely and more smoothly. We approach the problem of intent prediction
from two different perspectives and anticipate the intention-to-cross within
both pedestrian-centric and location-centric scenarios. In addition, we
introduce a new dataset designed specifically for autonomous-driving scenarios
in areas with dense pedestrian populations: the Stanford-TRI Intent Prediction
(STIP) dataset. Our experiments on STIP and another benchmark dataset show that
our graph modeling framework is able to predict the intention-to-cross of the
pedestrians with an accuracy of 79.10% on STIP and 79.28% on the Joint
Attention for Autonomous Driving (JAAD) dataset, up to one second earlier than
when the actual crossing happens. These results outperform the baseline and
previous work. Please refer to http://stip.stanford.edu/ for the dataset and
code. | [
"cs.CV"
]
|
Visualizing the perceptual content by analyzing human functional magnetic
resonance imaging (fMRI) has been an active research area. However, due to its
high dimensionality, complex dimensional structure, and small number of samples
available, reconstructing realistic images from fMRI remains challenging.
Recently with the development of convolutional neural network (CNN) and
generative adversarial network (GAN), mapping multi-voxel fMRI data to complex,
realistic images has been made possible. In this paper, we propose a model,
DCNN-GAN, by combining a reconstruction network and GAN. We utilize the CNN for
hierarchical feature extraction and the DCNN-GAN to reconstruct more realistic
images. Extensive experiments have been conducted, showing that our method
outperforms previous works, regarding reconstruction quality and computational
cost. | [
"cs.CV",
"eess.IV"
]
|
In complex transfer learning scenarios new tasks might not be tightly linked
to previous tasks. Approaches that transfer information contained only in the
final parameters of a source model will therefore struggle. Instead, transfer
learning at a higher level of abstraction is needed. We propose Leap, a
framework that achieves this by transferring knowledge across learning
processes. We associate each task with a manifold on which the training process
travels from initialization to final parameters and construct a meta-learning
objective that minimizes the expected length of this path. Our framework
leverages only information obtained during training and can be computed on the
fly at negligible cost. We demonstrate that our framework outperforms competing
methods, both in meta-learning and transfer learning, on a set of computer
vision tasks. Finally, we demonstrate that Leap can transfer knowledge across
learning processes in demanding reinforcement learning environments (Atari)
that involve millions of gradient steps. | [
"cs.LG",
"cs.AI",
"cs.NE",
"stat.ML"
]
|
Deep learning-based algorithms have become a crucial way to boost object
detection performance in aerial images. While various neural network
representations have been developed, previous works have rarely investigated
noise-resilient performance, especially on noisy aerial images taken by
cameras with telephoto lenses; most research has instead concentrated on
denoising. Denoising usually imposes an additional computational burden to
obtain higher-quality images, whereas noise resilience describes the
robustness of the network itself to different kinds of noise, an attribute of
the algorithm itself. For this reason, this work starts by analyzing the
noise-resilient performance of neural networks and then proposes two
hypotheses for building a noise-resilient structure. Based on these hypotheses, we
compare the noise-resilient ability of the Oct-ResNet with frequency division
processing and the commonly used ResNet. In addition, previous feature pyramid
networks used for aerial object detection tasks are not specifically designed
for the frequency division feature maps of the Oct-ResNet, and they usually
lack attention to bridging the semantic gap between diverse feature maps from
different depths. On the basis of this, a novel octave convolution-based
semantic attention feature pyramid network (OcSaFPN) is proposed to get higher
accuracy in object detection with noise. Evaluation on three datasets
demonstrates that the proposed OcSaFPN achieves state-of-the-art detection
performance under Gaussian or multiplicative noise. In addition, further
experiments show that the OcSaFPN structure can easily be added to existing
algorithms to effectively improve their noise resilience. | [
"cs.CV"
]
|
A critical challenge for any intelligent system is to infer structure from
continuous data streams. Theories of event-predictive cognition suggest that
the brain segments sensorimotor information into compact event encodings, which
are used to anticipate and interpret environmental dynamics. Here, we introduce
a SUrprise-GAted Recurrent neural network (SUGAR) using a novel form of
counterfactual regularization. We test the model on a hierarchical sequence
prediction task, where sequences are generated by alternating hidden graph
structures. Our model learns to both compress the temporal dynamics of the task
into latent event-predictive encodings and anticipate event transitions at the
right moments, given noisy hidden signals about them. The addition of the
counterfactual regularization term ensures fluid transitions from one latent
code to the next, whereby the resulting latent codes exhibit compositional
properties. The implemented mechanisms offer a host of useful applications in
other domains, including hierarchical reasoning, planning, and decision making. | [
"cs.LG"
]
|
Contributions: Prior studies on education have mostly followed the model of
the cross-sectional study, namely, examining the pretest and the posttest
scores. This paper shows that students' knowledge throughout the intervention
can be estimated by time series analysis using a hidden Markov model.
Background: Analyzing time series and the interaction between the students and
the game data can result in valuable information that cannot be gained by only
cross-sectional studies of the exams. Research Questions: Can a hidden Markov
model be used to analyze educational games? Can a hidden Markov model be
used to make a prediction of the students' performance? Methodology: The study
was conducted on (N=854) students who played the Save Patch game. Students were
divided into class 1 and class 2. Class 1 students are those who scored lower
in the test than class 2 students. The analysis is done by choosing various
features of the game as the observations. Findings: The state trajectories can
predict the students' performance accurately for both class 1 and class 2. | [
"stat.ML",
"cs.CY",
"cs.LG",
"stat.AP"
]
|
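A hedged sketch of the methodology: fit a two-state hidden Markov model to one student's sequence of discrete game observations and read off the latent state trajectory. The observation coding is invented here, and the CategoricalHMM API assumes hmmlearn >= 0.2.8:

```python
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)
obs = rng.integers(0, 3, size=(60, 1))  # one student's coded game events (toy)

model = hmm.CategoricalHMM(n_components=2, n_iter=50, random_state=0)
model.fit(obs)                          # unsupervised EM fit
states = model.predict(obs)             # latent "knowledge state" trajectory
print(states[:10])
```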
The main stated contribution of the Deformable Parts Model (DPM) detector of
Felzenszwalb et al. (over the Histogram-of-Oriented-Gradients approach of Dalal
and Triggs) is the use of deformable parts. A secondary contribution is the
latent discriminative learning. Tertiary is the use of multiple components. A
common belief in the vision community (including ours, before this study) is
that their ordering of contributions reflects the detector's performance in
practice. However, what we have experimentally found is that the ordering of
importance might actually be the reverse. First, we show that by increasing the
number of components, and switching the initialization step from their
aspect-ratio, left-right flipping heuristics to appearance-based clustering,
considerable improvement in performance is obtained. But more intriguingly, we
show that with these new components, the part deformations can now be
completely switched off, yet obtaining results that are almost on par with the
original DPM detector. Finally, we also show initial results for using multiple
components on a different problem -- scene classification, suggesting that this
idea might have wider applications in addition to object detection. | [
"cs.CV",
"cs.AI",
"cs.LG"
]
|
This paper studies active learning (AL) on graphs, whose purpose is to
discover the most informative nodes to maximize the performance of graph neural
networks (GNNs). Previously, most graph AL methods focus on learning node
representations from a carefully selected labeled dataset, with a large amount
of unlabeled data neglected. Motivated by the success of contrastive learning
(CL), we propose a novel paradigm that seamlessly integrates graph AL with CL.
While being able to leverage the power of abundant unlabeled data in a
self-supervised manner, nodes selected by AL further provide semantic
information that can better guide representation learning. Besides, previous
work measures the informativeness of nodes without considering the neighborhood
propagation scheme of GNNs, so that noisy nodes may be selected. We argue that
due to the smoothing nature of GNNs, the central nodes from homophilous
subgraphs should benefit the model training most. To this end, we present a
minimax selection scheme that explicitly harnesses neighborhood information and
discovers homophilous subgraphs to facilitate active selection. Comprehensive,
confounding-free experiments on five public datasets demonstrate the
superiority of our method over state-of-the-art methods. | [
"cs.LG",
"stat.ML"
]
|
Visual feature clustering is one of the cost-effective approaches to segment
objects in videos. However, the assumptions made for developing the existing
algorithms prevent them from being used in situations like segmenting an
unknown number of static and moving objects under heavy camera movements. This
paper addresses the problem by introducing a clustering approach based on
superpixels and short-term Histogram of Oriented Optical Flow (HOOF). Salient
Dither Pattern Feature (SDPF) is used as the visual feature to track the flow
and Simple Linear Iterative Clustering (SLIC) is used for obtaining the
superpixels. This new clustering approach is based on merging superpixels by
comparing short term local HOOF and a color cue to form high-level semantic
segments. The new approach was compared with one of the latest feature
clustering approaches based on K-Means in eight-dimensional space and the
results revealed that the new approach is better in terms of consistency,
completeness, and spatial accuracy. Further, the new approach removes the need
to know the number of objects in a scene in advance. | [
"cs.CV"
]
|
Over the last few years, the performance of inpainting to fill missing
regions has improved significantly through the use of deep neural networks.
Most inpainting works create a visually plausible structure and texture;
however, they often generate blurry results, so the final outcomes appear
unrealistic and heterogeneous. To address this problem, existing methods have
combined patch-based solutions with deep neural networks, but these methods
still cannot transfer texture properly. Motivated by these observations, we
propose a patch-based method, the Texture Transform Attention network
(TTA-Net), which produces missing-region inpainting with fine details. The
model is a single refinement network in the form of a U-Net architecture that
transfers fine texture features of the encoder to coarse semantic features of
the decoder through skip connections. Texture Transform Attention is used to
create a new reassembled texture map, using fine textures and coarse
semantics, that can efficiently transfer texture information. To stabilize the
training process, we use a VGG feature loss on the ground truth and a patch
discriminator. We evaluate our model end-to-end on the publicly available
datasets CelebA-HQ and Places2 and demonstrate that it obtains higher-quality
images than existing state-of-the-art approaches. | [
"cs.CV"
]
|
The ability to efficiently search for images over an indexed database is the
cornerstone of several user experiences. Incorporating user feedback through
multi-modal inputs provides flexible interaction to serve fine-grained
specificity in requirements. We specifically focus on text feedback, through
descriptive natural language queries. Given a reference image and textual user
feedback, our goal is to retrieve images that satisfy constraints specified by
both of these input modalities. The task is challenging as it requires
understanding the textual semantics from the text feedback and then applying
these changes to the visual representation. To address these challenges, we
propose a novel architecture TRACE which contains a hierarchical feature
aggregation module to learn the composite visio-linguistic representations.
TRACE achieves the SOTA performance on 3 benchmark datasets: FashionIQ, Shoes,
and Birds-to-Words, with an average improvement of at least ~5.7%, ~3%, and ~5%
respectively in R@K metric. Our extensive experiments and ablation studies show
that TRACE consistently outperforms the existing techniques by significant
margins both quantitatively and qualitatively. | [
"cs.CV",
"cs.AI"
]
|
Word embeddings are commonly obtained as optimizers of a criterion function
$f$ of a text corpus, but assessed on word-task performance using a different
evaluation function $g$ of the test data. We contend that a possible source of
disparity in performance on tasks is the incompatibility between classes of
transformations that leave $f$ and $g$ invariant. In particular, word
embeddings defined by $f$ are not unique; they are defined only up to a class
of transformations to which $f$ is invariant, and this class is larger than the
class to which $g$ is invariant. One implication of this is that the apparent
superiority of one word embedding over another, as measured by word task
performance, may largely be a consequence of the arbitrary elements selected
from the respective solution sets. We provide a formal treatment of the above
identifiability issue, present some numerical examples, and discuss possible
resolutions. | [
"stat.ML",
"cs.CL",
"cs.LG",
"stat.CO"
]
|
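A toy numerical illustration of the identifiability issue described above, under the assumption that $f$ is a factorization-style criterion: any invertible reparameterization $(WA, CA^{-\top})$ leaves $f$ unchanged, yet cosine-similarity evaluations computed from $W$ change:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(50, 50))              # stand-in co-occurrence matrix
W, C = rng.normal(size=(50, 5)), rng.normal(size=(50, 5))
A = rng.normal(size=(5, 5))                # generic invertible, not orthogonal

W2, C2 = W @ A, C @ np.linalg.inv(A).T     # reparameterized solution
f = lambda W_, C_: np.linalg.norm(M - W_ @ C_.T) ** 2
cos = lambda u, v: u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

print(np.isclose(f(W, C), f(W2, C2)))      # True: f cannot tell them apart
print(cos(W[0], W[1]), cos(W2[0], W2[1]))  # differ: g-style evaluations can
```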
We describe a new approach to estimating relative risks in time-to-event
prediction problems with censored data in a fully parametric manner. Our
approach does not require making strong assumptions of constant proportional
hazard of the underlying survival distribution, as required by the Cox
proportional hazards model. By jointly learning deep nonlinear
representations of the input covariates, we demonstrate the benefits of our
approach when used to estimate survival risks through extensive experimentation
on multiple real world datasets with different levels of censoring. We further
demonstrate advantages of our model in the competing risks scenario. To the
best of our knowledge, this is the first work involving fully parametric
estimation of survival times with competing risks in the presence of censoring. | [
"cs.LG",
"stat.AP",
"stat.ML"
]
|
Monocular depth estimation is usually treated as a supervised regression
problem, when it is actually very similar to the semantic segmentation task,
since both are fundamentally pixel-level classification tasks. We discretized
depth values using depth increments that grow with depth, then applied Deeplab
v2, and obtained higher accuracy. We achieved a state-of-the-art result on the
KITTI dataset, outperforming the existing architecture by an 8% margin. | [
"cs.CV",
"cs.LG"
]
|
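A sketch of the discretization idea above: depth bins whose widths increase with depth (here log-spaced), turning regression into pixel-wise classification; the bin count and depth range are illustrative, not the paper's values:

```python
import numpy as np

def depth_to_class(depth, d_min=1.0, d_max=80.0, n_bins=64):
    """Log-spaced bin edges: bin width grows with depth."""
    edges = np.geomspace(d_min, d_max, n_bins + 1)
    return np.clip(np.digitize(depth, edges) - 1, 0, n_bins - 1)

depths = np.array([1.5, 5.0, 20.0, 79.0])
print(depth_to_class(depths))  # coarser label resolution for far-away pixels
```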
The research on human emotion under multimedia stimulation based on
physiological signals is an emerging field, and important progress has been
achieved for emotion recognition based on multi-modal signals. However, it is
challenging to make full use of the complementarity among
spatial-spectral-temporal domain features for emotion recognition, as well as
model the heterogeneity and correlation among multi-modal signals. In this
paper, we propose a novel two-stream heterogeneous graph recurrent neural
network, named HetEmotionNet, fusing multi-modal physiological signals for
emotion recognition. Specifically, HetEmotionNet consists of the
spatial-temporal stream and the spatial-spectral stream, which can fuse
spatial-spectral-temporal domain features in a unified framework. Each stream
is composed of the graph transformer network for modeling the heterogeneity,
the graph convolutional network for modeling the correlation, and the gated
recurrent unit for capturing the temporal domain or spectral domain dependency.
Extensive experiments on two real-world datasets demonstrate that our proposed
model achieves better performance than state-of-the-art baselines. | [
"cs.LG",
"cs.AI",
"cs.HC",
"cs.MM"
]
|
False positives are one of the most serious problems brought on by agnostic
domain shift in domain-adaptive pedestrian detection. However, it is
impossible to label each box in countless target domains. We therefore turn
our attention to suppressing false positives in each target domain in an
unsupervised way. In this paper, we innovatively model object detection as a
ranking task among positive and negative boxes, and thus elegantly transform
the false positive suppression problem into a box re-ranking problem, which
makes it feasible to solve without manual annotation. A further problem arises
during box re-ranking: no labeled validation data is available for
cherry-picking. Since we aim to keep the detection of true positives
unchanged, we propose box number alignment, a self-supervised evaluation
metric, to prevent the optimized model from capacity degeneration. Extensive
experiments conducted on cross-domain pedestrian detection datasets have
demonstrated the effectiveness of our proposed framework. Furthermore, the
extension to two general unsupervised domain adaptive object detection
benchmarks also demonstrates our superiority over other state-of-the-art methods. | [
"cs.CV"
]
|
Most existing work that grounds natural language phrases in images starts
with the assumption that the phrase in question is relevant to the image. In
this paper we address a more realistic version of the natural language
grounding task where we must both identify whether the phrase is relevant to an
image and localize the phrase. This can also be viewed as a generalization of
object detection to an open-ended vocabulary, introducing elements of few- and
zero-shot detection. We propose an approach for this task that extends Faster
R-CNN to relate image regions and phrases. By carefully initializing the
classification layers of our network using canonical correlation analysis
(CCA), we encourage a solution that is more discerning when reasoning between
similar phrases, resulting in over double the performance compared to a naive
adaptation on three popular phrase grounding datasets, Flickr30K Entities,
ReferIt Game, and Visual Genome, with test-time phrase vocabulary sizes of 5K,
32K, and 159K, respectively. | [
"cs.CV"
]
|
Globally, in 2016, one out of eleven adults suffered from Diabetes Mellitus.
Diabetic Foot Ulcers (DFU) are a major complication of this disease, which if
not managed properly can lead to amputation. Current clinical approaches to DFU
treatment rely on patient and clinician vigilance, which has significant
limitations such as the high cost involved in the diagnosis, treatment and
lengthy care of the DFU. We collected an extensive dataset of foot images,
which contain DFU from different patients. In this paper, we have proposed the
use of traditional computer vision features for detecting foot ulcers among
diabetic patients, which represent a cost-effective, remote and convenient
healthcare solution. Furthermore, we used Convolutional Neural Networks (CNNs)
for the first time in DFU classification. We have proposed a novel
convolutional neural network architecture, DFUNet, with better feature
extraction to identify the feature differences between healthy skin and the
DFU. Using 10-fold cross-validation, DFUNet achieved an AUC score of 0.962.
This outperformed both the machine learning and deep learning classifiers we
have tested. Here we present the development of a novel and highly sensitive
DFUNet for objectively detecting the presence of DFUs. This novel approach has
the potential to deliver a paradigm shift in diabetic foot care. | [
"cs.CV"
]
|
Moving towards autonomy, unmanned vehicles rely heavily on state-of-the-art
collision avoidance systems (CAS). However, the detection of obstacles
especially during night-time is still a challenging task since the lighting
conditions are not sufficient for traditional cameras to function properly.
Therefore, we exploit the powerful attributes of event-based cameras to perform
obstacle detection in low lighting conditions. Event cameras trigger events
asynchronously at high output temporal rate with high dynamic range of up to
120 $dB$. The algorithm filters background activity noise and extracts objects
using robust Hough transform technique. The depth of each detected object is
computed by triangulating 2D features extracted utilising LC-Harris. Finally,
asynchronous adaptive collision avoidance (AACA) algorithm is applied for
effective avoidance. A qualitative evaluation compares the event camera with a
traditional camera. | [
"cs.CV",
"cs.RO"
]
|
Video frame interpolation is the task of creating an interframe between two
adjacent frames along the time axis. So, instead of simply averaging two
adjacent frames to create an intermediate image, this operation should maintain
semantic continuity with the adjacent frames. Most conventional methods use
optical flow, and various tools such as occlusion handling and object smoothing
are indispensable. Since the use of these various tools leads to complex
problems, we tried to tackle the video interframe generation problem without
using problematic optical flow. To enable this, we used a deep
neural network with an invertible structure and developed a U-Net-based
Generative Flow which is a modified normalizing flow. In addition, we propose a
learning method with a new consistency loss in the latent space to maintain
semantic temporal consistency between frames. The resolution of the generated
image is guaranteed to be identical to that of the original images by using an
invertible network. Furthermore, as the output is not a random image like
those of generative models, our network guarantees stable outputs without
flicker. Through experiments, we confirmed the feasibility of the proposed
algorithm and suggest the U-Net-based Generative Flow as a new baseline
candidate for video frame interpolation. This paper is meaningful in that it
is the first attempt to use invertible networks instead of optical flow for
video interpolation. | [
"cs.CV",
"cs.LG",
"eess.IV"
]
|
Graph Neural Networks (GNNs) are information processing architectures for
signals supported on graphs. They are presented here as generalizations of
convolutional neural networks (CNNs) in which individual layers contain banks
of graph convolutional filters instead of banks of classical convolutional
filters. Otherwise, GNNs operate as CNNs. Filters are composed with pointwise
nonlinearities and stacked in layers. It is shown that GNN architectures
exhibit equivariance to permutation and stability to graph deformations. These
properties help explain the good performance of GNNs that can be observed
empirically. It is also shown that if graphs converge to a limit object, a
graphon, GNNs converge to a corresponding limit object, a graphon neural
network. This convergence justifies the transferability of GNNs across networks
with different numbers of nodes. Concepts are illustrated by the application of
GNNs to recommendation systems, decentralized collaborative control, and
wireless communication networks. | [
"cs.LG",
"stat.ML"
]
|
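For reference, a common form of the graph convolutional filters this abstract refers to; a hedged sketch, not necessarily the exact parameterization used: a degree-$K$ polynomial in a graph shift operator $S$ (e.g., adjacency or Laplacian) with filter taps $h_k$, applied to a graph signal $x$:

```latex
% Replacing S with a cyclic shift recovers a classical convolutional filter,
% which is the sense in which GNNs generalize CNNs.
z \;=\; \sum_{k=0}^{K} h_k \, S^{k} x
```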
In this paper, we aim to address the problem of heterogeneous or
cross-spectral face recognition using machine learning to synthesize visual
spectrum face from infrared images. The synthesis of visual-band face images
allows for more optimal extraction of facial features to be used for face
identification and/or verification. We explore the ability to use Generative
Adversarial Networks (GANs) for face image synthesis, and examine the
performance of these images using pre-trained Convolutional Neural Networks
(CNNs). The features extracted using CNNs are applied in face identification
and verification. We explore the performance in terms of acceptance rate when
using various similarity measures for face verification. | [
"cs.CV"
]
|
Tensor network decomposition, which originated in quantum physics to model
entangled many-particle quantum systems, has turned out to be a promising
mathematical technique for efficiently representing and processing big data in
a parsimonious manner. In this study, we show that tensor networks can
systematically partition structured data, e.g. color images, for distributed
storage and communication in a privacy-preserving manner. Leveraging the sea of
big data and metadata privacy, empirical results show that neighbouring
subtensors with implicit information stored in tensor network formats cannot be
identified for data reconstruction. This technique complements the existing
encryption and randomization techniques, which store explicit data
representations in one place and are highly susceptible to adversarial attacks such
as side-channel attacks and de-anonymization. Furthermore, we propose a theory
for adversarial examples that mislead convolutional neural networks to
misclassification using subspace analysis based on singular value decomposition
(SVD). The theory is extended to analyze higher-order tensors using
tensor-train SVD (TT-SVD); it helps to explain the level of susceptibility of
different datasets to adversarial attacks, the structural similarity of
different adversarial attacks including global and localized attacks, and the
efficacy of different adversarial defenses based on input transformation. An
efficient and adaptive algorithm based on robust TT-SVD is then developed to
detect strong and static adversarial attacks. | [
"cs.LG",
"cs.CV",
"stat.ML"
]
|
Even though many existing 3D object detection algorithms rely mostly on camera
and LiDAR, both sensors are prone to being affected by harsh weather and
lighting conditions. On the other hand, radar is resistant to such conditions.
However, only recently has research begun to apply deep neural networks to
radar data. In this paper, we introduce a deep learning approach to
3D object detection with radar only. To the best of our knowledge, we are the
first ones to demonstrate a deep learning-based 3D object detection model with
radar only that was trained on a public radar dataset. To overcome the lack
of radar labeled data, we propose a novel way of making use of abundant LiDAR
data by transforming it into radar-like point cloud data and aggressive radar
augmentation techniques. | [
"cs.CV",
"eess.IV",
"Artificial intelligence"
]
|
In this paper, naive Bayesian and C4.5 decision tree classifiers (DTC) are
applied to materials informatics to classify engineering materials into
different classes for the selection of materials that suit the input design
specifications. The classifiers are analyzed individually, their performance
is evaluated with confusion-matrix predictive parameters and standard
measures, and the classification results are analyzed on different classes of
materials. The comparison found that the naive Bayesian classifier is more
accurate than the C4.5 DTC. The knowledge discovered by the naive Bayesian
classifier can be employed for decision making in materials selection in
manufacturing industries. | [
"cs.LG"
]
|
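A hedged sketch of this comparison with scikit-learn: Gaussian naive Bayes against a decision tree (sklearn's CART stands in for C4.5, which sklearn does not implement), evaluated with a confusion matrix on synthetic stand-in data rather than the paper's materials dataset:

```python
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the materials data (three material classes).
X, y = make_classification(n_samples=600, n_features=8, n_informative=5,
                           n_classes=3, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

for clf in (GaussianNB(), DecisionTreeClassifier(random_state=0)):
    pred = clf.fit(Xtr, ytr).predict(Xte)
    print(type(clf).__name__, round(accuracy_score(yte, pred), 3))
    print(confusion_matrix(yte, pred))
```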
Generative adversarial networks (GANs) are neural networks that learn data
distributions through adversarial training. In intensive studies, recent GANs
have shown promising results for reproducing training images. However, when
the training images are noisy, they faithfully reproduce the noise as well. As
an alternative, we propose a
novel family of GANs called noise robust GANs (NR-GANs), which can learn a
clean image generator even when training images are noisy. In particular,
NR-GANs can solve this problem without having complete noise information (e.g.,
the noise distribution type, noise amount, or signal-noise relationship). To
achieve this, we introduce a noise generator and train it along with a clean
image generator. However, without any constraints, there is no incentive to
generate an image and noise separately. Therefore, we propose distribution and
transformation constraints that encourage the noise generator to capture only
the noise-specific components. In particular, considering such constraints
under different assumptions, we devise two variants of NR-GANs for
signal-independent noise and three variants of NR-GANs for signal-dependent
noise. On three benchmark datasets, we demonstrate the effectiveness of NR-GANs
in noise robust image generation. Furthermore, we show the applicability of
NR-GANs in image denoising. Our code is available at
https://github.com/takuhirok/NR-GAN/. | [
"cs.CV",
"cs.LG",
"eess.IV",
"stat.ML"
]
|
Adversarial examples have shown that albeit highly accurate, models learned
by machines, differently from humans, have many weaknesses. However, humans'
perception is also fundamentally different from machines, because we do not see
the signals which arrive at the retina but a rather complex recreation of them.
In this paper, we explore how machines could recreate the input as well as
investigate the benefits of such an augmented perception. In this regard, we
propose Perceptual Deep Neural Networks ($\varphi$DNN) which also recreate
their own input before further processing. The concept is formalized
mathematically and two variations of it are developed (one based on inpainting
the whole image and the other based on a noisy resized super resolution
recreation). Experiments reveal that $\varphi$DNNs and their adversarial
training variations can increase the robustness substantially, surpassing both
state-of-the-art defenses and pre-processing types of defenses in 100% of the
tests. $\varphi$DNNs are shown to scale well to bigger image sizes, keeping a
similar high accuracy throughout, while the state-of-the-art worsens by up to 35%.
Moreover, the recreation process intentionally corrupts the input image.
Interestingly, we show by ablation tests that corrupting the input is, although
counter-intuitive, beneficial. Thus, $\varphi$DNNs reveal that input recreation
has strong benefits for artificial neural networks similar to biological ones,
shedding light on the importance of purposely corrupting the input as well as
pioneering an area of perception models based on GANs and autoencoders for
robust recognition in artificial intelligence. | [
"cs.CV",
"cs.AI",
"cs.CR",
"cs.LG"
]
|
In recent studies, a lot of work has been done to solve time series anomaly
detection by applying Variational Auto-Encoders (VAEs). Time series anomaly
detection is a very common but challenging task in many industries, playing
an important role in network monitoring, facility maintenance, information
security, and so on. However, it is very difficult to detect anomalies in time
series with high accuracy, due to noisy data collected from the real world and
complicated abnormal patterns. Inspired by Nouveau VAE (NVAE), we propose our
anomaly detection model: Time series to Image VAE (T2IVAE), an unsupervised
model based on NVAE for univariate series, transforming 1D time series into 2D
images as input and adopting the reconstruction error to detect anomalies.
Besides, we also apply Generative Adversarial Network based techniques to the
T2IVAE training strategy, aiming to reduce overfitting. We evaluate our model
on three datasets and compare it with several other popular models using the
F1 score. T2IVAE achieves 0.639 on the Numenta Anomaly Benchmark, 0.651 on a
public dataset from NASA, and 0.504 on our dataset collected from a real-world
scenario, outperforming the other comparison models. | [
"cs.LG",
"stat.ML"
]
|
3D object detection from monocular images has proven to be an enormously
challenging task, with the performance of leading systems not yet achieving
even 10\% of that of LiDAR-based counterparts. One explanation for this
performance gap is that existing systems are entirely at the mercy of the
perspective image-based representation, in which the appearance and scale of
objects varies drastically with depth and meaningful distances are difficult to
infer. In this work we argue that the ability to reason about the world in 3D
is an essential element of the 3D object detection task. To this end, we
introduce the orthographic feature transform, which enables us to escape the
image domain by mapping image-based features into an orthographic 3D space.
This allows us to reason holistically about the spatial configuration of the
scene in a domain where scale is consistent and distances between objects are
meaningful. We apply this transformation as part of an end-to-end deep learning
architecture and achieve state-of-the-art performance on the KITTI 3D object
benchmark. We will release full source code and pretrained models upon
acceptance of this manuscript for publication. | [
"cs.CV"
]
|
Existing methods on visual emotion analysis mainly focus on coarse-grained
emotion classification, i.e. assigning an image with a dominant discrete
emotion category. However, these methods cannot well reflect the complexity and
subtlety of emotions. In this paper, we study the fine-grained regression
problem of visual emotions based on convolutional neural networks (CNNs).
Specifically, we develop a Polarity-consistent Deep Attention Network (PDANet),
a novel network architecture that integrates attention into a CNN with an
emotion polarity constraint. First, we propose to incorporate both spatial and
channel-wise attentions into a CNN for visual emotion regression, which jointly
considers the local spatial connectivity patterns along each channel and the
interdependency between different channels. Second, we design a novel
regression loss, i.e. polarity-consistent regression (PCR) loss, based on the
weakly supervised emotion polarity to guide the attention generation. By
optimizing the PCR loss, PDANet can generate a polarity preserved attention map
and thus improve the emotion regression performance. Extensive experiments are
conducted on the IAPS, NAPS, and EMOTIC datasets, and the results demonstrate
that the proposed PDANet outperforms the state-of-the-art approaches by a large
margin for fine-grained visual emotion regression. Our source code is released
at: https://github.com/ZizhouJia/PDANet. | [
"cs.CV",
"cs.AI",
"cs.MM"
]
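The paper defines the PCR loss precisely; the snippet below is only one plausible reading of "polarity-consistent": a standard MSE term plus a hinge penalty whenever the prediction and the target fall on opposite sides of a neutral value. The `neutral` and `margin` parameters are assumptions.

```python
import torch

def pcr_loss(pred, target, neutral=0.0, margin=0.0):
    """MSE plus a penalty for polarity disagreement around `neutral`
    (an illustrative reading, not the paper's exact formulation)."""
    mse = torch.mean((pred - target) ** 2)
    mismatch = torch.relu(margin - (pred - neutral) * (target - neutral))
    return mse + mismatch.mean()

pred = torch.tensor([0.3, -0.5, 0.8])
target = torch.tensor([0.4, 0.2, 0.9])   # second pair disagrees in polarity
print(pcr_loss(pred, target))
```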
|
Deep Convolutional Neural Networks (CNNs) have been one of the most
influential recent developments in computer vision, particularly for
categorization. There is an increasing demand for explainable AI as these
systems are deployed in the real world. However, understanding the information
represented and processed in CNNs remains in most cases challenging. Within
this paper, we explore the use of new information theoretic techniques
developed in the field of neuroscience to enable novel understanding of how a
CNN represents information. We trained a 10-layer ResNet architecture to
identify 2,000 face identities from 26M images generated using a rigorously
controlled 3D face rendering model that produced variations of intrinsic (i.e.
face morphology, gender, age, expression and ethnicity) and extrinsic factors
(i.e. 3D pose, illumination, scale and 2D translation). With our methodology,
we demonstrate that, unlike humans, the network overgeneralizes face identities
even under extreme changes of face shape, but is more sensitive to changes of
texture. To understand the processing of information underlying these
counterintuitive properties, we visualize the features of shape and texture
that the network processes to identify faces. Then, we shed light on the
inner workings of the black box and reveal how hidden layers represent these
features and whether the representations are invariant to pose. We hope that
our methodology will provide an additional valuable tool for interpretability
of CNNs. | [
"cs.CV"
]
|
Given a natural language query, teaching machines to ask clarifying questions
is of immense utility in practical natural language processing systems. Such
interactions could help in filling information gaps for better machine
comprehension of the query. For the task of ranking clarification questions, we
hypothesize that determining whether a clarification question pertains to a
missing entry in a given post (on QA forums such as StackExchange) could be
considered as a special case of Natural Language Inference (NLI), where both
the post and the most relevant clarification question point to a shared latent
piece of information or context. We validate this hypothesis by incorporating
representations from a Siamese BERT model fine-tuned on NLI and Multi-NLI
datasets into our models and demonstrate that our best performing model obtains
a relative performance improvement of 40 percent and 60 percent respectively
(on the key metric of Precision@1), over the state-of-the-art baseline(s) on
the two evaluation sets of the StackExchange dataset, thereby significantly
surpassing the state-of-the-art. | [
"cs.LG",
"cs.AI",
"cs.IR",
"stat.ML"
]
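A hedged sketch of the scoring idea using the sentence-transformers library: an NLI-tuned sentence encoder embeds the post and each candidate clarification question, and cosine similarity ranks the candidates. The checkpoint name is illustrative, not the paper's fine-tuned Siamese BERT, and the texts are made up.

```python
# pip install sentence-transformers
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("nli-roberta-base-v2")  # any NLI-tuned encoder

post = "My Ubuntu install freezes at boot after the latest update."
questions = [
    "Which Ubuntu version are you running?",
    "What is your favorite text editor?",
    "Did you recently install proprietary GPU drivers?",
]
# Rank candidate questions by cosine similarity to the post embedding.
scores = util.cos_sim(model.encode(post), model.encode(questions))[0]
for q, s in sorted(zip(questions, scores.tolist()), key=lambda t: -t[1]):
    print(f"{s:.3f}  {q}")
```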
|
Representation learning is a fundamental but challenging problem, especially
when the distribution of data is unknown. We propose a new representation
learning method, termed Structure Transfer Machine (STM), which enables the
feature learning process to converge at the representation expectation in a
probabilistic way. We theoretically show that such an expected value of the
representation (mean) is achievable if the manifold structure can be
transferred from the data space to the feature space. The resulting structure
regularization term, named manifold loss, is incorporated into the loss
function of the typical deep learning pipeline. The STM architecture is
constructed to enforce the learned deep representation to satisfy the intrinsic
manifold structure from the data, which results in robust features that suit
various application scenarios, such as digit recognition, image classification
and object tracking. Compared to state-of-the-art CNN architectures, we achieve
better results on several commonly used benchmarks\footnote{The source code
is available at https://github.com/stmstmstm/stm}. | [
"cs.LG",
"stat.ML"
]
|
The recent success of natural language understanding (NLU) systems has been
troubled by results highlighting the failure of these models to generalize in a
systematic and robust way. In this work, we introduce a diagnostic benchmark
suite, named CLUTRR, to clarify some key issues related to the robustness and
systematicity of NLU systems. Motivated by classic work on inductive logic
programming, CLUTRR requires that an NLU system infer kinship relations between
characters in short stories. Successful performance on this task requires both
extracting relationships between entities, as well as inferring the logical
rules governing these relationships. CLUTRR allows us to precisely measure a
model's ability for systematic generalization by evaluating on held-out
combinations of logical rules, and it allows us to evaluate a model's
robustness by adding curated noise facts. Our empirical results highlight a
substantial performance gap between state-of-the-art NLU models (e.g., BERT and
MAC) and a graph neural network model that works directly with symbolic
inputs---with the graph-based model exhibiting both stronger generalization and
greater robustness. | [
"cs.LG",
"cs.CL",
"cs.LO",
"stat.ML"
]
|
Fuzzy time series forecasting methods are very popular among researchers for
predicting future values as they are not based on the strict assumptions of
traditional time series forecasting methods. Non-stochastic methods of fuzzy
time series forecasting are preferred by the researchers as they provide more
significant forecasting results. Generally, four factors determine the
performance of a forecasting method: (1) the number of intervals (NOIs) and
the length of intervals used to partition the universe of discourse (UOD); (2)
fuzzification rules, or the feature representation of the crisp time series;
(3) the method of establishing fuzzy logic rules (FLRs) between input and
target values; and (4) the defuzzification rule to obtain the crisp forecasted
value. Considering the first two factors to improve forecasting accuracy, we
propose a novel non-stochastic fuzzy time series forecasting method in which
the interval index number and membership value are used as input features to
predict the future value. We suggest a simple rounding-off range and suitable
step size method to find the optimal number of intervals (NOIs) and use the
fuzzy c-means clustering process to divide the UOD into intervals of unequal
length. We implement a support vector machine (SVM) to establish the FLRs. To
test the proposed method we conduct a simulation study on five widely used
real time series and compare the performance with some recently developed
models. We also examine the performance of the proposed model by using a
multi-layer perceptron (MLP) instead of the SVM. Two performance measures,
RMSE and SMAPE, are used for performance analysis, and better forecasting
accuracy is observed for the proposed model. | [
"cs.LG"
]
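A rough sketch of the pipeline under stated simplifications: KMeans stands in for fuzzy c-means when choosing unequal-length intervals, membership is a simple triangular function, and scikit-learn's SVR plays the role of the SVM that establishes the FLRs. All parameter values are placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVR

series = np.cumsum(np.random.default_rng(1).normal(size=200)) + 50.0
n_intervals = 7                     # would come from the step-size rule
centers = np.sort(KMeans(n_clusters=n_intervals, n_init=10, random_state=0)
                  .fit(series.reshape(-1, 1)).cluster_centers_.ravel())

def fuzzify(v):
    """Return (interval index, triangular membership) for a crisp value."""
    i = int(np.argmin(np.abs(centers - v)))
    width = np.ptp(centers) / n_intervals
    return i, max(0.0, 1.0 - abs(v - centers[i]) / width)

X = np.array([fuzzify(v) for v in series[:-1]])   # features: index, membership
y = series[1:]                                    # target: next crisp value
model = SVR().fit(X, y)
print("1-step forecast:", model.predict([list(fuzzify(series[-1]))])[0])
```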
|
Semantic image segmentation plays an important role in modeling
patient-specific anatomy. We propose a convolutional neural network, called
Kid-Net, along with a training schema to segment kidney vessels: artery, vein
and collecting system. Such segmentation is vital during the surgical planning
phase in which medical decisions are made before surgical incision. Our main
contribution is developing a training schema that handles unbalanced data,
reduces false positives and enables high-resolution segmentation with a limited
memory budget. These objectives are attained using dynamic weighting, random
sampling and 3D patch segmentation. Manual medical image annotation is both
time-consuming and expensive. Kid-Net reduces kidney vessel segmentation time
from a matter of hours to minutes. It is trained end-to-end using 3D patches from
volumetric CT-images. A complete segmentation for a 512x512x512 CT-volume is
obtained within a few minutes (1-2 mins) by stitching the output 3D patches
together. Feature down-sampling and up-sampling are utilized to achieve higher
classification and localization accuracies. Quantitative and qualitative
evaluation results on a challenging testing dataset demonstrate Kid-Net's competence. | [
"cs.CV"
]
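A minimal sketch of the patch-then-stitch inference described above, with a thresholding lambda standing in for the trained network; the patch size and names are assumptions.

```python
import numpy as np

def segment_volume(volume, predict_patch, patch=64):
    """Tile a CT volume into non-overlapping 3D patches, segment each,
    and stitch the patch outputs back into a full-volume mask."""
    D, H, W = volume.shape
    out = np.zeros(volume.shape, dtype=np.int64)
    for z in range(0, D, patch):
        for y in range(0, H, patch):
            for x in range(0, W, patch):
                block = volume[z:z+patch, y:y+patch, x:x+patch]
                out[z:z+patch, y:y+patch, x:x+patch] = predict_patch(block)
    return out

dummy_net = lambda p: (p > 0.5).astype(np.int64)  # stand-in for the network
ct = np.random.default_rng(0).random((128, 128, 128))
print(segment_volume(ct, dummy_net).shape)
```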
|
Time Series Classification (TSC) has been an important and challenging task
in data mining, especially on multivariate time series and multi-view time
series data sets. Meanwhile, transfer learning has been widely applied in
computer vision and natural language processing applications to improve deep
neural networks' generalization capabilities. However, very few previous works
applied transfer learning framework to time series mining problems.
Particularly, the technique of measuring similarities between source domain and
target domain based on dynamic representation such as density estimation with
importance sampling has never been combined with transfer learning framework.
In this paper, we first propose a general adaptive transfer learning framework
for multi-view time series data, which shows a strong ability to store
inter-view importance values in the process of knowledge transfer. Next, we
represent inter-view importance through time series similarity measurements
and approximate the posterior distribution in the latent space for importance
sampling via density estimation techniques. We then compute the matrix norm of
the sampled importance values, which controls the degree of knowledge transfer
in the pre-training process. We further evaluate our work, apply it to many
other time series classification tasks, and observe that our architecture
maintains desirable generalization ability. Finally, we conclude that our
framework can be combined with deep learning techniques to obtain significant
model performance improvements. | [
"cs.LG",
"stat.ML"
]
|
The exploration of novel chemical spaces is one of the most important tasks
of cheminformatics when supporting the drug discovery process. Properly
designed and trained deep neural networks can provide a viable alternative to
brute-force de novo approaches or various other machine-learning techniques for
generating novel drug-like molecules. In this article we present a method to
generate molecules using a long short-term memory (LSTM) neural network and
provide an analysis of the results, including a virtual screening test. Using
the network, one million drug-like molecules were generated in 2 hours. The
molecules are novel, diverse (contain numerous novel chemotypes), have good
physicochemical properties and have good synthetic accessibility, even though
these qualities were not specific constraints. Although novel, their structural
features and functional groups remain closely within the drug-like space
defined by the bioactive molecules from ChEMBL. Virtual screening using the
profile QSAR approach confirms that the potential of these novel molecules to
show bioactivity is comparable to the ChEMBL set from which they were derived.
The molecule generator written in Python used in this study is available on
request. | [
"cs.LG",
"q-bio.QM"
]
|
We present a new generative autoencoder model with dual contradistinctive
losses to improve generative autoencoders that perform simultaneous inference
(reconstruction) and synthesis (sampling). Our model, named dual
contradistinctive generative autoencoder (DC-VAE), integrates an instance-level
discriminative loss (maintaining the instance-level fidelity for the
reconstruction/synthesis) with a set-level adversarial loss (encouraging the
set-level fidelity for the reconstruction/synthesis), both being
contradistinctive. Extensive experimental results by DC-VAE across different
resolutions including 32x32, 64x64, 128x128, and 512x512 are reported. The two
contradistinctive losses work harmoniously in DC-VAE, leading to a
significant qualitative and quantitative performance enhancement over the
baseline VAEs without architectural changes. State-of-the-art or competitive
results among generative autoencoders for image reconstruction, image
synthesis, image interpolation, and representation learning are observed.
DC-VAE is a general-purpose VAE model, applicable to a wide variety of
downstream tasks in computer vision and machine learning. | [
"cs.CV"
]
|
Many machine learning algorithms assume that all input samples are
independently and identically distributed from some common distribution on
either the input space X, in the case of unsupervised learning, or the input
and output space X x Y in the case of supervised and semi-supervised learning.
In recent years the relaxation of this assumption has been explored
and the importance of incorporation of additional information within machine
learning algorithms became more apparent. Traditionally such fusion of
information was the domain of semi-supervised learning. More recently the
inclusion of knowledge from separate hypothetical spaces has been proposed by
Vapnik as part of the supervised setting. In this work we are interested in
exploring Vapnik's idea of master-class learning and the associated learning
using privileged information, however within the unsupervised setting. Adoption
of the advanced supervised learning paradigm for the unsupervised setting
instigates investigation into the difference between privileged and technical
data. By means of our proposed aRi-MAX method, the stability of the KMeans
algorithm is improved and the best clustering solution is identified on
an artificial dataset. Subsequently an information theoretic dot product based
algorithm called P-Dot is proposed. This method has the ability to utilize a
wide variety of clustering techniques, individually or in combination, while
fusing privileged and technical data for improved clustering. Application of
the P-Dot method to the task of digit recognition confirms our findings in a
real-world scenario. | [
"cs.LG",
"stat.ML"
]
|
Predictive process monitoring is concerned with the analysis of events
produced during the execution of a business process in order to predict as
early as possible the final outcome of an ongoing case. Traditionally,
predictive process monitoring methods are optimized with respect to accuracy.
However, in environments where users make decisions and take actions in
response to the predictions they receive, it is equally important to optimize
the stability of the successive predictions made for each case. To this end,
this paper defines a notion of temporal stability for binary classification
tasks in predictive process monitoring and evaluates existing methods with
respect to both temporal stability and accuracy. We find that methods based on
XGBoost and LSTM neural networks exhibit the highest temporal stability. We
then show that temporal stability can be enhanced by hyperparameter-optimizing
random forests and XGBoost classifiers with respect to inter-run stability.
Finally, we show that time series smoothing techniques can further enhance
temporal stability at the expense of slightly lower accuracy. | [
"cs.LG",
"stat.ML"
]
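A small sketch of the two ingredients: exponential smoothing of the successive prediction scores of one case, and one simple way to quantify temporal (in)stability as the mean absolute change between consecutive predictions; the paper's exact definition may differ.

```python
import numpy as np

def smooth(scores, alpha=0.8):
    """Exponentially smooth a case's successive prediction scores."""
    out = [scores[0]]
    for s in scores[1:]:
        out.append(alpha * out[-1] + (1 - alpha) * s)
    return np.array(out)

def instability(scores):
    """Mean absolute change between consecutive predictions (lower is
    more temporally stable)."""
    return float(np.mean(np.abs(np.diff(scores))))

raw = np.array([0.2, 0.7, 0.3, 0.8, 0.75, 0.9])
print(instability(raw), instability(smooth(raw)))
```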
|
In this paper, we propose novel edge and corner detection algorithms for
unorganized point clouds. Our edge detection method evaluates symmetry in a
local neighborhood and uses an adaptive density based threshold to
differentiate 3D edge points. We extend this algorithm to propose a novel
corner detector that clusters curvature vectors and uses their geometrical
statistics to classify a point as a corner. We perform a rigorous evaluation of the
algorithms on RGB-D semantic segmentation and 3D washer models from the
ShapeNet dataset and report higher precision and recall scores. Finally, we
also demonstrate how our edge and corner detectors can be used as a novel
approach towards automatic weld seam detection for robotic welding. We propose
to generate weld seams directly from a point cloud as opposed to using 3D
models for offline planning of welding paths. For this application, we show a
comparison between Harris 3D and our proposed approach on a panel workpiece. | [
"cs.CV"
]
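A simplified reading of the symmetry cue, assuming only the abstract: score each point by the offset between it and the centroid of its k nearest neighbors (symmetric, flat neighborhoods give small offsets), then keep a top quantile as edge candidates. The quantile threshold is a stand-in for the paper's adaptive density-based threshold.

```python
import numpy as np
from scipy.spatial import cKDTree

def edge_scores(points, k=20):
    """Offset between each point and its k-NN centroid as an edge cue."""
    _, idx = cKDTree(points).query(points, k=k + 1)  # idx[:, 0] is the point
    centroids = points[idx[:, 1:]].mean(axis=1)
    return np.linalg.norm(points - centroids, axis=1)

pts = np.random.default_rng(0).random((1000, 3))
scores = edge_scores(pts)
edges = pts[scores > np.quantile(scores, 0.95)]   # keep the top 5% as edges
print(edges.shape)
```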
|
Practitioners often rely on compute-intensive domain randomization to ensure
reinforcement learning policies trained in simulation can robustly transfer to
the real world. Due to unmodeled nonlinearities in the real system, however,
even such simulated policies can still fail to perform stably enough to acquire
experience in real environments. In this paper we propose a novel method that
guarantees a stable region of attraction for the output of a policy trained in
simulation, even for highly nonlinear systems. Our core technique is to use
"bias-shifted" neural networks for constructing the controller and training the
network in the simulator. The modified neural networks not only capture the
nonlinearities of the system but also provably preserve linearity in a certain
region of the state space and thus can be tuned to resemble a linear quadratic
regulator that is known to be stable for the real system. We have tested our
new method by transferring simulated policies for a swing-up inverted pendulum
to real systems and demonstrated its efficacy. | [
"cs.LG",
"cs.AI",
"cs.RO",
"cs.SY",
"eess.SY"
]
|
Accurately predicting drug-target binding affinity (DTA) in silico is a key
task in drug discovery. Most of the conventional DTA prediction methods are
simulation-based, which rely heavily on domain knowledge or the assumption of
having the 3D structure of the targets, which are often difficult to obtain.
Meanwhile, traditional machine learning-based methods apply various features
and descriptors, and simply depend on the similarities between drug-target
pairs. Recently, with the increasing amount of affinity data available and the
success of deep representation learning models on various domains, deep
learning techniques have been applied to DTA prediction. However, these methods
consider either label/one-hot encodings or the topological structure of
molecules, without considering the local chemical context of amino acids and
SMILES sequences. Motivated by this, we propose a novel end-to-end learning
framework, called DeepGS, which uses deep neural networks to extract the local
chemical context from amino acids and SMILES sequences, as well as the
molecular structure from the drugs. To assist the operations on the symbolic
data, we propose to use advanced embedding techniques (i.e., Smi2Vec and
Prot2Vec) to encode the amino acids and SMILES sequences to a distributed
representation. Meanwhile, we suggest a new molecular structure modeling
approach that works well under our framework. We have conducted extensive
experiments to compare our proposed method with state-of-the-art models
including KronRLS, SimBoost, DeepDTA and DeepCPI. Extensive experimental
results demonstrate the superiority and competitiveness of DeepGS. | [
"cs.LG",
"q-bio.QM"
]
|
Weakly supervised temporal action localization, which aims at temporally
locating action instances in untrimmed videos using only video-level class
labels during training, is an important yet challenging problem in video
analysis. Many current methods adopt the "localization by classification"
framework: first do video classification, then locate temporal area
contributing to the results most. However, this framework fails to locate the
entire action instances and gives little consideration to the local context. In
this paper, we present a novel architecture called Cascaded Pyramid Mining
Network (CPMN) to address these issues using two effective modules. First, to
discover the entire temporal interval of specific action, we design a two-stage
cascaded module with proposed Online Adversarial Erasing (OAE) mechanism, where
new and complementary regions are mined through feeding the erased feature maps
of discovered regions back to the system. Second, to exploit hierarchical
contextual information in videos and reduce missing detections, we design a
pyramid module which produces a scale-invariant attention map through combining
the feature maps from different levels. Finally, we aggregate the results of the two
modules to perform action localization via locating high score areas in
temporal Class Activation Sequence (CAS). Extensive experiments conducted on
THUMOS14 and ActivityNet-1.3 datasets demonstrate the effectiveness of our
method. | [
"cs.CV"
]
|
Meta-learning, or few-shot learning, has been successfully applied in a wide
range of domains from computer vision to reinforcement learning. Among the many
frameworks proposed for meta-learning, Bayesian methods are particularly
favoured when accurate and calibrated uncertainty estimates are required. In
this paper, we investigate the similarities and disparities between two
recently published Bayesian meta-learning methods: ALPaCA (Harrison et al.
[2018]) and PACOH (Rothfuss et al. [2020]). We provide theoretical analysis as
well as empirical benchmarks across synthetic and real-world datasets. While
ALPaCA holds an advantage in computation time through its use of a linear
kernel, general GP-based methods provide much more flexibility and achieve
better results across datasets when using a common kernel such as the SE
(Squared Exponential) kernel. The influence of different loss function choices
is also discussed. | [
"cs.LG"
]
|
Generative adversarial networks (GANs) have been successfully used for
considerable computer vision tasks, especially the image-to-image translation.
However, generators in these networks are of complicated architectures with
a large number of parameters and huge computational complexity. Existing
methods are mainly designed for compressing and speeding-up deep neural
networks in the classification task, and cannot be directly applied on GANs for
image translation, due to their different objectives and training procedures.
To this end, we develop a novel co-evolutionary approach for reducing their
memory usage and FLOPs simultaneously. In practice, generators for two image
domains are encoded as two populations and synergistically optimized for
investigating the most important convolution filters iteratively. Fitness of
each individual is calculated using the number of parameters, a
discriminator-aware regularization, and the cycle consistency. Extensive
experiments conducted on benchmark datasets demonstrate the effectiveness of
the proposed method for obtaining compact and effective generators. | [
"cs.CV",
"cs.LG",
"eess.IV"
]
|
Data-driven prediction of molecular properties presents unique challenges to
the design of machine learning methods concerning data
structure/dimensionality, symmetry adaption, and confidence management. In this
paper, we present a kernel-based pipeline that can learn and predict the
atomization energy of molecules with high accuracy. The framework employs
Gaussian process regression to perform predictions based on the similarity
between molecules, which is computed using the marginalized graph kernel. To
apply the marginalized graph kernel, a spatial adjacency rule is first employed
to convert molecules into graphs whose vertices and edges are labeled by
elements and interatomic distances, respectively. We then derive formulas for
the efficient evaluation of the kernel. Specific functional components for the
marginalized graph kernel are proposed, while the effects of the associated
hyperparameters on accuracy and predictive confidence are examined. We show
that the graph kernel is particularly suitable for predicting extensive
properties because its convolutional structure coincides with that of the
covariance formula between sums of random variables. Using an active learning
procedure, we demonstrate that the proposed method can achieve a mean absolute
error of 0.62 +- 0.01 kcal/mol using as few as 2000 training samples on the QM7
data set. | [
"cs.LG",
"cond-mat.mtrl-sci",
"cs.CE",
"physics.comp-ph",
"stat.ML"
]
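A sketch of the spatial adjacency rule described above: atoms closer than a distance cutoff are connected, vertices are labeled by element and edges by interatomic distance. The 1.2 angstrom cutoff is an illustrative choice, not the paper's rule.

```python
import numpy as np

def molecule_to_graph(elements, coords, cutoff=1.2):
    """Build a distance-labeled graph from atoms via a spatial cutoff."""
    edges = {}
    for i in range(len(elements)):
        for j in range(i + 1, len(elements)):
            d = float(np.linalg.norm(coords[i] - coords[j]))
            if d < cutoff:
                edges[(i, j)] = d
    return {"vertices": list(elements), "edges": edges}

# Water: O at the origin, two H atoms roughly 0.96 angstroms away.
coords = np.array([[0.0, 0.0, 0.0], [0.96, 0.0, 0.0], [-0.24, 0.93, 0.0]])
print(molecule_to_graph(["O", "H", "H"], coords))
```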
|
The requirement for large amounts of annotated training data has become a common
constraint on various deep learning systems. In this paper, we propose a weakly
supervised scene text detection method (WeText) that trains robust and accurate
scene text detection models by learning from unannotated or weakly annotated
data. With a "light" supervised model trained on a small fully annotated
dataset, we explore semi-supervised and weakly supervised learning on a large
unannotated dataset and a large weakly annotated dataset, respectively. For the
unsupervised learning, the light supervised model is applied to the unannotated
dataset to search for more character training samples, which are further
combined with the small annotated dataset to retrain a superior character
detection model. For the weakly supervised learning, the character searching is
guided by high-level annotations of words/text lines that are widely available
and also much easier to prepare. In addition, we design a unified scene
character detector by adapting regression based deep networks, which greatly
relieves the error accumulation issue that widely exists in most traditional
approaches. Extensive experiments across different unannotated and weakly
annotated datasets show that the scene text detection performance can be
clearly boosted under both scenarios, where the weakly supervised learning can
achieve the state-of-the-art performance by using only 229 fully annotated
scene text images. | [
"cs.CV"
]
|
Context-aware recommender systems (CARS) have gained increasing attention due
to their ability to utilize contextual information. Compared to traditional
recommender systems, CARS are, in general, able to generate more accurate
recommendations. Latent factor approaches account for a large proportion of
CARS. Recently, a non-linear Gaussian Process (GP) based factorization method
was proven to outperform the state-of-the-art methods in CARS. Despite its
effectiveness, GP model-based methods can suffer from over-fitting and may not
be able to determine the impact of each context automatically. In order to
address such shortcomings, we propose a Gaussian Process Latent Variable Model
Factorization (GPLVMF) method, where we apply an appropriate prior to the
original GP model. Our work is primarily inspired by the Gaussian Process
Latent Variable Model (GPLVM), a non-linear dimensionality reduction method.
As a result, we significantly improve performance on real datasets and capture
the importance of each context. In addition
to the general advantages, our method provides two main contributions regarding
recommender system settings: (1) addressing the influence of bias by setting a
non-zero mean function, and (2) utilizing real-valued contexts by fixing the
latent space with real values. | [
"cs.LG",
"cs.IR",
"stat.ML"
]
|
Building extraction from aerial images has several applications in problems
such as urban planning, change detection, and disaster management. With the
increasing availability of data, Convolutional Neural Networks (CNNs) for
semantic segmentation of remote sensing imagery have improved significantly in
recent years. However, convolutions operate in local neighborhoods and fail to
capture non-local features that are essential in semantic understanding of
aerial images. In this work, we propose to improve building segmentation of
different sizes by capturing long-range dependencies using contextual pyramid
attention (CPA). The pathways process the input at multiple scales efficiently
and combine them in a weighted manner, similar to an ensemble model. The
proposed method obtains state-of-the-art performance on the Inria Aerial Image
Labelling Dataset with minimal computation costs. Our method improves 1.8
points over current state-of-the-art methods and is 12.6 points higher than
existing baselines on the Intersection over Union (IoU) metric without any
post-processing. Code and models will be made publicly available. | [
"cs.CV"
]
|
Deep learning-based point cloud registration models are often generalized
from extensive training over a large volume of data to learn the ability to
predict the desired geometric transformation to register 3D point clouds. In
this paper, we propose a meta-learning based 3D registration model, named 3D
Meta-Registration, that is capable of rapidly adapting and well generalizing to
new 3D registration tasks for unseen 3D point clouds. Our 3D Meta-Registration
gains a competitive advantage by training over a variety of 3D registration
tasks, which leads to an optimized model for the best performance on the
distribution of registration tasks including potentially unseen tasks.
Specifically, the proposed 3D Meta-Registration model consists of two modules:
3D registration learner and 3D registration meta-learner. During the training,
the 3D registration learner is trained to complete a specific registration task
aiming to determine the desired geometric transformation that aligns the source
point cloud with the target one. In the meantime, the 3D registration
meta-learner is trained to provide the optimal parameters to update the 3D
registration learner based on the learned task distribution. After training,
the 3D registration meta-learner, which is learned with the optimized coverage
of distribution of 3D registration tasks, is able to dynamically update 3D
registration learners with desired parameters to rapidly adapt to new
registration tasks. We tested our model on the synthesized datasets ModelNet
and FlyingThings3D, as well as the real-world dataset KITTI. Experimental results
demonstrate that 3D Meta-Registration achieves superior performance over other
previous techniques (e.g. FlowNet3D). | [
"cs.CV"
]
|
We study the problem of multimodal generative modelling of images based on
generative adversarial networks (GANs). Despite the success of existing
methods, they often ignore the underlying structure of vision data or its
multimodal generation characteristics. To address this problem, we introduce
the Dirichlet prior for multimodal image generation, which leads to a new
Latent Dirichlet Allocation based GAN (LDAGAN). In detail, for the generative
process modelling, LDAGAN defines a generative mode for each sample,
determining which generative sub-process it belongs to. For the adversarial
training, LDAGAN derives a variational expectation-maximization (VEM) algorithm
to estimate model parameters. Experimental results on real-world datasets have
demonstrated the outstanding performance of LDAGAN over other existing GANs. | [
"cs.LG",
"cs.CV",
"stat.ML"
]
|
The multi-scale defect detection for photovoltaic (PV) cell
electroluminescence (EL) images is a challenging task, due to the feature
vanishing as network deepens. To address this problem, an attention-based
top-down and bottom-up architecture is developed to accomplish multi-scale
feature fusion. This architecture, called Bidirectional Attention Feature
Pyramid Network (BAFPN), can make all layers of the pyramid share similar
semantic features. In BAFPN, cosine similarity is employed to measure the
importance of each pixel in the fused features. Furthermore, a novel object
detector is proposed, called BAF-Detector, which embeds BAFPN into Region
Proposal Network (RPN) in Faster RCNN+FPN. BAFPN improves the robustness of the
network to scales, thus the proposed detector achieves a good performance in
the multi-scale defect detection task. Finally, the experimental results on a
large-scale EL dataset including 3629 images, 2129 of which are defective, show
that the proposed method achieves 98.70% (F-measure), 88.07% (mAP), and 73.29%
(IoU) in terms of multi-scale defect classification and detection results in
raw PV cell EL images. | [
"cs.CV"
]
|
The null space of the $k$-th order Laplacian $\mathbf{\mathcal L}_k$, known
as the {\em $k$-th homology vector space}, encodes the non-trivial topology of
a manifold or a network. Understanding the structure of the homology embedding
can thus disclose geometric or topological information from the data. The study
of the null space embedding of the graph Laplacian $\mathbf{\mathcal L}_0$ has
spurred new research and applications, such as spectral clustering algorithms
with theoretical guarantees and estimators of the Stochastic Block Model. In
this work, we investigate the geometry of the $k$-th homology embedding and
focus on cases reminiscent of spectral clustering. Namely, we analyze the {\em
connected sum} of manifolds as a perturbation to the direct sum of their
homology embeddings. We propose an algorithm to factorize the homology
embedding into subspaces corresponding to a manifold's simplest topological
components. The proposed framework is applied to the {\em shortest homologous
loop detection} problem, a problem known to be NP-hard in general. Our spectral
loop detection algorithm scales better than existing methods and is effective
on diverse data such as point clouds and images. | [
"stat.ML",
"cs.LG"
]
|
Continual learning refers to the ability of a biological or artificial system
to seamlessly learn from continuous streams of information while preventing
catastrophic forgetting, i.e., a condition in which new incoming information
strongly interferes with previously learned representations. Since it is
unrealistic to provide artificial agents with all the necessary prior knowledge
to effectively operate in real-world conditions, they must exhibit a rich set
of learning capabilities enabling them to interact in complex environments with
the aim to process and make sense of continuous streams of (often uncertain)
information. While the vast majority of continual learning models are designed
to alleviate catastrophic forgetting on simplified classification tasks, here
we focus on continual learning for autonomous agents and robots required to
operate in much more challenging experimental settings. In particular, we
discuss well-established biological learning factors such as developmental and
curriculum learning, transfer learning, and intrinsic motivation and their
computational counterparts for modeling the progressive acquisition of
increasingly complex knowledge and skills in a continual fashion. | [
"cs.LG",
"cs.AI"
]
|
Late Gadolinium Enhanced Cardiac MRI (LGE-CMRI) for detecting atrial scars in
atrial fibrillation (AF) patients has recently emerged as a promising technique
to stratify patients, guide ablation therapy and predict treatment success.
Visualisation and quantification of scar tissues require a segmentation of both
the left atrium (LA) and the high intensity scar regions from LGE-CMRI images.
These two segmentation tasks are challenging due to the cancelling of healthy
tissue signal, low signal-to-noise ratio and often limited image quality in
these patients. Most approaches require manual supervision and/or a second
bright-blood MRI acquisition for anatomical segmentation. Segmenting both the
LA anatomy and the scar tissues automatically from a single LGE-CMRI
acquisition is highly in demand. In this study, we proposed a novel fully
automated multiview two-task (MVTT) recursive attention model working directly
on LGE-CMRI images that combines sequential learning and dilated residual
learning to segment the LA (including attached pulmonary veins) and delineate
the atrial scars simultaneously via an innovative attention model. Compared to
other state-of-the-art methods, the proposed MVTT achieves compelling
improvement, enabling the generation of a patient-specific anatomical and atrial scar
assessment model. | [
"cs.CV",
"eess.IV"
]
|
We tackle the problem of learning object detectors without supervision.
Differently from weakly-supervised object detection, we do not assume
image-level class labels. Instead, we extract a supervisory signal from
audio-visual data, using the audio component to "teach" the object detector.
While this problem is related to sound source localisation, it is considerably
harder because the detector must classify the objects by type, enumerate each
instance of the object, and do so even when the object is silent. We tackle
this problem by first designing a self-supervised framework with a contrastive
objective that jointly learns to classify and localise objects. Then, without
using any supervision, we simply use these self-supervised labels and boxes to
train an image-based object detector. With this, we outperform previous
unsupervised and weakly-supervised detectors for the task of object detection
and sound source localization. We also show that we can align this detector to
ground-truth classes with as little as one label per pseudo-class, and show how
our method can learn to detect generic objects that go beyond instruments, such
as airplanes and cats. | [
"cs.CV"
]
|
The computation demand for machine learning (ML) has grown rapidly recently,
which comes with a number of costs. Estimating the energy cost helps measure
its environmental impact and find greener strategies, yet it is challenging
without detailed information. We calculate the energy use and carbon footprint
of several recent large models-T5, Meena, GShard, Switch Transformer, and
GPT-3-and refine earlier estimates for the neural architecture search that
found Evolved Transformer. We highlight the following opportunities to improve
energy efficiency and CO2 equivalent emissions (CO2e): Large but sparsely
activated DNNs can consume <1/10th the energy of large, dense DNNs without
sacrificing accuracy despite using as many or even more parameters. Geographic
location matters for ML workload scheduling since the fraction of carbon-free
energy and resulting CO2e vary ~5X-10X, even within the same country and the
same organization. We are now optimizing where and when large models are
trained. Specific datacenter infrastructure matters, as Cloud datacenters can
be ~1.4-2X more energy efficient than typical datacenters, and the ML-oriented
accelerators inside them can be ~2-5X more effective than off-the-shelf
systems. Remarkably, the choice of DNN, datacenter, and processor can reduce
the carbon footprint up to ~100-1000X. These large factors also make
retroactive estimates of energy cost difficult. To avoid miscalculations, we
believe ML papers requiring large computational resources should make energy
consumption and CO2e explicit when practical. We are working to be more
transparent about energy use and CO2e in our future research. To help reduce
the carbon footprint of ML, we believe energy usage and CO2e should be a key
metric in evaluating models, and we are collaborating with MLPerf developers to
include energy usage during training and inference in this industry standard
benchmark. | [
"cs.LG",
"cs.CY"
]
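A back-of-the-envelope version of the accounting the abstract advocates; every constant below (device power, PUE, grid carbon intensity) is a placeholder to replace with measured values, not a figure from the paper.

```python
def training_co2e(device_hours, watts_per_device=300.0, pue=1.1,
                  grid_kg_per_kwh=0.4):
    """Energy (kWh) and CO2e (kg) for a training run: device energy,
    scaled by datacenter PUE, times the grid's carbon intensity."""
    kwh = device_hours * watts_per_device / 1000.0 * pue
    return kwh, kwh * grid_kg_per_kwh

kwh, kg = training_co2e(device_hours=10_000)
print(f"{kwh:.0f} kWh, {kg:.0f} kg CO2e")
```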
|
We propose a new self-supervised approach to image feature learning from
motion cues. This new approach leverages recent advances in deep learning in
two directions: 1) the success of training deep neural networks to estimate
optical flow in real data using synthetic flow data; and 2) emerging work in
learning image features from motion cues, such as optical flow. Building on
these, we demonstrate that image features can be learned in self-supervision by
first training an optical flow estimator with synthetic flow data, and then
learning image features from the estimated flows in real motion data. We
demonstrate and evaluate this approach on an image segmentation task. Using the
learned image feature representation, the network performs significantly better
than the ones trained from scratch in few-shot segmentation tasks. | [
"cs.CV"
]
|
Recent advances of deep learning lead to great success of image and video
super-resolution (SR) methods that are based on convolutional neural networks
(CNN). For video SR, advanced algorithms have been proposed to exploit the
temporal correlation between low-resolution (LR) video frames, and/or to
super-resolve a frame with multiple LR frames. These methods pursue higher
quality of super-resolved frames, where the quality is usually measured frame
by frame in e.g. PSNR. However, frame-wise quality may not reveal the
consistency between frames. If an algorithm is applied to each frame
independently (which is the case of most previous methods), the algorithm may
cause temporal inconsistency, which can be observed as flickering. It is a
natural requirement to improve both frame-wise fidelity and between-frame
consistency, which are termed spatial quality and temporal quality,
respectively. Then we may ask, is a method optimized for spatial quality also
optimized for temporal quality? Can we optimize the two quality metrics
jointly? | [
"cs.CV"
]
|
Grounding free-form textual queries necessitates an understanding of these
textual phrases and their relation to the visual cues to reliably reason about
the described locations. Spatial attention networks are known to learn this
relationship and focus its gaze on salient objects in the image. Thus, we
propose to utilize spatial attention networks for image-level visual-textual
fusion preserving local (word) and global (phrase) information to refine region
proposals with an in-network Region Proposal Network (RPN) and detect single or
multiple regions for a phrase query. We focus only on the phrase query - ground
truth pair (referring expression) for a model independent of the constraints of
the datasets, i.e., additional attributes, context, etc. For the referring
expression dataset ReferIt game, our Multi-region Attention-assisted Grounding
network (MAGNet) achieves over 12\% improvement over the state-of-the-art.
Without the context from image captions and attribute information in Flickr30k
Entities, we still achieve competitive results compared to the
state-of-the-art. | [
"cs.CV",
"cs.CL",
"cs.LG"
]
|
We present a novel Bipartite Graph Reasoning GAN (BiGraphGAN) for the
challenging person image generation task. The proposed graph generator mainly
consists of two novel blocks that aim to model the pose-to-pose and
pose-to-image relations, respectively. Specifically, the proposed Bipartite
Graph Reasoning (BGR) block aims to reason the crossing long-range relations
between the source pose and the target pose in a bipartite graph, which
mitigates some challenges caused by pose deformation. Moreover, we propose a
new Interaction-and-Aggregation (IA) block to effectively update and enhance
the feature representation capability of both person's shape and appearance in
an interactive way. Experiments on two challenging and public datasets, i.e.,
Market-1501 and DeepFashion, show the effectiveness of the proposed BiGraphGAN
in terms of objective quantitative scores and subjective visual realness. The
source code and trained models are available at
https://github.com/Ha0Tang/BiGraphGAN. | [
"cs.CV",
"cs.LG",
"eess.IV"
]
|
Machine learning software accounts for a significant amount of energy
consumed in data centers. These algorithms are usually optimized towards
predictive performance, i.e. accuracy, and scalability. This is the case of
data stream mining algorithms. Although these algorithms are adaptive to the
incoming data, they have fixed parameters from the beginning of the execution.
We have observed that having fixed parameters leads to unnecessary computations,
thus making the algorithm energy inefficient. In this paper we present the nmin
adaptation method for Hoeffding trees. This method adapts the value of the nmin
parameter, which significantly affects the energy consumption of the algorithm.
The method reduces unnecessary computations and memory accesses, thus reducing
the energy, while the accuracy is only marginally affected. We experimentally
compared VFDT (Very Fast Decision Tree, the first Hoeffding tree algorithm) and
CVFDT (Concept-adapting VFDT) with VFDT-nmin (VFDT with nmin adaptation).
The results show that VFDT-nmin consumes up to 27% less energy than the
standard VFDT, and up to 92% less energy than CVFDT, trading off a few percent
of accuracy in a few datasets. | [
"cs.LG",
"stat.ML"
]
|
Recent advances in the area of plane segmentation from single RGB images show
strong accuracy improvements and now allow a reliable segmentation of indoor
scenes into planes. Nonetheless, fine-grained details of these segmentation
masks still lack accuracy, thus restricting the usability of such
techniques on a larger scale in numerous applications, such as inpainting for
Augmented Reality use cases. We propose a post-processing algorithm to align
the segmented plane masks with edges detected in the image. This allows us to
increase the accuracy of state-of-the-art approaches, while limiting ourselves
to cuboid-shaped objects. Our approach is motivated by logistics, where this
assumption is valid and refined planes can be used to perform robust object
detection without the need for supervised learning. Results for two baselines
and our approach are reported on our own dataset, which we made publicly
available. The results show a consistent improvement over the state-of-the-art.
The influence of the prior segmentation and the edge detection is investigated
and finally, areas for future research are proposed. | [
"cs.CV"
]
|
This paper focuses on a novel generative approach for 3D point clouds that
makes use of invertible flow-based models. The main idea of the method is to
treat a point cloud as a probability density in 3D space that is modeled using
a cloud-specific neural network. To capture the similarity between point clouds
we rely on parameter sharing among networks, with each cloud having only a
small embedding vector that defines it. We use invertible flow networks to
generate the individual point clouds, and to regularize the embedding vectors.
We evaluate the generative capabilities of the model both qualitatively and
quantitatively. | [
"cs.LG",
"cs.CV",
"stat.ML"
]
|
A conditional Generative Adversarial Network allows for generating samples
conditioned on certain external information. Being able to recover latent and
conditional vectors from a conditional GAN can be potentially valuable in
various applications, ranging from image manipulation for entertaining purposes
to diagnosis of the neural networks for security purposes. In this work, we
show that it is possible to recover both latent and conditional vectors from
generated images given the generator of a conditional generative adversarial
network. Such a recovery is not trivial due to the often multi-layered
non-linearity of deep neural networks. Furthermore, the effect of such recovery
applied to real natural images is investigated. We discovered that there
exists a gap between the recovery performance on generated and real images,
which we believe comes from the difference between generated data distribution
and real data distribution. Experiments are conducted to evaluate the recovered
conditional vectors and the reconstructed images from these recovered vectors
quantitatively and qualitatively, showing promising results. | [
"cs.CV",
"cs.LG"
]
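A minimal sketch of the recovery idea: optimize latent and conditional vectors by gradient descent on the pixel reconstruction error through a frozen generator. The toy linear generator below stands in for a trained conditional GAN; the paper's procedure may differ in objective and optimizer.

```python
import torch

def recover_vectors(generator, target, z_dim, c_dim, steps=500, lr=0.05):
    """Gradient-descend z and c so the generator reproduces `target`."""
    z = torch.randn(1, z_dim, requires_grad=True)
    c = torch.zeros(1, c_dim, requires_grad=True)
    opt = torch.optim.Adam([z, c], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.mean((generator(z, c) - target) ** 2)
        loss.backward()
        opt.step()
    return z.detach(), c.detach()

torch.manual_seed(0)
Wz, Wc = torch.randn(8, 16), torch.randn(4, 16)   # toy "generator" weights
g = lambda z, c: torch.tanh(z @ Wz + c @ Wc)
target = g(torch.randn(1, 8), torch.eye(4)[:1])
z_hat, c_hat = recover_vectors(g, target, z_dim=8, c_dim=4)
print(torch.mean((g(z_hat, c_hat) - target) ** 2).item())
```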
|
In this paper, we propose a residual non-local attention network for
high-quality image restoration. Without considering the uneven distribution of
information in the corrupted images, previous methods are restricted by local
convolutional operation and equal treatment of spatial- and channel-wise
features. To address this issue, we design local and non-local attention blocks
to extract features that capture the long-range dependencies between pixels and
pay more attention to the challenging parts. Specifically, we design trunk
branch and (non-)local mask branch in each (non-)local attention block. The
trunk branch is used to extract hierarchical features. Local and non-local mask
branches aim to adaptively rescale these hierarchical features with mixed
attentions. The local mask branch concentrates on more local structures with
convolutional operations, while non-local attention considers more about
long-range dependencies in the whole feature map. Furthermore, we propose
residual local and non-local attention learning to train the very deep network,
which further enhances the representation ability of the network. Our proposed
method can be generalized for various image restoration applications, such as
image denoising, demosaicing, compression artifacts reduction, and
super-resolution. Experiments demonstrate that our method obtains comparable or
better results compared with recently leading methods quantitatively and
visually. | [
"cs.CV"
]
|
In this paper we consider the problem of online stochastic optimization of a
locally smooth function under bandit feedback. We introduce the high-confidence
tree (HCT) algorithm, a novel any-time $\mathcal{X}$-armed bandit algorithm,
and derive regret bounds matching the performance of existing state-of-the-art
in terms of dependency on number of steps and smoothness factor. The main
advantage of HCT is that it handles the challenging case of correlated rewards,
whereas existing methods require that the reward-generating process of each arm
is an independent and identically distributed (i.i.d.) random process. HCT also
improves on the state-of-the-art in terms of its memory requirement, as well as
requiring a weaker smoothness assumption on the mean-reward function compared
to previous anytime algorithms. Finally, we discuss how HCT can be applied
to the problem of policy search in reinforcement learning and we report
preliminary empirical results. | [
"stat.ML",
"cs.LG",
"cs.SY"
]
|
Recent advances in deep learning have resulted in great successes in various
applications. Although semi-supervised or unsupervised learning methods have
been widely investigated, the performance of deep neural networks highly
depends on the annotated data. The problem is that the budget for annotation is
usually limited due to the time and expense of annotating medical data. Active
learning is one of the solutions to this problem, where an
active learner is designed to indicate which samples need to be annotated to
effectively train a target model. In this paper, we propose a novel active
learning method, confident coreset, which considers both uncertainty and
distribution for effectively selecting informative samples. By comparative
experiments on two medical image analysis tasks, we show that our method
outperforms other active learning methods. | [
"cs.CV",
"cs.LG"
]
|
Inspired by a child's learning experience - taught first and followed by
observation and questioning, we investigate a critically supervised learning
methodology for object detection in this work. Specifically, we propose a
taught-observe-ask (TOA) method that consists of several novel components such
as negative object proposal, critical example mining, and machine-guided
question-answer (QA) labeling. To consider labeling time and performance
jointly, new evaluation methods are developed to compare the performance of the
TOA method, with the fully and weakly supervised learning methods. Extensive
experiments are conducted on the PASCAL VOC and the Caltech benchmark datasets.
The TOA method provides significantly improved performance over weak
supervision yet demands only about 3-6% of the labeling time of full supervision.
The effectiveness of each novel component is also analyzed. | [
"cs.CV"
]
|
The task of three-dimensional (3D) human pose estimation from a single image
can be divided into two parts: (1) Two-dimensional (2D) human joint detection
from the image and (2) estimating a 3D pose from the 2D joints. Herein, we
focus on the second part, i.e., a 3D pose estimation from 2D joint locations.
The problem with existing methods is that they require either (1) a 3D pose
dataset or (2) 2D joint locations in consecutive frames taken from a video
sequence. We aim to solve these problems. For the first time, we propose a
method that learns a 3D human pose without any 3D datasets. Our method can
predict a 3D pose from 2D joint locations in a single image. Our system is
based on generative adversarial networks, and the networks are trained in
an unsupervised manner. Our primary idea is that, if the network can predict a
3D human pose correctly, the 3D pose that is projected onto a 2D plane should
not collapse even if it is rotated perpendicularly. We evaluated the
performance of our method using Human3.6M and the MPII dataset and showed that
our network can predict a 3D pose well even if the 3D dataset is not available
during training. | [
"cs.CV"
]
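A minimal illustration of the geometric idea only: rotate a candidate 3D pose about the vertical axis and re-project it orthographically; in training, such re-projections would be scored by an adversarial discriminator, which is omitted here.

```python
import numpy as np

def project(joints3d):
    """Orthographic projection of 3D joints onto the image (x, y) plane."""
    return joints3d[:, :2]

def random_y_rotation(joints3d, rng):
    """Rotate a pose about the vertical axis; a correct 3D pose should
    still project to a plausible 2D pose afterwards."""
    a = rng.uniform(-np.pi, np.pi)
    R = np.array([[np.cos(a), 0.0, np.sin(a)],
                  [0.0, 1.0, 0.0],
                  [-np.sin(a), 0.0, np.cos(a)]])
    return joints3d @ R.T

rng = np.random.default_rng(0)
pose = rng.normal(size=(17, 3))                     # 17 joints, Human3.6M-style
print(project(random_y_rotation(pose, rng)).shape)  # (17, 2)
```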
|
Catastrophic forgetting remains a severe hindrance to the broad application
of artificial neural networks (ANNs), however, it continues to be a poorly
understood phenomenon. Despite the extensive amount of work on catastrophic
forgetting, we argue that it is still unclear how exactly the phenomenon should
be quantified, and, moreover, to what degree all of the choices we make when
designing learning systems affect the amount of catastrophic forgetting. We use
various testbeds from the reinforcement learning and supervised learning
literature to (1) provide evidence that the choice of which modern
gradient-based optimization algorithm is used to train an ANN has a significant
impact on the amount of catastrophic forgetting and show that, surprisingly, in
many instances classical algorithms such as vanilla SGD experience less
catastrophic forgetting than more modern algorithms such as Adam. We
empirically compare four different existing metrics for quantifying
catastrophic forgetting and (2) show that the degree to which the learning
systems experience catastrophic forgetting is sufficiently sensitive to the
metric used that a change from one principled metric to another is enough to
change the conclusions of a study dramatically. Our results suggest that a much
more rigorous experimental methodology is required when looking at catastrophic
forgetting. Based on our results, we recommend inter-task forgetting in
supervised learning must be measured with both retention and relearning metrics
concurrently, and intra-task forgetting in reinforcement learning must, at the
very least, be measured with pairwise interference. | [
"cs.LG",
"cs.AI",
"stat.ML",
"I.2.6"
]
|
We present a conceptually simple, flexible, and general framework for object
instance segmentation. Our approach efficiently detects objects in an image
while simultaneously generating a high-quality segmentation mask for each
instance. The method, called Mask R-CNN, extends Faster R-CNN by adding a
branch for predicting an object mask in parallel with the existing branch for
bounding box recognition. Mask R-CNN is simple to train and adds only a small
overhead to Faster R-CNN, running at 5 fps. Moreover, Mask R-CNN is easy to
generalize to other tasks, e.g., allowing us to estimate human poses in the
same framework. We show top results in all three tracks of the COCO suite of
challenges, including instance segmentation, bounding-box object detection, and
person keypoint detection. Without bells and whistles, Mask R-CNN outperforms
all existing, single-model entries on every task, including the COCO 2016
challenge winners. We hope our simple and effective approach will serve as a
solid baseline and help ease future research in instance-level recognition.
Code has been made available at: https://github.com/facebookresearch/Detectron | [
"cs.CV"
]
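Mask R-CNN is available off the shelf; the snippet below runs torchvision's implementation (the same architecture family, not the authors' Detectron release) on a random image, assuming torchvision >= 0.13 for the weights API.

```python
import torch
import torchvision

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()
img = torch.rand(3, 480, 640)            # RGB tensor in [0, 1]
with torch.no_grad():
    out = model([img])[0]                # dict: boxes, labels, scores, masks
print(out["boxes"].shape, out["masks"].shape)
```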
|
Superpixels are widely used in computer vision applications. Nevertheless,
decomposition methods may still fail to efficiently cluster image pixels
according to their local texture. In this paper, we propose a new Nearest
Neighbor-based Superpixel Clustering (NNSC) method to generate texture-aware
superpixels in a limited computational time compared to previous approaches. We
introduce a new clustering framework using patch-based nearest neighbor
matching, while most existing methods are based on a pixel-wise K-means
clustering. Therefore, we directly group pixels in the patch space, enabling us
to capture texture information. We demonstrate the efficiency of our method with
favorable comparisons in terms of segmentation performance on both standard
color and texture datasets. We also show the computational efficiency of NNSC
compared to recent texture-aware superpixel methods. | [
"cs.CV"
]
|
We present a new public dataset with a focus on simulating robotic vision
tasks in everyday indoor environments using real imagery. The dataset includes
20,000+ RGB-D images and 50,000+ 2D bounding boxes of object instances densely
captured in 9 unique scenes. We train a fast object category detector for
instance detection on our data. Using the dataset we show that, although
increasingly accurate and fast, the state of the art for object detection is
still severely impacted by object scale, occlusion, and viewing direction, all
of which matter for robotics applications. We next validate the dataset for
simulating active vision, and use the dataset to develop and evaluate a
deep-network-based system for next best move prediction for object
classification using reinforcement learning. Our dataset is available for
download at cs.unc.edu/~ammirato/active_vision_dataset_website/. | [
"cs.CV"
]
|
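A minimal loading sketch for a dataset of this shape follows. The file layout
and JSON keys here (`annotations.json`, `bounding_boxes`) are assumptions made
for illustration and may not match the published schema exactly; the code also
requires a downloaded scene directory to run end to end.

```python
import json
from pathlib import Path

def load_scene_boxes(scene_dir: str):
    """Yield (image_name, boxes) pairs from one scene's annotation file.

    Assumes a per-scene annotations.json mapping each image name to a
    record with a "bounding_boxes" list; adjust keys to the real schema.
    """
    ann_path = Path(scene_dir) / "annotations.json"
    with open(ann_path) as f:
        annotations = json.load(f)
    for image_name, info in annotations.items():
        yield image_name, info.get("bounding_boxes", [])

# Usage (requires the dataset on disk):
# for name, boxes in load_scene_boxes("Home_001_1"):
#     print(name, len(boxes))
```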
Recently, neuro-inspired episodic control (EC) methods have been developed to
overcome the data-inefficiency of standard deep reinforcement learning
approaches. Using non-/semi-parametric models to estimate the value function,
they learn rapidly, retrieving cached values from similar past states. In
realistic scenarios, with limited resources and noisy data, maintaining
meaningful representations in memory is essential to speed up the learning and
avoid catastrophic forgetting. Unfortunately, EC methods have high space and
time complexity. We investigate different solutions to these problems based on
prioritising and ranking stored states, as well as online clustering
techniques. We also propose a new dynamic online k-means algorithm that is both
computationally-efficient and yields significantly better performance at
smaller memory sizes; we validate this approach on classic reinforcement
learning environments and Atari games. | [
"cs.LG",
"cs.NE",
"stat.ML"
]
|
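The core memory-compression idea can be illustrated with a plain online
k-means update: each incoming state is assigned to its nearest centroid, which
is then nudged toward it with a per-cluster learning rate of 1/n. This is a
simplified sketch of online clustering for episodic memory, not the paper's
full dynamic algorithm.

```python
import numpy as np

class OnlineKMeans:
    """Minimal online k-means over a stream of state vectors."""

    def __init__(self, k: int, dim: int, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.centroids = rng.normal(size=(k, dim))
        self.counts = np.zeros(k, dtype=int)

    def update(self, x: np.ndarray) -> int:
        # Assign the state to its nearest centroid (memory slot).
        j = int(np.argmin(np.linalg.norm(self.centroids - x, axis=1)))
        # Move that centroid toward the state with step size 1/count.
        self.counts[j] += 1
        self.centroids[j] += (x - self.centroids[j]) / self.counts[j]
        return j

km = OnlineKMeans(k=8, dim=4)
for x in np.random.default_rng(1).normal(size=(100, 4)):
    km.update(x)
print(km.counts.sum())  # 100 states compressed into 8 slots
```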
In high energy physics (HEP), jets are collections of correlated particles
produced ubiquitously in particle collisions such as those at the CERN Large
Hadron Collider (LHC). Machine-learning-based generative models, such as
generative adversarial networks (GANs), have the potential to significantly
accelerate LHC jet simulations. However, despite jets having a natural
representation as a set of particles in momentum-space, a.k.a. a particle
cloud, to our knowledge there exist no generative models applied to such a
dataset. We introduce a new particle cloud dataset (JetNet), and, due to
similarities between particle and point clouds, apply to it existing point
cloud GANs. Results are evaluated using (1) the 1-Wasserstein distance between
high- and low-level feature distributions, (2) a newly developed Fréchet
ParticleNet Distance, and (3) the coverage and (4) minimum matching distance
metrics. Existing GANs are found to be inadequate for physics applications,
hence we develop a new message passing GAN (MPGAN), which outperforms existing
point cloud GANs on virtually every metric and shows promise for use in HEP. We
propose JetNet as a novel point-cloud-style dataset for the machine learning
community to experiment with, and set MPGAN as a benchmark to improve upon for
future generative models. | [
"cs.LG",
"hep-ex"
]
|
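The first evaluation metric above is straightforward to reproduce in one
dimension: compare a single physics feature between real and generated jets
with the 1-Wasserstein distance. The sketch below uses SciPy's implementation
on synthetic stand-in values (a gamma distribution loosely mimicking a
particle-pT-like feature); the actual JetNet features and binning are not
reproduced here.

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
# Synthetic stand-ins for a 1D feature of real vs. generated jets.
real_feature = rng.gamma(shape=2.0, scale=1.0, size=10_000)
gen_feature = rng.gamma(shape=2.2, scale=0.9, size=10_000)

# 1-Wasserstein distance between the two empirical distributions.
w1 = wasserstein_distance(real_feature, gen_feature)
print(f"W1(real, generated) = {w1:.4f}")  # smaller = better match
```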
The paucity of large curated hand-labeled training data forms a major
bottleneck in the deployment of machine learning models in computer vision and
other fields. Recent work (Data Programming) has shown how distant supervision
signals in the form of labeling functions can be used to obtain labels for
given data in near-constant time. In this work, we present Adversarial Data
Programming (ADP), an adversarial methodology for generating data as well as a
curated, aggregated label, given a set of weak labeling functions.
More interestingly, such labeling functions are often easily generalizable,
thus allowing our framework to be extended to different setups, including
self-supervised labeled image generation, zero-shot text to labeled image
generation, transfer learning, and multi-task learning. | [
"cs.CV"
]
|
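To ground the notion of labeling functions, here is a toy weak-supervision
setup: small heuristics vote on text snippets, with -1 meaning "abstain", and
the votes are combined by plain majority vote. The function names and examples
are invented, and majority vote is a deliberate simplification of the learned,
adversarial aggregation in data programming and ADP.

```python
import numpy as np

# Toy labeling functions for a 2-class problem (1 = sports, 0 = finance).
def lf_contains_goal(x):  return 1 if "goal" in x else -1   # abstain: -1
def lf_contains_stock(x): return 0 if "stock" in x else -1
def lf_short(x):          return 1 if len(x.split()) < 5 else -1

def aggregate(x, lfs, n_classes=2):
    """Majority vote over non-abstaining labeling functions."""
    votes = np.zeros(n_classes)
    for lf in lfs:
        y = lf(x)
        if y >= 0:
            votes[y] += 1
    return int(np.argmax(votes)) if votes.any() else None

lfs = [lf_contains_goal, lf_contains_stock, lf_short]
print(aggregate("late goal wins match", lfs))                     # 1
print(aggregate("stock markets fall sharply after earnings", lfs))  # 0
```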
Microorganisms are widely distributed in the human daily living environment.
They play an essential role in environmental pollution control, disease
prevention and treatment, and food and drug production. Identification,
counting, and detection are the basic steps for making full use of different
microorganisms. However, conventional analysis methods are expensive,
laborious, and time-consuming. To overcome these limitations, artificial neural
networks are applied for microorganism image analysis. We conduct this review
to understand the development process of microorganism image analysis based on
artificial neural networks. In this review, the background and motivation are
introduced first. Then, the development of artificial neural networks and
representative networks are introduced. After that, the papers related to
microorganism image analysis based on classical and deep neural networks are
reviewed from the perspectives of different tasks. Finally, methodology
analysis and potential directions are discussed. | [
"cs.CV",
"cs.AI"
]
|
The manifold hypothesis states that high-dimensional data can be modeled as
lying on or near a low-dimensional, nonlinear manifold. Variational
Autoencoders (VAEs) approximate this manifold by learning mappings from
low-dimensional latent vectors to high-dimensional data while encouraging a
global structure in the latent space through the use of a specified prior
distribution. When this prior does not match the structure of the true data
manifold, it can lead to a less accurate model of the data. To resolve this
mismatch, we introduce the Variational Autoencoder with Learned Latent
Structure (VAELLS) which incorporates a learnable manifold model into the
latent space of a VAE. This enables us to learn the nonlinear manifold
structure from the data and use that structure to define a prior in the latent
space. The integration of a latent manifold model not only ensures that our
prior is well-matched to the data, but also allows us to define generative
transformation paths in the latent space and describe class manifolds with
transformations stemming from examples of each class. We validate our model on
examples with known latent structure and also demonstrate its capabilities on a
real-world dataset. | [
"stat.ML",
"cs.LG"
]
|
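The component VAELLS replaces can be seen in a standard VAE, where the KL term
ties the latent code to a fixed N(0, I) prior. The minimal sketch below shows
that baseline in PyTorch; the architecture sizes are arbitrary, and VAELLS
would substitute a learned manifold model for the fixed prior rather than the
KL term written here.

```python
import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    """Baseline VAE with a fixed N(0, I) prior on the latent space."""

    def __init__(self, x_dim=784, z_dim=2):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, 128), nn.ReLU(),
                                 nn.Linear(128, 2 * z_dim))
        self.dec = nn.Sequential(nn.Linear(z_dim, 128), nn.ReLU(),
                                 nn.Linear(128, x_dim))

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        # Reparameterization trick: sample z from q(z|x).
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        x_hat = self.dec(z)
        # KL(q(z|x) || N(0, I)) -- the term tied to the fixed prior,
        # which VAELLS replaces with a learned latent manifold model.
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1)
        recon = (x_hat - x).pow(2).sum(-1)
        return (recon + kl).mean()

loss = TinyVAE()(torch.rand(16, 784))
print(loss.item())
```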