text | label |
---|---|
Image relighting aims to recalibrate the illumination setting in an image. In
this paper, we propose a deep learning-based method called the multi-modal
bifurcated network (MBNet) for depth-guided image relighting. That is, given an
image and its corresponding depth map, our network generates a new image with a
given illuminant angle and color temperature. The model extracts image and
depth features with a bifurcated network in the encoder. To use the two kinds
of features effectively, we adopt dynamic dilated pyramid modules in the
decoder. Moreover, to increase the variety of the training data, we propose a
novel data processing pipeline that augments the number of training samples.
Experiments conducted on the VIDIT dataset show that the proposed solution
obtains the \textbf{1}$^{st}$ place in terms of SSIM and PMS in the NTIRE 2021
Depth Guided One-to-one Relighting Challenge. | [
"cs.CV"
] |
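As an illustration of the bifurcated-encoder idea described above, here is a minimal PyTorch sketch of a two-branch encoder; the layer widths, depths, and module names are illustrative assumptions, not the authors' exact MBNet.

```python
import torch
import torch.nn as nn

class BifurcatedEncoder(nn.Module):
    """Two parallel CNN branches: one for the RGB image, one for the depth map."""
    def __init__(self, feat: int = 64):
        super().__init__()
        self.image_branch = nn.Sequential(
            nn.Conv2d(3, feat, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.depth_branch = nn.Sequential(
            nn.Conv2d(1, feat, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, image, depth):
        # Each modality is encoded separately; a decoder then fuses the streams.
        return self.image_branch(image), self.depth_branch(depth)

img_feat, dep_feat = BifurcatedEncoder()(torch.rand(1, 3, 256, 256),
                                         torch.rand(1, 1, 256, 256))
```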
Graph convolutional networks (GCNs) have achieved promising performance on
various graph-based tasks. However, they suffer from over-smoothing when
stacking more layers. In this paper, we present a quantitative study on this
observation and develop novel insights towards the deeper GCN. First, we
interpret the current graph convolutional operations from an optimization
perspective and argue that over-smoothing is mainly caused by the naive
first-order approximation of the solution to the optimization problem.
Subsequently, we introduce two metrics to measure over-smoothing on
node-level tasks. Specifically, we calculate the fractions of the total
pairwise distance contributed by connected and by disconnected node pairs,
respectively. Based on our theoretical and empirical analysis, we establish a
universal theoretical framework of GCN from an optimization perspective and
derive a novel convolutional kernel named GCN+, which has fewer parameters
while inherently relieving over-smoothing. Extensive experiments on
real-world datasets demonstrate the superior performance of GCN+ over
state-of-the-art baseline methods on the node classification tasks. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
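The two over-smoothing metrics lend themselves to a short sketch. The snippet below computes, for a matrix of node representations and an adjacency matrix, the fractions of the total pairwise distance contributed by connected and by disconnected pairs; this is one plausible reading of the metrics, with the exact normalization assumed.

```python
import numpy as np

def smoothness_fractions(features, adj):
    """Fractions of connected / disconnected pairwise distances relative to
    the total pairwise distance over all node pairs (counted once each)."""
    n = features.shape[0]
    # Pairwise Euclidean distances between node representations.
    diff = features[:, None, :] - features[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    iu = np.triu_indices(n, k=1)           # each unordered pair once
    connected = adj[iu] > 0
    total = dist[iu].sum()
    return dist[iu][connected].sum() / total, dist[iu][~connected].sum() / total

feats = np.random.rand(5, 16)
adj = (np.random.rand(5, 5) > 0.5).astype(float)
adj = np.triu(adj, 1) + np.triu(adj, 1).T  # symmetric, no self-loops
print(smoothness_fractions(feats, adj))
```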
The objective of this paper is to perform audio-visual sound source
separation, i.e., to separate component audios from a mixture based on the
videos of sound sources. Moreover, we aim to pinpoint the source location in
the input video sequence. Recent works have shown impressive audio-visual
separation results when using prior knowledge of the source type (e.g. human
playing instrument) and pre-trained motion detectors (e.g. keypoints or optical
flows). However, at the same time, the models are limited to a certain
application domain. In this paper, we address these limitations and make the
following contributions: i) we propose a two-stage architecture, called
Appearance and Motion network (AMnet), where the stages specialise to
appearance and motion cues, respectively. The entire system is trained in a
self-supervised manner; ii) we introduce an Audio-Motion Embedding (AME)
framework to explicitly represent the motions that are related to sound; iii) we
propose an audio-motion transformer architecture for audio and motion feature
fusion; iv) we demonstrate state-of-the-art performance on two challenging
datasets (MUSIC-21 and AVE) despite the fact that we do not use any pre-trained
keypoint detectors or optical flow estimators. Project page:
https://ly-zhu.github.io/self-supervised-motion-representations | [
"cs.CV"
] |
Many available formal verification methods have been shown to be instances of
a unified Branch-and-Bound (BaB) formulation. We propose a novel machine
learning framework that can be used for designing an effective branching
strategy as well as for computing better lower bounds. Specifically, we learn
two graph neural networks (GNN) that both directly treat the network we want to
verify as a graph input and perform forward-backward passes through the GNN
layers. We use one GNN to simulate the strong branching heuristic behaviour and
another to compute a feasible dual solution of the convex relaxation, thereby
providing a valid lower bound.
We also introduce a new verification dataset that is more challenging than
those used in the literature, providing an effective alternative for testing
algorithmic improvements for verification. Whilst using just one of the GNNs
leads to a reduction in verification time, we get optimal performance when
combining the two GNN approaches. Our combined framework achieves a 50\%
reduction in both the number of branches and the time required for verification
on various convolutional networks when compared to several state-of-the-art
verification methods. In addition, we show that our GNN models generalize well
to harder properties on larger unseen networks. | [
"cs.LG",
"cs.AI"
] |
In many learning situations, resources at inference time are significantly
more constrained than resources at training time. This paper studies a general
paradigm, called Differentiable ARchitecture Compression (DARC), that combines
model compression and architecture search to learn models that are
resource-efficient at inference time. Given a resource-intensive base
architecture, DARC utilizes the training data to learn which sub-components can
be replaced by cheaper alternatives. The high-level technique can be applied to
any neural architecture, and we report experiments on state-of-the-art
convolutional neural networks for image classification. For a WideResNet with
$97.2\%$ accuracy on CIFAR-10, we improve single-sample inference speed by
$2.28\times$ and reduce the memory footprint by $5.64\times$, with no accuracy
loss. For a ResNet with $79.15\%$ Top-1 accuracy on ImageNet, we improve batch
inference speed by $1.29\times$ and reduce the memory footprint by
$3.57\times$ with $1\%$ accuracy
loss. We also give theoretical Rademacher complexity bounds in simplified
cases, showing how DARC avoids overfitting despite over-parameterization. | [
"cs.LG",
"cs.CV",
"stat.ML"
] |
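A minimal sketch of the core DARC idea, learning from training data which sub-components can be replaced by cheaper alternatives, might look as follows in PyTorch; the soft sigmoid gate and the depthwise convolution as the "cheap" alternative are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class CompressibleBlock(nn.Module):
    """Differentiable choice between an expensive sub-component and a cheaper
    alternative; after training, the gate can be rounded to keep one branch."""
    def __init__(self, channels: int = 32):
        super().__init__()
        self.expensive = nn.Conv2d(channels, channels, 3, padding=1)
        # Depthwise conv as a hypothetical cheap replacement.
        self.cheap = nn.Conv2d(channels, channels, 3, padding=1, groups=channels)
        self.alpha = nn.Parameter(torch.zeros(1))   # architecture parameter

    def forward(self, x):
        w = torch.sigmoid(self.alpha)   # soft selection, learned from data
        return w * self.expensive(x) + (1 - w) * self.cheap(x)

out = CompressibleBlock()(torch.rand(1, 32, 16, 16))
```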
In this paper, we propose a Customizable Architecture Search (CAS) approach
to automatically generate a network architecture for semantic image
segmentation. The generated network consists of a sequence of stacked
computation cells. A computation cell is represented as a directed acyclic
graph, in which each node is a hidden representation (i.e., feature map) and
each edge is associated with an operation (e.g., convolution and pooling),
which transforms data to a new layer. During the training, the CAS algorithm
explores the search space for an optimized computation cell to build a network.
The cells of the same type share one architecture but with different weights.
In real applications, however, an optimization may need to be conducted under
some constraints such as GPU time and model size. To this end, a cost
corresponding to the constraint will be assigned to each operation. When an
operation is selected during the search, its associated cost will be added to
the objective. As a result, our CAS is able to search for an optimized architecture
with customized constraints. The approach has been thoroughly evaluated on
Cityscapes and CamVid datasets, and demonstrates superior performance over
several state-of-the-art techniques. More remarkably, our CAS achieves 72.3%
mIoU on the Cityscapes dataset with speed of 108 FPS on an Nvidia TitanXp GPU. | [
"cs.CV"
] |
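The cost-augmented objective described above can be sketched in a few lines of PyTorch; the operation set, per-operation costs, and the weighting coefficient `lam` are hypothetical values for illustration.

```python
import torch
import torch.nn.functional as F

# Hypothetical per-operation costs (e.g., GPU time or parameter count).
op_costs = torch.tensor([1.0, 2.5, 0.3])        # conv3x3, conv5x5, pooling
arch_logits = torch.zeros(3, requires_grad=True)

def constrained_objective(task_loss, lam=0.1):
    # Expected cost of the cell under the current (soft) operation choice;
    # selecting a costly operation raises the objective, as described above.
    probs = F.softmax(arch_logits, dim=0)
    expected_cost = (probs * op_costs).sum()
    return task_loss + lam * expected_cost

loss = constrained_objective(task_loss=torch.tensor(0.8))
loss.backward()   # gradients flow into the architecture parameters
```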
Fully convolutional deep neural networks have been shown to be fast and
precise frameworks with great potential in image segmentation. One of the major
challenges in training such networks arises when the data is unbalanced, which is
common in many medical imaging applications such as lesion segmentation where
lesion-class voxels are often far fewer in number than non-lesion voxels. A
trained network with unbalanced data may make predictions with high precision
and low recall, being severely biased towards the non-lesion class which is
particularly undesirable in most medical applications, where false negatives
(FNs) are more important than false positives (FPs). Various methods have been
proposed to address this problem, most recently similarity loss functions and
the focal loss. In this work, we trained
fully convolutional deep neural networks using an asymmetric similarity loss
function to mitigate the issue of data imbalance and achieve a much better
tradeoff between precision and recall. To this end, we developed a 3D
FC-DenseNet with large overlapping image patches as input and an asymmetric
similarity loss layer based on the Tversky index (using F-beta scores). We used
large overlapping image patches as inputs for intrinsic and extrinsic data
augmentation, a patch selection algorithm, and a patch prediction fusion
strategy using B-spline weighted soft voting to account for the uncertainty of
prediction in patch borders. We applied this method to MS lesion segmentation
based on two different datasets of MSSEG and ISBI longitudinal MS lesion
segmentation challenge, where we achieved top performance in both challenges.
Our network trained with focal loss ranked first according to the ISBI
challenge overall score and resulted in the lowest reported lesion false
positive rate among all submitted methods. Our network trained with the
asymmetric similarity loss led to the lowest surface distance and the best
lesion true positive rate. | [
"cs.CV"
] |
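The asymmetric similarity loss based on the Tversky index has a standard closed form, sketched below in PyTorch; the particular alpha/beta values are illustrative (with beta > alpha weighting false negatives more heavily, as the precision/recall discussion above requires).

```python
import torch

def tversky_loss(pred, target, alpha=0.3, beta=0.7, eps=1e-6):
    """Asymmetric similarity loss based on the Tversky index.
    beta > alpha penalizes false negatives more than false positives,
    shifting the precision/recall tradeoff toward higher recall."""
    pred = pred.reshape(-1)       # predicted foreground probabilities
    target = target.reshape(-1)   # binary ground-truth voxels
    tp = (pred * target).sum()
    fp = (pred * (1 - target)).sum()
    fn = ((1 - pred) * target).sum()
    return 1 - (tp + eps) / (tp + alpha * fp + beta * fn + eps)

loss = tversky_loss(torch.rand(2, 1, 8, 8, 8),
                    torch.randint(0, 2, (2, 1, 8, 8, 8)).float())
```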
Unmanned aerial vehicles (UAVs) equipped with multiple complementary sensors
have tremendous potential for fast autonomous or remote-controlled semantic
scene analysis, e.g., for disaster examination. In this work, we propose a UAV
system for real-time semantic inference and fusion of multiple sensor
modalities. Semantic segmentation of LiDAR scans and RGB images, as well as
object detection on RGB and thermal images, run online onboard the UAV computer
using lightweight CNN architectures and embedded inference accelerators. We
follow a late fusion approach where semantic information from multiple
modalities augments 3D point clouds and image segmentation masks while also
generating an allocentric semantic map. Our system provides augmented semantic
images and point clouds at $\approx\,$9$\,$Hz. We evaluate the integrated
system in real-world experiments in an urban environment. | [
"cs.CV",
"cs.RO"
] |
To quickly solve new tasks in complex environments, intelligent agents need
to build up reusable knowledge. For example, a learned world model captures
knowledge about the environment that applies to new tasks. Similarly, skills
capture general behaviors that can apply to new tasks. In this paper, we
investigate how these two approaches can be integrated into a single
reinforcement learning agent. Specifically, we leverage the idea of partial
amortization for fast adaptation at test time. For this, actions are produced
by a policy that is learned over time while the skills it conditions on are
chosen using online planning. We demonstrate the benefits of our design
decisions across a suite of challenging locomotion tasks and demonstrate
improved sample efficiency in single tasks as well as in transfer from one task
to another, as compared to competitive baselines. Videos are available at:
https://sites.google.com/view/latent-skill-planning/ | [
"cs.LG",
"cs.AI",
"cs.RO",
"stat.ML"
] |
Compositionality of semantic concepts in image synthesis and analysis is
appealing as it can help in decomposing known and generatively recomposing
unknown data. For instance, we may learn concepts of changing illumination,
geometry or albedo of a scene, and try to recombine them to generate physically
meaningful but unseen data for training and testing. In practice, however, we
often do not have samples from the joint concept space available: we may have
data on illumination change in one dataset and on geometric change in another,
without complete overlap. We pose the following question: How can we learn
two or more concepts jointly from different data sets with mutual consistency
where we do not have samples from the full joint space? We present a novel
answer in this paper based on cyclic consistency over multiple concepts,
represented individually by generative adversarial networks (GANs). Our method,
ConceptGAN, can be understood as a drop-in for data augmentation to improve
resilience in real-world applications. Qualitative and quantitative
evaluations demonstrate its efficacy in generating semantically meaningful
images, as well as one-shot face verification as an example application. | [
"cs.CV",
"cs.LG"
] |
Fluorescence microscopy images contain several channels, each indicating a
marker staining the sample. Since many different marker combinations are
utilized in practice, it has been challenging to apply deep learning based
segmentation models, which expect a predefined channel combination for all
training samples as well as at inference for future application. Recent work
circumvents this problem using a modality attention approach to be effective
across any possible marker combination. However, for combinations that do not
exist in a labeled training dataset, one cannot have any estimation of
potential segmentation quality if that combination is encountered during
inference. Without this, one not only lacks quality assurance but also does not
know where to direct any additional imaging and labeling effort. We herein
propose a method to estimate segmentation quality on unlabeled images by (i)
estimating both aleatoric and epistemic uncertainties of convolutional neural
networks for image segmentation, and (ii) training a Random Forest model for
the interpretation of uncertainty features via regression to their
corresponding segmentation metrics. Additionally, we demonstrate that including
these uncertainty measures during training can provide an improvement on
segmentation performance. | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
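A minimal sketch of the two-step recipe described above, assuming MC-dropout-style stochastic forward passes for the uncertainties and scikit-learn's RandomForestRegressor for the quality-regression step; the specific feature summaries and the Dice target are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def uncertainty_features(mc_probs, eps=1e-8):
    """mc_probs: (T, H, W) foreground probabilities from T stochastic
    (e.g. MC-dropout) forward passes of a segmentation network."""
    mean_p = mc_probs.mean(axis=0)
    # Entropy of the mean prediction (total-uncertainty flavored).
    entropy = -(mean_p * np.log(mean_p + eps)
                + (1 - mean_p) * np.log(1 - mean_p + eps))
    # Variance across passes (epistemic-uncertainty flavored).
    variance = mc_probs.var(axis=0)
    return np.array([entropy.mean(), entropy.max(),
                     variance.mean(), variance.max()])

# Hypothetical training set: features from labeled images and their Dice scores.
X = np.stack([uncertainty_features(np.random.rand(10, 64, 64)) for _ in range(50)])
y = np.random.rand(50)                 # per-image segmentation metric
rf = RandomForestRegressor(n_estimators=100).fit(X, y)
predicted_dice = rf.predict(X[:1])     # quality estimate for an unlabeled image
```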
In this thesis, we develop various techniques for working with sets in
machine learning. Each input or output is not an image or a sequence, but a
set: an unordered collection of multiple objects, each object described by a
feature vector. Their unordered nature makes them suitable for modeling a wide
variety of data, ranging from objects in images to point clouds to graphs. Deep
learning has recently shown great success on other types of structured data, so
we aim to build the necessary structures for sets into deep neural networks.
The first focus of this thesis is the learning of better set representations
(sets as input). Existing approaches have bottlenecks that prevent them from
properly modeling relations between objects within the set. To address this
issue, we develop a variety of techniques for different scenarios and show that
alleviating the bottleneck leads to consistent improvements across many
experiments.
The second focus of this thesis is the prediction of sets (sets as output).
Current approaches do not take the unordered nature of sets into account
properly. We determine that this results in a problem that causes discontinuity
issues with many set prediction tasks and prevents them from learning some
extremely simple datasets. To avoid this problem, we develop two models that
properly take the structure of sets into account. Various experiments show that
our set prediction techniques provide significant benefits over existing
approaches. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
The timeline of computer vision research is marked with advances in learning
and utilizing efficient contextual representations. Most of them, however, are
targeted at improving model performance on a single downstream task. We
consider a multi-task environment for dense prediction tasks, represented by a
common backbone and independent task-specific heads. Our goal is to find the
most efficient way to refine each task prediction by capturing cross-task
contexts dependent on tasks' relations. We explore various attention-based
contexts, such as global and local, in the multi-task setting and analyze their
behavior when applied to refine each task independently. Empirical findings
confirm that different source-target task pairs benefit from different context
types. To automate the selection process, we propose an Adaptive
Task-Relational Context (ATRC) module, which samples the pool of all available
contexts for each task pair using neural architecture search and outputs the
optimal configuration for deployment. Our method achieves state-of-the-art
performance on two important multi-task benchmarks, namely NYUD-v2 and
PASCAL-Context. The proposed ATRC has low computational overhead and can be used
as a drop-in refinement module for any supervised multi-task architecture. | [
"cs.CV"
] |
Moving Object Detection (MOD) is a critical task for autonomous vehicles as
moving objects represent higher collision risk than static ones. The trajectory
of the ego-vehicle is planned based on the future states of detected moving
objects. It is quite challenging as the ego-motion has to be modelled and
compensated to be able to understand the motion of the surrounding objects. In
this work, we propose a real-time end-to-end CNN architecture for MOD utilizing
spatio-temporal context to improve robustness. We construct a novel time-aware
architecture exploiting temporal motion information embedded within sequential
images in addition to explicit motion maps using optical flow images. We
demonstrate the impact of our algorithm on the KITTI dataset, where we obtain an
improvement of 8% relative to the baselines. We compare our algorithm with
state-of-the-art methods and achieve competitive results on KITTI-Motion
dataset in terms of accuracy while running three times faster. The proposed
algorithm runs at 23 fps on a standard desktop GPU targeting deployment on
embedded platforms. | [
"cs.CV",
"cs.LG",
"cs.RO",
"stat.ML"
] |
Accuracy and interpretability are two essential properties for a crime
prediction model. Because of the adverse effects that crime can have on
human life, the economy and safety, we need a model that can predict future
occurrences of crime as accurately as possible so that early steps can be taken
to avoid the crime. On the other hand, an interpretable model reveals the
reason behind a model's prediction, ensures its transparency and allows us to
plan the crime prevention steps accordingly. The key challenge in developing
the model is to capture the non-linear spatial dependency and temporal patterns
of a specific crime category while keeping the underlying structure of the
model interpretable. In this paper, we develop AIST, an Attention-based
Interpretable Spatio-Temporal Network for crime prediction. AIST models the
dynamic spatio-temporal correlations for a crime category based on past crime
occurrences, external features (e.g., traffic flow and point of interest (POI)
information) and recurring trends of crime. Extensive experiments show the
superiority of our model in terms of both accuracy and interpretability using
real datasets. | [
"cs.LG"
] |
This paper presents a novel design methodology for architecting a
light-weight and faster DNN architecture for vision applications. The
effectiveness of the architecture is demonstrated on the color-constancy use
case, an inherent block in camera and imaging pipelines. Specifically, we present a
multi-branch architecture that disassembles the contextual features and color
properties from an image, and later combines them to predict a global property
(e.g., global illumination). We also propose an implicit regularization
technique by designing a cross-branch regularization block that enables the
network to retain high generalization accuracy. With a conservative selection
of computational operators, the proposed architecture achieves state-of-the-art
accuracy with 30x fewer model parameters and 70x faster inference time for
color constancy. It is also shown that the proposed architecture is generic and
achieves similar efficiency in other vision applications such as Low-Light
photography. | [
"cs.CV"
] |
We propose a graphical model framework for goal-conditioned RL, with an EM
algorithm that operates on the lower bound of the RL objective. The E-step
provides a natural interpretation of how 'learning in hindsight' techniques,
such as HER, handle extremely sparse goal-conditioned rewards. The M-step
reduces policy optimization to supervised learning updates, which greatly
stabilizes end-to-end training on high-dimensional inputs such as images. We
show that the combined algorithm, hEM, significantly outperforms model-free
baselines on a wide range of goal-conditioned benchmarks with sparse rewards. | [
"cs.LG",
"stat.ML"
] |
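For concreteness, the "learning in hindsight" mechanism that the E-step of the abstract above reinterprets can be sketched as standard HER-style goal relabeling; the episode format and reward function below are assumptions, not the paper's exact hEM implementation.

```python
import random

def her_relabel(episode, reward_fn, k=4):
    """Hindsight relabeling in the spirit of HER: failed episodes are reused
    by pretending that achieved states were the intended goals. `episode` is
    a list of (state, action, achieved_goal, next_state) tuples; the returned
    relabeled transitions supplement the original ones in the replay buffer."""
    relabeled = []
    for t, (s, a, _, s_next) in enumerate(episode):
        for _ in range(k):
            # Sample a future achieved state from the same episode as the goal.
            future = random.randint(t, len(episode) - 1)
            new_goal = episode[future][2]
            r = reward_fn(s_next, new_goal)   # sparse reward under the new goal
            relabeled.append((s, a, new_goal, r, s_next))
    return relabeled
```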
Recent reinforcement learning algorithms, though achieving impressive results
in various fields, suffer from brittle training effects such as regression in
results and high sensitivity to initialization and parameters. We claim that
some of the brittleness stems from variance differences, i.e., when different
environment areas (states and/or actions) have different reward variance.
This causes two problems: First, the "Boring Areas Trap" in algorithms such as
Q-learning, where moving between areas depends on the current area variance,
and getting out of a boring area is hard due to its low variance. Second, the
"Manipulative Consultant" problem, when value-estimation functions used in DQN
and Actor-Critic algorithms influence the agent to prefer boring areas,
regardless of the mean reward return, as they maximize estimation precision
rather than rewards. This sheds new light on how exploration contributes to
training, as it helps with both challenges. Cognitive experiments in humans
showed that noisy reward signals may paradoxically improve performance. We
explain this using the two mentioned problems, claiming that both humans and
algorithms may share similar challenges. Inspired by this result, we propose
Adaptive Symmetric Reward Noising (ASRN), which adds Gaussian noise to rewards
according to their states' estimated variance, thus avoiding the two problems
without affecting the environment's mean reward behavior.
We conduct our experiments in a Multi-Armed Bandit problem with variance
differences. We demonstrate that a Q-learning algorithm exhibits the
brittleness effect in this problem, and that the ASRN scheme can dramatically
improve the results. We show that ASRN helps a DQN training process reach
better results in an end-to-end autonomous driving task using the AirSim driving
simulator. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
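A minimal sketch of reward noising in the spirit of ASRN for a tabular setting: a running per-state variance estimate determines how much zero-mean Gaussian noise to add. The equalization target and the exponential-moving-average update rule are illustrative assumptions.

```python
import numpy as np

class SymmetricRewardNoiser:
    """Track a running estimate of per-state reward variance and add zero-mean
    Gaussian noise so that low-variance ("boring") areas no longer look
    artificially attractive to variance-sensitive value estimators."""
    def __init__(self, n_states, target_std=1.0, momentum=0.99):
        self.mean = np.zeros(n_states)
        self.var = np.ones(n_states)
        self.target_std = target_std
        self.momentum = momentum

    def __call__(self, state, reward, rng=np.random):
        m = self.momentum
        self.mean[state] = m * self.mean[state] + (1 - m) * reward
        self.var[state] = m * self.var[state] + (1 - m) * (reward - self.mean[state]) ** 2
        # Add only enough noise to raise this state's std to the target;
        # zero-mean noise leaves the environment's mean reward unchanged.
        extra = max(self.target_std ** 2 - self.var[state], 0.0)
        return reward + rng.normal(0.0, np.sqrt(extra))
```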
Breast cancer is one of the most common causes of death among women.
Mammography is a widely used imaging modality that can be used for cancer
detection in its early stages. Deep learning is widely used for the detection
of cancerous masses in the images obtained via mammography. The need to improve
accuracy remains constant due to the sensitive nature of the datasets, so we
introduce segmentation and wavelet transforms to enhance the important features
in the image scans. Our proposed system aids the radiologist in the screening
phase of cancer detection by using a combination of segmentation and wavelet
transforms as pre-processing augmentation that leads to transfer learning in
neural networks. The proposed system with these pre-processing techniques
significantly increases the detection accuracy on the Mini-MIAS dataset. | [
"cs.CV",
"cs.AI",
"cs.HC"
] |
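A small sketch of wavelet-based enhancement as pre-processing, using the PyWavelets package; the choice of the 'haar' wavelet and the detail-band boost factor are arbitrary illustrative choices, not the paper's exact pipeline.

```python
import numpy as np
import pywt

def wavelet_enhance(scan):
    """Illustrative pre-processing in the spirit described above: a 2D
    discrete wavelet transform separates coarse structure from detail,
    and re-weighting the detail bands emphasizes mass-like features."""
    cA, (cH, cV, cD) = pywt.dwt2(scan.astype(float), 'haar')
    boosted = (cH * 1.5, cV * 1.5, cD * 1.5)   # boost factor is an assumption
    return pywt.idwt2((cA, boosted), 'haar')

enhanced = wavelet_enhance(np.random.rand(256, 256))
```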
Neural Architecture Search (NAS) achieved many breakthroughs in recent years.
In spite of its remarkable progress, many algorithms are restricted to
particular search spaces. They also lack efficient mechanisms to reuse
knowledge when confronting multiple tasks. These challenges preclude their
applicability, and motivate our proposal of CATCH, a novel Context-bAsed meTa
reinforcement learning (RL) algorithm for transferrable arChitecture searcH.
The combination of meta-learning and RL allows CATCH to efficiently adapt to
new tasks while being agnostic to search spaces. CATCH utilizes a probabilistic
encoder to encode task properties into latent context variables, which then
guide CATCH's controller to quickly "catch" top-performing networks. The
contexts also assist a network evaluator in filtering inferior candidates and
speed up learning. Extensive experiments demonstrate CATCH's universality and
search efficiency over many other widely-recognized algorithms. It is also
capable of handling cross-domain architecture search as competitive networks on
ImageNet, COCO, and Cityscapes are identified. This is the first work to our
knowledge that proposes an efficient transferrable NAS solution while
maintaining robustness across various settings. | [
"cs.LG",
"cs.AI"
] |
Object detection in thermal images is an important computer vision task and
has many applications such as unmanned vehicles, robotics, surveillance and
night vision. Deep learning based detectors have achieved major progress, which
usually need large amount of labelled training data. However, labelled data for
object detection in thermal images is scarce and expensive to collect. How to
take advantage of the large number of labelled visible images and adapt them to
the thermal image domain remains an open problem. This paper proposes an unsupervised
image-generation enhanced adaptation method for object detection in thermal
images. To reduce the gap between visible domain and thermal domain, the
proposed method manages to generate simulated fake thermal images that are
similar to the target images, and preserves the annotation information of the
visible source domain. The image generation includes a CycleGAN based
image-to-image translation and an intensity inversion transformation. The
generated fake thermal images are used as a renewed source domain, and the
off-the-shelf Domain Adaptive Faster RCNN is then utilized to reduce the gap
between the generated intermediate domain and the thermal target domain. Experiments
demonstrate the effectiveness and superiority of the proposed method. | [
"cs.CV",
"eess.IV"
] |
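The intensity inversion transformation mentioned alongside the CycleGAN translation admits a one-line sketch; treating it as a simple 8-bit grayscale inversion is an assumption about its exact form.

```python
import numpy as np

def intensity_inversion(gray_image):
    """Bright visible regions often appear dark in thermal imagery, so
    inverting grayscale intensities can narrow the domain gap."""
    return 255 - gray_image  # assumes an 8-bit single-channel image

fake_thermal = intensity_inversion(
    np.random.randint(0, 256, (64, 64), dtype=np.uint8))
```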
Reverse-engineering bar charts extracts textual and numeric information from
the visual representations of bar charts to support application scenarios that
require the underlying information. In this paper, we propose a neural
network-based method for reverse-engineering bar charts. We adopt a neural
network-based object detection model to simultaneously localize and classify
textual information. This approach improves the efficiency of textual
information extraction. We design an encoder-decoder framework that integrates
convolutional and recurrent neural networks to extract numeric information. We
further introduce an attention mechanism into the framework to achieve high
accuracy and robustness. Synthetic and real-world datasets are used to evaluate
the effectiveness of the method. To the best of our knowledge, this work is the
first to construct a complete neural network-based method for
reverse-engineering bar charts. | [
"cs.CV",
"cs.LG"
] |
Human motion prediction aims to forecast future human poses given a
historical motion. Whether based on recurrent or feed-forward neural networks,
existing learning based methods fail to model the observation that human motion
tends to repeat itself, even for complex sports actions and cooking activities.
Here, we introduce an attention based feed-forward network that explicitly
leverages this observation. In particular, instead of modeling frame-wise
attention via pose similarity, we propose to extract motion attention to
capture the similarity between the current motion context and the historical
motion sub-sequences. In this context, we study the use of different types of
attention, computed at joint, body part, and full pose levels. Aggregating the
relevant past motions and processing the result with a graph convolutional
network allows us to effectively exploit motion patterns from the long-term
history to predict the future poses. Our experiments on Human3.6M, AMASS and
3DPW validate the benefits of our approach for both periodic and
non-periodic actions. Thanks to our attention model, our approach yields
state-of-the-art results on all three datasets. Our code is available at
https://github.com/wei-mao-2019/HisRepItself. | [
"cs.CV"
] |
Accurate traffic state prediction is the foundation of transportation control
and guidance. It is very challenging due to the complex spatiotemporal
dependencies in traffic data. Existing works cannot perform well for multi-step
traffic prediction that involves a long future time period. The spatiotemporal
information dilution becomes severe when the time gap between the input step and
the predicted step is large, especially when the traffic data is insufficient or
noisy. To address this issue, we propose a multi-spatial graph convolution
based Seq2Seq model. Our main novelties are three aspects: (1) We enrich the
spatiotemporal information of model inputs by fusing multi-view features (time,
location and traffic states). (2) We build multiple kinds of spatial
correlations based on both prior knowledge and data-driven knowledge to improve
model performance, especially in insufficient or noisy data cases. (3) We
design a novel spatiotemporal attention mechanism based on reachability
knowledge that produces high-level features fed directly into the Seq2Seq
decoder to ease information dilution. Our model is evaluated on two real-world
traffic datasets and achieves better performance than other competitors. | [
"cs.LG",
"cs.AI"
] |
Advances in image-based dietary assessment methods have allowed nutrition
professionals and researchers to improve the accuracy of dietary assessment,
where images of food consumed are captured using smartphones or wearable
devices. These images are then analyzed using computer vision methods to
estimate energy and nutrition content of the foods. Food image segmentation,
which determines the regions in an image where foods are located, plays an
important role in this process. Current methods are data dependent, thus cannot
generalize well for different food types. To address this problem, we propose a
class-agnostic food image segmentation method. Our method uses a pair of eating
scene images, one before start eating and one after eating is completed. Using
information from both the before and after eating images, we can segment food
images by finding the salient missing objects without any prior information
about the food class. We model a paradigm of top-down saliency which guides the
attention of the human visual system (HVS) based on a task to find the salient
missing objects in a pair of images. Our method is validated on food images
collected from a dietary study and shows promising results. | [
"cs.CV",
"cs.LG"
] |
Supervised machine learning (ML) algorithms are aimed at maximizing
classification performance under available energy and storage constraints. They
try to map the training data to the corresponding labels while ensuring
generalizability to unseen data. However, they do not integrate meaning-based
relationships among labels in the decision process. On the other hand, natural
language processing (NLP) algorithms emphasize the importance of semantic
information. In this paper, we synthesize the complementary advantages of
supervised ML and NLP algorithms into one method that we refer to as SECRET
(Semantically Enhanced Classification of REal-world Tasks). SECRET performs
classifications by fusing the semantic information of the labels with the
available data: it combines the feature space of the supervised algorithms with
the semantic space of the NLP algorithms and predicts labels based on this
joint space. Experimental results indicate that, compared to traditional
supervised learning, SECRET achieves up to 14.0% accuracy and 13.1% F1 score
improvements. Moreover, compared to ensemble methods, SECRET achieves up to
12.7% accuracy and 13.3% F1 score improvements. This points to a new research
direction for supervised classification based on incorporation of semantic
information. | [
"cs.LG",
"cs.CL",
"stat.ML"
] |
The homotopy model is an excellent tool exploited by diverse research works in
the field of machine learning. However, its flexibility is limited by its lack
of adaptiveness, i.e., the appropriate homotopy coefficients must be fixed or
tuned manually. To address this problem, we propose a novel adaptive
homotopy framework (AH) in which the Maclaurin duality is employed, such that
the homotopy parameters can be adaptively obtained. Accordingly, the proposed
AH can be widely utilized to enhance the homotopy-based algorithm. In
particular, in this paper, we apply AH to contrastive learning (AHCL) such that
it can be effectively transferred from weakly-supervised learning (given label
priors) to unsupervised learning, where the soft labels of contrastive learning are
directly and adaptively learned. Accordingly, AHCL has the adaptive ability to
extract deep features without any sort of prior information. Consequently, the
affinity matrix formulated by the related adaptive labels can be constructed as
the deep Laplacian graph that incorporates the topology of deep representations
for the inputs. Eventually, extensive experiments on benchmark datasets
validate the superiority of our method. | [
"cs.CV"
] |
The optical flow of natural scenes is a combination of the motion of the
observer and the independent motion of objects. Existing algorithms typically
focus on either recovering motion and structure under the assumption of a
purely static world or optical flow for general unconstrained scenes. We
combine these approaches in an optical flow algorithm that estimates an
explicit segmentation of moving objects from appearance and physical
constraints. In static regions we take advantage of strong constraints to
jointly estimate the camera motion and the 3D structure of the scene over
multiple frames. This allows us to also regularize the structure instead of the
motion. Our formulation uses a Plane+Parallax framework, which works even under
small baselines, and reduces the motion estimation to a one-dimensional search
problem, resulting in more accurate estimation. In moving regions the flow is
treated as unconstrained, and computed with an existing optical flow method.
The resulting Mostly-Rigid Flow (MR-Flow) method achieves state-of-the-art
results on both the MPI-Sintel and KITTI-2015 benchmarks. | [
"cs.CV"
] |
Severe weather conditions such as rain and snow adversely affect the visual
quality of images captured under such conditions thus rendering them useless
for further usage and sharing. In addition, such degraded images drastically
affect performance of vision systems. Hence, it is important to solve the
problem of single image de-raining/de-snowing. However, this is a difficult
problem to solve due to its inherent ill-posed nature. Existing approaches
attempt to introduce prior information to convert it into a well-posed problem.
In this paper, we investigate a new point of view in addressing the single
image de-raining problem. Instead of focusing only on deciding what is a good
prior or a good framework to achieve good quantitative and qualitative
performance, we also ensure that the de-rained image itself does not degrade
the performance of a given computer vision algorithm such as detection and
classification. In other words, the de-rained result should be
indistinguishable from its corresponding clear image to a given discriminator.
This criterion can be directly incorporated into the optimization framework by
using the recently introduced conditional generative adversarial networks
(GANs). To minimize artifacts introduced by GANs and ensure better visual
quality, a new refined loss function is introduced. Based on this, we propose a
novel single image de-raining method called Image De-raining Conditional
Generative Adversarial Network (ID-CGAN), which incorporates quantitative,
visual and discriminative performance into the objective function. Experiments
evaluated on synthetic images and real images show that the proposed method
outperforms many recent state-of-the-art single image de-raining methods in
terms of quantitative and visual performance. | [
"cs.CV"
] |
This paper presents a new regularization method to train a fully
convolutional network for semantic tissue segmentation in histopathological
images. This method relies on the benefit of unsupervised learning, in the form
of image reconstruction, for network training. To this end, it puts forward an
idea of defining a new embedding that allows uniting the main supervised task
of semantic segmentation and an auxiliary unsupervised task of image
reconstruction into a single one and proposes to learn this united task by a
single generative model. This embedding generates an output image by
superimposing an input image on its segmentation map. Then, the method learns
to translate the input image to this embedded output image using a conditional
generative adversarial network, which is known as quite effective for
image-to-image translations. This proposal is different from the existing
approach that uses image reconstruction for the same regularization purpose.
The existing approach considers segmentation and image reconstruction as two
separate tasks in a multi-task network, defines their losses independently, and
combines them in a joint loss function. However, the definition of such a
function requires externally determining the right contributions of the supervised
and unsupervised losses that yield balanced learning between the segmentation
and image reconstruction tasks. The proposed approach provides an easier
solution to this problem by uniting these two tasks into a single one, which
intrinsically combines their losses. We test our approach on three datasets of
histopathological images. Our experiments demonstrate that it leads to better
segmentation results in these datasets, compared to its counterparts. | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
The world of empirical machine learning (ML) strongly relies on benchmarks in
order to determine the relative effectiveness of different algorithms and
methods. This paper proposes the notion of "a benchmark lottery" that describes
the overall fragility of the ML benchmarking process. The benchmark lottery
postulates that many factors, other than fundamental algorithmic superiority,
may lead to a method being perceived as superior. On multiple benchmark setups
that are prevalent in the ML community, we show that the relative performance
of algorithms may be altered significantly simply by choosing different
benchmark tasks, highlighting the fragility of the current paradigms and
potentially fallacious interpretations derived from benchmarking ML methods. Given
that every benchmark makes a statement about what it perceives to be important,
we argue that this might lead to biased progress in the community. We discuss
the implications of the observed phenomena and provide recommendations on
mitigating them using multiple machine learning domains and communities as use
cases, including natural language processing, computer vision, information
retrieval, recommender systems, and reinforcement learning. | [
"cs.LG",
"cs.AI",
"cs.CL",
"cs.CV",
"cs.IR"
] |
Since the proposal of the graph neural network (GNN) by Gori et al. (2005)
and Scarselli et al. (2008), one of the major problems in training GNNs was
their struggle to propagate information between distant nodes in the graph. We
propose a new explanation for this problem: GNNs are susceptible to a
bottleneck when aggregating messages across a long path. This bottleneck causes
the over-squashing of exponentially growing information into fixed-size
vectors. As a result, GNNs fail to propagate messages originating from distant
nodes and perform poorly when the prediction task depends on long-range
interaction. In this paper, we highlight the inherent problem of over-squashing
in GNNs: we demonstrate that the bottleneck hinders popular GNNs from fitting
long-range signals in the training data; we further show that GNNs that absorb
incoming edges equally, such as GCN and GIN, are more susceptible to
over-squashing than GAT and GGNN; finally, we show that prior work, which
extensively tuned GNN models of long-range problems, suffers from
over-squashing, and that breaking the bottleneck improves their
state-of-the-art results without any tuning or additional weights. Our code is
available at https://github.com/tech-srl/bottleneck/ . | [
"cs.LG",
"stat.ML"
] |
In linear inverse problems, the goal is to recover a target signal from
undersampled, incomplete or noisy linear measurements. Typically, the recovery
relies on complex numerical optimization methods; recent approaches perform an
unfolding of a numerical algorithm into a neural network form, resulting in a
substantial reduction of the computational complexity. In this paper, we
consider the recovery of a target signal with the aid of a correlated signal,
the so-called side information (SI), and propose a deep unfolding model that
incorporates SI. The proposed model is used to learn coupled representations of
correlated signals from different modalities, enabling the recovery of
multimodal data at a low computational cost. As such, our work introduces the
first deep unfolding method with SI, which actually comes from a different
modality. We apply our model to reconstruct near-infrared images from
undersampled measurements given RGB images as SI. Experimental results
demonstrate the superior performance of the proposed framework against
single-modal deep learning methods that do not use SI, multimodal deep learning
designs, and optimization algorithms. | [
"cs.LG",
"eess.SP",
"stat.ML"
] |
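One way to picture a deep unfolding layer with side information is an ISTA-style update augmented with an SI coupling term, sketched below in PyTorch; the linear coupling and layer sizes are illustrative assumptions rather than the paper's exact model.

```python
import torch
import torch.nn as nn

class UnfoldedLayerSI(nn.Module):
    """One unfolded iteration of an ISTA-like update with side information."""
    def __init__(self, n, m):
        super().__init__()
        self.W = nn.Linear(m, n, bias=False)   # measurement-to-signal map
        self.S = nn.Linear(n, n, bias=False)   # signal self-update
        self.G = nn.Linear(n, n, bias=False)   # side-information coupling
        self.theta = nn.Parameter(torch.tensor(0.1))  # learned threshold

    def forward(self, x, y, si):
        pre = self.S(x) + self.W(y) + self.G(si)
        # Soft-thresholding: the proximal step of an l1-regularized problem.
        return torch.sign(pre) * torch.relu(pre.abs() - self.theta)

layers = nn.ModuleList(UnfoldedLayerSI(n=128, m=32) for _ in range(5))
x, y, si = torch.zeros(1, 128), torch.rand(1, 32), torch.rand(1, 128)
for layer in layers:
    x = layer(x, y, si)   # y: undersampled NIR measurements, si: RGB features
```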
In this work, we address the challenging video scene parsing problem by
developing effective representation learning methods given limited parsing
annotations. In particular, we contribute two novel methods that constitute a
unified parsing framework. (1) \textbf{Predictive feature learning} from
nearly unlimited unlabeled video data. Different from existing methods learning
features from single frame parsing, we learn spatiotemporal discriminative
features by enforcing a parsing network to predict future frames and their
parsing maps (if available) given only historical frames. In this way, the
network can effectively learn to capture video dynamics and temporal context,
which are critical clues for video scene parsing, without requiring extra
manual annotations. (2) \textbf{Prediction steering parsing} architecture that
effectively adapts the learned spatiotemporal features to scene parsing tasks
and provides strong guidance for any off-the-shelf parsing model to achieve
better video scene parsing performance. Extensive experiments over two
challenging datasets, Cityscapes and CamVid, have demonstrated the
effectiveness of our methods by showing significant improvement over
well-established baselines. | [
"cs.CV"
] |
Graph neural networks (GNNs) are widely used to learn powerful
representations of graph-structured data. Recent work demonstrates that
transferring knowledge from self-supervised tasks to downstream tasks could
further improve graph representation. However, there is an inherent gap between
self-supervised tasks and downstream tasks in terms of optimization objective
and training data. Conventional pre-training methods may be not effective
enough on knowledge transfer since they do not make any adaptation for
downstream tasks. To solve such problems, we propose a new transfer learning
paradigm on GNNs which could effectively leverage self-supervised tasks as
auxiliary tasks to help the target task. Our methods would adaptively select
and combine different auxiliary tasks with the target task in the fine-tuning
stage. We design an adaptive auxiliary loss weighting model to learn the
weights of auxiliary tasks by quantifying the consistency between auxiliary
tasks and the target task. In addition, we learn the weighting model through
meta-learning. Our methods can be applied to various transfer learning
approaches; they perform well not only in multi-task learning but also in
pre-training and fine-tuning. Comprehensive experiments on multiple downstream
tasks demonstrate that the proposed methods can effectively combine auxiliary
tasks with the target task and significantly improve the performance compared
to state-of-the-art methods. | [
"cs.LG"
] |
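The notion of "quantifying the consistency between auxiliary tasks and the target task" can be illustrated with a gradient-similarity heuristic, sketched below; the paper learns the weighting model via meta-learning, so this cosine-similarity proxy is only an assumption for illustration.

```python
import torch

def consistency_weight(aux_loss, target_loss, shared_params):
    """Weight an auxiliary task by the cosine similarity between its gradient
    and the target task's gradient on shared parameters, so that auxiliary
    tasks consistent with the target task contribute more."""
    g_aux = torch.autograd.grad(aux_loss, shared_params, retain_graph=True)
    g_tgt = torch.autograd.grad(target_loss, shared_params, retain_graph=True)
    ga = torch.cat([g.flatten() for g in g_aux])
    gt = torch.cat([g.flatten() for g in g_tgt])
    # Negative similarity means the auxiliary task conflicts; clamp it to zero.
    return torch.clamp(torch.nn.functional.cosine_similarity(ga, gt, dim=0),
                       min=0.0)
```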
One of the key problems of GNNs is how to describe the importance of neighbor
nodes in the aggregation process for learning node representations. A class of
GNNs solves this problem by learning implicit weights to represent the
importance of neighbor nodes, which we call implicit GNNs such as Graph
Attention Network. The basic idea of implicit GNNs is to introduce graph
information with special properties followed by Learnable Transformation
Structures (LTS) which encode the importance of neighbor nodes via a
data-driven way. In this paper, we argue that LTS makes the special properties
of graph information disappear during the learning process, resulting in graph
information unhelpful for learning node representations. We call this
phenomenon Graph Information Vanishing (GIV). Also, we find that LTS maps
different graph information into highly similar results. To validate the above
two points, we design two sets of 70 random experiments on five Implicit GNNs
methods and seven benchmark datasets by using a random permutation operator to
randomly disrupt the order of graph information and replacing graph information
with random values. We find that randomization does not affect the model
performance in 93\% of the cases, with the remaining roughly 7\% causing an
average 0.5\% accuracy loss. Moreover, the cosine similarity between the
outputs that LTS generates from different graph information exceeds 99\% in
81\% of the cases. The
experimental results provide evidence to support the existence of GIV in
Implicit GNNs and imply that the existing methods of Implicit GNNs do not make
good use of graph information. The relationship between graph information and
LTS should be rethought to ensure that graph information is used in learning
node representations. | [
"cs.LG",
"cs.SI"
] |
Deep energy-based models (EBMs) are very flexible in distribution
parametrization but computationally challenging because of the intractable
partition function. They are typically trained via maximum likelihood, using
contrastive divergence to approximate the gradient of the KL divergence between
data and model distribution. While KL divergence has many desirable properties,
other f-divergences have shown advantages in training implicit density
generative models such as generative adversarial networks. In this paper, we
propose a general variational framework termed f-EBM to train EBMs using any
desired f-divergence. We introduce a corresponding optimization algorithm and
prove its local convergence property with non-linear dynamical systems theory.
Experimental results demonstrate the superiority of f-EBM over contrastive
divergence, as well as the benefits of training EBMs using f-divergences other
than KL. | [
"cs.LG",
"stat.ML"
] |
Together with the rapid development of the Internet of Things (IoT), human
activity recognition (HAR) using wearable Inertial Measurement Units (IMUs)
becomes a promising technology for many research areas. Recently, deep
learning-based methods pave a new way of understanding and performing analysis
of the complex data in the HAR system. However, the performance of these
methods is mostly based on the quality and quantity of the collected data. In
this paper, we propose to build a large database based on virtual
IMUs and then address technical issues by introducing a multiple-domain deep
learning framework consisting of three technical parts. In the first part, we
propose to learn the single-frame human activity from the noisy IMU data with
hybrid convolutional neural networks (CNNs) in the semi-supervised form. For
the second part, the extracted data features are fused according to the
principle of uncertainty-aware consistency, which reduces the uncertainty by
weighting the importance of the features. The transfer learning is performed in
the last part based on the newly released Archive of Motion Capture as Surface
Shapes (AMASS) dataset, containing abundant synthetic human poses, which
enhances the variety and diversity of the training dataset and is beneficial
for the process of training and feature transfer in the proposed method. The
efficiency and effectiveness of the proposed method have been demonstrated in
the real deep inertial poser (DIP) dataset. The experimental results show that
the proposed methods can surprisingly converge within a few iterations and
outperform all competing methods. | [
"cs.CV"
] |
For a product of interest, we propose a search method to surface a set of
reference products. The reference products can be used as candidates to support
downstream modeling tasks and business applications. The search method consists
of product representation learning and fingerprint-type vector searching. The
product catalog information is transformed into a high-quality embedding of low
dimensions via a novel attention auto-encoder neural network, and the embedding
is further coupled with a binary encoding vector for fast retrieval. We conduct
extensive experiments to evaluate the proposed method, and compare it with peer
services to demonstrate its advantage in terms of search return rate and
precision. | [
"stat.ML",
"cs.IR",
"cs.LG"
] |
Contour shape alignment is a fundamental but challenging problem in computer
vision, especially when the observations are partial, noisy, and largely
misaligned. Recent ConvNet-based architectures that were proposed to align
image structures tend to fail with contour representation of shapes, mostly due
to the use of proximity-insensitive pixel-wise similarity measures as loss
functions in their training processes. This work presents a novel ConvNet,
"ProAlignNet" that accounts for large scale misalignments and complex
transformations between the contour shapes. It infers the warp parameters in a
multi-scale fashion with progressively increasing complex transformations over
increasing scales. It learns, without supervision, to align contours,
agnostic to noise and missing parts, by training with a novel loss function
which is derived as an upper bound of a proximity-sensitive and local
shape-dependent similarity metric based on the classical morphological Chamfer
distance transform. We evaluate the reliability of these proposals on a
simulated MNIST noisy contours dataset via some basic sanity check experiments.
Next, we demonstrate the effectiveness of the proposed models in two real-world
applications of (i) aligning geo-parcel data to aerial image maps and (ii)
refining coarsely annotated segmentation labels. In both applications, the
proposed models consistently perform superior to state-of-the-art methods. | [
"cs.CV",
"cs.LG"
] |
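The proximity-sensitive similarity underlying the loss can be illustrated with the classical (non-differentiable) Chamfer distance computed from Euclidean distance transforms; the trainable loss in the paper is an upper bound of such a metric, so this NumPy sketch is a reference computation, not the training loss itself.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def chamfer_distance(contour_a, contour_b):
    """Symmetric Chamfer distance between two binary contour images.
    Proximity-sensitive: nearly-aligned contours score much better than a
    pixel-wise overlap measure would suggest."""
    # Distance from every pixel to the nearest contour pixel (contour pixels
    # must be the zeros of the input to distance_transform_edt).
    dt_a = distance_transform_edt(~contour_a.astype(bool))
    dt_b = distance_transform_edt(~contour_b.astype(bool))
    a_to_b = dt_b[contour_a.astype(bool)].mean()
    b_to_a = dt_a[contour_b.astype(bool)].mean()
    return 0.5 * (a_to_b + b_to_a)
```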
Understanding 3D object structure from a single image is an important but
challenging task in computer vision, mostly due to the lack of 3D object
annotations for real images. Previous research tackled this problem by either
searching for a 3D shape that best explains 2D annotations, or training purely
on synthetic data with ground truth 3D information. In this work, we propose 3D
INterpreter Networks (3D-INN), an end-to-end trainable framework that
sequentially estimates 2D keypoint heatmaps and 3D object skeletons and poses.
Our system learns from both 2D-annotated real images and synthetic 3D data.
This is made possible mainly by two technical innovations. First, heatmaps of
2D keypoints serve as an intermediate representation to connect real and
synthetic data. 3D-INN is trained on real images to estimate 2D keypoint
heatmaps from an input image; it then predicts 3D object structure from
heatmaps using knowledge learned from synthetic 3D shapes. By doing so, 3D-INN
benefits from the variation and abundance of synthetic 3D objects, without
suffering from the domain difference between real and synthesized images, often
due to imperfect rendering. Second, we propose a Projection Layer, mapping
estimated 3D structure back to 2D. During training, it ensures that 3D-INN
predicts 3D structures whose projections are consistent with the 2D annotations
of real images. Experiments show that the proposed system performs well on both 2D
keypoint estimation and 3D structure recovery. We also demonstrate that the
recovered 3D information has wide vision applications, such as image retrieval. | [
"cs.CV",
"cs.LG"
] |
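The Projection Layer boils down to mapping predicted 3D keypoints back to the image plane so that 2D annotations can supervise them. A simplified pinhole-camera sketch follows; the intrinsics and the absence of rotation/translation are simplifying assumptions.

```python
import numpy as np

def project_points(points_3d, focal, cx, cy):
    """Pinhole projection of 3D skeleton keypoints to 2D image coordinates,
    the operation a projection layer implements so that predicted 3D
    structure can be checked against 2D annotations."""
    x, y, z = points_3d[:, 0], points_3d[:, 1], points_3d[:, 2]
    u = focal * x / z + cx   # perspective divide by depth, then shift
    v = focal * y / z + cy
    return np.stack([u, v], axis=1)

uv = project_points(np.array([[0.1, 0.2, 2.0], [-0.3, 0.1, 3.0]]),
                    focal=500.0, cx=320.0, cy=240.0)
```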
Time series shapelets are discriminative sub-sequences and their similarity
to time series can be used for time series classification. Initial shapelet
extraction algorithms searched shapelets by complete enumeration of all
possible data sub-sequences. Research on shapelets for univariate time series
proposed a mechanism called shapelet learning which parameterizes the shapelets
and learns them jointly with a prediction model in an optimization procedure.
Trivial extension of this method to multivariate time series does not yield
very good results due to the presence of noisy channels which lead to
overfitting. In this paper we propose a shapelet learning scheme for
multivariate time series in which we introduce channel masks to discount noisy
channels and serve as an implicit regularization. | [
"cs.LG"
] |
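The channel-mask idea above can be sketched as a per-channel weighting inside the sliding-window shapelet distance; in the paper the masks are learned jointly with the classifier, whereas here the mask is passed in as a given, which is a simplification.

```python
import numpy as np

def masked_shapelet_distance(series, shapelet, channel_mask):
    """Minimum sliding-window distance between a multivariate time series
    (C, T) and a shapelet (C, L), with per-channel weights discounting
    noisy channels (acting as an implicit regularizer)."""
    C, T = series.shape
    _, L = shapelet.shape
    w = np.clip(channel_mask, 0.0, 1.0)[:, None]   # (C, 1) soft mask
    dists = [np.mean(w * (series[:, t:t + L] - shapelet) ** 2)
             for t in range(T - L + 1)]
    return min(dists)

d = masked_shapelet_distance(np.random.rand(3, 100), np.random.rand(3, 10),
                             np.array([1.0, 0.2, 0.8]))
```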
3D scene understanding from point clouds plays a vital role for various
robotic applications. Unfortunately, current state-of-the-art methods use
separate neural networks for different tasks like object detection or room
layout estimation. Such a scheme has two limitations: 1) Storing and running
several networks for different tasks are expensive for typical robotic
platforms. 2) The intrinsic structure of separate outputs is ignored and
potentially violated. To this end, we propose the first transformer
architecture that predicts 3D objects and layouts simultaneously, using point
cloud inputs. Unlike existing methods that either estimate layout keypoints or
edges, we directly parameterize room layout as a set of quads. As such, the
proposed architecture is termed as P(oint)Q(uad)-Transformer. Along with the
novel quad representation, we propose a tailored physical constraint loss
function that discourages object-layout interference. The quantitative and
qualitative evaluations on the public benchmark ScanNet show that the proposed
PQ-Transformer succeeds in jointly parsing 3D objects and layouts, running at a
quasi-real-time (8.91 FPS) rate without efficiency-oriented optimization.
Moreover, the new physical constraint loss can improve strong baselines, and
the F1-score of the room layout is significantly improved from 37.9% to 57.9%. | [
"cs.CV"
] |
Neural machine translation models are used to automatically generate a
document from given source code since this can be regarded as a machine
translation task. Source code summarization is one of the components for
automatic document generation, which generates a summary in natural language
from given source code. This suggests that techniques used in neural machine
translation, such as Long Short-Term Memory (LSTM), can be used for source code
summarization. However, there is a considerable difference between source code
and natural language: Source code is essentially {\em structured}, having loops
and conditional branching, etc. Therefore, there are obstacles to applying
known machine translation models to source code.
Abstract syntax trees (ASTs) capture these structural properties and play an
important role in recent machine learning studies on source code. Tree-LSTM is
proposed as a generalization of LSTMs for tree-structured data. However, there
is a critical issue when applying it to ASTs: it cannot handle trees that
contain nodes having both an arbitrary number of children and an ordering of
those children, which ASTs generally have. To address this issue, we
propose an extension of Tree-LSTM, which we call \emph{Multi-way Tree-LSTM} and
apply it for source code summarization. As a result of computational
experiments, our proposal achieved better results when compared with several
state-of-the-art techniques. | [
"cs.LG",
"cs.SE",
"stat.ML"
] |
The paper presents an algorithm for depth map estimation from light field
images in a relatively small amount of time, using only a single thread on a
CPU. The proposed method improves the existing principle of line fitting in 4-dimensional
light field space. Line fitting is based on color values comparison using
kernel density estimation. Our method utilizes result of Semi-Global Matching
(SGM) with Census transform-based matching cost as a border initialization for
line fitting. It provides a significant reduction of computations needed to
find the best depth match. With the suggested evaluation metric, we show that
the proposed method is applicable for efficient depth map estimation while
preserving low computational time compared to other methods. | [
"cs.CV"
] |
Multi-graph multi-label learning (\textsc{Mgml}) is a supervised learning
framework, which aims to learn a multi-label classifier from a set of labeled
bags each containing a number of graphs. Prior techniques on the \textsc{Mgml}
are developed based on transferring graphs into instances and focus on learning
the unseen labels only at the bag level. In this paper, we propose a
\textit{coarse} and \textit{fine-grained} Multi-graph Multi-label (cfMGML)
learning framework which directly builds the learning model over the graphs and
empowers the label prediction at both the \textit{coarse} (aka. bag) level and
\textit{fine-grained} (aka. graph in each bag) level. In particular, given a
set of labeled multi-graph bags, we design the scoring functions at both graph
and bag levels to model the relevance between the label and data using specific
graph kernels. Meanwhile, we propose a thresholding rank-loss objective
function to rank the labels for the graphs and bags and minimize the
hamming-loss simultaneously in one step, which aims to address the error
accumulation issue in traditional rank-loss algorithms. To tackle the
non-convex optimization problem, we further develop an effective sub-gradient
descent algorithm to handle high-dimensional space computation required in
cfMGML. Experiments over various real-world datasets demonstrate that cfMGML
outperforms state-of-the-art algorithms. | [
"cs.LG",
"stat.ML"
] |
An explainable machine learning method for point cloud classification, called
the PointHop method, is proposed in this work. The PointHop method consists of
two stages: 1) local-to-global attribute building through iterative one-hop
information exchange, and 2) classification and ensembles. In the attribute
building stage, we address the problem of unordered point cloud data by using
a space partitioning procedure and by developing a robust descriptor that
characterizes the relationship between a point and its one-hop neighbor in a
PointHop unit. When we put multiple PointHop units in cascade, the attributes
of a point will grow by taking its relationship with one-hop neighbor points
into account iteratively. Furthermore, to control the rapid dimension growth of
the attribute vector associated with a point, we use the Saab transform to
reduce the attribute dimension in each PointHop unit. In the classification and
ensemble stage, we feed the feature vector obtained from multiple PointHop
units to a classifier. We explore ensemble methods to further improve the
classification performance. It is shown by experimental results
that the PointHop method offers classification performance that is comparable
with state-of-the-art methods while demanding much lower training complexity. | [
"cs.CV",
"cs.LG"
] |
Image segmentation is the process of partitioning an image into meaningful
regions that are easier to analyze. Nowadays, segmentation has become a
necessity in many practical medical imaging tasks, such as locating tumors and
diseases. The Hidden
Markov Random Field model is one of several techniques used in image
segmentation. It provides an elegant way to model the segmentation process.
This modeling leads to the minimization of an objective function. The
Conjugate Gradient (CG) algorithm is one of the best-known optimization
techniques. This paper proposes the use of the CG algorithm for image
segmentation, based on the Hidden Markov Random Field. Since derivatives are
not available for this expression, finite differences are used in the CG
algorithm to approximate the first derivative. The approach is evaluated using
a number of publicly available images, where ground truth is known. The Dice
Coefficient is used as an objective criterion to measure the quality of
segmentation. The results show that the proposed CG approach compares favorably
with other variants of Hidden Markov Random Field segmentation algorithms. | [
"cs.CV"
] |
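As a rough illustration of the optimization step, the snippet below minimizes a toy smoothness-regularized objective (a 1-D stand-in for the HMRF energy; the objective itself is illustrative) with SciPy's CG, which falls back to finite-difference gradient approximation when no analytic Jacobian is supplied, mirroring the approach described above.

    import numpy as np
    from scipy.optimize import minimize

    # Toy stand-in for the HMRF energy over a 1-D label field x:
    # a data term plus a pairwise smoothness (clique) term.
    def hmrf_energy(x, image, beta=1.0):
        data_term = np.sum((x - image) ** 2)
        smooth_term = beta * np.sum(np.diff(x) ** 2)
        return data_term + smooth_term

    image = np.random.rand(64)
    # With jac=None, SciPy's CG approximates the gradient by finite
    # differences, mirroring the derivative-free setup described above.
    res = minimize(hmrf_energy, x0=image.copy(), args=(image,), method="CG")
    print(res.fun, res.nit)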
Programming language processing (similar to natural language processing) is a
hot research topic in the field of software engineering; it has also aroused
growing interest in the artificial intelligence community. However, different
from a natural language sentence, a program contains rich, explicit, and
complicated structural information. Hence, traditional NLP models may be
inappropriate for programs. In this paper, we propose a novel tree-based
convolutional neural network (TBCNN) for programming language processing, in
which a convolution kernel is designed over programs' abstract syntax trees to
capture structural information. TBCNN is a generic architecture for programming
language processing; our experiments show its effectiveness in two different
program analysis tasks: classifying programs according to functionality, and
detecting code snippets of certain patterns. TBCNN outperforms baseline
methods, including several neural models for NLP. | [
"cs.LG",
"cs.NE",
"cs.SE"
] |
Advances in deep neural networks (DNN) greatly bolster real-time detection of
anomalous IoT data. However, IoT devices can hardly afford complex DNN models,
and offloading anomaly detection tasks to the cloud incurs a long delay. In this
paper, we propose and build a demo for an adaptive anomaly detection approach
for distributed hierarchical edge computing (HEC) systems to solve this
problem, for both univariate and multivariate IoT data. First, we construct
multiple anomaly detection DNN models with increasing complexity, and associate
each model with a layer in HEC from bottom to top. Then, we design an adaptive
scheme to select one of these models on the fly, based on the contextual
information extracted from each input data. The model selection is formulated
as a contextual bandit problem characterized by a single-step Markov decision
process, and is solved using a reinforcement learning policy network. We build
an HEC testbed, implement our proposed approach, and evaluate it using real IoT
datasets. The demo shows that our proposed approach significantly reduces
detection delay (e.g., by 71.4% for the univariate dataset) without sacrificing
accuracy, as compared to offloading detection tasks to the cloud. We also
compare it with other baseline schemes and demonstrate that it achieves the
best accuracy-delay tradeoff. Our demo is also available online:
https://rebrand.ly/91a71 | [
"cs.LG",
"cs.DC",
"cs.NI",
"stat.ML"
] |
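A minimal sketch of the model-selection idea: the choice among K detection models is treated as a single-step contextual bandit with a linear softmax policy trained by REINFORCE. The context features and the synthetic reward here are illustrative assumptions, not the paper's setup.

    import numpy as np

    rng = np.random.default_rng(0)
    K, d, lr = 3, 8, 0.05            # K models in the HEC hierarchy, d context features
    theta = np.zeros((K, d))         # linear softmax policy (illustrative)

    def select_model(context):
        logits = theta @ context
        p = np.exp(logits - logits.max())
        p /= p.sum()
        return int(rng.choice(K, p=p)), p

    def reward_fn(k, ctx):
        # hypothetical reward: the context decides which model suffices,
        # standing in for "accuracy minus detection-delay cost"
        best = int(np.digitize(ctx[0], [-0.5, 0.5]))
        return 1.0 if k == best else 0.0

    for _ in range(2000):
        ctx = rng.normal(size=d)
        k, p = select_model(ctx)
        r = reward_fn(k, ctx)
        grad = -np.outer(p, ctx)     # REINFORCE gradient for a single-step MDP
        grad[k] += ctx
        theta += lr * r * grad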
Popular graph neural networks implement convolution operations on graphs
based on polynomial spectral filters. In this paper, we propose a novel graph
convolutional layer inspired by the auto-regressive moving average (ARMA)
filter that, compared to polynomial ones, provides a more flexible frequency
response, is more robust to noise, and better captures the global graph
structure. We propose a graph neural network implementation of the ARMA filter
with a recursive and distributed formulation, obtaining a convolutional layer
that is efficient to train, localized in the node space, and can be transferred
to new graphs at test time. We perform a spectral analysis to study the
filtering effect of the proposed ARMA layer and report experiments on four
downstream tasks: semi-supervised node classification, graph signal
classification, graph classification, and graph regression. Results show that
the proposed ARMA layer brings significant improvements over graph neural
networks based on polynomial filters. | [
"cs.LG",
"stat.ML"
] |
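The recursive formulation can be sketched in a few lines of NumPy: run K parallel first-order recursions for T steps each and average them. Weight shapes, normalization, and the ReLU nonlinearity below are chosen for illustration.

    import numpy as np

    def arma_layer(X, A, K=2, T=3, seed=0):
        # One ARMA_K graph conv layer: K parallel first-order recursions,
        # each run for T steps, then averaged (shapes are illustrative).
        rng = np.random.default_rng(seed)
        n, f = X.shape
        A = A + np.eye(n)                       # self-loops avoid zero degrees
        d = A.sum(1)
        M = A / np.sqrt(np.outer(d, d))         # symmetrically normalized adjacency
        outs = []
        for _ in range(K):
            W = rng.normal(0, 0.1, (f, f))
            V = rng.normal(0, 0.1, (f, f))
            Xbar = X
            for _ in range(T):                  # recursive, distributed update
                Xbar = np.maximum(M @ Xbar @ W + X @ V, 0.0)
            outs.append(Xbar)
        return np.mean(outs, axis=0)

    A = (np.random.rand(20, 20) > 0.8).astype(float)
    H = arma_layer(np.random.randn(20, 8), np.maximum(A, A.T))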
Due to potential applications in chronic disease management and personalized
healthcare, EHRs data analysis has attracted much attention from both
researchers and practitioners. There are three main challenges in modeling
longitudinal and heterogeneous EHRs data: heterogeneity, irregular temporality
and interpretability. A series of deep learning methods have made remarkable
progress in resolving these challenges. Nevertheless, most existing
attention models rely on capturing the 1-order temporal dependencies or 2-order
multimodal relationships among feature elements. In this paper, we propose a
time-guided high-order attention (TGHOA) model. The proposed method has three
major advantages. (1) It can model longitudinal heterogeneous EHRs data via
capturing the 3-order correlations of different modalities and the irregular
temporal impact of historical events. (2) It can be used to identify the
potential concerns of medical features to explain the reasoning process of the
healthcare model. (3) It can be easily expanded into cases with more modalities
and flexibly applied in different prediction tasks. We evaluate the proposed
method in two tasks of mortality prediction and disease ranking on two real
world EHRs datasets. Extensive experimental results show the effectiveness of
the proposed model. | [
"cs.LG"
] |
Recent work by Brock et al. (2018) suggests that Generative Adversarial
Networks (GANs) benefit disproportionately from large mini-batch sizes.
Unfortunately, using large batches is slow and expensive on conventional
hardware. Thus, it would be nice if we could generate batches that were
effectively large though actually small. In this work, we propose a method to
do this, inspired by the use of Coreset-selection in active learning. When
training a GAN, we draw a large batch of samples from the prior and then
compress that batch using Coreset-selection. To create effectively large
batches of 'real' images, we create a cached dataset of Inception activations
of each training image, randomly project them down to a smaller dimension, and
then use Coreset-selection on those projected activations at training time. We
conduct experiments showing that this technique substantially reduces training
time and memory usage for modern GAN variants, that it reduces the fraction of
dropped modes in a synthetic dataset, and that it allows GANs to reach a new
state of the art in anomaly detection. | [
"stat.ML",
"cs.LG"
] |
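The batch-compression step can be illustrated with a greedy k-center (farthest-first) selection, a common Coreset-selection heuristic; this sketch assumes Euclidean distances on latent samples and is not tied to the paper's exact implementation.

    import numpy as np

    def greedy_kcenter(points, m, seed=0):
        # Farthest-first traversal: a greedy k-center heuristic for
        # Coreset-selection (distances here are Euclidean).
        rng = np.random.default_rng(seed)
        chosen = [int(rng.integers(len(points)))]
        dist = np.linalg.norm(points - points[chosen[0]], axis=1)
        for _ in range(m - 1):
            nxt = int(dist.argmax())            # farthest from the current coreset
            chosen.append(nxt)
            dist = np.minimum(dist, np.linalg.norm(points - points[nxt], axis=1))
        return np.array(chosen)

    z_big = np.random.randn(2048, 128)          # large batch of prior samples
    z_small = z_big[greedy_kcenter(z_big, 64)]  # effectively large, actually small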
Automotive cameras, particularly surround-view cameras, tend to get soiled by
mud, water, snow, etc. For higher levels of autonomous driving, it is necessary
to have a soiling detection algorithm which will trigger an automatic cleaning
system. Localized detection of soiling in an image is necessary to control the
cleaning system. It is also necessary to enable partial functionality in
unsoiled areas while reducing confidence in soiled areas. Although this can be
solved using a semantic segmentation task, we explore a more efficient solution
targeting deployment on a low-power embedded system. We propose a novel method to
regress the area of each soiling type within a tile directly. We refer to this
as coverage. The proposed approach is better than learning the dominant class
in a tile, as multiple soiling types commonly occur within a tile. It also has
the advantage of dealing with coarse polygon annotation, which would otherwise
degrade a segmentation task. The proposed soiling coverage decoder is an order of
magnitude faster than an equivalent segmentation decoder. We also integrated it
into an object detection and semantic segmentation multi-task model using an
asynchronous back-propagation algorithm. A portion of the dataset used will be
released publicly as part of our WoodScape dataset to encourage further
research. | [
"cs.CV",
"cs.RO"
] |
We present a global optimization approach for solving the maximum
a-posteriori (MAP) clustering problem under the Gaussian mixture model. Our
approach can accommodate side constraints and it preserves the combinatorial
structure of the MAP clustering problem by formulating it as a mixed-integer
nonlinear optimization problem (MINLP). We approximate the MINLP through a
mixed-integer quadratic program (MIQP) transformation that improves
computational aspects while guaranteeing $\epsilon$-global optimality. An
important benefit of our approach is the explicit quantification of the degree
of suboptimality, via the optimality gap, en route to finding the globally
optimal MAP clustering. Numerical experiments comparing our method to other
approaches show that our method finds a better solution than standard
clustering methods. Finally, we cluster a real breast cancer gene expression
data set incorporating intrinsic subtype information; the induced constraints
substantially improve the computational performance and produce more coherent
and biologically meaningful clusters. | [
"stat.ML",
"cs.LG",
"math.OC",
"stat.ME"
] |
With deep reinforcement learning (RL) methods achieving results that exceed
human capabilities in games, robotics, and simulated environments, continued
scaling of RL training is crucial to its deployment in solving complex
real-world problems. However, improving the performance scalability and power
efficiency of RL training through understanding the architectural implications
of CPU-GPU systems remains an open problem. In this work, we investigate and
improve the performance and power efficiency of distributed RL training on
CPU-GPU systems by approaching the problem not solely from the GPU
microarchitecture perspective but following a holistic system-level analysis
approach. We quantify the overall hardware utilization on a state-of-the-art
distributed RL training framework and empirically identify the bottlenecks
caused by GPU microarchitectural, algorithmic, and system-level design choices.
We show that the GPU microarchitecture itself is well-balanced for
state-of-the-art RL frameworks, but further investigation reveals that the
number of actors running the environment interactions and the amount of
hardware resources available to them are the primary performance and power
efficiency limiters. To this end, we introduce a new system design metric,
CPU/GPU ratio, and show how to find the optimal balance between CPU and GPU
resources when designing scalable and efficient CPU-GPU systems for RL
training. | [
"cs.LG",
"cs.AR"
] |
In this paper, we describe how scene depth can be extracted using a
hyperspectral light field capture (H-LF) system. Our H-LF system consists of a
5 x 6 array of cameras, with each camera sampling a different narrow band in
the visible spectrum. There are two parts to extracting scene depth. The first
part is our novel cross-spectral pairwise matching technique, which involves a
new spectral-invariant feature descriptor and its companion matching metric we
call bidirectional weighted normalized cross correlation (BWNCC). The second
part, namely, H-LF stereo matching, uses a combination of spectral-dependent
correspondence and defocus cues that rely on BWNCC. These two new cost terms
are integrated into a Markov Random Field (MRF) for disparity estimation.
Experiments on synthetic and real H-LF data show that our approach can produce
high-quality disparity maps. We also show that these results can be used to
produce the complete plenoptic cube in addition to synthesizing all-focus and
defocused color images under different sensor spectral responses. | [
"cs.CV"
] |
Existing neural network-based autonomous systems are shown to be vulnerable
against adversarial attacks, therefore sophisticated evaluation on their
robustness is of great importance. However, evaluating the robustness only
under the worst-case scenarios based on known attacks is not comprehensive, not
to mention that some of them even rarely occur in the real world. In addition,
the distribution of safety-critical data is usually multimodal, while most
traditional attacks and evaluation methods focus on a single modality. To solve
the above challenges, we propose a flow-based multimodal safety-critical
scenario generator for evaluating decision-making algorithms. The proposed
generative model is optimized with weighted likelihood maximization and a
gradient-based sampling procedure is integrated to improve the sampling
efficiency. The safety-critical scenarios are generated by querying the task
algorithms and the log-likelihood of the generated scenarios is in proportion
to the risk level. Experiments on a self-driving task demonstrate our
advantages in terms of testing efficiency and multimodal modeling capability.
We evaluate six Reinforcement Learning algorithms with our generated traffic
scenarios and provide empirical conclusions about their robustness. | [
"cs.LG",
"cs.RO",
"stat.ML"
] |
Depth completion deals with the problem of recovering dense depth maps from
sparse ones, where color images are often used to facilitate this completion.
Recent approaches mainly focus on image guided learning to predict dense
results. However, blurry image guidance and object structures in depth still
impede the performance of image guided frameworks. To tackle these problems, we
explore a repetitive design in our image guided network to sufficiently and
gradually recover depth values. Specifically, the repetition is embodied in a
color image guidance branch and a depth generation branch. In the former
branch, we design a repetitive hourglass network to extract higher-level image
features of complex environments, which can provide powerful context guidance
for depth prediction. In the latter branch, we design a repetitive guidance
module based on dynamic convolution where the convolution factorization is
applied to simultaneously reduce its complexity and progressively model
high-frequency structures, e.g., boundaries. Further, in this module, we
propose an adaptive fusion mechanism to effectively aggregate multi-step depth
features. Extensive experiments show that our method achieves state-of-the-art
results on the NYUv2 dataset and ranks 1st on the KITTI benchmark at the time of
submission. | [
"cs.CV"
] |
This paper is focused on the task of searching for a specific vehicle that
appeared in the surveillance networks. Existing methods usually assume the
vehicle images are well cropped from the surveillance videos, then use visual
attributes, like colors and types, or license plate numbers to match the target
vehicle in the image set. However, a complete vehicle search system should
consider the problems of vehicle detection, representation, indexing, storage,
matching, and so on. Besides, attribute-based search cannot accurately find the
same vehicle due to intra-instance changes in different cameras and the
extremely uncertain environment. Moreover, the license plates may be
misrecognized in surveillance scenes due to the low resolution and noise. In
this paper, a Progressive Vehicle Search System, named PVSS, is designed to
solve the above problems. PVSS consists of three modules: the crawler,
the indexer, and the searcher. The vehicle crawler aims to detect and track
vehicles in surveillance videos and transfer the captured vehicle images,
metadata and contextual information to the server or cloud. Then multi-grained
attributes, such as the visual features and license plate fingerprints, are
extracted and indexed by the vehicle indexer. At last, a query triplet with an
input vehicle image, the time range, and the spatial scope is taken as the
input by the vehicle searcher. The target vehicle will be searched in the
database by a progressive process. Extensive experiments on the public dataset
from a real surveillance network validate the effectiveness of the PVSS. | [
"cs.CV"
] |
Public charging station occupancy prediction is of key importance in
developing a smart charging strategy to reduce electric vehicle (EV) operator
and user inconvenience. However, existing studies are mainly based on
conventional econometric or time series methodologies with limited accuracy. We
propose a new mixed long short-term memory neural network incorporating both
historical charging state sequences and time-related features for multistep
discrete charging occupancy state prediction. Unlike the existing LSTM
networks, the proposed model separates different types of features and handles
them differently with mixed neural network architecture. The model is compared
to a number of state-of-the-art machine learning and deep learning approaches
based on the EV charging data obtained from the open data portal of the city of
Dundee, UK. The results show that the proposed method produces very accurate
predictions (99.99% and 81.87% for 1 step (10 minutes) and 6 steps (1 hour)
ahead, respectively) and outperforms the benchmark approaches significantly
(+22.4% for one-step-ahead prediction and +6.2% for 6 steps ahead). A
sensitivity analysis is conducted to evaluate the impact of the model
parameters on prediction accuracy. | [
"cs.LG",
"cs.NE"
] |
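A minimal PyTorch sketch of the mixed architecture described above: one LSTM branch for the historical occupancy sequence, one dense branch for time-related features, fused before a multistep head. All layer sizes and feature dimensions are illustrative.

    import torch
    import torch.nn as nn

    class MixedLSTM(nn.Module):
        # LSTM branch for the historical occupancy sequence, dense branch
        # for time-related features, fused before a multistep head.
        def __init__(self, seq_feat=1, time_feat=5, hidden=64, steps=6):
            super().__init__()
            self.lstm = nn.LSTM(seq_feat, hidden, batch_first=True)
            self.time_mlp = nn.Sequential(nn.Linear(time_feat, hidden), nn.ReLU())
            self.head = nn.Linear(2 * hidden, steps)

        def forward(self, seq, time_feats):
            _, (h, _) = self.lstm(seq)                     # seq: (B, T, seq_feat)
            fused = torch.cat([h[-1], self.time_mlp(time_feats)], dim=-1)
            return torch.sigmoid(self.head(fused))         # occupancy probabilities

    model = MixedLSTM()
    out = model(torch.randn(8, 144, 1), torch.randn(8, 5))  # -> (8, 6)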
Depth Completion deals with the problem of converting a sparse depth map to a
dense one, given the corresponding color image. Convolutional spatial
propagation network (CSPN) is one of the state-of-the-art (SoTA) methods of
depth completion, which recovers structural details of the scene. In this
paper, we propose CSPN++, which further improves its effectiveness and
efficiency by learning adaptive convolutional kernel sizes and the number of
iterations for the propagation, thus the context and computational resources
needed at each pixel could be dynamically assigned upon requests. Specifically,
we formulate the learning of the two hyper-parameters as an architecture
selection problem where various configurations of kernel sizes and numbers of
iterations are first defined, and then a set of soft weighting parameters are
trained to either properly assemble or select from the pre-defined
configurations at each pixel. In our experiments, we find weighted assembling
can lead to significant accuracy improvements, which we referred to as
"context-aware CSPN", while weighted selection, "resource-aware CSPN" can
reduce the computational resource significantly with similar or better
accuracy. Besides, the resource needed for CSPN++ can be adjusted w.r.t. the
computational budget automatically. Finally, to avoid the side effects of noise
or inaccurate sparse depths, we embed a gated network inside CSPN++, which
further improves the performance. We demonstrate the effectiveness of CSPN++ on
the KITTI depth completion benchmark, where it significantly improves over CSPN
and other SoTA methods. | [
"cs.CV"
] |
Person search by natural language aims at retrieving a specific person in a
large-scale image pool that matches the given textual descriptions. While most
of the current methods treat the task as a holistic visual and textual feature
matching one, we approach it from an attribute-aligning perspective that allows
grounding specific attribute phrases to the corresponding visual regions. We
achieve success as well as the performance boosting by a robust feature
learning that the referred identity can be accurately bundled by multiple
attribute visual cues. To be concrete, our Visual-Textual Attribute Alignment
model (dubbed as ViTAA) learns to disentangle the feature space of a person
into subspaces corresponding to attributes using a light auxiliary attribute
segmentation computing branch. It then aligns these visual features with the
textual attributes parsed from the sentences by using a novel contrastive
learning loss. Upon that, we validate our ViTAA framework through extensive
experiments on tasks of person search by natural language and by
attribute-phrase queries, on which our system achieves state-of-the-art
performances. Code will be publicly available upon publication. | [
"cs.CV"
] |
The Reinforcement Learning (RL) building blocks, i.e. Q-functions and policy
networks, usually take elements from the Cartesian product of two domains as
input. In particular, the input of the Q-function is both the state and the
action, and in multi-task problems (Meta-RL) the policy can take a state and a
context. Standard architectures tend to ignore these variables' underlying
interpretations and simply concatenate their features into a single vector. In
this work, we argue that this choice may lead to poor gradient estimation in
actor-critic algorithms and high variance learning steps in Meta-RL algorithms.
To consider the interaction between the input variables, we suggest using a
Hypernetwork architecture where a primary network determines the weights of a
conditional dynamic network. We show that this approach improves the gradient
approximation and reduces the learning step variance, which both accelerates
learning and improves the final performance. We demonstrate a consistent
improvement across different locomotion tasks and different algorithms both in
RL (TD3 and SAC) and in Meta-RL (MAML and PEARL). | [
"cs.LG",
"cs.AI"
] |
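A minimal PyTorch sketch of the suggested structure for a Q-function: a primary network maps the state to the weights of a small dynamic network applied to the action. Layer sizes are illustrative, and training machinery is omitted.

    import torch
    import torch.nn as nn

    class HyperQ(nn.Module):
        # A primary network maps the state to the weights of a small dynamic
        # network that processes the action; layer sizes are illustrative.
        def __init__(self, s_dim, a_dim, h=64):
            super().__init__()
            self.a_dim, self.h = a_dim, h
            self.primary = nn.Sequential(
                nn.Linear(s_dim, 128), nn.ReLU(),
                nn.Linear(128, a_dim * h + 2 * h + 1))     # W1, b1, w2, b2

        def forward(self, s, a):
            p = self.primary(s)
            a_dim, h = self.a_dim, self.h
            W1 = p[:, :a_dim * h].view(-1, h, a_dim)
            b1 = p[:, a_dim * h:a_dim * h + h]
            w2 = p[:, a_dim * h + h:a_dim * h + 2 * h]
            b2 = p[:, -1:]
            z = torch.relu(torch.bmm(W1, a.unsqueeze(-1)).squeeze(-1) + b1)
            return (z * w2).sum(-1, keepdim=True) + b2     # Q(s, a)

    q = HyperQ(s_dim=17, a_dim=6)
    print(q(torch.randn(4, 17), torch.randn(4, 6)).shape)  # (4, 1)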
Strong theoretical guarantees of robustness can be given for ensembles of
classifiers generated by input randomization. Specifically, an $\ell_2$ bounded
adversary cannot alter the ensemble prediction generated by an additive
isotropic Gaussian noise, where the radius for the adversary depends on both
the variance of the distribution as well as the ensemble margin at the point of
interest. We build on and considerably expand this work across broad classes of
distributions. In particular, we offer adversarial robustness guarantees and
associated algorithms for the discrete case where the adversary is $\ell_0$
bounded. Moreover, we exemplify how the guarantees can be tightened with
specific assumptions about the function class of the classifier such as a
decision tree. We empirically illustrate these results with and without
functional restrictions across image and molecule datasets. | [
"cs.LG",
"stat.ML"
] |
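For the Gaussian case, the ensemble prediction and its margin can be estimated by Monte-Carlo sampling, as in the sketch below; `f` stands for any base classifier returning per-class scores, and the certification-radius computation itself is omitted.

    import torch

    def smoothed_predict(f, x, sigma=0.25, n=1000, num_classes=10):
        # Monte-Carlo estimate of the additive-Gaussian-noise ensemble
        # prediction; f is any base classifier returning per-class scores.
        counts = torch.zeros(num_classes)
        for _ in range(n):
            counts[int(f(x + sigma * torch.randn_like(x)).argmax())] += 1
        top2 = counts.topk(2).values / n
        margin = float(top2[0] - top2[1])   # larger margin -> larger certified radius
        return int(counts.argmax()), margin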
We introduce Exemplar VAEs, a family of generative models that bridge the gap
between parametric and non-parametric, exemplar-based generative models.
Exemplar VAE is a variant of VAE with a non-parametric prior in the latent
space based on a Parzen window estimator. To sample from it, one first draws a
random exemplar from a training set, then stochastically transforms that
exemplar into a latent code and a new observation. We propose retrieval
augmented training (RAT) as a way to speed up Exemplar VAE training by using
approximate nearest neighbor search in the latent space to define a lower bound
on log marginal likelihood. To enhance generalization, model parameters are
learned using exemplar leave-one-out and subsampling. Experiments demonstrate
the effectiveness of Exemplar VAEs on density estimation and representation
learning. Importantly, generative data augmentation using Exemplar VAEs on
permutation invariant MNIST and Fashion MNIST reduces classification error from
1.17% to 0.69% and from 8.56% to 8.16%. | [
"cs.LG",
"cs.CV",
"stat.ML"
] |
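Sampling from an Exemplar VAE can be sketched as follows, assuming `encoder` returns the mean and log-variance of the variational posterior and `train_set` indexes tensors; names and interfaces are illustrative.

    import torch

    def sample_exemplar_vae(encoder, decoder, train_set, n=16):
        # Draw random exemplars, stochastically encode them under the
        # Parzen-window latent prior, and decode into new observations.
        idx = torch.randint(len(train_set), (n,))
        exemplars = torch.stack([train_set[int(i)] for i in idx])
        mu, logvar = encoder(exemplars)                 # assumed interface
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return decoder(z)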
With higher-order neighborhood information of a graph network, the accuracy of
graph representation learning for classification can be significantly improved.
However, current higher-order graph convolutional networks have a large
number of parameters and high computational complexity. Therefore, we propose a
Hybrid Lower order and Higher order Graph convolutional networks (HLHG)
learning model, which uses weight sharing mechanism to reduce the number of
network parameters. To reduce computational complexity, we propose a novel
fusion pooling layer to combine high-order and low-order neighborhood
information. Theoretically, we compare the model complexity of the proposed model
with other state-of-the-art models. Experimentally, we verify the proposed
model on the large-scale text network datasets by supervised learning, and on
the citation network datasets by semi-supervised learning. The experimental
results show that the proposed model achieves the highest classification accuracy
with a small set of trainable weight parameters. | [
"cs.LG",
"stat.ML"
] |
Video anomaly detection is a challenging task because of diverse abnormal
events. For this task, methods based on reconstruction and prediction are widely
used in recent works. They are built on the assumption that a model trained only
on normal data cannot reconstruct or predict anomalies as well as normal
patterns, so anomalies yield larger errors. In this paper, we propose
to discriminate anomalies from normal ones by the duality of normality-granted
optical flow, which is conducive to predict normal frames but adverse to
abnormal frames. The normality-granted optical flow is predicted from a single
frame, to keep the motion knowledge focused on normal patterns. Meanwhile, we
extend the appearance-motion correspondence scheme from frame reconstruction to
prediction, which not only helps to learn the knowledge about object
appearances and correlated motion, but also meets the fact that motion is the
transformation between appearances. We also introduce a margin loss to enhance
the learning of frame prediction. Experiments on standard benchmark datasets
demonstrate the impressive performance of our approach. | [
"cs.CV"
] |
Unsupervised learning of a generalizable model of the visual appearance of
humans from video data is of major importance for computing systems interacting
naturally with their users and others. We propose a step towards automatic
behavior understanding by integrating principles of Organic Computing into the
posture estimation cycle, thereby relegating the need for human intervention
while simultaneously raising the level of system autonomy. The system extracts
coherent motion from moving upper bodies and autonomously decides about limbs
and their possible spatial relationships. The models from many videos are
integrated into meta-models, which show good generalization to different
individuals, backgrounds, and attire. These models allow robust interpretation
of single video frames without temporal continuity and posture mimicking by an
android robot. | [
"cs.CV",
"I.2.10; I.5.4"
] |
Privacy considerations and bias in datasets are quickly becoming
high-priority issues that the computer vision community needs to face. So far,
little attention has been given to practical solutions that do not involve
collection of new datasets. In this work, we show that for object detection on
COCO, both anonymizing the dataset by blurring faces, as well as swapping faces
in a balanced manner along the gender and skin tone dimension, can retain
object detection performances while preserving privacy and partially balancing
bias. | [
"cs.CV"
] |
Odometry is of key importance for localization in the absence of a map. There
is considerable work in the area of visual odometry (VO), and recent advances
in deep learning have brought novel approaches to VO, which directly learn
salient features from raw images. These learning-based approaches have led to
more accurate and robust VO systems. However, they have not been well applied
to point cloud data yet. In this work, we investigate how to exploit deep
learning to estimate point cloud odometry (PCO), which may serve as a critical
component in point cloud-based downstream tasks or learning-based systems.
Specifically, we propose a novel end-to-end deep parallel neural network called
DeepPCO, which can estimate the 6-DOF poses using consecutive point clouds. It
consists of two parallel sub-networks to estimate 3-D translation and
orientation respectively rather than a single neural network. We validate our
approach on KITTI Visual Odometry/SLAM benchmark dataset with different
baselines. Experiments demonstrate that the proposed approach achieves good
performance in terms of pose accuracy. | [
"cs.CV",
"cs.RO"
] |
With the rapid development of high-throughput technologies, parallel
acquisition of large-scale drug-informatics data provides huge opportunities to
improve pharmaceutical research and development. One significant application is
the purpose prediction of small molecule compounds, aiming to specify
therapeutic properties of extensive purpose-unknown compounds and to repurpose
novel therapeutic properties of FDA-approved drugs. This problem is very
challenging since compound attributes contain heterogeneous data with various
feature patterns, such as drug fingerprints, drug physicochemical properties,
and drug-perturbation gene expression. Moreover, there is complex nonlinear dependency
among heterogeneous data. In this paper, we propose a novel domain-adversarial
multi-task framework for integrating shared knowledge from multiple domains.
The framework utilizes the adversarial strategy to effectively learn target
representations and models their nonlinear dependency. Experiments on two
real-world datasets illustrate that the performance of our approach obtains an
obvious improvement over competitive baselines. The novel therapeutic
properties of purpose-unknown compounds that we predicted have mostly been
reported or brought to the clinic. Furthermore, our framework can integrate various
attributes beyond the three domains examined here and can be applied in the
industry for screening the purpose of huge amounts of as yet unidentified
compounds. The source code for this paper is available on GitHub. | [
"cs.LG",
"stat.ML"
] |
Many real-world applications involve multivariate, geo-tagged time series
data: at each location, multiple sensors record corresponding measurements. For
example, an air quality monitoring system records PM2.5, CO, etc. The resulting
time-series data often has missing values due to device outages or
communication errors. In order to impute the missing values, state-of-the-art
methods are built on Recurrent Neural Networks (RNN), which process each time
stamp sequentially, prohibiting the direct modeling of the relationship between
distant time stamps. Recently, the self-attention mechanism has been proposed
for sequence modeling tasks such as machine translation, significantly
outperforming RNN because the relationship between each two time stamps can be
modeled explicitly. In this paper, we are the first to adapt the self-attention
mechanism for multivariate, geo-tagged time series data. In order to jointly
capture the self-attention across multiple dimensions, including time, location
and the sensor measurements, while maintaining low computational complexity, we
propose a novel approach called Cross-Dimensional Self-Attention (CDSA) to
process each dimension sequentially, yet in an order-independent manner. Our
extensive experiments on four real-world datasets, including three standard
benchmarks and our newly collected NYC-traffic dataset, demonstrate that our
approach outperforms the state-of-the-art imputation and forecasting methods. A
detailed systematic analysis confirms the effectiveness of our design choices. | [
"cs.LG",
"stat.ML"
] |
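The per-dimension step can be sketched by applying standard scaled dot-product self-attention along one axis at a time, as below; the query/key/value projections and the order-independence machinery of the full CDSA are omitted for brevity.

    import torch
    import torch.nn.functional as F

    def attention_along(x, dim):
        # Scaled dot-product self-attention applied along one axis of x;
        # query/key/value projections are omitted for brevity.
        x = x.movedim(dim, -2)
        scores = x @ x.transpose(-1, -2) / x.shape[-1] ** 0.5
        return (F.softmax(scores, dim=-1) @ x).movedim(-2, dim)

    # x: (time, location, measurement, feature) for geo-tagged series
    x = torch.randn(24, 10, 3, 16)
    for dim in (0, 1, 2):            # process each dimension sequentially
        x = attention_along(x, dim)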
Unsupervised representation learning is one of the most important problems in
machine learning. Recent promising methods are based on contrastive learning.
However, contrastive learning often relies on heuristic ideas, and therefore it
is not easy to understand what contrastive learning is doing. This paper
emphasizes that density ratio estimation is a promising goal for unsupervised
representation learning, and promotes understanding of contrastive learning.
Our primary contribution is to theoretically show that density ratio estimation
unifies three frameworks for unsupervised representation learning: Maximization
of mutual information (MI), nonlinear independent component analysis (ICA) and
a novel framework for estimation of a lower-dimensional nonlinear subspace
proposed in this paper. This unified view clarifies under what conditions
contrastive learning can be regarded as maximizing MI, performing nonlinear ICA
or estimating the lower-dimensional nonlinear subspace in the proposed
framework. Furthermore, we also make theoretical contributions in each of the
three frameworks: We show that MI can be maximized through density ratio
estimation under certain conditions, while our analysis for nonlinear ICA
reveals a novel insight for recovery of the latent source components, which is
clearly supported by numerical experiments. In addition, some theoretical
conditions are also established to estimate a nonlinear subspace in the
proposed framework. Based on the unified view, we propose two practical methods
for unsupervised representation learning through density ratio estimation: The
first method is an outlier-robust method for representation learning, while the
second one is a sample-efficient nonlinear ICA method. Finally, we numerically
demonstrate the usefulness of the proposed methods in nonlinear ICA and through
application to a downstream task for classification. | [
"cs.LG",
"eess.SP",
"stat.ML"
] |
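The link to density ratio estimation can be illustrated with the classical probabilistic-classifier trick: train a classifier to separate samples of the numerator density from samples of the denominator density, then convert its posterior into a ratio estimate. A minimal scikit-learn sketch with synthetic data follows; it illustrates the general principle, not the paper's specific estimators.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def density_ratio_estimator(x_num, x_den):
        # Train a probabilistic classifier to separate the two sample sets,
        # then convert its posterior into an estimate of p_num(x)/p_den(x).
        X = np.vstack([x_num, x_den])
        y = np.concatenate([np.ones(len(x_num)), np.zeros(len(x_den))])
        clf = LogisticRegression(max_iter=1000).fit(X, y)
        prior = len(x_den) / len(x_num)

        def ratio(x):
            p = clf.predict_proba(x)[:, 1]
            return prior * p / (1.0 - p)
        return ratio

    # e.g., joint samples vs. product of marginals, as in contrastive setups
    r = density_ratio_estimator(np.random.randn(500, 4) + 1.0,
                                np.random.randn(500, 4))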
Quantitative evaluation has increased dramatically among recent video
inpainting work, but the video and mask content used to gauge performance has
received relatively little attention. Although attributes such as camera and
background scene motion inherently change the difficulty of the task and affect
methods differently, existing evaluation schemes fail to control for them,
thereby providing minimal insight into inpainting failure modes. To address
this gap, we propose the Diagnostic Evaluation of Video Inpainting on
Landscapes (DEVIL) benchmark, which consists of two contributions: (i) a novel
dataset of videos and masks labeled according to several key inpainting failure
modes, and (ii) an evaluation scheme that samples slices of the dataset
characterized by a fixed content attribute, and scores performance on each
slice according to reconstruction, realism, and temporal consistency quality.
By revealing systematic changes in performance induced by particular
characteristics of the input content, our challenging benchmark enables more
insightful analysis into video inpainting methods and serves as an invaluable
diagnostic tool for the field. Our code is available at
https://github.com/MichiganCOG/devil . | [
"cs.CV"
] |
We propose a probabilistic model for inferring the multivariate function from
multiple areal data sets with various granularities. Here, the areal data are
observed not at location points but at regions. Existing regression-based
models can only utilize the sufficiently fine-grained auxiliary data sets on
the same domain (e.g., a city). With the proposed model, the functions for
respective areal data sets are assumed to be a multivariate dependent Gaussian
process (GP) that is modeled as a linear mixing of independent latent GPs.
Sharing of latent GPs across multiple areal data sets allows us to effectively
estimate the spatial correlation for each areal data set; moreover, it can
easily be extended to transfer learning across multiple domains. To handle the
multivariate areal data, we design an observation model with a spatial
aggregation process for each areal data set, which is an integral of the mixed
GP over the corresponding region. By deriving the posterior GP, we can predict
the data value at any location point by considering the spatial correlations
and the dependences between areal data sets, simultaneously. Our experiments on
real-world data sets demonstrate that our model can 1) accurately refine
coarse-grained areal data, and 2) offer performance improvements by using the
areal data sets from multiple domains. | [
"stat.ML",
"cs.LG"
] |
Recently, learning-based algorithms have shown impressive performance in
underwater image enhancement. Most of them resort to training on synthetic data
and achieve outstanding performance. However, these methods ignore the
significant domain gap between the synthetic and real data (i.e., inter-domain
gap), and thus the models trained on synthetic data often fail to generalize
well to real underwater scenarios. Furthermore, the complex and changeable
underwater environment also causes a great distribution gap among the real data
itself (i.e., intra-domain gap). However, almost no research focuses on this
problem and thus their techniques often produce visually unpleasing artifacts
and color distortions on various real images. Motivated by these observations,
we propose a novel Two-phase Underwater Domain Adaptation network (TUDA) to
simultaneously minimize the inter-domain and intra-domain gap. Concretely, a
new dual-alignment network is designed in the first phase, including a
translation part for enhancing realism of input images, followed by an
enhancement part. By performing image-level and feature-level adaptation in the
two parts through joint adversarial learning, the network can better build
invariance across domains and thus bridge the inter-domain gap. In the second
phase, we perform an easy-hard classification of real data according to the
assessed quality of enhanced images, where a rank-based underwater quality
assessment method is embedded. By leveraging implicit quality information
learned from rankings, this method can more accurately assess the perceptual
quality of enhanced images. Using pseudo labels from the easy part, an
easy-hard adaptation technique is then conducted to effectively decrease the
intra-domain gap between easy and hard samples. | [
"cs.CV"
] |
Many applications, such as autonomous driving, heavily rely on multi-modal
data where spatial alignment between the modalities is required. Most
multi-modal registration methods struggle to compute the spatial correspondence
between the images using prevalent cross-modality similarity measures. In this
work, we bypass the difficulties of developing cross-modality similarity
measures, by training an image-to-image translation network on the two input
modalities. This learned translation allows training the registration network
using simple and reliable mono-modality metrics. We perform multi-modal
registration using two networks - a spatial transformation network and a
translation network. We show that by encouraging our translation network to be
geometry preserving, we manage to train an accurate spatial transformation
network. Compared to state-of-the-art multi-modal methods our presented method
is unsupervised, requiring no pairs of aligned modalities for training, and can
be adapted to any pair of modalities. We evaluate our method quantitatively and
qualitatively on commercial datasets, showing that it performs well on several
modalities and achieves accurate alignment. | [
"cs.CV"
] |
Zero-shot action recognition can recognize samples of unseen classes that are
unavailable in training by exploring common latent semantic representation in
samples. However, most methods neglect the connotative and extensional
relations between the action classes, which leads to the poor generalization
ability of zero-shot learning. Furthermore, the learned classifier inclines
toward predicting samples of seen classes, which leads to poor
classification performance. To solve the above problems, we propose a two-stage
deep neural network for zero-shot action recognition, which consists of a
feature generation sub-network serving as the sampling stage and a graph
attention sub-network serving as the classification stage. In the sampling
stage, we utilize a generative adversarial network (GAN) trained on action
features and word vectors of seen classes to synthesize the action features of
unseen classes, which can balance the training sample data of seen classes and
unseen classes. In the classification stage, we construct a knowledge graph
(KG) based on the relationship between word vectors of action classes and
related objects, and propose a graph convolution network (GCN) based on
attention mechanism, which dynamically updates the relationship between action
classes and objects, and enhances the generalization ability of zero-shot
learning. In both stages, we use word vectors as bridges for feature
generation and classifier generalization from seen classes to unseen classes.
We compare our method with state-of-the-art methods on UCF101 and HMDB51
datasets. Experimental results show that our proposed method improves the
classification performance of the trained classifier and achieves higher
accuracy. | [
"cs.CV"
] |
Visual question answering (Visual QA) has attracted significant attention
these years. While a variety of algorithms have been proposed, most of them are
built upon different combinations of image and language features as well as
multi-modal attention and fusion. In this paper, we investigate an alternative
approach inspired by conventional QA systems that operate on knowledge graphs.
Specifically, we investigate the use of scene graphs derived from images for
Visual QA: an image is abstractly represented by a graph with nodes
corresponding to object entities and edges to object relationships. We adapt
the recently proposed graph network (GN) to encode the scene graph and perform
structured reasoning according to the input question. Our empirical studies
demonstrate that scene graphs can already capture essential information of
images and graph networks have the potential to outperform state-of-the-art
Visual QA algorithms but with a much cleaner architecture. By analyzing the
features generated by GNs we can further interpret the reasoning process,
suggesting a promising direction towards explainable Visual QA. | [
"cs.CV",
"cs.CL"
] |
We propose to apply deep transfer learning from computer vision to static
malware classification. In the transfer learning scheme, we borrow knowledge
from natural images or objects and apply to the target domain of static malware
detection. As a result, training time of deep neural networks is accelerated
while high classification performance is still maintained. We demonstrate the
effectiveness of our approach on three experiments and show that our proposed
method outperforms other classical machine learning methods measured in
accuracy, false positive rate, true positive rate and $F_1$ score (in binary
classification). We instrument an interpretation component to the algorithm and
provide interpretable explanations to enhance security practitioners' trust in
the model. We further discuss a convex combination scheme of transfer learning
and training from scratch for enhanced malware detection, and provide insights
of the algorithmic interpretation of vision-based malware classification
techniques. | [
"cs.LG",
"cs.CR",
"stat.ML"
] |
Existing Graph Neural Network (GNN) methods that learn inductive unsupervised
graph representations focus on learning node and edge representations by
predicting observed edges in the graph. Although such approaches have shown
advances in downstream node classification tasks, they are ineffective in
jointly representing larger $k$-node sets, $k{>}2$. We propose MHM-GNN, an
inductive unsupervised graph representation approach that combines joint
$k$-node representations with energy-based models (hypergraph Markov networks)
and GNNs. To address the intractability of the loss that arises from this
combination, we endow our optimization with a loss upper bound using a
finite-sample unbiased Markov Chain Monte Carlo estimator. Our experiments show
that MHM-GNN produces better unsupervised representations than existing
approaches from the literature. | [
"cs.LG",
"cs.AI"
] |
Deep learning based facial expression recognition (FER) has received a lot of
attention in the past few years. Most of the existing deep learning based FER
methods do not consider domain knowledge well, which thereby fail to extract
representative features. In this work, we propose a novel FER framework, named
Facial Motion Prior Networks (FMPN). Particularly, we introduce an additional
branch to generate a facial mask so as to focus on facial muscle moving
regions. To guide the facial mask learning, we propose to incorporate prior
domain knowledge by using the average differences between neutral faces and the
corresponding expressive faces as the training guidance. Extensive experiments
on three facial expression benchmark datasets demonstrate the effectiveness of
the proposed method, compared with the state-of-the-art approaches. | [
"cs.CV"
] |
In static monitoring cameras, useful contextual information can stretch far
beyond the few seconds typical video understanding models might see: subjects
may exhibit similar behavior over multiple days, and background objects remain
static. Due to power and storage constraints, sampling frequencies are low,
often no faster than one frame per second, and sometimes are irregular due to
the use of a motion trigger. In order to perform well in this setting, models
must be robust to irregular sampling rates. In this paper we propose a method
that leverages temporal context from the unlabeled frames of a novel camera to
improve performance at that camera. Specifically, we propose an attention-based
approach that allows our model, Context R-CNN, to index into a long term memory
bank constructed on a per-camera basis and aggregate contextual features from
other frames to boost object detection performance on the current frame.
We apply Context R-CNN to two settings: (1) species detection using camera
traps, and (2) vehicle detection in traffic cameras, showing in both settings
that Context R-CNN leads to performance gains over strong baselines. Moreover,
we show that increasing the contextual time horizon leads to improved results.
When applied to camera trap data from the Snapshot Serengeti dataset, Context
R-CNN with context from up to a month of images outperforms a single-frame
baseline by 17.9% mAP, and outperforms S3D (a 3d convolution based baseline) by
11.2% mAP. | [
"cs.CV",
"cs.LG",
"eess.IV",
"q-bio.PE"
] |
State representation learning aims at learning compact representations from
raw observations in robotics and control applications. Approaches used for this
objective are auto-encoders, learning forward models, inverse dynamics or
learning using generic priors on the state characteristics. However, the
diversity in applications and methods makes the field lack standard evaluation
datasets, metrics and tasks. This paper provides a set of environments, data
generators, robotic control tasks, metrics and tools to facilitate iterative
state representation learning and evaluation in reinforcement learning
settings. | [
"cs.LG",
"stat.ML"
] |
The Convolutional Neural Network (CNN) has become the state of the art for
object detection in image tasks. In this chapter, we have explained different
state-of-the-art CNN-based object detection models. We have organized this
review by categorizing those detection models according to two different
approaches: the two-stage approach and the one-stage approach. This chapter
shows the advancement of object detection models from R-CNN to the latest
RefineDet. It also discusses the model description and training details of
each model. Here, we have also drawn a comparison among those models. | [
"cs.CV"
] |
The balance between exploration and exploitation is a key problem for
reinforcement learning methods, especially for Q-learning. In this paper, a
fidelity-based probabilistic Q-learning (FPQL) approach is presented to
naturally solve this problem and applied for learning control of quantum
systems. In this approach, fidelity is adopted to help direct the learning
process and the probability of each action to be selected at a certain state is
updated iteratively along with the learning process, which leads to a natural
exploration strategy instead of a fixed one with manually configured parameters. A
probabilistic Q-learning (PQL) algorithm is first presented to demonstrate the
basic idea of probabilistic action selection. Then the FPQL algorithm is
presented for learning control of quantum systems. Two examples (a spin-1/2
system and a lambda-type atomic system) are demonstrated to test the performance
of the FPQL algorithm. The results show that FPQL algorithms attain a better
balance between exploration and exploitation, and can also avoid local optimal
policies and accelerate the learning process. | [
"cs.LG",
"cs.SY",
"stat.ML"
] |
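The basic PQL idea of probabilistic action selection can be sketched as follows: a per-state action distribution is sampled for exploration, and its probabilities are nudged toward actions with high fidelity, so exploration decays naturally. The toy environment and update constants below are illustrative assumptions, not the paper's experimental setup.

    import numpy as np

    rng = np.random.default_rng(0)
    nS, nA, alpha, gamma, k = 10, 4, 0.1, 0.95, 0.05
    Q = np.zeros((nS, nA))
    P = np.full((nS, nA), 1.0 / nA)         # per-state action probabilities

    def step(s, a):                          # toy environment (illustrative)
        s2 = (s + a) % nS
        r = fid = 1.0 if s2 == nS - 1 else 0.1
        return s2, r, fid

    s = 0
    for _ in range(5000):
        a = int(rng.choice(nA, p=P[s]))      # probabilistic action selection
        s2, r, fid = step(s, a)
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        # raise the chosen action's probability in proportion to fidelity,
        # then renormalize; exploration decays naturally as fidelity grows
        P[s, a] += k * fid * (1.0 - P[s, a])
        P[s] /= P[s].sum()
        s = s2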
Existing inpainting methods have achieved promising performance in recovering
defected images of specific scenes. However, filling holes involving multiple
semantic categories remains challenging due to the obscure semantic boundaries
and the mixture of different semantic textures. In this paper, we introduce
coherence priors between the semantics and textures which make it possible to
concentrate on completing separate textures in a semantic-wise manner.
Specifically, we adopt a multi-scale joint optimization framework to first
model the coherence priors and then alternately optimize image
inpainting and semantic segmentation accordingly, in a coarse-to-fine manner. A
Semantic-Wise Attention Propagation (SWAP) module is devised to refine
completed image textures across scales by exploring non-local semantic
coherence, which effectively mitigates mix-up of textures. We also propose two
coherence losses to constrain the consistency between the semantics and the
inpainted image in terms of the overall structure and detailed textures.
Experimental results demonstrate the superiority of our proposed method for
challenging cases with complex holes. | [
"cs.CV"
] |
Label ranking aims to learn a mapping from instances to rankings over a
finite number of predefined labels. Random forest is one of the most powerful
and successful general-purpose machine learning algorithms of modern times. In
this paper, we present a powerful random forest label ranking method which uses
random decision trees to retrieve nearest neighbors. We have developed a novel
two-step rank aggregation strategy to effectively aggregate neighboring
rankings discovered by the random forest into a final predicted ranking.
Compared with existing methods, the new random forest method has many
advantages, including its intrinsically scalable tree data structure, highly
parallelizable computational architecture, and superior performance. We
present extensive experimental results to demonstrate that our new method
achieves highly competitive performance compared with state-of-the-art
methods for datasets with complete ranking and datasets with only partial
ranking information. | [
"cs.LG",
"stat.ML"
] |
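As a rough stand-in for the second aggregation step, the sketch below combines neighbor rankings with a classical Borda count; the paper's actual two-step strategy differs, so this is only illustrative of how retrieved rankings can be merged into one prediction.

    import numpy as np

    def borda_aggregate(neighbor_rankings):
        # Combine the label rankings of retrieved neighbors via Borda counts;
        # each ranking lists label indices from most to least preferred.
        n_labels = len(neighbor_rankings[0])
        scores = np.zeros(n_labels)
        for ranking in neighbor_rankings:
            for pos, label in enumerate(ranking):
                scores[label] += n_labels - pos
        return np.argsort(-scores)            # final predicted label ranking

    print(borda_aggregate([[0, 2, 1], [2, 0, 1], [0, 1, 2]]))  # -> [0 2 1]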
Hindsight Credit Assignment (HCA) refers to a recently proposed family of
methods for producing more efficient credit assignment in reinforcement
learning. These methods work by explicitly estimating the probability that
certain actions were taken in the past given present information. Prior work
has studied the properties of such methods and demonstrated their behaviour
empirically. We extend this work by introducing a particular HCA algorithm
which has provably lower variance than the conventional Monte-Carlo estimator
when the necessary functions can be estimated exactly. This result provides a
strong theoretical basis for how HCA could be broadly useful. | [
"cs.LG",
"cs.AI"
] |
The irregular domain and lack of ordering make it challenging to design deep
neural networks for point cloud processing. This paper presents a novel
framework named Point Cloud Transformer (PCT) for point cloud learning. PCT is
based on Transformer, which achieves huge success in natural language
processing and displays great potential in image processing. It is inherently
permutation invariant for processing a sequence of points, making it
well-suited for point cloud learning. To better capture local context within
the point cloud, we enhance input embedding with the support of farthest point
sampling and nearest neighbor search. Extensive experiments demonstrate that
PCT achieves state-of-the-art performance on shape classification, part
segmentation and normal estimation tasks. | [
"cs.CV"
] |
This paper considers a distributed reinforcement learning problem in which a
network of multiple agents aim to cooperatively maximize the globally averaged
return through communication with only local neighbors. A randomized
communication-efficient multi-agent actor-critic algorithm is proposed for
possibly unidirectional communication relationships depicted by a directed
graph. It is shown that the algorithm can solve the problem for strongly
connected graphs by allowing each agent to transmit only two scalar-valued
variables at one time. | [
"cs.LG",
"cs.MA",
"stat.ML"
] |
Transformers have made much progress in dealing with visual tasks. However,
existing vision transformers still do not possess an ability that is important
to visual input: building the attention among features of different scales. The
reasons for this problem are two-fold: (1) Input embeddings of each layer are
equal-scale without cross-scale features; (2) Some vision transformers
sacrifice the small-scale features of embeddings to lower the cost of the
self-attention module. To remedy this defect, we propose Cross-scale Embedding
Layer (CEL) and Long Short Distance Attention (LSDA). In particular, CEL blends
each embedding with multiple patches of different scales, providing the model
with cross-scale embeddings. LSDA splits the self-attention module into a
short-distance and long-distance one, also lowering the cost but keeping both
small-scale and large-scale features in embeddings. Through these two designs,
we achieve cross-scale attention. Besides, we propose dynamic position bias for
vision transformers to make the popular relative position bias apply to
variable-sized images. Based on these proposed modules, we construct our vision
architecture called CrossFormer. Experiments show that CrossFormer outperforms
other transformers on several representative visual tasks, especially object
detection and segmentation. The code has been released:
https://github.com/cheerss/CrossFormer. | [
"cs.CV",
"cs.LG"
] |
Empirical mode decomposition (EMD) has developed into a prominent tool for
adaptive, scale-based signal analysis in various fields like robotics, security
and biomedical engineering. Since the dramatic increase in data volume places
higher demands on real-time signal analysis, it is difficult for existing EMD
and its variants to balance growing data dimensionality against the speed of
signal analysis. In order to decompose
multi-dimensional signals at a faster speed, we present a novel
signal-serialization method (serial-EMD), which concatenates multi-variate or
multi-dimensional signals into a one-dimensional signal and uses various
one-dimensional EMD algorithms to decompose it. To verify the effects of the
proposed method, synthetic multi-variate time series, artificial 2D images with
various textures and real-world facial images are tested. Compared with
existing multi-EMD algorithms, the decomposition time is significantly
reduced. In addition, the results of facial recognition with Intrinsic Mode
Functions (IMFs) extracted using our method can achieve a higher accuracy than
those obtained by existing multi-EMD algorithms, which demonstrates the
superior performance of our method in terms of the quality of IMFs.
Furthermore, this method provides a new perspective for optimizing existing
EMD algorithms: transforming the structure of the input signal rather than
being constrained to developing envelope computation techniques or signal
decomposition methods. In summary, the study suggests that the serial-EMD
technique is a highly competitive and fast alternative for multi-dimensional
signal analysis. | [
"cs.CV"
] |
Generative Adversarial Networks (GANs) have achieved state-of-the-art
performance for several image generation and manipulation tasks. Different
works have improved the limited understanding of the latent space of GANs by
embedding images into specific GAN architectures to reconstruct the original
images. We present a novel StyleGAN-based autoencoder architecture, which can
reconstruct images with very high quality across several data domains. We
demonstrate a previously unknown degree of generalizability by training the
encoder and decoder independently and on different datasets. Furthermore, we
provide new insights about the significance and capabilities of noise inputs of
the well-known StyleGAN architecture. Our proposed architecture can handle up
to 40 images per second on a single GPU, which is approximately 28x faster than
previous approaches. Finally, our model also shows promising results, when
compared to the state-of-the-art on the image denoising task, although it was
not explicitly designed for this task. | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
Deep generative models for graphs have exhibited promising performance in
ever-increasing domains such as the design of molecules (i.e., graphs of atoms)
and structure prediction of proteins (i.e., graphs of amino acids). Existing work
typically focuses on static rather than dynamic graphs, which are actually very
important in the applications such as protein folding, molecule reactions, and
human mobility. Extending existing deep generative models from static to
dynamic graphs is a challenging task, which requires handling the
factorization of static and dynamic characteristics as well as mutual
interactions among node and edge patterns. This paper proposes a novel
framework of factorized deep generative models to achieve interpretable dynamic
graph generation. Various generative models are proposed to characterize
conditional independence among node, edge, static, and dynamic factors. Then,
variational optimization strategies as well as dynamic graph decoders are
proposed based on newly designed factorized variational autoencoders and
recurrent graph deconvolutions. Extensive experiments on multiple datasets
demonstrate the effectiveness of the proposed models. | [
"cs.LG",
"cs.SI"
] |
There is great interest, as well as many challenges, in applying
reinforcement learning (RL) to recommendation systems. In this setting, an
online user is the environment; neither the reward function nor the environment
dynamics are clearly defined, making the application of RL challenging. In this
paper, we propose a novel model-based reinforcement learning framework for
recommendation systems, where we develop a generative adversarial network to
imitate user behavior dynamics and learn her reward function. Using this user
model as the simulation environment, we develop a novel Cascading DQN algorithm
to obtain a combinatorial recommendation policy which can handle a large number
of candidate items efficiently. In our experiments with real data, we show this
generative adversarial user model can better explain user behavior than
alternatives, and the RL policy based on this model can lead to a better
long-term reward for the user and higher click rate for the system. | [
"cs.LG",
"cs.IR",
"stat.ML"
] |
Recent domain adaptation methods have demonstrated impressive improvement on
unsupervised domain adaptation problems. However, in the semi-supervised domain
adaptation (SSDA) setting where the target domain has a few labeled instances
available, these methods can fail to improve performance. Inspired by the
effectiveness of pseudo-labels in domain adaptation, we propose a reinforcement
learning based selective pseudo-labeling method for semi-supervised domain
adaptation. It is difficult for conventional pseudo-labeling methods to balance
the correctness and representativeness of pseudo-labeled data. To address this
limitation, we develop a deep Q-learning model to select both accurate and
representative pseudo-labeled instances. Moreover, motivated by the large margin
loss's capacity for learning discriminative features from little data, we
further propose a novel target margin loss for our base model training to
improve its discriminability. Our proposed method is evaluated on several
benchmark datasets for SSDA, and demonstrates superior performance over all
compared methods. | [
"cs.CV"
] |
With current and upcoming imaging spectrometers, automated band analysis
techniques are needed to enable efficient identification of the most informative
bands to facilitate optimized processing of spectral data into estimates of
biophysical variables. This paper introduces an automated spectral band
analysis tool (BAT) based on Gaussian processes regression (GPR) for the
spectral analysis of vegetation properties. The GPR-BAT procedure sequentially
backwards removes the least contributing band in the regression model for a
given variable until only one band is kept. GPR-BAT is implemented within the
framework of the free ARTMO's MLRA (machine learning regression algorithms)
toolbox, which is dedicated to transforming optical remote sensing images into
biophysical products. GPR-BAT allows one (1) to identify the most informative
bands in relating spectral data to a biophysical variable, and (2) to find the
smallest number of bands that preserves optimized, accurate predictions. This
study concludes that judicious band selection of hyperspectral data is
strictly required for optimal vegetation property mapping. | [
"cs.CV",
"eess.IV",
"stat.AP"
] |
Generating a novel image by manipulating two input images is an interesting
research problem in the study of generative adversarial networks (GANs). We
propose a new GAN-based network that generates a fusion image with the identity
of input image x and the shape of input image y. Our network can simultaneously
train on more than two image datasets in an unsupervised manner. We define an
identity loss L_I to capture the identity of image x and a shape loss L_S to
capture the shape of y. In addition, we propose a novel training method called
Min-Patch training to focus the generator on crucial parts of an image, rather
than its entirety. We show qualitative results on the VGG Youtube Pose dataset,
Eye dataset (MPIIGaze and UnityEyes), and the Photo-Sketch-Cartoon dataset. | [
"cs.CV"
] |