text (stringlengths 29–3.31k) | label (sequencelengths 1–11) |
---|---|
Learning on point clouds is in high demand because the point cloud is a
common type of geometric data and can help robots understand environments
robustly. However, the point cloud is sparse, unstructured, and unordered, and
therefore cannot be recognized accurately by either a traditional convolutional
neural network (CNN) or a recurrent neural network (RNN). Fortunately, a graph
convolutional neural network (Graph CNN) can process sparse and unordered data.
Hence, in this paper we propose a linked dynamic graph CNN (LDGCNN) to classify
and segment point clouds directly. We remove the transformation network, link
hierarchical features from dynamic graphs, freeze the feature extractor, and
retrain the classifier to increase the performance of LDGCNN. We explain our
network using theoretical analysis and visualization. Through experiments, we
show that the proposed LDGCNN achieves state-of-the-art performance on two
standard datasets: ModelNet40 and ShapeNet. | [
"cs.CV"
] |
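The abstract above builds on dynamic graph CNNs, which construct a k-nearest-neighbour graph over the points and form edge features from each point and its neighbours. Below is a minimal NumPy sketch of that graph-feature step only; the function name and the choice k=20 are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def knn_edge_features(points, k=20):
    """Edge features [x_i, x_j - x_i] over a k-NN graph, as used in dynamic graph CNNs.

    points : (n, d) array of per-point features (e.g. xyz coordinates).
    Returns an (n, k, 2d) tensor of edge features for a shared MLP to consume.
    """
    # Pairwise squared distances between all points.
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    # Indices of the k nearest neighbours of each point, excluding the point itself.
    idx = np.argsort(d2, axis=1)[:, 1:k + 1]
    neighbours = points[idx]                               # (n, k, d)
    central = np.repeat(points[:, None, :], k, axis=1)     # (n, k, d)
    return np.concatenate([central, neighbours - central], axis=-1)

# Example: 1024 random 3D points.
cloud = np.random.rand(1024, 3)
print(knn_edge_features(cloud, k=20).shape)  # (1024, 20, 6)
```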
In this paper, we propose a framework for disentangling the appearance and
geometry representations in the face recognition task. To provide supervision
for this aim, we generate geometrically identical faces by incorporating
spatial transformations. We demonstrate that the proposed approach enhances the
performance of deep face recognition models by assisting the training process
in two ways. First, it enforces the early and intermediate convolutional layers
to learn more representative features that satisfy the properties of
disentangled embeddings. Second, it augments the training set by altering faces
geometrically. Through extensive experiments, we demonstrate that integrating
the proposed approach into state-of-the-art face recognition methods
effectively improves their performance on challenging datasets, such as LFW,
YTF, and MegaFace. Both theoretical and practical aspects of the method are
analyzed rigorously through ablation studies and knowledge transfer
tasks. Furthermore, we show that the knowledge learned by the proposed method
can benefit other face-related tasks, such as attribute prediction. | [
"cs.CV",
"cs.LG"
] |
Weakly-supervised temporal action localization is a problem of learning an
action localization model with only video-level action labeling available. The
general framework largely relies on the classification activation, which
employs an attention model to identify the action-related frames and then
categorizes them into different classes. Such a method results in the
action-context confusion issue: context frames near action clips tend to be
recognized as action frames themselves, since they are closely related to the
specific classes. To solve the problem, in this paper we propose to model the
class-agnostic frame-wise probability conditioned on the frame attention using
conditional Variational Auto-Encoder (VAE). With the observation that the
context exhibits notable difference from the action at representation level, a
probabilistic model, i.e., conditional VAE, is learned to model the likelihood
of each frame given the attention. By maximizing the conditional probability
with respect to the attention, the action and non-action frames are well
separated. Experiments on THUMOS14 and ActivityNet1.2 demonstrate the advantage
of our method and its effectiveness in handling the action-context confusion
problem. Code is now available on GitHub. | [
"cs.CV"
] |
In the field of pattern recognition research, deep neural networks running on
improved computing hardware have recently attracted attention because of their
superior accuracy compared to conventional methods. Deep neural networks
simulate the human visual system and achieve human-equivalent accuracy in image
classification, object detection, and segmentation. This
chapter introduces the basic structure of deep neural networks that simulate
human neural networks. Then we identify the operational processes and
applications of conditional generative adversarial networks, which are being
actively researched based on the bottom-up and top-down mechanisms, the most
important functions of the human visual perception process. Finally, recent
developments in training strategies for effective learning of complex deep
neural networks are addressed. | [
"cs.CV",
"cs.LG"
] |
While activity recognition from inertial sensors holds potential for mobile
health, differences in sensing platforms and user movement patterns cause
performance degradation. Aiming to address these challenges, we propose a
transfer learning framework, TransFall, for sensor-based activity recognition.
TransFall's design contains a two-tier data transformation, a label estimation
layer, and a model generation layer to recognize activities for the new
scenario. We validate TransFall analytically and empirically. | [
"cs.LG",
"cs.HC",
"stat.ML"
] |
In recent years, the consolidation of deep neural network architectures for
information extraction in document images has brought large improvements in the
performance of each of the tasks involved in this process, consisting of text
localization, transcription, and named entity recognition. However, this
process is traditionally performed with separate methods for each task. In this
work, we propose an end-to-end model that combines a one-stage object detection
network with branches for the recognition of text and named entities
respectively in a way that shared features can be learned simultaneously from
the training error of each of the tasks. By doing so, the model jointly performs
handwritten text detection, transcription, and named entity recognition at page
level with a single feed-forward step. We exhaustively evaluate our approach on
different datasets, discussing its advantages and limitations compared to
sequential approaches. The results show that the model is capable of benefiting
from shared features for simultaneously solving interdependent tasks. | [
"cs.CV"
] |
We present Graph Neural Diffusion (GRAND) that approaches deep learning on
graphs as a continuous diffusion process and treats Graph Neural Networks
(GNNs) as discretisations of an underlying PDE. In our model, the layer
structure and topology correspond to the discretisation choices of temporal and
spatial operators. Our approach allows a principled development of a broad new
class of GNNs that are able to address the common plights of graph learning
models such as depth, oversmoothing, and bottlenecks. Key to the success of our
models is stability with respect to perturbations in the data, which we address
for both implicit and explicit discretisation schemes. We develop
linear and nonlinear versions of GRAND, which achieve competitive results on
many standard graph benchmarks. | [
"cs.LG",
"stat.ML"
] |
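The GRAND abstract above treats GNN layers as discretisation steps of a diffusion PDE on the graph. The sketch below shows only the simplest explicit (forward-Euler) discretisation with a fixed, degree-normalised diffusivity; the learned attention-based diffusivity and the implicit schemes mentioned in the abstract are not reproduced, and the step size tau is an arbitrary assumption.

```python
import numpy as np

def explicit_graph_diffusion(x, adj, tau=0.2, steps=8):
    """Forward-Euler steps of dX/dt = (A_hat - I) X on a graph.

    x   : (n, d) node features
    adj : (n, n) non-negative adjacency matrix
    Each Euler step plays the role of one "layer" in the PDE view of GNNs.
    """
    deg = adj.sum(axis=1, keepdims=True)
    a_hat = adj / np.clip(deg, 1e-12, None)   # row-normalised adjacency
    for _ in range(steps):
        x = x + tau * (a_hat @ x - x)
    return x

# Tiny example: a 3-node path graph with 4-dimensional features.
adj = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
feats = np.random.randn(3, 4)
print(explicit_graph_diffusion(feats, adj).shape)  # (3, 4)
```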
Deep model compression has been extensively studied, and state-of-the-art
methods can now achieve high compression ratios with minimal accuracy loss.
This paper studies model compression through a different lens: could we
compress models without hurting their robustness to adversarial attacks, in
addition to maintaining accuracy? Previous literature suggested that the goals
of robustness and compactness might sometimes contradict each other. We propose a novel
Adversarially Trained Model Compression (ATMC) framework. ATMC constructs a
unified constrained optimization formulation, where existing compression means
(pruning, factorization, quantization) are all integrated into the constraints.
An efficient algorithm is then developed. An extensive set of experiments is
presented, demonstrating that ATMC obtains a remarkably more favorable trade-off
among model size, accuracy, and robustness than currently available
alternatives in various settings. The code is publicly available at:
https://github.com/shupenggui/ATMC. | [
"cs.LG",
"stat.ML"
] |
Most graph neural network models learn embeddings of nodes in static
attributed graphs for predictive analysis. Recent attempts have been made to
learn temporal proximity of the nodes. We find that real dynamic attributed
graphs exhibit complex co-evolution of node attributes and graph structure.
Learning node embeddings for forecasting change of node attributes and birth
and death of links over time remains an open problem. In this work, we present
a novel framework called CoEvoGNN for modeling dynamic attributed graph
sequence. It preserves the impact of earlier graphs on the current graph by
embedding generation through the sequence. It has a temporal self-attention
mechanism to model long-range dependencies in the evolution. Moreover, CoEvoGNN
optimizes model parameters jointly on two dynamic tasks, attribute inference
and link prediction over time, so that the model can capture the co-evolutionary
patterns of attribute change and link formation. This framework can be adapted to
any graph neural network algorithm, so we implemented and investigated three methods
based on it: CoEvoGCN, CoEvoGAT, and CoEvoSAGE. Experiments demonstrate that the
framework (and its methods) outperforms strong baselines on predicting an entire
unseen graph snapshot of personal attributes and interpersonal links in dynamic
social graphs and financial graphs. | [
"cs.LG",
"stat.ML"
] |
In an ever-expanding set of research and application areas, deep neural
networks (DNNs) set the bar for algorithm performance. However, depending upon
additional constraints such as processing power and execution time limits, or
requirements such as verifiable safety guarantees, it may not be feasible to
actually use such high-performing DNNs in practice. Many techniques have been
developed in recent years to compress or distill complex DNNs into smaller,
faster or more understandable models and controllers. This work seeks to
identify reduced models that not only preserve a desired performance level, but
also, for example, succinctly explain the latent knowledge represented by a
DNN. We illustrate the effectiveness of the proposed approach on the evaluation
of decision tree variants and kernel machines in the context of benchmark
reinforcement learning tasks. | [
"cs.LG",
"cs.AI"
] |
Exploration in environments with sparse rewards has been a persistent problem
in reinforcement learning (RL). Many tasks are natural to specify with a sparse
reward, and manually shaping a reward function can result in suboptimal
performance. However, finding a non-zero reward is exponentially more difficult
with increasing task horizon or action dimensionality. This puts many
real-world tasks out of practical reach of RL methods. In this work, we use
demonstrations to overcome the exploration problem and successfully learn to
perform long-horizon, multi-step robotics tasks with continuous control such as
stacking blocks with a robot arm. Our method, which builds on top of Deep
Deterministic Policy Gradients and Hindsight Experience Replay, provides an
order-of-magnitude speedup over RL on simulated robotics tasks. It is simple
to implement and makes only the additional assumption that we can collect a
small set of demonstrations. Furthermore, our method is able to solve tasks not
solvable by either RL or behavior cloning alone, and often ends up
outperforming the demonstrator policy. | [
"cs.LG",
"cs.AI",
"cs.NE",
"cs.RO"
] |
Face parsing aims to predict pixel-wise labels for facial components of a
target face in an image. Existing approaches usually crop the target face from
the input image with respect to a bounding box calculated during
pre-processing, and thus can only parse inner facial Regions of
Interest~(RoIs). Peripheral regions like hair are ignored and nearby faces that
are partially included in the bounding box can cause distractions. Moreover,
these methods are only trained and evaluated on near-frontal portrait images
and thus their performance for in-the-wild cases has been unexplored. To
address these issues, this paper makes three contributions. First, we introduce
the iBugMask dataset for face parsing in the wild, which consists of 21,866
training images and 1,000 testing images. The training images are obtained by
augmenting an existing dataset with large face poses. The testing images are
manually annotated with $11$ facial regions and there are large variations in
sizes, poses, expressions and backgrounds. Second, we propose the RoI Tanh-polar
transform that warps the whole image to a Tanh-polar representation with a
fixed ratio between the face area and the context, guided by the target
bounding box. The new representation contains all information in the original
image, and allows for rotation equivariance in the convolutional neural
networks~(CNNs). Third, we propose a hybrid residual representation learning
block, coined HybridBlock, that contains convolutional layers in both the
Tanh-polar space and the Tanh-Cartesian space, allowing for receptive fields of
different shapes in CNNs. Through extensive experiments, we show that the
proposed method improves the state-of-the-art for face parsing in the wild and
does not require facial landmarks for alignment. | [
"cs.CV"
] |
In this work, a region-based Deep Convolutional Neural Network framework is
proposed for document structure learning. The contribution of this work
involves efficient training of region-based classifiers and effective
ensembling for document image classification. A primary level of `inter-domain'
transfer learning is used by exporting weights from a pre-trained VGG16
architecture on the ImageNet dataset to train a document classifier on whole
document images. Exploiting the nature of region-based influence modelling, a
secondary level of `intra-domain' transfer learning is used for rapid training
of deep learning models for image segments. Finally, stacked generalization
based ensembling is utilized for combining the predictions of the base deep
neural network models. The proposed method achieves state-of-the-art accuracy
of 92.2% on the popular RVL-CDIP document image dataset, exceeding benchmarks
set by existing algorithms. | [
"cs.CV",
"cs.LG"
] |
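A minimal PyTorch sketch of the 'inter-domain' transfer step described above: initialising from ImageNet-pretrained VGG16 weights and retraining a classifier head on document images. The 16-class head matches RVL-CDIP, but the freezing choice and training details are assumptions, not the paper's exact recipe.

```python
import torch.nn as nn
from torchvision import models

def build_document_classifier(num_classes=16, freeze_backbone=True):
    """VGG16 pretrained on ImageNet, with its final layer replaced for document classes."""
    model = models.vgg16(weights=models.VGG16_Weights.DEFAULT)
    if freeze_backbone:
        for p in model.features.parameters():
            p.requires_grad = False        # keep the ImageNet convolutional features fixed
    in_feats = model.classifier[-1].in_features
    model.classifier[-1] = nn.Linear(in_feats, num_classes)
    return model

# The returned model can then be fine-tuned on whole document images (e.g. RVL-CDIP).
clf = build_document_classifier()
```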
Our research is focused on understanding and applying biological memory
transfers to new AI systems that can fundamentally improve their performance,
throughout their fielded lifetime experience. We leverage current understanding
of biological memory transfer to arrive at AI algorithms for memory
consolidation and replay. In this paper, we propose the use of generative
memory that can be recalled in batch samples to train a multi-task agent in a
pseudo-rehearsal manner. We show results motivating the need for task-agnostic
separation of latent space for the generative memory to address issues of
catastrophic forgetting in lifelong learning. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
The problem of topic modeling can be seen as a generalization of the
clustering problem, in that it posits that observations are generated due to
multiple latent factors (e.g., the words in each document are generated as a
mixture of several active topics, as opposed to just one). This increased
representational power comes at the cost of a more challenging unsupervised
learning problem of estimating the topic probability vectors (the distributions
over words for each topic), when only the words are observed and the
corresponding topics are hidden.
We provide a simple and efficient learning procedure that is guaranteed to
recover the parameters for a wide class of mixture models, including the
popular latent Dirichlet allocation (LDA) model. For LDA, the procedure
correctly recovers both the topic probability vectors and the prior over the
topics, using only trigram statistics (i.e., third order moments, which may be
estimated with documents containing just three words). The method, termed
Excess Correlation Analysis (ECA), is based on a spectral decomposition of low
order moments (third and fourth order) via two singular value decompositions
(SVDs). Moreover, the algorithm is scalable since the SVD operations are
carried out on $k\times k$ matrices, where $k$ is the number of latent factors
(e.g. the number of topics), rather than in the $d$-dimensional observed space
(typically $d \gg k$). | [
"cs.LG",
"stat.ML"
] |
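The ECA abstract above relies on low-order moments that can be estimated from documents of just three words. The tiny NumPy sketch below only shows how such empirical trigram statistics are accumulated; the spectral decompositions that ECA performs on them are not reproduced, and the word-index encoding is an assumption.

```python
import numpy as np

def trigram_moments(docs, vocab_size):
    """Empirical second- and third-order word co-occurrence moments.

    docs : iterable of (w1, w2, w3) word-index triples, one per document.
    Returns estimates of the pairwise and triple co-occurrence probabilities.
    """
    pairs = np.zeros((vocab_size, vocab_size))
    triples = np.zeros((vocab_size, vocab_size, vocab_size))
    n = 0
    for w1, w2, w3 in docs:
        pairs[w1, w2] += 1.0
        triples[w1, w2, w3] += 1.0
        n += 1
    return pairs / n, triples / n

# Example with a 5-word vocabulary and three toy documents.
m2, m3 = trigram_moments([(0, 1, 2), (1, 1, 4), (3, 0, 2)], vocab_size=5)
print(m2.shape, m3.shape)  # (5, 5) (5, 5, 5)
```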
Transfer learning has emerged as a powerful technique for improving the
performance of machine learning models on new domains where labeled training
data may be scarce. In this approach a model trained for a source task, where
plenty of labeled training data is available, is used as a starting point for
training a model on a related target task with only a small amount of labeled training data.
Despite recent empirical success of transfer learning approaches, the benefits
and fundamental limits of transfer learning are poorly understood. In this
paper we develop a statistical minimax framework to characterize the
fundamental limits of transfer learning in the context of regression with
linear and one-hidden layer neural network models. Specifically, we derive a
lower-bound for the target generalization error achievable by any algorithm as
a function of the number of labeled source and target data as well as
appropriate notions of similarity between the source and target tasks. Our
lower bound provides new insights into the benefits and limitations of transfer
learning. We further corroborate our theoretical findings with various
experiments. | [
"cs.LG",
"cs.IT",
"math.IT",
"stat.ML"
] |
Existing techniques for certifying the robustness of models for discrete data
either work only for a small class of models or are general at the expense of
efficiency or tightness. Moreover, they do not account for sparsity in the
input which, as our findings show, is often essential for obtaining non-trivial
guarantees. We propose a model-agnostic certificate based on the randomized
smoothing framework which subsumes earlier work and is tight, efficient, and
sparsity-aware. Its computational complexity does not depend on the number of
discrete categories or the dimension of the input (e.g. the graph size), making
it highly scalable. We show the effectiveness of our approach on a wide variety
of models, datasets, and tasks -- specifically highlighting its use for Graph
Neural Networks. So far, obtaining provable guarantees for GNNs has been
difficult due to the discrete and non-i.i.d. nature of graph data. Our method
can certify any GNN and handles perturbations to both the graph structure and
the node attributes. | [
"cs.LG",
"cs.CR",
"cs.SI",
"stat.ML"
] |
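For a certificate like the one above, the smoothed classifier is evaluated by Monte-Carlo sampling from a sparsity-aware noise distribution that flips ones and zeros with different probabilities. The sketch below shows only that sampling step for binary inputs; the flip probabilities and the toy classifier are illustrative assumptions, and no certified radius is computed.

```python
import numpy as np

def smoothed_majority_class(classifier, x, p_plus=0.01, p_minus=0.6, n_samples=1000, seed=0):
    """Majority vote of `classifier` under sparsity-aware flip noise on a binary vector x.

    Zeros are flipped to ones with probability p_plus, ones to zeros with p_minus,
    so sparse inputs are mostly preserved.
    """
    rng = np.random.default_rng(seed)
    votes = {}
    for _ in range(n_samples):
        u = rng.random(x.shape)
        flip = np.where(x == 1, u < p_minus, u < p_plus)
        y = classifier(np.where(flip, 1 - x, x))
        votes[y] = votes.get(y, 0) + 1
    return max(votes, key=votes.get)

# Toy example: the "classifier" just counts active bits.
x = np.zeros(100, dtype=int); x[:5] = 1
print(smoothed_majority_class(lambda v: int(v.sum() > 3), x))
```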
Two optical flow estimation problems are addressed: i) occlusion estimation
and handling, and ii) estimation from image sequences longer than two frames.
The proposed ContinualFlow method estimates occlusions before flow, avoiding
the use of flow corrupted by occlusions for their estimation. We show that
providing occlusion masks as an additional input to flow estimation improves
the standard performance metric by more than 25\% on both KITTI and Sintel. As
a second contribution, a novel method for incorporating information from past
frames into flow estimation is introduced. The previous frame flow serves as an
input to occlusion estimation and as a prior in occluded regions, i.e. those
without visual correspondences. By continually using the previous frame flow,
ContinualFlow performance improves further by 18\% on KITTI and 7\% on Sintel,
achieving top performance on KITTI and Sintel. | [
"cs.CV"
] |
Single Image Super Resolution (SISR) techniques based on Super Resolution
Convolutional Neural Networks (SRCNN) are applied to micro-computed tomography
({\mu}CT) images of sandstone and carbonate rocks. Digital rock imaging is
limited by the capability of the scanning device resulting in trade-offs
between resolution and field of view, and super resolution methods tested in
this study aim to compensate for these limits. SRCNN models SR-Resnet, Enhanced
Deep SR (EDSR), and Wide-Activation Deep SR (WDSR) are used on the Digital Rock
Super Resolution 1 (DRSRD1) Dataset of 4x downsampled images, comprising 2000
2000 high resolution (800x800) raw micro-CT images of Bentheimer sandstone and
Estaillades carbonate. The trained models are applied to the validation and
test data within the dataset and show a 3-5 dB rise in image quality compared
to bicubic interpolation, with all tested models performing within a 0.1 dB
range. Difference maps indicate that edge sharpness is completely recovered in
images within the scope of the trained model, with only high-frequency,
noise-related detail loss. We find that, aside from the generation of high-resolution
images, a beneficial side effect of super resolution methods applied to
synthetically downgraded images is the removal of image noise while recovering
edgewise sharpness which is beneficial for the segmentation process. The model
is also tested against real low-resolution images of Bentheimer rock with image
augmentation to account for natural noise and blur. The SRCNN method is shown
to act as a preconditioner for image segmentation under these circumstances
which naturally leads to further future development and training of models that
segment an image directly. Image restoration by SRCNN on the rock images is of
significantly higher quality than traditional methods and suggests SRCNN
methods are a viable processing step in a digital rock workflow. | [
"cs.CV"
] |
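The 3-5 dB improvement quoted above is measured with peak signal-to-noise ratio against a bicubic baseline. A small NumPy helper for that comparison is sketched below; the dynamic-range argument is an assumption about how the images are normalised, not a detail from the paper.

```python
import numpy as np

def psnr(reference, estimate, max_val=1.0):
    """Peak signal-to-noise ratio in dB between a ground-truth image and a reconstruction."""
    mse = np.mean((np.asarray(reference, float) - np.asarray(estimate, float)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

# e.g. psnr(high_res, sr_output) - psnr(high_res, bicubic_upsampled) gives the dB gain.
```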
Humans and animals have the ability to reason and make predictions about
different courses of action at many time scales. In reinforcement learning,
option models (Sutton, Precup \& Singh, 1999; Precup, 2000) provide the
framework for this kind of temporally abstract prediction and reasoning.
Natural intelligent agents are also able to focus their attention on courses of
action that are relevant or feasible in a given situation, sometimes termed
affordable actions. In this paper, we define a notion of affordances for
options, and develop temporally abstract partial option models, that take into
account the fact that an option might be affordable only in certain situations.
We analyze the trade-offs between estimation and approximation error in
planning and learning when using such models, and identify some interesting
special cases. Additionally, we demonstrate empirically the potential impact of
partial option models on the efficiency of planning. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
In this paper, we introduce Random Path Generative Adversarial Network
(RPGAN) -- an alternative design of GANs that can serve as a tool for
generative model analysis. While the latent space of a typical GAN consists of
input vectors, randomly sampled from the standard Gaussian distribution, the
latent space of RPGAN consists of random paths in a generator network. As we
show, this design makes it possible to understand the factors of variation captured by
different generator layers, providing their natural interpretability. With
experiments on standard benchmarks, we demonstrate that RPGAN reveals several
interesting insights about the roles that different layers play in the image
generation process. Aside from interpretability, the RPGAN model also provides
competitive generation quality and allows efficient incremental learning on new
data. | [
"cs.CV",
"eess.IV"
] |
The Monte Carlo dropout method has proved to be a scalable and easy-to-use
approach for estimating the uncertainty of deep neural network predictions.
This approach was recently applied to Fault Detection and Diagnosis (FDD)
applications to improve the classification performance on incipient faults. In
this paper, we propose a novel approach of augmenting the classification model
with an additional unsupervised learning task. We justify our choice of
algorithm design via an information-theoretical analysis. Our experimental
results on three datasets from diverse application domains show that the
proposed method leads to improved fault detection and diagnosis performance,
especially on out-of-distribution examples including both incipient and unknown
faults. | [
"cs.LG",
"eess.SP",
"stat.ML"
] |
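The fault-detection abstract above builds on Monte Carlo dropout for uncertainty estimation. The PyTorch sketch below shows only that generic step, keeping dropout active at test time and averaging several stochastic forward passes; it is not the paper's augmented model, and the number of samples is an arbitrary assumption.

```python
import torch

def mc_dropout_predict(model, x, n_samples=30):
    """Predictive mean and a simple dispersion estimate from Monte Carlo dropout passes."""
    model.train()  # keeps dropout layers active at inference time
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=-1) for _ in range(n_samples)])
    model.eval()
    return probs.mean(dim=0), probs.std(dim=0)
```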
Black-box machine learning methods are now routinely used in
high-risk settings, like medical diagnostics, which demand uncertainty
quantification to avoid consequential model failures. Distribution-free
uncertainty quantification (distribution-free UQ) is a user-friendly paradigm
for creating statistically rigorous confidence intervals/sets for such
predictions. Critically, the intervals/sets are valid without distributional
assumptions or model assumptions, with explicit guarantees with finitely many
datapoints. Moreover, they adapt to the difficulty of the input; when the input
example is difficult, the uncertainty intervals/sets are large, signaling that
the model might be wrong. Without much work, one can use distribution-free
methods on any underlying algorithm, such as a neural network, to produce
confidence sets guaranteed to contain the ground truth with a user-specified
probability, such as 90%. Indeed, the methods are easy to understand and
general, applying to many modern prediction problems arising in the fields of
computer vision, natural language processing, deep reinforcement learning, and
so on. This hands-on introduction is aimed at a reader interested in the
practical implementation of distribution-free UQ, including conformal
prediction and related methods, who is not necessarily a statistician. We will
include many explanatory illustrations, examples, and code samples in Python,
with PyTorch syntax. The goal is to provide the reader a working understanding
of distribution-free UQ, allowing them to put confidence intervals on their
algorithms, with one self-contained document. | [
"cs.LG",
"cs.AI",
"math.ST",
"stat.ME",
"stat.ML",
"stat.TH"
] |
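Since the abstract above describes a hands-on introduction with Python code samples, a minimal split-conformal sketch in the same spirit is included here; it uses the common "one minus true-class softmax score" conformal score and is an illustrative assumption, not an excerpt from that document.

```python
import numpy as np

def conformal_prediction_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split conformal prediction sets with ~(1 - alpha) marginal coverage.

    cal_probs  : (n, K) softmax outputs on a held-out calibration set
    cal_labels : (n,)   integer labels of the calibration set
    test_probs : (m, K) softmax outputs on new inputs
    """
    n = len(cal_labels)
    # Conformal score: one minus the softmax score assigned to the true class.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected quantile of the calibration scores.
    k = min(int(np.ceil((n + 1) * (1 - alpha))), n)
    qhat = np.sort(scores)[k - 1]
    # A class enters the prediction set when its softmax score is large enough.
    return test_probs >= 1.0 - qhat   # (m, K) boolean membership matrix
```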
Stagewise training strategy is widely used for learning neural networks,
which runs a stochastic algorithm (e.g., SGD) starting with a relatively large
step size (aka learning rate) and geometrically decreasing the step size after
a number of iterations. It has been observed that the stagewise SGD has much
faster convergence than the vanilla SGD with a polynomially decaying step size
in terms of both training error and testing error. {\it But how to explain this
phenomenon has been largely ignored by existing studies.} This paper provides
some theoretical evidence for explaining this faster convergence. In
particular, we consider a stagewise training strategy for minimizing empirical
risk that satisfies the Polyak-\L ojasiewicz (PL) condition, which has been
observed/proved for neural networks and also holds for a broad family of convex
functions. For convex loss functions and two classes of "nice-behaviored"
non-convex objectives that are close to a convex function, we establish faster
convergence of stagewise training than the vanilla SGD under the PL condition
on both training error and testing error. Experiments on stagewise learning of
deep residual networks show that it satisfies one type of non-convexity
assumption and therefore can be explained by our theory. Of independent
interest, the testing error bounds for the considered non-convex loss functions
are dimensionality and norm independent. | [
"stat.ML",
"cs.LG",
"math.OC"
] |
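A tiny sketch contrasting the two step-size schedules compared in the abstract above: stagewise (constant within a stage, geometrically decreased between stages) versus polynomial decay. The constants are placeholders, not values from the paper.

```python
def stagewise_step_size(t, eta0=0.1, stage_length=10_000, decay=0.1):
    """Constant within each stage, shrunk geometrically when a new stage starts."""
    return eta0 * decay ** (t // stage_length)

def polynomial_step_size(t, eta0=0.1, power=0.5):
    """Vanilla polynomially decaying step size, for comparison."""
    return eta0 / (t + 1) ** power

# e.g. at iteration 25_000: stagewise gives 0.1 * 0.1**2 = 1e-3, polynomial ~6.3e-4.
```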
Standard methods in deep learning for natural language processing fail to
capture the compositional structure of human language that allows for
systematic generalization outside of the training distribution. However, human
learners readily generalize in this way, e.g. by applying known grammatical
rules to novel words. Inspired by work in neuroscience suggesting separate
brain systems for syntactic and semantic processing, we implement a
modification to standard approaches in neural machine translation, imposing an
analogous separation. The novel model, which we call Syntactic Attention,
substantially outperforms standard methods in deep learning on the SCAN
dataset, a compositional generalization task, without any hand-engineered
features or additional supervision. Our work suggests that separating syntactic
from semantic learning may be a useful heuristic for capturing compositional
structure. | [
"cs.LG",
"cs.CL",
"stat.ML"
] |
The monitoring and management of numerous and diverse time series data at
Alibaba Group calls for an effective and scalable time series anomaly detection
service. In this paper, we propose RobustTAD, a Robust Time series Anomaly
Detection framework by integrating robust seasonal-trend decomposition and
convolutional neural network for time series data. The seasonal-trend
decomposition can effectively handle complicated patterns in time series, while
significantly simplifying the architecture of the neural network,
which is an encoder-decoder architecture with skip connections. This
architecture can effectively capture the multi-scale information from time
series, which is very useful in anomaly detection. Due to the limited labeled
data in time series anomaly detection, we systematically investigate data
augmentation methods in both time and frequency domains. We also introduce
label-based weight and value-based weight in the loss function by utilizing the
unbalanced nature of the time series anomaly detection problem. Compared with
the widely used forecasting-based anomaly detection algorithms,
decomposition-based algorithms, traditional statistical algorithms, as well as
recent neural network based algorithms, RobustTAD performs significantly better
on public benchmark datasets. It is deployed as a public online service and
widely adopted in different business scenarios at Alibaba Group. | [
"cs.LG",
"eess.SP",
"stat.AP",
"stat.ML"
] |
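The RobustTAD pipeline above starts from a robust seasonal-trend decomposition before the neural detector. The sketch below uses statsmodels' STL (with its robust option) as a stand-in for that step; the synthetic series, period, and residual thresholding are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np
from statsmodels.tsa.seasonal import STL

# Synthetic series: seasonality + trend + noise + one injected spike.
t = np.arange(600)
y = np.sin(2 * np.pi * t / 50) + 0.002 * t + 0.1 * np.random.randn(600)
y[300] += 3.0

# Robust seasonal-trend decomposition; the residual is what a detector would inspect.
result = STL(y, period=50, robust=True).fit()
residual = y - result.trend - result.seasonal
anomalies = np.where(np.abs(residual) > 4 * np.std(residual))[0]
print(anomalies)  # expected to include index 300
```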
Recurrent Neural Networks were, until recently, one of the best ways to
capture temporal dependencies in sequences. However, with the introduction of
the Transformer, it has been proven that an architecture with only
attention mechanisms and no RNN can improve on the results in various
sequence processing tasks (e.g. NLP). Multiple studies since then have shown
that similar approaches can be applied for images, point clouds, video, audio
or time series forecasting. Furthermore, solutions such as the Perceiver or the
Informer have been introduced to expand on the applicability of the
Transformer. Our main objective is testing and evaluating the effectiveness of
applying Transformer-like models on time series data, tackling susceptibility
to anomalies, context awareness and space complexity by fine-tuning the
hyperparameters, preprocessing the data, applying dimensionality reduction or
convolutional encodings, etc. We are also looking at the problem of next-frame
prediction and exploring ways to modify existing solutions in order to achieve
higher performance and learn generalized knowledge. | [
"cs.LG"
] |
Information Extraction from visual documents enables convenient and
intelligent assistance to end users. We present a Neighborhood-based
Information Extraction (NIE) approach that uses contextual language models and
pays attention to the local neighborhood context in the visual documents to
improve information extraction accuracy. We collect two different visual
document datasets and show that our approach outperforms the state-of-the-art
global context-based IE technique. In fact, NIE outperforms existing approaches
in both small and large model sizes. Our on-device implementation of NIE on a
mobile platform that generally requires small models showcases NIE's usefulness
in practical real-world applications. | [
"cs.LG",
"cs.IR"
] |
Automated monitoring and analysis of passenger movement in safety-critical
parts of transport infrastructures represent a relevant visual surveillance
task. Recent breakthroughs in visual representation learning and spatial
sensing opened up new possibilities for detecting and tracking humans and
objects within a 3D spatial context. This paper proposes a flexible analysis
scheme and a thorough evaluation of various processing pipelines to detect and
track humans on a ground plane, calibrated automatically via stereo depth and
pedestrian detection. We consider multiple combinations within a set of RGB-
and depth-based detection and tracking modalities. We exploit the modular
concepts of Meshroom [2] and demonstrate its use as a generic vision processing
pipeline and scalable evaluation framework. Furthermore, we introduce a novel
open RGB-D railway platform dataset with annotations to support research
activities in automated RGB-D surveillance. We present quantitative results for
multiple object detection and tracking for various algorithmic combinations on
our dataset. Results indicate that the combined use of depth-based spatial
information and learned representations yields substantially enhanced detection
and tracking accuracies. As demonstrated, these enhancements are especially
pronounced in adverse situations when occlusions and objects not captured by
learned representations are present. | [
"cs.CV"
] |
Karyotyping is a process in which chromosomes in a dividing cell are properly
stained, identified and displayed in a standard format, which helps geneticists
study and diagnose genetic factors behind various genetic diseases and cancer.
M-FISH (Multiplex Fluorescent In-Situ Hybridization) provides
color karyotyping. In this paper, an automated method for M-FISH chromosome
segmentation based on watershed transform followed by naive Bayes
classification of each region using the features, mean and standard deviation,
is presented. Also, a post-processing step is added to re-classify the small
chromosome segments to the neighboring larger segment for reducing the chances
of misclassification. The approach provided improved accuracy when compared to
the pixel-by-pixel approach. The approach was tested on 40 images from the
dataset and achieved an accuracy of 84.21%. | [
"cs.CV"
] |
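After the watershed step described above, each segment is classified from just its mean and standard deviation with a naive Bayes model. A small scikit-learn sketch of that per-region step is given below; it assumes the label image from the watershed transform and a fitted classifier are already available, and the feature pair comes directly from the abstract.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

def classify_regions(image, region_labels, clf: GaussianNB):
    """Assign a class to every watershed region using (mean, std) intensity features.

    image         : intensity image (same shape as region_labels)
    region_labels : integer label image from the watershed transform (0 = background)
    clf           : GaussianNB already fitted on (mean, std) pairs of known regions
    """
    predictions = {}
    for region_id in np.unique(region_labels):
        if region_id == 0:
            continue
        values = image[region_labels == region_id]
        features = [[values.mean(), values.std()]]
        predictions[int(region_id)] = int(clf.predict(features)[0])
    return predictions
```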
Heterogeneous face recognition between color images and depth images is a
much-desired capability for real-world applications where shape information is
available only in the gallery. In this paper, we propose a cross-modal
deep learning method as an effective and efficient workaround for this
challenge. Specifically, we begin with learning two convolutional neural
networks (CNNs) to extract 2D and 2.5D face features individually. Once
trained, they can serve as pre-trained models for another two-way CNN which
explores the correlated part between color and depth for heterogeneous
matching. Compared with most conventional cross-modal approaches, our method
additionally conducts accurate depth image reconstruction from single color
image with Conditional Generative Adversarial Nets (cGAN), and further enhances
the recognition performance by fusing multi-modal matching results. Through
both qualitative and quantitative experiments on benchmark FRGC 2D/3D face
database, we demonstrate that the proposed pipeline outperforms
state-of-the-art performance on heterogeneous face recognition and ensures a
highly efficient online stage. | [
"cs.CV"
] |
We present AIST++, a new multi-modal dataset of 3D dance motion and music,
along with FACT, a Full-Attention Cross-modal Transformer network for
generating 3D dance motion conditioned on music. The proposed AIST++ dataset
contains 5.2 hours of 3D dance motion in 1408 sequences, covering 10 dance
genres with multi-view videos with known camera poses -- the largest dataset of
this kind to our knowledge. We show that naively applying sequence models such
as transformers to this dataset for the task of music conditioned 3D motion
generation does not produce satisfactory 3D motion that is well correlated with
the input music. We overcome these shortcomings by introducing key changes in
its architecture design and supervision: the FACT model involves a deep cross-modal
transformer block with full-attention that is trained to predict $N$ future
motions. We empirically show that these changes are key factors in generating
long sequences of realistic dance motion that are well-attuned to the input
music. We conduct extensive experiments on AIST++ with user studies, where our
method outperforms recent state-of-the-art methods both qualitatively and
quantitatively. | [
"cs.CV",
"cs.GR",
"cs.MM"
] |
Federated learning (FL) has attracted increasing attention in recent years.
As a privacy-preserving collaborative learning paradigm, it enables a broader
range of applications, especially for computer vision and natural language
processing tasks. However, to date, there has been limited research on federated
learning over relational data, namely Knowledge Graphs (KGs). In this work, we
present a modified version of the graph neural network algorithm that performs
federated modeling over KGs across different participants. Specifically, to
tackle the inherent data heterogeneity issue and inefficiency in algorithm
convergence, we propose a novel optimization algorithm, named FedAlign, with 1)
optimal transportation (OT) for on-client personalization and 2) weight
constraint to speed up the convergence. Extensive experiments have been
conducted on several widely used datasets. Empirical results show that our
proposed method outperforms the state-of-the-art FL methods, such as FedAVG and
FedProx, with better convergence. | [
"cs.LG",
"cs.AI",
"68T07",
"I.2.6; I.2.11"
] |
In an attempt to gather a deeper understanding of how convolutional neural
networks (CNNs) reason about human-understandable concepts, we present a method
to infer labeled concept data from hidden layer activations and interpret the
concepts through a shallow decision tree. The decision tree can provide
information about which concepts a model deems important, as well as provide an
understanding of how the concepts interact with each other. Experiments
demonstrate that the extracted decision tree is capable of accurately
representing the original CNN's classifications at low tree depths, thus
encouraging human-in-the-loop understanding of discriminative concepts. | [
"cs.LG",
"stat.ML"
] |
We propose Significance-Offset Convolutional Neural Network, a deep
convolutional network architecture for regression of multivariate asynchronous
time series. The model is inspired by standard autoregressive (AR) models and
gating mechanisms used in recurrent neural networks. It involves an AR-like
weighting system, where the final predictor is obtained as a weighted sum of
adjusted regressors, while the weights are data-dependent functions learnt
through a convolutional network. The architecture was designed for applications
on asynchronous time series and is evaluated on such datasets: a hedge fund
proprietary dataset of over 2 million quotes for a credit derivative index, an
artificially generated noisy autoregressive series and UCI household
electricity consumption dataset. The proposed architecture achieves promising
results as compared to convolutional and recurrent neural networks. | [
"cs.LG"
] |
Large convolutional neural network models have recently demonstrated
impressive performance on video attention prediction. Conventionally, these
models require intensive computation and large memory. To address these
issues, we design an extremely light-weight network with ultrafast speed, named
UVA-Net. The network is constructed based on depth-wise convolutions and takes
low-resolution images as input. However, this straightforward acceleration
method will decrease performance dramatically. To this end, we propose a
coupled knowledge distillation strategy to augment and train the network
effectively. With this strategy, the model can further automatically discover
and emphasize implicit useful cues contained in the data. Both spatial and
temporal knowledge learned by the high-resolution complex teacher networks also
can be distilled and transferred into the proposed low-resolution light-weight
spatiotemporal network. Experimental results show that the performance of our
model is comparable to 11 state-of-the-art models in video attention
prediction, while it costs only 0.68 MB memory footprint, runs about 10,106 FPS
on GPU and 404 FPS on CPU, which is 206 times faster than previous models. | [
"cs.CV"
] |
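UVA-Net above is built from depth-wise convolutions to keep the memory footprint small. The PyTorch block below is the standard depthwise-separable convolution that this design refers to; the channel counts and kernel size are placeholders rather than the paper's architecture.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """A depthwise convolution (one filter per channel) followed by a 1x1 pointwise mix."""

    def __init__(self, in_channels, out_channels, kernel_size=3):
        super().__init__()
        self.depthwise = nn.Conv2d(in_channels, in_channels, kernel_size,
                                   padding=kernel_size // 2, groups=in_channels)
        self.pointwise = nn.Conv2d(in_channels, out_channels, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

# Example on a low-resolution input, as favoured by the network described above.
block = DepthwiseSeparableConv(16, 32)
print(block(torch.randn(1, 16, 56, 56)).shape)  # torch.Size([1, 32, 56, 56])
```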
This work introduces a new unsupervised representation learning technique
called Deep Convolutional Transform Learning (DCTL). By stacking convolutional
transforms, our approach is able to learn a set of independent kernels at
different layers. The features extracted in an unsupervised manner can then be
used to perform machine learning tasks, such as classification and clustering.
The learning technique relies on a well-founded alternating proximal
minimization scheme with established convergence guarantees. Our experimental
results show that the proposed DCTL technique outperforms its shallow version,
CTL, on several benchmark datasets. | [
"cs.LG",
"stat.ML"
] |
There are two main issues in RGB-D salient object detection: (1) how to
effectively integrate the complementarity from the cross-modal RGB-D data; (2)
how to prevent the contamination effect from the unreliable depth map. In fact,
these two problems are linked and intertwined, but the previous methods tend to
focus only on the first problem and ignore the quality of the depth map, which
may cause the model to fall into a sub-optimal state. In this
paper, we address these two issues in a holistic model synergistically, and
propose a novel network named DPANet to explicitly model the potentiality of
the depth map and effectively integrate the cross-modal complementarity. By
introducing the depth potentiality perception, the network can perceive the
potentiality of depth information in a learning-based manner, and guide the
fusion process of the two modalities to prevent contamination. The
gated multi-modality attention module in the fusion process exploits the
attention mechanism with a gate controller to capture long-range dependencies
from a cross-modal perspective. Experimental results compared with 15
state-of-the-art methods on 8 datasets demonstrate the validity of the proposed
approach both quantitatively and qualitatively. | [
"cs.CV"
] |
We establish finite-sample guarantees for a polynomial-time algorithm for
learning a nonlinear, nonparametric directed acyclic graphical (DAG) model from
data. The analysis is model-free and does not assume linearity, additivity,
independent noise, or faithfulness. Instead, we impose a condition on the
residual variances that is closely related to previous work on linear models
with equal variances. Compared to an optimal algorithm with oracle knowledge of
the variable ordering, the additional cost of the algorithm is linear in the
dimension $d$ and the number of samples $n$. Finally, we compare the proposed
algorithm to existing approaches in a simulation study. | [
"stat.ML",
"cs.LG",
"math.ST",
"stat.TH"
] |
Contemporary process-aware information systems can record the activities
generated during process execution. To leverage these process-specific
fine-granular data, process mining has recently emerged
as a promising research discipline. As an important branch of process mining,
predictive business process management pursues the objective of generating
forward-looking, predictive insights to shape business processes. In this
study, we propose a conceptual framework sought to establish and promote
understanding of decision-making environment, underlying business processes and
nature of the user characteristics for developing explainable business process
prediction solutions. Consequently, with regard to the theoretical and
practical implications of the framework, this study proposes a novel local
post-hoc explanation approach for a deep learning classifier that is expected
to facilitate the domain experts in justifying the model decisions. In contrast
to popular alternative perturbation-based local explanation approaches, this
study defines the local regions from the validation dataset by using the
intermediate latent space representations learned by the deep neural networks.
To validate the applicability of the proposed explanation method, the real-life
process log data delivered by the Volvo IT Belgium's incident management system
are used. The adopted deep learning classifier achieves good performance with
the Area Under the ROC Curve of 0.94. The generated local explanations are also
visualized and presented with relevant evaluation measures that are expected to
increase the users' trust in the black-box-model. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
The computational analysis of Mass Spectrometry Imaging (MSI) data aims at
the identification of interesting mass co-localizations and the visualization
of their lateral distribution in the sample, usually a tissue cross section.
But as the morphological structure of tissues and the different kinds of mass
co-localization naturally show a huge diversity, the selection and tuning of
the computational method is a time-consuming effort. In this work we address
the special problem of computationally grouping mass channel images according
to their similarities in their lateral distribution patterns. Such an analysis
is driven by the idea, that groups of molecules that feature a similar
distribution pattern may have a functional relation. But the selection of the
similarity function and other parameters is often done by a time-consuming and
unsatisfactory trial and error. We propose a new flexible workflow scheme
called SoRC (sum of ranked cluster indices) for automating this tuning step and
making it much more efficient. We test SoRC using three different data sets
acquired from the lab for three different kinds of samples (barley seed, mouse
bladder tissue, human PXE skin). We show that SoRC can be applied to score and
visualize the results obtained with the applied methods in a short time without
much effort. In our application example, the SoRC results for the three
data sets reveal that a) some well-known similarity functions are suited to
achieve good results for all three data sets and b) for MSI data featuring
a higher degree of irregularity, improved results can be achieved by applying
non-standard similarity functions. The SoRC scores computed with our approach
indicate that an automated testing and scoring of different methods for mass
channel image grouping can improve the final outcome of a study by finally
selecting the methods of the highest scores. | [
"cs.CV",
"eess.IV",
"q-bio.QM",
"stat.AP",
"stat.CO"
] |
Recurrent Neural Networks (RNNs) yield attractive properties for constructing
Intrusion Detection Systems (IDSs) for network data. With the rise of
ubiquitous Machine Learning (ML) systems, malicious actors have been catching
up quickly to find new ways to exploit ML vulnerabilities for profit. Recently
developed adversarial ML techniques focus on computer vision and their
applicability to network traffic is not straightforward: Network packets expose
fewer features than an image, are sequential and impose several constraints on
their features.
We show that despite these completely different characteristics, adversarial
samples can be generated reliably for RNNs. To understand a classifier's
potential for misclassification, we extend existing explainability techniques
and propose new ones, suitable particularly for sequential data. Applying them
shows that the first packets of a communication flow are already of crucial
importance and are likely to be targeted by attackers. Feature importance
methods show that even relatively unimportant features can be effectively
abused to generate adversarial samples. Since traditional evaluation metrics
such as accuracy are not sufficient for quantifying the adversarial threat, we
propose the Adversarial Robustness Score (ARS) for comparing IDSs, capturing a
common notion of adversarial robustness, and show that an adversarial training
procedure can significantly and successfully reduce the attack surface. | [
"cs.LG",
"cs.CR",
"cs.NI",
"stat.ML"
] |
Scaling adaptive traffic-signal control involves dealing with combinatorial
state and action spaces. Multi-agent reinforcement learning attempts to address
this challenge by distributing control to specialized agents. However,
specialization hinders generalization and transferability, and the
computational graphs underlying neural-network architectures -- dominating in
the multi-agent setting -- do not offer the flexibility to handle an arbitrary
number of entities, which changes both between road networks and over time as
vehicles traverse the network. We introduce Inductive Graph Reinforcement
Learning (IG-RL) based on graph-convolutional networks which adapts to the
structure of any road network, to learn detailed representations of
traffic-controllers and their surroundings. Our decentralized approach enables
learning of a transferable adaptive traffic-signal-control policy. After being
trained on an arbitrary set of road networks, our model can generalize to new
road networks, traffic distributions, and traffic regimes, with no additional
training and a constant number of parameters, enabling greater scalability
compared to prior methods. Furthermore, our approach can exploit the
granularity of available data by capturing the (dynamic) demand at both the
lane and the vehicle levels. The proposed method is tested on both road
networks and traffic settings never experienced during training. We compare
IG-RL to multi-agent reinforcement learning and domain-specific baselines. In
both synthetic road networks and in a larger experiment involving the control
of the 3,971 traffic signals of Manhattan, we show that different
instantiations of IG-RL outperform baselines. | [
"cs.LG",
"stat.ML"
] |
Recently, the Vision Transformer (ViT) has shown impressive performance on
high-level and low-level vision tasks. In this paper, we propose a new ViT
architecture, named Hybrid Local-Global Vision Transformer (HyLoG-ViT), for
single image dehazing. The HyLoG-ViT block consists of two paths, the local ViT
path and the global ViT path, which are used to capture local and global
dependencies. The hybrid features are fused via convolution layers. As a
result, the HyLoG-ViT reduces the computational complexity and introduces
locality in the networks. Then, the HyLoG-ViT blocks are incorporated within
our dehazing networks, which jointly learn the intrinsic image decomposition
and image dehazing. Specifically, the network consists of one shared encoder
and three decoders for reflectance prediction, shading prediction, and
haze-free image generation. The tasks of reflectance and shading prediction can
produce meaningful intermediate features that can serve as complementary
features for haze-free image generation. To effectively aggregate the
complementary features, we propose a complementary features selection module
(CFSM) to select the useful ones for image dehazing. Extensive experiments on
homogeneous, non-homogeneous, and nighttime dehazing tasks reveal that our
proposed Transformer-based dehazing network can achieve comparable or even
better performance than CNNs-based dehazing models. | [
"cs.CV",
"68U10 (Primary) 94A08, 54H30 (Secondary)",
"I.4.3; I.4.4"
] |
Most existing policy learning solutions require the learning agents to
receive high-quality supervision signals, e.g., rewards in reinforcement
learning (RL) or high-quality expert's demonstrations in behavioral cloning
(BC). Such high-quality supervision is either infeasible or prohibitively
expensive to obtain in practice. We aim for a unified framework that leverages
weak supervision to perform policy learning efficiently. To handle this
problem, we treat the "weak supervisions" as imperfect information coming from
a peer agent, and evaluate the learning agent's policy based on a "correlated
agreement" with the peer agent's policy (instead of simple agreements). Our way
of leveraging peer agent's information offers us a family of solutions that
learn effectively from weak supervisions with theoretical guarantees. Extensive
evaluations on tasks including RL with noisy reward, BC with weak
demonstrations and standard policy co-training (RL + BC) show that the proposed
approach leads to substantial improvements, especially when the complexity or
the noise of the learning environments grows. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
This work is about the use of regularized optimal-transport distances for
convex, histogram-based image segmentation. In the considered framework, fixed
exemplar histograms define a prior on the statistical features of the two
regions in competition. In this paper, we investigate the use of various
transport-based cost functions as discrepancy measures and rely on a
primal-dual algorithm to solve the obtained convex optimization problem. | [
"cs.CV"
] |
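The segmentation abstract above relies on regularized optimal-transport distances between fixed exemplar histograms and region histograms. The NumPy sketch below shows the entropic (Sinkhorn) regularisation commonly used for such distances; the regularisation strength and iteration count are assumptions, and the primal-dual segmentation algorithm itself is not reproduced.

```python
import numpy as np

def sinkhorn_distance(p, q, cost, eps=0.05, n_iter=200):
    """Entropy-regularised optimal-transport cost between two histograms p and q.

    p, q : non-negative histograms summing to one, shapes (n,) and (m,)
    cost : (n, m) ground-cost matrix between histogram bins
    """
    K = np.exp(-cost / eps)           # Gibbs kernel
    u = np.ones_like(p)
    for _ in range(n_iter):
        v = q / (K.T @ u)             # alternately rescale to match the two marginals
        u = p / (K @ v)
    transport_plan = u[:, None] * K * v[None, :]
    return float(np.sum(transport_plan * cost))

# Example: two histograms over 8 bins with absolute-difference ground cost.
bins = np.arange(8)
cost = np.abs(bins[:, None] - bins[None, :]).astype(float)
p = np.ones(8) / 8
q = np.eye(8)[3]
print(round(sinkhorn_distance(p, q, cost), 3))
```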
A popular paradigm for 3D point cloud registration is by extracting 3D
keypoint correspondences, then estimating the registration function from the
correspondences using a robust algorithm. However, many existing 3D keypoint
techniques tend to produce large proportions of erroneous correspondences or
outliers, which significantly increases the cost of robust estimation. An
alternative approach is to directly search for the subset of correspondences
that are pairwise consistent, without optimising the registration function.
This gives rise to the combinatorial problem of matching with pairwise
constraints. In this paper, we propose a very efficient maximum clique
algorithm to solve matching with pairwise constraints. Our technique combines
tree searching with efficient bounding and pruning based on graph colouring. We
demonstrate that, despite the theoretical intractability, many real problem
instances can be solved exactly and quickly (seconds to minutes) with our
algorithm, which makes our approach an excellent alternative to standard robust
techniques for 3D registration. | [
"cs.CV",
"I.4"
] |
In this paper, we introduce our submissions for the tasks of trimmed activity
recognition (Kinetics) and trimmed event recognition (Moments in Time) for
Activitynet Challenge 2018. In the two tasks, non-local neural networks and
temporal segment networks are implemented as our base models. Multi-modal cues
such as RGB image, optical flow and acoustic signal have also been used in our
method. We also propose new non-local-based models for further improvement on
the recognition accuracy. The final submissions after ensembling the models
achieve 83.5% top-1 accuracy and 96.8% top-5 accuracy on the Kinetics
validation set, 35.81% top-1 accuracy and 62.59% top-5 accuracy on the MIT
validation set. | [
"cs.CV"
] |
Gradient-based methods are often used for policy optimization in deep
reinforcement learning, despite being vulnerable to local optima and saddle
points. Although gradient-free methods (e.g., genetic algorithms or evolution
strategies) help mitigate these issues, poor initialization and local optima
are still concerns in highly nonconvex spaces. This paper presents a method for
policy optimization based on Monte-Carlo tree search and gradient-free
optimization. Our method, called Monte-Carlo tree search for policy
optimization (MCTSPO), provides a better exploration-exploitation trade-off
through the use of the upper confidence bound heuristic. We demonstrate
improved performance on reinforcement learning tasks with deceptive or sparse
reward functions compared to popular gradient-based and deep genetic algorithm
baselines. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
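The policy-optimisation method above relies on the upper-confidence-bound heuristic to balance exploration and exploitation in the search tree. The snippet below is just that generic UCB1 selection rule over child statistics; the data layout and exploration constant are assumptions, not the MCTSPO implementation.

```python
import math

def ucb_select(children, c=1.4):
    """Pick the child node maximising mean value plus a UCB exploration bonus.

    children : list of dicts with "visits" (int) and "total_value" (float) statistics.
    """
    total_visits = sum(child["visits"] for child in children) + 1

    def ucb_score(child):
        if child["visits"] == 0:
            return float("inf")            # expand unvisited children first
        exploit = child["total_value"] / child["visits"]
        explore = c * math.sqrt(math.log(total_visits) / child["visits"])
        return exploit + explore

    return max(children, key=ucb_score)

# Example: the rarely visited child wins thanks to its large exploration bonus.
stats = [{"visits": 30, "total_value": 21.0}, {"visits": 2, "total_value": 1.0}]
print(ucb_select(stats))
```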
Depth estimation is a fundamental issue in 4-D light field processing and
analysis. Although recent supervised learning-based light field depth
estimation methods have significantly improved the accuracy and efficiency of
traditional optimization-based ones, these methods rely on the training over
light field data with ground-truth depth maps which are challenging to obtain
or even unavailable for real-world light field data. Besides, due to the
inevitable gap (or domain difference) between real-world and synthetic data,
they may suffer from serious performance degradation when generalizing the
models trained with synthetic data to real-world data. By contrast, we propose
an unsupervised learning-based method, which does not require ground-truth
depth as supervision during training. Specifically, based on the basic
knowledge of the unique geometry structure of light field data, we present an
occlusion-aware strategy to improve the accuracy on occlusion areas, in which
we explore the angular coherence among subsets of the light field views to
estimate initial depth maps, and utilize a constrained unsupervised loss to
learn their corresponding reliability for final depth prediction. Additionally,
we adopt a multi-scale network with a weighted smoothness loss to handle the
textureless areas. Experimental results on synthetic data show that our method
can significantly shrink the performance gap between the previous unsupervised
method and supervised ones, and produce depth maps with accuracy comparable to
traditional methods at a significantly reduced computational cost. Moreover,
experiments on real-world datasets show that our method can avoid the domain
shift problem presented in supervised methods, demonstrating the great
potential of our method. | [
"cs.CV"
] |
Understanding how goal states control behavior is a question ripe for
interrogation by new methods from machine learning. These methods require large
and labeled datasets to train models. To annotate a large-scale image dataset
with observed search fixations, we collected 16,184 fixations from people
searching for either microwaves or clocks in a dataset of 4,366 images
(MS-COCO). We then used this behaviorally-annotated dataset and the machine
learning method of Inverse-Reinforcement Learning (IRL) to learn
target-specific reward functions and policies for these two target goals.
Finally, we used these learned policies to predict the fixations of 60 new
behavioral searchers (clock = 30, microwave = 30) in a disjoint test dataset of
kitchen scenes depicting both a microwave and a clock (thus controlling for
differences in low-level image contrast). We found that the IRL model predicted
behavioral search efficiency and fixation-density maps using multiple metrics.
Moreover, reward maps from the IRL model revealed target-specific patterns that
suggest, not just attention guidance by target features, but also guidance by
scene context (e.g., fixations along walls in the search of clocks). Using
machine learning and the psychologically-meaningful principle of reward, it is
possible to learn the visual features used in goal-directed attention control. | [
"cs.CV"
] |
A unified neural network structure is presented for joint 3D object detection
and point cloud segmentation in this paper. We leverage rich supervision from
both detection and segmentation labels rather than using just one of them. In
addition, an extension for single-stage object detectors is proposed, based
on the implicit function widely used in 3D scene and object understanding. The
extension branch takes the final feature map from the object detection module
as input, and produces an implicit function that generates semantic
distribution for each point for its corresponding voxel center. We demonstrated
the performance of our structure on nuScenes-lidarseg, a large-scale outdoor
dataset. Our solution achieves competitive results against state-of-the-art
methods in both 3D object detection and point cloud segmentation with little
additional computation load compared with object detection solutions. The
capability of the proposed method for efficient weakly supervised semantic
segmentation is also validated by experiments. | [
"cs.CV"
] |
Integrating model-free and model-based approaches in reinforcement learning
has the potential to achieve the high performance of model-free algorithms with
low sample complexity. However, this is difficult because an imperfect dynamics
model can degrade the performance of the learning algorithm, and in
sufficiently complex environments, the dynamics model will almost always be
imperfect. As a result, a key challenge is to combine model-based approaches
with model-free learning in such a way that errors in the model do not degrade
performance. We propose stochastic ensemble value expansion (STEVE), a novel
model-based technique that addresses this issue. By dynamically interpolating
between model rollouts of various horizon lengths for each individual example,
STEVE ensures that the model is only utilized when doing so does not introduce
significant errors. Our approach outperforms model-free baselines on
challenging continuous control benchmarks with an order-of-magnitude increase
in sample efficiency, and in contrast to previous model-based approaches,
performance does not degrade in complex environments. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
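The following Python sketch (numpy only) illustrates the interpolation idea behind STEVE from the preceding abstract: candidate value targets from rollouts of different horizon lengths are combined with weights inversely proportional to their ensemble variance, so horizons the model is uncertain about contribute less. The shapes, the inverse-variance rule, and the toy numbers are illustrative assumptions, not the authors' implementation.

import numpy as np

def steve_target(candidate_targets):
    """Combine per-horizon value targets by inverse-variance weighting.

    candidate_targets: array of shape (num_horizons, ensemble_size) holding,
    for each rollout horizon, the targets produced by an ensemble of
    models / Q-functions. Returns a single scalar target.
    """
    means = candidate_targets.mean(axis=1)                 # one mean per horizon
    variances = candidate_targets.var(axis=1) + 1e-8       # avoid division by zero
    weights = 1.0 / variances
    weights = weights / weights.sum()                      # normalize to a convex combination
    return float(np.dot(weights, means))

# Horizon 0 (pure model-free) is noisy, horizon 2 is confident: it dominates the target.
targets = np.array([[1.0, 3.0, 0.5, 2.5],
                    [1.8, 2.2, 2.0, 2.1],
                    [2.00, 2.01, 1.99, 2.02]])
print(steve_target(targets))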
The fragility of modern machine learning models has drawn a considerable
amount of attention from both academia and the public. While immense interests
were in either crafting adversarial attacks as a way to measure the robustness
of neural networks or devising worst-case analytical robustness verification
with guarantees, few methods could enjoy both scalability and robustness
guarantees at the same time. As an alternative to these attempts, randomized
smoothing adopts a different prediction rule that enables statistical
robustness arguments which easily scale to large networks. However, in this
paper, we point out the side effects of current randomized smoothing workflows.
Specifically, we articulate and prove two major points: 1) the decision
boundaries of smoothed classifiers will shrink, resulting in disparity in
class-wise accuracy; 2) applying noise augmentation in the training process
does not necessarily resolve the shrinking issue due to the inconsistent
learning objectives. | [
"cs.LG",
"stat.ML"
] |
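As a companion to the preceding abstract, here is a minimal Python sketch of the randomized-smoothing prediction rule it refers to: the smoothed classifier returns the majority class over Gaussian-perturbed copies of the input. The classify callable, the toy centroid classifier, and all parameters are illustrative assumptions.

import numpy as np

def smoothed_predict(classify, x, sigma=0.25, n_samples=1000, rng=None):
    """Majority-vote prediction of a randomized-smoothing classifier.

    classify: callable mapping a batch of inputs to integer class labels.
    x: a single input (1-D feature vector here, for simplicity).
    sigma: standard deviation of the isotropic Gaussian noise.
    """
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.normal(scale=sigma, size=(n_samples,) + x.shape)
    labels = classify(x[None, :] + noise)          # labels of the perturbed copies
    counts = np.bincount(labels)
    return int(np.argmax(counts))                  # most frequent class wins

# Toy base classifier: nearest of two fixed class centroids.
centroids = np.array([[0.0, 0.0], [1.0, 1.0]])
def toy_classify(batch):
    d = np.linalg.norm(batch[:, None, :] - centroids[None, :, :], axis=-1)
    return np.argmin(d, axis=1)

print(smoothed_predict(toy_classify, np.array([0.9, 0.8])))  # -> 1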
This paper presents and characterizes an Open Application Repository for
Federated Learning (OARF), a benchmark suite for federated machine learning
systems. Previously available benchmarks for federated learning have focused
mainly on synthetic datasets and use a very limited number of applications.
OARF includes different data partitioning methods (horizontal, vertical and
hybrid) as well as emerging applications in image, text and structured data,
which represent different scenarios in federated learning. Our characterization
shows that the benchmark suite is diverse in data size, distribution, feature
distribution and learning task complexity. We have developed reference
implementations, and evaluated the important aspects of federated learning,
including model accuracy, communication cost, differential privacy, secure
multiparty computation and vertical federated learning. | [
"cs.LG",
"stat.ML"
] |
Recent advancements in graph representation learning have led to the
emergence of condensed encodings that capture the main properties of a graph.
However, even though these abstract representations are powerful for downstream
tasks, they are not equally suitable for visualisation purposes. In this work,
we merge Mapper, an algorithm from the field of Topological Data Analysis
(TDA), with the expressive power of Graph Neural Networks (GNNs) to produce
hierarchical, topologically-grounded visualisations of graphs. These
visualisations not only help discern the structure of complex graphs but
also provide a means of understanding the models applied to them for solving
various tasks. We further demonstrate the suitability of Mapper as a
topological framework for graph pooling by mathematically proving an
equivalence with Min-Cut and Diff Pool. Building upon this framework, we
introduce a novel pooling algorithm based on PageRank, which obtains
competitive results with state of the art methods on graph classification
benchmarks. | [
"cs.LG",
"cs.SI",
"stat.ML"
] |
Our research aims to propose a new performance-explainability analytical
framework to assess and benchmark machine learning methods. The framework
details a set of characteristics that systematize the
performance-explainability assessment of existing machine learning methods. In
order to illustrate the use of the framework, we apply it to benchmark the
current state-of-the-art multivariate time series classifiers. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
The cost of some drugs and medical treatments has risen so much in recent years
that many patients are having to go without. A classification project could
make researchers more efficient.
One of the more surprising reasons behind the cost is how long it takes to
bring new treatments to market. Despite improvements in technology and science,
research and development continues to lag. In fact, finding a new treatment
takes, on average, more than 10 years and costs hundreds of millions of
dollars. In turn, greatly decreasing the cost of treatments can help ensure
these treatments get to patients faster. This work aims at solving a part of
this problem by creating a cellular image classification model which can
decipher the genetic perturbations in cells (occurring naturally or
artificially). Another interesting question addressed is what makes the
deep-learning model decide in a particular fashion, which can further help in
demystifying the mechanism of action of certain perturbations and paves a way
towards the explainability of the deep-learning model.
We show the results of Grad-CAM visualizations and make a case for the
significance of certain features over others. Further we discuss how these
significant features are pivotal in extracting useful diagnostic information
from the deep-learning model. | [
"cs.CV"
] |
The large model size, high computational cost, and vulnerability to membership
inference attacks (MIA) have impeded the popularity of deep learning and deep
neural networks (DNNs), especially on mobile devices. To address the
challenge, we envision that the weight pruning technique will help DNNs against
MIA while reducing model storage and computational operation. In this work, we
propose a pruning algorithm, and we show that the proposed algorithm can find a
subnetwork that can prevent privacy leakage from MIA and achieves competitive
accuracy with the original DNNs. We also verify our theoretical insights with
experiments. Our experimental results illustrate that the attack accuracy using
model compression is up to 13.6% and 10% lower than that of the baseline and
the Min-Max game, respectively. | [
"cs.LG",
"stat.ML"
] |
The 'Clever Hans' effect occurs when the learned model produces correct
predictions based on the 'wrong' features. This effect, which undermines the
generalization capability of an ML model and goes undetected by standard
validation techniques, has been frequently observed in supervised learning,
where the training algorithm leverages spurious correlations in the data. The
question of whether Clever Hans also occurs in unsupervised learning, and in
which form, has so far received almost no attention. Therefore, this paper will
contribute an explainable AI (XAI) procedure that can highlight the relevant
features used by popular anomaly detection models of different type. Our
analysis reveals that the Clever Hans effect is widespread in anomaly detection
and occurs in many (unexpected) forms. Interestingly, the observed Clever Hans
effects are in this case not so much due to the data, but due to the anomaly
detection models themselves whose structure makes them unable to detect the
truly relevant features, even though vast amounts of data points are available.
Overall, our work contributes a warning against an unrestrained use of existing
anomaly detection models in practical applications, but it also points at a
possible way out of the Clever Hans dilemma, specifically, by allowing multiple
anomaly models to mutually cancel their individual structural weaknesses to
jointly produce a better and more trustworthy anomaly detector. | [
"cs.LG",
"cs.AI",
"stat.ML"
] |
A dramatic rise in the flow of manipulated image content on the Internet has
led to an aggressive response from the media forensics research community. New
efforts have incorporated increased usage of techniques from computer vision
and machine learning to detect and profile the space of image manipulations.
This paper addresses Image Provenance Analysis, which aims at discovering
relationships among different manipulated image versions that share content.
One of the main sub-problems for provenance analysis that has not yet been
addressed directly is the edit ordering of images that share full content or
are near-duplicates. The existing large networks that generate image
descriptors for tasks such as object recognition may not encode the subtle
differences between these image covariates. This paper introduces a novel deep
learning-based approach to provide a plausible ordering to images that have
been generated from a single image through transformations. Our approach learns
transformation-aware descriptors using weak supervision via composited
transformations and a rank-based quadruplet loss. To establish the efficacy of
the proposed approach, comparisons with state-of-the-art handcrafted and deep
learning-based descriptors, and image matching approaches are made. Further
experimentation validates the proposed approach in the context of image
provenance analysis. | [
"cs.CV"
] |
Video captioning targets interpreting the complex visual contents as text
descriptions, which requires the model to fully understand video scenes
including objects and their interactions. Prevailing methods adopt
off-the-shelf object detection networks to give object proposals and use the
attention mechanism to model the relations between objects. They often miss
some undefined semantic concepts of the pretrained model and fail to identify
exact predicate relationships between objects. In this paper, we investigate an
open research task of generating text descriptions for the given videos, and
propose Cross-Modal Graph (CMG) with meta concepts for video captioning.
Specifically, to cover the useful semantic concepts in video captions, we
weakly learn the corresponding visual regions for text descriptions, where the
associated visual regions and textual words are named cross-modal meta
concepts. We further build meta concept graphs dynamically with the learned
cross-modal meta concepts. We also construct holistic video-level and local
frame-level video graphs with the predicted predicates to model video sequence
structures. We validate the efficacy of our proposed techniques with extensive
experiments and achieve state-of-the-art results on two public datasets. | [
"cs.CV"
] |
Video frame interpolation can up-convert the frame rate and enhance the video
quality. In recent years, although interpolation performance has improved
greatly, image blur usually occurs at object boundaries owing to large motion.
This has been a long-standing problem that has not yet been addressed.
In this paper, we propose to reduce the image blur and get the clear shape
of objects by preserving the edges in the interpolated frames. To this end, the
proposed Edge-Aware Network (EA-Net) integrates the edge information into the
frame interpolation task. It follows an end-to-end architecture and can be
separated into two stages, \emph{i.e.}, edge-guided flow estimation and
edge-protected frame synthesis. Specifically, in the flow estimation stage,
three edge-aware mechanisms are developed to emphasize the frame edges in
estimating flow maps, so that the edge-maps are taken as the auxiliary
information to provide more guidance to boost the flow accuracy. In the frame
synthesis stage, the flow refinement module is designed to refine the flow map,
and the attention module is carried out to adaptively focus on the
bidirectional flow maps when synthesizing the intermediate frames. Furthermore,
the frame and edge discriminators are adopted to conduct the adversarial
training strategy, so as to enhance the reality and clarity of synthesized
frames. Experiments on three benchmarks, including Vimeo90k, UCF101 for
single-frame interpolation and Adobe240-fps for multi-frame interpolation, have
demonstrated the superiority of the proposed EA-Net for the video frame
interpolation task. | [
"cs.CV"
] |
Most segmentation losses are arguably variants of the Cross-Entropy (CE) or
Dice loss. In the literature, there is no clear consensus as to which of these
losses is a better choice, with varying performances for each across different
benchmarks and applications. We develop a theoretical analysis that links these
two types of losses, exposing their advantages and weaknesses. First, we
explicitly demonstrate that CE and Dice share a much deeper connection than
previously thought: CE is an upper bound on both logarithmic and linear Dice
losses. Furthermore, we provide an information-theoretic analysis, which
highlights hidden label-marginal biases: Dice has an intrinsic bias towards
imbalanced solutions, whereas CE implicitly encourages the ground-truth region
proportions. Our theoretical results explain the wide experimental evidence in
the medical-imaging literature, whereby Dice losses bring improvements for
imbalanced segmentation. It also explains why CE dominates natural-image
problems with diverse class proportions, in which case Dice might have
difficulty adapting to different label-marginal distributions. Based on our
theoretical analysis, we propose a principled and simple solution, which
enables explicit control of the label-marginal bias. Our loss integrates CE
with explicit ${\cal L}_1$ regularization, which encourages label marginals to
match target class proportions, thereby mitigating class imbalance but without
losing generality. Comprehensive experiments and ablation studies over
different losses and applications validate our theoretical analysis, as well as
the effectiveness of our explicit label-marginal regularizers. | [
"cs.CV"
] |
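A small numpy sketch of the kind of loss the preceding abstract proposes: per-pixel cross-entropy plus an L1 penalty pushing the predicted label marginals (average softmax probabilities) toward target class proportions. The weight lam, the marginal estimate, and the toy inputs are assumptions for illustration, not the authors' code.

import numpy as np

def ce_with_marginal_l1(probs, labels, target_proportions, lam=0.1):
    """probs: (num_pixels, num_classes) softmax outputs.
    labels: (num_pixels,) integer ground-truth classes.
    target_proportions: (num_classes,) desired label marginals (sums to 1).
    """
    eps = 1e-12
    ce = -np.mean(np.log(probs[np.arange(len(labels)), labels] + eps))
    predicted_marginals = probs.mean(axis=0)                 # soft class proportions
    marginal_l1 = np.abs(predicted_marginals - target_proportions).sum()
    return ce + lam * marginal_l1

probs = np.array([[0.9, 0.1], [0.8, 0.2], [0.3, 0.7], [0.6, 0.4]])
labels = np.array([0, 0, 1, 1])
print(ce_with_marginal_l1(probs, labels, target_proportions=np.array([0.5, 0.5])))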
One of the most critical pieces of the self-driving puzzle is the task of
predicting future movement of surrounding traffic actors, which allows the
autonomous vehicle to safely and effectively plan its future route in a complex
world. Recently, a number of algorithms have been proposed to address this
important problem, spurred by a growing interest of researchers from both
industry and academia. Methods based on top-down scene rasterization on one
side and Generative Adversarial Networks (GANs) on the other have shown to be
particularly successful, obtaining state-of-the-art accuracies on the task of
traffic movement prediction. In this paper we build upon these two directions
and propose a raster-based conditional GAN architecture, powered by a novel
differentiable rasterizer module at the input of the conditional discriminator
that maps generated trajectories into the raster space in a differentiable
manner. This simplifies the task for the discriminator as trajectories that are
not scene-compliant are easier to discern, and allows the gradients to flow
back forcing the generator to output better, more realistic trajectories. We
evaluated the proposed method on a large-scale, real-world data set, showing
that it outperforms state-of-the-art GAN-based baselines. | [
"cs.LG",
"cs.RO",
"eess.IV",
"stat.ML"
] |
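One plausible form of the differentiable rasterizer described in the preceding abstract, sketched in numpy: each trajectory waypoint is splatted onto the raster with a Gaussian kernel, so the output image varies smoothly with the waypoint coordinates and gradients can flow back to the generator. Grid size, kernel width, and coordinate conventions are assumptions.

import numpy as np

def soft_rasterize(waypoints, grid_size=64, sigma=1.5):
    """waypoints: (num_points, 2) trajectory coordinates already scaled to
    pixel units in [0, grid_size). Returns a (grid_size, grid_size) raster."""
    ys, xs = np.meshgrid(np.arange(grid_size), np.arange(grid_size), indexing="ij")
    raster = np.zeros((grid_size, grid_size))
    for px, py in waypoints:
        # Gaussian splat centred at the (continuous) waypoint position.
        raster += np.exp(-((xs - px) ** 2 + (ys - py) ** 2) / (2.0 * sigma ** 2))
    return raster / raster.max()

trajectory = np.array([[10.0, 12.0], [20.5, 18.0], [31.0, 25.5], [42.0, 33.0]])
image = soft_rasterize(trajectory)
print(image.shape, image.max())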
Non-invasive detection of cardiovascular disorders from radiology scans
requires quantitative image analysis of the heart and its substructures. There
are well-established measurements that radiologists use for diseases assessment
such as ejection fraction, volume of four chambers, and myocardium mass. These
measurements are derived as outcomes of precise segmentation of the heart and
its substructures. The aim of this paper is to provide such measurements
through an accurate image segmentation algorithm that automatically delineates
seven substructures of the heart from MRI and/or CT scans. Our proposed method
is based on multi-planar deep convolutional neural networks (CNN) with an
adaptive fusion strategy where we automatically utilize complementary
information from different planes of the 3D scans for improved delineations.
For CT and MRI, we have separately designed three CNNs (the same architectural
configuration) for three planes, and have trained the networks from scratch for
voxel-wise labeling for the following cardiac structures: myocardium of left
ventricle (Myo), left atrium (LA), left ventricle (LV), right atrium (RA),
right ventricle (RV), ascending aorta (Ao), and main pulmonary artery (PA). We
have evaluated the proposed method with 4-fold-cross validation on the
multi-modality whole heart segmentation challenge (MM-WHS 2017) dataset. The
precision and dice index of 0.93 and 0.90, and 0.87 and 0.85 were achieved for
CT and MR images, respectively. While a CT volume was segmented in about 50
seconds, an MRI scan was segmented in around 17 seconds with the GPU/CUDA
implementation. | [
"stat.ML",
"cs.CV"
] |
We present a multi-agent actor-critic method that aims to implicitly address
the credit assignment problem under fully cooperative settings. Our key
motivation is that credit assignment among agents may not require an explicit
formulation as long as (1) the policy gradients derived from a centralized
critic carry sufficient information for the decentralized agents to maximize
their joint action value through optimal cooperation and (2) a sustained level
of exploration is enforced throughout training. Under the centralized training
with decentralized execution (CTDE) paradigm, we achieve the former by
formulating the centralized critic as a hypernetwork such that a latent state
representation is integrated into the policy gradients through its
multiplicative association with the stochastic policies; to achieve the latter,
we derive a simple technique called adaptive entropy regularization where
magnitudes of the entropy gradients are dynamically rescaled based on the
current policy stochasticity to encourage consistent levels of exploration. Our
algorithm, referred to as LICA, is evaluated on several benchmarks including
the multi-agent particle environments and a set of challenging StarCraft II
micromanagement tasks, and we show that LICA significantly outperforms previous
methods. | [
"cs.LG",
"cs.MA",
"stat.ML"
] |
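A hedged numpy sketch of the adaptive entropy regularization idea in the preceding abstract: the entropy-gradient term is rescaled by a factor that grows as the current policy becomes more deterministic, keeping exploration pressure roughly consistent. The exact functional form (the reciprocal of the current entropy) is an assumption made for illustration.

import numpy as np

def adaptive_entropy_scale(action_probs, eps=1e-8):
    """Compute a dynamic rescaling factor for the entropy gradient: the lower
    the current policy entropy (i.e. the more deterministic the policy), the
    larger the factor, so exploration pressure stays roughly consistent
    throughout training.

    action_probs: (num_actions,) current stochastic policy for one agent.
    """
    entropy = -np.sum(action_probs * np.log(action_probs + eps))
    return 1.0 / (entropy + eps)   # multiply the entropy-gradient term by this factor

peaked = np.array([0.94, 0.02, 0.02, 0.02])   # nearly deterministic -> large factor
uniform = np.array([0.25, 0.25, 0.25, 0.25])  # maximally stochastic -> small factor
print(adaptive_entropy_scale(peaked), adaptive_entropy_scale(uniform))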
The task of clustering unlabeled time series and sequences entails a
particular set of challenges, namely to adequately model temporal relations and
variable sequence lengths. If these challenges are not properly handled, the
resulting clusters might be of suboptimal quality. As a key solution, we
present a joint clustering and feature learning framework for time series based
on deep learning. For a given set of time series, we train a recurrent network
to represent, or embed, each time series in a vector space such that a
divergence-based clustering loss function can discover the underlying cluster
structure in an end-to-end manner. Unlike previous approaches, our model
inherently handles multivariate time series of variable lengths and does not
require specification of a distance-measure in the input space. On a diverse
set of benchmark datasets we illustrate that our proposed Recurrent Deep
Divergence-based Clustering approach outperforms, or performs comparable to,
previous approaches. | [
"stat.ML",
"cs.LG"
] |
Automated design of neural network architectures tailored for a specific task
is an extremely promising, albeit inherently difficult, avenue to explore.
While most results in this domain have been achieved on image classification
and language modelling problems, here we concentrate on dense per-pixel tasks,
in particular, semantic image segmentation using fully convolutional networks.
In contrast to the aforementioned areas, the design choices of a fully
convolutional network require several changes, ranging from the sort of
operations that need to be used---e.g., dilated convolutions---to the solving of
a more difficult optimisation problem. In this work, we are particularly
interested in searching for high-performance compact segmentation
architectures, able to run in real-time using limited resources. To achieve
that, we intentionally over-parameterise the architecture during the training
time via a set of auxiliary cells that provide an intermediate supervisory
signal and can be omitted during the evaluation phase. The design of the
auxiliary cell is emitted by a controller, a neural network with the fixed
structure trained using reinforcement learning. More crucially, we demonstrate
how to efficiently search for these architectures within limited time and
computational budgets. In particular, we rely on a progressive strategy that
terminates non-promising architectures from being further trained, and on
Polyak averaging coupled with knowledge distillation to speed-up the
convergence. Quantitatively, in 8 GPU-days our approach discovers a set of
architectures performing on-par with state-of-the-art among compact models on
the semantic segmentation, pose estimation and depth prediction tasks. Code
will be made available here: https://github.com/drsleep/nas-segm-pytorch | [
"cs.CV"
] |
Active learning aims to improve the performance of the task model by selecting
the most informative samples with a limited budget. Unlike most recent works
that focused on applying active learning for image classification, we propose
an effective Consistency-based Active Learning method for object Detection
(CALD), which fully explores the consistency between original and augmented
data. CALD has three appealing benefits. (i) CALD is systematically designed by
investigating the weaknesses of existing active learning methods, which do not
take the unique challenges of object detection into account. (ii) CALD unifies
box regression and classification with a single metric, a pairing not considered
by active learning methods for classification. CALD also focuses on the most
informative local region rather than the whole image, which is beneficial for
object detection. (iii) CALD not only gauges individual information for sample
selection, but also leverages mutual information to encourage a balanced data
distribution. Extensive experiments show that CALD significantly outperforms
existing state-of-the-art task-agnostic and detection-specific active learning
methods on general object detection datasets. Based on the Faster R-CNN
detector, CALD consistently surpasses the baseline method (random selection) by
2.9/2.8/0.8 mAP on average on PASCAL VOC 2007, PASCAL VOC 2012, and MS COCO.
Code is available at \url{https://github.com/we1pingyu/CALD} | [
"cs.CV",
"cs.AI",
"cs.LG"
] |
While several convolution-like operators have recently been proposed for
extracting features out of point clouds, down-sampling an unordered point cloud
in a deep neural network has not been rigorously studied. Existing methods
down-sample the points regardless of their importance for the output. As a
result, some important points in the point cloud may be removed, while less
valuable points may be passed to the next layers. In contrast, adaptive
down-sampling methods sample the points by taking into account the importance
of each point, which varies based on the application, task and training data.
In this paper, we propose a permutation-invariant learning-based adaptive
down-sampling layer, called Critical Points Layer (CPL), which reduces the
number of points in an unordered point cloud while retaining the important
points. Unlike most graph-based point cloud down-sampling methods that use
$k$-NN search algorithm to find the neighbouring points, CPL is a global
down-sampling method, rendering it computationally very efficient. The proposed
layer can be used along with any graph-based point cloud convolution layer to
form a convolutional neural network, dubbed CP-Net in this paper. We introduce
a CP-Net for $3$D object classification that achieves the best accuracy for the
ModelNet$40$ dataset among point cloud-based methods, which validates the
effectiveness of the CPL. | [
"cs.CV"
] |
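A minimal numpy sketch of a global, importance-aware down-sampling step in the spirit of the Critical Points Layer described above: each point is scored by how often it supplies the channel-wise maximum of the feature map, and the top-k points are kept, with no k-NN search involved. The scoring rule and the tie-breaking term are assumptions for illustration, not the authors' implementation.

import numpy as np

def critical_points_downsample(features, k):
    """features: (num_points, num_channels) per-point features.
    Returns the indices of the k most 'critical' points."""
    winners = np.argmax(features, axis=0)               # point that maximizes each channel
    counts = np.bincount(winners, minlength=features.shape[0])
    # Break ties with the per-point maximum activation so the ranking is total.
    scores = counts + 1e-3 * features.max(axis=1)
    return np.argsort(-scores)[:k]

rng = np.random.default_rng(0)
feats = rng.normal(size=(128, 64))
kept = critical_points_downsample(feats, k=32)
print(kept.shape, kept[:5])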
Aerators are essential and crucial auxiliary devices in intensive culture,
especially in industrial culture in China. Traditional methods cannot
accurately detect abnormal conditions of aerators in time. Surveillance cameras
are widely used as visual perception modules of the Internet of Things, so
using these widely deployed surveillance cameras to realize real-time
anomaly detection of aerators is a cost-free and easy-to-promote approach.
However, it is difficult to develop such an expert system due to some technical
and applied challenges, e.g., illumination, occlusion, complex background, etc.
To tackle these aforementioned challenges, we propose a real-time expert system
based on computer vision technology and existing surveillance cameras for
anomaly detection of aerators, which consists of two modules, i.e., object
region detection and working state detection. First, it is difficult to detect
the working state for some small object regions in whole images, and the time
complexity of global feature comparison is also high, so we present an object
region detection method based on the region proposal idea. Moreover, we propose
a novel algorithm called reference frame Kanade-Lucas-Tomasi (RF-KLT) algorithm
for motion feature extraction in fixed regions. Then, we present a dimension
reduction method of time series for establishing a feature dataset with obvious
boundaries between classes. Finally, we use machine learning algorithms to
build the feature classifier. The experimental results in both the actual video
dataset and the augmented video dataset show that the accuracy for detecting
object region and working state of aerators is 100% and 99.9% respectively, and
the detection speed is 77-333 frames per second (FPS) according to the
different types of surveillance cameras. | [
"cs.CV"
] |
The increasing availability of electrocardiogram (ECG) data has motivated the
use of data-driven models for automating various clinical tasks based on ECG
data. The development of subject-specific models is limited by the cost and
difficulty of obtaining sufficient training data for each individual. The
alternative of population model, however, faces challenges caused by the
significant inter-subject variations within the ECG data. We address this
challenge by investigating for the first time the problem of learning
representations for clinically-informative variables while disentangling other
factors of variations within the ECG data. In this work, we present a
conditional variational autoencoder (VAE) to extract the subject-specific
adjustment to the ECG data, conditioned on task-specific representations
learned from a deterministic encoder. To encourage the representation for
inter-subject variations to be independent from the task-specific
representation, maximum mean discrepancy is used to match all the moments
between the distributions learned by the VAE conditioning on the code from the
deterministic encoder. The learning of the task-specific representation is
regularized by a weak supervision in the form of contrastive regularization. We
apply the proposed method to a novel yet important clinical task of classifying
the origin of ventricular tachycardia (VT) into pre-defined segments,
demonstrating the efficacy of the proposed method against the standard VAE. | [
"cs.LG",
"stat.ML"
] |
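The preceding abstract matches moments between distributions with maximum mean discrepancy; below is a standard numpy sketch of a (biased) RBF-kernel MMD estimator that could serve in that role. The kernel bandwidth and the toy data are illustrative assumptions.

import numpy as np

def rbf_mmd2(X, Y, gamma=1.0):
    """Biased estimate of squared MMD between samples X and Y with an RBF kernel.

    X: (n, d), Y: (m, d)."""
    def kernel(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return kernel(X, X).mean() + kernel(Y, Y).mean() - 2.0 * kernel(X, Y).mean()

rng = np.random.default_rng(1)
same = rbf_mmd2(rng.normal(size=(200, 8)), rng.normal(size=(200, 8)))
shifted = rbf_mmd2(rng.normal(size=(200, 8)), rng.normal(loc=1.0, size=(200, 8)))
print(same, shifted)   # the shifted pair has the larger discrepancy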
Video surveillance is a well researched area of study with substantial work
done in the aspects of object detection, tracking and behavior analysis. With
the abundance of video data captured over a long period of time, we can
understand patterns in human behavior and scene dynamics through data-driven
temporal analytics. In this work, we propose two schemes to perform descriptive
and predictive analytics on long-term video surveillance data. We generate
heatmap and footmap visualizations to describe spatially pooled trajectory
patterns with respect to time and location. We also present two approaches for
anomaly prediction at the day-level granularity: a trajectory-based statistical
approach, and a time-series based approach. Experimentation with one year data
from a single camera demonstrates the ability to uncover interesting insights
about the scene and to predict anomalies reasonably well. | [
"cs.CV"
] |
We study a security threat to batch reinforcement learning and control where
the attacker aims to poison the learned policy. The victim is a reinforcement
learner / controller which first estimates the dynamics and the rewards from a
batch data set, and then solves for the optimal policy with respect to the
estimates. The attacker can modify the data set slightly before learning
happens, and wants to force the learner into learning a target policy chosen by
the attacker. We present a unified framework for solving batch policy poisoning
attacks, and instantiate the attack on two standard victims: tabular certainty
equivalence learner in reinforcement learning and linear quadratic regulator in
control. We show that both instantiations result in a convex optimization
problem on which global optimality is guaranteed, and provide analysis on
attack feasibility and attack cost. Experiments show the effectiveness of
policy poisoning attacks. | [
"cs.LG",
"stat.ML"
] |
Advances in Artificial Intelligence and Image Processing are changing the way
people interact with digital images and video. Widespread mobile apps like
FACEAPP make use of the most advanced Generative Adversarial Networks (GAN) to
produce extreme transformations on human face photos such as gender swap, aging,
etc. The results are utterly realistic and extremely easy to exploit even
for non-experienced users. This kind of media object took the name of Deepfake
and raised a new challenge in the multimedia forensics field: the Deepfake
detection challenge. Indeed, discriminating a Deepfake from a real image could
be a difficult task even for human eyes, but recent works have tried to apply
the same technology used for generating images to discriminating them, with
promising preliminary results but many limitations: the employed Convolutional
Neural Networks are not robust, prove to be specific to the context, and tend
to extract semantics from images. In this paper, a new approach aimed
to extract a Deepfake fingerprint from images is proposed. The method is based
on the Expectation-Maximization algorithm trained to detect and extract a
fingerprint that represents the Convolutional Traces (CT) left by GANs during
image generation. The CT proves to have high discriminative power, achieving
better results than the state of the art in the Deepfake detection task, and
also proves to be robust to different attacks. Achieving an overall
classification accuracy of over 98% on Deepfakes from 10 different GAN
architectures, not restricted to images of faces, the CT proves to be reliable
and independent of image semantics. Finally, tests carried out on Deepfakes
generated by FACEAPP, achieving 93% accuracy in the fake detection task,
demonstrated the effectiveness of the proposed technique
on a real-case scenario. | [
"cs.CV"
] |
Automatic salient object detection has received tremendous attention from the
research community and has become an increasingly important tool in many
computer vision tasks. This paper proposes a novel bottom-up salient object
detection framework which considers both foreground and background cues. First,
a series of background and foreground seeds are reliably selected from an image, and
then used for calculation of saliency map separately. Next, a combination of
foreground and background saliency map is performed. Last, a refinement step
based on geodesic distance is utilized to enhance salient regions, thus
deriving the final saliency map. In particular, we provide a robust scheme for
seed selection, which contributes significantly to accuracy improvement in
saliency detection. Extensive experimental evaluations demonstrate the effectiveness of
our proposed method against other outstanding methods. | [
"cs.CV"
] |
Recent research has put substantial effort into the development of deep learning
architectures and optimizers, obtaining impressive results in areas ranging from
vision to language processing. However, little attention has been paid to the
need for a methodological process of data collection. In this work we
hypothesize that high quality data for supervised learning can be selected in
an unsupervised manner and that by doing so one can obtain models capable to
generalize better than in the case of random training set construction.
However, preliminary results are not robust and further studies on the subject
should be carried out. | [
"cs.CV"
] |
In this paper, we study the problem of learning Graph Convolutional Networks
(GCNs) for regression. Current architectures of GCNs are limited to the small
receptive field of convolution filters and shared transformation matrix for
each node. To address these limitations, we propose Semantic Graph
Convolutional Networks (SemGCN), a novel neural network architecture that
operates on regression tasks with graph-structured data. SemGCN learns to
capture semantic information such as local and global node relationships, which
is not explicitly represented in the graph. These semantic relationships can be
learned through end-to-end training from the ground truth without additional
supervision or hand-crafted rules. We further investigate applying SemGCN to 3D
human pose regression. Our formulation is intuitive and sufficient since both
2D and 3D human poses can be represented as a structured graph encoding the
relationships between joints in the skeleton of a human body. We carry out
comprehensive studies to validate our method. The results prove that SemGCN
outperforms state of the art while using 90% fewer parameters. | [
"cs.CV"
] |
Contrastive learning has revolutionized self-supervised image representation
learning field, and recently been adapted to video domain. One of the greatest
advantages of contrastive learning is that it allows us to flexibly define
powerful loss objectives as long as we can find a reasonable way to formulate
positive and negative samples to contrast. However, existing approaches rely
heavily on the short-range spatiotemporal salience to form clip-level
contrastive signals, thus limiting their use of global context. In this
paper, we propose a new video-level contrastive learning method based on
segments to formulate positive pairs. Our formulation is able to capture global
context in a video, thus robust to temporal content change. We also incorporate
a temporal order regularization term to enforce the inherent sequential
structure of videos. Extensive experiments show that our video-level
contrastive learning framework (VCLR) is able to outperform previous
state-of-the-arts on five video datasets for downstream action classification,
action localization and video retrieval. Code is available at
https://github.com/amazon-research/video-contrastive-learning. | [
"cs.CV",
"cs.AI",
"cs.LG"
] |
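A compact numpy sketch of the contrastive objective underlying segment-based video-level learning as described above: embeddings of two segment-sampled clips from the same video form a positive pair, all other clips in the batch act as negatives, and an InfoNCE loss is minimized. The temperature and toy embeddings are illustrative assumptions.

import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    """InfoNCE loss where anchors[i] and positives[i] come from two
    segment-based clips of the same video; all other pairs are negatives.

    anchors, positives: (batch, dim) L2-normalized embeddings."""
    logits = anchors @ positives.T / temperature          # (batch, batch) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)           # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))                    # matching pairs on the diagonal

rng = np.random.default_rng(2)
z = rng.normal(size=(8, 32))
z /= np.linalg.norm(z, axis=1, keepdims=True)
noisy = z + 0.05 * rng.normal(size=z.shape)
noisy /= np.linalg.norm(noisy, axis=1, keepdims=True)
print(info_nce(z, noisy))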
State-of-the-art object detection systems rely on an accurate set of region
proposals. Several recent methods use a neural network architecture to
hypothesize promising object locations. While these approaches are
computationally efficient, they rely on fixed image regions as anchors for
predictions. In this paper we propose to use a search strategy that adaptively
directs computational resources to sub-regions likely to contain objects.
Compared to methods based on fixed anchor locations, our approach naturally
adapts to cases where object instances are sparse and small. Our approach is
comparable in terms of accuracy to the state-of-the-art Faster R-CNN approach
while using two orders of magnitude fewer anchors on average. Code is publicly
available. | [
"cs.CV"
] |
Unsupervised learning methods for feature extraction are becoming more and
more popular. We combine the popular contrastive learning method (prototypical
contrastive learning) and the classic representation learning method
(autoencoder) to design an unsupervised feature learning network for
hyperspectral classification. Experiments have proved that our two proposed
autoencoder networks have good feature learning capabilities by themselves, and
the contrastive learning network we designed can better combine the features of
the two to learn more representative features. As a result, our method
surpasses other comparison methods in the hyperspectral classification
experiments, including some supervised methods. Moreover, our method maintains
a faster feature extraction speed than the baseline methods. In addition, our method
reduces the requirements for huge computing resources, separates feature
extraction and contrastive learning, and allows more researchers to conduct
research and experiments on unsupervised contrastive learning. | [
"cs.CV"
] |
Face synthesis is an important problem in computer vision with many
applications. In this work, we describe a new method, namely LandmarkGAN, to
synthesize faces based on facial landmarks as input. Facial landmarks are a
natural, intuitive, and effective representation for facial expressions and
orientations, which are independent from the target's texture or color and
background scene. Our method is able to transform a set of facial landmarks
into new faces of different subjects, while retaining the same facial expression
and orientation. Experimental results on face synthesis and reenactments
demonstrate the effectiveness of our method. | [
"cs.CV"
] |
Foreground map evaluation is crucial for gauging the progress of object
segmentation algorithms, in particular in the field of salient object detection
where the purpose is to accurately detect and segment the most salient object
in a scene. Several widely-used measures such as Area Under the Curve (AUC),
Average Precision (AP) and the recently proposed Fbw have been utilized to
evaluate the similarity between a non-binary saliency map (SM) and a
ground-truth (GT) map. These measures are based on pixel-wise errors and often
ignore the structural similarities. Behavioral vision studies, however, have
shown that the human visual system is highly sensitive to structures in scenes.
Here, we propose a novel, efficient, and easy-to-calculate measure known as the
structural similarity measure (Structure-measure) to evaluate non-binary
foreground maps. Our new measure simultaneously evaluates region-aware and
object-aware structural similarity between a SM and a GT map. We demonstrate
superiority of our measure over existing ones using 5 meta-measures on 5
benchmark datasets. | [
"cs.CV"
] |
Spleen volume estimation using automated image segmentation technique may be
used to detect splenomegaly (abnormally enlarged spleen) on Magnetic Resonance
Imaging (MRI) scans. In recent years, Deep Convolutional Neural Networks (DCNN)
segmentation methods have demonstrated advantages for abdominal organ
segmentation. However, variations in both size and shape of the spleen on MRI
images may result in large false positive and false negative labeling when
deploying DCNN based methods. In this paper, we propose the Splenomegaly
Segmentation Network (SSNet) to address spatial variations when segmenting
extraordinarily large spleens. SSNet was designed based on the framework of
image-to-image conditional generative adversarial networks (cGAN).
Specifically, the Global Convolutional Network (GCN) was used as the generator
to reduce false negatives, while the Markovian discriminator (PatchGAN) was
used to alleviate false positives. A cohort of clinically acquired 3D MRI scans
(both T1 weighted and T2 weighted) from patients with splenomegaly were used to
train and test the networks. The experimental results demonstrated a mean
Dice coefficient of 0.9260 and a median Dice coefficient of 0.9262 using SSNet
on independently tested MRI volumes of patients with splenomegaly. | [
"cs.CV"
] |
This work proposes an unsupervised fusion framework based on deep
convolutional transform learning. The great learning ability of convolutional
filters for data analysis is well acknowledged. The success of convolutive
features owes to convolutional neural network (CNN). However, CNN cannot
perform learning tasks in an unsupervised fashion. In a recent work, we show
that such shortcoming can be addressed by adopting a convolutional transform
learning (CTL) approach, where convolutional filters are learnt in an
unsupervised fashion. The present paper aims at (i) proposing a deep version of
CTL; (ii) proposing an unsupervised fusion formulation taking advantage of the
proposed deep CTL representation; (iii) developing a mathematically sound
optimization strategy for performing the learning task. We apply the proposed
technique, named DeConFuse, on the problem of stock forecasting and trading.
Comparison with state-of-the-art methods (based on CNN and long short-term
memory network) shows the superiority of our method for performing a reliable
feature extraction. | [
"cs.LG"
] |
The characterisation of time-series data via their most salient features is
extremely important in a range of machine learning tasks, not least of all with
regards to classification and clustering. While there exist many feature
extraction techniques suitable for non-intermittent time-series data, these
approaches are not always appropriate for intermittent time-series data, where
intermittency is characterized by constant values for large periods of time
punctuated by sharp and transient increases or decreases in value.
Motivated by this, we present aggregation, mode decomposition and projection
(AMP) a feature extraction technique particularly suited to intermittent
time-series data which contain time-frequency patterns. For our method all
individual time-series within a set are combined to form a non-intermittent
aggregate. This is decomposed into a set of components which represent the
intrinsic time-frequency signals within the data set. Individual time-series
can then be fit to these components to obtain a set of numerical features that
represent their intrinsic time-frequency patterns. To demonstrate the
effectiveness of AMP, we evaluate it on the real-world task of clustering
intermittent time-series data. Using synthetically generated data we show that
a clustering approach which uses the features derived from AMP significantly
outperforms traditional clustering methods. Our technique is further
exemplified on a real world data set where AMP can be used to discover
groupings of individuals which correspond to real world sub-populations. | [
"cs.LG",
"G.3"
] |
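A simplified Python sketch of the aggregation, mode decomposition and projection pipeline described above: all series are summed into one aggregate, the aggregate is split into components (a moving-average band-pass stand-in for a proper mode decomposition), and each series is least-squares projected onto the components to obtain its features. The window size and the decomposition used are assumptions for illustration.

import numpy as np

def amp_features(series_set, num_components=3, window=24):
    """series_set: (num_series, length) intermittent time series.

    1. Aggregate all series into one non-intermittent signal.
    2. Decompose the aggregate into components at successively coarser scales.
    3. Project each series onto the components with least squares; the
       coefficients are the per-series features."""
    aggregate = series_set.sum(axis=0)
    components = []
    smoothed = aggregate.astype(float)
    for _ in range(num_components):
        kernel = np.ones(window) / window
        trend = np.convolve(smoothed, kernel, mode="same")
        components.append(smoothed - trend)       # detail at the current scale
        smoothed = trend                           # coarser signal for the next pass
    C = np.stack(components, axis=1)               # (length, num_components)
    coeffs, *_ = np.linalg.lstsq(C, series_set.T, rcond=None)
    return coeffs.T                                 # (num_series, num_components)

rng = np.random.default_rng(3)
data = (rng.random((10, 500)) > 0.97) * rng.exponential(5.0, (10, 500))
print(amp_features(data).shape)   # -> (10, 3)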
Segmenting aerial images holds great potential for surveillance and scene
understanding of urban areas. It provides a means for automatic reporting
of the different events that happen in inhabited areas. This remarkably
promotes public safety and traffic management applications. After the wide
adoption of convolutional neural networks methods, the accuracy of semantic
segmentation algorithms could easily surpass 80% if a robust dataset is
provided. Despite this success, the deployment of a pre-trained segmentation
model to survey a new city that is not included in the training set
significantly decreases the accuracy. This is due to the domain shift between
the source dataset on which the model is trained and the new target domain of
the new city images. In this paper, we address this issue and consider the
challenge of domain adaptation in semantic segmentation of aerial images. We
design an algorithm that reduces the domain shift impact using Generative
Adversarial Networks (GANs). In the experiments, we test the proposed
methodology on the International Society for Photogrammetry and Remote Sensing
(ISPRS) semantic segmentation dataset and found that our method improves the
overall accuracy from 35% to 52% when passing from Potsdam domain (considered
as source domain) to Vaihingen domain (considered as target domain). In
addition, the method allows recovering efficiently the inverted classes due to
sensor variation. In particular, it improves the average segmentation accuracy
of the inverted classes due to sensor variation from 14% to 61%. | [
"cs.CV"
] |
In real applications, object detectors based on deep networks still face
challenges of the large domain gap between the labeled training data and
unlabeled testing data. To reduce the gap, recent techniques are proposed by
aligning the image/instance-level features between source and unlabeled target
domains. However, these methods suffer from the suboptimal problem mainly
because of ignoring the category information of object instances. To tackle
this issue, we develop a fine-grained domain alignment approach with a
well-designed domain classifier bank that achieves the instance-level alignment
respecting to their categories. Specifically, we first employ the mean teacher
paradigm to generate pseudo labels for unlabeled samples. Then we implement the
class-level domain classifiers and group them together, called domain
classifier bank, in which each domain classifier is responsible for aligning
features of a specific class. We assemble the bare object detector with the
proposed fine-grained domain alignment mechanism as the adaptive detector, and
optimize it with a developed crossed adaptive weighting mechanism. Extensive
experiments on three popular transfer benchmarks demonstrate the effectiveness
of our method, which achieves remarkable new state-of-the-art results. | [
"cs.CV"
] |
Scene graph generation models understand the scene through object and
predicate recognition, but are prone to mistakes due to the challenges of
perception in the wild. Perception errors often lead to nonsensical
compositions in the output scene graph, which do not follow real-world rules
and patterns, and can be corrected using commonsense knowledge. We propose the
first method to acquire visual commonsense such as affordance and intuitive
physics automatically from data, and use that to improve the robustness of
scene understanding. To this end, we extend Transformer models to incorporate
the structure of scene graphs, and train our Global-Local Attention Transformer
on a scene graph corpus. Once trained, our model can be applied on any scene
graph generation model and correct its obvious mistakes, resulting in more
semantically plausible scene graphs. Through extensive experiments, we show our
model learns commonsense better than any alternative, and improves the accuracy
of state-of-the-art scene graph generation methods. | [
"cs.CV",
"cs.LG"
] |
We introduce a new type of programming challenge called programming puzzles,
as an objective and comprehensive evaluation of program synthesis, and release
an open-source dataset of Python Programming Puzzles (P3). Each puzzle is
defined by a short Python program $f$, and the goal is to find an input $x$
which makes $f$ output "True". The puzzles are objective in that each one is
specified entirely by the source code of its verifier $f$, so evaluating $f(x)$
is all that is needed to test a candidate solution $x$. They do not require an
answer key or input/output examples, nor do they depend on natural language
understanding. The dataset is comprehensive in that it spans problems of a
range of difficulties and domains, ranging from trivial string manipulation
problems that are immediately obvious to human programmers (but not necessarily
to AI), to classic programming puzzles (e.g., Towers of Hanoi), to
interview/competitive-programming problems (e.g., dynamic programming), to
longstanding open problems in algorithms and mathematics (e.g., factoring). The
objective nature of P3 readily supports self-supervised bootstrapping. We
develop baseline enumerative program synthesis and GPT-3 solvers that are
capable of solving easy puzzles -- even without access to any reference
solutions -- by learning from their own past solutions. Based on a small user
study, we find puzzle difficulty to correlate between human programmers and the
baseline AI solvers. | [
"cs.LG",
"cs.AI",
"cs.CL",
"cs.PL",
"cs.SE"
] |
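A toy illustration of the programming-puzzle format described above: the verifier f alone defines the task, and any x with f(x) == True is a valid solution. This particular puzzle and the brute-force solver are invented for illustration and are not drawn from the released P3 dataset.

def f(x: str) -> bool:
    """A toy puzzle in the spirit of P3: find a string x that contains 'ab'
    exactly three times. The source code of this verifier is the whole task."""
    return x.count("ab") == 3

def brute_force_solve(verifier, alphabet="ab", max_len=8):
    """Tiny enumerative solver: try all strings up to max_len characters."""
    from itertools import product
    for length in range(max_len + 1):
        for chars in product(alphabet, repeat=length):
            candidate = "".join(chars)
            if verifier(candidate):
                return candidate
    return None

solution = brute_force_solve(f)
print(solution, f(solution))   # e.g. 'ababab' True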
We present a practical approach for processing mobile sensor time series data
for continual deep learning predictions. The approach comprises data cleaning,
normalization, capping, time-based compression, and finally classification with
a recurrent neural network. We demonstrate the effectiveness of the approach in
a case study with 279 participants. On the basis of sparse sensor events, the
network continually predicts whether the participants would attend to a
notification within 10 minutes. Compared to a random baseline, the classifier
achieves a 40% performance increase (AUC of 0.702) on a withheld test set. This
approach allows to forgo resource-intensive, domain-specific, error-prone
feature engineering, which may drastically increase the applicability of
machine learning to mobile phone sensor data. | [
"cs.LG",
"cs.HC"
] |
To fully exploit the performance potential of modern multi-core processors,
machine learning and data mining algorithms for big data must be parallelized
in multiple ways. Today's CPUs consist of multiple cores, each following an
independent thread of control, and each equipped with multiple arithmetic units
which can perform the same operation on a vector of multiple data objects.
Graph embedding, i.e. converting the vertices of a graph into numerical vectors
is a data mining task of high importance and is useful for graph drawing
(low-dimensional vectors) and graph representation learning (high-dimensional
vectors). In this paper, we propose MulticoreGEMPE (Graph Embedding by
Minimizing the Predictive Entropy), an information-theoretic method which can
generate low and high-dimensional vectors. MulticoreGEMPE applies MIMD
(Multiple Instructions Multiple Data, using OpenMP) and SIMD (Single
Instructions Multiple Data, using AVX-512) parallelism. We propose general
ideas applicable in other graph-based algorithms like \emph{vectorized hashing}
and \emph{vectorized reduction}. Our experimental evaluation demonstrates the
superiority of our approach. | [
"cs.LG",
"cs.DC"
] |
We adapt the optimization's concept of momentum to reinforcement learning.
Seeing the state-action value functions as an analog to the gradients in
optimization, we interpret momentum as an average of consecutive $q$-functions.
We derive Momentum Value Iteration (MoVI), a variation of Value Iteration that
incorporates this momentum idea. Our analysis shows that this allows MoVI to
average errors over successive iterations. We show that the proposed approach
can be readily extended to deep learning. Specifically, we propose a simple
improvement on DQN based on MoVI, and experiment it on Atari games. | [
"cs.LG",
"stat.ML"
] |
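A tabular numpy sketch of the momentum idea from the preceding abstract: value iteration that bootstraps from an exponential average of consecutive q-functions rather than the latest one. The averaging coefficient, the toy MDP, and the exact update order are assumptions made for illustration.

import numpy as np

def momentum_value_iteration(P, R, gamma=0.9, beta=0.5, iters=200):
    """Tabular sketch of momentum in value iteration: bootstrap from a running
    average of successive q-functions instead of the latest one.

    P: (S, A, S) transition probabilities, R: (S, A) rewards."""
    S, A = R.shape
    q = np.zeros((S, A))
    h = np.zeros((S, A))                      # momentum / averaged q-function
    for _ in range(iters):
        v = h.max(axis=1)                     # greedy values from the averaged q
        q = R + gamma * P @ v                 # standard Bellman backup
        h = (1.0 - beta) * h + beta * q       # exponential average of consecutive q's
    return h.argmax(axis=1), h

# Two-state, two-action toy MDP.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.8, 0.2], [0.1, 0.9]]])
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])
policy, h = momentum_value_iteration(P, R)
print(policy)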
Continual learning aims to improve the ability of modern learning systems to
deal with non-stationary distributions, typically by attempting to learn a
series of tasks sequentially. Prior art in the field has largely considered
supervised or reinforcement learning tasks, and often assumes full knowledge of
task labels and boundaries. In this work, we propose an approach (CURL) to
tackle a more general problem that we will refer to as unsupervised continual
learning. The focus is on learning representations without any knowledge about
task identity, and we explore scenarios when there are abrupt changes between
tasks, smooth transitions from one task to another, or even when the data is
shuffled. The proposed approach performs task inference directly within the
model, is able to dynamically expand to capture new concepts over its lifetime,
and incorporates additional rehearsal-based techniques to deal with
catastrophic forgetting. We demonstrate the efficacy of CURL in an unsupervised
learning setting with MNIST and Omniglot, where the lack of labels ensures no
information is leaked about the task. Further, we demonstrate strong
performance compared to prior art in an i.i.d setting, or when adapting the
technique to supervised tasks such as incremental class learning. | [
"cs.LG",
"cs.AI",
"cs.CV",
"stat.ML"
] |
Multivariate time-series forecasting plays a crucial role in many real-world
applications. It is a challenging problem as one needs to consider both
intra-series temporal correlations and inter-series correlations
simultaneously. Recently, there have been multiple works trying to capture both
correlations, but most, if not all of them only capture temporal correlations
in the time domain and resort to pre-defined priors as inter-series
relationships.
In this paper, we propose Spectral Temporal Graph Neural Network (StemGNN) to
further improve the accuracy of multivariate time-series forecasting. StemGNN
captures inter-series correlations and temporal dependencies \textit{jointly}
in the \textit{spectral domain}. It combines Graph Fourier Transform (GFT)
which models inter-series correlations and Discrete Fourier Transform (DFT)
which models temporal dependencies in an end-to-end framework. After passing
through GFT and DFT, the spectral representations hold clear patterns and can
be predicted effectively by convolution and sequential learning modules.
Moreover, StemGNN learns inter-series correlations automatically from the data
without using pre-defined priors. We conduct extensive experiments on ten
real-world datasets to demonstrate the effectiveness of StemGNN. Code is
available at https://github.com/microsoft/StemGNN/ | [
"cs.LG",
"cs.AI"
] |
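A numpy sketch of the joint spectral view described above: a Graph Fourier Transform across series (via the Laplacian eigenbasis) followed by a Discrete Fourier Transform across time, and the inverse path back. A learned filter would act between the two transforms in a real model; here an identity filter is used so the round trip recovers the input. All shapes and the toy graph are assumptions.

import numpy as np

def spectral_block(X, adjacency):
    """Move multivariate series into the joint spectral domain and back.

    X: (num_series, num_timesteps), adjacency: (num_series, num_series)."""
    degree = np.diag(adjacency.sum(axis=1))
    laplacian = degree - adjacency
    _, U = np.linalg.eigh(laplacian)           # GFT basis (columns are eigenvectors)
    x_gft = U.T @ X                            # inter-series (graph) spectrum
    spec = np.fft.rfft(x_gft, axis=1)          # temporal spectrum
    # identity "filter" here; invert both transforms
    x_back = U @ np.fft.irfft(spec, n=X.shape[1], axis=1)
    return x_back

rng = np.random.default_rng(4)
A = rng.random((5, 5)); A = (A + A.T) / 2; np.fill_diagonal(A, 0)
X = rng.normal(size=(5, 48))
print(np.allclose(spectral_block(X, A), X))    # round trip recovers the input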
Binary image segmentation plays an important role in computer vision and has
been widely used in many applications such as image and video editing, object
extraction, and photo composition. In this paper, we propose a novel
interactive binary image segmentation method based on the Markov Random Field
(MRF) framework and the fast bilateral solver (FBS) technique. Specifically, we
employ the geodesic distance component to build the unary term. To ensure both
computation efficiency and effective responsiveness for interactive
segmentation, superpixels are used in computing geodesic distances instead of
pixels. Furthermore, we take a bilateral affinity approach for the pairwise
term in order to preserve edge information and suppress noise. Through the alternating
direction strategy, the MRF energy minimization problem is divided into two
subproblems, which then can be easily solved by steepest gradient descent (SGD)
and FBS respectively. Experimental results on the VGG interactive image
segmentation dataset show that the proposed algorithm outperforms several
state-of-the-art ones, and in particular, it can achieve satisfactory
edge-smooth segmentation results even when the foreground and background color
appearances are difficult to distinguish. | [
"cs.CV"
] |
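To make the unary term above concrete, the sketch below computes geodesic distances from foreground and background seeds on a toy superpixel graph, with edge weights given by colour differences, and turns them into a normalised foreground cost. The chain graph, seed choice, and normalisation are illustrative assumptions; the bilateral pairwise term and the SGD/FBS alternation are not reproduced.

```python
# Hedged sketch of a geodesic unary term on a toy superpixel graph.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import dijkstra

rng = np.random.default_rng(0)
n_sp = 12
colors = rng.uniform(size=(n_sp, 3))                  # mean colour per superpixel

# Toy chain adjacency; edge weight = colour difference (geodesic metric).
rows, cols, w = [], [], []
for i in range(n_sp - 1):
    d = float(np.linalg.norm(colors[i] - colors[i + 1]))
    rows += [i, i + 1]; cols += [i + 1, i]; w += [d, d]
G = csr_matrix((w, (rows, cols)), shape=(n_sp, n_sp))

fg_seeds, bg_seeds = [0, 1], [10, 11]                 # user interaction (assumed)
d_fg = dijkstra(G, indices=fg_seeds).min(axis=0)      # geodesic distance to fg seeds
d_bg = dijkstra(G, indices=bg_seeds).min(axis=0)

unary_fg = d_fg / (d_fg + d_bg + 1e-8)                # low cost near fg seeds
print("foreground unary cost per superpixel:", np.round(unary_fg, 2))
```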
Much of the recent work on learning molecular representations has been based
on Graph Convolution Networks (GCN). These models rely on local aggregation
operations and can therefore miss higher-order graph properties. To remedy
this, we propose Path-Augmented Graph Transformer Networks (PAGTN) that are
explicitly built on longer-range dependencies in graph-structured data.
Specifically, we use path features in molecular graphs to create global
attention layers. We compare our PAGTN model against the GCN model and show
that our model consistently outperforms GCNs on molecular property prediction
datasets including quantum chemistry (QM7, QM8, QM9), physical chemistry (ESOL,
Lipophilicity) and biochemistry (BACE, BBBP). | [
"cs.LG",
"stat.ML"
] |
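The following sketch illustrates, under stated assumptions, how path features can augment attention: all-pairs shortest-path distances on a small molecular-style graph are mapped to an additive bias on the attention logits, so every atom can attend to every other atom with a distance-aware weight. The toy ring graph, the linear distance-to-bias mapping, and the single unmasked head are assumptions, not the PAGTN layer definition.

```python
# Hedged sketch of path-augmented global attention on a toy molecular graph.
import numpy as np
from scipy.sparse.csgraph import shortest_path

rng = np.random.default_rng(0)
n_atoms, d = 6, 16
adj = np.zeros((n_atoms, n_atoms))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]:   # a 6-ring
    adj[i, j] = adj[j, i] = 1

dist = shortest_path(adj, unweighted=True)            # path feature: hop count
path_bias = -0.5 * dist                               # farther pairs attend less

H = rng.normal(size=(n_atoms, d))                     # node features
Q, K = H @ rng.normal(size=(d, d)), H @ rng.normal(size=(d, d))
logits = Q @ K.T / np.sqrt(d) + path_bias             # path-augmented logits
logits -= logits.max(axis=1, keepdims=True)           # numerical stability
attn = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
out = attn @ H                                        # globally aggregated features
print("attention row sums:", np.round(attn.sum(axis=1), 3))
```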
In this paper, we propose an improved quantitative evaluation framework for
Generative Adversarial Networks (GANs) on generating domain-specific images,
where we improve conventional evaluation methods on two levels: the feature
representation and the evaluation metric. Unlike most existing evaluation
frameworks which transfer the representation of ImageNet inception model to map
images onto the feature space, our framework uses a specialized encoder to
acquire fine-grained domain-specific representation. Moreover, for datasets
with multiple classes, we propose Class-Aware Frechet Distance (CAFD), which
employs a Gaussian mixture model on the feature space to better fit the
multi-manifold feature distribution. Experiments and analysis on both the
feature level and the image level were conducted to demonstrate improvements of
our proposed framework over the recently proposed state-of-the-art FID method.
To the best of our knowledge, we are the first to provide counterexamples where
FID gives results inconsistent with human judgments. The experiments show that
our framework overcomes the shortcomings of FID and improves
robustness. Code will be made available. | [
"cs.CV"
] |
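A hedged sketch of the class-aware distance described above: features are grouped by class, a Gaussian is fitted per class for both real and generated samples, and the usual Frechet distance is averaged over classes. The synthetic features and the plain per-class average are assumptions about one way to instantiate such a metric, not the authors' released code.

```python
# Hedged sketch of a class-aware Frechet-style distance on synthetic features.
import numpy as np
from scipy.linalg import sqrtm

def frechet(mu1, cov1, mu2, cov2):
    covmean = sqrtm(cov1 @ cov2)
    if np.iscomplexobj(covmean):
        covmean = covmean.real                        # drop numerical imaginary part
    return float(np.sum((mu1 - mu2) ** 2) + np.trace(cov1 + cov2 - 2 * covmean))

def cafd(real_feats, fake_feats, real_labels, fake_labels):
    dists = []
    for c in np.unique(real_labels):
        r = real_feats[real_labels == c]
        f = fake_feats[fake_labels == c]
        dists.append(frechet(r.mean(0), np.cov(r, rowvar=False),
                             f.mean(0), np.cov(f, rowvar=False)))
    return float(np.mean(dists))                      # average over classes

rng = np.random.default_rng(0)
real = rng.normal(size=(600, 32))
fake = rng.normal(0.3, 1.1, size=(600, 32))
labels = rng.integers(0, 3, size=600)
print("class-aware Frechet distance:", round(cafd(real, fake, labels, labels), 3))
```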
In this paper, we propose a method of targetless and automatic Camera-LiDAR
calibration. Our approach is an extension of the hand-eye calibration framework to
2D-3D calibration. By using the sensor fusion odometry method, the scaled
camera motions are calculated with high accuracy. In addition to this, we
clarify the suitable motion for this calibration method.
The proposed method requires only the three-dimensional point cloud and the
camera image, and does not need additional information such as LiDAR
reflectance or an initial estimate of the extrinsic parameters. In the
experiments, we demonstrate our
method using several sensor configurations in indoor and outdoor scenes to
verify the effectiveness. Our method achieves higher accuracy than other
comparable state-of-the-art methods. | [
"cs.CV",
"cs.RO"
] |
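The hand-eye formulation underlying the abstract above can be sketched as A X = X B, where A_i are (scaled) camera motions, B_i are LiDAR motions, and X is the unknown extrinsic. The snippet below recovers the rotation by aligning rotation axes with the Kabsch algorithm and the translation by linear least squares on synthetic motions; the sensor-fusion odometry, scale estimation, and motion-selection analysis from the paper are not reproduced.

```python
# Hedged sketch of hand-eye calibration (A X = X B) on synthetic motion pairs.
import numpy as np
from scipy.spatial.transform import Rotation as R

rng = np.random.default_rng(0)
X_rot = R.random(random_state=0)              # ground-truth extrinsic rotation
X_t = rng.normal(size=3)                      # ground-truth extrinsic translation

A_rots, B_rots, A_ts, B_ts = [], [], [], []
for _ in range(10):
    Rb, tb = R.random(), rng.normal(size=3)   # LiDAR motion B_i
    Ra = X_rot * Rb * X_rot.inv()             # camera rotation from A = X B X^{-1}
    ta = X_rot.apply(tb) + X_t - Ra.apply(X_t)
    A_rots.append(Ra); B_rots.append(Rb); A_ts.append(ta); B_ts.append(tb)

# Rotation: rotation axes satisfy a_i = R_X b_i; align them (Kabsch/SVD).
a = np.array([r.as_rotvec() for r in A_rots])
b = np.array([r.as_rotvec() for r in B_rots])
R_X, _ = R.align_vectors(a, b)

# Translation: stack (R_Ai - I) t_X = R_X t_Bi - t_Ai and solve least squares.
M = np.vstack([Ra.as_matrix() - np.eye(3) for Ra in A_rots])
v = np.concatenate([R_X.apply(tb) - ta for ta, tb in zip(A_ts, B_ts)])
t_X = np.linalg.lstsq(M, v, rcond=None)[0]

print("rotation error (deg):", round(np.degrees((R_X * X_rot.inv()).magnitude()), 4))
print("translation error:", np.round(t_X - X_t, 4))
```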