text | label
---|---
Given key performance indicators collected with fine granularity as time
series, our aim is to predict and explain failures in storage environments.
Although explainable predictive modeling based on spiky telemetry data is key
in many domains, current approaches cannot tackle this problem. Deep learning
methods suitable for sequence modeling and learning temporal dependencies, such
as RNNs, are effective, but opaque from an explainability perspective. Our
approach first extracts the anomalous spikes from time series as events and
then builds an RNN classifier with attention mechanisms to embed the
irregularity and frequency of these events. A preliminary evaluation on real
world storage environments shows that our approach can predict failures within
a 3-day prediction window with accuracy comparable to that of traditional
RNN-based classifiers. At the same time, it can explain its predictions by
returning the key anomalous events that led to those failure predictions. | [
"cs.LG",
"stat.ML"
]
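For concreteness, a minimal sketch of the event-extraction step described in the abstract above: anomalous spikes are pulled out of a KPI time series as timestamped events. The rolling z-score detector and its thresholds are illustrative assumptions; the abstract does not specify which detector is used.

```python
import numpy as np

def extract_spike_events(series, window=48, z_thresh=4.0):
    """Turn anomalous spikes in a KPI time series into (index, magnitude) events.

    A rolling z-score detector is assumed here purely for illustration;
    the abstract does not specify the actual spike detector.
    """
    events = []
    for t in range(window, len(series)):
        past = series[t - window:t]
        mu, sigma = past.mean(), past.std() + 1e-8
        z = (series[t] - mu) / sigma
        if abs(z) > z_thresh:
            events.append((t, float(z)))  # irregular event times + magnitudes
    return events

rng = np.random.default_rng(0)
kpi = rng.normal(size=1000)
kpi[[200, 640, 641]] += 12.0        # inject spikes into synthetic telemetry
print(extract_spike_events(kpi))    # -> events near t=200 and t=640
```

The resulting (time, magnitude) pairs are what an attention-equipped RNN classifier could consume, embedding the irregularity and frequency of events rather than the raw signal.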
|
In this paper, we introduce a new reinforcement learning (RL) based neural
architecture search (NAS) methodology for effective and efficient generative
adversarial network (GAN) architecture search. The key idea is to formulate the
GAN architecture search problem as a Markov decision process (MDP) for smoother
architecture sampling, which enables a more effective RL-based search algorithm
by targeting the potential global optimal architecture. To improve efficiency,
we exploit an off-policy GAN architecture search algorithm that makes efficient
use of the samples generated by previous policies. Evaluation on two standard
benchmark datasets (i.e., CIFAR-10 and STL-10) demonstrates that the proposed
method is able to discover highly competitive architectures for generally
better image generation results with a considerably reduced computational
burden: 7 GPU hours. Our code is available at
https://github.com/Yuantian013/E2GAN. | [
"cs.CV"
]
|
Vector representations of graphs and relational structures, whether
hand-crafted feature vectors or learned representations, enable us to apply
standard data analysis and machine learning techniques to the structures. A
wide range of methods for generating such embeddings have been studied in the
machine learning and knowledge representation literature. However, vector
embeddings have received relatively little attention from a theoretical point
of view.
Starting with a survey of embedding techniques that have been used in
practice, in this paper we propose two theoretical approaches that we see as
central for understanding the foundations of vector embeddings. We draw
connections between the various approaches and suggest directions for future
research. | [
"cs.LG",
"cs.DB",
"cs.DM",
"stat.ML"
]
|
Feed-forward neural networks consist of a sequence of layers, in which each
layer performs some processing on the information from the previous layer. A
downside to this approach is that each layer (or module, as multiple modules
can operate in parallel) is tasked with processing the entire hidden state,
rather than a particular part of the state which is most relevant for that
module. Methods which only operate on a small number of input variables are an
essential part of most programming languages, and they allow for improved
modularity and code re-usability. Our proposed method, Neural Function Modules
(NFM), aims to introduce the same structural capability into deep learning.
Most of the work in the context of feed-forward networks combining top-down and
bottom-up feedback is limited to classification problems. The key contribution
of our work is to combine attention, sparsity, top-down and bottom-up feedback,
in a flexible algorithm which, as we show, improves the results in standard
classification, out-of-domain generalization, generative modeling, and learning
representations in the context of reinforcement learning. | [
"cs.LG",
"stat.ML"
]
|
Dense prediction models are widely used for image segmentation. One important
challenge is to sufficiently train these models to yield good generalizations
for hard-to-learn pixels. A typical group of such hard-to-learn pixels are
boundaries between instances. Many studies have proposed to give specific
attention to learning the boundary pixels. They include designing multi-task
networks with an additional task of boundary prediction and increasing the
weights of boundary pixels' predictions in the loss function. Such strategies
require defining what to attend to beforehand and incorporating this predefined
attention into the learning model. However, other groups of hard-to-learn
pixels may exist, and manually defining and incorporating the appropriate
attention for each group may not be feasible. In order to provide a more
attainable and scalable solution, this paper proposes AttentionBoost, which is
a new multi-attention learning model based on adaptive boosting. AttentionBoost
designs a multi-stage network and introduces a new loss adjustment mechanism
for a dense prediction model to adaptively learn what to attend to at each
stage directly from image data, without requiring any prior definition of what
to attend to. This mechanism modulates the attention of each stage to correct
the mistakes of previous stages by adjusting the loss weight of each pixel
prediction separately, according to how accurate the previous stages are on
that pixel. It thus enables AttentionBoost to learn different attentions for
different pixels at the same stage, according to the difficulty of learning
these pixels, as well as multiple attentions for the same pixel at different
stages, according to the confidence of these stages in their predictions for
that pixel. Using gland segmentation as a showcase application, our experiments
demonstrate that AttentionBoost improves the results of its counterparts. | [
"cs.CV",
"cs.LG"
]
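A hedged sketch of the loss-adjustment idea from the abstract above: each pixel's loss weight at a given stage is set from how wrong the previous stage was on that pixel. The specific weighting rule (`1 + 4 * error`) is an illustrative assumption, not AttentionBoost's exact modulation.

```python
import torch
import torch.nn.functional as F

def boosted_stage_loss(logits, target, prev_probs):
    """Per-pixel weighted BCE for one stage of a multi-stage dense predictor.

    prev_probs: foreground probabilities predicted by the previous stage.
    Pixels the previous stage got wrong receive larger weights (an
    illustrative adaptive-boosting-style rule, not the paper's exact one).
    """
    prev_error = (prev_probs - target).abs()    # in [0, 1], per pixel
    weights = 1.0 + 4.0 * prev_error            # harder pixels -> up to 5x weight
    bce = F.binary_cross_entropy_with_logits(logits, target, reduction="none")
    return (weights * bce).mean()

logits = torch.randn(2, 1, 64, 64)                    # current-stage predictions
target = torch.randint(0, 2, (2, 1, 64, 64)).float()  # binary segmentation mask
prev_probs = torch.rand(2, 1, 64, 64)                 # previous-stage outputs
print(boosted_stage_loss(logits, target, prev_probs))
```

Because the weights are computed per pixel, the same stage can attend differently to different pixels, which is the key property the abstract emphasizes.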
|
Recent neural-network-based architectures for image segmentation make
extensive use of feature forwarding mechanisms to integrate information from
multiple scales. Although yielding good results, even deeper architectures and
alternative methods for feature fusion at different resolutions have been
scarcely investigated for medical applications. In this work we propose to
implement segmentation via an encoder-decoder architecture which differs from
any other previously published method since (i) it employs a very deep
architecture based on residual learning and (ii) combines features via a
convolutional Long Short Term Memory (LSTM), instead of concatenation or
summation. The intuition is that the memory mechanism implemented by LSTMs can
better integrate features from different scales through a coarse-to-fine
strategy; hence the name Coarse-to-Fine Context Memory (CFCM). We demonstrate
the remarkable advantages of this approach on two datasets: the Montgomery
county lung segmentation dataset, and the EndoVis 2015 challenge dataset for
surgical instrument segmentation. | [
"cs.CV"
]
|
We present SOSELETO (SOurce SELEction for Target Optimization), a new method
for exploiting a source dataset to solve a classification problem on a target
dataset. SOSELETO is based on the following simple intuition: some source
examples are more informative than others for the target problem. To capture
this intuition, source samples are each given weights; these weights are solved
for jointly with the source and target classification problems via a bilevel
optimization scheme. The target therefore gets to choose the source samples
which are most informative for its own classification task. Furthermore, the
bilevel nature of the optimization acts as a kind of regularization on the
target, mitigating overfitting. SOSELETO may be applied both to classic
transfer learning and to the problem of training on datasets with noisy
labels; we show state-of-the-art results on both of these problems. | [
"cs.CV",
"cs.AI",
"cs.LG"
]
|
We propose a novel video inpainting algorithm that simultaneously
hallucinates missing appearance and motion (optical flow) information, building
upon the recent 'Deep Image Prior' (DIP) that exploits convolutional network
architectures to enforce plausible texture in static images. In extending DIP
to video we make two important contributions. First, we show that coherent
video inpainting is possible without a priori training. We take a generative
approach to inpainting based on internal (within-video) learning without
reliance upon an external corpus of visual data to train a one-size-fits-all
model for the large space of general videos. Second, we show that such a
framework can jointly generate both appearance and flow, whilst exploiting
these complementary modalities to ensure mutual consistency. We show that
leveraging appearance statistics specific to each video achieves visually
plausible results whilst handling the challenging problem of long-term
consistency. | [
"cs.CV"
]
|
Stochastic gradient descent (SGD) is almost ubiquitously used for training
non-convex optimization tasks. Recently, a hypothesis proposed by Keskar et al.
[2017] that large batch methods tend to converge to sharp minimizers has
received increasing attention. We theoretically justify this hypothesis by
providing new properties of SGD in both finite-time and asymptotic regimes. In
particular, we give an explicit escaping time of SGD from a local minimum in
the finite-time regime and prove that SGD tends to converge to flatter minima
in the asymptotic regime (although it may take exponential time to converge)
regardless of the batch size. We also find that SGD with a larger ratio of
learning rate to batch size tends to converge to a flat minimum faster;
however, its generalization performance could be worse than that of SGD with a
smaller ratio of learning rate to batch size. We include numerical experiments
to corroborate these theoretical findings. | [
"stat.ML",
"cs.LG"
]
|
Model-free reinforcement learning methods, such as Proximal Policy
Optimization or Q-learning, typically require thousands of interactions with
the environment to approximate the optimal controller, which may not always be
feasible in robotics due to safety concerns and time consumption. Model-based methods
such as PILCO or BlackDrops, while data-efficient, provide solutions with
limited robustness and complexity. To address this tradeoff, we introduce
active uncertainty reduction-based virtual environments, which are formed
through limited trials conducted in the original environment. We provide an
efficient method for uncertainty management, which is used as a metric for
self-improvement by identification of the points with maximum expected
improvement through adaptive sampling. Capturing the uncertainty also allows
for better mimicking of the reward responses of the original system. Our
approach enables the use of complex policy structures and reward functions
through a unique combination of model-based and model-free methods, while still
retaining the data efficiency. We demonstrate the validity of our method on
several classic reinforcement learning problems in OpenAI gym. We prove that
our approach offers a better modeling capacity for complex system dynamics as
compared to established methods. | [
"cs.LG",
"cs.RO",
"stat.ML"
]
|
Modeling fashion compatibility is challenging due to its complexity and
subjectivity. Existing work focuses on predicting compatibility between product
images (e.g. an image containing a t-shirt and an image containing a pair of
jeans). However, these approaches ignore real-world 'scene' images (e.g.
selfies); such images are hard to deal with due to their complexity, clutter,
and variations in lighting and pose, etc., but on the other hand could potentially
provide key context (e.g. the user's body type, or the season) for making more
accurate recommendations. In this work, we propose a new task called 'Complete
the Look', which seeks to recommend visually compatible products based on scene
images. We design an approach to extract training data for this task, and
propose a novel way to learn the scene-product compatibility from fashion or
interior design images. Our approach measures compatibility both globally and
locally via CNNs and attention mechanisms. Extensive experiments show that our
method achieves significant performance gains over alternative systems. Human
evaluation and qualitative analysis are also conducted to further understand
model behavior. We hope this work could lead to useful applications which link
large corpora of real-world scenes with shoppable products. | [
"cs.CV",
"cs.IR",
"cs.MM"
]
|
Visual crowd counting has been recently studied as a way to enable people
counting in crowd scenes from images. Albeit successful, vision-based crowd
counting approaches could fail to capture informative features in extreme
conditions, e.g., imaging at night and occlusion. In this work, we introduce a
novel task of audiovisual crowd counting, in which visual and auditory
information are integrated for counting purposes. We collect a large-scale
benchmark, named auDiovISual Crowd cOunting (DISCO) dataset, consisting of
1,935 images and the corresponding audio clips, and 170,270 annotated
instances. In order to fuse the two modalities, we make use of a linear
feature-wise fusion module that carries out an affine transformation on visual
and auditory features. Finally, we conduct extensive experiments using the
proposed dataset and approach. Experimental results show that introducing
auditory information can benefit crowd counting under different illumination,
noise, and occlusion conditions. The dataset and code have been made
available. | [
"cs.CV"
]
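The "linear feature-wise fusion module" above can be read as a FiLM-style affine transform: the audio features produce a per-channel scale and shift applied to the visual features. The layer sizes and exact parameterization below are assumptions for illustration.

```python
import torch
import torch.nn as nn

class AffineAVFusion(nn.Module):
    """Feature-wise affine fusion: audio predicts (gamma, beta) for visual maps.

    An illustrative FiLM-style reading of the 'linear feature-wise fusion
    module'; the paper's exact layer sizes are not specified here.
    """
    def __init__(self, audio_dim=128, visual_channels=256):
        super().__init__()
        self.gamma = nn.Linear(audio_dim, visual_channels)
        self.beta = nn.Linear(audio_dim, visual_channels)

    def forward(self, visual, audio):
        # visual: (B, C, H, W); audio: (B, audio_dim)
        g = self.gamma(audio).unsqueeze(-1).unsqueeze(-1)  # (B, C, 1, 1)
        b = self.beta(audio).unsqueeze(-1).unsqueeze(-1)
        return g * visual + b  # affine transformation of the visual features

fusion = AffineAVFusion()
out = fusion(torch.randn(4, 256, 32, 32), torch.randn(4, 128))
print(out.shape)  # torch.Size([4, 256, 32, 32])
```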
|
Overfit is a fundamental problem in machine learning in general, and in deep
learning in particular. In order to reduce overfit and improve generalization
in the classification of images, some employ invariance to a group of
transformations, such as rotations and reflections. However, since not all
objects necessarily exhibit the same invariance, it seems desirable to allow
the network to learn the useful level of invariance from the data. To this end,
motivated by self-supervision, we introduce an architecture enhancement for
existing neural network models based on input transformations, termed
'TransNet', together with a training algorithm suitable for it. Our model can
be employed during training time only and then pruned for prediction, resulting
in an architecture equivalent to the base model. We show that, thus pruned,
our model improves performance on various datasets while exhibiting improved
generalization, which is achieved in turn by enforcing soft invariance on the
convolutional kernels of the last layer in the base model. Theoretical analysis
is provided to support the proposed method. | [
"cs.LG",
"cs.CV"
]
|
Given the input graph and its label/property, several key problems of graph
learning, such as finding interpretable subgraphs, graph denoising and graph
compression, can be attributed to the fundamental problem of recognizing a
subgraph of the original one. This subgraph should be as informative as
possible, yet contain less redundant and noisy structure. This problem setting
is closely related to the well-known information bottleneck (IB) principle,
which, however, has been less studied for irregular graph data and graph
neural networks (GNNs). In this paper, we propose a framework of Graph
Information Bottleneck (GIB) for the subgraph recognition problem in deep graph
learning. Under this framework, one can recognize the maximally informative yet
compressive subgraph, named IB-subgraph. However, the GIB objective is
notoriously hard to optimize, mostly due to the intractability of the mutual
information of irregular graph data and the unstable optimization process. In
order to tackle these challenges, we propose: i) a GIB objective based on a
mutual information estimator for the irregular graph data; ii) a bi-level
optimization scheme to maximize the GIB objective; iii) a connectivity loss to
stabilize the optimization process. We evaluate the properties of the
IB-subgraph in three application scenarios: improvement of graph
classification, graph interpretation and graph denoising. Extensive experiments
demonstrate that the information-theoretic IB-subgraph enjoys superior graph
properties. | [
"cs.LG",
"stat.ML"
]
|
Automatic polyp detection and segmentation are highly desirable for colon
screening due to the polyp miss rate of physicians during colonoscopy, which
is about 25%. However, this computerization is still an unsolved problem due to
various polyp-like structures in the colon and high interclass polyp variations
in terms of size, color, shape, and texture. In this paper, we adapt Mask R-CNN
and evaluate its performance with different modern convolutional neural
networks (CNN) as its feature extractor for polyp detection and segmentation.
We investigate the performance improvement of each feature extractor by adding
extra polyp images to the training dataset to answer whether we need deeper
and more complex CNNs or a better training dataset for automatic polyp detection
and segmentation. Finally, we propose an ensemble method for further
performance improvement. We evaluate the performance on the 2015 MICCAI polyp
detection dataset. The best results achieved are 72.59% recall, 80% precision,
70.42% dice, and 61.24% Jaccard. The model achieved state-of-the-art
segmentation performance. | [
"cs.CV",
"cs.LG",
"eess.IV"
]
|
Automatic extraction of temporal relations between event pairs is an
important task for several natural language processing applications such as
Question Answering, Information Extraction, and Summarization. Since most
existing methods are supervised and require large corpora, which for many
languages do not exist, we have concentrated our efforts to reduce the need for
annotated data as much as possible. This paper presents two different
algorithms towards this goal. The first algorithm is a weakly supervised
machine learning approach for classification of temporal relations between
events. In the first stage, the algorithm learns a general classifier from an
annotated corpus. Then, inspired by the hypothesis of "one type of temporal
relation per discourse", it extracts useful information from a cluster of
topically related documents. We show that by combining the global information
of such a cluster with local decisions of a general classifier, a bootstrapping
cross-document classifier can be built to extract temporal relations between
events. Our experiments show that without any additional annotated data, the
accuracy of the proposed algorithm is higher than that of several previous
successful systems. The second proposed method for temporal relation extraction
is based on the expectation maximization (EM) algorithm. Within EM, we used
different techniques such as a greedy best-first search and integer linear
programming for temporal inconsistency removal. We think that the experimental
results of our EM-based algorithm, as a first step toward a fully unsupervised
temporal relation extraction method, are encouraging. | [
"cs.LG",
"cs.CL"
]
|
Deep learning has recently achieved very promising results in a wide range of
areas such as computer vision, speech recognition and natural language
processing. It aims to learn hierarchical representations of data by using deep
architecture models. In a smart city, a lot of data (e.g. videos captured from
many distributed sensors) need to be automatically processed and analyzed. In
this paper, we review the deep learning algorithms applied to video analytics
in smart cities in terms of different research topics: object detection, object
tracking, face recognition, image classification and scene labeling. | [
"cs.CV"
]
|
In this work, we present a new multi-view depth estimation method that
utilizes both conventional SfM reconstruction and learning-based priors over
the recently proposed neural radiance fields (NeRF). Unlike existing neural
network based optimization methods that rely on estimated correspondences, our
method directly optimizes over implicit volumes, eliminating the challenging
step of matching pixels in indoor scenes. The key to our approach is to utilize
the learning-based priors to guide the optimization process of NeRF. Our system
first adapts a monocular depth network to the target scene by finetuning on
its sparse SfM reconstruction. Then, we show that the shape-radiance ambiguity
of NeRF still exists in indoor environments and propose to address the issue by
employing the adapted depth priors to monitor the sampling process of volume
rendering. Finally, a per-pixel confidence map acquired by error computation on
the rendered image can be used to further improve the depth quality.
Experiments show that our proposed framework significantly outperforms
state-of-the-art methods on indoor scenes, with surprising findings presented
on the effectiveness of correspondence-based optimization and NeRF-based
optimization over the adapted depth priors. In addition, we show that the
guided optimization scheme does not sacrifice the original synthesis capability
of neural radiance fields, improving the rendering quality on both seen and
novel views. Code is available at https://github.com/weiyithu/NerfingMVS. | [
"cs.CV"
]
|
We propose Graphical Generative Adversarial Networks (Graphical-GAN) to model
structured data. Graphical-GAN conjoins the power of Bayesian networks on
compactly representing the dependency structures among random variables and
that of generative adversarial networks on learning expressive dependency
functions. We introduce a structured recognition model to infer the posterior
distribution of latent variables given observations. We generalize the
Expectation Propagation (EP) algorithm to learn the generative model and
recognition model jointly. Finally, we present two important instances of
Graphical-GAN, i.e. Gaussian Mixture GAN (GMGAN) and State Space GAN (SSGAN),
which can successfully learn the discrete and temporal structures on visual
datasets, respectively. | [
"cs.LG",
"cs.CV",
"stat.ML"
]
|
We propose a learning-based network for depth map estimation from multi-view
stereo (MVS) images. Our proposed network consists of three sub-networks: 1) a
base network for initial depth map estimation from an unstructured stereo image
pair, 2) a novel refinement network that leverages both photometric and
geometric information, and 3) an attentional multi-view aggregation framework
that enables efficient information exchange and integration among different
stereo image pairs. The proposed network, called A-TVSNet, is evaluated on
various MVS datasets and shows the ability to produce high-quality depth maps
that outperform competing approaches. Our code is available at
https://github.com/daiszh/A-TVSNet. | [
"cs.CV"
]
|
Nowadays deep learning is dominating the field of machine learning with
state-of-the-art performance in various application areas. Recently, spiking
neural networks (SNNs) have been attracting a great deal of attention, notably
owing to their power efficiency, which can potentially allow us to implement a
low-power deep learning engine suitable for real-time/mobile applications.
However, implementing SNN-based deep learning remains challenging, especially
gradient-based training of SNNs by error backpropagation. We cannot simply
propagate errors through SNNs in the conventional way because of the property
of SNNs that they process discrete data in the form of a series. Consequently, most of
the previous studies employ a workaround technique, which first trains a
conventional weighted-sum deep neural network and then maps the learning
weights to the SNN under training, instead of training SNN parameters directly.
In order to eliminate this workaround, a new class of SNNs named deep spiking
networks (DSNs) has recently been proposed, which can be trained directly (without a
mapping from conventional deep networks) by error backpropagation with
stochastic gradient descent. In this paper, we show that the initialization of
the membrane potential on the backward path is an important step in DSN
training, through diverse experiments performed under various conditions.
Furthermore, we propose a simple and efficient method that can improve DSN
training by controlling the initial membrane potential on the backward path. In
our experiments, adopting the proposed approach allowed us to boost the
performance of DSN training in terms of converging time and accuracy. | [
"cs.LG",
"cs.NE"
]
|
The US EIA estimated in 2017 that about 39\% of total U.S. energy consumption
was attributable to the residential and commercial sectors. Therefore, Intelligent Building
Management (IBM) solutions that minimize consumption while maintaining tenant
comfort are an important component in addressing climate change. A forecasting
capability for accurate prediction of indoor temperatures in a planning horizon
of 24 hours is essential to IBM. It should predict the indoor temperature in
both short-term (e.g. 15 minutes) and long-term (e.g. 24 hours) periods
accurately, including on weekends, major holidays, and minor holidays. Other
requirements include the ability to predict the maximum and the minimum indoor
temperatures precisely and provide the confidence for each prediction. To
achieve these requirements, we propose a novel adjoint neural network
architecture for time series prediction that uses an ancillary neural network
to capture weekend and holiday information. We studied four long short-term
memory (LSTM) based time series prediction networks within this architecture.
We observed that the ancillary neural network helps to improve the prediction
accuracy, the maximum and minimum temperature predictions, and the model
reliability for all networks tested. | [
"cs.LG",
"cs.NE"
]
|
Kernel methods have been among the most popular techniques in machine
learning, where learning tasks are solved using the property of reproducing
kernel Hilbert space (RKHS). In this paper, we propose a novel data analysis
framework with reproducing kernel Hilbert $C^*$-module (RKHM) and kernel mean
embedding (KME) in RKHM. Since RKHM contains richer information than RKHS or
vector-valued RKHS (vv RKHS), analysis with RKHM enables us to capture and
extract structural properties in multivariate data, functional data and other
structured data. We develop a branch of theory for RKHM for application to
data analysis, including the representer theorem and the injectivity and
universality of the proposed KME. We also show that RKHM generalizes RKHS and vv
RKHS. Then, we provide concrete procedures for employing RKHM and the proposed
KME to data analysis. | [
"stat.ML",
"cs.LG",
"math.OA"
]
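Since the abstract above leans on kernel mean embeddings, it may help to recall the classical RKHS definition that RKHM generalizes; this is standard background stated for context, not the paper's RKHM construction:

```latex
\mu_P \;=\; \mathbb{E}_{x \sim P}\big[\,k(\cdot, x)\,\big] \in \mathcal{H}_k,
\qquad
\widehat{\mu}_P \;=\; \frac{1}{n}\sum_{i=1}^{n} k(\cdot, x_i).
```

Roughly speaking, the RKHM version replaces the scalar-valued kernel $k$ with a $C^*$-algebra-valued one, which is what lets the embedding retain the structural information the abstract refers to.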
|
Convolutional neural networks (CNN) have had unprecedented success in medical
imaging and, in particular, in medical image segmentation. However, despite the
fact that segmentation results are closer than ever to the inter-expert
variability, CNNs are not immune to producing anatomically inaccurate
segmentations, even when built upon a shape prior. In this paper, we present a
framework for producing cardiac image segmentation maps that are guaranteed to
respect pre-defined anatomical criteria, while remaining within the
inter-expert variability. The idea behind our method is to use a well-trained
CNN, have it process cardiac images, identify the anatomically implausible
results and warp these results toward the closest anatomically valid cardiac
shape. This warping procedure is carried out with a constrained variational
autoencoder (cVAE) trained to learn a representation of valid cardiac shapes
through a smooth, yet constrained, latent space. With this cVAE, we can project
any implausible shape into the cardiac latent space and steer it toward the
closest correct shape. We tested our framework on short-axis MRI as well as
apical two and four-chamber view ultrasound images, two modalities for which
cardiac shapes are drastically different. With our method, CNNs can now produce
results that are both within the inter-expert variability and always
anatomically plausible without having to rely on a shape prior. | [
"cs.CV",
"cs.LG",
"eess.IV"
]
|
Learning over multi-view data is a challenging problem with strong practical
applications. Most related studies focus on the classification point of view
and assume that all the views are available at any time. We consider an
extension of this framework in two directions. First, based on the BiGAN model,
the Multi-view BiGAN (MV-BiGAN) is able to perform density estimation from
multi-view inputs. Second, it can deal with missing views and is able to update
its prediction when additional views are provided. We illustrate these
properties on a set of experiments over different datasets. | [
"cs.LG"
]
|
Machine learning methods are widely used in the natural sciences to model and
predict physical systems from observation data. Yet, they are often used as
poorly understood "black boxes," disregarding existing mathematical structure
and invariants of the problem. Recently, the proposal of Hamiltonian Neural
Networks (HNNs) took a first step towards a unified "gray box" approach, using
physical insight to improve performance for Hamiltonian systems. In this paper,
we explore a significantly improved training method for HNNs, exploiting the
symplectic structure of Hamiltonian systems with a different loss function.
This frees the loss from an artificial lower bound. We mathematically guarantee
the existence of an exact Hamiltonian function which the HNN can learn. This
allows us to prove and numerically analyze the errors made by HNNs which, in
turn, renders them fully explainable. Finally, we present a novel post-training
correction to obtain the true Hamiltonian only from discretized observation
data, up to an arbitrary order. | [
"cs.LG",
"cs.NA",
"math.NA"
]
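For context, a minimal HNN forward pass: a network outputs a scalar H(q, p), and the time derivatives follow from its gradients via Hamilton's equations. The architecture below is an assumed sketch; the paper's improved symplectic loss is not reproduced here.

```python
import torch
import torch.nn as nn

class HNN(nn.Module):
    """Scalar Hamiltonian network; dynamics come from autograd gradients."""
    def __init__(self, dim=1):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * dim, 64), nn.Tanh(),
                                 nn.Linear(64, 1))

    def time_derivative(self, q, p):
        qp = torch.cat([q, p], dim=-1).requires_grad_(True)
        H = self.net(qp).sum()                       # scalar Hamiltonian
        grads = torch.autograd.grad(H, qp, create_graph=True)[0]
        dHdq, dHdp = grads.chunk(2, dim=-1)
        # Hamilton's equations: dq/dt = dH/dp, dp/dt = -dH/dq
        return dHdp, -dHdq

model = HNN(dim=1)
q, p = torch.randn(8, 1), torch.randn(8, 1)
dq_dt, dp_dt = model.time_derivative(q, p)
print(dq_dt.shape, dp_dt.shape)  # torch.Size([8, 1]) torch.Size([8, 1])
```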
|
In this paper, we propose a deep learning architecture that produces accurate
dense depth for the outdoor scene from a single color image and a sparse depth.
Inspired by the indoor depth completion, our network estimates surface normals
as the intermediate representation to produce dense depth, and can be trained
end-to-end. With a modified encoder-decoder structure, our network effectively
fuses the dense color image and the sparse LiDAR depth. To address outdoor
specific challenges, our network predicts a confidence mask to handle mixed
LiDAR signals near foreground boundaries due to occlusion, and combines
estimates from the color image and surface normals with learned attention maps
to improve the depth accuracy especially for distant areas. Extensive
experiments demonstrate that our model improves upon the state-of-the-art
performance on the KITTI depth completion benchmark. An ablation study shows
the positive impact of each model component on the final performance, and a
comprehensive analysis shows that our model generalizes well to inputs with
higher sparsity or from indoor scenes. | [
"cs.CV"
]
|
Unsupervised learning of visual similarities is of paramount importance to
computer vision, particularly due to the lack of training data for fine-grained
similarities. Deep learning of similarities is often based on relationships
between pairs or triplets of samples. Many of these relations are unreliable
and mutually contradicting, implying inconsistencies when trained without
supervision information that relates different tuples or triplets to each
other. To overcome this problem, we use local estimates of reliable
(dis-)similarities to initially group samples into compact surrogate classes
and use local partial orders of samples to classes to link classes to each
other. Similarity learning is then formulated as a partial ordering task with
soft correspondences of all samples to classes. Adopting a strategy of
self-supervision, a CNN is trained to optimally represent samples in a mutually
consistent manner while updating the classes. The similarity learning and
grouping procedure are integrated in a single model and optimized jointly. The
proposed unsupervised approach shows competitive performance on detailed pose
estimation and object classification. | [
"cs.CV"
]
|
Graph neural networks have achieved great success in learning node
representations for graph tasks such as node classification and link
prediction. Graph representation learning requires graph pooling to obtain
graph representations from node representations. It is challenging to develop
graph pooling methods due to the variable sizes and isomorphic structures of
graphs. In this work, we propose to use second-order pooling as graph pooling,
which naturally solves the above challenges. In addition, compared to existing
graph pooling methods, second-order pooling is able to use information from all
nodes and collect second-order statistics, making it more powerful. We show
that direct use of second-order pooling with graph neural networks leads to
practical problems. To overcome these problems, we propose two novel global
graph pooling methods based on second-order pooling; namely, bilinear mapping
and attentional second-order pooling. In addition, we extend attentional
second-order pooling to hierarchical graph pooling for more flexible use in
GNNs. We perform thorough experiments on graph classification tasks to
demonstrate the effectiveness and superiority of our proposed methods.
Experimental results show that our methods improve the performance
significantly and consistently. | [
"cs.LG",
"cs.CV"
]
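Second-order pooling is compact enough to state directly: given node features H ∈ R^{n×d}, the pooled graph representation is H^T H (flattened), which is permutation-invariant and independent of the number of nodes. The bilinear-mapping variant below, which first reduces d with a learned matrix, is a sketch under assumed dimensions.

```python
import torch
import torch.nn as nn

def second_order_pool(H):
    """H: (n, d) node features -> (d*d,) graph representation.

    Permutation-invariant and independent of n, since it aggregates
    second-order statistics over all nodes.
    """
    return (H.t() @ H).flatten()

class BilinearSecondOrderPool(nn.Module):
    """Reduce feature dimension with a learned map before pooling (a sketch
    of the 'bilinear mapping' variant; dimensions are assumptions)."""
    def __init__(self, d_in=64, d_out=16):
        super().__init__()
        self.proj = nn.Linear(d_in, d_out, bias=False)

    def forward(self, H):
        Z = self.proj(H)              # (n, d_out)
        return (Z.t() @ Z).flatten()  # (d_out * d_out,)

H = torch.randn(30, 64)                    # a graph with 30 nodes
print(second_order_pool(H).shape)          # torch.Size([4096])
print(BilinearSecondOrderPool()(H).shape)  # torch.Size([256])
```

The bilinear projection addresses the practical problem the abstract alludes to: the raw d*d output is too large to feed to a classifier when d is big.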
|
Over the past few years, self-attention has been shining in the field of deep
learning, especially in the domain of natural language processing (NLP). Its
impressive effectiveness, along with ubiquitous implementations, have aroused
our interest in efficiently scheduling the data-flow of corresponding
computations onto architectures with many computing units to realize parallel
computing. In this paper, based on the theory of self-attention mechanism and
state-of-the-art realization of self-attention in language models, we propose a
general scheduling algorithm, which is derived from the optimal scheduling for
small instances solved by a satisfiability checking (SAT) solver, to parallelize
typical computations of self-attention. We also put forward strategies for
skipping redundant computations, which achieve reductions of almost 25% and
50% of the original computations for two widely adopted application schemes of
self-attention, respectively. With the proposed
optimization adopted, we have correspondingly come up with another two
scheduling algorithms. The proposed algorithms are applicable regardless of
problem size, as long as the number of input vectors is divisible by the
number of computing units available in the architecture. Since proving the
correctness of the algorithms mathematically for general cases is complex, we
have conducted experiments on particular instances, solving the corresponding
SAT problems, to validate the algorithms and the superior quality of the
solutions they provide. | [
"cs.LG",
"cs.AR"
]
|
Route planning is important in transportation. Existing works focus on
finding the shortest path solution or using metrics such as safety and energy
consumption to determine the planning. It is noted that most of these studies
rely on prior knowledge of the road network, which may not be available in certain
situations. In this paper, we design a route planning algorithm based on deep
reinforcement learning (DRL) for pedestrians. We use travel time consumption as
the metric, and plan the route by predicting pedestrian flow in the road
network. We put an agent, which is an intelligent robot, on a virtual map.
Different from previous studies, our approach assumes that the agent does not
need any prior information about the road network, but simply relies on the
interaction with the environment. We propose a dynamically adjustable route
planning (DARP) algorithm, where the agent learns strategies through a dueling
deep Q network to avoid congested roads. Simulation results show that the DARP
algorithm saves 52% of the time under congestion conditions when compared with
traditional shortest path planning algorithms. | [
"cs.LG",
"cs.AI",
"cs.SY",
"eess.SY"
]
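For readers unfamiliar with the dueling deep Q network mentioned above, it decomposes Q into a state value and per-action advantages; a standard dueling head looks like the sketch below (input, hidden, and action sizes are assumptions).

```python
import torch
import torch.nn as nn

class DuelingQNet(nn.Module):
    """Standard dueling head: Q(s,a) = V(s) + A(s,a) - mean_a A(s,a)."""
    def __init__(self, obs_dim=16, n_actions=4, hidden=128):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)        # state value V(s)
        self.advantage = nn.Linear(hidden, n_actions)  # advantages A(s, a)

    def forward(self, obs):
        h = self.trunk(obs)
        v, a = self.value(h), self.advantage(h)
        # Subtracting the mean advantage keeps the decomposition identifiable.
        return v + a - a.mean(dim=-1, keepdim=True)

q_values = DuelingQNet()(torch.randn(5, 16))
print(q_values.shape)  # torch.Size([5, 4])
```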
|
Causal discovery from observational data is a challenging task to which an
exact solution cannot always be identified. Under assumptions about the
data-generative process, the causal graph can often be identified up to an
equivalence class. Proposing new realistic assumptions to circumscribe such
equivalence classes is an active field of research. In this work, we propose a
new set of assumptions that constrain possible causal relationships based on
the nature of the variables. We thus introduce typed directed acyclic graphs,
in which variable types are used to determine the validity of causal
relationships. We demonstrate, both theoretically and empirically, that the
proposed assumptions can result in significant gains in the identification of
the causal graph. | [
"cs.LG",
"cs.AI",
"stat.ML"
]
|
We introduce a novel single-shot object detector to ease the
foreground-background class imbalance by suppressing the easy negatives while increasing
the positives. To achieve this, we propose an Anchor Promotion Module (APM)
which predicts the probability of each anchor as positive and adjusts their
initial locations and shapes to promote both the quality and quantity of
positive anchors. In addition, we design an efficient Feature Alignment Module
(FAM) to extract aligned features for fitting the promoted anchors with the
help of both the location and shape transformation information from the APM. We
assemble the two proposed modules to the backbone of VGG-16 and ResNet-101
network with an encoder-decoder architecture. Extensive experiments on MS COCO
well demonstrate our model performs competitively with alternative methods
(40.0\% mAP on \textit{test-dev} set) and runs faster (28.6 \textit{fps}). | [
"cs.CV"
]
|
Policy gradient methods are appealing in deep reinforcement learning but
suffer from high variance of the gradient estimate. To reduce the variance,
the state value function is commonly applied. However, the effect of the state
value function becomes limited in stochastic dynamic environments, where the
unexpected state dynamics and rewards will increase the variance. In this
paper, we propose to replace the state value function with a novel hindsight
value function, which leverages the information from the future to reduce the
variance of the gradient estimate for stochastic dynamic environments.
Particularly, to obtain an ideally unbiased gradient estimate, we propose an
information-theoretic approach, which optimizes the embeddings of the future to
be independent of previous actions. In our experiments, we apply the proposed
hindsight value function in stochastic dynamic environments, including
discrete-action environments and continuous-action environments. Compared with
the standard state value function, the proposed hindsight value function
consistently reduces the variance, stabilizes the training, and improves the
eventual policy. | [
"cs.LG",
"cs.AI"
]
|
Exploration in sparse reward reinforcement learning remains an open
challenge. Many state-of-the-art methods use intrinsic motivation to complement
the sparse extrinsic reward signal, giving the agent more opportunities to
receive feedback during exploration. Commonly these signals are added as bonus
rewards, which results in a mixture policy that pursues neither exploration
nor task fulfillment resolutely. In this paper, we instead learn separate
intrinsic and extrinsic task policies and schedule between these different
drives to accelerate exploration and stabilize learning. Moreover, we introduce
a new type of intrinsic reward denoted as successor feature control (SFC),
which is general and not task-specific. It takes into account statistics over
complete trajectories and thus differs from previous methods that only use
local information to evaluate intrinsic motivation. We evaluate our proposed
scheduled intrinsic drive (SID) agent using three different environments with
pure visual inputs: VizDoom, DeepMind Lab and DeepMind Control Suite. The
results show a substantially improved exploration efficiency with SFC and the
hierarchical usage of the intrinsic drives. A video of our experimental results
can be found at https://youtu.be/b0MbY3lUlEI. | [
"cs.LG",
"cs.AI",
"cs.RO",
"stat.ML"
]
|
This paper presents a new approach to 3D object detection that leverages the
properties of the data obtained by a LiDAR sensor. State-of-the-art detectors
use neural network architectures based on assumptions valid for camera images.
However, point clouds obtained from LiDAR are fundamentally different. Most
detectors use shared filter kernels to extract features, which do not take
into account the range-dependent nature of the point cloud features. To show this,
different detectors are trained on two splits of the KITTI dataset: close range
(objects up to 25 meters from LiDAR) and long-range. Top view images are
generated from point clouds as input for the networks. Combined results
outperform the baseline network trained on the full dataset with a single
backbone. Additional research compares the effect of using different input
features when converting the point cloud to image. The results indicate that
the network focuses on the shape and structure of the objects, rather than
exact values of the input. This work proposes an improvement for 3D object
detectors by taking into account the properties of LiDAR point clouds over
distance. Results show that training separate networks for close-range and
long-range objects boosts performance for all KITTI benchmark difficulties. | [
"cs.CV"
]
|
Stability is a fundamental property of dynamical systems, yet to this date it
has had little bearing on the practice of recurrent neural networks. In this
work, we conduct a thorough investigation of stable recurrent models.
Theoretically, we prove stable recurrent neural networks are well approximated
by feed-forward networks for the purpose of both inference and training by
gradient descent. Empirically, we demonstrate stable recurrent models often
perform as well as their unstable counterparts on benchmark sequence tasks.
Taken together, these findings shed light on the effective power of recurrent
networks and suggest much of sequence learning happens, or can be made to
happen, in the stable regime. Moreover, our results help to explain why in many
cases practitioners succeed in replacing recurrent models by feed-forward
models. | [
"cs.LG",
"stat.ML"
]
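For a linear recurrence, stability amounts to the recurrent weight matrix having spectral norm below one. One common way to train in the stable regime is to project the weight after each update, as sketched below; the projection recipe is a standard one assumed for illustration.

```python
import torch

def project_to_stable(W, max_norm=0.999):
    """Project a recurrent weight matrix onto {W : ||W||_2 <= max_norm}.

    For the linear recurrence h_t = W h_{t-1} + U x_t, a spectral norm
    below 1 guarantees stability: perturbations to the hidden state
    decay geometrically, so long-range dependence is bounded.
    """
    with torch.no_grad():
        spectral_norm = torch.linalg.matrix_norm(W, ord=2)
        if spectral_norm > max_norm:
            W.mul_(max_norm / spectral_norm)  # rescale largest singular value
    return W

W = torch.randn(64, 64)
project_to_stable(W)
print(torch.linalg.matrix_norm(W, ord=2))  # <= 0.999
```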
|
Self-supervised visual representation learning has seen huge progress
recently, but no large scale evaluation has compared the many models now
available. We evaluate the transfer performance of 13 top self-supervised
models on 40 downstream tasks, including many-shot and few-shot recognition,
object detection, and dense prediction. We compare their performance to a
supervised baseline and show that on most tasks the best self-supervised models
outperform supervision, confirming the recently observed trend in the
literature. We find ImageNet Top-1 accuracy to be highly correlated with
transfer to many-shot recognition, but increasingly less so for few-shot,
object detection and dense prediction. No single self-supervised method
dominates overall, suggesting that universal pre-training is still unsolved.
Our analysis of features suggests that top self-supervised learners fail to
preserve colour information as well as supervised alternatives, but tend to
induce better classifier calibration, and less attentive overfitting than
supervised learners. | [
"cs.CV"
]
|
This paper aims to establish an entropy-regularized value-based reinforcement
learning method that can ensure the monotonic improvement of policies at each
policy update. Unlike previously proposed lower bounds on policy improvement in
general infinite-horizon MDPs, we derive an entropy-regularization aware lower
bound. Since our bound only requires the expected policy advantage function to
be estimated, it is scalable to large-scale (continuous) state-space problems.
We propose a novel reinforcement learning algorithm that exploits this
lower bound as a criterion for adjusting the degree of a policy update for
alleviating policy oscillation. We demonstrate the effectiveness of our
approach in both discrete-state maze and continuous-state inverted pendulum
tasks using a linear function approximator for value estimation. | [
"cs.LG",
"cs.AI",
"stat.ML"
]
|
Optimal transport distances are powerful tools to compare probability
distributions and have found many applications in machine learning. Yet their
algorithmic complexity prevents their direct use on large scale datasets. To
overcome this challenge, practitioners compute these distances on minibatches,
{\em i.e.}, they average the outcome of several smaller optimal transport
problems. We propose in this paper an analysis of this practice, whose effects
are not well understood so far. We notably argue that it is equivalent to an
implicit regularization of the original problem, with appealing properties such
as unbiased estimators, gradients, and a concentration bound around the
expectation, but also with defects such as the loss of the distance property. Along
with this theoretical analysis, we also conduct empirical experiments on
gradient flows, GANs or color transfer that highlight the practical interest of
this strategy. | [
"stat.ML",
"cs.LG"
]
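A self-contained sketch of the minibatch estimator analyzed above: the cost between two point clouds is approximated by averaging exact OT costs over random minibatch pairs. For equal-size uniform minibatches, exact OT reduces to an assignment problem, so scipy suffices; batch sizes and counts are illustrative.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def minibatch_ot(x, y, batch=64, n_batches=20, seed=0):
    """Average squared-Euclidean OT cost over random minibatch pairs.

    With uniform weights and equal batch sizes, the optimal transport plan
    is a permutation, so the Hungarian algorithm solves each subproblem.
    """
    rng = np.random.default_rng(seed)
    costs = []
    for _ in range(n_batches):
        xb = x[rng.choice(len(x), batch, replace=False)]
        yb = y[rng.choice(len(y), batch, replace=False)]
        M = ((xb[:, None, :] - yb[None, :, :]) ** 2).sum(-1)  # pairwise costs
        rows, cols = linear_sum_assignment(M)                 # exact OT plan
        costs.append(M[rows, cols].mean())
    return float(np.mean(costs))

x = np.random.default_rng(1).normal(size=(5000, 2))
y = np.random.default_rng(2).normal(loc=1.0, size=(5000, 2))
print(minibatch_ot(x, y))  # cheap minibatch value, not the exact OT distance
```

The final comment reflects the abstract's point: the minibatch value is a well-concentrated but regularized quantity that is no longer a true distance.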
|
Unsupervised image-to-image translation methods have received a lot of
attention in the last few years. Multiple techniques emerged tackling the
initial challenge from different perspectives. Some focus on learning as much
as possible from several target style images for translations, while others make
use of object detection in order to produce more realistic results on
content-rich scenes. In this work, we assess how a method that has initially
been developed for single object translation performs on more diverse and
content-rich images. Our work is based on the FUNIT[1] framework and we train
it with a more diverse dataset. This helps us understand how such methods
behave beyond their initial frame of application. We present a way to extend a
dataset based on object detection. Moreover, we propose a way to adapt the
FUNIT framework in order to leverage the power of object detection that one can
see in other methods. | [
"cs.CV",
"cs.LG"
]
|
Fictitious play with reinforcement learning is a general and effective
framework for zero-sum games. However, using the current deep neural network
models, the implementation of fictitious play faces crucial challenges. Neural
network model training employs gradient descent approaches to update all
connection weights, and thus easily forgets old opponents after training
to beat new opponents. Existing approaches often maintain a pool of
historical policy models to avoid this forgetting. However, learning to beat a
pool in stochastic games, i.e., a wide distribution over policy models, is
either sample-consuming or insufficient to exploit all models with limited
amount of samples. In this paper, we propose a learning process with neural
fictitious play to alleviate the above issues. We train a single model as our
policy model, which consists of sub-models and a selector. Every time it faces a
new opponent, the model is expanded by adding a new sub-model, where only the
new sub-model is updated instead of the whole model. At the same time, the
selector is also updated to mix up the new sub-model with the previous ones at
the state-level, so that the model is maintained as a behavior strategy instead
of a wide distribution over policy models. Experiments on Kuhn poker, a
grid-world Treasure Hunting game, and Mini-RTS environments show that the
proposed approach alleviates the forgetting problem, and consequently improves
the learning efficiency and the robustness of neural fictitious play. | [
"cs.LG",
"cs.AI",
"stat.ML"
]
|
Furnishing and rendering an indoor scene is a common but tedious task for
interior design: an artist needs to observe the space, create a conceptual
design, build a 3D model, and perform rendering. In this paper, we introduce a
new problem of domain-specific image synthesis using generative modeling,
namely neural scene decoration. Given a photograph of an empty indoor space, we
aim to synthesize a new image of the same space that is fully furnished and
decorated. Neural scene decoration can be applied in practice to efficiently
generate conceptual but realistic interior designs, bypassing the traditional
multi-step and time-consuming pipeline. Our attempt at neural scene decoration
in this paper is a generative adversarial neural network that takes the input
photograph and directly produces the image of the desired furnishing and
decorations. Our network contains a novel image generator that transforms an
initial point-based object layout into a realistic photograph. We demonstrate
the performance of our proposed method by showing that it outperforms the
baselines built upon previous works on image translations both qualitatively
and quantitatively. Our user study further validates the plausibility and
aesthetics in the generated designs. | [
"cs.CV",
"cs.GR"
]
|
Scene text recognition (STR) is still a hot research topic in the computer vision
field due to its various applications. Existing works mainly focus on learning
a general model with a huge number of synthetic text images to recognize
unconstrained scene texts, and have achieved substantial progress. However,
these methods are not quite applicable in many real-world scenarios where 1)
high recognition accuracy is required, while 2) labeled samples are lacking. To
tackle this challenging problem, this paper proposes a few-shot adversarial
sequence domain adaptation (FASDA) approach to build sequence adaptation
between the synthetic source domain (with many synthetic labeled samples) and a
specific target domain (with only some or a few real labeled samples). This is
done by simultaneously learning each character's feature representation with an
attention mechanism and establishing the corresponding character-level latent
subspace with adversarial learning. Our approach can maximize the
character-level confusion between the source domain and the target domain,
thus achieving sequence-level adaptation even with a small number of labeled
samples in the target domain. Extensive experiments on various datasets show
that our method significantly outperforms the finetuning scheme, and obtains
comparable performance to the state-of-the-art STR methods. | [
"cs.CV"
]
|
In this paper, we focus on recognizing 3D shapes from arbitrary views, i.e.,
arbitrary numbers and positions of viewpoints. It is a challenging and
realistic setting for view-based 3D shape recognition. We propose a canonical
view representation to tackle this challenge. We first transform the original
features of arbitrary views to a fixed number of view features, dubbed
canonical view representation, by aligning the arbitrary view features to a set
of learnable reference view features using optimal transport. In this way, each
3D shape with arbitrary views is represented by a fixed number of canonical
view features, which are further aggregated to generate a rich and robust 3D
shape representation for shape recognition. We also propose a canonical view
feature separation constraint to enforce that the view features in canonical
view representation can be embedded into scattered points in a Euclidean space.
Experiments on the ModelNet40, ScanObjectNN, and RGBD datasets show that our
method achieves competitive results under the fixed viewpoint settings, and
significantly outperforms the applicable methods under the arbitrary view
setting. | [
"cs.CV",
"cs.AI"
]
|
Most existing statistical theories on deep neural networks have sample
complexities cursed by the data dimension and therefore cannot well explain the
empirical success of deep learning on high-dimensional data. To bridge this
gap, we propose to exploit low-dimensional geometric structures of the real
world data sets. We establish theoretical guarantees of convolutional residual
networks (ConvResNet) in terms of function approximation and statistical
estimation for binary classification. Specifically, given the data lying on a
$d$-dimensional manifold isometrically embedded in $\mathbb{R}^D$, we prove
that if the network architecture is properly chosen, ConvResNets can (1)
approximate Besov functions on manifolds with arbitrary accuracy, and (2) learn
a classifier by minimizing the empirical logistic risk, which gives an excess
risk in the order of $n^{-\frac{s}{2s+2(s\vee d)}}$, where $s$ is a smoothness
parameter. This implies that the sample complexity depends on the intrinsic
dimension $d$, instead of the data dimension $D$. Our results demonstrate that
ConvResNets are adaptive to low-dimensional structures of data sets. | [
"stat.ML",
"cs.LG"
]
|
Recently, the introduction of the generative adversarial network (GAN) and
its variants has enabled the generation of realistic synthetic samples, which
has been used for enlarging training sets. Previous work primarily focused on
data augmentation for semi-supervised and supervised tasks. In this paper, we
instead focus on unsupervised anomaly detection and propose a novel generative
data augmentation framework optimized for this task. In particular, we propose
to oversample infrequent normal samples - normal samples that occur with small
probability, e.g., rare normal events. We show that these samples are
responsible for false positives in anomaly detection. However, oversampling of
infrequent normal samples is challenging for real-world high-dimensional data
with multimodal distributions. To address this challenge, we propose to use a
GAN variant known as the adversarial autoencoder (AAE) to transform the
high-dimensional multimodal data distributions into low-dimensional unimodal
latent distributions with well-defined tail probability. Then, we
systematically oversample at the `edge' of the latent distributions to increase
the density of infrequent normal samples. We show that our oversampling
pipeline is a unified one: it is generally applicable to datasets with
different complex data distributions. To the best of our knowledge, our method
is the first data augmentation technique focused on improving performance in
unsupervised anomaly detection. We validate our method by demonstrating
consistent improvements across several real-world datasets. | [
"cs.LG",
"stat.ML"
]
|
Safety remains a central obstacle preventing widespread use of RL in the real
world: learning new tasks in uncertain environments requires extensive
exploration, but safety requires limiting exploration. We propose Recovery RL,
an algorithm which navigates this tradeoff by (1) leveraging offline data to
learn about constraint violating zones before policy learning and (2)
separating the goals of improving task performance and constraint satisfaction
across two policies: a task policy that only optimizes the task reward and a
recovery policy that guides the agent to safety when constraint violation is
likely. We evaluate Recovery RL on 6 simulation domains, including two
contact-rich manipulation tasks and an image-based navigation task, and an
image-based obstacle avoidance task on a physical robot. We compare Recovery RL
to 5 prior safe RL methods which jointly optimize for task performance and
safety via constrained optimization or reward shaping and find that Recovery RL
outperforms the next best prior method across all domains. Results suggest that
Recovery RL trades off constraint violations and task successes 2 - 20 times
more efficiently in simulation domains and 3 times more efficiently in physical
experiments. See https://tinyurl.com/rl-recovery for videos and supplementary
material. | [
"cs.LG",
"cs.AI",
"cs.RO"
]
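The core control flow of Recovery RL is a runtime switch between the two policies: if the proposed task action is predicted to lead toward a constraint-violating zone, the recovery policy acts instead. A schematic sketch; the risk threshold `eps_risk` and all function names are placeholders, not the authors' implementation.

```python
def recovery_rl_step(state, task_policy, recovery_policy, q_risk, eps_risk=0.3):
    """One action-selection step of a Recovery-RL-style agent.

    q_risk(s, a) estimates the chance of a future constraint violation and
    is pretrained on offline data of violating zones (per the abstract).
    """
    action = task_policy(state)
    if q_risk(state, action) > eps_risk:   # violation likely:
        action = recovery_policy(state)    # hand control to the recovery policy
    return action

# Toy usage with stand-in callables.
act = recovery_rl_step(
    state=0.0,
    task_policy=lambda s: 1.0,
    recovery_policy=lambda s: -1.0,
    q_risk=lambda s, a: 0.9,  # pretend a violation is likely
)
print(act)  # -1.0 -> the recovery policy took over
```

The separation means the task policy never has to trade reward against safety inside a single objective, which is the contrast the abstract draws with reward shaping and constrained optimization.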
|
The task of translating between programming languages differs from the
challenge of translating natural languages in that programming languages are
designed with a far more rigid set of structural and grammatical rules.
Previous work has used a tree-to-tree encoder/decoder model to take advantage
of the inherent tree structure of programs during translation. Neural decoders,
however, by default do not exploit known grammar rules of the target language.
In this paper, we describe a tree decoder that leverages knowledge of a
language's grammar rules to exclusively generate syntactically correct
programs. We find that this grammar-based tree-to-tree model outperforms the
state of the art tree-to-tree model in translating between two programming
languages on a previously used synthetic task. | [
"cs.LG",
"cs.PL",
"stat.ML"
]
|
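
The core mechanism, masking the decoder so that only grammar-valid productions
can be emitted, can be sketched as follows; the rule encoding below is an
assumption, not the paper's exact decoder.

```python
# Minimal sketch of grammar-constrained decoding: at each step, logits
# over production rules are masked so only rules whose left-hand side
# matches the current nonterminal survive. Rule encoding is illustrative.
import torch

def masked_rule_step(logits, current_nonterminal, rule_lhs):
    """logits: (num_rules,) scores; rule_lhs[i]: LHS nonterminal of rule i."""
    valid = torch.tensor([lhs == current_nonterminal for lhs in rule_lhs])
    logits = logits.masked_fill(~valid, float("-inf"))
    # Only syntactically valid rules receive probability mass.
    return torch.distributions.Categorical(logits=logits).sample()
```
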
Unsupervised deep learning has recently demonstrated the promise to produce
high-quality samples. While it has tremendous potential to promote the image
colorization task, the performance is limited owing to the manifold hypothesis
in machine learning. This study presents a novel scheme that exploits a
score-based generative model in the wavelet domain to address the issue. By taking
advantage of the multi-scale and multi-channel representation via wavelet
transform, the proposed model learns priors from stacked wavelet coefficient
components, and thus captures image characteristics across coarse and detail
frequency spectra jointly and effectively. Moreover, such a highly
flexible generative model without adversarial optimization can execute
colorization tasks better under dual consistency terms in wavelet domain,
namely data-consistency and structure-consistency. Specifically, in the
training phase, a set of multi-channel tensors consisting of wavelet
coefficients are used as the input to train the network by denoising score
matching. In the test phase, samples are iteratively generated via annealed
Langevin dynamics with data and structure consistencies. Experiments
demonstrate remarkable improvements of the proposed model in colorization
quality, particularly in colorization robustness and diversity. | [
"cs.CV"
]
|
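
The sampling procedure mentioned in the abstract follows standard annealed
Langevin dynamics; a generic sketch is given below, with the
colorization-specific consistency projections left as a placeholder.
Hyperparameters are illustrative.

```python
# Generic annealed Langevin sampler for a score network s(x, sigma).
# The data/structure-consistency steps used for colorization would be
# inserted between noise levels; hyperparameters are illustrative.
import torch

def annealed_langevin(score_net, shape, sigmas, n_steps=100, step_lr=2e-5):
    x = torch.rand(shape)
    for sigma in sigmas:                       # anneal noise, large to small
        alpha = step_lr * (sigma / sigmas[-1]) ** 2
        for _ in range(n_steps):
            noise = torch.randn_like(x)
            x = x + 0.5 * alpha * score_net(x, sigma) + alpha ** 0.5 * noise
        # (apply data- and structure-consistency projections here)
    return x
```
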
Vision-and-language navigation (VLN) is a multimodal task where an agent
follows natural language instructions and navigates in visual environments.
Multiple setups have been proposed, and researchers apply new model
architectures or training techniques to boost navigation performance. However,
recent studies report a slow-down in performance improvements for both
indoor and outdoor VLN tasks, and the agents' inner mechanisms for making
navigation decisions remain unclear. To the best of our knowledge, the way
agents perceive the multimodal input is under-studied and clearly needs
investigation. In this work, we conduct a series of diagnostic experiments to
unveil agents' focus during navigation. Results show that indoor navigation
agents refer to both object tokens and direction tokens in the instruction when
making decisions. In contrast, outdoor navigation agents heavily rely on
direction tokens and have a poor understanding of the object tokens.
Furthermore, instead of merely staring at surrounding objects, indoor
navigation agents can set their sights on objects further from the current
viewpoint. When it comes to vision-and-language alignments, many models claim
that they are able to align object tokens with certain visual targets, but we
cast doubt on the reliability of such alignments. | [
"cs.CV",
"cs.AI",
"cs.CL"
]
|
Rendering 3D scenes requires access to arbitrary viewpoints from the scene.
Storage of such a 3D scene can be done in two ways: (1) storing 2D images taken
from the 3D scene that can reconstruct the scene back through interpolations,
or (2) storing a representation of the 3D scene itself that already encodes
views from all directions. So far, traditional 3D compression methods have
focused on the first type of storage and compressed the original 2D images with
image compression techniques. With this approach, the user first decodes the
stored 2D images and then renders the 3D scene. However, this separated
procedure is inefficient since a large number of 2D images have to be stored.
In this work, we take a different approach and compress a functional
representation of 3D scenes. In particular, we introduce a method to compress
3D scenes by compressing the neural networks that represent the scenes as
neural radiance fields. Our method provides more efficient storage of 3D scenes
since it does not store 2D images -- which are redundant when we render the
scene from the neural functional representation. | [
"cs.CV",
"cs.LG"
]
|
Nowadays, deep learning methods, especially the Graph Convolutional Network
(GCN), have shown impressive performance in hyperspectral image (HSI)
classification. However, the current GCN-based methods treat graph construction
and image classification as two separate tasks, which often results in
suboptimal performance. Another defect of these methods is that they mainly
focus on modeling the local pairwise importance between graph nodes while lacking
the capability to capture the global contextual information of HSI. In this
paper, we propose a Multi-level GCN with Automatic Graph Learning method
(MGCN-AGL) for HSI classification, which can automatically learn the graph
information at both local and global levels. By employing an attention mechanism
to characterize the importance among spatially neighboring regions, the most
relevant information can be adaptively incorporated to make decisions, which
helps encode the spatial context to form the graph information at local level.
Moreover, we utilize multiple pathways for local-level graph convolution, in
order to leverage the merits from the diverse spatial context of HSI and to
enhance the expressive power of the generated representations. To reconstruct
the global contextual relations, our MGCN-AGL encodes the long-range
dependencies among image regions based on the expressive representations that
have been produced at local level. Then inference can be performed along the
reconstructed graph edges connecting faraway regions. Finally, the multi-level
information is adaptively fused to generate the network output. In this way,
graph learning and image classification are integrated into a unified
framework and benefit each other. Extensive experiments on three real-world
hyperspectral datasets show that our method outperforms the
state-of-the-art methods. | [
"cs.CV",
"cs.LG"
]
|
Learning knowledge from driving encounters could help self-driving cars make
appropriate decisions when driving in complex settings with nearby vehicles
engaged. This paper develops an unsupervised classifier to group naturalistic
driving encounters into distinguishable clusters by combining an auto-encoder
with k-means clustering (AE-kMC). The effectiveness of AE-kMC was validated
using the data of 10,000 naturalistic driving encounters which were collected
by the University of Michigan, Ann Arbor over the past five years. We compare
the developed method with plain $k$-means clustering, and experimental results
demonstrate that AE-kMC outperforms the original $k$-means clustering
method. | [
"cs.LG"
]
|
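
The AE-kMC pipeline is conceptually simple; a sketch is shown below under the
assumption of a pre-trained encoder and a fixed trajectory-feature layout.

```python
# Minimal AE-kMC sketch: encode driving encounters with a trained
# autoencoder, then cluster latent codes with k-means. `encoder` and
# the data layout are assumptions.
import numpy as np
from sklearn.cluster import KMeans

def ae_kmeans(encoder, encounters, n_clusters=10):
    latent = np.asarray(encoder(encounters))     # (N, d) latent codes
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    labels = km.fit_predict(latent)
    return labels, km.cluster_centers_
```
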
As a crucial task of autonomous driving, 3D object detection has made great
progress in recent years. However, monocular 3D object detection remains a
challenging problem due to the unsatisfactory performance in depth estimation.
Most existing monocular methods directly regress the scene depth
while ignoring important relationships between the depth and various geometric
elements (e.g. bounding box sizes, 3D object dimensions, and object poses). In
this paper, we propose to learn geometry-guided depth estimation with
projective modeling to advance monocular 3D object detection. Specifically, a
principled geometry formula with projective modeling of 2D and 3D depth
predictions in the monocular 3D object detection network is devised. We further
implement and embed the proposed formula to enable geometry-aware deep
representation learning, allowing effective 2D and 3D interactions for boosting
the depth estimation. Moreover, we provide a strong baseline through addressing
substantial misalignment between 2D annotation and projected boxes to ensure
robust learning with the proposed geometric formula. Experiments on the KITTI
dataset show that our method remarkably improves the detection performance of
the state-of-the-art monocular-based method without extra data by 2.80% on the
moderate test setting. The model and code will be released at
https://github.com/YinminZhang/MonoGeo. | [
"cs.CV"
]
|
Existing on-policy imitation learning algorithms, such as DAgger, assume
access to a fixed supervisor. However, there are many settings where the
supervisor may evolve during policy learning, such as a human performing a
novel task or an improving algorithmic controller. We formalize imitation
learning from a "converging supervisor" and provide sublinear static and
dynamic regret guarantees against the best policy in hindsight with labels from
the converged supervisor, even when labels during learning are only from
intermediate supervisors. We then show that this framework is closely connected
to a class of reinforcement learning (RL) algorithms known as dual policy
iteration (DPI), which alternate between training a reactive learner with
imitation learning and a model-based supervisor with data from the learner.
Experiments suggest that when this framework is applied with the
state-of-the-art deep model-based RL algorithm PETS as an improving supervisor,
it outperforms deep RL baselines on continuous control tasks and provides up to
an 80-fold speedup in policy evaluation. | [
"cs.LG",
"cs.AI",
"cs.RO"
]
|
This paper presents an extension proposal of the semi-supervised learning
method known as Particle Competition and Cooperation for carrying out tasks of
image segmentation. Preliminary results show that this is a promising approach.
| [
"cs.LG",
"cs.NE"
]
|
Objective: This paper presents an Alzheimer's disease (AD) detection method
based on learning structural similarity between Magnetic Resonance Images
(MRIs) and representing this similarity as a graph. Methods: We construct the
similarity graph using embedded features of the input images, which belong to
four classes: Non-Demented (ND), Very Mild Demented (VMD), Mild Demented (MD),
and Moderate Demented (MDTD). We experiment with and compare different
dimension-reduction and clustering
algorithms to construct the best similarity graph to capture the similarity
between the same class images using the cosine distance as a similarity
measure. We utilize the similarity graph to present (sample) the training data
to a convolutional neural network (CNN). We use the similarity graph as a
regularizer in the loss function of a CNN model to minimize the distance
between the input images and their k-nearest neighbours in the similarity graph
while minimizing the categorical cross-entropy loss between the training image
predictions and the actual image class labels. Results: We conduct extensive
experiments with several pre-trained CNN models and compare the results to
other recent methods. Conclusion: Our method achieves superior performance on
the testing dataset (accuracy = 0.986, area under receiver operating
characteristics curve = 0.998, F1 measure = 0.987). Significance: The
classification results show an improvement in the prediction accuracy compared
to the other methods. We release all the code used in our experiments to
encourage reproducible research in this area. | [
"cs.CV",
"cs.AI",
"cs.LG"
]
|
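
The graph-regularized objective described in the Methods can be sketched as a
cross-entropy term plus a penalty pulling each embedding toward its k nearest
graph neighbours; the weighting and neighbour lookup below are illustrative
assumptions.

```python
# Minimal sketch of the similarity-graph regularizer: cross-entropy plus
# a term pulling each embedding toward its k graph neighbours.
# The neighbour lookup and the weight `lam` are assumptions.
import torch.nn.functional as F

def graph_regularized_loss(logits, labels, emb, neighbor_emb, lam=0.1):
    """emb: (B, d); neighbor_emb: (B, k, d) embeddings of graph neighbours."""
    ce = F.cross_entropy(logits, labels)
    graph = (emb.unsqueeze(1) - neighbor_emb).pow(2).sum(-1).mean()
    return ce + lam * graph
```
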
Conventionally, AI models are thought to trade off explainability for lower
accuracy. We develop a training strategy that not only leads to a more
explainable AI system for object classification, but as a consequence, suffers
no perceptible accuracy degradation. Explanations are defined as regions of
visual evidence upon which a deep classification network makes a decision. This
is represented in the form of a saliency map conveying how much each pixel
contributed to the network's decision. Our training strategy enforces a
periodic saliency-based feedback to encourage the model to focus on the image
regions that directly correspond to the ground-truth object. We quantify
explainability using an automated metric, and using human judgement. We propose
explainability as a means for bridging the visual-semantic gap between
different domains, where model explanations are used as a means of disentangling
domain-specific information from otherwise relevant features. We demonstrate
that this leads to improved generalization to new domains without hindering
performance on the original domain. | [
"cs.CV"
]
|
Concept learning approaches based on refinement operators explore partially
ordered solution spaces to compute concepts, which are used as binary
classification models for individuals. However, the refinement trees spanned by
these approaches can easily grow to millions of nodes for complex learning
problems. This leads to refinement-based approaches often failing to detect
optimal concepts efficiently. In this paper, we propose a supervised machine
learning approach for learning concept lengths, which allows predicting the
length of the target concept and therefore facilitates the reduction of the
search space during concept learning. To achieve this goal, we compare four
neural architectures and evaluate them on four benchmark knowledge graphs:
Carcinogenesis, Mutagenesis, Semantic Bible, and Family Benchmark. Our
evaluation results suggest that recurrent neural network architectures perform
best at concept length prediction with an F-measure of up to 92%. We show that
integrating our concept length predictor into the CELOE (Class Expression
Learner for Ontology Engineering) algorithm improves CELOE's runtime by a
factor of up to 13.4 without any significant changes to the quality of the
results it generates. For reproducibility, we provide our implementation in the
public GitHub repository at
https://github.com/ConceptLengthLearner/ReproducibilityRepo | [
"cs.LG"
]
|
Scattering transforms are non-trainable deep convolutional architectures that
exploit the multi-scale resolution of a wavelet filter bank to obtain an
appropriate representation of data. More importantly, they are proven invariant
to translations, and stable to perturbations that are close to translations.
This stability property endows the scattering transform with robustness to
small changes in the metric domain of the data. When considering network data,
however, regular convolutions are no longer well defined, since the data domain
presents an irregular structure given by the network topology.
In this work, we extend scattering transforms to network data by using
multiresolution graph wavelets, whose computation can be obtained by means of
graph convolutions. Furthermore, we prove that the resulting graph scattering
transforms are stable to metric perturbations of the underlying network. This
renders graph scattering transforms robust to changes on the network topology,
making it particularly useful for cases of transfer learning, topology
estimation or time-varying graphs. | [
"cs.LG",
"stat.ML"
]
|
We propose a novel method for fine-grained high-quality image segmentation of
both objects and scenes. Inspired by dilation and erosion from morphological
image processing techniques, we treat the pixel-level segmentation problem as
squeezing the object boundary. From this perspective, we propose the \textbf{Boundary
Squeeze} module: a novel and efficient module that squeezes the object boundary
from both inner and outer directions which leads to precise mask
representation. To generate such squeezed representation, we propose a new
bidirectionally flow-based warping process and design specific loss signals to
supervise the learning process. Boundary Squeeze Module can be easily applied
to both instance and semantic segmentation tasks as a plug-and-play module by
building on top of existing models. We show that our simple yet effective
design leads to high-quality results on several different datasets, and we
also report several boundary-focused metrics to demonstrate the improvement
over previous work. Moreover, the proposed module is lightweight and thus has
potential for practical usage. Our method yields large gains on COCO and
Cityscapes for both instance and semantic segmentation, and outperforms the
previous state-of-the-art PointRend in both accuracy and speed under the same
setting. Code and model will be available. | [
"cs.CV"
]
|
This paper presents a novel data-driven approach for predicting the number of
vegetation-related outages that occur in power distribution systems on a
monthly basis. In order to develop an approach that is able to successfully
fulfill this objective, there are two main challenges that ought to be
addressed. The first challenge is to define the extent of the target area. An
unsupervised machine learning approach is proposed to overcome this difficulty.
The second challenge is to correctly identify the main causes of
vegetation-related outages and to thoroughly investigate their nature. In this
paper, these outages are categorized into two main groups: growth-related and
weather-related outages, and two types of models, namely time series and
non-linear machine learning regression models are proposed to conduct the
prediction tasks, respectively. Moreover, various features that can explain the
variability in vegetation-related outages are engineered and employed. Actual
outage data, obtained from a major utility in the U.S., in addition to
different types of weather and geographical data are utilized to build the
proposed approach. Finally, by utilizing various time series models and machine
learning methods, a comprehensive case study is carried out to demonstrate how
the proposed approach can be used to successfully predict the number of
vegetation-related outages and to help decision-makers to detect vulnerable
zones in their systems. | [
"cs.LG",
"stat.ML"
]
|
Markov Chain Monte Carlo (MCMC) and Belief Propagation (BP) are the most
popular algorithms for computational inference in Graphical Models (GM). In
principle, MCMC is an exact probabilistic method which, however, often suffers
from exponentially slow mixing. In contrast, BP is a deterministic method that
is typically fast and empirically very successful, but in general lacks
control of accuracy over loopy graphs. In this paper, we introduce MCMC
algorithms correcting the approximation error of BP, i.e., we provide a way to
compensate for BP errors via a consecutive BP-aware MCMC. Our framework is
based on the Loop Calculus (LC) approach which allows expressing the BP error
as a sum of weighted generalized loops. Although the full series is
computationally intractable, it is known that a truncated series, summing up
all 2-regular loops, is computable in polynomial-time for planar pair-wise
binary GMs and it also provides a highly accurate approximation empirically.
Motivated by this, we first propose a polynomial-time approximation MCMC scheme
for the truncated series of general (non-planar) pair-wise binary models. Our
main idea here is to use the Worm algorithm, known to provide fast mixing in
other (related) problems, and then design an appropriate rejection scheme to
sample 2-regular loops. Furthermore, we also design an efficient rejection-free
MCMC scheme for approximating the full series. The main novelty underlying our
design is in utilizing the concept of cycle basis, which provides an efficient
decomposition of the generalized loops. In essence, the proposed MCMC schemes
run on transformed GM built upon the non-trivial BP solution, and our
experiments show that this synthesis of BP and MCMC outperforms both direct
MCMC and bare BP schemes. | [
"stat.ML",
"cs.AI",
"cs.DS"
]
|
Segmentation of objects of interest is one of the central tasks in medical
image analysis, which is indispensable for quantitative analysis. When
developing machine-learning based methods for automated segmentation, manual
annotations are usually used as the ground truth toward which the models learn
to mimic. While the bulky parts of the segmentation targets are relatively easy
to label, the peripheral areas are often difficult to handle due to ambiguous
boundaries and the partial volume effect, etc., and are likely to be labeled
with uncertainty. This uncertainty in labeling may, in turn, result in
unsatisfactory performance of the trained models. In this paper, we propose
superpixel-based label softening to tackle the above issue. Generated by
unsupervised over-segmentation, each superpixel is expected to represent a
locally homogeneous area. If a superpixel intersects with the annotation
boundary, we consider a high probability of uncertain labeling within this
area. Driven by this intuition, we soften labels in this area based on signed
distances to the annotation boundary and assign probability values within [0,
1] to them, in comparison with the original "hard", binary labels of either 0
or 1. The softened labels are then used to train the segmentation models
together with the hard labels. Experimental results on a brain MRI dataset and
an optical coherence tomography dataset demonstrate that this conceptually
simple and easy-to-implement method achieves overall superior segmentation
performance relative to baseline and comparison methods for both 3D and 2D
medical images. | [
"cs.CV"
]
|
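
A sketch of the softening rule, assuming that boundary-intersecting
superpixels have already been identified; the sigmoid temperature is an
illustrative choice rather than the paper's exact mapping.

```python
# Minimal sketch of superpixel-based label softening: inside superpixels
# that straddle the annotation boundary, hard 0/1 labels are replaced by
# a sigmoid of the signed distance to the boundary. `tau` is illustrative.
import numpy as np
from scipy.ndimage import distance_transform_edt

def soften_labels(hard_label, uncertain_mask, tau=3.0):
    """hard_label: (H, W) binary mask; uncertain_mask: bool mask of
    superpixels intersecting the annotation boundary."""
    d_in = distance_transform_edt(hard_label)        # distance inside object
    d_out = distance_transform_edt(1 - hard_label)   # distance outside
    signed = d_in - d_out                            # >0 inside, <0 outside
    soft = 1.0 / (1.0 + np.exp(-signed / tau))       # probabilities in (0, 1)
    out = hard_label.astype(float)
    out[uncertain_mask] = soft[uncertain_mask]
    return out
```
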
Continual lifelong learning is essential to many applications. In this paper,
we propose a simple but effective approach to continual deep learning. Our
approach leverages the principles of deep model compression, critical weights
selection, and progressive networks expansion. By enforcing their integration
in an iterative manner, we introduce an incremental learning method that is
scalable to the number of sequential tasks in a continual learning process. Our
approach is easy to implement and has several favorable characteristics.
First, it can avoid forgetting (i.e., learn new tasks while remembering all
previous tasks). Second, it allows model expansion but can maintain the model
compactness when handling sequential tasks. Besides, through our compaction and
selection/expansion mechanism, we show that the knowledge accumulated through
learning previous tasks is helpful to build a better model for the new tasks
compared to training models independently on each task. Experimental results
show that our approach can incrementally learn a deep model tackling multiple
tasks without forgetting, while maintaining model compactness and achieving
performance superior to individual task training. | [
"cs.LG",
"stat.ML"
]
|
In this paper, we propose a novel unsupervised color constancy method, called
Probabilistic Color Constancy (PCC). We define a framework for estimating the
illumination of a scene by weighting the contribution of different image
regions using a graph-based representation of the image. To estimate the weight
of each (super-)pixel, we rely on two assumptions: (Super-)pixels with similar
colors contribute similarly and darker (super-)pixels contribute less. The
resulting system has one global optimum solution. The proposed method achieves
competitive performance, compared to the state-of-the-art, on INTEL-TAU
dataset. | [
"cs.CV",
"eess.IV"
]
|
The explosive growth of easily-accessible unlabeled data has led to growing
interest in active learning, a paradigm in which data-hungry learning
algorithms adaptively select informative examples in order to lower
prohibitively expensive labeling costs. Unfortunately, in standard worst-case
models of learning, the active setting often provides no improvement over
non-adaptive algorithms. To combat this, a series of recent works have
considered a model in which the learner may ask enriched queries beyond labels.
While such models have seen success in drastically lowering label costs, they
tend to come at the expense of requiring large amounts of memory. In this work,
we study what families of classifiers can be learned in bounded memory. To this
end, we introduce a novel streaming variant of enriched-query active learning
along with a natural combinatorial parameter called lossless sample compression
that is sufficient for learning not only with bounded memory, but in a
query-optimal and computationally efficient manner as well. Finally, we give
three fundamental examples of classifier families with small, easy to compute
lossless compression schemes when given access to basic enriched queries:
axis-aligned rectangles, decision trees, and halfspaces in two dimensions. | [
"cs.LG",
"stat.ML",
"68Q32"
]
|
Learning classifier systems (LCSs) are population-based predictive systems
that were originally envisioned as agents to act in reinforcement learning (RL)
environments. These systems can suffer from population bloat and so are
amenable to compaction techniques that try to strike a balance between
population size and performance. A well-studied LCS architecture is XCSF, which
in the RL setting acts as a Q-function approximator. We apply XCSF to a
deterministic and stochastic variant of the FrozenLake8x8 environment from
OpenAI Gym, with its performance compared in terms of function approximation
error and policy accuracy to the optimal Q-functions and policies produced by
solving the environments via dynamic programming. We then introduce a novel
compaction algorithm (Greedy Niche Mass Compaction - GNMC) and study its
operation on XCSF's trained populations. Results show that given a suitable
parametrisation, GNMC preserves or even slightly improves function
approximation error while yielding a significant reduction in population size.
Reasonable preservation of policy accuracy also occurs, and we link this metric
to the commonly used steps-to-goal metric in maze-like environments,
illustrating how the metrics are complementary rather than competitive. | [
"cs.LG",
"stat.ML"
]
|
In this work, we propose several enhancements to a geometric transformation
based model for anomaly detection in images (GeoTranform). The model assumes
that the anomaly class is unknown and that only inlier samples are available
for training. We introduce new filter based transformations useful for
detecting anomalies in astronomical images, that highlight artifact properties
to make them more easily distinguishable from real objects. In addition, we
propose a transformation selection strategy that allows us to find
indistinguishable pairs of transformations. This results in an improvement of
the area under the Receiver Operating Characteristic curve (AUROC) and accuracy
performance, as well as in a dimensionality reduction. The models were tested
on astronomical images from the High Cadence Transient Survey (HiTS) and Zwicky
Transient Facility (ZTF) datasets. The best models obtained an average AUROC of
99.20% for HiTS and 91.39% for ZTF. The improvement over the original
GeoTransform algorithm and baseline methods such as One-Class Support Vector
Machine and deep learning based methods is significant both statistically and
in practice. | [
"cs.CV",
"astro-ph.IM"
]
|
Computer-vision hospital systems can greatly assist healthcare workers and
improve medical facility treatment, but often face patient resistance due to
the perceived intrusiveness and violation of privacy associated with visual
surveillance. We downsample video frames to extremely low resolutions to
degrade private information from surveillance videos. We measure the amount of
activity-recognition information retained in low resolution depth images, and
also apply a privately-trained DCSCN super-resolution model to enhance the
utility of our images. We implement our techniques with two actual
healthcare-surveillance scenarios, hand-hygiene compliance and ICU
activity-logging, and show that our privacy-preserving techniques preserve
enough information for realistic healthcare tasks. | [
"cs.CV"
]
|
Few tone mapping operators (TMOs) take color management into consideration,
limiting compression to luminance values only. This may lead to changes in
image chroma and hues which are typically managed with a post-processing step.
However, current post-processing techniques for tone reproduction do not
explicitly consider the target display gamut. Gamut mapping, on the other hand,
deals with mapping images from one color gamut to another, usually smaller,
gamut, but has traditionally focused on smaller-scale chromatic changes. In
this context, we present a novel gamut and tone management framework for
color-accurate reproduction of high dynamic range (HDR) images, which is
conceptually and computationally simple, parameter-free, and compatible with
existing TMOs. In the CIE LCh color space, we compress chroma to fit the gamut
of the output color space. This prevents hue and luminance shifts while taking
gamut boundaries into consideration. We also propose a compatible lightness
compression scheme that minimizes the number of color space conversions. Our
results show that our gamut management method effectively compresses the chroma
of tone-mapped images, respecting the target gamut without reducing image
quality. | [
"cs.CV",
"cs.GR",
"cs.MM"
]
|
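
The chroma compression step lends itself to a compact sketch. The following
assumes a `gamut_max_chroma` lookup derived from the target display profile;
the soft-knee roll-off curve is an illustrative choice, not necessarily the
paper's exact compression function.

```python
# Minimal sketch of hue-preserving chroma compression in CIE LCh.
# `gamut_max_chroma` is an assumed lookup giving the gamut boundary's
# maximum chroma at a given lightness and hue; the soft knee is illustrative.
import numpy as np

def compress_chroma(L, C, h, gamut_max_chroma, knee=0.8):
    C_max = gamut_max_chroma(L, h)            # boundary chroma at this L, h
    C_knee = knee * C_max
    span = C_max - C_knee + 1e-8
    excess = np.maximum(C - C_knee, 0.0)
    # Smoothly roll off out-of-gamut chroma so it never exceeds C_max;
    # in-gamut chroma below the knee passes through unchanged.
    C_out = np.where(C <= C_knee,
                     C,
                     C_knee + span * (1.0 - np.exp(-excess / span)))
    return L, C_out, h                        # lightness and hue preserved
```
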
Accurate and reliable image segmentation is an essential part of biomedical
image analysis. In this paper, we consider the problem of biomedical image
segmentation using deep convolutional neural networks. We propose a new
end-to-end network architecture that effectively integrates local and global
contextual patterns of histologic primitives to obtain a more reliable
segmentation result. Specifically, we introduce a deep fully convolutional
residual network with a new skip connection strategy to control the contextual
information passed forward. Moreover, our trained model is also computationally
inexpensive due to its small number of network parameters. We evaluate our
method on two public datasets for epithelium segmentation and tubule
segmentation tasks. Our experimental results show that the proposed method
provides a fast and effective way of producing a pixel-wise dense prediction of
biomedical images. | [
"cs.CV"
]
|
Video object removal is a challenging task in video processing that often
requires massive human efforts. Given the mask of the foreground object in each
frame, the goal is to complete (inpaint) the object region and generate a video
without the target object. While deep learning based methods have recently
achieved great success on the image inpainting task, they often lead to
inconsistent results between frames when applied to videos. In this work, we
propose a novel learning-based Video Object Removal Network (VORNet) to solve
the video object removal task in a spatio-temporally consistent manner, by
combining the optical flow warping and image-based inpainting model.
Experiments are done on our Synthesized Video Object Removal (SVOR) dataset
based on the YouTube-VOS video segmentation dataset, and both the objective and
subjective evaluation demonstrate that our VORNet generates more spatially and
temporally consistent videos compared with existing methods. | [
"cs.CV"
]
|
Uncertainty quantification (UQ) plays a pivotal role in reduction of
uncertainties during both optimization and decision making processes. It can be
applied to solve a variety of real-world applications in science and
engineering. Bayesian approximation and ensemble learning techniques are the
two most widely used UQ methods in the literature. In this regard, researchers have
proposed different UQ methods and examined their performance in a variety of
applications such as computer vision (e.g., self-driving cars and object
detection), image processing (e.g., image restoration), medical image analysis
(e.g., medical image classification and segmentation), natural language
processing (e.g., text classification, social media texts and recidivism
risk-scoring), bioinformatics, etc. This study reviews recent advances in UQ
methods used in deep learning. Moreover, we also investigate the application of
these methods in reinforcement learning (RL). Then, we outline a few important
applications of UQ methods. Finally, we briefly highlight the fundamental
research challenges faced by UQ methods and discuss the future research
directions in this field. | [
"cs.LG",
"cs.AI",
"cs.CV"
]
|
In this work, we present a simple yet effective framework to address the
domain translation problem between different sensor modalities with unique data
formats. By relying only on the semantics of the scene, our modular generative
framework can, for the first time, synthesize a panoramic color image from a
given full 3D LiDAR point cloud. The framework starts with semantic
segmentation of the point cloud, which is initially projected onto a spherical
surface. The same semantic segmentation is applied to the corresponding camera
image. Next, our new conditional generative model adversarially learns to
translate the predicted LiDAR segment maps to the camera image counterparts.
Finally, generated image segments are processed to render the panoramic scene
images. We provide a thorough quantitative evaluation on the SemanticKITTI
dataset and show that our proposed framework outperforms other strong baseline
models.
Our source code is available at
https://github.com/halmstad-University/TITAN-NET | [
"cs.CV"
]
|
Recently, there have been some breakthroughs in graph analysis by applying
the graph neural networks (GNNs) following a neighborhood aggregation scheme,
which demonstrate outstanding performance in many tasks. However, we observe
that the parameters of the network and the embedding of nodes are represented
in real-valued matrices in existing GNN-based graph embedding approaches which
may limit the efficiency and scalability of these models. It is well known that
binary vectors are usually much more space- and time-efficient than
real-valued vectors. This motivates us to develop a binarized graph neural
network to learn the binary representations of the nodes with binary network
parameters following the GNN-based paradigm. Our proposed method can be
seamlessly integrated into the existing GNN-based embedding approaches to
binarize the model parameters and learn the compact embedding. Extensive
experiments indicate that the proposed binarized graph neural network, namely
BGN, is orders of magnitude more efficient in terms of both time and space
while matching the state-of-the-art performance. | [
"cs.LG",
"stat.ML"
]
|
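
The general recipe for binarizing both parameters and node embeddings can be
sketched with a sign function and a straight-through gradient estimator; this
illustrates the standard technique, not BGN's exact design.

```python
# Minimal sketch of a binarized graph-convolution layer: weights and
# activations are binarized with sign(), and a straight-through
# estimator passes gradients through the binarization. Illustrative only.
import torch
import torch.nn as nn

class BinarizeSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        return grad_out * (x.abs() <= 1).float()   # clipped identity

class BinaryGraphConv(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(in_dim, out_dim) * 0.1)

    def forward(self, adj, x):
        w_b = BinarizeSTE.apply(self.weight)
        x_b = BinarizeSTE.apply(x)
        return adj @ (x_b @ w_b)                   # neighborhood aggregation
```
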
The problem of inhomogeneous cluster densities has been a long-standing issue
for distance-based and density-based algorithms in clustering and anomaly
detection. These algorithms implicitly assume that all clusters have
approximately the same density. As a result, they often exhibit a bias towards
dense clusters in the presence of sparse clusters. Many remedies have been
suggested; yet, we show that they are partial solutions which do not address
the issue satisfactorily. To match the implicit assumption, we propose to
transform a given dataset such that the transformed clusters have approximately
the same density while all regions of locally low density become globally low
density -- homogenising cluster density while preserving the cluster structure
of the dataset. We show that this can be achieved by using a new
multi-dimensional Cumulative Distribution Function in a transform-and-shift
method. The method can be applied to every dataset, before the dataset is used
in many existing algorithms to match their implicit assumption without
algorithmic modification. We show that the proposed method performs better than
existing remedies. | [
"cs.LG",
"cs.AI",
"cs.CV",
"stat.ML"
]
|
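
As a simplified, per-dimension illustration of the transform idea: mapping
each attribute through its empirical CDF yields approximately uniform
marginals, evening out density differences. The paper's method uses a
multi-dimensional CDF, so the sketch below is only a marginal approximation.

```python
# Per-dimension empirical-CDF transform: each attribute is mapped to its
# rank-based CDF value, giving roughly uniform marginals. A simplification
# of the paper's multi-dimensional CDF transform.
import numpy as np

def cdf_transform(X):
    """X: (n, d) data matrix -> values in (0, 1) per dimension."""
    n, d = X.shape
    out = np.empty_like(X, dtype=float)
    for j in range(d):
        ranks = np.argsort(np.argsort(X[:, j]))
        out[:, j] = (ranks + 1) / (n + 1)      # empirical CDF values
    return out
```
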
We study the problem of efficiently estimating the effect of an intervention
on a single variable (atomic interventions) using observational samples in a
causal Bayesian network. Our goal is to give algorithms that are efficient in
both time and sample complexity in a non-parametric setting.
Tian and Pearl (AAAI `02) have exactly characterized the class of causal
graphs for which causal effects of atomic interventions can be identified from
observational data. We make their result quantitative. Suppose $P$ is the
observable distribution of a causal model on a set $\vec{V}$ of $n$ observable
variables with respect to a given causal graph $G$. Let $P_x$ denote the
interventional distribution over the observables with respect to an
intervention that sets a designated variable $X$ to $x$. Assuming that $G$ has
bounded in-degree and bounded c-components ($k$), and that the observational
distribution is identifiable and satisfies a certain strong positivity condition, we give an
algorithm that takes $m=\tilde{O}(n\epsilon^{-2})$ samples from $P$ and $O(mn)$
time, and outputs with high probability a description of a distribution
$\hat{P}$ such that $d_{\mathrm{TV}}(P_x, \hat{P}) \leq \epsilon$, and:
1. [Evaluation] the description can return in $O(n)$ time the probability
$\hat{P}(\vec{v})$ for any assignment $\vec{v}$ to $\vec{V}$; and
2. [Generation] the description can return an iid sample from $\hat{P}$ in
$O(n)$ time.
We also show lower bounds for the sample complexity showing that our sample
complexity has an optimal dependence on the parameters $n$ and $\epsilon$, as
well as if $k=1$ on the strong positivity parameter. | [
"cs.LG",
"cs.AI",
"cs.DS",
"stat.ML",
"I.2.6"
]
|
Building on top of the success of generative adversarial networks (GANs),
conditional GANs attempt to better direct the data generation process by
conditioning with certain additional information. Inspired by the most recent
AC-GAN, in this paper we propose a fast-converging conditional GAN (FC-GAN). In
addition to the real/fake classifier used in vanilla GANs, our discriminator
has an advanced auxiliary classifier which distinguishes each real class from
an extra `fake' class. The `fake' class avoids mixing generated data with real
data, which can potentially confuse the classification of real data as AC-GAN
does, and makes the advanced auxiliary classifier behave as another real/fake
classifier. As a result, FC-GAN can accelerate the process of differentiation
of all classes, thus boosting the convergence speed. Experimental results on image
synthesis demonstrate our model is competitive in the quality of images
generated while achieving a faster convergence rate. | [
"cs.CV"
]
|
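
The discriminator structure can be sketched as a shared backbone with a
real/fake head and an auxiliary head over K real classes plus one extra
'fake' class; layer sizes below are illustrative.

```python
# Minimal sketch of FC-GAN's two-headed discriminator: a real/fake head
# plus an auxiliary classifier over K real classes and one extra 'fake'
# class, so generated samples are never mixed into real classes.
import torch.nn as nn

class FCGANDiscriminator(nn.Module):
    def __init__(self, feat_dim=128, num_classes=10):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Flatten(), nn.Linear(3 * 32 * 32, feat_dim), nn.LeakyReLU(0.2))
        self.adv_head = nn.Linear(feat_dim, 1)                # real vs fake
        self.aux_head = nn.Linear(feat_dim, num_classes + 1)  # K real + 1 fake

    def forward(self, x):
        h = self.backbone(x)
        return self.adv_head(h), self.aux_head(h)
```
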
This paper aims to analyze knowledge consistency between pre-trained deep
neural networks. We propose a generic definition for knowledge consistency
between neural networks at different fuzziness levels. A task-agnostic method
is designed to disentangle feature components, which represent the consistent
knowledge, from raw intermediate-layer features of each neural network. As a
generic tool, our method can be broadly used for different applications. In
preliminary experiments, we have used knowledge consistency as a tool to
diagnose representations of neural networks. Knowledge consistency provides new
insights to explain the success of existing deep-learning techniques, such as
knowledge distillation and network compression. More crucially, knowledge
consistency can also be used to refine pre-trained networks and boost
performance. | [
"cs.LG",
"cs.CV",
"stat.ML"
]
|
We propose a new formulation of Multiple-Instance Learning (MIL). In typical
MIL settings, a unit of data is given as a set of instances called a bag and
the goal is to find a good classifier of bags based on similarity from a single
or finitely many "shapelets" (or patterns), where the similarity of the bag
from a shapelet is the maximum similarity of instances in the bag. Classifiers
based on a single shapelet are not sufficiently strong for certain
applications. Additionally, previous work with multiple shapelets has
heuristically chosen some of the instances as shapelets with no theoretical
guarantee of its generalization ability. Our formulation provides a richer
class of the final classifiers based on infinitely many shapelets. We provide
an efficient algorithm for the new formulation, in addition to generalization
bound. Our empirical study demonstrates that our approach is effective not only
for MIL tasks but also for Shapelet Learning for time-series classification. | [
"cs.LG",
"stat.ML"
]
|
Recent work on explainable clustering allows describing clusters when the
features are interpretable. However, much modern machine learning focuses on
complex data such as images, text, and graphs where deep learning is used but
the raw features of data are not interpretable. This paper explores a novel
setting for performing clustering on complex data while simultaneously
generating explanations using interpretable tags. We propose deep descriptive
clustering that performs sub-symbolic representation learning on complex data
while generating explanations based on symbolic data. We form good clusters by
maximizing the mutual information between empirical distribution on the inputs
and the induced clustering labels for clustering objectives. We generate
explanations by solving an integer linear program that generates concise
and orthogonal descriptions for each cluster. Finally, we allow the explanation
to inform better clustering by proposing a novel pairwise loss with
self-generated constraints to maximize the clustering and explanation module's
consistency. Experimental results on public data demonstrate that our model
outperforms competitive baselines in clustering performance while offering
high-quality cluster-level explanations. | [
"cs.LG",
"cs.AI"
]
|
Transforming a thermal infrared image into a realistic RGB image is a
challenging task. In this paper we propose a deep learning method to bridge
this gap. We propose learning the transformation mapping using a coarse-to-fine
generator that preserves the details. Since the standard mean squared loss
cannot penalize the distance between colorized and ground truth images well, we
propose a composite loss function that combines content, adversarial,
perceptual and total variation losses. The content loss is used to recover
global image information while the latter three losses are used to synthesize
local realistic textures. Quantitative and qualitative experiments demonstrate
that our approach significantly outperforms existing approaches. | [
"cs.CV"
]
|
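
The composite objective can be sketched as a weighted sum of the four terms;
the weights and the choice of feature extractor below are illustrative
assumptions.

```python
# Minimal sketch of the composite objective: content (MSE), adversarial,
# perceptual (feature-space MSE), and total-variation terms. Weights and
# the `vgg_features` extractor are illustrative assumptions.
import torch
import torch.nn.functional as F

def composite_loss(fake, real, disc_logits_fake, vgg_features,
                   w_adv=1e-3, w_perc=1.0, w_tv=1e-5):
    content = F.mse_loss(fake, real)
    adversarial = F.binary_cross_entropy_with_logits(
        disc_logits_fake, torch.ones_like(disc_logits_fake))
    perceptual = F.mse_loss(vgg_features(fake), vgg_features(real))
    tv = (fake[..., 1:, :] - fake[..., :-1, :]).abs().mean() + \
         (fake[..., 1:] - fake[..., :-1]).abs().mean()
    return content + w_adv * adversarial + w_perc * perceptual + w_tv * tv
```
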
A key problem in multi-task learning (MTL) research is how to select
high-quality auxiliary tasks automatically. This paper presents GradTS, an
automatic auxiliary task selection method based on gradient calculation in
Transformer-based models. Compared to AUTOSEM, a strong baseline method, GradTS
improves the performance of MT-DNN with a bert-base-cased backend model by
margins ranging from 0.33% to 17.93% on 8 natural language understanding (NLU)
tasks in the GLUE
benchmarks. GradTS is also time-saving since (1) its gradient calculations are
based on single-task experiments and (2) the gradients are re-used without
additional experiments when the candidate task set changes. On the 8 GLUE
classification tasks, for example, GradTS costs on average 21.32% less time
than AUTOSEM with comparable GPU consumption. Further, we show the robustness
of GradTS across various task settings and model selections, e.g. mixed
objectives among candidate tasks. The efficiency and efficacy of GradTS in
these case studies illustrate its general applicability in MTL research without
requiring manual task filtering or costly parameter tuning. | [
"cs.LG",
"cs.CL"
]
|
Video summarization is among the challenging tasks in computer vision, aiming
to identify highlight frames or shots in a lengthy video input. In this
paper, we propose a novel attention-based framework for video summarization
with complex video data. Unlike previous works, which only apply an attention
mechanism to the correspondence between frames, our multi-concept video
self-attention (MC-VSA) model is presented to identify informative regions
across temporal and concept video features, which jointly exploit context
diversity over time and space for summarization purposes. Together with
consistency between video and summary enforced in our framework, our model can
be applied to both labeled and unlabeled data, making our method well suited
for real-world applications. Extensive experiments on two benchmarks
demonstrate the effectiveness of our model both quantitatively and
qualitatively, and confirm its superiority over the state of the art. | [
"cs.CV"
]
|
Recent progress in reinforcement learning has led to remarkable performance
in a range of applications, but its deployment in high-stakes settings remains
quite rare. One reason is a limited understanding of the behavior of
reinforcement algorithms, both in terms of their regret and their ability to
learn the underlying system dynamics---existing work is focused almost
exclusively on characterizing rates, with little attention paid to the
constants multiplying those rates that can be critically important in practice.
To start to address this challenge, we study perhaps the simplest non-bandit
reinforcement learning problem: linear quadratic adaptive control (LQAC). By
carefully combining recent finite-sample performance bounds for the LQAC
problem with a particular (less-recent) martingale central limit theorem, we
are able to derive asymptotically-exact expressions for the regret, estimation
error, and prediction error of a rate-optimal stepwise-updating LQAC algorithm.
In simulations on both stable and unstable systems, we find that our asymptotic
theory also describes the algorithm's finite-sample behavior remarkably well. | [
"cs.LG",
"cs.SY",
"eess.SY",
"math.ST",
"stat.TH"
]
|
Non-invasive detection of cardiovascular disorders from radiology scans
requires quantitative image analysis of the heart and its substructures. There
are well-established measurements that radiologists use for diseases assessment
such as ejection fraction, volume of four chambers, and myocardium mass. These
measurements are derived as outcomes of precise segmentation of the heart and
its substructures. The aim of this paper is to provide such measurements
through an accurate image segmentation algorithm that automatically delineates
seven substructures of the heart from MRI and/or CT scans. Our proposed method
is based on multi-planar deep convolutional neural networks (CNN) with an
adaptive fusion strategy where we automatically utilize complementary
information from different planes of the 3D scans for improved delineations.
For CT and MRI, we have separately designed three CNNs (the same architectural
configuration) for three planes, and have trained the networks from scratch for
voxel-wise labeling for the following cardiac structures: myocardium of left
ventricle (Myo), left atrium (LA), left ventricle (LV), right atrium (RA),
right ventricle (RV), ascending aorta (Ao), and main pulmonary artery (PA). We
have evaluated the proposed method with 4-fold-cross validation on the
multi-modality whole heart segmentation challenge (MM-WHS 2017) dataset. The
precision and Dice index achieved were 0.93 and 0.90 for CT images, and 0.87
and 0.85 for MR images, respectively. A CT volume was segmented in about 50
seconds, and an MRI scan in around 17 seconds, with the GPU/CUDA
implementation. | [
"stat.ML",
"cs.CV"
]
|
Automatic neural architecture design has shown its potential in discovering
powerful neural network architectures. Existing methods, whether based on
reinforcement learning or evolutionary algorithms (EA), conduct architecture
search in a discrete space, which is highly inefficient. In this paper, we
propose a simple and efficient method for automatic neural architecture design
based on continuous optimization. We call this new approach neural architecture
optimization (NAO). There are three key components in our proposed approach:
(1) An encoder embeds/maps neural network architectures into a continuous
space. (2) A predictor takes the continuous representation of a network as
input and predicts its accuracy. (3) A decoder maps a continuous representation
of a network back to its architecture. The performance predictor and the
encoder enable us to perform gradient based optimization in the continuous
space to find the embedding of a new architecture with potentially better
accuracy. Such a better embedding is then decoded to a network by the decoder.
Experiments show that the architecture discovered by our method is very
competitive for image classification task on CIFAR-10 and language modeling
task on PTB, outperforming or on par with the best results of previous
architecture search methods, with a significant reduction in computational
resources. Specifically, we obtain a 1.93% test set error rate for the CIFAR-10 image
classification task and 56.0 test set perplexity of PTB language modeling task.
Furthermore, combined with the recent proposed weight sharing mechanism, we
discover powerful architecture on CIFAR-10 (with error rate 2.93%) and on PTB
(with test set perplexity 56.6), with very limited computational resources
(less than 10 GPU hours) for both tasks. | [
"cs.LG",
"stat.ML"
]
|
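
The search step, gradient ascent on the accuracy predictor in the continuous
architecture space, can be sketched as follows, assuming
encoder/predictor/decoder interfaces; step size and interfaces are
illustrative.

```python
# Minimal sketch of NAO's search step: encode an architecture, move its
# embedding along the accuracy predictor's gradient, then decode the
# improved embedding back to an architecture. Interfaces are assumptions.
import torch

def nao_search_step(encoder, predictor, decoder, arch_tokens, step_size=1.0):
    z = encoder(arch_tokens)                 # continuous embedding
    z = z.detach().requires_grad_(True)
    acc = predictor(z)                       # predicted accuracy
    acc.sum().backward()
    z_new = z + step_size * z.grad           # ascend predicted accuracy
    return decoder(z_new)                    # decode to a new architecture
```
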
In this paper, we study the task of 3D human pose estimation in the wild.
This task is challenging due to lack of training data, as existing datasets are
either in the wild images with 2D pose or in the lab images with 3D pose.
We propose a weakly-supervised transfer learning method that uses mixed 2D
and 3D labels in a unified deep neural network with a two-stage cascaded
structure. Our network augments a state-of-the-art 2D pose estimation
sub-network with a 3D depth regression sub-network. Unlike previous two-stage
approaches that train the two sub-networks sequentially and separately, our
training is end-to-end and fully exploits the correlation between the 2D pose
and depth estimation sub-tasks. The deep features are better learnt through
shared representations. In doing so, the 3D pose labels in controlled lab
environments are transferred to in the wild images. In addition, we introduce a
3D geometric constraint to regularize the 3D pose prediction, which is
effective in the absence of ground truth depth labels. Our method achieves
competitive results on both 2D and 3D benchmarks. | [
"cs.CV"
]
|
The information contained in hierarchical topology, intrinsic to many
networks, is currently underutilised. A novel architecture is explored which
exploits this information through a multiscale decomposition. A dendrogram is
produced by a Girvan-Newman hierarchical clustering algorithm. It is segmented
and fed through graph convolutional layers, allowing the architecture to learn
multiple scale latent space representations of the network, from fine to coarse
grained. The architecture is tested on a benchmark citation network,
demonstrating competitive performance. Given the abundance of hierarchical
networks, possible applications include quantum molecular property prediction,
protein interface prediction and multiscale computational substrates for
partial differential equations. | [
"cs.LG",
"stat.ML"
]
|
Deep learning algorithms mine knowledge from the training data and thus would
likely inherit the dataset's bias information. As a result, the obtained model
would generalize poorly and even mislead the decision process in real-life
applications. We propose to remove the bias information misused by the target
task with a cross-sample adversarial debiasing (CSAD) method. CSAD explicitly
extracts target and bias features disentangled from the latent representation
generated by a feature extractor and then learns to discover and remove the
correlation between the target and bias features. The correlation measurement
plays a critical role in adversarial debiasing and is conducted by a
cross-sample neural mutual information estimator. Moreover, we propose joint
content and local structural representation learning to boost mutual
information estimation for better performance. We conduct thorough experiments
on publicly available datasets to validate the advantages of the proposed
method over state-of-the-art approaches. | [
"cs.LG",
"cs.AI",
"cs.CV"
]
|
Vision Transformers (ViTs) have shown competitive accuracy in image
classification tasks compared with CNNs. Yet, they generally require much more
data for model pre-training. Most recent works are thus dedicated to
designing more complex architectures or training methods to address the
data-efficiency issue of ViTs. However, few of them explore improving the
self-attention mechanism, a key factor distinguishing ViTs from CNNs. Different
from existing works, we introduce a conceptually simple scheme, called refiner,
to directly refine the self-attention maps of ViTs. Specifically, refiner
explores attention expansion that projects the multi-head attention maps to a
higher-dimensional space to promote their diversity. Further, refiner applies
convolutions to augment local patterns of the attention maps, which we show is
equivalent to a distributed local attention: features are aggregated locally
with learnable kernels and then aggregated globally with self-attention.
Extensive experiments demonstrate that refiner works surprisingly well.
Significantly, it enables ViTs to achieve 86% top-1 classification accuracy on
ImageNet with only 81M parameters. | [
"cs.CV"
]
|
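
A rough sketch of the attention-expansion idea: project the head dimension of
the attention maps to a higher-dimensional space, then refine local patterns
with a depth-wise convolution. The exact way refined maps are applied to
values differs in the paper; the module below is only illustrative.

```python
# Illustrative attention-refinement module: 1x1 convs expand/reduce the
# head dimension of the (B, H, N, N) attention maps, and a depth-wise
# conv augments their local patterns. Sizes and kernel are assumptions.
import torch.nn as nn

class AttentionRefiner(nn.Module):
    def __init__(self, num_heads=6, expanded_heads=12, kernel_size=3):
        super().__init__()
        self.expand = nn.Conv2d(num_heads, expanded_heads, 1)  # head expansion
        self.local = nn.Conv2d(expanded_heads, expanded_heads, kernel_size,
                               padding=kernel_size // 2, groups=expanded_heads)
        self.reduce = nn.Conv2d(expanded_heads, num_heads, 1)

    def forward(self, attn):                  # attn: (B, H, N, N)
        return self.reduce(self.local(self.expand(attn)))
```
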
In this paper we are proposing the use of Kaniadakis entropy in the bi-level
thresholding of images, in the framework of a maximum entropy principle. We
discuss the role of its entropic index in determining the threshold and in
driving an "image transition", that is, an abrupt transition in the appearance
of the corresponding bi-level image. Some examples are proposed to illustrate
the method and to compare it with the approach based on the Tsallis
entropy. | [
"cs.CV"
]
|
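
Within a Kapur-style maximum-entropy framework, the threshold maximizes the
sum of the Kaniadakis entropies of the two classes. The sketch below assumes
the standard definition S_kappa(p) = -sum_i (p_i^(1+kappa) - p_i^(1-kappa)) /
(2 kappa); normalization details may differ from the paper.

```python
# Maximum-entropy bi-level thresholding with the Kaniadakis entropy.
# The threshold maximizes the sum of the entropies of the foreground and
# background class distributions; illustrative, not the paper's exact code.
import numpy as np

def kaniadakis_entropy(p, kappa):
    p = p[p > 0]
    return -np.sum((p ** (1 + kappa) - p ** (1 - kappa)) / (2 * kappa))

def kaniadakis_threshold(image, kappa=0.5):
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    p = hist / hist.sum()
    best_t, best_s = 0, -np.inf
    for t in range(1, 255):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0 or w1 == 0:
            continue                          # skip degenerate splits
        s = (kaniadakis_entropy(p[:t] / w0, kappa) +
             kaniadakis_entropy(p[t:] / w1, kappa))
        if s > best_s:
            best_s, best_t = s, t
    return best_t
```
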
Contour detection has been a fundamental component in many image segmentation
and object detection systems. Most previous work utilizes low-level features
such as texture or saliency to detect contours and then use them as cues for a
higher-level task such as object detection. However, we claim that recognizing
objects and predicting contours are two mutually related tasks. Contrary to
traditional approaches, we show that we can invert the commonly established
pipeline: instead of detecting contours with low-level cues for a higher-level
recognition task, we exploit object-related features as high-level cues for
contour detection.
We achieve this goal by means of a multi-scale deep network that consists of
five convolutional layers and a bifurcated fully-connected sub-network. The
section from the input layer to the fifth convolutional layer is fixed and
directly lifted from a pre-trained network optimized over a large-scale object
classification task. This section of the network is applied to four different
scales of the image input. These four parallel and identical streams are then
attached to a bifurcated sub-network consisting of two independently-trained
branches. One branch learns to predict the contour likelihood (with a
classification objective) whereas the other branch is trained to learn the
fraction of human labelers agreeing about the contour presence at a given point
(with a regression criterion).
We show that without any feature engineering our multi-scale deep learning
approach achieves state-of-the-art results in contour detection. | [
"cs.CV"
]
|
Music semantics is embodied, in the sense that meaning is biologically
mediated by and grounded in the human body and brain. This embodied cognition
perspective also explains why music structures modulate kinetic and
somatosensory perception. We leverage this aspect of cognition, by considering
dance as a proxy for music perception, in a statistical computational model
that learns semiotic correlations between music audio and dance video. We
evaluate the ability of this model to effectively capture underlying semantics
in a cross-modal retrieval task. Quantitative results, validated with
statistical significance testing, strengthen the body of evidence for embodied
cognition in music and show the model can recommend music audio for dance video
queries and vice-versa. | [
"cs.CV",
"cs.LG",
"cs.SD",
"eess.AS"
]
|
We propose the first approach for the decomposition of a monocular color
video into direct and indirect illumination components in real time. We
retrieve, in separate layers, the contribution made to the scene appearance by
the scene reflectance, the light sources and the reflections from various
coherent scene regions to one another. Existing techniques that invert global
light transport require image capture under multiplexed controlled lighting, or
only enable the decomposition of a single image at slow off-line frame rates.
In contrast, our approach works for regular videos and produces temporally
coherent decomposition layers at real-time frame rates. At the core of our
approach are several sparsity priors that enable the estimation of the
per-pixel direct and indirect illumination layers based on a small set of
jointly estimated base reflectance colors. The resulting variational
decomposition problem uses a new formulation based on sparse and dense sets of
non-linear equations that we solve efficiently using a novel alternating
data-parallel optimization strategy. We evaluate our approach qualitatively and
quantitatively, and show improvements over the state of the art in this field,
in both quality and runtime. In addition, we demonstrate various real-time
appearance editing applications for videos with consistent illumination. | [
"cs.CV"
]
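A toy stand-in for the decomposition described above, keeping only the
alternating update structure and an L1 sparsity prox; the paper's variational
problem (jointly estimated base reflectance colors, temporal coherence, and a
data-parallel solver) is far richer than this sketch:

```python
import numpy as np

def soft_threshold(x, lam):
    # Proximal step for an L1 sparsity prior.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def box_blur(x):
    # 3x3 mean filter; stands in for a smoothness prior on the direct layer.
    return sum(np.roll(np.roll(x, i, 0), j, 1)
               for i in (-1, 0, 1) for j in (-1, 0, 1)) / 9.0

def decompose_frame(frame, n_iters=50, lam=0.05):
    """Split a frame I into a smooth direct layer D and a sparse
    indirect layer S with I ~ D + S, by alternating the two updates."""
    direct = frame.copy()
    indirect = np.zeros_like(frame)
    for _ in range(n_iters):
        indirect = soft_threshold(frame - direct, lam)             # sparse residual
        direct = 0.5 * direct + 0.5 * box_blur(frame - indirect)   # damped data fit
    return direct, indirect
```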
|
Dataset bias is a well-known problem in the field of computer vision. The
presence of implicit bias in any image collection prevents a model trained and
validated on a particular dataset from yielding similar accuracies when tested
on other datasets. In this paper, we propose a novel debiasing technique to reduce
the effects of a biased training dataset. Our goal is to augment the training
data using a generative network by learning a non-linear mapping from the
source domain (training set) to the target domain (testing set) while retaining
training set labels. The cycle consistency loss and adversarial loss for
generative adversarial networks are used to learn the mapping. A structural
similarity index (SSIM) loss is used to enforce label retention while
augmenting the training set. Our methods and hypotheses are supported by
quantitative comparisons with prior debiasing techniques. These comparisons
showcase the superiority of our method and its potential to mitigate the
effects of dataset bias during the inference stage. | [
"cs.CV"
]
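A hedged sketch of how the stated generator objective could be composed: an
adversarial term, a cycle-consistency term, and an SSIM label-retention term.
The LSGAN-style adversarial loss, the loss weights, and the simplified global
(non-windowed) SSIM are assumptions, not the paper's exact formulation:

```python
import torch
import torch.nn.functional as F

def ssim_global(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Simplified global SSIM over image batches in [0, 1]; a stand-in
    for the windowed SSIM usually used in practice."""
    mu_x, mu_y = x.mean((1, 2, 3)), y.mean((1, 2, 3))
    var_x, var_y = x.var((1, 2, 3)), y.var((1, 2, 3))
    cov = ((x - mu_x[:, None, None, None]) *
           (y - mu_y[:, None, None, None])).mean((1, 2, 3))
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

def generator_loss(d_fake, real, fake, reconstructed,
                   lambda_cyc=10.0, lambda_ssim=1.0):
    adv = F.mse_loss(d_fake, torch.ones_like(d_fake))    # adversarial (LSGAN-style)
    cyc = F.l1_loss(reconstructed, real)                 # cycle consistency
    ssim_term = 1.0 - ssim_global(fake, real).mean()     # label/content retention
    return adv + lambda_cyc * cyc + lambda_ssim * ssim_term
```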
|
Attribution methods calculate attributions that visually explain the
predictions of deep neural networks (DNNs) by highlighting important parts of
the input features. In particular, gradient-based attribution (GBA) methods are
widely used because they can be easily implemented through automatic
differentiation. In this study, we use attributions to filter out irrelevant
parts of the input features and then verify the effectiveness of
this approach by measuring the classification accuracy of a pre-trained DNN.
This is achieved by calculating and applying an \textit{attribution mask} to
the input features and subsequently introducing the masked features to the DNN,
for which the mask is designed to recursively focus attention on the parts of
the input related to the target label. The accuracy is enhanced under a certain
condition, i.e., \textit{no implicit bias}, which can be derived from our
theoretical insight into compressing the DNN into a single-layer neural
network. We also provide Gradient\,*\,Sign-of-Input (GxSI) to obtain the
attribution mask that further improves the accuracy. As an example, on CIFAR-10
that is modified using the attribution mask obtained from GxSI, we achieve
accuracies ranging from 99.8\% to 99.9\% without additional training. | [
"cs.LG",
"cs.AI"
]
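A minimal sketch of the GxSI attribution mask described above. The keep ratio
and the single-pass (non-recursive) masking are simplifying assumptions; the
abstract describes a mask that recursively focuses attention:

```python
import torch

def gxsi_masked_input(model, x, target, keep=0.3):
    """Compute Gradient * Sign-of-Input attributions for the target
    classes, keep the top fraction per sample, and mask the input."""
    x = x.clone().requires_grad_(True)
    logits = model(x)
    score = logits[torch.arange(len(x)), target].sum()
    grad, = torch.autograd.grad(score, x)
    attr = grad * torch.sign(x)                   # GxSI attribution
    flat = attr.flatten(1)
    k = max(1, int(keep * flat.shape[1]))
    cutoff = flat.topk(k, dim=1).values[:, -1:]   # per-sample k-th largest value
    mask = (flat >= cutoff).float().view_as(x)
    return x.detach() * mask                      # masked input features
```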
|
While deep reinforcement learning has achieved tremendous successes in
various applications, most existing works only focus on maximizing the expected
value of total return and thus ignore its inherent stochasticity. Such
stochasticity is also known as the aleatoric uncertainty and is closely related
to the notion of risk. In this work, we make the first attempt to study
risk-sensitive deep reinforcement learning under the average reward setting
with a variance risk criterion. In particular, we focus on a
variance-constrained policy optimization problem where the goal is to find a
policy that maximizes the expected value of the long-run average reward,
subject to a constraint that the long-run variance of the average reward is
upper bounded by a threshold. Utilizing Lagrangian and Fenchel dualities, we
transform the original problem into an unconstrained saddle-point policy
optimization problem, and propose an actor-critic algorithm that iteratively
and efficiently updates the policy, the Lagrange multiplier, and the Fenchel
dual variable. When both the value and policy functions are represented by
multi-layer overparameterized neural networks, we prove that our actor-critic
algorithm generates a sequence of policies that finds a globally optimal policy
at a sublinear rate. | [
"cs.LG",
"math.OC",
"stat.ML"
]
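A worked LaTeX sketch of one common way to write the constrained problem and
its dual reformulation described above; the notation is illustrative and may
differ from the paper's:

```latex
% Variance-constrained average-reward policy optimization (sketch).
\begin{align*}
  \max_{\pi} \quad & J(\pi) \;=\; \lim_{T\to\infty} \tfrac{1}{T}\,
      \mathbb{E}\Big[\textstyle\sum_{t=1}^{T} r_t\Big]
  \quad \text{s.t.} \quad \Lambda(\pi) \le \zeta, \\
  \Lambda(\pi) \;&=\; \lim_{T\to\infty} \tfrac{1}{T}\,
      \mathbb{E}\Big[\textstyle\sum_{t=1}^{T} \big(r_t - J(\pi)\big)^2\Big].
\end{align*}
% Lagrangian relaxation yields the saddle-point problem
\[
  \max_{\pi}\;\min_{\lambda \ge 0}\;
  J(\pi) \;-\; \lambda\,\big(\Lambda(\pi) - \zeta\big),
\]
% and since $\Lambda(\pi) = \mathbb{E}[r^2] - J(\pi)^2$ under stationarity,
% the square can be linearized via Fenchel duality,
% $J(\pi)^2 = \max_{\nu}\,\big(2\nu J(\pi) - \nu^2\big)$,
% giving the third (dual) variable that the actor-critic scheme updates.
```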
|